
Data Engineer

Confidential

2 - 5 years

Mumbai

Posted: 13/04/2026


Job Description

Company Description


Client: Confidential


Role Description


This is a full-time or long-term contractual hybrid role for an AWS Data Engineer based in Mumbai. The AWS Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and systems on AWS. Day-to-day tasks include data transformation, integration, data-quality monitoring, performance optimization, and collaboration with cross-functional teams to manage data infrastructure. Additional responsibilities include troubleshooting data issues, ensuring security compliance, and staying current with the latest AWS technologies.


Qualifications


Requirement

AWS Data Engineer

Experience: 8-12 years

Location: Mumbai (Hybrid)

Job: Long Term Contractual

Package: Industry Standard

Notice Period: Immediate




Role Overview

We are seeking a hands-on Data Engineer with strong expertise in SQL, Python, Kafka, and AWS cloud services. The ideal candidate will have experience building and maintaining scalable data pipelines, working with modern data warehouses such as Snowflake, and using transformation tools such as dbt. This role is execution-focused and requires strong technical depth in data engineering fundamentals.


Mandatory Skills (Must Have)

Strong expertise in SQL (advanced joins, window functions, performance tuning, query optimization)

Hands-on experience with Kafka

Strong programming skills in Python

Experience working with AWS Cloud

Experience building and maintaining data pipelines
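As an illustration of the window-function depth this posting asks for, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for a real warehouse (the table and data are hypothetical; the same SQL pattern applies in Snowflake or Redshift):

```python
import sqlite3

# In-memory database standing in for a warehouse (hypothetical sample data).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', '2024-01-01', 100.0),
        ('alice', '2024-01-05', 250.0),
        ('bob',   '2024-01-02',  80.0),
        ('bob',   '2024-01-06', 120.0);
""")

# Window functions: per-customer running total, plus rank by recency.
rows = conn.execute("""
    SELECT customer,
           order_date,
           amount,
           SUM(amount) OVER (PARTITION BY customer
                             ORDER BY order_date)      AS running_total,
           ROW_NUMBER() OVER (PARTITION BY customer
                              ORDER BY order_date DESC) AS recency_rank
    FROM orders
    ORDER BY customer, order_date
""").fetchall()

for row in rows:
    print(row)
```

The PARTITION BY / ORDER BY clauses are where performance tuning usually starts: a mismatched partition key forces expensive re-sorts on large tables.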


Good to Have (Basic Knowledge Acceptable)

dbt (models, incremental loads, ref, snapshots)

Snowflake (basic warehouse usage, query optimization, data loading)

Terraform / CloudFormation / any IaC tool (basic understanding and usage)


Key Responsibilities

Data Engineering & Pipelines

Design, develop, and maintain scalable batch and streaming data pipelines

Implement ETL/ELT processes using Python and SQL

Build reliable Kafka-based ingestion pipelines

Ensure high data quality, reliability, and performance
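The batch ETL/ELT pattern described above can be sketched with only the standard library; the file layout and schema here are hypothetical stand-ins for a real source and warehouse:

```python
import csv
import io
import sqlite3

# Extract: raw CSV as it might arrive from an upstream system (hypothetical).
raw = io.StringIO("user_id,amount\n1, 10.5\n2, 3.0\n1, 4.5\n")

# Transform: parse, cast types, and drop malformed rows.
records = []
for row in csv.DictReader(raw):
    try:
        records.append((int(row["user_id"]), float(row["amount"])))
    except (KeyError, ValueError):
        continue  # skip bad rows; a production pipeline would quarantine them

# Load: write into a warehouse table (SQLite stands in for Redshift/Snowflake).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE spend (user_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO spend VALUES (?, ?)", records)

total = conn.execute("SELECT SUM(amount) FROM spend").fetchone()[0]
print(total)  # 18.0
```

The same extract/transform/load shape scales up when the pieces are swapped for S3 reads, Glue jobs, and warehouse COPY loads.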

AWS Cloud Implementation

Work with AWS services such as S3, Redshift / Snowflake, Lambda, Glue, and IAM

Deploy and maintain data infrastructure in AWS

Data Warehousing & Modeling

Develop and maintain data models to support analytics use cases

Optimize SQL queries for performance and cost efficiency

Support Snowflake-based data warehousing workflows

Streaming & Real-Time Processing

Implement Kafka producers and consumers

Manage topic configurations and partition strategies

Handle schema evolution and offset management
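The offset-management responsibility above can be illustrated with a toy in-memory log. This is a deliberately simplified stand-in for Kafka's consumer-group semantics, not the Kafka client API; the point is the at-least-once pattern of processing a batch before committing its last offset:

```python
# Toy append-only log with per-group committed offsets.
class Log:
    def __init__(self):
        self.messages = []
        self.committed = {}  # consumer group -> next offset to read

    def produce(self, msg):
        self.messages.append(msg)

    def poll(self, group, max_records=10):
        start = self.committed.get(group, 0)
        return list(enumerate(self.messages[start:start + max_records], start))

    def commit(self, group, offset):
        self.committed[group] = offset + 1  # resume after this offset


log = Log()
for i in range(5):
    log.produce(f"event-{i}")

processed = []
batch = log.poll("analytics", max_records=3)
for offset, msg in batch:
    processed.append(msg)            # process first...
log.commit("analytics", batch[-1][0])  # ...commit after, for at-least-once

# A later poll resumes from the committed offset, not the beginning.
second = log.poll("analytics")
print([m for _, m in second])  # ['event-3', 'event-4']
```

If the consumer crashed between processing and committing, the batch would be re-delivered, which is why downstream writes in such pipelines need to be idempotent.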

Infrastructure & DevOps

Support infrastructure provisioning using Terraform / CloudFormation

Participate in CI/CD processes for data pipelines


Technical Requirements

Skill: Expected Level

SQL: Advanced / Expert

Python: Strong

Kafka: Hands-on production experience

AWS: Strong working knowledge

dbt: Basic working knowledge

Snowflake: Basic working knowledge

IaC (Terraform): Basic


Contact: DM or mail sameerb@xytiqtechnologies.com
