Senior Data Engineer (Databricks | AWS | Spark)

Confidential

5 - 10 years

Bengaluru

Posted: 17/02/2026

Job Description

Bengaluru, Karnataka, India

Hybrid: 3 days office / 2 days remote

Day shift with partial US team overlap

Full-Time | Permanent


About the Role

We are looking for an experienced Senior Data Engineer to design, build, and scale modern cloud data platforms that power enterprise analytics and business intelligence.

You will lead architecture decisions, develop high-performance data pipelines, and help modernize legacy systems into a scalable Databricks/Spark Lakehouse ecosystem on AWS. This role combines hands-on engineering with technical leadership and cross-functional collaboration.

If you enjoy solving complex data challenges, building reliable platforms, and working at scale, this role is for you.


Key Responsibilities

Architecture & Engineering

Design and implement scalable data architectures using Databricks, Spark, and AWS

Build robust ETL/ELT pipelines using Python and SQL

Develop batch and streaming data solutions

Optimize performance, reliability, and cost of data workloads


Platform & DevOps

Orchestrate workflows using Apache Airflow

Implement CI/CD best practices

Use Infrastructure-as-Code (Terraform/CloudFormation)

Containerize solutions with Docker/Kubernetes


Data Governance & Quality

Implement data lineage, cataloging, and access control

Define standardized metrics and KPIs

Ensure data consistency and reliability across domains

Establish monitoring, alerting, and observability


Collaboration & Leadership

Partner with analytics, product, and business teams

Mentor engineers and promote best practices

Contribute to enterprise data strategy and modernization efforts


Required Qualifications

6-8+ years of Data Engineering or Big Data experience

Strong hands-on experience with Databricks and Apache Spark

Advanced Python and SQL expertise

AWS experience (S3, Lambda, EMR, or equivalent services)

Experience building large-scale ETL/ELT pipelines

Knowledge of workflow orchestration (Airflow)

Experience with CI/CD and DevOps practices

Strong communication and stakeholder collaboration skills


Preferred

Streaming technologies (Kafka, Kinesis, Spark Streaming)

Docker/Kubernetes

Data governance or catalog tools

Databricks Data Engineer certification

AWS certification


Work Schedule

Hybrid: 3 days onsite / 2 days remote

Day shift with US stakeholder overlap

Flexible working hours with adequate breaks


Benefits

Competitive salary

Health benefits

Paid leave

Learning & certification support

Collaborative engineering culture


Equal Opportunity

We are committed to building an inclusive workplace and encourage applications from all qualified candidates.
