Senior Data Engineer – Data Ingestion & Pipelines (5-9 Years)

ControlShift

5 - 10 years

Bengaluru

Posted: 12/02/2026

Job Description

Senior Data Engineer – Data Ingestion & Pipelines

Location: India (Bangalore)

Experience: 5–9 years


About the Role

We are looking for a Senior Data Engineer who is deeply comfortable working with data: building ingestion pipelines, writing efficient transformations, exploring different database technologies, and ensuring reliable data movement across the organization.

You will primarily work on our Kafka ingestion flow, distributed data systems, and multiple

storage layers (SQL, NoSQL, and graph databases).


The ideal candidate enjoys working with raw data, optimizing queries, and building stable

pipelines using Python and Spark.

This role is a great fit for someone who is data-focused and curious about the broader data ecosystem.


Key Responsibilities

Data Engineering (Primary Focus)

Build and maintain robust ETL/ELT workflows using Python and Apache Spark.

Design and optimize transformations across SQL, NoSQL, and graph database ecosystems.

Implement data quality checks, schema handling, and consistency across pipelines.

Deal with complex, high-volume datasets and real-world production data.


Streaming & Ingestion

Work on Kafka ingestion pipelines: topics, partitions, consumer logic, and schema evolution.

Monitor ingestion performance, throughput, and reliability.

Build connectors/utilities for integrating various sources and sinks.


Querying & Multi-Database Work

Write and optimize complex queries across relational and NoSQL systems.

Gain exposure to graph databases and graph query languages (e.g., Cypher, Gremlin).

Understand indexing, modeling, and access patterns across different DB types.


Python & DevOps Awareness

Write clean and modular Python code for ETL jobs and data utilities.

Apply a basic understanding of CI/CD, job orchestration, logging, and monitoring.

Debug production ETL issues end-to-end.


Required Skills

Strong Python: ETL, utilities, and data processing.

Strong SQL: query optimization and data modeling fundamentals.

Mandatory Spark experience: batch transformations and performance tuning.

Kafka basics: producers, consumers, offsets, and schema handling.

Hands-on experience with relational and NoSQL databases.

Exposure to graph databases and graph query languages.

Strong debugging and data exploration skills.


Nice-To-Have

Retail or e-commerce domain experience.

Familiarity with Terraform or basic infra automation.

Experience with nested JSON, denormalized structures, and semi-structured data.

Understanding of distributed storage concepts.


Soft Skills

High ownership and end-to-end problem solving.

Enjoys working with complex or messy data.

Curious mindset: comfortable exploring new databases and data flows.

Clear communication and ability to collaborate with backend and product teams.
