ETL+Kafka Lead Developer

Tekskills Inc.

5 - 10 years

Bengaluru

Posted: 23/12/2025

Job Description

Role Overview

We seek a Senior Data Engineer specializing in cloud-native ETL/ELT pipelines, modern orchestration, and real-time stream processing to build scalable data platforms. The role focuses on designing, implementing, and optimizing data workflows using AWS Glue, Google Dataflow, Azure Data Factory, Airflow, Dagster, Prefect, Apache Flink, Spark Structured Streaming, and managed services like Confluent Cloud, Fivetran + Snowflake, Informatica Cloud, Databricks Delta Live Tables, or StreamSets.

Key Responsibilities

  • Develop and maintain cloud-native ETL/ELT pipelines with AWS Glue, Google Dataflow, or Azure Data Factory for batch and incremental data processing.
  • Orchestrate complex workflows using Airflow, Dagster, or Prefect, integrating with Kafka for real-time streaming via Flink, Spark Structured Streaming, or Materialize.
  • Implement fully managed solutions like Confluent Cloud for Kafka transforms, Fivetran + Snowflake for ELT, Informatica Cloud, Databricks Delta Live Tables, or StreamSets to reduce operational overhead.
  • Optimize data pipelines for performance, cost, and reliability; monitor, troubleshoot, and scale systems in multi-cloud environments (AWS, Azure, GCP).
  • Collaborate with data scientists, analysts, and platform teams to ensure data quality, governance, and integration with downstream BI tools.
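As a hedged illustration of the incremental batch pattern the first bullet describes (a sketch only, not tied to any specific Glue/Dataflow/Data Factory API), a watermark-based incremental extract can be reduced to: read only rows newer than the last processed timestamp, then advance the watermark. The `rows` list and in-memory watermark below are stand-ins for a real source table and state store (e.g. a Glue job bookmark or a control table).

```python
from datetime import datetime, timezone

def extract_incremental(rows, last_watermark):
    """Return rows newer than the stored watermark, plus the new watermark.

    Illustrative stand-in for an incremental ETL extract step; `rows` is a
    hypothetical list of dicts with an `updated_at` timestamp column.
    """
    fresh = [r for r in rows if r["updated_at"] > last_watermark]
    new_watermark = max((r["updated_at"] for r in fresh), default=last_watermark)
    return fresh, new_watermark

# Example: two runs against the same source; the second run picks up only rows
# added since the first run's watermark.
def ts(s):
    return datetime.fromisoformat(s).replace(tzinfo=timezone.utc)

source = [
    {"id": 1, "updated_at": ts("2025-01-01T00:00:00")},
    {"id": 2, "updated_at": ts("2025-01-02T00:00:00")},
]
batch1, wm = extract_incremental(source, ts("2024-12-31T00:00:00"))
source.append({"id": 3, "updated_at": ts("2025-01-03T00:00:00")})
batch2, wm = extract_incremental(source, wm)
```

In a managed pipeline the watermark would be persisted between runs (job bookmarks, a metadata table, or orchestrator state) rather than held in memory.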

Required Skills & Experience

  • 5+ years in data engineering with hands-on expertise in cloud-native ETL/ELT (AWS Glue, Google Dataflow, Azure Data Factory).
  • Proficiency in modern orchestration (Airflow, Dagster, Prefect) and streaming (Flink, Spark Structured Streaming, Materialize, Kafka).
  • Experience with managed platforms: Confluent Cloud, Fivetran + Snowflake, Informatica Cloud, Databricks Delta Live Tables, StreamSets.
  • Strong programming in Python, SQL, PySpark; familiarity with data lakes, warehouses (Snowflake, Delta Lake), and CI/CD for data pipelines.
  • Bachelor's/Master's in Computer Science, Engineering, or related field; proven track record in production-scale data systems.
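As a minimal sketch of the windowed stream processing named in the streaming bullet (what Flink or Spark Structured Streaming perform at scale, reduced here to pure Python for illustration), a tumbling-window count buckets each event into a fixed-size window by truncating its timestamp:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Count (timestamp, key) events per fixed non-overlapping window.

    Illustrative only: real engines add event-time watermarks, late-data
    handling, and distributed state, none of which is modeled here.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # truncate into a window
        counts[(window_start, key)] += 1
    return dict(counts)

# Example: four events over two 10-second windows.
events = [(0, "click"), (5, "click"), (12, "view"), (14, "click")]
windows = tumbling_window_counts(events, 10)
# → {(0, "click"): 2, (10, "view"): 1, (10, "click"): 1}
```

The equivalent in Spark Structured Streaming is a `groupBy(window(...), key).count()` over a streaming DataFrame; the toy version above only shows the bucketing logic.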
