Lead Data Engineer - Scala/Spark/Airflow

Xebia

6 - 12 years

Bengaluru

Posted: 12/02/2026

Job Description

Job Title: Lead Data Engineer - Scala/Spark/Airflow

Job Location: Bengaluru

Exp Range: 6-12 years

Notice Period: 15 days


Position Overview:


We are seeking a Senior Data Engineer with deep expertise in Scala-based Spark development and end-to-end deployment of data pipelines on Kubernetes clusters, orchestrated via Airflow. The ideal candidate has a strong software engineering foundation, an excellent understanding of distributed systems, proficiency in software design and modern project/code structuring, and a good understanding of CI/CD processes and implementation, enabling them to deliver reliable, scalable, and robust data solutions. Candidates should have at least 6-8 years of overall experience, including a minimum of 5 years with Hadoop and Spark.


Key Responsibilities:

- Design and implement robust, scalable batch and real-time data engineering solutions using Apache Spark (Scala) and Spark Structured Streaming.
- Architect well-structured Scala projects with reusable, modular, and testable codebases aligned with SOLID principles and clean architecture practices.
- Develop, deploy, and manage Spark jobs on Kubernetes clusters, ensuring efficient resource utilization, fault tolerance, and scalability.
- Orchestrate data workflows using Apache Airflow: manage DAGs, task dependencies, retries, and SLA alerts.
- Write and maintain comprehensive unit and integration tests for the pipelines and utilities developed.
- Work on performance tuning, partitioning strategies, and data quality validation.
- Use and enforce version control best practices (branching, PRs, code review) and continuous integration (CI/CD) for automated testing and deployment.
- Write clear, maintainable documentation (READMEs, inline docs, docstrings).
- Participate in design reviews and provide technical guidance to peers and junior engineers.
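To illustrate the "reusable, modular, and testable" code structure the responsibilities call for, here is a minimal sketch in plain Scala. The `Record`, `Transform`, `DropNegatives`, `Scale`, and `Pipeline` names are hypothetical stand-ins for Spark's Dataset-based APIs, not part of any stated codebase:

```scala
// Hypothetical sketch: a modular, unit-testable transformation layer,
// using a simplified Record type in place of a Spark Dataset.
final case class Record(id: Long, value: Double)

// Single-responsibility transformation step (SOLID: one reason to change).
trait Transform {
  def apply(records: Seq[Record]): Seq[Record]
}

// Each step is small, reusable, and testable in isolation.
object DropNegatives extends Transform {
  def apply(records: Seq[Record]): Seq[Record] = records.filter(_.value >= 0)
}

final class Scale(factor: Double) extends Transform {
  def apply(records: Seq[Record]): Seq[Record] =
    records.map(r => r.copy(value = r.value * factor))
}

// A pipeline composes steps; new steps extend behaviour without
// modifying existing ones (open/closed principle).
final class Pipeline(steps: Seq[Transform]) extends Transform {
  def apply(records: Seq[Record]): Seq[Record] =
    steps.foldLeft(records)((acc, step) => step(acc))
}
```

Because each step is a plain value with a pure `apply`, steps and whole pipelines can be covered by ordinary unit tests without a Spark session.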


Technical Skills (Primary):

- Languages: Scala, Java
- Big data orchestration: Airflow, Spark on Kubernetes, YARN, Oozie
- Big data processing: Hadoop, Kafka, Spark, and Spark Structured Streaming
- Experience with SOLID and DRY principles, with strong software architecture and design implementation experience
- Advanced Scala experience (e.g., functional programming, case classes, complex data structures and algorithms)
- Proficient in developing automated frameworks for unit and integration testing
- Experience with Docker, Helm, and related container technologies
- Proficient in deploying and managing Spark workloads on Kubernetes clusters
- Experience in evaluating and implementing data validation and data quality checks
- DevOps experience with Jenkins, Maven, GitHub, GitHub Actions, and CI/CD
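As a small, hedged example of the "advanced Scala" the skills list mentions (case classes, pattern matching, fold-based aggregation) — the `Event`, `Click`, `Purchase`, and `revenueByUser` names are illustrative only:

```scala
// Hypothetical domain model using a sealed ADT of case classes.
sealed trait Event
final case class Click(userId: String) extends Event
final case class Purchase(userId: String, amount: BigDecimal) extends Event

// Pure function: total purchase amount per user, aggregated with a fold
// and pattern matching; non-purchase events are passed over untouched.
def revenueByUser(events: Seq[Event]): Map[String, BigDecimal] =
  events.foldLeft(Map.empty[String, BigDecimal]) {
    case (acc, Purchase(user, amount)) =>
      acc.updated(user, acc.getOrElse(user, BigDecimal(0)) + amount)
    case (acc, _) => acc // clicks carry no revenue
  }
```

The sealed trait lets the compiler check match exhaustiveness, and the fold keeps the aggregation immutable and side-effect free — the same style that transfers to Spark's typed Dataset transformations.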
