
Data Engineer with Scala and Azure

Impetus

2 - 5 years

Bengaluru

Posted: 15/03/2026


Job Description

Hiring: Data Engineer (Scala & Azure)

Location: Bangalore, India

Experience: 6 - 11 Years

Notice Period: Immediate Joiners Preferred


We are looking for an experienced Data Engineer with strong expertise in Scala and Azure technologies to join our growing data engineering team. The ideal candidate will have hands-on experience building scalable data pipelines and working with large-scale distributed data processing frameworks.

This role involves designing and optimizing modern data lake and analytics platforms using Azure and Spark-based technologies.


Key Responsibilities


  • Design, develop, and maintain scalable ETL/ELT pipelines using Azure Data Factory, Azure Databricks, and Apache Spark.
  • Build and optimize large-scale data processing workflows using Scala and PySpark.
  • Implement robust data ingestion, transformation, and orchestration frameworks to process large volumes of structured and unstructured data.
  • Perform performance tuning and optimization of Spark jobs, queries, and cluster configurations to improve efficiency and reduce operational costs.
  • Work with modern data storage formats such as Delta Lake and Apache Parquet to enable high-performance analytics.
  • Collaborate with Data Scientists and ML Engineers to support feature engineering, data preparation, and ML model pipelines.
  • Develop and implement data quality checks, monitoring systems, and error-handling mechanisms to ensure reliable and accurate data processing.
  • Work closely with cross-functional teams and business stakeholders to gather requirements and deliver scalable data solutions.
  • Follow best practices for data engineering, including code quality, version control, CI/CD pipelines, and automated deployments.
  • Contribute to the design and evolution of enterprise data architecture and modern data lake platforms.


Required Skills


  • Strong hands-on experience with Scala and PySpark
  • Practical experience with Azure Databricks
  • Experience building ETL/ELT pipelines using Azure Data Factory
  • Deep understanding of Apache Spark architecture and optimization techniques
  • Experience working with large-scale distributed data processing systems
  • Strong knowledge of modern data storage formats such as Delta Lake and Apache Parquet
  • Experience handling structured and unstructured datasets
  • Good understanding of data pipeline monitoring, debugging, and performance tuning
