Sr Data Engineer

ORMAE

2 - 5 years

Bengaluru

Posted: 20/02/2026

Job Description

Senior Data Engineer

Location: Bangalore (On-site)

Experience: 4+ Years

Employment Type: Full-Time


Role Overview

We are seeking a Senior Data Engineer with strong hands-on expertise in building scalable data platforms, modern ingestion pipelines, and high-performance data transformation workflows on Azure and Databricks.

The ideal candidate should have deep experience in distributed data processing, orchestration, CI/CD-driven data engineering, and delivering production-grade data solutions that support analytics, AI/ML, and business decision-making.


Key Responsibilities

  • Design, build, and maintain scalable data ingestion pipelines for structured and unstructured data sources.
  • Develop and optimize ETL/ELT workflows using PySpark, Python, and Databricks.
  • Implement complex data transformations, data cleansing, and enrichment processes.
  • Manage Databricks clusters and jobs, including performance tuning and optimization.
  • Work with Azure Storage Accounts, data lakes, and cloud-native data architectures.
  • Build robust data solutions using SQL and advanced query optimization techniques.
  • Develop and integrate data services using FastAPI and REST-based interfaces when required.
  • Design high-performance data models and optimize database queries for large-scale datasets.
  • Implement CI/CD pipelines for data engineering workflows using modern DevOps practices.
  • Collaborate with Data Scientists, Architects, and Product teams to deliver reliable data products.
  • Ensure data quality, governance, monitoring, and operational excellence across pipelines.
  • Troubleshoot production issues and improve pipeline reliability and scalability.


Required Skills & Experience

Technical Skills

  • Strong experience in Azure Cloud services (Storage Accounts, Data Lake concepts).
  • Hands-on expertise with Databricks and cluster management.
  • Advanced proficiency in Python and PySpark.
  • Experience building large-scale data ingestion pipelines.
  • Strong understanding of ETL/ELT architectures.
  • Advanced SQL and database query optimization skills.
  • Experience implementing CI/CD pipelines for data workflows.
  • API development/integration experience using FastAPI.
  • Strong understanding of distributed data processing and performance tuning.


Engineering Practices

  • Data modeling and schema design.
  • Scalable pipeline architecture.
  • Logging, monitoring, and observability.
  • Version control and automated deployments.
  • Performance and cost optimization on cloud platforms.
