
Data Engineer (Azure / Databricks / ADF)

DBiz.ai

7 - 9 years

Kochi

Posted: 04/04/2026


Job Description

Data Engineer (Azure / Databricks / ADF)

Experience: 5–7 Years

Role Overview

We are seeking a Data Engineer with strong expertise in ETL development using Python and deep experience across Azure data services. The role involves building scalable data pipelines, integrating diverse data sources (including APIs), and designing analytics-ready data models using Databricks and Azure-native tools.


Key Responsibilities

Design, develop, and maintain ETL/ELT pipelines using Python and Azure Data Factory (ADF)

Build and optimize data pipelines using Azure Databricks (PySpark)

Develop ingestion frameworks for API-based, batch, and file-based data sources

Implement end-to-end data pipelines on Azure leveraging multiple data services

Design and implement data models (star schema, dimensional models) for analytics, including medallion architecture

Ensure data quality, validation, and monitoring across pipelines

Optimize pipeline performance and cost within Azure ecosystem

Work independently to troubleshoot and resolve data engineering issues


Azure Data Services: Required Experience

Strong hands-on experience with:

o Azure Data Factory (ADF): pipeline orchestration, data movement, triggers

o Azure Databricks: PySpark development, Delta Lake, performance tuning

o Azure Data Lake Storage (ADLS Gen2): data storage, partitioning strategies

Experience with:

o Azure Key Vault: secrets and credential management

o Azure Active Directory (AAD): access control and authentication

o Azure Monitor / Log Analytics: pipeline monitoring and logging


Required Skills & Experience

5–7 years of experience in data engineering / ETL development

Strong proficiency in Python (data processing, APIs, ETL frameworks)

Hands-on experience with Databricks (PySpark)

Strong knowledge of Azure data ecosystem and architecture patterns

Experience with API-based data extraction and integration

Good understanding of data structures and data modeling concepts

Strong foundation in data warehousing (DWH) concepts

Experience working with large-scale data processing systems

Ability to work independently and deliver end-to-end solutions


Good to Have

Experience with Delta Lake / Lakehouse architecture

Knowledge of CI/CD (Azure DevOps, Git integration)

Exposure to data governance and cataloging tools
