Data Engineer with DevOps Skills
Senpiper PTY LTD
2 - 5 years
Gurugram
Posted: 12/02/2026
Job Description
We are seeking a highly skilled Data Engineer with deep expertise in the Azure data ecosystem and Databricks, combined with strong DevOps, CI/CD, and Infrastructure as Code (Terraform) capabilities.
The ideal candidate will be responsible for designing, developing, deploying, and maintaining scalable data pipelines and platforms with full ownership of the data lifecycle in production environments.
Key Responsibilities
Data Engineering & Architecture
- Design and develop scalable, high-performance data pipelines using Azure Databricks (PySpark/SQL).
- Build batch and streaming ingestion pipelines using Azure Data Factory / Synapse / Event Hub / Kafka.
- Implement data modeling and transformation using Delta Lake, Medallion Architecture (Bronze/Silver/Gold).
- Ensure data quality, reliability, and performance tuning (partitioning, indexing, caching, cluster optimization).
- Design and manage CI/CD pipelines for data platforms using Azure DevOps / GitHub Actions.
- Automate deployment of Databricks notebooks, jobs, clusters, and workflows across environments (Dev/Test/Prod).
- Implement Infrastructure as Code (IaC) using Terraform for provisioning Azure resources and Databricks workspaces.
- Manage secrets and configurations using Azure Key Vault and secure networking (VNET, Private Endpoints).
- Own end-to-end deployment of data solutions from development to production.
- Monitor and troubleshoot production pipelines using Azure Monitor, Log Analytics, Databricks logs.
- Implement alerting, logging, retry mechanisms, and failure handling.
- Perform root cause analysis and provide long-term fixes for performance and stability issues.
- Implement data governance and access control using Unity Catalog / Azure RBAC.
- Ensure compliance with security and data privacy standards.
- Maintain documentation for pipelines, infrastructure, and operational procedures.
- Work closely with data scientists, analysts, and application teams.
- Promote engineering best practices: code reviews, testing frameworks, version control, and automation.
- Mentor junior engineers and contribute to architectural decisions.
Required Skills & Qualifications
- Strong hands-on experience with:
- Azure Databricks (PySpark, SQL, Delta Lake)
- Azure Data Factory / Synapse Analytics / Azure Storage (ADLS Gen2)
- CI/CD tools: Azure DevOps or GitHub Actions
- Terraform (IaC) for Azure & Databricks resources
- Git and branching strategies
- Experience with:
- Streaming technologies (Kafka / Event Hub / Structured Streaming)
- Monitoring & logging (Azure Monitor, App Insights, Databricks jobs)
- Linux & Shell scripting
- Strong knowledge of:
- Cloud networking (VNETs, subnets, private endpoints)
- Secrets management (Key Vault)
- Containerization (Docker is a plus)
- Experience implementing:
- Environment promotion (Dev → QA → Prod)
- Automated testing for data pipelines
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in Data Engineering, including at least 2 years on Azure Databricks.
- Databricks or Azure certifications are a plus:
- Databricks Data Engineer Associate/Professional
- Azure Data Engineer (DP-203)
- Strong problem-solving and debugging skills.
- Ownership mindset with focus on reliability and scalability.
- Ability to work in Agile / DevOps environments.
- Excellent communication and documentation skills.
Nice to Have
- Experience with Microsoft Fabric
- Experience with dbt
- Experience with Terraform modules and reusable templates
- Knowledge of data governance tools (Purview, Unity Catalog)
- Exposure to GenAI or ML pipelines on Databricks