Azure Databricks Engineer
RB Consultancy Services
2 - 5 years
Bengaluru
Posted: 04/04/2026
Job Description
Job Title: Azure Databricks Engineer
Location: Bangalore, Chennai, Pune, Hyderabad, Gurgaon
Experience Required: 6+ years
Work Mode: Work from Office
Joining: Immediate to 30 days
Candidates should be willing to join on third-party payroll and then move to TCS payroll upon BGV clearance.
Role Description:
Key Responsibilities
- Platform Infrastructure: Oversee Databricks platform configuration, resource management, workspace structuring, and cluster optimization.
- Monitor and troubleshoot performance issues across clusters, jobs, notebooks, and pipelines.
- Implement governance, security, compliance, and data access control using Role-Based Access Control (RBAC) and Unity Catalog.
- Pipeline Development & Architecture: Design and implement end-to-end data pipelines using PySpark, SQL, and Delta Lake within a medallion architecture, using Data Factory and Databricks.
- Build real-time and batch DLT pipelines using Databricks Delta Live Tables, with a focus on reliability and scalability.
- Optimize Lakehouse architecture for performance, cost-efficiency, and data integrity.
- Automate data ingestion, transformation, and validation, including support for streaming (Auto Loader) and scheduled workflows.
- Perform data transformations, cleansing, and validation using data quality rules to produce consistent and accurate data sets.
- Manage and monitor job orchestration, ensuring efficient and reliable pipeline runs.
- CI/CD & DevOps: Design and maintain CI/CD pipelines for Databricks artifacts (notebooks, jobs, libraries) using tools such as Azure DevOps, GitHub Actions, Terraform, or Jenkins.
- Support trunk-based development, deployment workflows, and infrastructure-as-code practices.
- Manage version control and automated testing using Git and related DevOps practices.
- Collaboration & Delivery: Collaborate with product owners, business stakeholders, and data teams to gather requirements and translate them into technical solutions.
- Drive the adoption of best practices in coding, versioning, testing, deployment, monitoring, and security.
- Provide thought leadership on best practices in data engineering, architecture, and cloud computing.
- Performance Optimization: Deliver optimized Spark jobs and SQL queries for large-scale data processing.
- Implement partitioning, caching, and indexing strategies to improve the performance and scalability of big data workloads.
- Conduct POCs for capacity planning and recommend appropriate infrastructure optimizations for cost-effectiveness.
- Documentation & Knowledge Sharing: Create and review detailed documentation for data workflows, SOPs, architectural reviews, etc.
- Mentor junior team members and promote a culture of learning and innovation.
- Promote a culture of optimization and cost saving, and enable research-driven development.
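To illustrate the data-quality rules mentioned in the responsibilities above, here is a minimal plain-Python sketch of row-level validation with a quarantine path, as might run in a silver-layer cleansing step. All rule names, field names, and thresholds are hypothetical examples, not requirements from this posting; in a real Databricks pipeline this logic would typically be expressed as DLT expectations or PySpark filters.

```python
# Hypothetical row-level data quality rules for a cleansing step.
# Field names ("customer_id", "amount") and rules are illustrative only.

def validate_row(row: dict) -> list:
    """Return the names of the rules this record fails."""
    failures = []
    if not row.get("customer_id"):
        failures.append("customer_id_not_null")
    amount = row.get("amount")
    if amount is None or amount < 0:
        failures.append("amount_non_negative")
    return failures

def split_valid_invalid(rows):
    """Route clean rows onward; quarantine rows failing any rule."""
    valid, invalid = [], []
    for row in rows:
        (invalid if validate_row(row) else valid).append(row)
    return valid, invalid

rows = [
    {"customer_id": "C1", "amount": 10.0},
    {"customer_id": None, "amount": 5.0},   # fails null check
    {"customer_id": "C2", "amount": -3.0},  # fails non-negative check
]
valid, invalid = split_valid_invalid(rows)
print(len(valid), len(invalid))  # → 1 2
```

Keeping each rule as a named check makes the quarantine output auditable, which mirrors how DLT expectations report per-rule pass/fail counts.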
Required Qualifications
- Technical Expertise: 5 years in data engineering, with a strong focus on the Databricks and Azure ecosystems.
- Deep hands-on experience with Data Factory, Databricks Lakehouse architecture, Delta Lake, PySpark, and Spark job optimization.
- Proficiency in Python, SQL, and optionally Scala for building scalable ETL/ELT pipelines.
- Strong SQL skills are essential, with hands-on experience in SQL Server or other RDBMS platforms.
- Strong experience in designing and optimizing DLT pipelines.