Lead Data Engineer (Databricks)
Dreampath Services
5 - 10 years
Hyderabad
Posted: 17/04/2026
Job Description
Designation: Lead Data Engineer
Experience: 8-12 years
Work Location: Hyderabad
Job Type: Contract-to-hire
Mandatory skills: AWS, Python, SQL, Databricks
About the Role
We are looking for a highly skilled Lead Data Engineer who is passionate about building robust, scalable, and high-performance data systems. The ideal candidate will have deep expertise in SQL, Python, AWS, and Databricks, with a proven track record of designing and implementing modern data pipelines and analytical frameworks.
Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes for data ingestion, transformation, and storage.
- Work with cross-functional teams to define and deliver data solutions supporting business and analytics needs.
- Optimize and fine-tune SQL queries, data models, and pipeline performance.
- Build and manage data workflows in Databricks and integrate with AWS data services (S3, Redshift, Glue, Lambda, etc.).
- Ensure data accuracy, consistency, and reliability through data quality checks and monitoring frameworks.
- Collaborate with Data Science, Analytics, and Product teams to enable self-service analytics and advanced data-driven insights.
- Follow best practices for data governance, security, and compliance.
- Continuously evaluate emerging data technologies and propose innovative solutions for process improvement.
Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 8+ years of overall experience, with a minimum of 6 years of hands-on experience in Data Engineering or related roles.
- Strong proficiency in SQL for complex query development and data manipulation.
- Expertise in Python for building data processing and automation scripts.
- Experience with the AWS ecosystem, especially S3, Glue, Redshift, Lambda, and EMR.
- Hands-on experience with Databricks for data processing, transformation, and analytics.
- Experience working with structured and unstructured datasets in large-scale environments.
- Solid understanding of ETL frameworks, data modeling, and data warehousing concepts.
- Excellent problem-solving, debugging, and communication skills.
Good to Have
- Experience with Airflow, Snowflake, or Kafka.
- Knowledge of CI/CD pipelines and Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
- Exposure to data governance, metadata management, and data cataloguing tools.