Databricks Engineer
ValueMomentum
2 - 5 years
Pune
Posted: 08/01/2026
Job Description
Interview Date: 13th December (Walk-in Drive)
Experience: 5+ years only
Please bring a hard copy of your resume.
About the Role
We are seeking a skilled Databricks Engineer with strong expertise in PySpark, SQL, and cloud-based big data platforms. The ideal candidate will design, develop, and optimize large-scale data pipelines, ensuring high performance and reliability in a distributed computing environment.
Key Responsibilities
- Design, develop, and maintain ETL/ELT data pipelines using Databricks (PySpark, SQL, Delta Lake).
- Optimize and tune PySpark code for performance, scalability, and cost efficiency.
- Implement data quality checks, data validation frameworks, and robust testing strategies.
- Work with structured and unstructured data from various sources including batch and streaming.
- Develop and maintain Delta Lake tables, schemas, partitions, and governance processes.
- Collaborate with data architects, analysts, and business stakeholders to deliver data solutions.
- Integrate Databricks with cloud services such as Azure Data Lake, AWS S3, Azure Data Factory, Glue, etc.
- Automate workflows using Databricks Jobs, Airflow, or cloud-native orchestration tools.
- Monitor, debug, and resolve issues in production data pipelines.
- Ensure adherence to data security, governance, and compliance policies.
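
The data quality and validation work described above can be sketched as follows. This is a minimal, illustrative example in plain Python: the field names (`policy_id`, `premium`) and helper names (`validate_row`, `split_valid_invalid`) are hypothetical, and a real Databricks pipeline would typically express these checks over PySpark DataFrames or as Delta Lake table constraints rather than row-by-row Python.

```python
# Hypothetical row-level data quality check. Records are plain dicts here
# to keep the sketch self-contained; in a real pipeline these checks would
# run over Spark DataFrames.

REQUIRED_FIELDS = {"policy_id", "premium"}  # illustrative schema, not from the posting

def validate_row(row: dict) -> list:
    """Return a list of validation errors for one record (empty list = valid)."""
    errors = []
    # Presence check: every required field must be supplied and non-null.
    for field in sorted(REQUIRED_FIELDS):
        if row.get(field) is None:
            errors.append(f"missing field: {field}")
    # Range check: premiums must be non-negative numbers.
    premium = row.get("premium")
    if isinstance(premium, (int, float)) and premium < 0:
        errors.append("premium must be non-negative")
    return errors

def split_valid_invalid(rows):
    """Partition records into (valid, invalid) lists, mirroring how a
    pipeline might route bad records to a quarantine table."""
    valid, invalid = [], []
    for row in rows:
        (invalid if validate_row(row) else valid).append(row)
    return valid, invalid
```

In a Databricks setting, the same split is often implemented by filtering the DataFrame on the validation predicate and writing the failing rows to a separate Delta table for review, so bad records never silently reach downstream consumers.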
