Senior Data Engineer - Databricks
Tredence Inc.
8 - 12 years
Bengaluru
Posted: 21/02/2026
Job Description
Job Overview:
As a Data Engineering Architect specializing in Databricks, you will lead the design and implementation of scalable data solutions in the Healthcare and Life Sciences domain. Success in this role means delivering robust architecture that accelerates data-driven decision-making while ensuring data quality and governance. You will collaborate closely with cross-functional teams to translate complex business requirements into effective data engineering strategies. This position plays a critical role in advancing Tredence's mission to drive measurable business impact through innovative data insights.
Job Locations - All Tredence India Office locations (Bangalore, Chennai, Hyderabad, Kolkata, Gurgaon & Pune)
Joining Time - Immediate to a maximum of 30 days, including candidates currently serving their notice period
What will your role look like?
- Design, architect, and implement end-to-end data lakehouse solutions using Databricks, Delta Lake, and data catalog technologies.
- Lead data modelling and data quality initiatives to ensure reliable, consistent, and well-governed datasets for analytics and AI applications.
- Develop and optimize PySpark, Python, and SQL pipelines to support complex data processing workflows.
- Collaborate with business stakeholders and technical teams to understand requirements, translate them into scalable data engineering solutions, and support last-mile adoption of insights.
- Define standards and best practices for data architecture, governance, and security aligned with Healthcare and Life Sciences regulations.
- Mentor and guide engineering teams on architecture principles, tooling, and performance optimization in Databricks environments.
You will need:
- Demonstrate 8-12 years of relevant experience in data engineering architecture, ideally within Healthcare and Life Sciences or related industries.
- Possess deep expertise in Databricks architecture, including Delta Lake, Lakehouse, and data catalog implementations.
- Exhibit strong proficiency in data modelling, data quality frameworks, and governance best practices.
- Be skilled in developing and optimizing data pipelines using PySpark, Python, and SQL.
- Hold a BE/B.Tech degree in Computer Science, Engineering, or a related discipline.
- Communicate effectively to work across technical and business teams in a collaborative environment.
Good to Have Skills:
- Experience with cloud platforms such as AWS, Azure, or GCP that integrate with Databricks.
- Familiarity with Healthcare and Life Sciences data compliance standards (e.g., HIPAA) and security frameworks.
- Knowledge of DevOps, CI/CD pipelines, and automation for data engineering workflows.
- Certification(s) in Databricks or related data engineering technologies.
- Exposure to AI/ML model deployment pipelines and MLOps practices.