Lead Data Engineer - PySpark
Relanto
5 - 10 years
Hyderabad
Posted: 28/02/2026
Job Description
About the Role:
Join our data engineering team to build and maintain large-scale data pipelines that power analytics across our products. In this role, you will process large volumes of data to deliver actionable insights for product teams and executives.
What You'll Do:
Develop Apache Airflow DAGs and PySpark ETL pipelines for high-volume data processing.
Write optimized SQL queries for data transformation and aggregation.
Build data products serving Business Process, Executive KPIs, and Product Analytics.
Implement data quality and monitoring solutions.
Optimize pipeline performance and troubleshoot production issues.
Collaborate with cross-functional teams.
Monitor production pipelines (keep-the-lights-on, KLO, operations).
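To illustrate the kind of SQL transformation work the role involves (window functions for deduplication in an ETL step), here is a minimal, hypothetical sketch using SQLite; the table and column names are illustrative assumptions, not part of this posting, and in practice the same query would run through Spark SQL or Trino:

```python
import sqlite3

# Hypothetical dedup step: keep only the latest record per order using a
# ROW_NUMBER() window function -- a common pattern in ETL pipelines.
# Table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INT, status TEXT, updated_at TEXT);
    INSERT INTO orders VALUES
        (1, 'pending',   '2026-02-01'),
        (1, 'shipped',   '2026-02-03'),
        (2, 'delivered', '2026-02-02');
""")

latest = conn.execute("""
    SELECT order_id, status
    FROM (
        SELECT order_id, status,
               ROW_NUMBER() OVER (
                   PARTITION BY order_id
                   ORDER BY updated_at DESC
               ) AS rn
        FROM orders
    )
    WHERE rn = 1
    ORDER BY order_id
""").fetchall()

print(latest)  # → [(1, 'shipped'), (2, 'delivered')]
```

The same pattern translates directly to PySpark's DataFrame API via `Window.partitionBy(...).orderBy(...)` with `row_number()`.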
Qualifications:
Required Skills
10+ years of data engineering experience, with a minimum of 7 years dedicated to the big data stack.
Expertise in Python and PySpark (DataFrame API, Spark SQL).
Advanced SQL skills (window functions, complex queries).
Production experience with Apache Airflow.
Solid background in data warehousing and dimensional modelling.
Preferred Skills
Experience with SQL, Trino, Apache Iceberg.
Knowledge of Tableau CRM/Cloud and Salesforce platforms.
AWS/cloud data services experience.