Data Engineer
C5i
2 - 5 years
Bengaluru
Posted: 08/04/2026
Job Description
We are looking for a skilled Data Engineer with strong expertise in PySpark and the Hadoop ecosystem to build and manage scalable data pipelines and to support data processing across large datasets.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using PySpark
- Work with Hadoop ecosystem for distributed data processing and storage
- Develop and optimize Python-based data workflows
- Schedule, monitor, and manage workflows using Airflow
- Collaborate with cross-functional teams to ensure data availability and reliability
Must-have Skills:
- Strong hands-on experience with PySpark
- Good knowledge of Hadoop ecosystem (HDFS, Hive, etc.)
- Proficiency in Python programming
- Experience with Apache Airflow for workflow orchestration
- Understanding of data processing, ETL concepts, and large-scale data systems