
PySpark Developer (Python, Airflow, Cloudera mandatory) - 8+ YOE - Onsite - Chennai - Immediate to 20-Day Joiners

ValueLabs

8+ years

Chennai

Posted: 12/04/2026


Job Description

Education and Experience

Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.

8+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.


PySpark Job Description:


Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
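
For illustration, a minimal sketch of what such a pipeline can look like; the job name, Hive table (raw_db.orders), and output path are invented for this example, not taken from the role:

    # Minimal PySpark ETL sketch for CDP; names and paths are illustrative only.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("orders_etl")      # hypothetical job name
        .enableHiveSupport()        # typical on Cloudera, where Hive is the metastore
        .getOrCreate()
    )

    # Extract: read a raw Hive table.
    raw = spark.table("raw_db.orders")

    # Transform: drop rows missing the key and derive a partition column.
    curated = (
        raw.dropna(subset=["order_id"])
           .withColumn("order_date", F.to_date("order_ts"))
    )

    # Load: write to the curated zone, partitioned by date.
    (curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("/data/curated/orders"))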

Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
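
A sketch of the two most common ingestion patterns, JDBC sources and landed files, reusing the spark session from the sketch above; the URL, credentials, and paths are placeholders:

    # Relational source via JDBC; hypothetical host, database, and account.
    jdbc_df = (
        spark.read.format("jdbc")
        .option("url", "jdbc:postgresql://db-host:5432/sales")
        .option("dbtable", "public.customers")
        .option("user", "etl_user")
        .option("password", "***")
        .option("fetchsize", "10000")   # rows fetched per round trip
        .load()
    )

    # File source from a hypothetical landing zone on HDFS.
    csv_df = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("/landing/feeds/customers/*.csv")
    )

    # Land both feeds in the raw zone of the data lake.
    jdbc_df.write.mode("append").parquet("/data/raw/customers_db")
    csv_df.write.mode("append").parquet("/data/raw/customers_files")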

Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
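
One example of typical cleanse-and-conform steps with the DataFrame API; the column names are invented, and spark is the session from the first sketch:

    from pyspark.sql import functions as F

    # Deduplicate, normalize, and type the raw customer feed.
    clean = (
        spark.read.parquet("/data/raw/customers_db")
        .dropDuplicates(["customer_id"])
        .withColumn("email", F.lower(F.trim("email")))
        .withColumn("signup_date", F.to_date("signup_ts", "yyyy-MM-dd"))
        .filter(F.col("customer_id").isNotNull())
    )

    # Aggregate into an analytics-friendly shape.
    by_region = clean.groupBy("region").agg(
        F.countDistinct("customer_id").alias("customers"),
        F.avg("lifetime_value").alias("avg_ltv"),
    )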

Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
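
A few of the usual PySpark tuning levers, shown as a sketch; partition counts and join strategy always depend on the cluster and data sizes, so treat the numbers as placeholders:

    from pyspark.sql import functions as F

    # Shuffle sizing and adaptive query execution (Spark 3.x).
    spark.conf.set("spark.sql.shuffle.partitions", "200")
    spark.conf.set("spark.sql.adaptive.enabled", "true")

    orders = spark.read.parquet("/data/curated/orders")
    dim_customer = spark.read.parquet("/data/curated/customers")

    # Broadcast the small dimension table to avoid a shuffle join.
    joined = orders.join(F.broadcast(dim_customer), "customer_id")

    # Cache only when the result is reused by several downstream actions.
    joined.cache()
    joined.count()  # materialize the cache once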

Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
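
A simple hand-rolled validation routine of the kind this item describes; dedicated quality libraries exist, but the sketch below assumes plain PySpark and invented rules:

    from pyspark.sql import functions as F

    df = spark.read.parquet("/data/curated/orders")

    # Collect row count, null keys, and key uniqueness in one pass.
    checks = df.agg(
        F.count(F.lit(1)).alias("row_count"),
        F.sum(F.col("order_id").isNull().cast("int")).alias("null_ids"),
        F.countDistinct("order_id").alias("distinct_ids"),
    ).first()

    # Fail the pipeline loudly instead of loading bad data.
    if checks.row_count == 0:
        raise ValueError("data quality: no rows loaded")
    if checks.null_ids > 0:
        raise ValueError(f"data quality: {checks.null_ids} null order_ids")
    if checks.distinct_ids != checks.row_count:
        raise ValueError("data quality: duplicate order_ids detected")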

Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
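
A minimal Airflow DAG sketch that submits the PySpark job above; it assumes the apache-airflow-providers-apache-spark package and Airflow 2.4+ (for the schedule argument), with the DAG name, connection ID, and paths as placeholders:

    from datetime import datetime
    from airflow import DAG
    from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

    with DAG(
        dag_id="orders_etl_daily",          # hypothetical DAG name
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        run_etl = SparkSubmitOperator(
            task_id="run_orders_etl",
            application="/opt/jobs/orders_etl.py",  # the PySpark script above
            conn_id="spark_default",
            conf={"spark.yarn.queue": "etl"},       # YARN queue on Cloudera
        )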

Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
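
One possible monitoring hook, sketched as an Airflow failure callback with retries; the task would sit inside the with DAG(...) block from the previous sketch, and the print-based alert is a stand-in for a real email or chat notification:

    from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

    def notify_on_failure(context):
        # Stand-in alert: real deployments usually page or message the team.
        ti = context["task_instance"]
        print(f"ALERT: {ti.dag_id}.{ti.task_id} failed on {context['ds']}")

    run_etl_monitored = SparkSubmitOperator(
        task_id="run_orders_etl_monitored",
        application="/opt/jobs/orders_etl.py",   # hypothetical job path
        conn_id="spark_default",
        retries=2,                               # retry transient failures first
        on_failure_callback=notify_on_failure,   # alert only after retries exhaust
    )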

Collaboration: Work closely with data engineers, analysts, product managers, and other stakeholders to understand data requirements and support data-driven initiatives.

Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.


Technical Skills


PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
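
To make the RDD/DataFrame distinction concrete, a small self-contained comparison (the data is made up):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("rdd_vs_df").getOrCreate()

    # DataFrame API: declarative, optimized by the Catalyst planner.
    df = spark.createDataFrame([("a", 1), ("b", 2), ("a", 3)], ["key", "value"])
    df.groupBy("key").sum("value").show()

    # The same aggregation on the RDD API: lower level, no optimizer,
    # still useful for custom partition-level logic.
    rdd = spark.sparkContext.parallelize([("a", 1), ("b", 2), ("a", 3)])
    print(rdd.reduceByKey(lambda x, y: x + y).collect())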

Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
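
A sketch of how Spark typically meets the shared metastore on CDP, so tables written here are also visible to Hive and Impala; the database and table names are hypothetical:

    from pyspark.sql import SparkSession

    # Hive support points Spark at the metastore that Hive/Impala also use.
    spark = (
        SparkSession.builder
        .appName("hive_read")
        .enableHiveSupport()
        .getOrCreate()
    )

    spark.sql("SHOW DATABASES").show()

    daily = spark.sql("""
        SELECT order_date, COUNT(*) AS orders
        FROM curated_db.orders          -- hypothetical Hive table
        GROUP BY order_date
    """)

    # Register the result as a managed table for downstream SQL tools.
    daily.write.mode("overwrite").saveAsTable("curated_db.daily_order_counts")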

Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).

Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
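
A brief Structured Streaming sketch for Kafka; it assumes the spark-sql-kafka connector is on the classpath, and the broker address, topic, and paths are placeholders:

    from pyspark.sql import functions as F

    # Consume a Kafka topic as a streaming DataFrame.
    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")
        .option("subscribe", "orders_events")
        .load()
    )

    # Kafka values arrive as bytes; cast to string before parsing further.
    parsed = events.select(F.col("value").cast("string").alias("payload"))

    # Land micro-batches in the raw zone with a checkpoint for recovery.
    query = (
        parsed.writeStream
        .format("parquet")
        .option("path", "/data/raw/orders_events")
        .option("checkpointLocation", "/checkpoints/orders_events")
        .trigger(processingTime="1 minute")
        .start()
    )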

Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.

Scripting and Automation: Strong shell scripting skills on Linux.
