Job Title: Data Engineer with PySpark
Location: Gurgaon
Drive Date: 26th April 2025 (in person)
Job Type: Hybrid (3 days per week from office)
Responsibilities
- 4 to 14 years of overall experience
- Minimum 4 years of experience building and deploying Big Data applications using PySpark
- 2+ years of experience with AWS Cloud data integration using Spark and AWS Glue/EMR
- In-depth understanding of Spark architecture and distributed systems
- Good exposure to Spark job optimizations
- Expertise in handling complex, large-scale Big Data environments
- Able to design, develop, test, deploy, maintain, and improve data integration pipelines (an illustrative sketch follows this list)
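For context, here is a minimal sketch of the kind of PySpark batch integration pipeline this role covers: read raw data, cleanse and enrich it, then write partitioned output with a basic Spark optimization. The S3 paths, column names, and app name are hypothetical placeholders, not part of the actual role or any real system.

# Illustrative only: a minimal PySpark data integration pipeline.
# All paths and column names below are assumed placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-integration")  # hypothetical job name
    .getOrCreate()
)

# Read raw source data (placeholder S3 path).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Basic cleansing and enrichment: drop malformed rows,
# deduplicate, and derive a partition column.
cleaned = (
    orders
    .dropna(subset=["order_id", "order_ts"])  # assumed columns
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
)

# A common Spark job optimization: repartition by the write key to
# avoid many small output files, then write partitioned Parquet.
(
    cleaned
    .repartition("order_date")
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders/")
)

spark.stop()

The same pipeline could run on AWS Glue or EMR with the reads and writes pointed at the relevant data stores; the Spark APIs used here are common to both.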
Mandatory Skills
- 4+ years of experience in PySpark
- 2+ years of experience in AWS Glue/EMR
- Strong knowledge of SQL
- Excellent written and spoken communication skills, and strong time management skills
Nice-to-Have
- Any cloud skills
- Any ETL knowledge