
Data Engineer

People Prime Worldwide

7 - 10 years

Gurugram

Posted: 27/12/2025


Job Description

About Client:

Our Client is a global IT services company headquartered in Southborough, Massachusetts, USA. Founded in 1996, with revenue of $1.8B and 35,000+ associates worldwide, it specializes in digital engineering and IT services, helping clients modernize their technology infrastructure, adopt cloud and AI solutions, and accelerate innovation. It partners with major firms in banking, healthcare, telecom, and media.

Our Client is known for combining deep industry expertise with agile development practices, enabling scalable and cost-effective digital transformation. The company operates in over 50 locations across more than 25 countries, has delivery centers in Asia, Europe, and North America and is backed by Baring Private Equity Asia.

Job Title: AWS Databricks

Key Skills: AWS & Databricks, Python, PySpark, Cloud platforms

Job Locations: Chennai / Hyderabad / Bangalore / Pune / Mumbai / Gurgaon

Experience: 7 - 10 Years

Budget: 18 - 20 LPA

Education Qualification: Any Graduation

Job Description:


We are seeking an experienced Senior Data Engineer with strong expertise in Databricks on AWS, Spark-based data pipelines, and Python/PySpark. The ideal candidate will play a key role in designing, building, and maintaining scalable data platforms and AI-driven solutions within an Agile delivery environment.

Key Responsibilities:

  • Design, develop, and maintain scalable data pipelines using Databricks and Apache Spark on AWS
  • Build and optimize cloud-based data and AI solutions leveraging AWS services
  • Implement robust data processing workflows for analytics and machine learning use cases
  • Collaborate with cross-functional teams in Agile data development projects
  • Apply best practices in data engineering, testing methodologies, and code quality
  • Ensure solutions meet quality, compliance, and governance standards
  • Contribute to documentation, design reviews, and knowledge sharing
  • Communicate effectively with technical and non-technical stakeholders

Required Qualifications:

  • Bachelor's degree (or equivalent) in Computer Engineering, Computer Science, or a related field
  • Minimum 5 years of experience in a data/software engineering role
  • At least 3 years of strong, hands-on experience with Databricks on AWS, including:
      • Spark-based data processing
      • Data pipelines
      • Analytics and machine learning tools within Databricks
  • Minimum 2 years of experience building AWS cloud-based data pipelines and AI solutions
  • Minimum 2 years of hands-on experience with Python and PySpark
  • Strong understanding of Agile delivery models for data projects
  • Excellent written and verbal communication skills

Technical Skills:

  • Databricks (on AWS)
  • Apache Spark (PySpark)
  • Python
  • AWS data services
  • Data pipeline design and optimization
  • Testing and validation methodologies for data engineering

Nice-to-Have Skills:

  • Databricks Certification
  • Experience working within a DevOps delivery model
  • Experience in quality- and compliance-driven environments, including adherence to policies, procedures, and guidelines
  • Industry experience working with large, complex data environments
  • Consultancy or vendor-side experience


Interested candidates, please share your CV to
