GCP Data Engineer
Talentmatics
12 - 14 years
Gurugram
Posted: 20/02/2026
Job Description
We are seeking a Lead GCP Big Data Engineer with strong expertise in designing and building scalable data pipelines, ETL/ELT workflows, and enterprise-grade big data solutions on Google Cloud Platform (GCP).
This role requires a combination of technical leadership and hands-on development, driving data engineering best practices while mentoring and guiding team members.
Note: Only immediate joiners or candidates available within 15 days should apply.
Key Responsibilities
- Design, develop, and maintain scalable ETL/ELT pipelines using PySpark, SQL, and GCP-native services
- Lead end-to-end data engineering initiatives with a focus on scalability, reliability, and performance optimization
- Develop and optimize workflows using:
  - Google Cloud Dataflow
  - Google Cloud Dataproc
  - Google Cloud Composer
  - Apache Airflow
- Establish and enforce data governance, quality, security, and performance standards
- Collaborate with product, analytics, platform, and business stakeholders to ensure smooth solution delivery
- Mentor junior engineers and promote best practices in coding, architecture, and cloud-based data design
- Troubleshoot complex data challenges and optimize large-scale data processing systems
Mandatory Skills
Google Cloud Platform (GCP)
- Strong hands-on experience with Cloud Storage for data lake implementations
- Expertise in BigQuery for large-scale analytics and warehousing
- Experience with Google Cloud Dataproc for Spark/Hadoop-based processing
- Proficiency in Google Cloud Composer for workflow orchestration
- Hands-on experience with Google Cloud Dataflow for batch and streaming pipelines
- Knowledge of Google Cloud Pub/Sub for event-driven and real-time ingestion
- Experience with Datastream for change data capture (CDC) implementations
- Familiarity with Database Migration Service for migration projects
- Exposure to Analytics Hub for data sharing and governance
- Experience with Google Cloud Workflows for service orchestration
- Working knowledge of Dataform for data transformations
- Hands-on experience with Cloud Data Fusion for integration use cases
Big Data & Data Engineering
- Strong expertise in PySpark for large-scale distributed processing
- Solid understanding of the Hadoop ecosystem
- Experience designing and implementing robust ETL/ELT frameworks
- Advanced proficiency in ANSI SQL for transformation and analytics
- Hands-on experience with Apache Airflow for scheduling and monitoring pipelines
Programming Languages
- Strong proficiency in Python for data engineering and automation
- Working knowledge of Java for backend or big data applications
- Experience with Scala for Spark-based processing
Required Experience
- 12-14 years of experience in Data Engineering
- Strong hands-on expertise in GCP-based big data environments
- Proven experience leading or owning data platform and pipeline initiatives
- Demonstrated ability to design scalable, high-performance data architectures
- Excellent communication and stakeholder collaboration skills