Senior Data Engineer

PwC Acceleration Center India

5 - 10 years

Bengaluru

Posted: 21/02/2026

Job Description

Job Title: Data Engineer

Role: Senior Associate

Tower: Data, Analytics & Specialist Managed Service

Experience: 5.5 - 10 years

Educational Qualification: BE / B Tech / ME / M Tech / MBA

Work Location: Bangalore

Job Description:

The Data Engineer is responsible for building, operating, and optimizing cloud-based data platforms on AWS or Azure. The role designs and delivers ELT pipelines using Databricks (PySpark), Snowflake, and dbt, with a strong emphasis on data quality, reliability, security, performance, and cost efficiency. The scope is the same across levels; expectations scale with experience.

Key Responsibilities:

  • Design, build, and maintain scalable ELT pipelines and data models using Databricks (PySpark), Snowflake, and dbt.
  • Implement robust data validation and cleansing; ensure data quality, integrity, and lineage.
  • Orchestrate, schedule, and monitor pipelines; troubleshoot and resolve processing, transformation, and storage issues.
  • Optimize performance and cost for Spark and SQL workloads; tune schemas, queries, and jobs in lake and warehouse environments.
  • Apply data modeling best practices (staging, dimensional/star schemas, CDC, SCD); manage schema evolution.
  • Implement security and privacy controls (IAM/RBAC, secrets management, encryption in transit/at rest); uphold governance (metadata, lineage, MDM).
  • Follow SLAs and operate within incident, change, and problem management processes; create/run playbooks and root cause analyses.
  • Collaborate with analytics, data science, and product stakeholders to translate requirements into technical designs and incremental deliverables.
  • Use Git-based workflows and CI/CD to develop, test, and deploy pipelines across environments.
  • Document architecture, data contracts, lineage, and operational procedures; contribute to continuous improvement and standards.
  • Manage cloud costs and usage (cluster/warehouse sizing, autoscaling, resource monitors); provide recommendations for efficiency.
  • Contribute to a high-trust, collaborative team culture and knowledge sharing.

Must-Have Skills:

  • Cloud: AWS or Azure (object storage, compute, networking basics, IAM/security).
  • Databricks with PySpark; strong Python and SQL.
  • Snowflake (data modeling, performance tuning, warehousing concepts).
  • dbt (models, tests, snapshots, documentation, environment management).
  • Orchestration and monitoring for production pipelines (e.g., Airflow, Azure Data Factory, or equivalent).
  • Version control and CI/CD (e.g., GitHub/GitLab/Azure DevOps).

Preferred Skills:

  • AWS data services (e.g., Glue, Lambda, DMS) and/or Azure equivalents (e.g., Data Factory, Event Hubs).
  • Streaming and CDC (e.g., Kafka/Kinesis/Event Hubs; Snowpipe/Tasks/Streams; Structured Streaming).
  • Data observability and quality tooling (e.g., Great Expectations, Soda, OpenLineage).
  • Infrastructure as Code (e.g., Terraform) and secrets management (e.g., Key Vault/KMS).
  • BI ecosystem familiarity (e.g., Power BI/Tableau/Looker) and data contract practices.

Nice to have:

  • AWS or Azure certification
