Principal Data Platform Engineer
Quarks
2-5 years
Bengaluru
Posted: 15/04/2026
Job Description
We are looking for an experienced Principal Data Platform Engineer to design and build a scalable, modern Lakehouse data platform. This is a high-impact role for someone passionate about distributed data systems, platform engineering, and building enterprise-grade data solutions.
Technology Stack
- Architecture: Lakehouse (Medallion Bronze / Silver / Gold)
- Compute: Apache Spark (Expert level)
- Storage: Delta Lake (Mandatory), Iceberg (Good to have)
- Transformation: dbt (Advanced)
- Orchestration: Airflow, Cosmos
- Cloud: GCP (Preferred), Databricks
- Architecture Patterns: Microservices, Event-driven systems, CI/CD, Infrastructure as Code (Terraform)
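The Medallion (Bronze / Silver / Gold) layering named in the stack above can be sketched in plain Python, so it runs without a Spark cluster. In a real Lakehouse these layers would be Delta tables written by Spark or dbt jobs; the record fields and rules here are illustrative assumptions, not part of the role.

```python
# Bronze: raw events as ingested, duplicates and bad records included.
bronze = [
    {"order_id": 1, "amount": "100.0", "country": "IN"},
    {"order_id": 1, "amount": "100.0", "country": "IN"},  # duplicate
    {"order_id": 2, "amount": "bad",   "country": "IN"},  # unparseable
    {"order_id": 3, "amount": "50.5",  "country": "US"},
]

def to_silver(rows):
    """Silver: deduplicate on the business key and enforce types."""
    seen, out = set(), []
    for r in rows:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # in practice, route to a quarantine table
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append({**r, "amount": amount})
    return out

def to_gold(rows):
    """Gold: business-level aggregate (revenue per country)."""
    agg = {}
    for r in rows:
        agg[r["country"]] = agg.get(r["country"], 0.0) + r["amount"]
    return agg

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'IN': 100.0, 'US': 50.5}
```

Each layer only reads from the one before it, which is the property that makes the pattern easy to rebuild and audit.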
Key Requirements
- Strong expertise in Apache Spark internals (RDDs, DataFrames, Spark SQL, Catalyst Optimizer, partitioning, memory management)
- Experience building scalable, resilient ELT/ETL pipelines
- Deep understanding of Lakehouse architecture (ACID transactions, time travel, file compaction)
- Strong programming skills in Python / Scala / Java with software engineering best practices
- Experience with microservices, APIs (REST/gRPC), and messaging systems (Kafka/PubSub)
- Advanced knowledge of data modeling (Kimball, Data Vault, One Big Table/OBT)
- Strong hands-on experience with dbt (macros, packages, testing, CI/CD integration)
- Experience in cloud-native architectures and large-scale data migrations
Good to Have
- MLOps experience (feature stores, model deployment pipelines)
- GCP ecosystem (BigQuery, Dataproc, Cloud Composer)
- Data observability tools (Great Expectations, Monte Carlo, OpenTelemetry)