Platform Lead (Databricks & Azure)
Seosaph-infotech
5 - 10 years
Hyderabad
Posted: 05/03/2026
Job Description
We are seeking an experienced Platform Lead with strong expertise across Databricks, Azure Cloud, data engineering modernization, cost optimization, observability, automation, and performance tuning. The ideal candidate will architect scalable lakehouse platforms, lead modernization and migration programs, enable platform readiness and adoption, optimize compute workloads, and guide teams on platform standards, emerging features, and best practices. Exposure to FinOps and familiarity with AWS is an added advantage.
Key Responsibilities
- Lead the design and development of end-to-end data and analytics solutions using Databricks and Azure, ensuring scalable, high-quality, and business-focused platforms.
- Lead modernization and migration of legacy ETL/ELT pipelines into scalable, high-performing Spark and Delta Lake workflows.
- Drive platform readiness and adoption by defining engineering guardrails, enablement plans, and onboarding patterns for teams.
- Implement Databricks best practices across Unity Catalog, Delta Lake, governance, security, observability, and data quality.
- Optimize clusters, SQL warehouses, autoscaling, jobs, workflows, and serverless compute for performance and cost efficiency.
- Conduct in-depth Spark performance tuning including partitioning, shuffle optimization, caching, AQE, skew handling, and I/O improvements.
- Design and implement observability and monitoring frameworks covering Spark metrics, job performance, compute usage, lineage, and governance signals.
- Build automation frameworks for cluster provisioning, job orchestration, CI/CD, data quality checks, and environment standardization.
- Analyze Databricks billing, cluster utilization, SQL warehouse usage, and FinOps metrics, and drive actionable cost optimization initiatives.
- Define platform-wide standards, frameworks, design patterns, reusable libraries, and operational best practices.
- Evaluate and implement new Databricks capabilities such as Unity Catalog enhancements, Serverless Compute, Lakehouse Federation, Delta Live Tables, and platform-native optimizations.
- Partner with architects, data engineers, SRE, and leadership to ensure reliable operations, scalability, security, and continuous improvements.
- Review workload performance, troubleshoot scaling issues, and architect high-availability, secure cloud solutions aligned with enterprise architecture.
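The Delta Lake maintenance and Spark tuning work described in the responsibilities above can be illustrated with standard Databricks SQL commands (the table name is hypothetical; retention and tuning choices depend on the workload):

```sql
-- Compact small files and co-locate data on commonly filtered columns
OPTIMIZE sales.orders
ZORDER BY (customer_id, order_date);

-- Remove data files no longer referenced by the table (subject to the retention period)
VACUUM sales.orders;

-- Enable adaptive query execution and skew-join handling for the session
SET spark.sql.adaptive.enabled = true;
SET spark.sql.adaptive.skewJoin.enabled = true;
```

In practice, candidates would be expected to reason about when Z-ordering pays off versus liquid clustering or partitioning, and to schedule maintenance jobs rather than run them ad hoc.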
Required Skills & Experience
- 8+ years of experience in data engineering, cloud architecture, or platform engineering.
- 5+ years hands-on experience with Databricks or Apache Spark.
- Strong expertise in Azure services including ADLS Gen2, ADF/Synapse Pipelines, Key Vault, Azure Functions, monitoring/alerting, and networking/Private Endpoint (PE) configurations.
- Deep understanding of Delta Lake internals, OPTIMIZE/Z-ORDER, schema evolution, versioning, and data lifecycle management.
- Strong knowledge of cluster sizing, workload patterns, autoscaling strategy, SQL warehouse optimization, and serverless compute.
- Experience driving FinOps, including tagging strategy, cost governance, budget controls, and utilization optimization.
- Experience building observability and monitoring dashboards for Spark jobs, pipelines, and warehouse workloads.
- Hands-on experience with platform automation, CI/CD, reusable frameworks, migration playbooks, and standardized deployment patterns.
- Ability to design secure, scalable, cloud landing zone-aligned architectures.
- Strong analytical, documentation, communication, and architectural decision-making skills.
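As a toy illustration of the billing analysis and FinOps skills listed above, the sketch below rolls up hypothetical usage records into cost per cost-center tag. The record shape and DBU rates are assumptions for illustration, not the actual Databricks billing schema:

```python
from collections import defaultdict

# Hypothetical usage records: (cost-center tag, DBUs consumed, rate per DBU in USD)
usage = [
    ("team-analytics", 120.0, 0.55),
    ("team-analytics", 80.0, 0.55),
    ("team-ml", 200.0, 0.65),
]

def cost_by_tag(records):
    """Aggregate spend per tag, a first step toward chargeback/showback reports."""
    totals = defaultdict(float)
    for tag, dbus, rate in records:
        totals[tag] += dbus * rate
    return dict(totals)

print(cost_by_tag(usage))
```

In a real deployment this aggregation would run against the platform's billing export or system tables, keyed on the tagging strategy the role is expected to define.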
Good to Have
- Exposure to AWS services (S3, Glue, Lambda, EMR).
- Familiarity with MLflow, Feature Store, or MLOps patterns.