Manager, Product Development (Data Engineering/AWS/AI)
Greenway Health
5 - 10 years
Bengaluru
Posted: 01/03/2026
Job Description
Job Summary
The Engineering Manager, Data Lakehouse & AI Engineering is responsible for leading the architecture, technical design, engineering execution, and operational excellence of Greenway's AWS-based Data Lakehouse platform.
This role provides technical leadership across multiple scrum pods (2-4 teams of 4-5 engineers each) and establishes engineering best practices across all phases of the software development life cycle. The incumbent will drive high-quality, scalable, secure, and cost-efficient data solutions while leveraging AI-powered engineering tools to improve productivity, automation, and code quality.
This position combines deep technical expertise with strong people leadership, stakeholder management, and strategic vision to ensure alignment between engineering execution and business objectives.
Essential Duties & Responsibilities
Provide architectural leadership and technical oversight for the AWS-based Data Lakehouse platform, ensuring scalability, security, reliability, and cost optimization.
Develop and enhance architectural design frameworks to ensure high-quality, compliant, and performant data systems aligned with business objectives.
Lead technical design reviews, architecture reviews, and code quality initiatives (Gerrit-based workflows).
Establish and enforce best practices across:
- Infrastructure as Code (Terraform)
- CI/CD pipelines
- Automated testing and quality engineering
- Data governance and security standards
- Non-Functional Requirements (NFRs): scalability, availability, resiliency, observability, and performance
Ensure effective deployment of AWS technologies including but not limited to S3, Glue, EMR, Redshift, Athena, Lambda, ECS/EKS, IAM, VPC, CloudWatch, and relevant AI services such as Bedrock and SageMaker.
Drive AI-assisted engineering practices using tools such as GitHub Copilot, Claude, ChatGPT, MCP, and other emerging technologies to improve development efficiency, automation, and documentation quality.
Implement guardrails and governance for responsible AI usage within engineering teams.
Provide leadership, vision, and strategy to ensure daily operations of development teams align with both present and long-term business goals.
Manage technically focused scrum teams across multiple locations, ensuring predictable and high-quality delivery.
Partner closely with Product, Analytics, Data Science, Security, and business stakeholders to translate requirements into scalable technical solutions.
Drive cloud cost optimization and efficiency initiatives (FinOps mindset).
Mentor, coach, and develop engineering talent; build succession plans and foster a high-performance culture.
Experience
Minimum 10 years of progressive experience in software or data engineering.
3-5+ years of experience leading engineering teams in a managerial capacity.
Proven experience designing and implementing Data Lakehouse architecture on AWS.
Experience managing multiple scrum teams (2-4 pods preferred).
Experience driving technical strategy and engineering standards across distributed teams.
Education
Bachelor's degree in computer science or a related field required.
Master's degree preferred.
Minimum Qualifications
Strong understanding of distributed systems, large-scale data platforms, and Data Lakehouse architecture.
Deep experience with AWS cloud architecture and services.
Strong hands-on experience with Infrastructure as Code (Terraform).
Experience with Gerrit or comparable enterprise code review systems.
Experience implementing CI/CD, automated testing, and DevOps best practices.
Demonstrated ability to define and enforce NFRs across enterprise systems.
Experience driving engineering productivity improvements through AI tools.
Strong stakeholder management and executive communication skills.
Skills/Knowledge
- Strategic thinker and proven leader with strong communication and collaboration skills.
- Strong technical depth in:
- Data Lakehouse technologies (e.g., Delta Lake, Iceberg, Hudi)
- ETL/ELT pipelines
- Streaming frameworks (Kafka/Kinesis)
- Data modeling and governance
- Experience with cloud architecture and DevOps practices.
- Understanding of AI concepts including LLMs, Agentic AI, and RAG.
- Experience using AI-assisted development tools such as GitHub Copilot, Claude, ChatGPT, MCP.
- Experience with AWS AI services (e.g., Bedrock, SageMaker) preferred.
- Ability to determine clear prioritization and manage trade-offs across roadmap, resources, and delivery timelines.
- Strong people management skills with demonstrated ability to mentor and grow high-performing engineering teams in a fast-paced environment.
- Strong executive presence and ability to communicate complex technical concepts to non-technical stakeholders.
