DevOps Engineer III [T500-22471]

McDonald's Global Office in India

6 - 10 years

Hyderabad

Posted: 12/02/2026

Job Description

About McDonald's:

One of the world's largest employers, with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.


Job Title: DevOps Engineer III

Skills Required:

  • GCP, Dataproc, Dataflow, BigQuery, Pub/Sub, Dataplex, Cloud Functions, Cloud Run, GKE, DBT, Terraform, CI/CD, GitHub, GitHub Actions, JFrog, SonarQube, Airflow, Python, SQL, Vertex AI, AWS, Confluent Kafka, API Management

Experience Range: 6-10 years


Position Summary:

We are looking for an experienced DevOps Engineer to design, build, and operate scalable, reliable, and secure CI/CD and cloud infrastructure platforms on GCP. The ideal candidate should be proficient in designing, maintaining, and optimizing CI/CD pipelines, Terraform automations, third-party tool integrations, and orchestration frameworks.

Primary Responsibilities:

CI/CD & Automation:

  • Automate build, test, and deployment workflows.
  • Implement code quality checks, security scans, and artifact management.
  • Manage environment promotion and rollback strategies.
  • Integrate monitoring and observability for deployments.
  • Ensure compliance and governance in pipeline operations.


Cloud and Infrastructure Management:

  • Provision and manage infrastructure using Infrastructure as Code (IaC)
  • Manage environments (dev, test, prod)
  • Ensure scalability, availability, and resilience


Containerization & Orchestration:

  • Build and manage Docker images
  • Deploy and operate applications on Kubernetes
  • Support container security and best practices


Reliability and Operations:

  • Implement monitoring, logging and alerting
  • Participate in on-call, incident response, and RCA


Governance, Security & Compliance:

  • Embed security checks in CI/CD pipelines
  • Manage secrets, IAM, and access controls
  • Ensure compliance with enterprise security standards


Continuous Improvement:

  • Identify bottlenecks in delivery and operations
  • Improve tooling, automation, and processes
  • Adopt new technologies where they add value


Enablement & Collaboration:

  • Enable stakeholders with self-service environments and tooling.
  • Create and maintain runbooks, documentation, and operational playbooks.
  • Collaborate with stakeholder teams.


Required Qualifications:

  • 7+ years of DevOps or Platform Engineering in GCP/AWS environments.
  • Minimum 5 years of experience working in GCP environments.
  • Hands-on experience with GCP services such as Dataproc, Dataflow, Dataplex, Cloud Storage, BigQuery, Cloud Composer, Pub/Sub, and Cloud Functions.

  • Knowledge of best practices in cloud security, Infrastructure as Code (IaC), CI/CD pipelines, and engineering excellence.
  • Proficiency with third-party tool integrations such as DBT and Airflow.
  • Strong analytical and debugging skills for troubleshooting issues in distributed, high-volume environments.
  • Python scripting and SQL for diagnostics, data movement, and automation.
  • Familiarity with Confluent Kafka, API Management.
  • Experience implementing observability, error budgets, SRE principles, and reliability metrics.


Work location: Hyderabad, India

Work pattern: Full-time role.

Work mode: Hybrid.
