Senior Data Engineer
dunnhumby
5 - 10 years
Gurugram
Posted: 06/03/2026
Job Description
dunnhumby is the global leader in Customer Data Science, empowering businesses everywhere to compete and thrive in the modern data-driven economy. We always put the Customer First.
dunnhumby (a Tesco company) is headquartered in London and employs nearly 2,500 experts in offices throughout Europe, Asia, Africa, and the Americas, working for transformative, iconic brands such as Tesco, Coca-Cola, Meijer, Procter & Gamble and Metro.
dunnhumby helps retailers and brands deliver better experiences through Customer First strategies.
Our mission: to enable businesses to grow and reimagine themselves by becoming advocates and champions for their Customers. With deep heritage and expertise in retail, one of the world's most competitive markets with a deluge of multi-dimensional data, dunnhumby today enables businesses all over the world, across industries, to be Customer First.
Retail Media is transforming how advertisers connect with consumers through personalized and targeted campaigns across retailers' digital and physical touchpoints. Retail Media Measurement plays a pivotal role in ensuring the effectiveness of these campaigns, driving value for advertisers, retailers, and consumers alike.
This role focuses on designing, building, and scaling solutions that enable the accurate measurement of retail media campaigns across various channels. By providing actionable insights, it empowers stakeholders to optimize media investments, improve ROI, and enhance the overall customer experience.
We are seeking a talented and self-driven Senior Data Engineer to design, develop, and optimize real-time and batch data pipelines that power our retail media measurement solutions. In this role, you will work with Python, Apache Spark, and modern streaming frameworks to process and analyze data, enabling near-real-time decision-making for critical business applications in the retail media space.
Beyond traditional data engineering, you will also contribute to MLOps practices: building scalable infrastructure to support machine learning workflows, automating model deployment, monitoring performance, and ensuring reproducibility across environments. Your work will help bridge the gap between data engineering and machine learning, enabling seamless integration of predictive models into production pipelines.
You will collaborate closely with Data Scientists, Analysts, Lead Engineers, and Product Managers to deliver robust, efficient, and production-ready data solutions. As a Senior Data Engineer, you will focus on designing scalable pipelines, mentoring junior engineers, and championing best practices in data engineering and MLOps.
Your contributions will ensure the reliability, scalability, and performance of our data and ML infrastructure, driving actionable insights and measurable impact for the business. This role offers an excellent opportunity to deepen your expertise in modern data engineering and MLOps practices while working with cutting-edge technologies in a fast-evolving industry.
What We Expect from you:
Experience:
- 6-9 years of expertise in data engineering, with a proven track record of designing and optimizing scalable solutions.
Technical Expertise:
- Strong expertise in big data technologies such as SQL, PySpark, and Hive.
- Experience with workflow orchestrators such as Argo Workflows or Airflow.
- Hands-on experience with cloud-based data stores like Redshift or BigQuery (preferred).
- Proficiency with cloud platforms, preferably GCP or Azure.
Development Practices:
- Strong programming skills in Python, with experience in frameworks like FastAPI or similar API frameworks.
- Proficiency in unit testing and ensuring code quality.
- Hands-on experience with version control tools like Git.
- Hands-on experience ensuring reliability of production-grade big data pipelines through robust logging, monitoring, and alerting.
Optimization & Problem Solving:
- Ability to analyze complex data pipelines, identify performance bottlenecks, and suggest optimization strategies.
- Ability to work collaboratively with infrastructure teams to ensure a robust and scalable platform for data science workflows.
Collaboration & Communication:
- Excellent problem-solving skills and the ability to work effectively in a team environment.
- Proven mentoring and communication skills, fostering collaboration across teams and effectively sharing technical expertise.
Nice To Have:
- Experience with microservices architecture, containerization using Docker, and orchestration tools like Kubernetes.
- Exposure to MLOps practices or machine learning workflows using Spark.
- Working knowledge of machine learning workflows, including feature engineering, model training, deployment, and monitoring.
- Good working knowledge of NoSQL databases such as MongoDB, Cassandra, or DynamoDB.
This role is ideal for someone eager to grow their expertise in modern data engineering practices while contributing to impactful projects in a collaborative environment.