Senior Data Engineer

Varahe Analytics Pvt Ltd

5 - 10 years

Noida

Posted: 04/04/2026

Job Description - Senior Data Engineer

Department - Analytics & Engineering

Team - Quantitative Analytics

Location - Noida

Experience - 4+ years


About Varahe Analytics

Varahe Analytics is India's leading integrated political consulting firm, delivering data-driven, end-to-end election management solutions. We combine advanced analytics, strategic advisory, and deep on-ground intelligence to design and execute 360-degree electoral strategies. Our work spans research and insights, field outreach, media and communications, and campaign operations - built to influence narratives, improve decision-making, and drive measurable outcomes. Backed by a high-calibre, multidisciplinary team from premier institutions, we partner closely with political leadership to deliver high-impact programmes at scale.


Role Overview

As a Senior Data Engineer (L1) at Varahe Analytics, you will build and own reliable, scalable data workflows that power analysis, modelling, reporting, and leadership decision-making. You will integrate data from multiple sources (field, survey, electoral, digital, and internal systems), standardize it into trusted datasets, and ensure teams operate on consistent metric definitions and a clear single source of truth.

This is a hands-on role with strong ownership: you will ensure day-to-day pipeline health, implement structured reliability improvements, and set the bar on documentation, review practices, traceability, and governance - especially important given the sensitivity of political and citizen-level data. You will also mentor junior engineers and drive operational discipline as the data function scales.


What You'll Do

  • Build production-grade data pipelines that support polling, constituency analytics, modelling, dashboards, and reporting.
  • Integrate and harmonize data from multiple sources, ensuring clean joins and consistent identifiers.
  • Create trusted datasets and curated marts that teams can use repeatedly without rework - reducing manual effort and improving turnaround time.
  • Strengthen governance: access controls, secure sharing, auditability, and compliance-aligned handling of sensitive data.
  • Raise engineering standards: code reviews, testing discipline, documentation, runbooks, and reproducible workflows.
  • Mentor junior engineers through pairing, design reviews, and structured coaching.
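The "clean joins and consistent identifiers" point above can be sketched minimally. This is a hypothetical illustration: the `ac_code` field, the sample records, and the zero-padding convention are assumptions for the sketch, not Varahe's actual schema.

```python
# Hypothetical sketch: harmonizing constituency codes across two sources
# before joining, so both sides agree on a single join key.
# Field names and sample records are illustrative only.

def normalize_id(raw: str) -> str:
    """Uppercase, strip whitespace, and zero-pad a constituency code."""
    code = raw.strip().upper().replace(" ", "")
    return code.zfill(4)

field_data = [
    {"ac_code": "12", "turnout_pct": 61.4},
    {"ac_code": " 0007 ", "turnout_pct": 58.9},
]
survey_data = [
    {"ac_code": "0012", "support_pct": 42.0},
    {"ac_code": "7", "support_pct": 47.5},
]

# Index one source by the normalized key, then join the other against it.
survey_by_id = {normalize_id(r["ac_code"]): r for r in survey_data}
joined = [
    {**f, **survey_by_id[normalize_id(f["ac_code"])]}
    for f in field_data
    if normalize_id(f["ac_code"]) in survey_by_id
]
```

Normalizing at the boundary, before any join, is what keeps "12", " 0007 ", and "7" from silently producing dropped rows downstream.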


Key Responsibilities

Data Platform & Architecture

  • Architect, build, and operate scalable, fault-tolerant data pipelines for structured and unstructured data.
  • Drive operational excellence: SLAs, monitoring, incident response, and pipeline performance tuning.
  • Design and maintain data lakes/warehouses and ELT/ETL frameworks across cloud and hybrid setups.
  • Own and evolve AWS-based data architecture, designing secure, scalable pipelines and storage/compute patterns, optimizing cost and performance, and ensuring reliable deployments through monitoring, alerting, and infrastructure best practices.

Governance, Security & Quality

  • Establish best practices for data lineage, access controls, secure handling, documentation, and observability.
  • Implement automated validation checks, anomaly detection, and runbooks; conduct RCA for recurring issues.
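The automated-validation bullet above can be illustrated with a minimal sketch: row-level checks plus a simple volume-anomaly flag. The field names, thresholds, and the 3-sigma rule are assumptions for illustration, not a prescribed design.

```python
# Illustrative sketch: automated validation checks and a basic anomaly
# flag on daily row counts (deviation beyond z standard deviations).
import statistics

def validate_rows(rows):
    """Return human-readable validation failures for a batch of rows."""
    failures = []
    for i, row in enumerate(rows):
        if row.get("booth_id") is None:
            failures.append(f"row {i}: missing booth_id")
        if not (0 <= row.get("turnout_pct", -1) <= 100):
            failures.append(f"row {i}: turnout_pct out of range")
    return failures

def is_volume_anomaly(history, today, z=3.0):
    """Flag today's row count if it deviates > z sigmas from history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(today - mean) > z * stdev

sample = [
    {"booth_id": "B001", "turnout_pct": 64.2},
    {"booth_id": None, "turnout_pct": 58.0},
    {"booth_id": "B003", "turnout_pct": 142.0},
]
batch_failures = validate_rows(sample)   # two failures in this batch
```

In practice checks like these run inside the orchestrator after each load, with failures routed to alerting and the runbook documenting the RCA path for recurring ones.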

Stakeholder Partnership

  • Partner closely with analysts, application engineers, and strategy/campaign teams to translate business needs into robust datasets and metrics.
  • Ensure consistent metric definitions and prevent multiple versions of the truth.


Must-Have Qualifications

  • 5+ years of hands-on Data Engineering experience with a proven track record of building and operating production systems.
  • Hands-on experience designing and operating scalable, secure data architectures on AWS, including core services such as S3, IAM, Glue, EMR/EKS/ECS, Lambda, Redshift, CloudWatch, with a strong understanding of VPC networking, security best practices, cost optimization, fault tolerance, and high availability.
  • Advanced proficiency in Python and SQL.
  • Strong experience with big-data/distributed systems such as Airflow, Hive, Presto/Trino (or equivalents).
  • Hands-on expertise with lake/warehouse architectures: Delta Lake, Snowflake, BigQuery, Redshift (or similar).
  • Strong understanding of ETL/ELT, data modelling (star/snowflake schemas), orchestration patterns, and engineering best practices.
  • Exposure to streaming/real-time systems: Kafka/Kinesis/PubSub, Spark Structured Streaming, Flink, etc.
  • Experience with CI/CD for data systems, containerization, and infrastructure-as-code (e.g., Docker, Terraform).
  • Strong ownership mindset - ability to run systems end-to-end and improve reliability over time.


Good-to-Have Qualifications

  • Familiarity with PII/consent-sensitive data handling, access governance, and audit logging practices.
  • Working knowledge of GIS concepts (admin boundaries, ward/booth mapping, spatial joins) and tools (PostGIS/QGIS).
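The spatial-join concept in the GIS bullet can be sketched in miniature: assigning booth coordinates to ward polygons via ray-casting point-in-polygon, which is conceptually what a PostGIS `ST_Contains` join does at scale. The ward shapes and booth points below are entirely made up for the sketch.

```python
# Minimal spatial-join sketch: map booth coordinates to wards using
# the ray-casting point-in-polygon test. Coordinates are illustrative.

def point_in_polygon(x, y, polygon):
    """Ray casting: count edge crossings of a horizontal ray from (x, y)."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

wards = {
    "WARD-1": [(0, 0), (10, 0), (10, 10), (0, 10)],
    "WARD-2": [(10, 0), (20, 0), (20, 10), (10, 10)],
}
booths = {"B101": (3, 4), "B102": (15, 5)}

booth_to_ward = {
    booth: next((w for w, poly in wards.items()
                 if point_in_polygon(px, py, poly)), None)
    for booth, (px, py) in booths.items()
}
```

Real workloads would hand this to PostGIS or a similar engine with spatial indexes; the sketch only shows the underlying containment test.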


What Success Looks Like (First 180 Days)

  • Junior engineers are enabled through documentation, templates, and review discipline.
  • The existing codebase has been reviewed, with improvements proposed and prioritized where necessary.
  • Critical pipelines run reliably with clear SLAs, monitoring, and runbooks in place.
  • Key datasets are standardized with stable identifiers and consistent metric definitions.
  • Data quality issues decline measurably through automated checks and structured RCA.
  • Analysts and stakeholders experience faster turnaround times and fewer reconciliation loops.
