Data Engineer
Aramya
2 - 5 years
Gurugram
Posted: 27/12/2025
Job Description
About Aramya
Our vision is to build some of the world's most loved fashion and lifestyle brands and enable people to express themselves.
With a fast-moving team driven by creativity, technology, and customer obsession, we're building a movement that celebrates every woman's unique journey.
We're well funded, with $12M raised from marquee investors like Accel, Z47, and industry veterans.
Our first brand, Aramya, launched in 2024, achieved 40 Cr in revenue in its very first year, powered by a proprietary supply chain, in-house manufacturing, and data-led design. Today, we're operating at a 200 Cr ARR and scaling fast.
As we expand across India, launch new stores, and roll out fresh collections weekly, we're reimagining what modern ethnic wear can look and feel like: inclusive, comfortable, stylish, and accessible.
Join us on this journey of building a house of lifestyle brands.
About the Role
We're looking for a passionate Data Engineer with a strong foundation in data engineering. The ideal candidate should have a solid understanding of D2C or e-commerce platforms and be able to work across the data stack to build high-performing, reliable data platforms.
Roles & Responsibilities
- Design, build, and maintain scalable ETL/ELT pipelines using Apache Airflow, Spark/PySpark, Databricks, and AWS EMR.
- Own and manage data lakes and data warehouses on AWS (Redshift / Snowflake / BigQuery).
- Build and maintain batch and streaming pipelines using Kafka / Kinesis.
- Optimize SQL queries, data models, and transformations for analytics, performance, and reliability.
- Ensure data quality, validation, lineage, and monitoring across pipelines.
- Implement logging, alerting, and observability for data workflows.
- Collaborate with product, analytics, and business teams to define data contracts and schemas.
- Contribute to infrastructure-as-code (Terraform / CDK) and CI/CD for data systems.
- Take end-to-end ownership of data pipelines, from ingestion to consumption.
Key Qualifications & Skills
- Candidates from Tier-1 engineering institutes (IITs / NITs / BITS or equivalent) preferred.
- 4–6 years of relevant experience in Data Engineering.
- Strong expertise in AWS, including EMR, S3, and data platform components.
- Hands-on experience with Databricks, Spark, PySpark, and large-scale data processing.
- Experience with Kafka (or equivalent streaming platforms).
- Strong programming skills in Python, PySpark, and SQL.
- Proven experience building scalable, reliable data platforms in production.
- Solid understanding of ELT/ETL pipelines, workflow orchestration, and Apache Airflow.
- A quick learner who thrives in fast-paced, ambiguous environments.
- Strong problem-solving mindset with a high sense of ownership.