Data Engineer
idigilogic
1 - 3 years
Bengaluru
Posted: 14/03/2026
Job Description
Location: Bangalore (or as applicable)
Team: Data Platform & Engineering
Overview
We are looking for a motivated SDE-I to join our Data Platform team. In this role, you will be hands-on
with code and data, building reliable components for our petabyte-scale infrastructure. You will work
closely with senior engineers to develop high-performance pipelines, write complex data transformations,
and troubleshoot production issues to ensure data accuracy and availability.
Key Responsibilities
Core Development & Coding
Pipeline Development: Write clean, maintainable, and efficient code in Scala or Java to build
and extend ETL/ELT pipelines using Apache Spark.
Component Implementation: Implement specific features and components for the data
platform, adhering to coding standards and engineering best practices.
Unit Testing: Write comprehensive unit and integration tests to ensure code robustness and
prevent regressions.
SQL & Data Analytics
Complex Querying: Write and optimize complex SQL queries to transform, analyze, and
extract insights from large datasets in BigQuery or Spark SQL.
Data Analysis: Perform ad-hoc data analysis to validate hypotheses, verify data integrity, and
support business requirements.
Logic Translation: Translate business logic and analytical requirements into efficient technical
implementations and SQL transformations.
Troubleshooting & System Reliability
Root Cause Analysis: Actively troubleshoot pipeline failures, data discrepancies, and latency
issues to identify root causes and implement fixes.
Performance Debugging: Analyze application logs and metrics to debug performance
bottlenecks in Spark jobs and SQL queries.
Operational Support: Monitor data pipelines and alert systems to ensure high availability and
timely data delivery.
Required Qualifications
Experience: 1-3 years of relevant industry experience in software development or data
engineering.
Technical Skills:
Strong Coding: Proficiency in at least one programming language (Scala, Java, or
Python) with a good grasp of data structures and algorithms.
Strong SQL: Expert-level SQL skills (Window functions, Joins, CTEs) with the ability to
write and optimize queries for large datasets.
Big Data Basics: Familiarity with distributed computing frameworks like Apache Spark or
Hadoop.
Problem Solving: Strong analytical and troubleshooting skills; ability to dive deep into data and code to find issues.
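As an illustration of the SQL skills listed above (CTEs and window functions over large datasets), here is a minimal sketch using Python's built-in sqlite3 as a lightweight stand-in for BigQuery or Spark SQL; the `events` table and its columns are hypothetical and chosen only for the example.

```python
import sqlite3

# In-memory SQLite database standing in for a warehouse table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (user_id INTEGER, event_day TEXT, amount REAL);
INSERT INTO events VALUES
  (1, '2024-01-01', 10.0),
  (1, '2024-01-02', 20.0),
  (2, '2024-01-01', 5.0),
  (2, '2024-01-03', 15.0);
""")

# A CTE aggregates per user per day, then a window function
# computes a running total partitioned by user.
query = """
WITH daily AS (
  SELECT user_id, event_day, SUM(amount) AS day_total
  FROM events
  GROUP BY user_id, event_day
)
SELECT user_id,
       event_day,
       SUM(day_total) OVER (
         PARTITION BY user_id
         ORDER BY event_day
       ) AS running_total
FROM daily
ORDER BY user_id, event_day;
"""
rows = conn.execute(query).fetchall()
for row in rows:
    print(row)
```

The same CTE-plus-window pattern translates directly to Spark SQL or BigQuery syntax; only the table source and dialect details change.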
Education Qualification
Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a
related quantitative field.
Preferred Qualifications
Analytics Background: Experience working with data visualization tools or analyzing raw data
to derive insights.
Machine Learning: Basic understanding of ML concepts or experience supporting ML
pipelines.
Familiarity with Cloud platforms (GCP/AWS) and containerization (Docker/Kubernetes).
Why Join Us?
Hands-on mentorship from industry experts on petabyte-scale systems.
Deep dive into modern data stack technologies like Iceberg and Spark.
A culture that values code quality, engineering excellence, and ownership.