Big Data Developer
BSH Home Appliances India
2 - 5 years
Bengaluru
Posted: 12/02/2026
Job Description
Overview
We are looking for a Senior Big Data Analyst with 8-12 years of experience to design and operate modern, cloud-native data pipelines on AWS. The role involves building scalable ETL workflows, developing high-performance Spark jobs, automating infrastructure with Terraform, and ensuring strong data quality, security, and monitoring. You'll collaborate with global, cross-functional teams, support production systems, optimize cloud costs, and drive engineering best practices in an agile environment.
Key Responsibilities
- Design, build, test, deploy, and maintain scalable, cloud-native ETL and data pipelines.
- Translate business and ML requirements into technical designs, data models, and reliable datasets with defined SLAs.
- Work hands-on with AWS services including S3, EMR, Glue, Lambda, Kafka, Step Functions, Athena, and Redshift.
- Develop and optimize Spark jobs using Scala or PySpark for batch and streaming workloads.
- Automate cloud infrastructure using Terraform or Ansible.
- Build CI/CD pipelines with automated testing, linting, and code/security scanning using SonarQube.
- Write clean, testable Python and SQL code with unit, integration, and end-to-end tests.
- Implement IAM roles, encryption, data masking, governance frameworks, and compliance controls.
- Set up monitoring using logs, metrics, and tracing; troubleshoot issues and perform RCA.
- Participate in on-call rotations and ensure the reliability of production pipelines.
- Collaborate with analysts, data scientists, product owners, and international teams using Jira and Confluence.
- Prepare architecture diagrams, runbooks, onboarding documents, and post-incident reports.
- Mentor team members and conduct code reviews.
- Drive cloud cost optimization through partitioning strategies, architecture improvements, and workflow tuning.
- Manage version control, releases, rollbacks, and safe deployments across environments.
Required Skills & Experience
- 8-12 years of experience in Data Engineering, Big Data, or Data Warehousing.
- Proven expertise in designing and deploying cloud-native data pipelines.
- Deep hands-on experience with AWS: EMR, Glue, Lambda, S3, Kafka, Step Functions, Athena, Redshift.
- Strong Spark programming skills in Scala or PySpark.
- Strong proficiency in Python and SQL.
- Experience with Terraform (or Ansible) for cloud automation.
- Hands-on with Docker; exposure to Kubernetes or Nomad is a plus.
- Experience integrating SonarQube or similar tools into CI/CD pipelines.
- Strong understanding of IAM, encryption, data protection, and cloud security practices.
- Experience working in Agile environments using Jira, Confluence, GitHub.
- Experience in international/global teams is an added advantage.
- Strong software engineering fundamentals, documentation skills, and ability to mentor peers.