Data Engineer (Pyspark)_ Investment Banking
Atyeti Inc
2 - 5 years
Bengaluru
Posted: 10/12/2025
Job Description
Background:
This position is responsible for the design, build, and maintenance of data pipelines running on Airflow and Spark on the AWS cloud platform at the bank.
Roles and Responsibilities:
- Build and maintain all facets of data pipelines for the Data Engineering team.
- Build the pipelines required for optimal extraction, transformation, and loading of data from a wide variety of data sources using Spark, Python, and Airflow.
- Work with internal and external stakeholders to assist with data-related technical issues and data quality issues.
- Engage in proofs of concept, technical demos, and interaction with customers and other technical teams.
- Participate in agile ceremonies.
- Solve complex data-driven scenarios and triage defects and production issues.
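In practice, the pipeline work described above usually takes the shape of an Airflow DAG that orchestrates PySpark jobs. A minimal sketch is below; the DAG id, script path, and task name are hypothetical, and it assumes Airflow 2.x with the Apache Spark provider installed:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

# Hypothetical daily ETL DAG: submit a PySpark transform job once per day.
with DAG(
    dag_id="daily_trades_etl",          # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                  # "schedule_interval" on older Airflow versions
    catchup=False,
) as dag:
    transform = SparkSubmitOperator(
        task_id="transform_trades",
        application="jobs/transform_trades.py",  # hypothetical PySpark script
        conn_id="spark_default",
        application_args=["--run-date", "{{ ds }}"],  # templated run date
    )
```

This is a DAG definition (scheduler configuration) rather than executable logic; the actual extract/transform/load code would live in the submitted PySpark script.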
Technical Skills
- Must Have Skills:
- Proficient with Python, PySpark and Airflow
- Strong understanding of the Object-Oriented and Functional Programming paradigms
- Must have experience working with Spark and its architecture
- Knowledge of Software Engineering best practices
- Advanced SQL knowledge (preferably Oracle)
- Experience in processing large amounts of structured and unstructured data, including integrating data from multiple sources.
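As an illustration of the kind of advanced SQL the role calls for (windowed ranking over grouped data), here is a self-contained sketch using SQLite from the Python standard library; the table and column names are hypothetical, and essentially the same query runs on Oracle or Spark SQL:

```python
import sqlite3

# Hypothetical trades table: pick the latest trade per instrument with a window function.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE trades (instrument TEXT, trade_ts TEXT, price REAL);
    INSERT INTO trades VALUES
        ('AAPL', '2025-01-01', 100.0),
        ('AAPL', '2025-01-02', 101.5),
        ('MSFT', '2025-01-01', 250.0);
""")
rows = conn.execute("""
    SELECT instrument, trade_ts, price
    FROM (
        SELECT instrument, trade_ts, price,
               ROW_NUMBER() OVER (
                   PARTITION BY instrument ORDER BY trade_ts DESC
               ) AS rn
        FROM trades
    )
    WHERE rn = 1
    ORDER BY instrument
""").fetchall()
print(rows)  # [('AAPL', '2025-01-02', 101.5), ('MSFT', '2025-01-01', 250.0)]
```

The subquery numbers each instrument's trades newest-first, and the outer query keeps only the first row per partition.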
- Good to Have Skills:
- Knowledge of Data related AWS Services
- Knowledge of GitHub and Jenkins
- Experience with automated testing
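The automated-testing skill typically means unit tests around pure transformation logic, kept separate from Spark or Airflow so they run fast. A minimal sketch (the function and normalization rule are hypothetical) using plain assertions that would drop unchanged into a pytest test function:

```python
# Hypothetical transform: normalize a raw record before loading.
def normalize_record(record: dict) -> dict:
    """Trim and upper-case the account field; coerce amount to float."""
    return {
        "account": record["account"].strip().upper(),
        "amount": float(record["amount"]),
    }

# Plain-assert unit tests for the transform.
assert normalize_record({"account": " abc ", "amount": "10.5"}) == {
    "account": "ABC",
    "amount": 10.5,
}
assert normalize_record({"account": "XYZ", "amount": 7}) == {
    "account": "XYZ",
    "amount": 7.0,
}
```

Keeping transforms as pure functions like this makes them testable without spinning up a Spark session.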