AWS PySpark Databricks Developer
Tata Consultancy Services
2 - 5 years
Patna, Vishakhapatnam
Posted: 08/01/2026
Job Description
Role - AWS PySpark Databricks Developer
Experience - 5 to 8 years
Location - Vishakhapatnam
Technical/Functional Skills
5-6 years of total experience in data engineering or big data development.
2-3 years of hands-on experience with Databricks and Apache Spark.
Proficient in AWS cloud services (S3, Glue, Lambda, EMR, Redshift, CloudWatch, IAM).
Strong programming skills in PySpark, Python, and optionally Scala.
Solid understanding of data lakes, lakehouses, and Delta Lake concepts.
Experience in SQL development and performance tuning.
Familiarity with Airflow, dbt, or similar orchestration tools is a plus.
Experience with CI/CD tools like Jenkins, GitHub Actions, or AWS CodePipeline.
Knowledge of data security, governance, and compliance frameworks.
Responsibilities
Develop and maintain scalable data pipelines using Apache Spark on Databricks.
Build end-to-end ETL/ELT pipelines on AWS using services like S3, Glue, Lambda, EMR, and Step Functions.
Collaborate with data scientists, analysts, and business stakeholders to deliver high-quality data solutions.
Design and implement data models, schemas, and Lakehouse architecture in Databricks.
Optimize and tune Spark jobs for performance and cost-efficiency.
Integrate data from multiple structured and unstructured data sources.
Monitor and manage data workflows, ensuring data quality, consistency, and security.
Follow best practices for CI/CD, code versioning (Git), and DevOps in data applications.
Write clean, reusable, well-documented code using Python / PySpark / Scala.