Python/PySpark, BigQuery with GCP
Tata Consultancy Services
2 - 5 years
Hyderabad
Posted: 08/01/2026
Job Description
Role: Python/PySpark Developer
Desired Experience Range: 5 - 7 years
Location of Requirement: Hyderabad
We are currently planning a Walk-In Interview on 20th December 2025 (Saturday) at TCS Hyderabad.
Date: 20th December 2025 (Saturday)
Venue: Hyderabad (TCS SP_NSEZ)
Must-Have:
- Strong proficiency in Python programming.
- Hands-on experience with PySpark and Apache Spark.
- Knowledge of Big Data technologies (Hadoop, Hive, Kafka, etc.).
- Experience with SQL and relational/non-relational databases.
- Familiarity with distributed computing and parallel processing.
- Understanding of data engineering best practices.
- Experience with REST APIs, JSON/XML, and data serialization.
- Exposure to cloud computing environments.
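The must-haves above mention REST APIs, JSON/XML, and data serialization. As a minimal illustration of the kind of serialization work involved, here is a stdlib-only round-trip sketch (the record fields are purely illustrative, not from the posting):

```python
import json

# Illustrative record, as might arrive from or be sent to a REST API.
record = {"id": 1, "event": "login", "ts": "2025-12-20T09:00:00Z"}

# Serialize the Python dict to a JSON string (a typical API payload).
payload = json.dumps(record, sort_keys=True)

# Deserialize it back and confirm round-trip fidelity.
restored = json.loads(payload)
assert restored == record
```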
Good-to-Have:
- 5+ years of experience in Python and PySpark development.
- Experience with data warehousing and data lakes.
- Knowledge of machine learning libraries (e.g., MLlib) is a plus.
- Strong problem-solving and debugging skills.
- Excellent communication and collaboration abilities.
Responsibilities / Expectations from the Role:
- Develop and maintain scalable data pipelines using Python and PySpark.
- Design and implement ETL (Extract, Transform, Load) processes.
- Optimize and troubleshoot existing PySpark applications for performance.
- Collaborate with cross-functional teams to understand data requirements.
- Write clean, efficient, and well-documented code.
- Conduct code reviews and participate in design discussions.
- Ensure data integrity and quality across the data lifecycle.
- Integrate with cloud platforms such as AWS, Azure, or GCP.
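The responsibilities above centre on ETL (Extract, Transform, Load) pipelines. A minimal sketch of that pattern, shown in plain Python with in-memory data for brevity; a production pipeline would use PySpark DataFrames against real sources and sinks, and all names here are illustrative:

```python
def extract():
    # Extract: pull raw rows (a hard-coded stand-in for a source table).
    return [
        {"user": "a", "amount": "10.5"},
        {"user": "b", "amount": "3.0"},
        {"user": "a", "amount": "2.5"},
    ]

def transform(rows):
    # Transform: cast string amounts to floats and aggregate per user.
    totals = {}
    for row in rows:
        totals[row["user"]] = totals.get(row["user"], 0.0) + float(row["amount"])
    return totals

def load(totals, sink):
    # Load: write the aggregated result to a target store (here, a dict).
    sink.update(totals)

warehouse = {}
load(transform(extract()), warehouse)
print(warehouse)  # {'a': 13.0, 'b': 3.0}
```

In PySpark the same shape typically becomes a `spark.read` call, a chain of DataFrame transformations (`withColumn`, `groupBy`, `agg`), and a `write` to the sink.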