Python/PySpark, BigQuery with GCP, Apache Iceberg

Tata Consultancy Services

2 - 5 years

Hyderabad

Posted: 12/12/2025

Job Description

TCS Hiring!! Virtual Drive on 12-Nov-25: Python/PySpark, BigQuery with GCP, Apache Iceberg

TCS - Hyderabad

12 PM to 1 PM

Immediate Joiners

5 to 7 years



Role: Python/PySpark, BigQuery with GCP, Apache Iceberg


Experience: 5 to 7 years


Please read the job description before applying.


NOTE: If your skills/profile match and you are interested, please reply to this email with your latest updated CV attached, along with the details below:

Name:

Contact Number:

Email ID:

Highest Qualification: (e.g., B.Tech/B.E./M.Tech/MCA/M.Sc./MS/BCA/B.Sc., etc.)

Current Organization Name:

Total IT Experience: 5 to 7 years

Location: Hyderabad

Current CTC:

Expected CTC:

Notice period: Immediate

Whether worked with TCS: Y/N


Must-Have

  • Strong proficiency in Python programming.
  • Hands-on experience with PySpark and Apache Spark (a minimal sketch follows this list).
  • Knowledge of Big Data technologies (Hadoop, Hive, Kafka, etc.).
  • Experience with SQL and relational/non-relational databases.
  • Familiarity with distributed computing and parallel processing.
  • Understanding of data engineering best practices.
  • Experience with REST APIs, JSON/XML, and data serialization.
  • Exposure to cloud computing environments.
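
For context, here is a minimal sketch of the kind of hands-on PySpark work these requirements imply. The app name, sample data, and column names are illustrative assumptions, not details from this posting.

```python
# A minimal, self-contained PySpark sketch; data and names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("skills-demo").getOrCreate()

# Build a small DataFrame in place of a real source table.
orders = spark.createDataFrame(
    [("o1", "IN", 120.0), ("o2", "US", 80.5), ("o3", "IN", 42.0)],
    ["order_id", "country", "amount"],
)

# Typical DataFrame-API work: filter, aggregate, sort.
totals = (
    orders.filter(F.col("amount") > 50)
    .groupBy("country")
    .agg(F.sum("amount").alias("total_amount"))
    .orderBy(F.desc("total_amount"))
)
totals.show()

spark.stop()
```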

Good-to-Have

  • 5+ years of experience in Python and PySpark development.
  • Experience with data warehousing and data lakes (a BigQuery read sketch follows this list).
  • Knowledge of machine learning libraries (e.g., MLlib) is a plus.
  • Strong problem-solving and debugging skills.
  • Excellent communication and collaboration abilities.
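
Since the role pairs PySpark with BigQuery on GCP, here is a hedged sketch of reading a BigQuery table from Spark. It assumes Google's spark-bigquery-connector is on the Spark classpath, and the table name is a placeholder, not a detail from this posting.

```python
# A hedged BigQuery-on-GCP read sketch; "my-project.my_dataset.orders" is a
# placeholder, and the spark-bigquery-connector must be on the classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bigquery-read-sketch").getOrCreate()

# The connector registers the "bigquery" data source format.
orders = (
    spark.read.format("bigquery")
    .option("table", "my-project.my_dataset.orders")
    .load()
)

# Ordinary DataFrame work once the table is loaded.
orders.groupBy("country").agg(F.count("*").alias("order_count")).show()

spark.stop()
```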


Responsibility of / Expectations from the Role

  • Develop and maintain scalable data pipelines using Python and PySpark.
  • Design and implement ETL (Extract, Transform, Load) processes (a hedged Iceberg sketch follows this list).
  • Optimize and troubleshoot existing PySpark applications for performance.
  • Collaborate with cross-functional teams to understand data requirements.
  • Write clean, efficient, and well-documented code.
  • Conduct code reviews and participate in design discussions.
  • Ensure data integrity and quality across the data lifecycle.
  • Integrate with cloud platforms like AWS, Azure, or GCP.
  • Implement data storage solutions and manage large-scale datasets.
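
As an illustration of the pipeline work above, here is a hedged ETL sketch that reads raw CSV, cleans it, and writes it to an Apache Iceberg table. The catalog name, warehouse path, file path, and table/column names are assumptions for illustration, and running it requires the iceberg-spark-runtime package on the Spark classpath.

```python
# A hedged ETL sketch: extract raw CSV, transform, load into Iceberg.
# All paths and table/column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("etl-sketch")
    # Iceberg needs a configured catalog; "local" here is a throwaway
    # Hadoop catalog pointing at a local warehouse directory.
    .config("spark.sql.catalog.local", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.local.type", "hadoop")
    .config("spark.sql.catalog.local.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

# Extract: read raw CSV (path is hypothetical).
raw = spark.read.option("header", True).csv("/data/raw/events.csv")

# Transform: basic deduplication, typing, and null filtering.
clean = (
    raw.dropDuplicates(["event_id"])
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
)

# Load: write the cleaned data via Spark's v2 writer API.
clean.writeTo("local.db.events").createOrReplace()

spark.stop()
```

In a production pipeline the catalog, warehouse, and source paths would come from job configuration rather than being hard-coded as they are in this sketch.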
