
Data Engineer

Tata Consultancy Services

2 - 5 years

Bengaluru

Posted: 21/03/2026


Job Description

Greetings from TCS!


Job Title: IBM Cloud Pak for Data Engineer (Azure / OpenShift / Kubernetes)

Required Skillset: IBM, Cloud Pak, Azure, Kubernetes, OpenShift

Location: PAN INDIA

Experience Range: 4+ years



Job Description:

MUST HAVE:

- Hands-on experience with CP4D versions, including installation, configuration, upgrading components, and managing clusters, services, and user access controls in both on-premises and cloud environments
- Solid understanding of container platforms, specifically Kubernetes and Red Hat OpenShift, as CP4D runs natively on this architecture
- Expertise in designing and implementing scalable ETL/ELT pipelines using tools available within CP4D and open-source frameworks
- Understanding of data security principles, PII data security, data lineage, and tools like IBM OpenPages for governance and compliance
- Ability to troubleshoot complex data issues, system bottlenecks, and performance problems, often requiring analytical and critical-thinking skills
- Ability to understand client needs and deliver solutions that provide tangible business value and strategic insight


Responsibility of / Expectations from the Role:

- Installing, configuring, and upgrading CP4D components (such as DataStage, Watson Knowledge Catalog, and Watson Machine Learning) in cloud (AWS, Azure, IBM Cloud) or on-premises environments, often leveraging Red Hat OpenShift and Kubernetes for container orchestration
- Designing and building robust Extract, Transform, Load (ETL) or Extract, Load, Transform (ELT) pipelines to ingest, transform, and manage data from various disparate sources into a unified platform
- Utilizing CP4D tools like Data Virtualization to break down data silos and integrate data from across the enterprise without physical movement, ensuring a single source of truth for all users
- Supporting data scientists and analysts by preparing data for model building, deploying and managing machine learning models, and enabling visualization and reporting
- Implementing and enforcing data governance policies, lineage tracking, and metadata management using tools like Watson Knowledge Catalog to ensure compliance with data privacy regulations and security standards
- Monitoring the performance, scalability, and reliability of CP4D services and the underlying infrastructure, identifying bottlenecks, and troubleshooting issues related to the platform or data pipelines
- Developing software build and automation scripts using languages and tools such as Python, Bash, Ansible, or Terraform to streamline deployment and operational tasks in a DevOps/CI/CD environment
- Working within agile, cross-functional teams to understand requirements, propose solutions, and create comprehensive technical documentation and standard operating procedures (SOPs)


Good to have:

- Strong hands-on experience with IBM Cloud Pak for Data and its integrated services
- Solid understanding of Kubernetes and Red Hat OpenShift environments
- Experience with cloud platforms (AWS, Azure, IBM Cloud) and related services
- Familiarity with big data technologies like Apache Spark or Kafka is a plus
- Strong problem-solving, analytical, and communication skills




Thanks & Regards,
Ria Aarthi A.
