Azure Data Engineer with Databricks
Tata Consultancy Services
2-5 years
Bengaluru
Posted: 01/01/2026
Job Description
Dear Candidate,
Greetings from Tata Consultancy Services!
Thank you for expressing your interest in exploring a career opportunity with the TCS family.
Hiring for: Azure Data Engineer with Databricks
Location: Bangalore, Chennai, Delhi, Mumbai, Pune, Hyderabad
Experience: 5+ years
Roles & Responsibilities
Expertise:
- Good knowledge of the Databricks Lakehouse and Azure Data Lake concepts
- Knowledge of Databricks Delta concepts, including Delta Live Tables (DLT)
- Strong hands-on experience in ELT pipeline development using Azure Data Factory, Databricks Auto Loader, notebook scripting, and Azure Synapse Copy activities and Data Flow tasks (a brief sketch follows this list)
- Strong knowledge of metadata-driven data pipelines, metadata management, and dynamic pipeline logic
- In-depth knowledge of data storage solutions, including Azure Data Lake Storage (ADLS) and Azure Synapse serverless SQL pools.
- Experience with data transformation using Spark and SQL.
- Solid understanding of cloud design patterns and best practices.
- Experience with code management and version control using Git or similar tools.
- Strong problem-solving and debugging skills in ETL workflows and data pipelines.
- Strong understanding of Azure Databricks and Azure Synapse internal features and capabilities.
- Knowledge of Azure DevOps and continuous integration and deployment (CI/CD) processes.
- Knowledge of data quality and data profiling techniques, with experience in data validation and data cleansing.
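For candidates less familiar with these tools, the following is a minimal, hypothetical sketch of a Delta Live Tables notebook that ingests files incrementally with Databricks Auto Loader. The storage path and table name are invented placeholders, not part of this role's actual codebase.

# Minimal DLT + Auto Loader sketch (hypothetical paths and names).
# Runs inside a Databricks Delta Live Tables pipeline, where the
# `spark` session and the `dlt` module are provided by the runtime.
import dlt

@dlt.table(comment="Raw orders ingested incrementally with Auto Loader")
def bronze_orders():
    return (
        spark.readStream.format("cloudFiles")          # Auto Loader source
        .option("cloudFiles.format", "json")           # incoming file format
        .load("abfss://landing@account.dfs.core.windows.net/orders/")
    )

In a DLT pipeline, Databricks manages the checkpointing and the target Delta table for this definition; outside DLT, the same Auto Loader read would be paired with a writeStream and an explicit checkpoint location.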
Hands-on Duties:
- Conducting technical sessions, design reviews, code reviews, and demos of pipelines and their functionality
- Developing technical specifications for data pipelines and workflows, and obtaining sign-off from the Architect and Stryker leads.
- Developing, deploying, and maintaining workflows and data pipelines using Azure Databricks and Azure Synapse.
- Developing pipelines/notebooks for Delta Live Tables (DLT), Databricks Auto Loader, notebook scripting, and Azure Synapse Copy activities and Data Flow tasks.
- Collaborating with data architects, data analysts, and other stakeholders to design and implement ETL solutions that meet business requirements.
- Writing efficient, high-performing ETL code using PySpark and SQL (see the sketch after this list).
- Building and testing data pipelines using Azure Databricks and Azure Synapse.
- Ensuring the accuracy, completeness, and timeliness of data being processed and integrated.
- Troubleshooting and resolving issues related to data pipelines and notebooks.
- Benchmarking the performance of data ingestion and Data Flow pipelines/notebooks and ensuring consistent results
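As an illustration of the PySpark duties above, here is a minimal, hypothetical cleansing and validation step of the kind such pipelines contain. The table names and columns are invented for the example.

# Hypothetical PySpark cleansing/validation step (bronze -> silver).
# Assumes a Databricks notebook where the `spark` session is provided.
from pyspark.sql import functions as F

orders = spark.read.table("bronze.orders")

silver = (
    orders
    .filter(F.col("order_id").isNotNull())               # basic validation
    .withColumn("order_ts", F.to_timestamp("order_ts"))  # normalize types
    .dropDuplicates(["order_id"])                        # deduplicate on key
)

silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")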