Azure Data Engineer
Innvectra Info Solutions Pvt. Ltd.
2-5 years
Hyderabad
Posted: 07/05/2026
Job Description
Company Description
Innvectra is an Offshore Product Engineering and Software Development company specializing in web and application solutions that address customer-specific needs. The company prides itself on gaining deep insights into clients' requirements and delivering tailored, high-quality solutions. With a commitment to excellence, Innvectra has facilitated technology-driven transformations, meeting stringent international quality standards. Their services are designed to empower businesses through innovation and efficiency.
Role Description
This is a full-time on-site role for an Azure Data Engineer located in Hyderabad. The Azure Data Engineer will be responsible for designing, building, and maintaining data infrastructure and pipelines. Daily tasks will include implementing Extract, Transform, and Load (ETL) processes, crafting efficient data models, managing data warehousing solutions, and performing comprehensive data analytics to support business decision-making. The role also involves collaborating closely with cross-functional teams to deliver robust data solutions that align with organizational goals.
What we need
- Proficiency in core Data Engineering concepts and practices, including data integration and processing.
- Experience in Data Modeling and building scalable data solutions using best practices.
- Familiarity with Extract, Transform, Load (ETL) tools and processes to optimize data workflows.
- Expertise in Data Warehousing and managing large-scale data storage systems.
- 6+ years in a data engineering role, with at least 2 years working on cloud-based modern data platforms (Azure preferred). Insurance or financial services industry experience preferred.
- Hands-on production experience with Microsoft Fabric: Lakehouse (Delta/Parquet), Fabric Warehouse (T-SQL), Data Pipelines, Notebooks, Dataflows Gen2, and OneLake, including Direct Lake mode configuration and monitoring.
- Advanced T-SQL for DDL authoring, stored procedures, incremental load patterns, window functions, and ad hoc debugging, with awareness of Fabric Warehouse constraints.
- Proficient in PySpark for Fabric Notebook-based ELT, Delta Lake operations (merge, upsert, Z-order, V-Order), and data quality framework implementation; a minimal merge/upsert sketch follows this list.
- Deep understanding of dimensional modelling: fact vs. dimension tables, grain definition, surrogate key management, slowly changing dimensions, and snapshot vs. transactional fact tables (an SCD Type 2 sketch follows this list).
- Exceptional problem-solving and technical communication skills.
- A relevant Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
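For illustration, here is a minimal PySpark sketch of the incremental merge/upsert pattern the PySpark bullet above refers to, as it might run in a Fabric Notebook. The table names (`staging.orders`, `lakehouse.orders`), the `order_id` key, and the `last_modified` watermark column are hypothetical; the Delta Lake `DeltaTable.merge` API itself is standard.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical watermark; in practice this would come from a control table.
WATERMARK = "2026-05-01"

# Incremental pattern: pull only rows modified since the last successful load.
updates = (spark.read.table("staging.orders")
           .where(F.col("last_modified") > F.lit(WATERMARK)))

target = DeltaTable.forName(spark, "lakehouse.orders")

# Upsert: update matching keys, insert new ones, in a single Delta MERGE.
(target.alias("t")
 .merge(updates.alias("s"), "t.order_id = s.order_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```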
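And a hedged sketch of one common SCD Type 2 approach (expire the current row, append a new version). All names are hypothetical: `gold.dim_customer`, `silver.customer_updates`, the `customer_id` business key, a single tracked attribute (`address`), and the `is_current`/`start_date`/`end_date` bookkeeping columns; surrogate key generation is assumed to happen elsewhere.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

dim = DeltaTable.forName(spark, "gold.dim_customer")    # hypothetical dimension
incoming = spark.read.table("silver.customer_updates")  # hypothetical change feed

# Step 1: expire current rows whose tracked attribute changed.
(dim.alias("d")
 .merge(incoming.alias("s"),
        "d.customer_id = s.customer_id AND d.is_current = true")
 .whenMatchedUpdate(condition="d.address <> s.address",
                    set={"is_current": "false",
                         "end_date": "current_date()"})
 .execute())

# Step 2: append new versions for keys that are new or were just expired
# (unchanged customers still have a current row, so they are filtered out).
current = spark.read.table("gold.dim_customer").where("is_current = true")
changed_or_new = (incoming.alias("s")
                  .join(current.alias("d"),
                        F.col("s.customer_id") == F.col("d.customer_id"),
                        "left")
                  .where("d.customer_id IS NULL")
                  .select("s.*"))

(changed_or_new
 .withColumn("is_current", F.lit(True))
 .withColumn("start_date", F.current_date())
 .withColumn("end_date", F.lit(None).cast("date"))
 .write.format("delta").mode("append").saveAsTable("gold.dim_customer"))
```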
