Data Engineer
WaferWire Cloud Technologies
2 - 5 years
Hyderabad
Posted: 26/02/2026
Job Description
Job Title: Data Engineer
Job Location: Hyderabad, India
Worksite: Onsite (100%)
About WCT:
WaferWire Technology Solutions (WCT) specializes in delivering comprehensive Cloud, Data, and AI solutions through Microsoft's technology stack. Our services include Strategic Consulting, Data/AI Estate Modernization, and Cloud Adoption Strategy. We excel in Solution Design encompassing Application, Data, and AI Modernization, as well as Infrastructure Planning and Migrations. Our Operational Readiness services ensure seamless DevOps, MLOps, AIOps, and SecOps implementation. We focus on Implementation and Deployment of modern applications, continuous Performance Optimization, and future-ready innovations in AI, ML, and security enhancements. Delivering from Redmond, WA (USA), Guadalajara (Mexico), and Hyderabad (India), our scalable solutions cater precisely to diverse business requirements across multiple time zones (US time zone alignment).
Responsibilities:
Python + Data Engineering
Strong Python for ETL/ELT: requests, pandas, pyarrow, retry/backoff, logging, config management
API ingestion patterns: OAuth/keys, pagination, delta tokens, throttling, idempotency
File ingestion: CSV/JSON/Parquet handling; schema inference vs explicit schema; large file chunking/streaming
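The API ingestion patterns above (pagination, retries, backoff) can be sketched as a small loop. This is a minimal illustration, not a prescribed implementation: `fetch_page` is a hypothetical stand-in for a real `requests.get` call, and the in-memory pages and injected failure exist only to keep the sketch runnable.

```python
"""Paginated API ingestion with retry/backoff (illustrative sketch)."""
import time

# Fake paginated API: three pages, each with records and a next-cursor.
_PAGES = {
    None: {"records": [1, 2], "next": "p2"},
    "p2": {"records": [3, 4], "next": "p3"},
    "p3": {"records": [5], "next": None},
}
_fail_once = {"p2": True}  # simulate one transient failure on page 2

def fetch_page(cursor):
    """Stand-in for requests.get(url, params={"cursor": cursor})."""
    if _fail_once.pop(cursor, False):
        raise ConnectionError("transient network error")
    return _PAGES[cursor]

def fetch_with_retry(cursor, retries=3, base_delay=0.01):
    """Exponential backoff: wait base_delay * 2**attempt between tries."""
    for attempt in range(retries):
        try:
            return fetch_page(cursor)
        except ConnectionError:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

def ingest_all():
    """Walk cursor-based pages until the API reports no next page."""
    cursor, records = None, []
    while True:
        page = fetch_with_retry(cursor)
        records.extend(page["records"])
        cursor = page["next"]
        if cursor is None:
            return records

print(ingest_all())  # → [1, 2, 3, 4, 5]
```

In production the same shape would add idempotency keys and persisted delta tokens so a rerun resumes cleanly rather than re-ingesting everything.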
Microsoft Fabric Warehouse / Data Loading
Understanding of Fabric ingestion approaches such as pipelines/copy jobs into Warehouse; ability to implement full/incremental patterns.
SQL/T-SQL for warehouse DDL/DML, loading strategies, and validation queries.
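The full/incremental loading pattern mentioned above can be sketched with a high-water-mark upsert. Here `sqlite3` stands in for the warehouse SQL endpoint purely to keep the sketch runnable; against Fabric Warehouse the same pattern would be expressed in T-SQL (e.g., MERGE or staged INSERT/UPDATE) issued through a pipeline or driver, and the table/column names are illustrative.

```python
"""Watermark-driven incremental load sketch (sqlite3 as a stand-in)."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT, modified INTEGER)"
)

def incremental_load(rows):
    """Upsert only rows newer than the current high-water mark."""
    wm = conn.execute(
        "SELECT COALESCE(MAX(modified), -1) FROM dim_customer"
    ).fetchone()[0]
    fresh = [r for r in rows if r[2] > wm]  # skip already-loaded rows
    conn.executemany(
        "INSERT INTO dim_customer (id, name, modified) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name = excluded.name, "
        "modified = excluded.modified",
        fresh,
    )
    return len(fresh)

incremental_load([(1, "Ana", 10), (2, "Bo", 10)])       # initial full load
n = incremental_load([(1, "Ana", 10), (2, "Bob", 20)])  # only the changed row
print(n)  # → 1
```

A validation query (row counts per load, max watermark) after each run is the natural companion to this pattern.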
Data Warehouse Architecture (Scalable)
Dimensional modeling (facts/dimensions), star schema, SCD handling
Data lifecycle layers: raw/staging/curated; metadata-driven pipelines
Scalability concepts: parallelism, partitioning strategy, incremental loads, late-arriving data, CDC design
Understanding of modern DW engine traits relevant to Fabric Warehouse (e.g., compute/storage separation, optimization for open formats).
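The SCD handling listed above can be illustrated with a minimal Type 2 update over an in-memory dimension table. The column names (`key`, `attr`, `valid_from`, `valid_to`, `is_current`) are illustrative assumptions, not a prescribed schema.

```python
"""Minimal SCD Type 2 sketch: close the old version, insert the new one."""
HIGH_DATE = "9999-12-31"  # conventional open-ended validity marker

def scd2_apply(dim, key, attr, as_of):
    """If `attr` changed, expire the current row and append a new version."""
    current = next((r for r in dim if r["key"] == key and r["is_current"]), None)
    if current and current["attr"] == attr:
        return dim  # no change: nothing to do
    if current:
        current["is_current"] = False
        current["valid_to"] = as_of
    dim.append({"key": key, "attr": attr, "valid_from": as_of,
                "valid_to": HIGH_DATE, "is_current": True})
    return dim

dim = []
scd2_apply(dim, 1, "Gold", "2024-01-01")
scd2_apply(dim, 1, "Platinum", "2024-06-01")
print(len(dim), dim[0]["valid_to"])  # → 2 2024-06-01
```

The same logic in a warehouse would typically be a MERGE keyed on the natural key plus the current-row flag, with late-arriving data handled by comparing effective dates rather than load order.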
Observability + Reliability
Monitoring: job metrics, failure handling, alerting
Data quality checks: row counts, null checks, referential integrity, schema drift detection
CI/CD basics and version control best practices (often expected in Fabric/Azure data engineer roles).
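The data quality checks above (row counts, null checks, schema drift) can be sketched as a small post-load validation pass. The expected schema and threshold here are illustrative assumptions.

```python
"""Lightweight post-load data quality checks (illustrative sketch)."""
EXPECTED_SCHEMA = {"id": int, "amount": float}  # assumed target schema

def quality_checks(rows, min_rows=1):
    """Return a list of human-readable check failures (empty = all pass)."""
    failures = []
    if len(rows) < min_rows:  # row-count check
        failures.append(f"row count {len(rows)} below minimum {min_rows}")
    for i, row in enumerate(rows):
        if set(row) != set(EXPECTED_SCHEMA):  # schema drift check
            failures.append(f"row {i}: unexpected columns {sorted(row)}")
            continue
        for col, typ in EXPECTED_SCHEMA.items():
            if row[col] is None:  # null check
                failures.append(f"row {i}: null in {col}")
            elif not isinstance(row[col], typ):  # type check
                failures.append(f"row {i}: {col} is not {typ.__name__}")
    return failures

ok = [{"id": 1, "amount": 9.5}]
bad = [{"id": None, "amount": 9.5}, {"id": 2, "extra": True}]
print(quality_checks(ok))        # → []
print(len(quality_checks(bad)))  # → 2
```

Wired into a pipeline, a non-empty failure list would raise an alert rather than silently publishing the load; referential-integrity checks follow the same shape with a lookup against the parent table.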
Required Qualifications:
3-5 years building data pipelines and data warehouse solutions (common baseline in Azure/Fabric data engineering JDs).
Strong SQL + Python, plus hands-on delivery on a modern analytics platform.
Good to Have (Optional):
Familiarity with broader Fabric ecosystem (Lakehouse, semantic models, governance patterns)
DP-600 (Fabric) / Azure data certifications (commonly preferred in similar JDs)
Spark/PySpark experience for large-scale transformations (useful where ingestion requires heavy processing)
Equal Employment Opportunity Declaration:
WCT is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic
information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances.