Skills: Java, Spark, Kafka
Experience: 10 to 16 years
Location: Hyderabad
· Support the design and rollout of the data architecture and infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
· Identify data sources, design and implement data schemas/models, and integrate data to meet the requirements of business stakeholders
· Play an active role in the end-to-end delivery of AI solutions, from ideation and feasibility assessment to data preparation and industrialization.
· Work with business, IT, and data stakeholders to resolve data-related technical issues, support their data infrastructure needs, and build a flexible, scalable data platform.
· With a strong focus on DataOps, design, develop and deploy scalable batch and/or real-time data pipelines.
· Design, document, test, and deploy ETL/ELT processes.
· Find the right tradeoffs among the performance, reliability, scalability, and cost of the data pipelines you implement.
· Monitor data processing efficiency and propose solutions for improvements.
· Have the discipline to create and maintain comprehensive project documentation.
· Build and share knowledge with colleagues and mentor junior team members.
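To make the ETL/ELT responsibilities above concrete, here is a minimal, self-contained Java sketch of the extract-transform-load pattern on in-memory data. All class, method, and field names (e.g. `EtlSketch`, the id/name/amount record layout) are purely illustrative assumptions, not part of this role description; a production pipeline would use Spark and Kafka rather than plain collections.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Minimal ETL sketch: extract raw CSV-style records, transform (parse and
// filter), and load into an in-memory store. Names are hypothetical.
public class EtlSketch {

    // Extract: raw lines as they might arrive from a file or Kafka topic.
    static List<String> extract() {
        return List.of("1,alice,42", "2,bob,-5", "3,carol,17");
    }

    // Transform: parse each record and drop rows with negative amounts.
    static Map<Integer, Integer> transform(List<String> raw) {
        return raw.stream()
                .map(line -> line.split(","))
                .filter(f -> Integer.parseInt(f[2]) >= 0)
                .collect(Collectors.toMap(
                        f -> Integer.parseInt(f[0]),   // key: record id
                        f -> Integer.parseInt(f[2]))); // value: amount
    }

    public static void main(String[] args) {
        // Load: persist the cleaned records (here, just an in-memory map).
        Map<Integer, Integer> store = transform(extract());
        System.out.println(store.size()); // two valid records remain
    }
}
```

In a real deployment, the extract step would read from a Kafka topic or data lake, and the transform/load steps would run as a Spark batch or Structured Streaming job; the separation of the three stages shown here is what makes such pipelines testable and maintainable.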