Data Engineer

The Hartford

2 - 4 years

Hyderabad

Posted: 12/02/2026

Job Description

Experience Level: 2-4 years

Key Responsibilities

  • Serve as a Data and AI Engineer responsible for implementing AI data pipelines that bring together structured, semi-structured, and unstructured data to support AI and agentic solutions.
  • Implement efficient Retrieval-Augmented Generation (RAG) architectures and integrate with enterprise data infrastructure.
  • Build and maintain scalable, robust real-time data streaming pipelines using technologies such as Apache Kafka, AWS Kinesis, Spark Streaming, or similar.
  • Develop data domains and data products for various consumption archetypes, including Reporting, Data Science, AI/ML, and Analytics.
  • Develop AI-driven systems to improve data capabilities.
  • Ensure the reliability, availability, and scalability of data pipelines and systems through effective monitoring, alerting, and incident management.
  • Collaborate closely with DevOps and infrastructure teams to ensure seamless deployment, operation, and maintenance of data systems.

Required Skills & Experience

  • Bachelor's degree in Computer Science, Artificial Intelligence, or a related field.
  • 2 years of data engineering experience, including data solutions, SQL and NoSQL, Snowflake, ETL/ELT tools, CI/CD, Big Data, cloud technologies (AWS/GCP/Azure), Python/Spark, and Data Mesh, Data Lake, or Data Fabric architectures.
  • Less than 2 years of experience will be considered with an advanced degree and applicable internship experience.
  • 1+ years of experience with cloud platforms (AWS, GCP, or Azure).
  • 1+ years of data engineering experience focused on supporting AI technologies.
  • 1+ years implementing AI data solutions.
  • 1+ years with prompt engineering techniques for large language models.
  • 1+ years implementing Retrieval-Augmented Generation (RAG) pipelines, integrating retrieval mechanisms with language models.
  • 1+ years implementing AI-driven data systems supporting agentic solutions (AWS Lambda, S3, EC2, LangChain, LangGraph).
  • 1+ years of programming experience in Python.
  • 1+ years building AI pipelines that bring together structured, semi-structured, and unstructured data.
  • 1+ years with vector databases, graph databases, NoSQL, and document databases, including design, implementation, and optimization (e.g., Amazon OpenSearch, GCP Vertex AI, Neo4j, Spanner Graph, Neptune, MongoDB, DynamoDB).
  • Strong written and verbal communication skills.
  • Able to communicate effectively with technical teams.
  • Team player who collaborates effectively across teams.
  • Strong organization and execution skills.
  • Strong interpersonal and time management skills.
  • Ability to work successfully in a lean, agile, fast-paced organization, leveraging Agile principles and ways of working.
  • Ability to translate technical topics into business solutions and strategies.

Nice to Have

  • Certifications in AI, GCP, or Snowflake.

What We Offer

  • Collaborative work environment with global teams.
  • Competitive salary and comprehensive benefits.
  • Continuous learning and professional development opportunities.