Graph Data Engineer
IDFC FIRST Bank
2-5 years
Mumbai
Posted: 10/12/2025
Job Description
Summary
We are seeking a highly motivated and experienced Data Engineer with a strong focus on Graph Databases to join our growing data team. In this role, you will be responsible for designing, building, and maintaining robust and scalable graph-based data solutions. You will work closely with data scientists, analysts, and application developers to leverage the power of graph databases for complex data analysis, knowledge representation, and real-time applications.
Roles & Responsibilities
Graph Data Modelling and Design:
- Design and implement efficient and scalable graph data models using appropriate graph database technologies (e.g., Neo4j, Amazon Neptune, TigerGraph, etc.).
- Translate business requirements into effective graph schemas and data structures.
- Optimize graph models for performance and query efficiency.
Graph Database Development and Management:
- Develop and maintain graph database instances, including installation, configuration, and performance tuning.
- Implement data ingestion and transformation pipelines to populate graph databases from various data sources.
- Develop and optimize Cypher, Gremlin, or other graph query languages for complex data retrieval and analysis.
- Implement data security and access control mechanisms for graph databases.
- Monitor and maintain the health of graph databases.
Data Integration and Pipeline Development:
- Design and implement ETL/ELT processes to integrate data from diverse sources into graph databases.
- Develop and maintain data pipelines using tools like Apache Kafka, Apache Airflow, or similar technologies.
- Ensure data quality and consistency throughout the data integration process.
Graph Analytics and Application Development:
- Collaborate with data scientists and analysts to develop graph-based analytics solutions.
- Optimize performance and troubleshoot issues in graph-based solutions.
Desired Candidate Profile
2-5 years of experience in Data Engineering or a related field, with expertise in Graph Databases (Neo4j).
Strong proficiency in Python, with hands-on experience in PySpark.
Required Skills:
Graph Databases: Neo4j, Amazon Neptune, TigerGraph (or similar)
Query Languages: Cypher, Gremlin, SPARQL
Ontology & Semantic Modelling: RDF, OWL
Cloud Platforms: AWS, Azure, or GCP
Big Data Technologies: Hadoop, Spark, Kafka
Strong understanding of data modelling, ETL pipelines, and distributed systems