Cloud Data Engineer

Bhavitha Tech, CMMI Level 3 Company

3 - 5 years

Bengaluru

Posted: 12/12/2025

Job Description

We are seeking a talented and experienced Data Engineer with 3 - 5 years of experience to join our team, focusing on building and optimizing our data pipelines and architecture. The ideal candidate will be a hands-on contributor with a strong background in real-time data processing, modern data warehousing, and robust database design.


Key Responsibilities

Design, develop, and maintain scalable, high-performance ETL/ELT pipelines for both batch and real-time data processing.

Implement and manage data solutions using Apache Flink for stream processing and low-latency data ingestion.

Work extensively with Snowflake to manage and optimize our cloud data warehouse environment, focusing on cost-efficiency and query performance.

Develop robust data models and schemas within StarRocks DB and other analytical stores to support business intelligence and reporting needs.

Write clean, efficient, and well-tested code, primarily in Java, for data transformation and integration services.

Ensure data quality, integrity, and compliance across all data assets.

Collaborate with data scientists and analysts to implement and optimize models, including those related to statistical modeling and sampling.

Monitor, troubleshoot, and optimize the data platform infrastructure and tools.


Education & Experience

Experience: 3 - 5 years of professional experience in a Data Engineering, Software Engineering, or similar role.

Academic: Bachelor of Engineering or Master of Computer Applications in Computer Science, Information Technology, or a related quantitative field.


Core Technical Skills

Stream Processing: Deep expertise in Apache Flink (or similar technologies like Apache Kafka Streams/Spark Streaming).

Programming: Strong proficiency in Java. Python is a significant plus.

Data Warehousing: Hands-on experience with cloud data warehouses, specifically Snowflake.

Databases: Expertise in analytical databases such as StarRocks, and strong familiarity with traditional relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases.

Data Fundamentals: Strong understanding of Data Engineering principles, including data modeling (e.g., Dimensional Modeling, Data Vault), schema design, and data governance.


Desired Skills and Tools

Cloud Platforms: Experience with major cloud providers (AWS, Azure, or GCP) services relevant to data (e.g., S3/ADLS/GCS, EMR/Dataproc, Lambda/Cloud Functions).

Data Orchestration: Proficiency with workflow management tools like Apache Airflow or similar (e.g., Dagster, Prefect).

Big Data Ecosystem: Familiarity with the broader Apache ecosystem, particularly Apache Kafka for message queuing and Apache Spark for batch processing.

Containerization: Working knowledge of Docker and Kubernetes for deploying and managing data services.

DataOps/DevOps: Experience with CI/CD practices and tools (e.g., Git, Jenkins) applied to data pipelines.

Data Governance & Quality: Understanding of tools and methods for metadata management, lineage tracking, and automated data quality checks (e.g., Great Expectations).

Other Skillsets: Practical application of statistical modeling and sampling techniques in a data pipeline context.
