
Scala Developer

LTIMindtree

2 - 5 years

Pune

Posted: 12/02/2026


Job Description

Position: Scala Data Engineer

Experience: 4 to 12 years

Location: Pune/Mumbai/Chennai/Bangalore/Hyderabad

Notice Period: Immediate to 30 days

Skills: Scala + Spark


Hands-on technical lead responsible for designing, developing, optimizing, and stabilizing core Scala/Spark data pipelines while mentoring junior engineers and ensuring delivery quality.


Core Responsibilities (Scala + Spark Delivery, Hands-On Ownership)


Design and implement Scala + Spark pipelines using Dataset/DataFrame APIs with strong emphasis on typed, performant, and modular code.

Translate functional requirements into efficient transformations, ingestion logic, and data models using best-practice Scala design patterns.

Build reusable libraries/utilities for data parsing, validation, transformation, and Spark job orchestration.

Analyze Spark jobs using Spark UI, event logs, and metrics to identify bottlenecks such as skew, shuffles, and spills.

Apply optimization techniques such as broadcast joins, partitioning strategies, file-size tuning, caching, and minimizing wide transformations.

Ensure robust data handling with checkpointing/recovery logic (if streaming adoption is part of the project).

Follow and enforce engineering standards for Scala coding, functional purity, immutability, type-safety, naming conventions, and error-handling.

Participate in code reviews, ensuring high quality, maintainability, and production readiness.

Work with testing teams to define unit, integration, and regression test coverage for pipelines and utility modules.

Support sprint planning, estimation, technical grooming, and production migration activities.

Collaborate with architects, product owners, QA, operations/SRE, cloud platform teams and other dependent systems.

Drive troubleshooting and root-cause analysis for issues encountered across environments.
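
The "reusable libraries/utilities for data parsing, validation, transformation" responsibility above can be sketched in pure Scala (no Spark dependency, so it stays self-contained). The Trade record and pipe-delimited format are hypothetical, chosen only to illustrate typed, Either-based parsing of the kind a pipeline utility module would expose:

```scala
// Hypothetical typed record; a real pipeline would define its own case classes.
final case class Trade(id: Long, symbol: String, qty: Int)

object TradeParser {
  // Parse one raw pipe-delimited line into either an error message or a Trade,
  // using pattern matching and a for-comprehension over Either.
  def parse(line: String): Either[String, Trade] =
    line.split('|') match {
      case Array(id, sym, qty) =>
        for {
          i <- id.toLongOption.toRight(s"bad id: $id")
          q <- qty.toIntOption.toRight(s"bad qty: $qty")
        } yield Trade(i, sym, q)
      case _ => Left(s"malformed line: $line")
    }

  // Split a batch into (failures, successes) in one pass.
  def parseAll(lines: Seq[String]): (Seq[String], Seq[Trade]) =
    lines.partitionMap(parse)
}
```

Because parsing returns `Either` rather than throwing, the same utility composes cleanly inside a Spark `map` over a typed Dataset, with failures routed to a quarantine sink instead of failing the job.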


Must-Have Technical Skills


Strong grasp of Scala fundamentals: collections, pattern-matching, functional constructs, immutability, error-handling (Option/Try/Either), APIs, and modular code design.

Experience writing reusable Scala functions, case-class-based models, and typed Dataset operations.

Handson experience with Spark Core, Spark SQL, Spark Datasets, Spark optimization, and understanding of execution plans (explain).

Knowledge of Catalyst optimizer basics and ability to interpret query plans.

Understanding of shuffles, partitions, caching, broadcast joins, and narrow/wide transformations.

Strong SQL (joins, window functions, incremental logic, aggregations).

Knowledge of schema evolution, data modeling for analytical pipelines, and modern lakehouse table formats (Delta/Iceberg/Hudi).
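
The Option/Try/Either error-handling skill listed above can be illustrated with a minimal sketch. The config map and keys here are hypothetical, chosen to show when each of the three channels fits:

```scala
import scala.util.Try

object ErrorChannels {
  // Hypothetical settings source; one value is deliberately malformed.
  private val conf = Map("retries" -> "3", "timeoutMs" -> "oops")

  // Option: models absence only, with no error detail.
  def raw(key: String): Option[String] = conf.get(key)

  // Try: wraps code that may throw (conf(key) and toInt both can).
  def asInt(key: String): Try[Int] = Try(conf(key).toInt)

  // Either: distinguishes "missing" from "bad format" with descriptive errors.
  def setting(key: String): Either[String, Int] =
    conf.get(key)
      .toRight(s"missing key: $key")
      .flatMap(v => v.toIntOption.toRight(s"not an int: $v"))
}
```

In pipeline code, `Either` is usually the richest choice for validation paths, while `Try` suits thin wrappers around exception-throwing Java or Spark APIs.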



Deliverables & KPIs

High-quality Scala/Spark modules, utilities, and transformation pipelines.

Readable, maintainable code with supporting test suites.

Design notes, runbooks, performance notes, and environment-specific tuning recommendations.

Code Quality: Low defects, high code review acceptance, strong test coverage.

Performance: Reduced job runtimes, minimized shuffle volumes, predictable SLA behavior.

Delivery: On-time module completion; smooth integration with upstream/downstream components.
