Skill: Spark Scala
Experience: 5 to 14 years
Location: Kochi (Walk-in on 22nd March)
Job description
· Design, develop, and optimize distributed systems and Big Data solutions.
· Implement and maintain batch and streaming pipelines using Scala and Spark.
· Leverage experience with the Hadoop ecosystem and related technologies such as Hive, Oozie, and Kafka.
· Build and maintain CI/CD pipelines to ensure efficient code deployment and integration.
· Apply design patterns, optimization techniques, and locking principles to enhance system performance and reliability.
· Scale systems and optimize performance through effective caching mechanisms.
· Demonstrate strong computer science fundamentals, logical reasoning, and problem-solving abilities.
· Collaborate with cross-functional teams to drive innovative solutions.
· Provide technical leadership and mentorship, and deliver presentations that guide team members and stakeholders.