ETL Developer
Tekskills Inc.
5 - 7 years
Bengaluru
Posted: 17/12/2025
Job Description
8+ years of experience in Data warehouse and Data Lake Implementations as a Data Engineer/ETL Developer
- Deep understanding of Kafka architecture and concepts (topics, partitions, clustering, brokers) and related tools (Kafka Connect, Kafka Streams, Schema Registry).
- Strong experience with relevant AWS services for data streaming and infrastructure management (e.g., MSK, EC2, S3, Lambda, IAM, VPC).
- Expertise in one or more programming languages commonly used with Kafka, such as Java, Scala, or Python.
- Strong knowledge and experience in Data Lake and Data Warehouse architecture and concepts in cloud
- Expert in Data Ingestion (ETL and ELT), Dimensional Modeling, Data Quality, and Data Validation
- Strong working experience in Snowflake using warehouses, stored procedures, streams, Snowpipe, tasks, stages, storage integrations, and ingestion frameworks and tools.
- Ability to develop ELT/ETL pipelines to move data to and from the Snowflake data store using a combination of Python, advanced SQL, and Snowflake SnowSQL.
- Strong hands-on experience with requirements gathering, analysis, coding, testing, implementation, maintenance, and review.
- Extensive development experience using advanced SQL, Python, and other scripting languages (Perl, shell, etc.)
- Experience in working with core data engineering services in AWS, MS Azure, or other cloud providers.
- Good hands-on experience with workflow management tools such as Airflow, Control-M, etc.
- Solid experience with application support and resolving production issues.
- Great customer support skills and adaptability to changing business needs.
- Good project management skills, experience with Agile methodologies (Kanban and Scrum), and strong communication and interpersonal skills.
- Ability to lead project efforts and perform proof-of-concept (POC) analysis of technology solutions.
- Experience with containerization technologies (e.g., Docker, Kubernetes)
- Experience with DevOps, Git, CI/CD
- Experience directly managing a team in an onsite/offshore development environment.
- Experience with ServiceNow data is preferred.
Strongly preferred: 3 to 5 years of experience working with Kafka platforms in the areas below
- Design, implement, and maintain Kafka producers, consumers, and stream processing applications using languages like Java, Scala, or Python.
- Deploy, manage, and optimize Kafka clusters and related applications on AWS services such as Amazon Managed Streaming for Apache Kafka (MSK), EC2, S3, Lambda, and CloudWatch.
- Develop and manage end-to-end data pipelines involving Kafka Connect, Kafka Streams, and other data integration tools.
- Ensure the performance, scalability, and reliability of Kafka-based systems, including cluster tuning, monitoring, and troubleshooting.
- Implement security best practices for Kafka on AWS, including authentication, authorization (ACLs), and data encryption.
- Manage Kafka Schema Registry for data serialization and schema evolution.
- Develop real-time stream processing applications using Apache Spark Streaming, Kafka Streams, or AWS Lambda.
- Implement complex event processing (CEP) patterns for real-time analytics.