AWS Snowflake Data Engineer
Nice Software Solutions Pvt. Ltd.
2 - 5 years
Pune
Posted: 14/02/2026
Job Description
AWS Snowflake Data Engineer
Location: Pune / Nagpur (Work From Office)
Experience: 5-8 Years
Role Overview
We are looking for a Senior AWS Data Engineer with strong Snowflake expertise to design, build, and optimize scalable, secure, and high-performance data platforms.
You will own end-to-end cloud data pipelines, work with diverse data sources, and deliver analytics-ready datasets by collaborating closely with data science, analytics, and business teams.
Key Responsibilities
- Design and build scalable ETL/ELT pipelines on AWS using Glue, Lambda, Step Functions, and EventBridge, loading data into Snowflake and S3 data lakes (a minimal sketch follows this list)
- Ingest and process data from multiple sources:
- S3, SFTP (AWS Transfer Family)
- REST / SOAP / FAST APIs
- Relational databases
- Real-time streams (Kinesis)
- Third-party SaaS platforms
- Architect and manage Snowflake-centric data platforms, including:
- Staging → Warehouse → Consumption layers
- Virtual warehouses, zero-copy cloning, time travel, streams & tasks
- Optimize Snowflake performance and cost, including clustering, materialized views, search optimization, query profiling, auto-suspend, and resource monitors
- Optimize Amazon Redshift performance and cost; manage S3 data lake zones (raw, refined, curated)
- Ensure data quality, reliability, lineage, and consistency using validation frameworks, idempotent pipelines, and automated testing
- Orchestrate event-driven workflows using Step Functions and EventBridge
- Implement CI/CD pipelines using CodePipeline and/or GitHub Actions with Git version control
- Design and enforce enterprise-grade security:
- VPC isolation, IAM least-privilege policies
- KMS encryption, Secrets Manager, PrivateLink
- CloudTrail and GuardDuty for auditing and monitoring
- Set up monitoring, alerting, and logging using CloudWatch, Snowsight, and automated recovery mechanisms
- Manage AWS Glue Data Catalog and Lake Formation for metadata, governance, and access control
- Create technical documentation, architecture diagrams, and runbooks
- Collaborate with data scientists, analysts, and business stakeholders to translate requirements into scalable data solutions
- Continuously identify and resolve performance bottlenecks, cost inefficiencies, and technical debt
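To make the first responsibility above concrete, here is a minimal, hypothetical Python sketch of the event-driven ingestion pattern: an S3 "Object Created" event routed through EventBridge invokes a Lambda that loads the new file into Snowflake with an idempotent COPY INTO. The secret id, stage, warehouse, and table names are placeholders, not details of this role.

```python
import json

import boto3
import snowflake.connector  # packaged as a Lambda layer in practice


def _snowflake_conn():
    # Credentials come from Secrets Manager, never from hard-coded values.
    secret = boto3.client("secretsmanager").get_secret_value(
        SecretId="prod/snowflake/etl-user"  # hypothetical secret id
    )
    creds = json.loads(secret["SecretString"])
    return snowflake.connector.connect(
        account=creds["account"],
        user=creds["user"],
        password=creds["password"],
        warehouse="LOAD_WH",  # hypothetical virtual warehouse
        database="RAW_DB",
        schema="STAGING",
    )


def handler(event, context):
    # EventBridge delivers the S3 "Object Created" event detail.
    bucket = event["detail"]["bucket"]["name"]
    key = event["detail"]["object"]["key"]
    conn = _snowflake_conn()
    try:
        # @RAW_STAGE is an external stage over the raw S3 zone. COPY INTO
        # skips files it has already loaded, which keeps retries idempotent.
        conn.cursor().execute(
            f"COPY INTO STG_ORDERS FROM @RAW_STAGE/{key} "
            "FILE_FORMAT = (TYPE = PARQUET) "
            "MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE"
        )
    finally:
        conn.close()
    return {"loaded": f"s3://{bucket}/{key}"}
```

At TB+ scale the same load would typically be handed to Snowpipe (auto-ingest from S3 event notifications) rather than a synchronous Lambda call; the sketch simply illustrates the event-driven wiring.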
Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 5-8 years of hands-on Data Engineering experience, with 4+ years on AWS
- Strong expertise in Snowflake, including:
- Architecture, Snowpipe, Streams & Tasks
- Snowpark (see the sketch after this list)
- Performance and cost optimization
- Expert-level proficiency in Python and SQL
- Extensive experience with AWS data services, including:
- Glue (Crawlers, Jobs, Spark)
- Lambda, Step Functions, EventBridge
- S3, Kinesis, API Gateway, Athena
- Redshift, Lake Formation
- Snowflake-AWS integrations
- Solid understanding of cloud security and compliance, including IAM, VPC, KMS, Secrets Manager, Transfer Family, CloudTrail, and GuardDuty
- Proven experience building and supporting production-grade data pipelines handling large-scale (TB+) data with strict SLAs
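As a purely illustrative example of the Snowpark expertise listed above, the sketch below pushes a filter and aggregation down to a Snowflake virtual warehouse and materializes an analytics-ready table. All database, schema, and table names are invented for the example.

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Placeholder connection settings; in production these would be read
# from AWS Secrets Manager rather than written out in code.
connection_parameters = {
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "TRANSFORM_WH",  # hypothetical warehouse
}

session = Session.builder.configs(connection_parameters).create()

# Filter and aggregate inside Snowflake instead of pulling rows out.
daily_revenue = (
    session.table("RAW_DB.STAGING.STG_ORDERS")  # hypothetical table
    .filter(col("STATUS") == "COMPLETE")
    .group_by(col("ORDER_DATE"))
    .agg(sum_(col("AMOUNT")).alias("REVENUE"))
)

# Materialize the analytics-ready dataset in the consumption layer.
daily_revenue.write.mode("overwrite").save_as_table("ANALYTICS.MARTS.DAILY_REVENUE")

session.close()
```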
Preferred Qualifications
- AWS Certified Data Engineer (Associate/Professional) or equivalent certifications
- SnowPro Associate or Professional certification
- Experience with Snowpark, dbt, and Airflow (MWAA); a sample DAG sketch follows this list
- Hands-on experience with real-time analytics and streaming (Kinesis, Lambda, MSK)
- Familiarity with big data ecosystems (Spark, EMR) and cost optimization strategies (Savings Plans, Reserved Instances, S3 Intelligent-Tiering)
- Strong communication skills with the ability to mentor junior engineers
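For the Airflow (MWAA) item above, a minimal DAG might look like the sketch below: a daily run that executes a Glue job and then refreshes a Snowflake table. The Glue job name, connection id, and stored procedure are hypothetical, and the imports assume the Amazon and Snowflake provider packages on Airflow 2.4+.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",  # Airflow 2.4+ style; older versions use schedule_interval
    catchup=False,
) as dag:
    # Land and clean the raw files with a Glue (Spark) job.
    ingest = GlueJobOperator(
        task_id="ingest_raw_orders",
        job_name="ingest-raw-orders",  # hypothetical Glue job
    )

    # Refresh the consumption-layer table in Snowflake.
    transform = SnowflakeOperator(
        task_id="build_daily_revenue",
        snowflake_conn_id="snowflake_default",
        sql="CALL ANALYTICS.MARTS.REFRESH_DAILY_REVENUE()",  # hypothetical proc
    )

    ingest >> transform
```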