Senior Web Scraping Engineer

Sasvat Infotech

5 - 10 years

Vadodara

Posted: 28/02/2026

Job Description

Experience: 4 to 7 years

Location: Vadodara / Ahmedabad or Remote

Job Type: 6-month contract (extendable)

Work Hours: 1:30 PM to 10:00 PM IST (US Eastern overlap preferred)


Company Description

Sasvat Infotech specializes in high-end application development, offering secure, scalable, and feature-rich solutions. Our applications are designed to enhance user experience and distinctly represent your brand. We are committed to delivering responsive and functional digital solutions tailored to meet client needs.


Role Description

We are hiring a Senior Web Scraping Engineer to help us migrate and rebuild a large-scale production crawling ecosystem.

We are accelerating and modernizing an existing distributed crawling platform that must survive blocking, scale horizontally on Azure, and deliver clean, reliable data at high throughput.

We need an engineer who treats spiders like distributed systems, not scripts. Someone who understands anti-blocking, system design, observability, and production stability. This role is about making crawlers stable at scale, not just making them work once.


Key Responsibilities

  • Rebuild and migrate existing crawling systems into scalable, production-grade architecture.
  • Design and develop industrial-grade spiders using Python and Scrapy.
  • Integrate Playwright for JS-heavy, dynamic, and protected environments.
  • Engineer advanced unblocking strategies, including session lifecycle control, traffic shaping and throttling, fingerprint consistency, a structured retry taxonomy, and stateful browser flows when required.
  • Design crawlers that are stateless wherever possible, queue-driven, and horizontally scalable on Azure.
  • Optimize Scrapy internals including concurrency, middleware, pipelines, and scheduling.
  • Deploy and scale crawlers using Azure containers and cloud-native infrastructure.
  • Own system reliability, including structured logging, metrics collection, failure classification, and observability and monitoring.
  • Ensure data quality, validation, and structured output pipelines.
  • Troubleshoot blocking, performance bottlenecks, and scaling limitations.
  • Contribute through disciplined GitHub PR workflows and maintain clean, extensible code.
  • Write code that another senior engineer can extend without rewriting.


Required Skills

  • 4-7 years of hands-on experience in web scraping and crawler engineering.
  • Strong production-level Python expertise.
  • Deep understanding of Scrapy internals: concurrency, middleware, throttling, and pipelines.
  • Hands-on production experience with Playwright.
  • Strong knowledge of HTTP protocol, sessions, cookies, headers, and request lifecycle.
  • Proven experience handling bot detection and anti-scraping mechanisms.
  • Experience designing systems that balance throughput, stealth, and cost.
  • Experience deploying and scaling systems on Azure (containers, scaling, monitoring).
  • Experience with SQL and/or NoSQL data storage.
  • Strong debugging mindset and system-thinking approach.
  • Experience with Git and structured PR/code review workflows.


Good to Have

  • Experience migrating legacy crawling systems to distributed cloud architecture.
  • Exposure to proxy orchestration and IP rotation strategies.
  • Experience designing distributed crawler clusters.
  • CI/CD experience for crawler deployment pipelines.
  • Familiarity with observability tools and monitoring frameworks.
  • Experience working in large-scale data migration or platform modernization projects.


Interested candidates are invited to send their resumes to hr@sasvatinfotech.com.
