
AI Enabled Data Scraping Engineer – Mid Level ( 2 to 4 yrs )

AIMLEAP

2 - 4 years

Bengaluru

Posted: 15/05/2026


Job Description

AI Enabled Data Scraping Engineer – Mid Level

Experience: 2 to 4 Years

Location: Remote (Work from Home) / Bangalore / India

Mode of Engagement: Full-time

No. of Positions: 3

Educational Qualification: B.E / B.Tech / MCA / Computer Science / IT

Industry: AI / Data Engineering / Automation / SaaS

Notice Period: Immediate


What We Are Looking For:

  • 2 to 4 years of experience in Python-based web scraping, browser automation, and large-scale data extraction projects.
  • Strong hands-on experience with Scrapy, Selenium, Playwright, Requests, BeautifulSoup, or similar scraping frameworks.
  • Experience handling dynamic websites, login sessions, cookies, CAPTCHAs, proxy rotation, browser fingerprinting, and anti-bot protections.
  • Working knowledge of AI/LLM-powered automation workflows using OpenAI APIs, ChatGPT, Claude, Gemini, LangChain, or similar tools.
  • Experience working with APIs, JSON/XML data handling, databases, cloud platforms, and automation scripting using Python.
  • Familiarity with Docker, Linux, Git, scheduling tools, and scalable scraping architectures.
  • Good understanding of data cleaning, transformation, validation, monitoring, and structured/unstructured data processing workflows.
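To illustrate the kind of extraction, cleaning, and validation work listed above, here is a minimal, stdlib-only sketch (the markup, field names, and validation rules are hypothetical, chosen only for the example; production work would use frameworks such as Scrapy or BeautifulSoup):

```python
from html.parser import HTMLParser

# Hypothetical product-listing markup, standing in for a scraped page.
HTML = """
<ul>
  <li class="item"><span class="name"> Widget A </span><span class="price">$19.99</span></li>
  <li class="item"><span class="name">Widget B</span><span class="price">$5</span></li>
  <li class="item"><span class="name"></span><span class="price">n/a</span></li>
</ul>
"""

class ItemParser(HTMLParser):
    """Collects raw (name, price) pairs from span.name / span.price tags."""
    def __init__(self):
        super().__init__()
        self.rows = []
        self._field = None

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "span" and cls in ("name", "price"):
            self._field = cls
            if cls == "name":
                self.rows.append({"name": "", "price": ""})

    def handle_data(self, data):
        if self._field and self.rows:
            self.rows[-1][self._field] += data

    def handle_endtag(self, tag):
        if tag == "span":
            self._field = None

def clean(rows):
    """Trim whitespace, parse prices, and drop records failing validation."""
    out = []
    for r in rows:
        name = r["name"].strip()
        price_text = r["price"].strip().lstrip("$")
        try:
            price = float(price_text)
        except ValueError:
            continue  # validation: skip records with unparseable prices
        if name:  # validation: skip records with empty names
            out.append({"name": name, "price": price})
    return out

parser = ItemParser()
parser.feed(HTML)
records = clean(parser.rows)
```

The third row is dropped by validation (empty name, non-numeric price), leaving two clean records.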


Responsibilities:

  • Develop, maintain, and optimize scalable web scraping and browser automation pipelines for structured and unstructured web data extraction.
  • Build advanced scraping workflows using Scrapy, Selenium, Playwright, APIs, and Python automation frameworks.
  • Handle dynamic websites, anti-bot protections, login sessions, proxies, cookies, and browser automation challenges efficiently.
  • Work on AI-powered data extraction, enrichment, classification, and automation workflows using LLMs and AI tools.
  • Perform data cleaning, validation, transformation, storage, and monitoring for analytics and AI applications.
  • Collaborate with senior engineers, AI teams, product teams, and clients for scalable data acquisition projects.
  • Monitor crawler performance, debug failures, improve scraping efficiency, and maintain data quality standards.
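The failure-handling and crawler-robustness responsibilities above can be sketched with a small retry helper (a simplified illustration with a simulated flaky fetch; real pipelines would layer this with proxy rotation, logging, and monitoring):

```python
import time

def fetch_with_retries(fetch, retries=3, base_delay=0.01):
    """Call fetch(); on failure, retry with exponential backoff.

    Returns the result, or re-raises the last error once all
    attempts are exhausted.
    """
    for attempt in range(retries):
        try:
            return fetch()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 0.01s, 0.02s, ...

# Simulated flaky page fetch: fails twice, then succeeds.
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("temporary failure")
    return "<html>ok</html>"

result = fetch_with_retries(flaky_fetch)
```

Here `fetch_with_retries` absorbs the two transient failures and returns the page on the third attempt.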


Qualifications:

  • Bachelor's degree in Computer Science, Engineering, IT, or a related field.
  • Strong proficiency in Python programming and scraping frameworks such as Scrapy, Selenium, Playwright, or BeautifulSoup.
  • Good understanding of APIs, automation workflows, databases, JSON/XML handling, cloud concepts, and scalable scraping techniques.
  • Experience with AI tools, LLM APIs, browser automation, and modern scraping workflows is preferred.
  • Familiarity with Docker, Linux, Git, AWS, or cloud-based deployment environments is an added advantage.
  • Strong analytical, debugging, and problem-solving skills with the ability to work in fast-paced environments.
