
SwarmBench Task Engineer - 75243

Turing

2 - 5 years

Hyderabad

Posted: 03/05/2026


Job Description

About Turing:

Turing is one of the world's fastest-growing AI companies, accelerating the advancement and deployment of powerful AI systems. Turing helps customers in two ways: working with the world's leading AI labs to advance frontier model capabilities in thinking, reasoning, coding, agentic behavior, multimodality, multilinguality, STEM, and frontier knowledge; and leveraging that work to build real-world AI systems that solve mission-critical priorities for companies.


Role Overview:

We are looking for experienced SwarmBench Task Engineers (Code / SWE) to design and build high-quality multi-agent benchmark tasks based on real-world software engineering workflows.


In this role, you will create tasks grounded in real open-source code changes such as bug fixes, migrations, and refactors. These tasks are used to evaluate how effectively AI agents can understand large codebases, apply precise modifications, and produce correct, testable outputs.


You will work within a structured evaluation framework (Harbor), define clear task instructions, design verification logic, and decompose complex engineering problems across multiple specialized agents.
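

One way to picture such a task is as a small spec that pairs an instruction with its environment and verification command. The field names and values below are invented for illustration and are not Harbor's actual schema:

```yaml
# Hypothetical task spec -- all fields are illustrative, not Harbor's format.
task_id: mylib-parse-config-fix
instruction: >
  In mylib/utils.py, change parse_config to accept a keyword-only
  `strict` flag defaulting to False, preserving existing behavior.
base_commit: abc1234              # commit the agent starts from
environment: python:3.11-slim     # Docker image the task runs in
verifier: python /verifier/check.py   # exits 0 iff the change is correct
```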


What the day-to-day looks like:

  • Build multi-agent benchmark tasks based on real-world open-source code changes (bug fixes, migrations, refactors)
  • Work with the Harbor evaluation framework to run and validate tasks inside Docker environments
  • Write clear, precise task instructions specifying file paths, function signatures, expected behavior, and constraints
  • Design and implement Python-based verification scripts to validate correctness of agent-generated code changes
  • Create decomposition strategies that split complex code changes across multiple independent sub-agents
  • Run, debug, and refine tasks within containerized environments to ensure reproducibility and determinism
  • Evaluate task performance signals and improve task quality, clarity, and difficulty
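
A minimal sketch of the verification-script pattern described above: a structural check on the agent's edit, then a behavioral check. The repo layout, file name, and expected signature are all invented for illustration; a real verifier would typically shell out to pytest against the agent's actual checkout, but a plain subprocess assertion is used here so the sketch runs anywhere.

```python
# Hypothetical verifier sketch -- names and paths are illustrative only.
import subprocess
import sys
import tempfile
from pathlib import Path

def verify(repo: Path) -> bool:
    """Two-step pattern: structural check, then behavioral check."""
    target = repo / "mylib" / "utils.py"
    if not target.exists():
        return False
    # 1. Structural check: the instructed signature must appear verbatim.
    if "def parse_config(path, *, strict=False):" not in target.read_text():
        return False
    # 2. Behavioral check: run an assertion in a subprocess (a real
    #    verifier would usually invoke pytest here instead).
    result = subprocess.run(
        [sys.executable, "-c",
         "from mylib.utils import parse_config; assert parse_config('x') == {}"],
        cwd=repo, capture_output=True, text=True,
    )
    return result.returncode == 0

# Self-contained demo: build a throwaway "repo" that satisfies the task.
demo = Path(tempfile.mkdtemp())
(demo / "mylib").mkdir()
(demo / "mylib" / "utils.py").write_text(
    "def parse_config(path, *, strict=False):\n    return {}\n"
)
print("PASS" if verify(demo) else "FAIL")  # -> PASS
```

Keeping the structural and behavioral checks separate makes failures easier to diagnose and keeps the verifier deterministic inside the container.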


Requirements:

  • 5+ years of experience in Python and JavaScript development
  • Experience with AI coding benchmarks (e.g., SWE-bench, Terminal-Bench)
  • Strong experience reading and navigating large open-source codebases (e.g., Django, Flask, FastAPI, Node.js, or similar)
  • Familiarity with Git workflows, including pull requests, diffs, cherry-picking, and working with specific commits
  • Comfortable working with Docker (writing Dockerfiles, building images, debugging container issues)
  • Experience writing test scripts (pytest, unittest, or custom assertion-based testing)
  • Ability to write clear, precise, and unambiguous technical specifications

Perks of Freelancing With Turing:

  • Work on cutting-edge AI projects with leading foundation model companies
  • Collaborate on high-impact work at the frontier of LLM evaluation and reasoning
  • Remote, flexible opportunities with global teams


Offer Details:

  • Commitments Required: 8 hours per day with a 4-hour overlap with PST.
  • Employment Type: Contractor position (Note: this role does not include medical/paid leave).
  • Duration of Contract: 4 weeks; expected start date is next week.
