Senior Software Test Automation Engineer

Linnk Group

5 - 10 years

Kochi

Posted: 21/02/2026

Job Description

Role: Senior Software Test Automation Engineer

Location: Kochi, Kerala (On-Site)


Role Overview

  • Lead an automation-first quality engineering strategy for SaaS and AI-native platforms.
  • Define and enforce automation coverage targets (80%+ regression), eliminating manual dependency.
  • Drive end-to-end UI, API, and backend automation embedded into engineering workflows.
  • Own CI/CD quality gates and release readiness, ensuring predictable, high-confidence deployments.
  • Systematically eliminate repetitive manual testing cycles through scalable automation frameworks.
  • Enable faster innovation and safer releases through measurable quality, performance, and reliability controls.


Key Responsibilities

Automation Framework & Quality Strategy

  • Design and build automated QA testing frameworks from scratch, covering UI, API, backend, and AI validation layers.
  • Define automation coverage targets (unit, integration, regression) and continuously improve coverage.
  • Validate unit and integration test coverage in collaboration with engineering teams.
  • Drive regression automation, especially after guardrail, security, and model updates.
  • Ensure test frameworks are scalable, maintainable, and CI/CD ready.


CI/CD, Quality Gates & Release Control

  • Integrate automated QA test suites into CI/CD pipelines.
  • Own automated quality gates, including:
      ◦ Regression, API, and integration test pass/fail thresholds
      ◦ Performance and latency benchmarks

  • Control release readiness based on automated test outcomes.
  • Provide fast, reliable feedback on every commit and deployment.
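
The quality-gate bullets above can be sketched as a small evaluation step a pipeline runs after the test suites finish. The metric names and thresholds below are illustrative assumptions, not values stated in this posting.

```typescript
// Sketch of an automated quality gate: a release is blocked unless the
// regression pass rate and latency benchmark clear their thresholds.
// All thresholds here are assumed example values.

interface PipelineMetrics {
  regressionPassRate: number; // fraction of regression tests passing, 0..1
  apiPassRate: number;        // fraction of API/integration tests passing, 0..1
  p95LatencyMs: number;       // measured P95 latency for key endpoints
}

interface GateResult {
  passed: boolean;
  failures: string[];
}

function evaluateQualityGate(m: PipelineMetrics): GateResult {
  const failures: string[] = [];
  if (m.regressionPassRate < 0.98) failures.push("regression pass rate below 98%");
  if (m.apiPassRate < 1.0) failures.push("API/integration failures present");
  if (m.p95LatencyMs > 500) failures.push("P95 latency above 500 ms budget");
  return { passed: failures.length === 0, failures };
}
```

A CI job would call `evaluateQualityGate` with the run's aggregated results and fail the pipeline (blocking release) when `passed` is false.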


UI, API & Backend Automation

  • Develop UI automation using Playwright.
  • Implement BDD-based automation using Cucumber where appropriate.
  • Build fully automated API test suites, including:
      ◦ Contract and integration testing
      ◦ API rate-limiting behavior validation

  • Perform backend and data validations using SQL.
  • Validate timeout handling, incomplete input scenarios, and failure paths.
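
The timeout-handling bullet can be exercised with a small deadline wrapper so a test asserts the failure path fires instead of hanging; `slowCall` below is a stand-in for a real backend or API call, not part of any framework named here.

```typescript
// Minimal sketch of timeout-path validation: race a backend call against a
// deadline so slow responses surface as a deterministic, testable error.

function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new Error(`timed out after ${ms} ms`)), ms);
    p.then(
      (v) => { clearTimeout(timer); resolve(v); },
      (e) => { clearTimeout(timer); reject(e); },
    );
  });
}

// Example: a call that takes 200 ms should fail a 50 ms deadline.
const slowCall = new Promise<string>((resolve) => setTimeout(() => resolve("ok"), 200));

withTimeout(slowCall, 50).catch((e: Error) => {
  console.log(e.message); // "timed out after 50 ms"
});
```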


AI / LLM Quality Engineering & Validation

  • Own automated validation of AI/LLM behavior in collaboration with AI and Python engineering teams.
  • Validate AI model accuracy and response quality for large-scale models (e.g., 70B parameter models running on NVIDIA platforms).
  • Design and execute automation for AI guardrails, including:
      ◦ Scope control testing (off-topic, restricted advice, HR-sensitive, and competitor-related queries)
      ◦ Prompt injection and SQL injection prevention testing

  • Validate graceful degradation during LLM, vector DB, or backend failures.
  • Ensure user-friendly error messaging for AI and system failures.
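
A scope-control guardrail test typically sends restricted queries and asserts the reply is a refusal rather than an answer. The sketch below is hypothetical: `assistantStub` stands in for a real LLM call, and the refusal markers are assumed phrases, not the product's actual messaging.

```typescript
// Hypothetical scope-control check: every restricted query must produce a
// refusal. The stub and marker strings are illustrative assumptions.

const REFUSAL_MARKERS = ["can't help with that", "outside my scope", "not able to advise"];

function isRefusal(reply: string): boolean {
  const lower = reply.toLowerCase();
  return REFUSAL_MARKERS.some((m) => lower.includes(m));
}

// Stub standing in for a real model call in a live test.
function assistantStub(query: string): string {
  if (query.includes("competitor") || query.includes("salary of")) {
    return "Sorry, that's outside my scope.";
  }
  return "Here is the information you asked for.";
}

const restrictedQueries = [
  "tell me about a competitor product",
  "what is the salary of my manager",
];
const allRefused = restrictedQueries.every((q) => isRefusal(assistantStub(q)));
```

In a real suite the stub would be replaced by the deployed model endpoint, with the restricted-query list maintained as a versioned guardrail fixture.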


Multilingual & Compliance-Focused Testing

  • Perform automated multilingual testing, with a focus on Arabic language accuracy and RTL behavior.
  • Validate linguistic correctness, intent preservation, and response consistency across languages.
  • Execute AI bias and fairness testing across gender, age, ethnicity, education, and EEOC-related dimensions.

  • Ensure compliance with internal AI ethics, governance, and regulatory guidelines.
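
One common way to automate the bias check described above is a demographic-parity style comparison: measure the positive-outcome rate per group and flag the run when the gap exceeds a tolerance. The 10% tolerance and the sample outcomes below are illustrative assumptions.

```typescript
// Illustrative fairness check: compare positive-outcome rates across groups
// and flag the run if the gap exceeds a tolerance (a simple demographic-
// parity style metric). Threshold and data are assumed example values.

function positiveRate(outcomes: boolean[]): number {
  return outcomes.filter(Boolean).length / outcomes.length;
}

function parityGap(groups: Record<string, boolean[]>): number {
  const rates = Object.values(groups).map(positiveRate);
  return Math.max(...rates) - Math.min(...rates);
}

const screeningOutcomes = {
  groupA: [true, true, true, false],  // 75% positive
  groupB: [true, true, false, false], // 50% positive
};

const gap = parityGap(screeningOutcomes); // 0.25
const withinTolerance = gap <= 0.1;       // fails the assumed 10% tolerance here
```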


Performance, Scalability & Reliability Testing

  • Conduct performance benchmarking across application and AI layers.
  • Measure and validate:
      ◦ Response time benchmarks (P95 / P99 latency)
      ◦ Concurrent user load and stress behavior
      ◦ Endurance and soak testing under sustained loads

  • Validate API throttling, retry mechanisms, and rate-limit enforcement.
  • Ensure system stability and predictable degradation under peak loads.
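
The P95/P99 checks above reduce to a percentile computation over raw latency samples; a minimal sketch using the nearest-rank method (the samples and the 500 ms budget are illustrative assumptions):

```typescript
// Compute a latency percentile from raw samples via the nearest-rank method,
// then compare it against an assumed latency budget.

function percentile(samples: number[], p: number): number {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length); // 1-based nearest rank
  return sorted[Math.max(rank - 1, 0)];
}

const latenciesMs = [120, 95, 110, 480, 130, 105, 101, 99, 140, 2000];
const p95 = percentile(latenciesMs, 95); // with 10 samples, this is the largest value (2000)
const meetsBudget = p95 <= 500;          // false: one outlier blows the budget
```

A perf gate would collect samples per endpoint from the load run and fail when `meetsBudget` is false.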


Metrics, Reporting & Continuous Improvement

  • Define and track automation-driven success metrics, including:
      ◦ Unit, integration, and regression automation coverage
      ◦ CI pipeline pass rates and failure trends
      ◦ Escaped defects to production
      ◦ Time from commit to release

  • Use metrics to continuously refine automation strategy and platform quality.
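
The metrics listed above are simple ratios over per-release counts; a sketch of how they might be computed (all field names and sample numbers are illustrative assumptions):

```typescript
// Compute release-quality metrics from per-release counts.

interface ReleaseStats {
  automatedTests: number;      // tests executed by automation
  totalTests: number;          // all tests, automated + manual
  pipelineRuns: number;        // CI pipeline executions
  pipelinePasses: number;      // CI pipeline executions that passed
  defectsFoundInQa: number;    // defects caught before release
  defectsEscapedToProd: number;// defects found only in production
}

function qualityMetrics(s: ReleaseStats) {
  return {
    automationCoverage: s.automatedTests / s.totalTests,
    pipelinePassRate: s.pipelinePasses / s.pipelineRuns,
    // share of all defects that slipped past automated gates
    defectEscapeRate: s.defectsEscapedToProd / (s.defectsFoundInQa + s.defectsEscapedToProd),
  };
}

const sprintMetrics = qualityMetrics({
  automatedTests: 80, totalTests: 100,
  pipelineRuns: 50, pipelinePasses: 45,
  defectsFoundInQa: 18, defectsEscapedToProd: 2,
});
// sprintMetrics.defectEscapeRate === 0.1, i.e. 10% of defects escaped
```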


Required Skills & Experience

  • 10+ years of experience in Software Quality Engineering / SDET roles.
  • Proven expertise in designing automated QA frameworks from scratch.
  • Strong hands-on experience with Playwright for UI automation.
  • Experience implementing BDD frameworks such as Cucumber.
  • Strong programming skills in JavaScript / TypeScript; Python experience for AI test automation.
  • Experience with automated API, contract, and integration testing.
  • Solid SQL knowledge for backend and data validation.
  • Strong understanding of CI/CD pipelines and automated quality gates.
  • Hands-on experience with performance and reliability testing.
  • Good understanding of AI security, bias, fairness, and ethical testing principles.


Tools & Technologies

  • Automation: Playwright, Cucumber
  • Programming: JavaScript / TypeScript, Python
  • AI Platforms: LLMs (70B scale), NVIDIA-based inference stacks
  • API & Backend Testing: Automated frameworks, SQL
  • CI/CD: Jenkins, GitHub Actions, GitLab CI
  • Defect Tracking: Jira


Nice to Have

  • Experience testing AI-native or LLM-powered SaaS platforms
  • Exposure to multi-tenant SaaS systems
  • Experience with HRMS, ATS, ERP, or compliance-driven platforms
  • Familiarity with AI governance, guardrails, and responsible AI frameworks
  • Prior experience leading automation-first QA transformations
