Technical Specialist — AI Security
Nexora Tech Solutions
2 - 5 years
Mumbai
Posted: 21/04/2026
Job Description
NEXORA TECH | Board Ready Program Technical Specialist AI Security Contract Per Engagement 34 Weeks
This is not a general cybersecurity role. Traditional penetration testing experience does not qualify. This engagement requires hands-on, LLM-specific adversarial testing capability.
About the Engagement
Nexora Tech's Board Ready Program is a structured six-month AI governance advisory engagement for boards and senior leadership, led by Aparna Kumar (Founder, Nexora Tech; former CIO at SBI and HSBC). Month 4 of every engagement is the Adversarial Testing and Resilience phase where the governance framework built in Months 13 is stress-tested against real-world AI failure modes.
The Technical Specialist is contracted solely for this phase: a fixed 34 week engagement with defined deliverables that feed directly into the client's AI governance record and board reporting. The specialist works under the Engagement Delivery Lead and in close coordination with Aparna Kumar, who reviews all client-facing output before delivery.
Nexora Tech is building a standing panel of 46 approved Technical Specialists for consistent engagement across multiple programmes. Panel familiarity with methodology and deliverable standards significantly reduces onboarding friction.
Scope of Engagement
01 Adversarial Testing
- Conduct direct prompt injection testing across client-deployed LLMs and GenAI systems covering customer-facing tools, internal productivity systems, and AI-assisted decisioning assessing susceptibility to instruction override, jailbreak, and role-play exploitation.
- Conduct indirect prompt injection testing via adversarial content embedded in documents, emails, or data the AI system retrieves during execution.
- Execute agentic AI excessive agency testing probing permission boundaries, goal-hijacking scenarios, and whether agent actions can be redirected through adversarial prompting.
- Conduct hallucination assessments measuring hallucination rate, confidence calibration, and conditions producing plausible but false outputs.
- Assess data poisoning risk vectors in the client's training and fine-tuning pipelines where applicable.
- Document all test scenarios, findings, and risk ratings in Nexora Tech's deliverable templates, producing a Technical Findings Register for the AI Governance Dashboard and board reporting.
02 Resilience & Incident Readiness
- Validate kill-switch and safe-stop protocols for all material AI systems under adversarial conditions and at operational speed.
- Design and facilitate a tabletop exercise simulating a material AI incident covering at minimum: a hallucination harm event, a data leakage event through an AI interface, and a deepfake/synthetic media misuse scenario.
- Assess deepfake and synthetic media threat protocols evaluating the client's capability to detect and respond to voice cloning, document forgery, and identity spoofing.
- Review the client's AI incident response plan for AI-specific adequacy escalation thresholds, containment procedures, and board notification protocols.
- Produce a Resilience Gap Assessment documenting the delta between current readiness and the standard required for the client's AI risk profile and regulatory exposure.
03 Reporting & Knowledge Transfer
- Translate all technical findings into board-readable governance language output must be actionable by board committee members, CROs, and independent directors without requiring technical expertise.
- Submit all draft deliverables to the Engagement Delivery Lead and Aparna Kumar for review and sign-off before any client delivery.
- Participate in a structured findings debrief before the client session, contextualising findings within the client's governance posture and programme progress.
- Brief the Governance Analyst post-engagement on findings suitable for incorporation into the evidence trail, regulatory documentation, and board packs.
- For panel specialists: contribute to continuous improvement of Nexora Tech's adversarial testing methodology flagging emerging attack vectors, LLM-specific vulnerabilities, and regulatory developments.
The Qualification Distinction
Does NOT Qualify
Does Qualify
Traditional pen-testing / network intrusion
LLM adversarial testing prompt injection, jailbreak, indirect injection
Web application vulnerability scanning
Agentic AI excessive agency testing permission probing, goal hijacking
Generic red team exercises
AI red team exercises hallucination persistence, data poisoning vectors
Malware analysis or SIEM operations
Deepfake protocol design and tabletop facilitation
Cloud security/infrastructure hardening
Kill-switch validation and safe-stop protocol testing
Essential Requirements
Experience
- Demonstrated, hands-on LLM-specific adversarial testing experience direct/indirect prompt injection, jailbreak design, agentic AI permission boundary testing with evidence of client or research engagements.
- Structured findings reports from AI security assessments used in professional, regulatory, or audit contexts not research papers or CTF write-ups alone.
- Ability to frame technical risk findings in terms of NIST AI RMF, EU AI Act obligations, and board-level fiduciary implications.
- Prior tabletop facilitation or incident response simulation experience (preferred; can be developed with Nexora Tech support).
Technical Knowledge
- Deep working knowledge of LLM attack surfaces: prompt injection taxonomy (direct, indirect, multi-turn, jailbreak, role-play), model inversion, training data extraction, and adversarial example generation.
- Working knowledge of agentic AI architecture and associated risk vectors: tool use, multi-agent orchestration, memory persistence, excessive agency, and goal hijacking.
- Understanding of GenAI data risk: training data contamination, synthetic data feedback loops, and model behaviour degradation.
- Familiarity with deepfake and synthetic media techniques as an assessor of organisational vulnerability not as a creator.
- Working knowledge of the OWASP Top 10 for LLMs and NIST's Generative AI Risk Management Profile.
Profile
- AI/ML security researcher, red team practitioner with AI specialisation, or specialist from an AI security consultancy or Big Four cyber practice with demonstrable LLM-specific experience, not general cyber credentials.
- Able to work within a defined methodology and deliverable template structure the scope is fixed; the specialist operates within Nexora Tech's framework.
- Professional written communication to board-readable standard technical findings that cannot be translated into governance language are not fit for purpose.
This Engagement Will Not Work If
- Your adversarial testing experience is limited to network pen-testing, web application scanning, or infrastructure security these do not transfer directly to LLM-specific work.
- You cannot translate technical findings into governance language board-readable output is non-negotiable.
- You prefer to present findings directly to clients all client delivery is coordinated through the Engagement Delivery Lead and Aparna Kumar.
- You are unavailable for a defined 34 week window scheduled 46 weeks in advance.
- You are seeking a research or exploratory engagement scope, deliverables, and output standards are fixed.
Engagement Terms
Type
Contract Fixed Scope, Per Engagement
Duration
34 weeks (Month 4 of the Board Ready Program)
Location
Remote; client site for tabletop exercises as required
Compensation
TBD per engagement (pass-through within programme pricing)
Reporting
Engagement Delivery Lead (day-to-day); Aparna Kumar (sign-off)
Scheduling
Confirmed 46 weeks before Month 4 commencement
Panel
Preference for a standing panel of 46 specialists across multiple programmes
How to Apply
Email hr@nexoratechsolutions.com | Subject: Technical Specialist AI Security Panel
Your application must address:
- One LLM-specific adversarial testing engagement systems tested, methodology used, and findings produced (anonymised as required).
- Your working knowledge of indirect prompt injection specifically how you test for it in agentic systems that retrieve external content.
- Your interest in a panel arrangement single engagement or ongoing panel relationship.
Shortlisting is based on evidence of LLM-specific adversarial testing experience not certifications.
Nexora Tech | Clarity. Architecture. Governance. Impact. hr@nexoratechsolutions.com www.nexoratechsolutions.com WhatsApp: +91 9699746985
Services you might be interested in
Improve Your Resume Today
Boost your chances with professional resume services!
Get expert-reviewed, ATS-optimized resumes tailored for your experience level. Start your journey now.
