Sr AI Security Engineer

H&R Block India

2 - 5 years

Thiruvananthapuram

Posted: 27/04/2026

Job Description

Senior Security Engineer (AI Secure Design)

The AI Secure Design team is responsible for the independent security evaluation of AI technologies, tools, and platforms that the organization may adopt; the AI Secure Design & Evaluation Senior Engineer carries out these enterprise security evaluations. The team acts as the authoritative security review body for AI tooling, ensuring that AI technologies are adopted in a secure, well-governed, and enterprise-ready manner before they are made available to business or Information Technology teams. This team does not review application implementations or code. Its focus is technology research, security capability assessment, risk analysis, and guidance. This is a technology-focused security role, not an application security or SDLC role.


Scope of Responsibility


  • AI developer tools (e.g., coding assistants, copilots)
  • AI platforms and services (LLMs, GenAI APIs, agentic platforms, RAG frameworks)
  • Open-source AI tooling proposed for enterprise use


Core Responsibilities


  • Continuously research emerging AI technologies, tools, SDKs, platforms, and frameworks relevant to developers.
  • Define and maintain the AI Security Evaluation Framework used across the organization.
  • Perform deep security capability assessments of AI technologies prior to approval.
  • Monitor emerging AI technology trends and security implications.
  • Influence enterprise AI adoption strategy through proactive security research.
  • Track architectural trends such as:
      • LLM hosting models (SaaS vs. self-hosted)
      • Agentic platforms and tool-use patterns
      • Retrieval-Augmented Generation (RAG) ecosystems
  • Maintain an internal inventory and taxonomy of AI technologies under evaluation or approved use.
  • Evaluate vendors and platforms across areas including:
      • Data handling, retention, and isolation
      • Prompt and input handling controls
      • Output handling and downstream risk exposure
      • Model access control and tenancy isolation
      • Logging, auditability, and administrative controls
      • Vendor security posture (certifications, transparency, maturity)
  • Identify design-level security risks inherent to the technology (not developer misuse).
  • Analyze and document technology-specific risk profiles.
  • Clearly articulate risk conditions, assumptions, and constraints under which the technology can be safely used.
  • Produce formal AI Security Evaluation Reports for each technology, including:
      • Executive summary for leadership
      • Security architecture overview
      • Key risks and mitigations
      • Approved, restricted, or disallowed usage scenarios
  • Provide recommendations on:
      • Whether the technology is suitable for enterprise use
      • What classes of use cases are allowed/disallowed
      • Required safeguards or governance controls prior to adoption
  • Collaborate with Information Security stakeholders to ensure enterprise standards for approved AI technologies are defined, including:
      • Acceptable and prohibited usage patterns
      • Data categories allowed in AI interactions
      • Integration constraints with enterprise systems
      • Identity, access, and permission expectations
  • Publish clear, developer-consumable guidance explaining:
      • What AI tools are approved
      • How they may be used safely
      • What developers must not do
  • Serve as a trusted advisor to security leadership, architecture boards, and engineering leadership on AI adoption risk.
  • Provide early security input during technology selection, not after tools are already embedded.
  • Act as the single point of security opinion on AI tool approval decisions.


Required Skills & Qualifications


  • Bachelor's or Master's degree in Computer Science, Cybersecurity, or a related field.
  • 4+ years of experience in Information Security with a focus on application and product security.
  • Ability to evaluate AI security platforms/tools/software.
  • Solid understanding of:
      • Cloud and SaaS security models
      • Identity and access control concepts
      • Data protection and isolation mechanisms
  • Familiarity with modern AI system components:
      • LLM APIs and hosting models
      • Agent frameworks and tool invocation
      • RAG pipelines and vector stores
  • Understanding of GenAI and LLM-specific risks, including:
      • Prompt injection and indirect prompt injection
      • Insecure output handling
      • Model abuse and misuse
      • Data poisoning and supply-chain risk
  • Ability to translate AI-specific risks into enterprise security language for decision-makers.
  • Strong research and evaluation mindset.
  • Ability to produce clear, defensible evaluation reports.
  • Comfortable presenting trade-offs and risk-based recommendations.
  • Ability to say "not suitable," with evidence, when required.
  • Experience evaluating or approving thirdparty developer platforms or SaaS tools
  • Exposure to AI governance or AI risk management frameworks
  • Experience working with product, platform, or architecture review boards


Key Outputs (What This Team Delivers)

  • AI Security Evaluation Reports (per tool/platform)
  • Approved / Restricted / Disallowed AI Technology List
  • Business/IT-Facing Guidance on AI Tool Usage
