
Awign • India
Role & seniority: Senior AI Quality Engineer (Sr. AI QE)
AI/ML QA: LLMs, prompt engineering, embeddings, RAG, agent frameworks
Test automation: Cypress, JMeter (or equivalents); Python scripting for AI test automation
Platforms/APIs: Azure OpenAI / OpenAI APIs, AI Search, vector databases
Testing focus: API testing, integration testing, load/stress testing, failure-mode testing
Data: test datasets, synthetic data, golden response sets
Role: Sr. AI QE (Quality Engineer)
# of Positions: 1
Location: Bangalore / Pune / Vadodara / Chennai / Hyderabad
Shift Hours: 3:30 PM to 12:30 AM IST
Summary
The Senior AI Quality Engineer is responsible for ensuring reliability, accuracy, safety, and performance of the Agentic AI platform. This role goes beyond traditional QA, focusing on LLM behavior validation, agent orchestration testing, hallucination detection, prompt regression, and production hardening of AI systems.
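Hallucination detection often starts with a grounding check: verify that each claim in a model answer is supported by the retrieved context. Below is a minimal lexical sketch of that idea; production pipelines would use NLI models or embedding similarity instead of word overlap, and all names here are illustrative, not part of any platform described above.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercased word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def grounding_score(sentence: str, context: str) -> float:
    """Fraction of the sentence's words that appear in the retrieved context."""
    words = tokenize(sentence)
    if not words:
        return 1.0
    return len(words & tokenize(context)) / len(words)

def flag_ungrounded(answer: str, context: str, threshold: float = 0.5) -> list[str]:
    """Return answer sentences that are poorly supported by the context."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", answer) if s.strip()]
    return [s for s in sentences if grounding_score(s, context) < threshold]

context = "Paris is the capital of France. It lies on the Seine."
answer = "Paris is the capital of France. It was founded by aliens in 1950."
# The fabricated second sentence shares almost no words with the context,
# so it falls below the threshold and gets flagged.
flagged = flag_ungrounded(answer, context)
```

A real grounding validator would also handle paraphrase and citation accuracy; this sketch only illustrates the shape of the check.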
Key Responsibilities
Design and execute AI-specific test strategies for agentic workflows (agent interactions, tool calling, RAG pipelines).
Validate LLM outputs for accuracy, relevance, bias, safety, and policy compliance.
Create automated test frameworks for prompt regression, agent/model routing, and decision consistency.
Perform integration testing with internal and external systems.
Test AI failure modes: hallucinations, partial responses, tool failures, latency spikes, and token exhaustion.
Perform load, stress, and scalability testing for AI agents and orchestration services.
Validate RAG quality (retrieval precision, grounding, citation accuracy).
Support production readiness reviews, DR testing, and release sign-offs.
Maintain test datasets, synthetic data, and golden response sets.
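Prompt regression against a golden response set can be sketched very simply: re-run each prompt through the model and score the output against the stored golden answer. The sketch below uses `difflib` lexical similarity as a stand-in for a semantic scorer; the golden set, threshold, and `call_model` callable are all hypothetical examples, not artifacts of the platform described above.

```python
import difflib

# Hypothetical golden response set: prompt -> expected ("golden") answer.
GOLDEN_SET = {
    "What is the capital of France?": "The capital of France is Paris.",
    "How many days are in a leap year?": "A leap year has 366 days.",
}

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity stand-in for a semantic scorer (0.0-1.0)."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio()

def run_prompt_regression(call_model, threshold: float = 0.8) -> list[str]:
    """Re-run every golden prompt; return the prompts whose outputs drifted."""
    failures = []
    for prompt, golden in GOLDEN_SET.items():
        output = call_model(prompt)
        if similarity(output, golden) < threshold:
            failures.append(prompt)
    return failures

# A stub model that returns the golden answers verbatim passes cleanly.
assert run_prompt_regression(lambda p: GOLDEN_SET[p]) == []
```

In practice the scorer would be an embedding or LLM-as-judge comparison, and the regression run would be wired into CI so prompt or model changes surface drift before release.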
Required Skills & Experience
8–12 years in QA/Test Engineering, with 3+ years in AI/ML or GenAI systems.
Strong understanding of LLMs, prompt engineering, embeddings, RAG, and agent frameworks.
Experience with test automation tools (Cypress, JMeter, or equivalents).
Exposure to responsible AI, red-teaming, and model risk management.
Python scripting for AI test automation.
Familiarity with Azure OpenAI / OpenAI APIs, AI Search, and vector databases.
Experience testing APIs.
Nice to Have
Experience in regulated domains (e.g., Medicare, healthcare).