
Cognizant • London, England, United Kingdom
Role & seniority
AI Assurance Quality Engineer (entry-level growth role)
Stack/tools
AI/LLM concepts, data validation, API testing, test scripting (Python or JavaScript)
Git version control
Cloud platforms (Azure; AWS/GCP familiarity is a plus)
Automation/testing frameworks: basic familiarity with Playwright/Cypress (nice-to-have)
Top 3 responsibilities
Execute test cases to validate AI-generated outputs (accuracy, relevance, hallucination detection, bias indicators)
Test Retrieval-Augmented Generation (RAG) workflows and prompt templates; perform API and data validation (JSON, structured datasets, model responses)
Contribute to automated regression tests for AI-enabled features; document defects and AI quality risks; support governance evidence trail; participate in sprints and cross-functional collaboration; assist in accessibility/usability validation of AI-driven interfaces
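In practice, a responsibility like hallucination detection starts life as a scripted check. The Python sketch below (all names hypothetical, not from the posting) flags capitalized terms in a model answer that never appear in the retrieved source passages; it is a deliberately crude heuristic standing in for real evaluation tooling.

```python
def grounded_terms(answer: str, sources: list[str]) -> dict:
    """Naive grounding check: flag answer terms absent from every source passage.

    Real hallucination detection is far more sophisticated; this only
    illustrates the shape of such a test.
    """
    source_text = " ".join(sources).lower()
    # Treat capitalized tokens as candidate factual entities (a crude heuristic).
    candidates = {w.strip(".,") for w in answer.split() if w[:1].isupper()}
    ungrounded = sorted(t for t in candidates if t.lower() not in source_text)
    return {"candidates": sorted(candidates), "ungrounded": ungrounded}

result = grounded_terms(
    "Paris is the capital of Germany.",
    ["Paris is the capital and largest city of France."],
)
print(result["ungrounded"])  # ['Germany']
```

A real suite would replace the token heuristic with entity extraction or an LLM-as-judge call, but the test-case structure, answer plus sources in, defect list out, stays the same.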
Must-have skills
Strong foundation in software testing (functional and non-functional) and data validation
Proficiency in Python or JavaScript for test scripting
API testing and data validation techniques
Basic understanding of AI/LLM concepts and prompt-based systems
Analytical mindset with attention to detail; clear written communication; familiarity with Git
Azure certification is a plus
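API and data validation of model responses can be illustrated with a small stdlib-only Python sketch. The envelope fields (`answer`, `confidence`, `sources`) are assumptions for illustration, not a documented schema.

```python
import json

def validate_model_response(payload: str) -> list[str]:
    """Validate a (hypothetical) model-response JSON envelope.

    Returns a list of defect messages; an empty list means the payload passes.
    """
    try:
        data = json.loads(payload)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    errors = []
    for field, expected in (("answer", str), ("confidence", float), ("sources", list)):
        if field not in data:
            errors.append(f"missing field: {field}")
        elif not isinstance(data[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    # Semantic check beyond types: confidence must be a probability.
    if isinstance(data.get("confidence"), float) and not 0.0 <= data["confidence"] <= 1.0:
        errors.append("confidence out of range [0, 1]")
    return errors

good = '{"answer": "42", "confidence": 0.9, "sources": ["doc1"]}'
bad = '{"answer": "42", "confidence": 1.7}'
print(validate_model_response(good))  # []
print(validate_model_response(bad))
```

Returning a defect list rather than raising on the first failure mirrors how testers document issues: one run surfaces every problem in the payload.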
Nice-to-haves
Experience with automation frameworks (Playwright, Cypress)
Deeper knowledge of RAG concepts; familiarity with cloud environments (AWS, Azure, or GCP)
Awareness of Responsible AI principles (bias detection, explainability, fairness)
Experience testing data-driven or ML-enabled systems; understanding of CI/CD pipelines and DevOps practices
We are building and scaling AI-enabled digital products that integrate Large Language Models (LLMs), data pipelines, and modern cloud-native services. This role offers the opportunity to embed AI assurance practices early in the product lifecycle, ensuring AI features are reliable, measurable, and responsibly governed before reaching production. As an AI Assurance Quality Engineer, you will support the validation of AI-generated outputs, structured and unstructured data flows, and automation frameworks. You will work closely with engineering, data, and product teams to operationalize repeatable AI testing practices and contribute to responsible AI delivery. This is an entry-level growth role within a modern AI engineering environment, combining software testing, data validation, and AI evaluation techniques.
Key Responsibilities:
Execute test cases to validate AI-generated outputs (accuracy, relevance, hallucination detection, bias indicators)
Support testing of Retrieval-Augmented Generation (RAG) workflows and prompt templates
Perform API and data validation testing (JSON, structured datasets, model responses)
Contribute to automated regression test scripts for AI-enabled features
Document defects, AI quality risks, and contribute to governance evidence trails
Participate in sprint ceremonies, test planning, and cross-functional collaboration
Support accessibility and usability validation for AI-driven interfaces
Key Skills and Experience:
Strong understanding of software testing fundamentals (functional and non-functional testing)
Good knowledge of API testing and data validation techniques
Good proficiency in Python or JavaScript for test scripting
Basic understanding of AI/LLM concepts and prompt-based systems
Strong analytical skills with attention to detail when reviewing AI outputs
Good written communication skills for documenting defects and risks
Familiarity with Git version control workflows
Certified in Azure AI
Nice-to-have Skills:
Exposure to automation frameworks (Playwright, Cypress, or similar)
Understanding of Retrieval-Augmented Generation (RAG) concepts
Familiarity with cloud environments (AWS, Azure, or GCP)
Awareness of Responsible AI principles (bias detection, explainability, fairness)
Experience testing data-driven or ML-enabled systems
Understanding of CI/CD pipelines and DevOps practices
Qualifications:
Degree in Computer Science, Software Engineering, Data Science, or a related discipline, or equivalent practical experience
Demonstrated interest in AI systems, data quality, or automation testing
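Automated regression testing of RAG features, as described above, can be sketched as a table of query/expected-substring pairs replayed against the pipeline. Here `fake_rag_pipeline` is a hypothetical stub; a real suite would call the deployed API instead.

```python
def fake_rag_pipeline(query: str) -> str:
    """Stand-in for a real RAG pipeline (hypothetical; a real suite calls the service)."""
    corpus = {
        "refund policy": "Refunds are issued within 14 days of purchase.",
        "shipping": "Standard shipping takes 3-5 business days.",
    }
    for topic, passage in corpus.items():
        if topic in query.lower():
            return passage
    return "I don't have information on that."

# Regression cases: (query, substring the grounded answer must contain).
REGRESSION_CASES = [
    ("What is your refund policy?", "14 days"),
    ("How long does shipping take?", "3-5 business days"),
]

def run_regression() -> list[str]:
    """Replay every case; collect failures instead of stopping at the first one."""
    failures = []
    for query, expected in REGRESSION_CASES:
        answer = fake_rag_pipeline(query)
        if expected not in answer:
            failures.append(f"{query!r}: expected {expected!r} in {answer!r}")
    return failures

print(run_regression())  # []
```

Keeping the cases as data makes the suite cheap to extend whenever a prompt template changes: add a row, rerun, and attach the failure list to the governance evidence trail.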