
Cognizant • St Albans, England, United Kingdom
We are building and scaling AI-enabled digital products that integrate Large Language Models (LLMs), data pipelines, and modern cloud-native services. This role offers the opportunity to embed AI assurance practices early in the product lifecycle, ensuring AI features are reliable, measurable, and responsibly governed before reaching production.
As an AI Assurance Quality Engineer, you will support the validation of AI-generated outputs, structured and unstructured data flows, and automation frameworks. You will work closely with engineering, data, and product teams to operationalize repeatable AI testing practices and contribute to responsible AI delivery.
This is an entry-level growth role within a modern AI engineering environment, combining software testing, data validation, and AI evaluation techniques.
Key Responsibilities
Execute test cases to validate AI-generated outputs (accuracy, relevance, hallucination detection, bias indicators)
Support testing of Retrieval-Augmented Generation (RAG) workflows and prompt templates
Perform API and data validation testing (JSON, structured datasets, model responses)
Contribute to automated regression test scripts for AI-enabled features
Document defects and AI quality risks, and contribute to governance evidence trails
Participate in sprint ceremonies, test planning, and cross-functional collaboration
Support accessibility and usability validation for AI-driven interfaces
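To illustrate the kind of work described above, here is a minimal sketch of a data-validation check for a model response. Everything in it is hypothetical: the JSON shape (`answer`, `sources` fields), the `validate_model_response` helper, and the known-sources set are illustrative assumptions, not a description of Cognizant's actual test suite.

```python
# Hedged sketch: validating the structure and cited sources of a
# hypothetical JSON model response. Field names are assumptions.
import json


def validate_model_response(raw: str) -> list[str]:
    """Return a list of validation errors for a JSON model response."""
    errors = []
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]

    # Structural check: the response must carry a non-empty answer string.
    if not isinstance(payload.get("answer"), str) or not payload["answer"].strip():
        errors.append("missing or empty 'answer' field")

    # Groundedness proxy: every cited source must be one the retrieval
    # step actually supplied (hypothetical document ids).
    known_sources = {"doc-1", "doc-2"}
    for src in payload.get("sources", []):
        if src not in known_sources:
            errors.append(f"unknown source cited: {src}")

    return errors


sample = '{"answer": "Paris is the capital of France.", "sources": ["doc-1"]}'
print(validate_model_response(sample))  # []
```

Checks like this would typically run inside a test framework (e.g. pytest) as part of an automated regression suite, with one assertion per failure mode.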
Key Skills And Experience
Strong understanding of software testing fundamentals (functional and non-functional testing)
Good knowledge of API testing and data validation techniques
Proficiency in Python or JavaScript for test scripting
Basic understanding of AI/LLM concepts and prompt-based systems
Strong analytical skills with attention to detail when reviewing AI outputs
Good written communication skills for documenting defects and risks
Familiarity with Git version control workflows
Certified in Azure AI
Nice-to-Have Skills
Exposure to automation frameworks (Playwright, Cypress, or similar)
Understanding of Retrieval-Augmented Generation (RAG) concepts
Familiarity with cloud environments (AWS, Azure, or GCP)
Awareness of Responsible AI principles (bias detection, explainability, fairness)
Experience testing data-driven or ML-enabled systems
Understanding of CI/CD pipelines and DevOps practices
Qualifications
Degree in Computer Science, Software Engineering, Data Science, or a related discipline, or equivalent practical experience
Demonstrated interest in AI systems, data quality, or automation testing