bebo Technologies • Bhubaneshwar, Odisha, India
Role & seniority: Mid-Senior level AI QA Engineer (QA for AI/ML focus)
Stack/tools: LLM/AI agents, RAG, vector databases, embeddings; Python or Java; Playwright, Selenium, or REST Assured; CI/CD; data/automation scripting
Test and validate LLM outputs for accuracy, completeness, consistency, usability, and hallucinations
Evaluate RAG systems (retrieval accuracy, document relevance, context construction, full response flows)
Test AI agents/autonomous workflows (decision-making, task execution, error handling) and validate fixes with new examples
QA methodologies, test design, functional/non-functional testing, defect lifecycle
LLM evaluation, hallucination types, prompt behavior, quality metrics
RAG concepts (vector databases, embeddings, retrieval relevance)
Coding in Python or Java; automation with Playwright/Selenium/REST Assured; CI/CD integration
Ability to analyze large AI output sets for patterns and systemic issues
Strong analytical reasoning and communication
Experience with vector search tools and embedding quality analysis
Automation framework maintenance and evaluation dataset development
Collaboration with ML/engineering, product, and QA teams to drive AI quality improvements
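The skills above center on screening LLM outputs for completeness and grounding before deeper review. As a minimal sketch of what such an automated check might look like (all names and thresholds here are hypothetical, not from the posting; production evaluators use semantic rather than string-overlap checks):

```python
# Hypothetical rule-based screen for LLM answers: measures completeness
# (required facts present) and flags sentences with zero word overlap with
# the source text as potentially ungrounded (a crude hallucination signal).

def check_output(answer: str, required_facts: list[str], source_text: str) -> dict:
    answer_lower = answer.lower()
    covered = [f for f in required_facts if f.lower() in answer_lower]
    source_words = set(source_text.lower().split())
    ungrounded = [
        s.strip() for s in answer.split(".")
        if s.strip() and not (set(s.lower().split()) & source_words)
    ]
    return {
        "completeness": len(covered) / len(required_facts) if required_facts else 1.0,
        "ungrounded_sentences": ungrounded,
    }

source = "The order shipped on May 2 and arrives in 5 days."
answer = "Your order shipped on May 2. It includes a free gift."
report = check_output(answer, ["May 2"], source)
print(report["completeness"])          # 1.0
print(report["ungrounded_sentences"])  # ['It includes a free gift']
```

A check like this would typically run inside a pytest suite wired into CI/CD, with stronger semantic similarity scoring replacing the word-overlap heuristic.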
Location: not specified
Work type: Full-time
Job Responsibilities
Test and validate LLM outputs, ensuring accuracy, correctness, completeness, consistency, and usability, and analyzing hallucinations.
Evaluate RAG systems, including retrieval accuracy, document relevance, context construction, and full response generation flows.
Test AI agents and autonomous workflows, validating decision-making, task execution, and error handling.
Design and execute AI-specific test strategies: dataset creation, edge-case testing, adversarial testing, pattern-based testing, and regression validation.
Develop evaluation frameworks, scoring rubrics, and benchmarking models for AI quality assessment.
Analyze large volumes of AI-generated responses to identify patterns, root causes, and issue clusters rather than isolated defects.
Validate fixes using new examples from the same pattern category, ensuring true model improvement.
Collaborate closely with AI/ML engineers, QA teams, and product managers to improve AI accuracy and performance.
Contribute to continuous improvement of AI QA practices, automation, tools, and evaluation datasets.
Nice to have: knowledge of testing conversational AI, workflows, or agent-based systems; exposure to vector search tools and embedding quality analysis.
Job Requirements
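The "patterns, not isolated defects" responsibility amounts to grouping failed responses by a shared failure signature. A toy sketch of that triage step, assuming a hypothetical two-category taxonomy (real rubrics are far richer):

```python
# Hypothetical triage: bucket failed LLM responses into issue clusters by a
# crude keyword signature, so fixes target a pattern rather than one defect.
from collections import defaultdict

failures = [
    {"id": 1, "note": "cited a nonexistent policy document"},
    {"id": 2, "note": "cited a nonexistent FAQ page"},
    {"id": 3, "note": "answer truncated mid-sentence"},
]

def signature(note: str) -> str:
    # Assumed taxonomy for illustration only.
    if "nonexistent" in note:
        return "fabricated-citation"
    if "truncated" in note:
        return "truncation"
    return "other"

clusters = defaultdict(list)
for failure in failures:
    clusters[signature(failure["note"])].append(failure["id"])

print(dict(clusters))  # {'fabricated-citation': [1, 2], 'truncation': [3]}
```

Validating a fix with new examples from the same pattern category then means sampling fresh inputs from one cluster and confirming the signature no longer fires.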
B. Tech or equivalent degree in Computer Science (or related field).
Seniority level: Mid-Senior level • Employment type: Full-time • Job function: Information Technology • Industries: Software Development