Accenture Federal Services • Hyderabad, Telangana, India
Role & seniority: Quality Engineer (Tester) / AI Test/QA Engineer; mid-level (3+ years of experience)
Stack/tools: Python; test frameworks (pytest/unittest/Robot Framework); REST/JSON; Postman; CI/CD (GitHub Actions, Jenkins, GitLab CI); cloud platforms (AWS, GCP, Azure); SQL; observability tools (Grafana, Datadog)
Develop and maintain test plans for AI-driven features (prompt workflows, API responses, model-based predictions) and execute them
Define evaluation standards for LLMs, retrieval systems, and AI integrations; design test datasets, edge cases, and scoring rubrics
Implement automated tests in Python and integrate into CI/CD; conduct regression testing and perform defect/root-cause analysis
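As a minimal sketch of the automated-testing duty above, a Python check for an AI endpoint's JSON response might validate structure and value ranges rather than exact wording. The response fields ("answer", "confidence") are hypothetical, not taken from this posting:

```python
# Sketch: validate the shape of an AI API response before deeper checks.
# The payload fields here are illustrative assumptions, not a real API contract.

def validate_ai_response(payload: dict) -> list[str]:
    """Return a list of problems found in an AI API response; empty means pass."""
    problems = []
    # The answer must be a non-empty string.
    if not isinstance(payload.get("answer"), str) or not payload["answer"].strip():
        problems.append("missing or empty 'answer'")
    # Confidence, if the API reports one, should be a number in [0, 1].
    confidence = payload.get("confidence")
    if not isinstance(confidence, (int, float)) or not 0.0 <= confidence <= 1.0:
        problems.append("'confidence' not in [0, 1]")
    return problems
```

A check like this slots naturally into a pytest suite and a CI/CD pipeline, since it returns findings rather than raising on the first failure.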
AI-focused testing capability (AI/LLM, dynamic/non-deterministic behavior)
Proficiency in Python; experience with test automation frameworks; API testing experience (REST/JSON) and Postman
3+ years in software QA/test automation or data validation; strong analytical/detail orientation
Experience testing AI/LLM-powered apps; familiarity with RAG, embeddings, vector databases
Observability/test data versioning tools; bias/fairness/Responsible AI validation; CI/CD exposure
Cloud workflows and strong cross-functional collaboration skills
Location & work type: not specified; educational requirement: bachelor’s in CS/engineering or equivalent; 15 years of full-time education
Project Role: Quality Engineer (Tester)
Project Role Description: Enables full stack solutions through multi-disciplinary team planning and ecosystem integration to accelerate delivery and drive quality across the application lifecycle. Performs continuous testing for security, APIs, and regression suites. Creates automation strategy and automated scripts, and supports data and environment configuration. Participates in code reviews, monitors and reports defects, and supports continuous improvement activities for the end-to-end testing process.
Must have skills: AI Penetration Testing
Good to have skills: Test Automation Strategy
Minimum 3 Year(s) Of Experience Is Required
Educational Qualification: 15 years full time education
Summary: As an AI Test/QA Engineer at Vertex, you will ensure the quality, reliability, and performance of AI-powered SaaS features across enterprise-scale applications. This role combines software testing fundamentals with a deep curiosity for AI behavior, focusing on validating large language models (LLMs), data pipelines, and AI-driven user experiences. You will design and execute testing frameworks that safeguard accuracy, fairness, and compliance while enabling rapid innovation in AI solutions.
Test Strategy & Execution: Develop and maintain automated and manual test plans for AI-driven features, including prompt workflows, API responses, and model-based predictions.
Quality Criteria Definition: Collaborate with engineers and data scientists to define evaluation standards for LLMs, retrieval systems, and AI integrations.
Dataset & Rubric Design: Create test datasets, edge cases, and scoring rubrics to measure accuracy, consistency, and robustness of AI outputs.
Regression Testing: Conduct regression tests for AI components during releases to prevent performance degradation.
Defect Analysis: Perform root-cause analysis of defects and provide actionable feedback to engineering and prompt-design teams.
Automation Development: Implement automated testing scripts in Python and integrate them into CI/CD workflows.
Observability & Metrics: Assist in building dashboards to track KPIs such as model accuracy, latency, and hallucination rates.
Responsible AI Validation: Validate AI behavior for fairness, factuality, and compliance with ethical AI principles.
Documentation: Maintain detailed test documentation, including reproducibility steps, coverage reports, and performance summaries.
Cross-Functional Collaboration: Work closely with QA, ML, and product teams to improve quality processes and release reliability.
Experience: 1–3 years in software QA, test automation, or data validation (AI/ML experience preferred).
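The scoring-rubric duty described above can be sketched in Python as a weighted mean over simple criteria. The criteria, names, and weights below are illustrative assumptions, not a rubric defined by this posting:

```python
# Sketch: a tiny scoring rubric for LLM outputs. Criterion names and
# weights are hypothetical; real rubrics would add many more checks.

def keyword_coverage(output: str, required: list[str]) -> float:
    """Fraction of required keywords present in the output (case-insensitive)."""
    text = output.lower()
    hits = sum(1 for kw in required if kw.lower() in text)
    return hits / len(required) if required else 1.0

def length_within(output: str, max_words: int) -> float:
    """1.0 if the answer respects the word budget, else 0.0."""
    return 1.0 if len(output.split()) <= max_words else 0.0

def score(output: str, required: list[str], max_words: int,
          weights: tuple[float, float] = (0.7, 0.3)) -> float:
    """Weighted mean of the criterion scores."""
    parts = (keyword_coverage(output, required), length_within(output, max_words))
    return sum(w * s for w, s in zip(weights, parts))
```

Rubrics like this make "accuracy, consistency, and robustness" measurable: the same scorer runs over a fixed test dataset on every release, so regressions show up as score drops.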
Proficiency in Python and familiarity with test automation frameworks (pytest, unittest, Robot Framework).
Understanding of RESTful APIs, JSON structures, and testing tools (e.g., Postman).
Ability to define and execute test cases for dynamic, non-deterministic AI behaviors (LLMs, chatbots, generative models).
Familiarity with SQL, basic data structures, and cloud workflows (AWS, GCP, Azure).
Strong analytical and problem-solving skills with attention to reproducibility and detail.
Bonus Points: Experience testing LLM-powered or generative AI applications (OpenAI, Anthropic, Vertex AI). Exposure to RAG systems, embeddings, or vector databases. Knowledge of observability tools (Grafana, Datadog) and test data versioning systems. Interest in Responsible AI evaluation: bias detection, interpretability testing. Familiarity with CI/CD tools (GitHub Actions, Jenkins, GitLab CI). Strong communication skills and curiosity about AI product development.
Communicate with Clarity: Be clear, concise, and actionable. Be relentlessly constructive. Seek and provide meaningful feedback.
Act with Urgency: Adopt an agile mentality of frequent iterations, improved speed, and resilience. Apply the 80/20 rule: better is the enemy of done. Don't spend hours when minutes are enough.
Work with Purpose: Exhibit a "We Can" mindset. Results outweigh effort. Everyone understands how their role contributes. Set aside personal objectives for team results.
Drive to Decision: Cut the swirl with defined deadlines and decision points. Be clear on individual accountability and decision authority. Be guided by a commitment to and accountability for customer outcomes.
Own the Outcome: Defined milestones, commitments, and intended results. Assess your work in context; if you're unsure, ask. Demonstrate unwavering support for decisions.
Education: Bachelor's degree in Computer Science, Engineering, or a related technical field (or equivalent practical experience).
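Testing the non-deterministic behaviors mentioned above usually means asserting invariants over repeated runs rather than exact outputs. A minimal sketch, where `fake_model` is a hypothetical stand-in for an LLM call:

```python
# Sketch: invariant-based testing for a non-deterministic generator.
# `fake_model` is a placeholder, not a real model API.

import random

def fake_model(prompt: str, seed: int) -> str:
    """Hypothetical model call: wording varies, the fact should not."""
    rng = random.Random(seed)
    templates = [
        "Paris is the capital of France.",
        "The capital of France is Paris.",
    ]
    return rng.choice(templates)

def check_invariants(prompt: str, runs: int = 10) -> bool:
    """Sample several outputs and require each to satisfy the invariant."""
    outputs = [fake_model(prompt, seed=i) for i in range(runs)]
    return all("paris" in out.lower() for out in outputs)
```

Exact-match assertions would be flaky here; invariants (required facts present, length bounds, schema validity) stay stable across samples, which is what makes LLM regression suites reproducible.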
COMMENTS: The above statements are intended to describe the general nature and level of work being performed by individuals in this position. Other functions may be assigned, and management retains the right to add or change the duties at any time.