Xerox • Cebu, Isabela, Philippines
Role & seniority: AI Test Lead/Coordinator (mid to senior level, newly established role)
Stack/tools: AI/ML platforms (Azure ML, Databricks, SageMaker, MLflow), AI testing tools, Python scripting, data validation practices; strong emphasis on governance, explainability, and model monitoring
Own end-to-end AI testing strategy, standards, and responsible AI guidelines; ensure coverage for accuracy, robustness, bias, drift, explainability, and data quality
Plan, coordinate, and execute AI testing cycles across Data Science, Engineering, QA, and Business teams; manage environments, data prep, scenarios, and model validation
Lead model/defect management and UAT collaboration; run defect triage, validate outputs and drift, and translate results for stakeholders
BS in CS/Data Science/IT/Engineering or related field
3–5 years in software QA/testing or data validation; strong coordination across multiple teams
Foundational AI/ML concepts (accuracy, bias, drift, model lifecycle); strong analytical and documentation abilities
Excellent communication and stakeholder management
Experience with AI/ML platforms (Azure ML, Databricks, SageMaker, MLflow)
Python scripting; exposure to model evaluation, explainability, LLM validation
Knowledge of responsible AI practices; relevant certifications (ISTQB, Agile/Scrum, AI)
Location & work type: Not specified in the posting.
About Xerox Holdings Corporation
For more than 100 years, Xerox has continually redefined the workplace experience. Harnessing our leadership position in office and production print technology, we've expanded into software and services to sustainably power the hybrid workplace of today and tomorrow. Xerox continues its legacy of innovation, delivering client-centric, digitally driven technology solutions that meet the needs of a global, distributed workforce. From the office to industrial environments, our differentiated business and technology offerings and financial services are essential workplace technology solutions that drive success for our clients. At Xerox, we make work, work. Learn more about us at www.xerox.com.
We are seeking a highly organized, analytical, and forward-thinking AI Test Lead/Coordinator to manage and execute testing activities for AI-enabled solutions across the enterprise. This role combines leadership in AI test strategy with hands-on coordination of AI testing workflows.
As AI becomes increasingly embedded into business systems, this role is responsible for ensuring that all AI components — machine learning models, predictive analytics, and generative AI features — are tested accurately, safely, and responsibly. The AI Test Lead/Coordinator will define test approaches, prepare and validate datasets, coordinate test cycles, and manage defect triage across multiple teams including Data Science, Engineering, QA, and Business stakeholders.
This is a newly established role designed to bring structure, governance, and quality assurance to AI testing efforts across high-impact projects.
Key Responsibilities
AI Testing Strategy & Governance
Own the end-to-end AI testing strategy, including methodology, standards, and responsible AI guidelines. Define testing coverage for AI systems, including accuracy, robustness, bias, drift, explainability, and data quality. Ensure consistent understanding and adoption of AI testing processes across all participating teams.
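To make the "bias" item in that coverage list concrete, here is a minimal, illustrative check of the kind this strategy might standardize. It is not from the posting; the function name, sample data, and metric choice (demographic parity gap, a common first-pass fairness measure) are assumptions:

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups.

    A gap near 0 means the model flags each group at a similar rate;
    a large gap is a signal to investigate for bias.
    """
    rates = {}
    for pred, grp in zip(predictions, groups):
        n, pos = rates.get(grp, (0, 0))
        rates[grp] = (n + 1, pos + (1 if pred == 1 else 0))
    shares = [pos / n for n, pos in rates.values()]
    return max(shares) - min(shares)

# Hypothetical binary predictions for two cohorts 'a' and 'b'
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b']
print(demographic_parity_gap(preds, groups))  # 0.75 - 0.25 = 0.5
```

In practice a test strategy would pin an agreed threshold for this gap per use case rather than a single global number.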
Test Cycle Management
Plan and execute all AI testing cycles, including environment readiness, test data preparation, scenario creation, and model validation activities. Coordinate schedules and deliverables across Data Science, Engineering, QA, and Business teams to ensure timely completion. Ensure appropriate tools and frameworks are available for AI model testing and monitoring.
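As an illustration of what the "test data preparation" step can involve, a pre-cycle data-quality pass might count missing and out-of-range values per column. This is a hypothetical sketch, not part of the posting; the schema format and helper name are assumptions:

```python
def data_quality_report(rows, schema):
    """Count schema violations per column before a test cycle starts.

    `schema` maps column name -> (min, max) valid bounds;
    None values are counted as missing.
    """
    issues = {col: {"missing": 0, "out_of_range": 0} for col in schema}
    for row in rows:
        for col, (lo, hi) in schema.items():
            val = row.get(col)
            if val is None:
                issues[col]["missing"] += 1
            elif not (lo <= val <= hi):
                issues[col]["out_of_range"] += 1
    return issues

# Hypothetical sample: one missing age, one score above its valid range
rows = [
    {"age": 34, "score": 0.91},
    {"age": None, "score": 0.42},
    {"age": 29, "score": 1.7},
]
report = data_quality_report(rows, {"age": (0, 120), "score": (0.0, 1.0)})
print(report)
```

A report like this gives the coordinator an objective gate for declaring test data "ready" before cycles begin.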
Model & Defect Management
Establish and maintain model defect reporting and AI issue-tracking procedures. Lead AI defect triage sessions across Data Science, Engineering, and business stakeholders. Validate model outputs, data anomalies, drift patterns, and unexpected behavior.
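Drift validation of this kind is often automated with a population stability index (PSI) comparing a baseline sample to live model inputs. The sketch below is illustrative only, not the posting's method; the function name and sample data are assumed, and the 0.1/0.25 thresholds are conventional rules of thumb:

```python
import math
import random

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a live sample.

    Rule of thumb: < 0.1 no significant drift, > 0.25 major drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[-1] += 1e-9  # make the top edge inclusive of the max value

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            for i in range(bins):
                if edges[i] <= x < edges[i + 1]:
                    counts[i] += 1
                    break
        n = len(sample)
        # floor at a tiny value so the log term is defined for empty bins
        return [max(c / n, 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]
same     = [random.gauss(0.0, 1.0) for _ in range(5000)]
shifted  = [random.gauss(0.8, 1.0) for _ in range(5000)]

print(round(population_stability_index(baseline, same), 3))     # low: no drift
print(round(population_stability_index(baseline, shifted), 3))  # high: drift flagged
```

A drift score crossing the agreed threshold would feed the same triage pipeline as any other AI defect.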
UAT & Business Collaboration
Lead UAT support for AI-driven features in close partnership with business SMEs. Ensure readiness, alignment, and clear communication during UAT execution for AI workflows. Translate model behavior and test results into business-friendly insights.
Vendor & Tool Oversight
Review and approve AI vendor testing plans, validation documentation, and timelines. Evaluate AI testing tools and automation solutions for accuracy, explainability, and enterprise suitability.
Reporting & Communication
Track and report on AI testing progress, model quality metrics, risks, and compliance indicators. Provide clear documentation and reporting for AI transparency, audit readiness, and responsible AI governance. Communicate testing outcomes and model risks to project and leadership stakeholders.
Qualifications
Required
Bachelor’s degree in Computer Science, Data Science, IT, Engineering, or related field. 3–5 years of experience in software testing, QA, or data validation roles. Foundational understanding of AI/ML concepts (accuracy, bias, drift, model lifecycle). Experience coordinating testing efforts across multiple teams. Strong analytical, documentation, and problem‑solving skills.
Preferred
Experience with AI/ML platforms (Azure ML, Databricks, SageMaker, MLflow). Exposure to model evaluation, Python scripting, or AI testing tools. Familiarity with responsible AI practices, explainability techniques, or LLM validation. ISTQB, Agile/Scrum certifications, or relevant AI certifications.
Skills
Excellent communication and stakeholder management. Ability to explain complex AI behaviors to non‑technical audiences. Strong coordination skills and ability to manage multiple parallel test efforts. Comfort working under pressure in fast-paced, evolving environments.
Deliverables
AI Test Strategy and Validation Framework
AI Test Plans, Scenarios, and Data Checklists
Model Validation Reports and Explainability Summaries
AI Defect Logs and Resolution/Triage Reports
UAT Support Documentation
Weekly AI Testing Status and KPI Reports