Vantage Point • Richardson, Texas, United States
Role & seniority
Role: Automation Engineer / QA Engineer (specializing in AI/ML test automation)
Stack / tools
GCP (Cloud), CI/CD pipelines
API and UI testing automation (Playwright framework)
Python for test scripts
Automated testing of containerized microservices
AI model evaluation frameworks and automated reporting
Top 3 responsibilities
Design, implement, and execute API test plans for AI models/services (functional, performance, security, contract testing); validate data integrity across endpoints
Develop and maintain automated test scripts (Python) and test automation frameworks; automate testing of GCP-based microservices within CI/CD
Build and validate automated evaluation/testing for AI offerings, including automated reporting/analysis and running tests across multiple language models via APIs
Must-have skills
GCP experience and CI/CD integration
API testing automation (functional, performance, security, contract)
UI testing automation (Playwright)
Python-based test automation
Experience with containerized microservices in cloud environments
Nice-to-haves
GenAI/test automation use cases exposure
Performance testing scripting for API/backend services
Automated reporting and analytics for test results
Collaboration with software engineers and data scientists; ability to communicate technical results to non-technical stakeholders
Location & work type
Location: Richardson, Texas, United States (per the listing header); on-site/remote arrangement not specified
Work type: not specified
Job Description
Key Skills: GCP test automation, CI/CD, API and UI testing automation, experience with the Playwright framework, and exposure to GenAI test automation use cases
What You Will Do
Design, develop, and execute comprehensive API test plans and test cases for AI models and services, covering functional, performance, security, and contract testing. Conduct rigorous testing of new language models (commercial and open source) via their APIs, focusing on accuracy, performance, scalability, and cost-effectiveness. Validate data integrity and consistency across various API endpoints and integrations. Implement and maintain API test automation frameworks and tools.
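As an illustrative sketch of the contract-testing portion of this work (not taken from the posting), a response from an AI service endpoint can be checked field by field against the types the API contract promises. The field names and the mocked response below are hypothetical; a real suite would issue a live HTTP request instead.

```python
# Minimal contract-test sketch: validate a (mocked) AI service response
# against the fields and types the contract promises. All names here are
# invented for illustration.

EXPECTED_CONTRACT = {
    "model": str,
    "completion": str,
    "latency_ms": (int, float),
}

def check_contract(response: dict, contract: dict = EXPECTED_CONTRACT) -> list[str]:
    """Return a list of contract violations (empty means the response conforms)."""
    violations = []
    for field, expected_type in contract.items():
        if field not in response:
            violations.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            violations.append(f"wrong type for {field}: {type(response[field]).__name__}")
    return violations

# Simulated endpoint response, standing in for a real API call.
mock_response = {"model": "example-model", "completion": "hello", "latency_ms": 42.0}
print(check_contract(mock_response))  # → []
```

In practice the same check runs against every endpoint and integration, which is one concrete way to "validate data integrity and consistency" across APIs.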
Automation Development And Execution
Develop and maintain automated test scripts using Python and relevant testing frameworks to maximize test coverage and efficiency. Automate the testing of containerized microservices running in Google Cloud Platform (GCP) using appropriate CI/CD pipelines. Champion automation best practices and drive continuous improvement in automation coverage. Develop and maintain performance testing scripts for API endpoints and backend services.
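A hedged sketch of the performance-scripting piece (the stubbed backend call and the 95th-percentile threshold are assumptions, not details from the posting): timing repeated calls and gating on a percentile latency is a common shape for API performance checks.

```python
import random
import statistics
import time

def timed_call(fn) -> float:
    """Time a single call, returning milliseconds."""
    start = time.perf_counter()
    fn()
    return (time.perf_counter() - start) * 1000.0

def p95(latencies_ms: list[float]) -> float:
    """95th-percentile latency, a common pass/fail gate in perf suites."""
    ordered = sorted(latencies_ms)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

# Stubbed backend call standing in for a real API request.
def fake_api_call():
    time.sleep(random.uniform(0.001, 0.005))

samples = [timed_call(fake_api_call) for _ in range(50)]
print(f"p95 latency: {p95(samples):.2f} ms, mean: {statistics.mean(samples):.2f} ms")
```

Wired into a CI/CD pipeline, a script like this can fail the build when the percentile drifts past an agreed budget.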
AI Model Evaluation Framework Validation
Ensure the evaluation framework provides a robust and reliable environment for testing the latest AI offerings via automated processes. Develop automated tests to validate the functionality and performance of the evaluation framework itself. Implement automated reporting and analysis of test results. Run automated tests against one or multiple language models simultaneously through API interfaces.
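The "run tests against multiple language models simultaneously through API interfaces" requirement can be sketched as a loop over a common client wrapper that aggregates pass rates into a report. The model names, canned answers, and exact-match scoring rule below are all invented for illustration; a real framework would call each vendor's API and use richer scoring.

```python
# Sketch: run the same prompt suite against several models via a common
# API wrapper and aggregate per-model pass rates into a simple report.

def call_model(model: str, prompt: str) -> str:
    """Stand-in for a real API client; returns a canned answer per model."""
    canned = {"model-a": "4", "model-b": "four"}
    return canned.get(model, "")

def evaluate(models: list[str], cases: list[tuple[str, str]]) -> dict[str, float]:
    """Return per-model pass rate over (prompt, expected_answer) cases."""
    report = {}
    for model in models:
        passed = sum(call_model(model, prompt).strip() == expected
                     for prompt, expected in cases)
        report[model] = passed / len(cases)
    return report

cases = [("What is 2 + 2?", "4")]
print(evaluate(["model-a", "model-b"], cases))  # → {'model-a': 1.0, 'model-b': 0.0}
```

The dictionary this returns is the seed of the "automated reporting and analysis" the posting describes; in practice it would be written to a dashboard or CI artifact rather than printed.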
Collaboration And Communication
Work closely with software engineers, data scientists, and other stakeholders to understand requirements and ensure the delivery of high-quality AI solutions. Clearly communicate technical concepts and testing results to both technical and non-technical stakeholders.