
Maarut Inc • Canada
Role & seniority
QA/Dev Automation Engineer; 3+ years in Software QA (≥2 years in test automation)
Stack/tools
Python (automation/scripting)
Agentic Test Automation Frameworks (e.g., LangChain, multi-agent patterns)
QA/automation tools: pytest, Selenium, Playwright
CI/CD: Jenkins, GitLab CI, GitHub Actions; Git
AI/ML concepts, data pipelines; basic familiarity with cloud/container tech (preferable)
Top 3 responsibilities
Design, build, and maintain an Agentic Test Automation Framework to autonomously generate test scenarios and interact with AI systems
Define and execute AI-specific QA strategy (model drift, data integrity, fairness/bias, performance, edge cases)
Integrate agentic tests into CI/CD, debug/analyze failures with AI Scientists/Engineers, and report key quality metrics (accuracy, latency, robustness)
Must-have skills
3+ years Software QA; ≥2 years in test automation development
Expert Python for automation
Experience with intelligent/agent-based systems or Agentic Test Automation Frameworks
CI/CD and version control (Git)
Foundational ML/AI knowledge; broad testing expertise (functional, non-functional, integration, performance, security)
Nice-to-haves
Experience testing LLMs, Generative AI, or complex decision systems
MCP servers, cloud (AWS/Azure/GCP), and containers (Docker/Kubernetes)
Data science, statistics, or formal verification
Synthetic data generation for testing
Location & work type
Location: Not specified
Work type: Not specified
We are seeking an innovative and experienced QA/Dev Automation Engineer to join our team, focusing on testing cutting-edge Artificial Intelligence (AI) projects. This role is pivotal in ensuring the quality, reliability, and performance of our AI models and applications, specifically utilizing an Agentic Test Automation Framework. The ideal candidate will have a strong background in software testing, deep knowledge of Python or similar languages, and hands-on experience in developing and deploying intelligent, autonomous test agents capable of exploring, simulating, and validating complex AI behaviors.

Key Responsibilities
Design and Develop Agentic Tests: Architect, build, and maintain an Agentic Test Automation Framework capable of autonomously generating test scenarios, interacting with AI systems (e.g., LLMs, predictive models, decision-making systems) as a user or another agent, and reporting outcomes.
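To make the "agentic test" idea concrete, here is a minimal sketch of such a loop in Python. All names are hypothetical, and the system under test is a stand-in for a real AI endpoint; a production framework would generate far richer scenarios (possibly using an LLM) and apply stronger oracles.

```python
import random

def ai_system_under_test(prompt: str) -> str:
    # Hypothetical stand-in for the real AI system (e.g., an LLM endpoint).
    return f"echo: {prompt}"

class TestAgent:
    """A minimal autonomous test agent: generates scenarios, probes the
    system under test, and records pass/fail outcomes."""

    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)  # seeded for reproducible runs
        self.results = []

    def generate_scenario(self) -> str:
        # Toy scenario generation; a real framework might sample from
        # learned distributions or use an LLM to propose inputs.
        templates = ["Summarize: {}", "Translate to French: {}", "{}"]
        payload = self.rng.choice(["hello world", "", "a" * 1000])
        return self.rng.choice(templates).format(payload)

    def run(self, n_scenarios: int = 10):
        for _ in range(n_scenarios):
            prompt = self.generate_scenario()
            response = ai_system_under_test(prompt)
            # Oracle: the system must always return a non-empty string.
            self.results.append({
                "prompt": prompt,
                "response": response,
                "passed": isinstance(response, str) and bool(response),
            })
        return self.results

agent = TestAgent(seed=42)
outcomes = agent.run(5)
print(sum(r["passed"] for r in outcomes), "of", len(outcomes), "scenarios passed")
```

The key design point is that the agent, not a human, decides what inputs to try; the reported outcomes then feed the metrics and triage work described below.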
Test Strategy for AI: Define and implement comprehensive QA strategies tailored for AI systems, focusing on areas like model drift, data integrity, fairness, bias, performance under load, and edge case identification.
Automation Framework Management: Extend and optimize existing automation infrastructure using languages like Python and standard QA tools (e.g., pytest, Selenium, Playwright), integrating specialized AI testing libraries.
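As an illustration of AI-specific checks expressed in the standard tooling named above, here is a small pytest-style sketch. The `model_predict` function and the 100 ms latency budget are assumptions for the example, not details from the posting.

```python
import time

def model_predict(text: str) -> str:
    # Placeholder for the real model inference call (hypothetical).
    return text.strip().lower()

def test_inference_latency():
    # Non-functional check: a single inference stays under an assumed
    # 100 ms budget.
    start = time.perf_counter()
    model_predict("Hello, World!")
    elapsed = time.perf_counter() - start
    assert elapsed < 0.1

def test_inference_is_deterministic():
    # Reproducibility check: identical input must yield identical output.
    assert model_predict("  ABC ") == model_predict("  ABC ")
```

Tests like these slot directly into an existing pytest suite, so AI-specific quality gates run alongside conventional functional tests.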
Continuous Integration/Deployment (CI/CD): Integrate the agentic testing pipeline into the CI/CD process to ensure continuous validation and immediate feedback on every build.
Debugging and Analysis: Investigate, reproduce, and diagnose complex issues found by the test agents, collaborating closely with AI Scientists and Software Engineers to propose solutions.
Metrics and Reporting: Develop and report on key quality metrics specific to AI projects (e.g., accuracy, latency, robustness, coverage of the state space explored by agents).
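A sketch of how such per-run records might be aggregated into report-ready metrics (field names and the metric set are illustrative assumptions):

```python
import statistics

def summarize_quality_metrics(records):
    """Aggregate per-run records (dicts with 'correct' and 'latency_s')
    into the headline metrics a QA report might surface."""
    accuracy = sum(r["correct"] for r in records) / len(records)
    latencies = sorted(r["latency_s"] for r in records)
    return {
        "accuracy": accuracy,
        "latency_p50_s": statistics.median(latencies),
        "latency_max_s": latencies[-1],
        "n_runs": len(records),
    }

runs = [{"correct": True, "latency_s": 0.12},
        {"correct": False, "latency_s": 0.30},
        {"correct": True, "latency_s": 0.18}]
print(summarize_quality_metrics(runs))
```

Tracking these numbers per build makes regressions in accuracy or tail latency visible as soon as the agentic suite runs in CI.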
Requirements

Required Qualifications
Experience: 3+ years of experience in Software QA, with at least 2+ years focused on test automation development.
AI/ML Knowledge: Foundational understanding of Machine Learning (ML) concepts (e.g., model training, inference, data pipelines, common failure modes like overfitting/underfitting).
Testing Methodologies: Proven expertise in various testing types, including functional, non-functional, integration, performance, and security testing.
Problem-Solving: Strong analytical and problem-solving skills with an ability to reverse-engineer complex systems and anticipate AI behavior.
Preferred Qualifications
Experience testing Large Language Models (LLMs), Generative AI, or complex decision-making systems.
Experience with using/building MCP servers.
Familiarity with cloud platforms (AWS, Azure, or GCP) and containerization technologies (Docker, Kubernetes).
Background in data science, statistics, or formal verification.
Experience with synthetic data generation for testing purposes.