
Nimbus • Seattle, Washington, United States
Salary: $70,000 - $110,000 / year
Role & seniority: QA Manager (management/lead level)
Stack/tools
AI/ML testing for LLM/conversational systems
APIs, JSON
Test automation frameworks
Monitoring systems
Collaboration with prompt engineers, product, and engineering
Top 3 responsibilities
Define and operationalize QA strategy and frameworks for conversational agents, workflow automations, and partner models
Lead test planning and execution: design test cases, evaluation rubrics, regression suites, and automated pipelines for agent behavior
Establish quality metrics and monitoring to track accuracy, consistency, guardrails, and performance over time
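The regression-suite responsibility above can be illustrated with a minimal sketch: score each agent reply against a rubric of pass/fail checks. The `agent_reply` stub, the rubric items, and the function names are hypothetical placeholders for illustration, not part of Nimbus's actual stack.

```python
# Minimal sketch of a rubric-based regression check for a conversational
# agent. agent_reply() is a deterministic stub standing in for the real
# system under test; the RUBRIC items are illustrative examples only.

def agent_reply(prompt: str) -> str:
    """Stub: in a real suite this would call the deployed agent."""
    return "You can reset your password from the account settings page."

# Each rubric item is a (description, predicate) pair scored 0 or 1.
RUBRIC = [
    ("mentions password reset", lambda r: "reset your password" in r.lower()),
    ("points to settings", lambda r: "settings" in r.lower()),
    ("does not refuse", lambda r: "cannot help" not in r.lower()),
]

def rubric_score(prompt: str) -> float:
    """Fraction of rubric checks the agent's reply passes."""
    reply = agent_reply(prompt)
    passed = sum(1 for _, check in RUBRIC if check(reply))
    return passed / len(RUBRIC)

print(rubric_score("How do I reset my password?"))  # 1.0 for this stub
```

Scores like this can feed straight into a regression suite: rerun the rubric after every prompt change and fail the build if any score drops below its recorded baseline.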
Must-have skills
5+ years QA experience; at least 2 years in management/lead role
Experience testing AI/ML products, LLMs, or conversational systems; understands non-deterministic behavior
Strong analytical skills; comfortable evaluating agent outputs, spotting patterns in failures, and defining measurable quality standards
Ability to build testing frameworks from scratch (test case libraries, criteria, automation)
Experience with APIs, JSON, test automation, monitoring tools
Excellent written and verbal communication; able to document bugs, write test plans, and explain issues to technical and non-technical stakeholders
Leadership experience building/managing QA teams, processes, and culture
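Because the posting stresses testing non-deterministic behavior, one common pattern, sketched here with a deterministic stub and hypothetical names, is to sample the agent repeatedly and gate a release on the aggregate pass rate rather than on any single run.

```python
# Sketch of pass-rate gating for a non-deterministic agent. A single
# run can pass or fail by chance, so the suite samples n trials and
# checks the aggregate. sample_agent() is a stub that deterministically
# "fails" every tenth trial to mimic occasional bad outputs.

def sample_agent(prompt: str, trial: int) -> str:
    """Stub: a real harness would call the live agent per trial."""
    return "hallucination" if trial % 10 == 9 else "valid answer"

def pass_rate(prompt: str, n: int = 100) -> float:
    """Fraction of n sampled replies that pass the check."""
    hits = sum(sample_agent(prompt, i) == "valid answer" for i in range(n))
    return hits / n

rate = pass_rate("Summarize this ticket")
print(rate)          # 0.9 with the stub above
print(rate >= 0.85)  # example release gate: require an 85% pass rate
```

The threshold (85% here) is an illustrative choice; in practice it would be set per use case from the quality standards the role defines.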
Nice-to-haves
Experience scaling QA for agentic AI at pace
Familiarity with prompt engineering and model evaluation
Prior experience hiring/mentoring QA staff
Location: Seattle (in-person)
Salary: $70,000–$110,000 depending on experience
About Nimbus AI
Nimbus AI builds the fastest way for companies to create, train, and resell branded conversational and workflow agents. Our platform automates data capture, optimization, and deployment so teams can transform conversations and workflows into continuously improving, revenue-generating AI products.

Role Overview
We're hiring a QA Manager to build and lead the quality assurance function for Nimbus's agentic AI systems. You'll establish testing frameworks, develop evaluation criteria, and ensure our conversational agents and workflow automations perform reliably across all customer deployments. You'll work cross-functionally with product, engineering, and customer teams to catch edge cases, validate model behavior, and maintain the quality standards that make Nimbus agents trustworthy at scale. This role is perfect for someone who loves building QA processes from the ground up, has a sharp eye for AI-specific failure modes, and can translate ambiguous agent behaviors into concrete test cases and quality metrics.

What You'll Own
QA strategy & framework development for conversational agents, workflow automations, and partner-specific models across multiple verticals
Test planning and execution: designing test cases, evaluation rubrics, regression suites, and automated testing pipelines for agent behavior
Quality metrics and monitoring to track agent accuracy, consistency, guardrail effectiveness, and performance degradation over time
Cross-functional collaboration with prompt engineers, product, and engineering teams to identify, document, and resolve quality issues
Agent validation processes to ensure new releases, prompt changes, and training updates maintain reliability standards
Team building and leadership as we scale: hiring, mentoring, and growing the QA function

What You Bring
5+ years of QA experience, with at least 2 years in a management or lead role
Experience testing AI/ML products, LLM applications, or conversational systems; you understand non-deterministic behavior and how to test it
Strong analytical skills: comfortable evaluating agent outputs, identifying patterns in failures, and defining measurable quality standards
Ability to build testing frameworks from scratch, including test case libraries, evaluation criteria, and automation strategies
Experience with technical testing tools (APIs, JSON, test automation frameworks, monitoring systems)
Excellent communication skills: you can clearly document bugs, write test plans, and explain quality issues to technical and non-technical stakeholders
Leadership experience building or managing QA teams, processes, and culture

Why Join Nimbus
Build the QA function from the ground up and define how quality works for agentic AI at scale
Be part of a small, fast team where your quality standards will directly impact hundreds of deployed agents
Work with cutting-edge LLMs and agentic systems: testing challenges that don't exist anywhere else yet
Grow into a senior leadership role as our platform, customer base, and team expand