
Precision Technologies • Dallas, Texas, United States
Role & seniority: Hands-on AI Engineer, senior-level QA/Testing lead for agentic/multi-agent AI systems; leads QA from Dev to Prod.
Languages: Python, TypeScript/JavaScript
Test artifacts: test harnesses, simulators, fixtures, scenario configs, prompt libraries
AI/testing: LLM evaluation (exact/soft match, BLEU/ROUGE, embedding-based similarity), guardrails, prompt testing; agent orchestration tools (LangChain, LangGraph, LlamaIndex, DSPy, OpenAI/Azure orchestration)
CI/CD & observability: GitHub Actions, Azure DevOps; OpenTelemetry, Prometheus/Grafana, Datadog; feature flags/canaries
Architectures: distributed systems, multi-agent workflows, orchestration/queues, chaos engineering, resilience patterns, load/stress testing
Data/ops: MLOps concepts, datasets, versioning, incident response, security/compliance (PII, policy)
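The LLM-evaluation styles listed above (exact match, soft match, embedding-based similarity) can be illustrated with a minimal sketch. This is not any particular framework's API; the bag-of-words cosine below is a deliberately simple stand-in for real embedding similarity, and the 0.7 threshold is an arbitrary example value.

```python
import math
import re
from collections import Counter

def exact_match(candidate: str, reference: str) -> bool:
    """Strict equality after whitespace/case normalization."""
    norm = lambda s: re.sub(r"\s+", " ", s.strip().lower())
    return norm(candidate) == norm(reference)

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine -- a stand-in for embedding similarity."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def soft_match(candidate: str, reference: str, threshold: float = 0.7) -> bool:
    """Pass when similarity clears a tunable threshold."""
    return cosine_similarity(candidate, reference) >= threshold
```

In practice the cosine over word counts would be replaced by cosine over sentence embeddings, and BLEU/ROUGE by a library implementation; the pass/fail thresholding pattern stays the same.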
Define and own the QA strategy for agentic/multi-agent AI across dev, staging, prod; establish testing standards and review practices; embed QA in SDLC and incident response.
Design and implement macro validations for complex multi-step agent workflows, state management, orchestration correctness, resilience, latency, and reliability testing (including fuzzing, chaos, canaries, and automated rollbacks).
Mentor QA engineers, build reusable test artifacts, and partner with Ops/Data/ML/Platform teams to ensure production readiness and post-deployment validation playbooks.
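The reusable-artifact idea above (scenario configs, fixtures, simulators feeding macro validations) can be sketched as follows. All names here are hypothetical: `Scenario` is an invented config shape, and `stub_agent` is a simulator stand-in for a real agent workflow.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """Reusable scenario config: task input, expected end state, step budget."""
    name: str
    task: str
    expected_keys: set
    max_steps: int = 5

def run_workflow(task: str, agent_step, max_steps: int) -> dict:
    """Drive a step function until it reports completion or the budget runs out."""
    state = {"task": task, "done": False}
    for _ in range(max_steps):
        state = agent_step(state)
        if state.get("done"):
            break
    return state

def validate(scenario: Scenario, agent_step) -> bool:
    """Macro check: was the end-to-end outcome produced within the step budget?"""
    state = run_workflow(scenario.task, agent_step, scenario.max_steps)
    return state.get("done", False) and scenario.expected_keys <= state.keys()

def stub_agent(state: dict) -> dict:
    """Simulator stand-in for a real agent; completes the task in one step."""
    state["summary"] = f"handled: {state['task']}"
    state["done"] = True
    return state
```

The point of the pattern is that scenarios are data, not code, so the same validator runs against a stub in CI and a live agent graph in staging.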
We are seeking a hands-on AI Engineer to design and execute end-to-end testing strategies for agentic AI solutions, including multi-agent systems in production-grade environments. This role partners with the Agentic Operations Team to ensure resiliency, reliability, accuracy, low latency, orchestration correctness, and scale. You will establish QA frameworks, build reusable test artifacts, drive macro-level validations across complex workflows, and lead the QA function for Agentic AI from Dev to Prod.

Key Responsibilities

Quality Strategy & Leadership
- Define and own the QA strategy for agentic/multi-agent AI systems across dev, staging, and prod.
- Mentor a team of QA engineers; establish testing standards, coding guidelines for test harnesses, and review practices.
- Partner with Agentic Operations, Data Science, MLOps, and Platform teams to embed QA in the SDLC and incident response.

Agentic & Multi-Agent Testing
- Design tests for agent orchestration, tool calling, planner-executor loops, and inter-agent coordination (e.g., task decomposition, handoff integrity, and convergence to goals).
- Validate state management, context windows, memory/knowledge stores, and prompt/graph correctness under varying conditions.
- Implement scenario fuzzing (e.g., adversarial inputs, prompt perturbations, tool latency spikes, degraded APIs).
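Scenario fuzzing via prompt perturbation, mentioned above, can be sketched in a few lines. The perturbation menu (case flips, whitespace noise, adjacent-word swaps) and the `agent` callable are illustrative assumptions, not a specific tool's API; a real harness would also perturb tool latencies and API availability.

```python
import random

def perturb(prompt: str, rng: random.Random) -> str:
    """Apply one random perturbation: case flip, whitespace noise, or word swap."""
    words = prompt.split()
    choice = rng.randrange(3)
    if choice == 0:                      # flip the case of one word
        i = rng.randrange(len(words))
        words[i] = words[i].swapcase()
    elif choice == 1:                    # inject stray whitespace
        i = rng.randrange(len(words))
        words[i] = " " + words[i]
    elif len(words) > 1:                 # swap two adjacent words
        i = rng.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

def fuzz_prompt(prompt: str, agent, n: int = 50, seed: int = 0) -> list:
    """Return perturbed prompts on which the agent's output diverged from baseline."""
    rng = random.Random(seed)
    baseline = agent(prompt)
    return [v for v in (perturb(prompt, rng) for _ in range(n))
            if agent(v) != baseline]
```

A seeded RNG keeps failures reproducible, which matters once fuzz findings feed a regression suite.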
Reliability, Resiliency, and Latency
- Create resilience testing suites: chaos experiments, failover, retries/backoff, circuit breaking, and degraded-mode behavior.
- Establish latency SLOs and measure end-to-end response times across orchestration layers (LLM calls, tool invocations, queues).
- Ensure reliability through soak tests, canary verifications, and automated rollbacks.

Accuracy & Macro-Level Validations
- Define ground-truth and reference pipelines for task accuracy (exact match, semantic similarity, factuality checks).
- Build macro validation frameworks that validate task outcomes across multi-step agent workflows (e.g., complex data pipelines, content generation + verification agent loops).
- Instrument guardrail validations (toxicity, PII, hallucination, policy compliance).

Scale & Orchestration
- Design load/stress tests for multi-agent graphs under scale (concurrency, throughput, queue depth, backpressure).
- Validate orchestrator correctness (DAG execution, retries, branching, timeouts, compensation paths).
- Engineer reusable test artifacts (scenario configs, synthetic datasets, prompt libraries, agent graph fixtures, simulators).

Dev → Prod Readiness
- Integrate tests into CI/CD (pre-merge gates, nightly, canary) and production monitoring with alerting tied to KPIs.
- Define release criteria and run operational readiness reviews (performance, security, compliance, cost/latency budgets).
- Build post-deployment validation playbooks and incident triage runbooks.

Required Qualifications
- 7+ years in Software QA/Testing, with 2+ years in AI/ML or LLM-based systems; hands-on experience testing agentic/multi-agent architectures.
- Strong programming skills in Python or TypeScript/JavaScript; experience building test harnesses, simulators, and fixtures.
- Experience with LLM evaluation (exact/soft match, BLEU/ROUGE, BERTScore, semantic similarity via embeddings), guardrails, and prompt testing.
- Expertise in distributed systems testing: latency profiling, resiliency patterns (circuit breakers, retries), chaos engineering, and message queues.
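The retries/backoff resiliency pattern named above is small enough to sketch directly. This is a generic illustration, not a specific team's implementation; the delay constants are example values, and injectable `sleep`/`rng` parameters are an assumption made so the behavior is testable without real waiting.

```python
import random
import time

def retry_with_backoff(fn, max_attempts=4, base_delay=0.05, jitter=0.01,
                       sleep=time.sleep, rng=random.random):
    """Call fn(), retrying transient failures with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # budget exhausted: surface the real error
            # delays grow 0.05s, 0.1s, 0.2s, ... plus up to `jitter` s of noise
            sleep(base_delay * (2 ** attempt) + rng() * jitter)
```

Jitter matters under load: without it, many agents hitting the same degraded tool retry in lockstep and amplify the spike, which is exactly the kind of behavior a chaos suite should provoke.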
- Familiarity with orchestration frameworks (LangChain, LangGraph, LlamaIndex, DSPy, OpenAI Assistants/Actions, Azure OpenAI orchestration, or similar).
- Proficiency with CI/CD (GitHub Actions/Azure DevOps), observability (OpenTelemetry, Prometheus/Grafana, Datadog), and feature flags/canaries.
- Solid understanding of privacy/security/compliance in AI systems (PII handling, content policies, model safety).
- Excellent communication and leadership skills; proven ability to work cross-functionally with Ops, Data, and Engineering.

Preferred Qualifications
- Experience with multi-agent simulators, agent graph testing, and tool latency emulation.
- Knowledge of MLOps (model versioning, datasets, evaluation pipelines) and A/B experimentation for LLMs.
- Background in cloud (AWS), serverless, containerization, and event-driven architectures.
- Prior ownership of cost/latency/SLAs for AI workloads in production.