
AI Quality Engineer (Remote)
Hire Feed • United Kingdom
Salary: $30 - $50 / hour
**Role & seniority: ** AI Quality Engineer (remote); mid-level—hands-on technical evaluation/troubleshooting of autonomous AI agents and LLM workflows.
**Stack/tools: **
-
Languages: Python, JavaScript, Go, or Java (at least two preferred)
-
SQL databases
-
Agent/LLM evaluation: rubric writing, trace debugging, edge-case testing
-
Preferred integrations/APIs: Supabase, Gmail, other APIs
-
Top 3 responsibilities:
-
Evaluate AI agents: write objective pass/fail rubrics; debug agent traces; identify failure patterns
-
Stress test systems: test edge cases including prompt injection and tool misuse
-
Provide technical assessment/feedback: analyze modular production architectures; give high-density feedback to improve LLM training
-
-
Must-have skills:
-
Backend/automation/complex integration experience
-
Ability to build/maintain production-grade modular software (separation of concerns)
-
Strong in 2+ major languages + SQL
-
Experience working in live/non-mocked environments; handle multi-turn interactions
-
-
Nice-to-haves:
-
Integrating agents with live tools/APIs (e.g., Supabase, Gmail)
-
Familiarity with persistent state/session tracking
-
Security awareness: privacy leaks, authority escalation, indirect prompt injection
-
-
Location & work type: Fully remote (United States, Canada, United Kingdom, Australia). Flexible task assignment
Full Description
Job Title: AI Quality Engineer (Remote)
Location: Remote (United States, Canada, United Kingdom, Australia)
Work Mode: Fully Remote
Role Overview Help design and evaluate autonomous AI agents across multiple LLMs, spanning health, education, daily life, and other real-world domains (all coding work). Shape the future of agentic AI systems by providing expert human feedback to leading AI organisations. Help train Large Language Models (LLMs) for complex, multi-step architectural workflows.
Key Responsibilities AI Agent Evaluation Write evaluation rubrics with objective pass/fail criteria Debug agent traces to identify failure patterns Stress test agents against edge cases, prompt injection, and tool misuse Technical Assessment Assess production-grade modular software architecture Analyse multi-turn system interactions and behaviours Provide high-density technical feedback for LLM training Project Workflow Create an account and upload a resume/ID Complete the onboarding assessment Start earning through flexible task assignments
Qualifications Experience in backend engineering, AI automation, or complex systems integration Proven ability to build and maintain production-grade software with modular separation (e.g., distinct services for data parsing, logic processing, and reporting) Strong command of at least two major languages (e.g., Python, JavaScript, Go, or Java) and experience working with SQL databases Practical experience building for live, non-mocked environments and handling multi-turn system interactions
Preferred (Nice to Have) Experience integrating agents with live tools such as Supabase, Gmail, and other APIs Familiarity with persistent state and session-tracking patterns Experience identifying privacy leaks, authority escalation, or indirect prompt injection vulnerabilities
Compensation Hourly compensation ranges from USD $30–$50, depending on experience and task complexity Payments are issued weekly via supported payout platforms (e.g., PayPal or AirTM) Full compensation details are provided prior to task acceptance
Equal Opportunity Statement Selection decisions are based solely on skills, qualifications, and project requirements. We are committed to inclusive and fair engagement practices and consider all qualified applicants without regard to legally protected characteristics.
Apply Now!