
Meril • Gujarat, India
Role & seniority
Stack/tools
Frontend: Next.js, React (SSR, hydration, hooks)
Backend/API: REST/GraphQL, Node.js/Python logic; API testing (Postman, curl)
AI/LLM: OpenAI API, Ollama, local model orchestration; RAG pipelines; vector stores (Pinecone, FAISS/Weaviate)
Testing: Concurrency, race conditions, system-level failures
Test tooling: Jira, Confluence; Playwright, Cypress, PyTest; CI/CD familiarity; Docker (nice-to-have)
Data: Databases, vector and relational schemas; logging/monitoring
Top 3 responsibilities
Full-Stack & Architecture Testing: deep functional/integration testing of Next.js frontend; validate API contracts, auth flows, multi-user scenarios; ensure data integrity across vector and relational stores
AI & LLM Module Validation: verify RAG relevance, vector search accuracy, autonomy of multi-step LLM agents, hallucination detection, and model evaluation for patent-domain accuracy
Quality Ownership & Engineering: code reviews for testability, scalable test design for complex multi-user workflows, production readiness focusing on logging, monitoring, failover, and real-world failure analysis
Must-have skills
3–6 years QA experience on full-stack web apps
Deep Next.js/React expertise; SSR/CSR, hooks, DevTools
API testing proficiency; Node.js/Python logic understanding
Hands-on AI/LLM experience: OpenAI, Ollama, or local orchestration
Vector tech and RAG pipelines; familiarity with Pinecone/FAISS or similar
Next.js Frontend: Perform deep functional and integration testing. Analyze components, hooks, and state management to identify SSR/CSR edge cases and performance bottlenecks.
Backend & API: Validate REST/GraphQL API contracts, payload integrity, and authentication flows. Perform multi-user concurrent testing to identify race conditions.
Database Integrity: Test CRUD operations, transactions, and rollbacks. Ensure data consistency across vector databases (Pinecone/FAISS) and relational schemas.
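The multi-user concurrent testing described above can be sketched against a toy in-memory service. Everything here (`InventoryService`, `reserve_safe`, the stock numbers) is a hypothetical stand-in for a real backend, not part of the actual product:

```python
import threading

# Toy "backend" with the classic read-modify-write race that
# multi-user concurrent testing is meant to surface.
class InventoryService:
    def __init__(self, stock):
        self.stock = stock
        self._lock = threading.Lock()

    def reserve_unsafe(self):
        # Read and write are separate steps: two users can both read
        # the same value and lose an update (may or may not manifest
        # on any given run, which is exactly why it is hard to catch).
        current = self.stock
        if current > 0:
            self.stock = current - 1
            return True
        return False

    def reserve_safe(self):
        # Locked read-modify-write: the invariant must always hold.
        with self._lock:
            if self.stock > 0:
                self.stock -= 1
                return True
            return False

def hammer(service, method, users=50, attempts=100):
    """Simulate many concurrent users; return total successful reserves."""
    successes = []
    def worker():
        successes.append(sum(1 for _ in range(attempts) if method(service)))
    threads = [threading.Thread(target=worker) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(successes)

safe = InventoryService(stock=1000)
sold = hammer(safe, InventoryService.reserve_safe)
# Invariant: exactly the available stock is sold, never more, never less.
assert sold == 1000 and safe.stock == 0
```

The same pattern applies at the API layer: fire overlapping requests at one resource and assert the invariant on the final state rather than on any single response.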
Patent Search & RAG: Validate relevancy ranking, vector search accuracy, and the quality of retrieved context.
Agent Workflows: Test LLM-powered multi-step agents for autonomy behaviors, "looping" issues, and edge-case handling.
Model Evaluation: Evaluate outputs for hallucinations, factual accuracy (specifically for patent law), and consistency using tools like OpenAI/Ollama.
Fine-Tuning Pipelines: Validate datasets and monitor training runs to benchmark model performance.
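A minimal sketch of the grounding/hallucination checks above, using naive token overlap as a crude stand-in for evaluation frameworks like RAGAS or DeepEval; the patent snippets and function names are invented for illustration:

```python
import re

def token_set(text):
    # Lowercased word tokens; deliberately simplistic.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def grounding_score(answer_sentence, contexts):
    """Best fraction of the sentence's tokens found in any retrieved chunk.
    A rough proxy for 'is this claim supported by the retrieved context'."""
    tokens = token_set(answer_sentence)
    if not tokens:
        return 1.0
    return max(len(tokens & token_set(c)) / len(tokens) for c in contexts)

# Toy retrieved context (made-up patent data).
contexts = [
    "US7654321 claims a battery housing with a vented lid.",
    "The patent was filed in 2004 and granted in 2008.",
]
supported = grounding_score("The patent claims a battery housing.", contexts)
fabricated = grounding_score("The inventor later sued Tesla.", contexts)

# A grounded claim should score well above an ungrounded one.
assert supported > fabricated
```

Real evaluation would use semantic similarity or an LLM judge rather than token overlap, but the test shape is the same: score each answer claim against the retrieved context and flag low-scoring claims as potential hallucinations.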
Code Review: Review frontend and backend code from a testability perspective, identifying anti-patterns and suggesting better error handling.
Test Design: Write scalable, reusable test cases for complex multi-user workflows.
Production Readiness: Validate logging, monitoring, and failover recovery. Analyze real-world failure scenarios and production bugs.
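The "scalable, reusable test cases for complex multi-user workflows" above can be sketched as a data-driven scenario table: one runner, many declaratively specified flows. All names here (`Scenario`, `apply_action`, the login/logout actions) are hypothetical stand-ins for real API calls:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    steps: list   # ordered (user, action) pairs
    expect: dict  # expected final state

def run_scenario(scenario, apply_action):
    """Replay a multi-user workflow step by step and return final state."""
    state = {}
    for user, action in scenario.steps:
        apply_action(state, user, action)
    return state

# Toy action handler standing in for real API calls.
def apply_action(state, user, action):
    if action == "login":
        state.setdefault("online", set()).add(user)
    elif action == "logout":
        state.get("online", set()).discard(user)

scenario = Scenario(
    name="two users, one logs out",
    steps=[("alice", "login"), ("bob", "login"), ("alice", "logout")],
    expect={"online": {"bob"}},
)
result = run_scenario(scenario, apply_action)
assert result == scenario.expect
```

New workflows become new `Scenario` rows instead of new test code, which is what makes the suite scale; the same table plugs directly into PyTest via `@pytest.mark.parametrize`.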
Experience: 3–6 years in QA Engineering, with significant experience in Full-Stack web applications.
Frontend Mastery: Deep understanding of Next.js/React (SSR, hydration, client-side hooks) and Browser DevTools.
Backend & API: Expert at testing APIs (Postman, curl) and understanding Node.js/Python logic.
LLMs: OpenAI API, Ollama, or local model orchestration.
Vector Tech: RAG pipelines and vector databases (Pinecone, Weaviate, etc.).
Prompt Engineering: Ability to identify issues with prompts and agentic logic.
Testing Mindset: Proven ability to test for concurrency, race conditions, and system-level failures.
Tools: Proficiency in Jira/TestRail and exposure to automation frameworks like Playwright, Cypress, or PyTest. JIRA + Confluence exposure is a must.
Nice-to-Have Skills
Familiarity with the Intellectual Property (IP) / Patent domain.
Experience with Docker, CI/CD pipelines, and cloud platforms (AWS/GCP).
Experience with LLM evaluation frameworks (e.g., RAGAS, DeepEval).
Performance/Load testing exposure using tools like k6 or Locust.
What We Expect From You
You are a System Breaker: You don't just test features; you look for ways the system might fail under stress.
You Think Like a Developer: You can read code to understand where the bugs are likely hiding.
You are a Quality Advocate: You are comfortable challenging implementations when quality or user experience is at risk.
You are AI-Curious: You stay updated on the latest in LLMs and agentic frameworks.
What We Offer
Opportunity to work at the intersection of Generative AI and LegalTech.
A highly technical environment where QA is treated as an engineering discipline.
Freedom to explore and implement new testing methodologies for AI.