Gloroots AI • Bengaluru, Karnataka, India
Role & seniority: QA Automation Engineer; mid-senior level (5–10 years QA automation experience)
Stack/tools: Python (PyTest), Playwright (end-to-end web/mobile), LangSmith/LangChain (LLM testing), CI/CD pipelines, API/microservices testing, React Native (JS/TS, automation)
Design and implement QA automation frameworks for AI/ML systems and large-scale platforms
Build automated test suites for end-to-end AI workflows, LLM validation, hallucination detection, and multi-agent coordination
Develop performance/load testing, CI/CD quality gates, and collaboration with AI engineers on model routing/orchestration tests
5–10 years of hands-on QA automation for large-scale distributed systems
Expert Python automation (PyTest) and Playwright for web/mobile
Experience automating CI/CD pipelines and API/microservice testing
AI/ML testing frameworks and validation of ML outputs
JavaScript/TypeScript knowledge for React Native automation
Experience with LangSmith/LangChain or similar LLM testing tools
Background in conversational AI, chatbots, or agent-based systems
Performance testing tools for high-scale consumer apps; AI model evaluation metrics
Experience in fintech/consumer tech/AI-first companies
Location & work type: Mumbai, India; Full-time
Notes: Company focuses on India-scale AI platform serving 100M+ users; compensation described as highly competitive.
Role: QA Automation Engineer
Function: Quality Assurance
Location: Mumbai, India
Type: Full-time
Compensation: highly competitive
Industry: AI/ML, Consumer Technology
About Company
The company is building the AI layer for Bharat at India-scale. Backed by partnerships with global tech leaders like Meta and Google, the team is creating AI that serves the entire Indian user base—across languages, contexts, and daily needs. This is AI designed for real adoption, not experiments.
They bring a rare combination of deep India-first AI capability and unmatched India-scale distribution. The focus is a platform-and-product stack that makes AI useful, reliable, and safe for everyday consumers. It’s engineered from day one for massive scale—100M+ users early and 1B-ready constraints on latency, cost, reliability, and safety.
If you want to be part of a fast-moving, high-ambition team building technology with real-world reach, this is that opportunity. The culture emphasises engineering excellence, strong collaboration, and tangible impact across sectors that matter to India—while building toward a category-defining consumer AI experience.
Position Overview
You'll build comprehensive testing frameworks for a Core Intelligence platform, agent orchestration layer, and consumer-facing app that serves 100M+ users. You'll design automation strategies that ensure quality, reliability, and performance at massive scale. You'll validate everything from product features to LLM outputs to multi-agent workflows across the consumer AI platform.
Role & Responsibilities
Design and implement comprehensive QA automation frameworks for AI/ML systems and large-scale distributed platforms
Build automated testing suites using Playwright and PyTest for end-to-end validation of AI workflows
Develop specialized testing frameworks for LLM validation, hallucination detection, and AI safety guardrails using LangSmith
Create automated test scenarios for multi-agent coordination, task delegation, and partner integrations
Implement CI/CD pipelines with automated quality gates for continuous deployment of AI features
Build performance and load testing frameworks to validate 100M+ user scale constraints
Collaborate with AI engineers to establish testing standards for model routing and orchestration systems
Must Have Criteria
5–10 years of hands-on QA automation experience with large-scale distributed systems
Expert-level proficiency in Python automation frameworks, specifically PyTest
Strong hands-on experience with Playwright for end-to-end web and mobile application testing
Proven experience building and maintaining automated tests integrated into CI/CD pipelines
Experience with AI/ML testing frameworks and validation of machine learning model outputs
Strong knowledge of JavaScript/TypeScript for React Native mobile app automation
Experience testing APIs, microservices, and distributed system architectures
Nice to Have
Experience with LangSmith, LangChain evaluation tools, or similar LLM testing frameworks
Background in testing conversational AI, chatbots, or agent-based systems
Experience with performance testing tools for high-scale consumer applications
Knowledge of AI model evaluation metrics and automated quality assessment
Previous experience in fintech, consumer tech, or AI-first product companies
What We Offer
Opportunity to build QA frameworks for India's largest AI platform serving 100M+ users
Work directly with cutting-edge AI technologies and multi-agent systems
High-ownership environment with direct impact on product quality and user experience
Competitive compensation and equity in a high-growth AI company
Collaborative culture with top-tier AI engineers and product teams