
Gloroots AI • Bengaluru, Karnataka, India
Role & seniority: QA Automation Engineer, mid-senior (5–9 years in QA automation)
Stack/tools: Python; PyTest; Playwright; ML model validation/testing; LLM/conversational AI testing; distributed AI platforms
Lead design/implementation of automated testing frameworks for large-scale distributed AI platforms
Develop comprehensive test suites for ML model validation and performance testing
Build end-to-end automation pipelines (Playwright) for AI application UI; design/execute testing for LLM/conversational AI; mentor junior QA engineers
5–9 years QA automation experience on large-scale distributed platforms (>100M users)
Testing and validation of ML systems
Python for test automation and ML validation
Experience with PyTest
Hands-on testing of LLM applications and conversational AI
Playwright for end-to-end automation
LLM evaluation frameworks and model performance metrics
Performance testing for high-throughput AI apps
Cloud-native testing strategies for AI workloads
Experience with containers/Kubernetes
Location & work type: Bangalore or Mumbai; Full-time
Role: QA Automation Engineer
Function: Quality Assurance
Location: Bangalore or Mumbai
Type: Full-time
Industry: Artificial Intelligence, Technology
About Company
The company is building the AI layer for Bharat at India-scale. Backed by partnerships with global tech leaders like Meta and Google, the team is creating AI that serves the entire Indian user base—across languages, contexts, and daily needs. This is AI designed for real adoption, not experiments.
They bring a rare combination of deep India-first AI capability and unmatched India-scale distribution. The focus is a platform-and-product stack that makes AI useful, reliable, and safe for everyday consumers. It’s engineered from day one for massive scale—100M+ users early and 1B-ready constraints on latency, cost, reliability, and safety.
If you want to be part of a fast-moving, high-ambition team building technology with real-world reach, this is that opportunity. The culture emphasises engineering excellence, strong collaboration, and tangible impact across sectors that matter to India—while building toward a category-defining consumer AI experience.
Position Overview
You'll lead quality assurance for large-scale AI systems and machine learning platforms serving millions of users across India. You'll design comprehensive testing strategies for distributed AI applications that power real-time language processing and enterprise solutions. This role offers direct mentorship opportunities and collaboration with global tech leaders on projects that transform entire industries.
Role & Responsibilities
Lead design and implementation of automated testing frameworks for large-scale distributed AI platforms
Develop comprehensive test suites using PyTest for ML model validation and performance testing
Build end-to-end automation pipelines using Playwright for AI application user interface testing
Design and execute comprehensive testing strategies for LLM applications and conversational AI systems
Mentor junior QA engineers and establish testing best practices across AI development teams
Design and implement AI-based validation systems for continuous model quality monitoring
Collaborate with ML engineers to define testing strategies for complex distributed system behaviors
Must Have Criteria
5–9 years of QA automation experience with large-scale distributed platforms serving 100M+ users
Mandatory experience in ML testing and validation of machine learning systems
Strong proficiency in Python for test automation and ML model validation
Hands-on experience testing LLM applications and large language model systems
Experience with the PyTest framework for comprehensive test automation
Hands-on experience with Playwright for end-to-end testing automation
Nice to Have
Experience with LLM evaluation frameworks and model performance metrics
Background in performance testing for high-throughput AI applications
Knowledge of cloud-native testing strategies for AI workloads
Experience with containerized testing environments and Kubernetes
What We Offer
Opportunity to work on cutting-edge AI technology at massive scale
Leadership role shaping QA practices for India's largest AI initiative
Collaboration with global tech leaders and industry experts
Comprehensive benefits package and competitive compensation
Professional development opportunities in AI and machine learning