
Livspace • Bengaluru, Karnataka, India
Role & seniority: Senior SDET – Automation (6–8 years of automation testing experience)
Languages: Java, Python, JavaScript
Web/API/Mobile automation: Selenium, Playwright, Cypress
API testing: RestAssured, Postman, Karate
Testing frameworks: TestNG, JUnit, PyTest, Mocha; BDD: Cucumber/SpecFlow (nice to have)
Databases: SQL
Version control: Git
CI/CD: Jenkins, GitHub Actions, GitLab CI
Containers/Cloud: Docker; AWS / GCP / Azure
GenAI / AI testing: GenAI concepts, LLMs, prompt engineering, evaluation, AI-assisted test design
Design, develop, and maintain robust automation frameworks for Web, API, and Mobile; build end-to-end automated suites (functional, regression, integration)
Apply GenAI techniques to improve test coverage, data generation, exploratory testing; validate AI/ML features (bias, accuracy, consistency)
Integrate automation with CI/CD, analyze failures, drive defect resolution, collaborate across product, data, DevOps, and ensure release readiness
6–8 years in Automation SDET roles; proven framework development from scratch
Strong coding in Java/Python/JavaScript; hands-on with Selenium/Playwright/Cypress
API automation experience; data pipeline/service validation
AI testing basics: LLMs, prompts, embeddings; familiarity with GenAI testing approaches
Testing frameworks (TestNG/JUnit/PyTest/Mocha); Git; SQL; CI/CD integration
We are seeking an experienced SDET – Automation with 4 to 8 years of hands-on experience building scalable automation solutions and a strong interest or experience in Generative AI–driven testing approaches. The ideal candidate combines solid engineering fundamentals with modern QA practices, including AI-assisted test design, validation, and quality engineering in Agile environments.
Key Responsibilities
- Design, develop, and maintain robust automation frameworks for Web, API, and Mobile applications
- Build and execute end-to-end automated test suites covering functional, regression, and integration scenarios
- Apply GenAI techniques to improve test coverage, test data generation, and exploratory testing
- Validate AI/ML-powered features, including GenAI outputs, for accuracy, relevance, bias, and consistency
- Collaborate with developers, product, data, and DevOps teams to ensure quality across the SDLC
- Automate API and backend testing, including validation of data pipelines and services
- Integrate automation with CI/CD pipelines and ensure reliable test execution
- Analyze failures, identify root causes, and drive defect resolution (using GenAI here is a plus)
- Contribute to test strategy, quality metrics, and release readiness
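To make the "validate GenAI outputs" responsibility concrete: in practice this often reduces to a golden-dataset regression check. Below is a minimal stdlib-only sketch; the summarizer stub, the golden pairs, and the threshold are all illustrative assumptions, not part of the role description.

```python
# Hypothetical sketch: regression-checking a GenAI feature against a small
# "golden" dataset of (input, expected output) pairs.
import difflib

def generate_summary(text: str) -> str:
    # Stub standing in for the real GenAI call under test;
    # assumed deterministic here for illustration.
    return text.split(".")[0].strip() + "."

# Tiny illustrative golden set; real suites curate these from reviewed outputs.
GOLDEN = [
    ("Livspace builds home interiors. It operates in India.",
     "Livspace builds home interiors."),
]

def similarity(a: str, b: str) -> float:
    # Character-level ratio; a production check would use ROUGE or embeddings.
    return difflib.SequenceMatcher(None, a, b).ratio()

def run_golden_checks(threshold: float = 0.8) -> list:
    """Return (input, score) pairs that fell below the similarity threshold."""
    failures = []
    for source, expected in GOLDEN:
        score = similarity(generate_summary(source), expected)
        if score < threshold:
            failures.append((source, score))
    return failures
```

The same harness extends naturally to bias and consistency checks by adding targeted golden pairs and repeat runs.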
Required Skills & Qualifications

Experience
- 6–8 years of experience in Automation Testing / SDET roles
- Proven experience building or enhancing automation frameworks from scratch
Programming & Automation
- Strong coding skills in Java / Python / JavaScript
- Hands-on experience with Selenium / Playwright / Cypress
- Experience with API automation using RestAssured / Postman / Karate
- Strong understanding of OOP, data structures, and design patterns
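As an illustration of the API-automation work listed above, here is a minimal stdlib-only sketch of a response-payload check; the field names and types are assumptions for the example, and a real suite would use RestAssured, Postman, or Karate as listed.

```python
import json

# Hypothetical sketch: assert that a JSON response body carries the fields
# a downstream service depends on. Schema is illustrative, not from the JD.
REQUIRED_FIELDS = {"order_id": int, "status": str, "items": list}

def validate_order_payload(body: str) -> list:
    """Return a list of problems; an empty list means the payload passes."""
    problems = []
    try:
        data = json.loads(body)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            problems.append(f"missing field: {field}")
        elif not isinstance(data[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems
```

Returning a problem list instead of raising lets one test report every schema violation in a single run.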
Generative AI / AI Testing
- Working knowledge of Generative AI concepts (LLMs, prompts, embeddings, hallucinations, evaluation metrics)
- Experience with Claude Code is an advantage
- Experience testing GenAI-enabled features such as chatbots, summarization, recommendation, or content generation
- Hands-on exposure to prompt engineering, prompt versioning, and prompt evaluation
- Experience validating LLM outputs for correctness, relevance, safety, bias, and determinism
- Familiarity with LLM APIs (e.g., OpenAI, Azure OpenAI, or similar)
- Understanding of AI testing strategies, including golden datasets, synthetic data generation, and regression testing for AI models
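One listed skill, validating LLM outputs for determinism, can be sketched as a repeat-and-compare check. The model call below is a stub standing in for a real LLM API (e.g. invoked with temperature set to 0); everything in the block is illustrative.

```python
# Hypothetical sketch of a determinism check: call the model N times with
# the same prompt and flag divergent outputs.
from collections import Counter

def fake_llm(prompt: str) -> str:
    # Stub: a real check would call the provider's API here.
    return prompt.upper()

def check_determinism(prompt: str, runs: int = 5):
    """Return (is_deterministic, output_counts) over `runs` identical calls."""
    outputs = Counter(fake_llm(prompt) for _ in range(runs))
    return len(outputs) == 1, outputs
```

When outputs legitimately vary, the same harness can be relaxed to assert that all variants clear a semantic-similarity threshold rather than exact equality.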
Testing & Tools
- Experience with TestNG / JUnit / PyTest / Mocha
- Knowledge of BDD frameworks (Cucumber / SpecFlow – good to have)
- Database testing using SQL
- Experience with Git and version control systems
DevOps & Environment
- Experience integrating automation with CI/CD tools (Jenkins, GitHub Actions, GitLab CI)
- Exposure to Docker and cloud platforms (AWS / GCP / Azure)
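As a rough illustration of the CI/CD integration mentioned above, a GitHub Actions workflow running a Python test suite on every push might look like the fragment below; the workflow name, file paths, and versions are assumptions, not taken from the job description.

```yaml
# Illustrative workflow only; adapt names, paths, and versions to the project.
name: automation-suite
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt   # assumed dependency file
      - run: pytest tests/ --junitxml=report.xml
```

Publishing the JUnit XML report lets the CI system surface individual failing tests instead of a single red build.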
Good to Have
- Experience using AI-assisted testing tools or test generation frameworks
- Knowledge of model evaluation techniques (BLEU, ROUGE, semantic similarity, human-in-the-loop validation)
- Performance testing experience (JMeter, Gatling)
- Experience testing microservices and distributed systems
- Exposure to security testing for AI applications (prompt injection, data leakage)
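To make the model-evaluation bullet concrete, a toy ROUGE-1-style recall score fits in a few lines of stdlib Python; production evaluation would use a maintained implementation rather than this sketch.

```python
# Hypothetical sketch of ROUGE-1-style unigram recall: the fraction of
# reference tokens that also appear in the candidate text.
def rouge1_recall(reference: str, candidate: str) -> float:
    ref_tokens = reference.lower().split()
    cand_tokens = set(candidate.lower().split())
    if not ref_tokens:
        return 0.0
    hits = sum(1 for tok in ref_tokens if tok in cand_tokens)
    return hits / len(ref_tokens)
```

Scores of this kind are what the "golden datasets" above are measured with: each candidate output is scored against its reference and regressions are flagged when the score drops.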