Princeton IT Services, Inc • Canada
Role & seniority
AI QE Engineer · 4–6 years (Software Engineer, SDET, or Automation Engineer background)
Stack/tools
Languages: Python, TypeScript, Java
Automation: PyTest, Selenium, Playwright, API testing libraries
AI/ML: LangChain, Hugging Face, GPT models, vector databases, RAG pipelines; Scikit-learn, PyTorch, TensorFlow, Keras, Transformers, OpenCV
AI workflows/tools: LangGraph, AutoGen, CrewAI; embeddings, similarity search, content evaluation metrics
Cloud/DevOps: AWS; CI/CD, model validation steps, artifact/version management
Testing: RESTful APIs, automated API suites
Top 3 responsibilities
Build and maintain automated test suites validating AI outputs, reliability, and regression across next-gen AI features
Validate GenAI/RAG pipelines, prompt engineering results, hallucination detection, output quality checks, and model updates
Contribute to quality strategy, SDLC processes, defect tracking, and cross-team collaboration with Developers, Data Scientists, and QE
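To give a concrete flavor of the first two responsibilities, here is a minimal PyTest-style sketch of automated checks on an LLM answer. Everything in it is illustrative: `get_model_answer` stands in for a real LLM call, and the naive word-overlap grounding check is a toy stand-in for real hallucination-detection tooling.

```python
# Hypothetical sketch of AI-output validation tests; not tied to any
# specific framework named in this posting.

def get_model_answer(question: str) -> str:
    """Stand-in for a real LLM call; returns a canned answer here."""
    return "Paris is the capital of France."

def is_grounded(answer: str, source_text: str) -> bool:
    """Naive hallucination check: every content word in the answer
    must also appear in the retrieved source text."""
    stop_words = {"is", "the", "of", "a", "an"}
    words = {w.strip(".,").lower() for w in answer.split()} - stop_words
    source = source_text.lower()
    return all(w in source for w in words)

def test_answer_is_grounded():
    source = "France is a country in Europe. Its capital is Paris."
    answer = get_model_answer("What is the capital of France?")
    assert is_grounded(answer, source)

def test_answer_not_empty():
    assert get_model_answer("What is the capital of France?").strip()
```

In a real suite, checks like these run in CI on every prompt or model update, so regressions in output quality surface before release.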
Must-have skills
Strong automation and programming background (Python/TypeScript/Java)
Hands-on experience with AI/ML frameworks and validating LLM/GenAI outputs
Experience designing/executing automated tests (functional, integration, API) and API testing
Familiarity with CI/CD, AWS, and SDLC collaboration; defect tracking and triage
Nice-to-haves
Experience with agentic AI systems and tools (LangGraph, AutoGen, CrewAI)
MCP/context-aware workflows; embeddings and similarity search
Job Title: AI QE Engineer
Location: Canada
Role Summary
We are looking for an AI Quality Engineer with strong automation expertise and hands-on experience validating LLMs, GenAI workflows, and AI-driven applications. This role involves building automated test suites, validating AI outputs, ensuring system reliability, and contributing to the quality strategy for next-generation AI features.
Core Experience
4–6 years as a Software Engineer, SDET, or Automation Engineer
Strong coding skills in Python, TypeScript, or Java
Hands-on experience developing automation scripts, tools, or frameworks
Practical experience using LLMs, prompt engineering, and evaluating AI-generated outputs
Familiarity with agentic AI systems and exposure to tools like LangGraph, AutoGen, and CrewAI
Basic understanding of Model Context Protocol (MCP) or context-aware workflow automation (nice to have)
AI / ML Technologies
Practical experience with AI/ML frameworks: LangChain, Hugging Face, GPT models, vector databases, RAG pipelines.
Experience with ML/DL libraries such as Scikit-learn, PyTorch, TensorFlow, Keras, Transformers, and OpenCV. Ability to work with embeddings, similarity search, and content evaluation metrics.
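As a minimal illustration of the embedding and similarity-search skills listed above, the sketch below computes cosine similarity in plain Python and picks the best-matching document. The 3-dimensional "embeddings" are toy values; in practice they would come from an embedding model and be stored in a vector database.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_match(query: list[float], corpus: dict[str, list[float]]) -> str:
    """Return the corpus key whose embedding is most similar to the query."""
    return max(corpus, key=lambda key: cosine_similarity(query, corpus[key]))

# Toy embeddings; real ones typically have hundreds of dimensions.
docs = {
    "cats": [0.9, 0.1, 0.0],
    "dogs": [0.8, 0.2, 0.1],
    "stocks": [0.0, 0.1, 0.9],
}
```

The same similarity scores also serve as content evaluation metrics, e.g. comparing a generated answer's embedding against a reference answer.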
GenAI & AI Agent Development
Ability to integrate or build GenAI components, including RAG pipelines or agent-based workflows.
Experience building automation using Python, PyTest, Selenium, Playwright, or API testing libraries.
Exposure to deploying AI or automation solutions on AWS.
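To sketch the API-testing side of the automation work described above, the example below spins up a throwaway HTTP server with Python's standard library and asserts against its response. The `/health` endpoint and its payload are hypothetical stand-ins for the AI service under test.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Hypothetical service endpoint standing in for the system under test."""
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging in tests
        pass

def start_server() -> HTTPServer:
    server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = any free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def check_health(port: int) -> dict:
    """Automated API check: request /health and parse the JSON body."""
    with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
        assert resp.status == 200
        return json.loads(resp.read())

server = start_server()
try:
    result = check_health(server.server_port)
    assert result["status"] == "ok"
finally:
    server.shutdown()
```

A production suite would run checks like this with PyTest fixtures against a deployed environment rather than an in-process server.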
Strong understanding of the software development lifecycle, including requirements, development, testing, and defect analysis
Ability to collaborate with Developers, Data Scientists, and QE teams, clearly communicating progress, risks, and results
Skilled in documenting and tracking defects and participating in defect triage