
Cerebras Systems • Toronto, Ontario, Canada
Role & seniority: Software Engineer in Test (mid-level, 2+ years relevant experience) working on testing and integration of ML API features.
Stack/tools: Python, C++, or Go; testing of compute/ML/network/storage systems; distributed deployments; cross-team collaboration; automation tooling; familiarity with ML workloads (LLM/Multimodal) and microservices is a plus.
Understand new features end-to-end and develop tests/tools to ensure quality (accuracy, fairness, performance).
Contribute to industry-standard benchmarks and drive automation to improve internal efficiency.
Communicate effectively across teams/time zones; navigate agile priorities and assess coverage vs. resource use.
2+ years in software integration, development, or quality/QA.
Strong automation and programming in Python, C++, or Go.
Experience testing large-scale enterprise compute/ML/storage systems; debugging distributed scale-out deployments.
Cross-team collaboration, excellent written/verbal communication, strong organization.
ML workloads experience (LLMs, multimodal inference/training).
Knowledge of hardware architecture, performance optimization, compilers, ML frameworks.
Experience with distributed systems, cloud, security; microservices deployment and orchestration.
Location & work type: Hybrid in-office role (3 days/week); not fully remote. Office locations: Sunnyvale, CA, or Toronto, Canada.
About Cerebras
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to run large-scale ML applications without the hassle of managing hundreds of GPUs or TPUs. Cerebras' current customers include top model labs, global enterprises, and cutting-edge AI-native startups. OpenAI recently announced a multi-year partnership with Cerebras to deploy 750 megawatts of scale, transforming key workloads with ultra-high-speed inference. Thanks to its groundbreaking wafer-scale architecture, Cerebras Inference offers the fastest generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services. This order-of-magnitude increase in speed is transforming the user experience of AI applications, unlocking real-time iteration and increasing intelligence via additional agentic computation.

About The Role
As a Software Engineer in Test on the ML API features team, you will test AI/ML models for accuracy, fairness, and performance. You will play a pivotal role in bringing together and delivering all software and hardware components for Cerebras API features, focusing on software feature integration and quality as well as pre-deployment/production validation for the Cerebras inference solution. In this role, you will shape testing best practices and debugging methodology, communicate effectively across teams, and advocate for world-class products.

Responsibilities
Understand new features end-to-end and develop tests and tools to ensure quality.
Contribute to industry-standard benchmarks.
Drive automation to improve internal efficiency.
Understand the trade-off between coverage and resource requirements.
Work in a highly agile environment where priorities change frequently.
Communicate effectively across teams and time zones.

Skills & Qualifications
2+ years of relevant industry experience in software integration, development, or quality.
Strong automation and programming skills in one or more languages such as Python, C++, or Go.
Experience testing compute/machine learning/networking/storage systems in a large-scale enterprise environment.
Experience debugging issues across distributed scale-out deployments.
Experience working effectively across teams, including product development, product management, customer operations, and field teams.
Excellent verbal and written communication skills.
Strong organizational skills, teamwork, and a can-do attitude.
Experience working with geographically dispersed teams across time zones.

Preferred Skills & Qualifications
Experience with ML workloads such as LLM/multimodal training or inference.
Experience with hardware architecture, performance optimization, compilers, and ML frameworks.
Experience with distributed systems, cloud, and security.
Experience with microservices deployment, debugging, and orchestration.

Location
This role follows a hybrid schedule, requiring in-office presence three days per week. Please note, fully remote is not an option.
Office locations: Sunnyvale, CA, and Toronto, Canada.

Why Join Cerebras
Read our blog: Five Reasons to Join Cerebras in 2026. Apply today and become part of the forefront of groundbreaking advancements in AI!

Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies, and we work every day to build an environment that empowers people to do their best work through continuous learning, growth, and support of those around them.