NVIDIA • Santa Clara, California, United States
Salary: $140,000 - $224,250 / year
Role & seniority: Senior Software Engineer / CI/CD Infrastructure for DL/GPU software (3+ years in relevant build/release or developer productivity roles).
Stack/tools: Python; CI/CD and MLOps platforms; pipeline orchestration; artifact/package management; observability (logs/metrics/dashboards); Linux-based development; CUDA libraries; DL frameworks (PyTorch, JAX, TensorRT, NeMo, vLLM); driver/stack knowledge; containers and cluster schedulers; experience with LLVM/MLIR is a plus.
Design, build, and maintain CI/CD and infrastructure to accelerate deep learning compiler development across diverse GPU environments.
Improve signal-to-noise (flake reduction), reproducibility, diagnostics, scalability, and observability; optimize release cycles and long-term quality.
Develop performance-aware pipelines and workload harnesses; explore AI-assisted CI workflows (smarter test selection, automation, triage) to enhance testing, debugging, and releases.
BS/MS/PhD (or equivalent) in CS/EE/Math or related field; 3+ years scaling CI/CD, build/release, or developer productivity for DL/GPU software.
Strong Python and end-to-end systems architecture; production-grade observability (logs/metrics/dashboards); reliability-focused design.
Experience with DL stacks (PyTorch, JAX, TensorRT, NeMo) and Linux-based development across drivers, CUDA libraries, containers, and schedulers.
NVIDIA's invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world. Today, we are increasingly known as “the AI computing company”.

In this role you will work closely with deep learning compiler engineers to build the infrastructure and automation that powers day-to-day development and releases. Responsibilities include designing and maintaining sophisticated CI/CD systems that run ML workloads at scale across diverse GPU environments, produce actionable signals for compiler developers, testers, and release engineers, and continuously improve stability and turnaround time. This includes building performance-aware pipelines and workload harnesses that support release confidence and long-term quality of deep learning compiler stacks.

What you’ll be doing:
Drive CI and infrastructure capabilities that make deep learning compiler development fast, reliable, and scalable. This includes improving signal-to-noise (flake reduction, reproducibility, and richer diagnostics), accelerating iteration cycles, scaling capacity and coverage across models/hardware/software configurations, and building strong observability (metrics, logging, tracing, dashboards) so failures are easy to understand and fix.
Explore practical uses of AI to enhance CI workflows—such as smarter test selection, automated triage/summarization, and faster issue isolation—ultimately increasing the quality and speed of deep learning compiler development, testing, and release (a minimal sketch of the test-selection idea follows the requirements below).

What we need to see:
BS, MS, or PhD (or equivalent experience) in Computer Science, Computer/Electrical Engineering, Mathematics, or a related field
3+ years of professional experience designing and scaling CI/CD, build/release, or developer productivity infrastructure for DL/GPU software environments
Strong software engineering skills (Python required) with the ability to architect, implement, and debug complex systems end-to-end
Hands-on experience building CI/MLOps platform capabilities—pipeline orchestration, artifact/package management, and production-grade observability (logs/metrics/dashboards)—with strong reliability and maintainability
Experience with deep learning frameworks/runtime stacks (e.g., PyTorch, JAX, vLLM, SGLang, TensorRT, NeMo) and running real workloads in production-like environments
Working knowledge of Linux-based development and debugging across complex software/hardware stacks (drivers, CUDA libraries, containers, cluster schedulers, etc.)
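To make the "smarter test selection" responsibility above more concrete, here is a minimal, hypothetical Python sketch of change-aware suite selection for a CI pipeline. The OWNERS mapping, suite paths, and the select_suites helper are illustrative assumptions, not NVIDIA's actual tooling.

```python
from pathlib import PurePosixPath

# Hypothetical mapping from source areas to test suites; a real system would
# typically derive this from build-graph metadata rather than maintain it by hand.
OWNERS = {
    "compiler/codegen": ["tests/codegen", "tests/integration/gpu_smoke"],
    "compiler/ir": ["tests/ir", "tests/codegen"],
    "runtime": ["tests/runtime"],
}
DEFAULT_SUITES = ["tests/smoke"]  # small safety net that always runs

def select_suites(changed_files):
    """Pick test suites to run for a set of changed files (e.g., from `git diff --name-only`)."""
    selected = set(DEFAULT_SUITES)
    all_suites = {s for suites in OWNERS.values() for s in suites}
    for path in changed_files:
        parts = PurePosixPath(path).parts
        # Match the longest known prefix of the changed path.
        for depth in range(len(parts), 0, -1):
            prefix = "/".join(parts[:depth])
            if prefix in OWNERS:
                selected.update(OWNERS[prefix])
                break
        else:
            # Change in an unmapped area: be conservative and run everything.
            return sorted(all_suites | selected)
    return sorted(selected)

if __name__ == "__main__":
    # Example: an IR pass change plus a runtime change.
    print(select_suites(["compiler/ir/passes/fuse.py", "runtime/stream.cc"]))
    # -> ['tests/codegen', 'tests/ir', 'tests/runtime', 'tests/smoke']
```

In practice, a selector like this would be combined with historical failure and flake data so that noisy or recently regressed suites are prioritized rather than chosen purely by file ownership.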
Ways to stand out from the crowd:
Experience applying AI/LLMs and agent-based workflows to improve CI and infrastructure (e.g., smarter triage/routing, automated failure summarization, intelligent test selection, regression isolation, or developer-assist tooling)
Experience with compiler-focused verification techniques (e.g., differential testing across backends/versions, IR-level checks, automated reduction/minimization, fuzzing/property-based testing, or translation-validation style approaches)
Compiler-adjacent knowledge, including familiarity with LLVM/MLIR-based toolchains and the ability to debug issues that span compilation/codegen, runtime execution, and hardware/software boundaries

With competitive salaries and a generous benefits package, we are widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to unprecedented growth, our exclusive engineering teams are rapidly growing. If you're a creative and autonomous engineer with a real passion for technology, we want to hear from you.

Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 140,000 USD - 224,250 USD. You will also be eligible for equity and benefits. Applications for this job will be accepted at least until March 3, 2026. This posting is for an existing vacancy. NVIDIA uses AI tools in its recruiting processes.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

NVIDIA is the world leader in accelerated computing. NVIDIA pioneered accelerated computing to tackle challenges no one else can solve. Our work in AI and digital twins is transforming the world's largest industries and profoundly impacting society. Learn more about NVIDIA.