AMD • Santa Clara, California, United States
Role & seniority: AI Cluster Validation Engineer (engineering role; emphasis on validation, automation, and leadership in tooling)
Languages/scripting: Python, Linux shell
Platforms: Linux, Docker, Kubernetes, SLURM, LLVM compilers
AI/ML: ROCm software, training/inference workflows, performance profiling for CPUs/GPUs
Frameworks: PyTorch, TensorFlow, Megatron-LM, JAX
Inference: vLLM, SGLang; various inference benchmarks
Hardware focus: AI clusters, distributed training/inference
Validate AI solutions for distributed training and inference workloads with AMD ROCm
Build cluster-scale automation for distributed training and inference
Reproduce field defects and develop tests to prevent recurrence; lead tooling adoption and best-practice advocacy
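The "cluster-scale automation" responsibility often starts with generating scheduler submissions programmatically. A minimal sketch of that idea, assuming SLURM's `sbatch` as the scheduler; the job parameters and the `train.sh` script name are illustrative, not taken from the posting:

```python
import shlex


def build_sbatch_command(job_name, nodes, gpus_per_node, script,
                         time_limit="02:00:00"):
    """Compose an sbatch command line for a multi-node validation run.

    Hypothetical helper: a real ROCm cluster harness would also set
    partition, container image, and network/RCCL environment flags.
    """
    args = [
        "sbatch",
        f"--job-name={job_name}",
        f"--nodes={nodes}",
        f"--gpus-per-node={gpus_per_node}",
        f"--time={time_limit}",
        script,
    ]
    # shlex.join quotes each argument safely for a POSIX shell
    return shlex.join(args)


cmd = build_sbatch_command("resnet-val", nodes=4, gpus_per_node=8,
                           script="train.sh")
```

A harness like this would then pass `cmd` to `subprocess.run` and parse the returned job ID to track the run.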
Python and Linux shell scripting
Experience with Linux, Docker, Kubernetes, SLURM, and LLVM compilers
Performance profiling of CPUs/GPUs and debugging complex compute, network, storage issues
Experience running training/inference workloads with major ML frameworks (e.g., PyTorch, TensorFlow, Megatron-LM, JAX) and benchmarking
Experience training large models (LLMs, MoE, image generation, recommendation models)
Proficiency with multiple inference frameworks (vLLM, SGLang) and related benchmarking
Ability to communicate across teams and advocate for tooling and best practices
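As a rough illustration of the benchmarking skills the list above asks for, here is a framework-agnostic timing loop. Real vLLM or SGLang benchmarks measure token throughput and tail latency rather than wrapping an arbitrary callable, but the overall shape (warmup, per-request timing, aggregate metrics) is similar; every name here is hypothetical:

```python
import statistics
import time


def benchmark(fn, requests, warmup=2):
    """Measure per-request latency and overall throughput of a callable.

    Illustrative stand-in for an inference benchmark harness: `fn`
    would be an inference call and `requests` a list of prompts.
    """
    # Warmup iterations are excluded from the measured window
    for r in requests[:warmup]:
        fn(r)

    latencies = []
    start = time.perf_counter()
    for r in requests:
        t0 = time.perf_counter()
        fn(r)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    return {
        "p50_s": statistics.median(latencies),
        "throughput_rps": len(requests) / elapsed,
    }
```

Usage would look like `benchmark(model.generate, prompts)`, with the resulting median latency and requests/second compared against a known-good baseline during validation.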
WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences—from AI and data centers, to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you’ll discover the real differentiator is our culture. We push the limits of innovation to solve the world’s most important challenges—striving for execution excellence, while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.
Languages: Python, Linux shell scripting
Tools: prior experience with Linux, Docker, Kubernetes, SLURM, and LLVM compilers
Experience with performance profiling of CPUs/GPUs and debugging complex compute, network, and storage problems
Experience training LLMs, MoE models, image generation, and recommendation models with frameworks such as PyTorch, TensorFlow, Megatron-LM, and JAX; running training performance benchmarks
Experience running inference workloads on AI clusters with inference frameworks such as vLLM and SGLang; running performance benchmarks for inference
Benefits offered are described: AMD benefits at a glance. AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants’ needs under the respective laws throughout all stages of the recruitment and selection process. AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD’s “Responsible AI Policy” is available here. This posting is for an existing vacancy.