AMD needs its team to maintain visibility into rapid changes in the AI technology landscape, both inside AMD and across the broader AI ecosystem, so it can provide meaningful feedback on where AMD's AI efforts should focus for the greatest competitive advantage.
Requirements
- Maintain technical competency with common AI architectures and workloads
- LLMs/SLMs (primary)
  - Understand how to deploy and tune both training- and inference-focused language models on AMD Instinct GPUs and EPYC CPUs
  - AI model and RAG pipelining to tailor model and agent behavior to data-accuracy and output expectations
- Data Science (secondary)
  - Experience with large-scale model training or inference
  - MCP and data pipelining
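The RAG pipelining skill above can be illustrated with a toy retriever: rank corpus passages by bag-of-words cosine similarity and prepend the best matches to the prompt. This is a minimal sketch, not a production design; the corpus strings and function names (`embed`, `retrieve`, `build_prompt`) are hypothetical, and a real pipeline would use a dense embedding model (e.g., served on Instinct hardware) plus a vector store.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; stands in for a dense embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse token-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    # Return the k passages most similar to the query.
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus, k=2):
    # Ground the model: prepend retrieved context to the user question.
    context = "\n".join(retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Sample corpus (illustrative strings only).
corpus = [
    "AMD Instinct MI300X accelerators carry 192 GB of HBM3 memory.",
    "EPYC CPUs handle data preprocessing and orchestration for the pipeline.",
    "The office cafeteria opens at 8 am.",
]
print(build_prompt("How much memory does an MI300X have?", corpus, k=1))
```

Tuning `k`, the chunking strategy, and the embedding model is exactly the "tailor model and agent behavior" work this requirement describes.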
Responsibilities
- Maintain current, hands-on skills in a full-stack AI environment, including:
  - Infrastructure Configuration
    - CPU/GPU compute
    - AI-specific backend networking and design
    - Storage technologies and tiering aligned to AI training and inference pipelines
  - Operating Environment
    - Bare-metal Linux with AI-specific hardware/software tuning and configuration
    - AI software platforms such as Kubernetes for cloud-native implementations and SLURM for HPC-focused setups
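A small taste of the SLURM/bare-metal side: the sketch below maps each SLURM task on a node to one GPU via its node-local rank, setting `HIP_VISIBLE_DEVICES` (ROCm's analog of `CUDA_VISIBLE_DEVICES`). `GPUS_PER_NODE` is a hypothetical site variable, not a SLURM builtin; real clusters usually delegate this to gres/cgroup bindings instead.

```python
def pin_gpu(env):
    """Pin a SLURM task to one GPU on its node by local rank.

    SLURM_LOCALID is set by SLURM per task; GPUS_PER_NODE is an
    assumed site convention used here only for illustration.
    """
    local_rank = int(env.get("SLURM_LOCALID", "0"))
    gpus_per_node = int(env.get("GPUS_PER_NODE", "8"))
    device = local_rank % gpus_per_node
    # ROCm runtimes read this to restrict visible devices.
    env["HIP_VISIBLE_DEVICES"] = str(device)
    return device

# Example: task with local rank 3 on an 8-GPU node gets device 3.
print(pin_gpu({"SLURM_LOCALID": "3"}))
```

In practice this would run in the job's launch wrapper, with `env` being `os.environ`.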
Other
- Act as a full-stack, AI-solutions-focused resource for customer discussions and potential on-site meetings (some travel required)
- Build and maintain participation and visibility in the AI ecosystem
  - Including CfP submissions to AI conferences
- Build and share demonstrations of partial or full AI stack components, from infrastructure to data flows
- Bachelor's Degree in a technical field (e.g., engineering, mathematics, statistics); Master's preferred