AMD is focused on developing and deploying AI frameworks across its hardware platforms, including Instinct MI-series accelerators, Radeon GPUs, XDNA devices, and datacenter CPUs, to enable cutting-edge AI models and accelerate next-generation computing experiences.
Requirements
- Experience in at least one of the following focus areas: AI frameworks, AI runtime stacks, and/or performance tuning and optimization of workloads running on ML accelerator hardware (e.g., GPUs)
- Experience with ML frameworks such as PyTorch, ONNX Runtime, JAX, and TensorFlow
- Proficiency in C++ programming
- Experience developing and debugging in Python
- Experience with AI model architectures (e.g., Transformers, CNNs)
- Knowledge of custom accelerator hardware (highly preferred)
- Experience with AI software frameworks, benchmarking, and profiling
Responsibilities
- Drive the technical direction of next-generation frameworks for AI model training and inference across a wide variety of AMD devices
- Enhance AI framework capabilities to enable state-of-the-art models on AMD's latest hardware
- Collaborate closely with AI researchers to develop framework components that efficiently map AI models onto a variety of hardware AI accelerators
- Develop and deploy model optimization features such as graph fusion, quantization, and sparsity
- Profile and accelerate workloads and AI execution runtimes on accelerators such as GPUs and NPUs
- Guide senior developers and domain experts on next-generation framework software
- Work with multiple engineering teams that are geographically dispersed
Other
- Team player, ready to work with a geographically distributed team
- BS, MS, or PhD in Computer Science, Computer Engineering, Electrical Engineering, or a related technical field
- Excellent leadership and collaboration skills
- Ability to work in a dynamic, fast-paced development environment