AMD is looking to develop cutting-edge AI software that pushes the boundaries of performance and efficiency for next-generation GPU accelerators, contributing to open-source AI software and enhancing AI performance across data center GPUs.
Requirements
- Experience with C++, Python, or similar programming languages.
- Knowledge of AI training and inference.
- Familiarity with GPU programming (CUDA, HIP, or OpenCL) and performance optimization techniques.
Responsibilities
- Contribute to RAG, Ray, ROCm, Coding Agent, DGL, llama.cpp, verl, MegaBlocks, FlashInfer, Triton Inference Server, Taichi, and other emerging open-source projects driving AI innovation.
- Collaborate with leading partners and open-source communities to enable AI workloads and improve performance on data center GPUs.
Other
- Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent.
- Collaborative, curious, and excited to contribute to the open-source repositories that power the next generation of AI workloads.