Microsoft's AI Frameworks team needs to optimize the inference performance of large language models (LLMs) like those from OpenAI on various hardware, including GPUs and Microsoft's own silicon. The goal is to enable faster deployment, reduce hardware footprint, and achieve cost savings (capex goals) for Azure AI services, supporting major Microsoft products.
Requirements
- Coding experience in languages including, but not limited to, C/C++ and Python
- 4+ years' practical experience working on high-performance applications, including performance debugging and optimization on CPUs/GPUs
- Experience in DNN/LLM inference and with one or more DL frameworks such as PyTorch, TensorFlow, or ONNX Runtime, plus familiarity with CUDA, ROCm, or Triton
- Solid foundation in software engineering principles, computer architecture, GPU architecture, and hardware neural-network acceleration
- Experience in end-to-end performance analysis and optimization of state-of-the-art LLMs and HPC applications, including proficiency with GPU profiling tools
Responsibilities
- Identify and drive improvements to end-to-end inference performance of OpenAI and other state-of-the-art LLMs
- Measure and benchmark performance on NVIDIA/AMD GPUs and first-party Microsoft silicon
- Optimize and monitor LLM performance, and build software tooling that surfaces performance opportunities from the model level down to the systems and silicon level, helping reduce the footprint of the computing fleet and meet Azure AI capex goals
- Enable fast time-to-market for LLMs and their at-scale deployments by building software tools that accelerate porting models to new NVIDIA and AMD GPUs and Maia silicon
- Design, implement, and test functions or components for our AI/deep neural network (DNN)/LLM frameworks and tools
- Speed up key components/pipelines and reduce their complexity to improve the performance and/or efficiency of our systems
- Benchmark OpenAI and other LLMs on graphics processing units (GPUs) and Microsoft hardware; debug, optimize, and monitor performance; and enable these models to be deployed in the shortest time and on the least hardware possible
Other
- Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role.
- Microsoft Cloud Background Check: This position will be required to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
- Cross-team collaboration skills and the desire to collaborate in a team of researchers and developers
- Ability to independently lead projects
- Communicate and collaborate with our partners, both internal and external