NVIDIA is looking to build the software that will define the future of generative AI by creating a next-generation post-training software stack.
Requirements
- Experience with AI frameworks such as PyTorch or JAX
- Experience with at least one inference and deployment environment such as vLLM, SGLang, or TRT-LLM
- Proficient in Python programming, software design, debugging, performance analysis, test design and documentation
- Strong understanding of AI/Deep-Learning fundamentals and their practical applications
- Contributions to open source deep learning libraries
- Hands-on experience in large-scale AI training, with a deep understanding of core compute system concepts
- Expertise in distributed computing, model parallelism, and mixed precision training
Responsibilities
- Work with applied researchers to design, implement, and test the next generation of RL and post-training algorithms
- Contribute to and advance open source by developing NeMo-RL, Megatron Core, the NeMo Framework, and yet-to-be-announced software
- Solve large-scale, end-to-end AI training and inference challenges spanning the full model lifecycle, from initial orchestration and data pre-processing, through model training and tuning, to model deployment
- Work at the intersection of computer architecture, libraries, frameworks, AI applications, and the entire software stack
- Tune and optimize performance, including model training with mixed-precision recipes on next-generation NVIDIA GPU architectures
- Publish and present results at academic and industry conferences
- Work as part of one team during Nemotron model post-training
Other
- BS, MS, or PhD in Computer Science, AI, Applied Math, or a related field, or equivalent experience
- 3+ years of proven experience in machine learning, systems, distributed computing, or large-scale model training
- Creative and autonomous
- Ability to work with diverse employees
- Commitment to fostering a diverse work environment