OpenAI's Inference team scales and optimizes inference infrastructure across emerging GPU platforms, with a particular focus on AMD accelerators, to improve the performance, flexibility, and resiliency of OpenAI's serving infrastructure.
Requirements
- Experience writing or porting GPU kernels using HIP, CUDA, or Triton, with a strong focus on low-level performance (a minimal Triton sketch follows this list).
- Familiarity with communication libraries such as NCCL/RCCL and their role in high-throughput model serving.
- Hands-on work on distributed inference systems and comfort scaling models across fleets of accelerators.
- Enthusiasm for solving end-to-end performance challenges across hardware, system libraries, and orchestration layers.
- Contributions to open-source libraries such as RCCL, Triton, or vLLM.
- Experience with GPU performance tools (Nsight, rocprof, perf) and with memory and communication profiling.
- Prior experience deploying inference in non-NVIDIA GPU environments.
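To make the kernel-level expectation concrete, here is a minimal, illustrative Triton kernel of the kind this work involves; the same Python source can target both NVIDIA and AMD (ROCm) backends. The names (`add_kernel`, `add`) and the block size are illustrative, not part of OpenAI's stack.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```

In practice the kernels of interest are attention, GEMM, and KV-cache operations rather than vector adds, with block sizes and memory-access patterns tuned per architecture; the sketch only shows the programming model.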
Responsibilities
- Own bring-up, correctness, and performance of the OpenAI inference stack on AMD hardware.
- Integrate internal model-serving infrastructure (e.g., vLLM, Triton) into a variety of GPU-backed systems.
- Debug and optimize distributed inference workloads across memory, network, and compute layers.
- Validate correctness, performance, and scalability of model execution on large GPU clusters.
- Collaborate with partner teams to design and optimize high-performance GPU kernels for accelerators using HIP, Triton, or other performance-focused frameworks.
- Collaborate with partner teams to build, integrate, and tune the collective communication libraries (e.g., RCCL) used to parallelize model execution across many GPUs; a minimal all-reduce sketch follows this list.
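As a hedged illustration of the collective-communication layer, the sketch below runs an all-reduce through PyTorch's `torch.distributed`; on ROCm builds of PyTorch the "nccl" backend dispatches to RCCL. The script name and tensor size are assumptions made for the example.

```python
import os
import torch
import torch.distributed as dist

def main():
    # torchrun supplies RANK / WORLD_SIZE / LOCAL_RANK via the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)  # torch.cuda also addresses AMD GPUs under ROCm

    # Each rank contributes its rank id; all_reduce sums across ranks,
    # the same pattern used to combine tensor-parallel partial results.
    t = torch.full((1024,), float(dist.get_rank()), device="cuda")
    dist.all_reduce(t, op=dist.ReduceOp.SUM)

    if dist.get_rank() == 0:
        print("sum over ranks:", t[0].item())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=8 allreduce_demo.py` (the file name is hypothetical); profiling this pattern across memory, network, and compute is representative of the debugging work described above.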
Other
- Excitement about being part of a small, fast-moving team building new infrastructure from first principles.
- Knowledge of model/tensor parallelism, mixed precision, and serving 10B+ parameter models (a brief tensor-parallel sketch follows this list).
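For the tensor-parallelism point, here is a minimal, forward-pass-only sketch of a column-parallel linear layer, assuming a process group is already initialized. The class name is illustrative and not taken from any particular serving stack; real implementations also handle gradients, bias sharding, and fused collectives.

```python
import torch
import torch.nn as nn
import torch.distributed as dist

class ColumnParallelLinear(nn.Module):
    """Each rank owns a slice of the output dimension; outputs are gathered.

    Forward-pass-only sketch: gradient-aware collectives used by real
    tensor-parallel implementations are omitted for brevity.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        world = dist.get_world_size()
        assert out_features % world == 0, "output dim must shard evenly"
        # Each rank stores only out_features // world columns of the weight,
        # which is what makes 10B+ parameter models fit across GPUs.
        self.local = nn.Linear(in_features, out_features // world, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y_local = self.local(x)                                   # [batch, out / world]
        shards = [torch.empty_like(y_local) for _ in range(dist.get_world_size())]
        dist.all_gather(shards, y_local)                          # collect every rank's shard
        return torch.cat(shards, dim=-1)                          # [batch, out]
```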