OpenAI's Inference team is designing and building a load balancer for its research inference stack, routing traffic to large AI models with millisecond precision and bulletproof reliability while keeping long-lived connections consistent and performant for research jobs.
Requirements
- Have deep experience designing and operating large-scale distributed systems, particularly load balancers, service gateways, or traffic routing layers.
- Have 5+ years of experience designing for, and debugging in practice, the algorithmic and systems challenges of consistent hashing, sticky routing, and low-latency connection management.
- Have 5+ years of experience as a software engineer and systems architect working on high-scale, high-reliability infrastructure.
- Have a strong debugging mindset and enjoy digging into traces, logs, and metrics to untangle distributed failures.
- Are comfortable writing and reviewing production code in Rust or similar systems languages (C/C++, Java, Go, Zig, etc.).
- Have operated in big tech or high-growth environments and are excited to apply that experience in a faster-moving setting.
- Have experience with gateway or load-balancing systems (e.g., Envoy, gRPC, custom LB implementations).
- Have familiarity with inference workloads (e.g., reinforcement learning, streaming inference, KV cache management, etc.).
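Several of the qualifications above center on consistent hashing. As an illustration only (not a description of OpenAI's actual system), a minimal consistent-hash ring with virtual nodes can be sketched in Rust using the standard library; the backend names and replica count here are hypothetical:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::BTreeMap;
use std::hash::{Hash, Hasher};

/// A minimal consistent-hash ring: each backend is placed on the ring at
/// multiple "virtual node" points so keys spread evenly across backends.
struct HashRing {
    ring: BTreeMap<u64, String>, // hash point -> backend name
    replicas: usize,             // virtual nodes per backend
}

fn hash_key<T: Hash>(key: &T) -> u64 {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    h.finish()
}

impl HashRing {
    fn new(replicas: usize) -> Self {
        HashRing { ring: BTreeMap::new(), replicas }
    }

    fn add_backend(&mut self, name: &str) {
        for i in 0..self.replicas {
            self.ring.insert(hash_key(&format!("{name}#{i}")), name.to_string());
        }
    }

    fn remove_backend(&mut self, name: &str) {
        for i in 0..self.replicas {
            self.ring.remove(&hash_key(&format!("{name}#{i}")));
        }
    }

    /// Route a key to the first backend clockwise of its hash point,
    /// wrapping around to the start of the ring if necessary.
    fn route(&self, key: &str) -> Option<&String> {
        let h = hash_key(&key);
        self.ring
            .range(h..)
            .next()
            .or_else(|| self.ring.iter().next())
            .map(|(_, name)| name)
    }
}

fn main() {
    let mut ring = HashRing::new(100);
    for b in ["gpu-0", "gpu-1", "gpu-2"] {
        ring.add_backend(b);
    }
    let before = ring.route("job-42").cloned();
    // Removing one backend remaps only the keys that were on it;
    // all other keys keep their assignment.
    ring.remove_backend("gpu-1");
    let after = ring.route("job-42").cloned();
    println!("before={before:?} after={after:?}");
}
```

The property that makes this attractive for a research LB is locality of disruption: adding or removing a backend moves only roughly 1/N of the keys, which matters when each key maps to warm state such as a KV cache.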
Responsibilities
- Architect and build the gateway / network load balancer that fronts all research jobs, ensuring long-lived connections remain consistent and performant.
- Design traffic stickiness and routing strategies that optimize for both reliability and throughput.
- Instrument and debug complex distributed systems — with a focus on building world-class observability and debuggability tools (distributed tracing, logging, metrics).
- Collaborate closely with researchers and ML engineers to understand how infrastructure decisions impact model performance and training dynamics.
- Own the end-to-end system lifecycle: from design and code to deploy, operate, and scale.
- Work in an outcome-oriented environment where everyone contributes across layers of the stack, from infra plumbing to performance tuning.
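One common building block for the traffic-stickiness strategies mentioned above is rendezvous (highest-random-weight) hashing, where each session scores every backend and sticks to the top scorer. This is a generic sketch under assumed names (`pick`, `score`, the `session-7` id), not the team's actual routing policy:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Score a (session, backend) pair; deterministic for a given pair.
fn score(session: &str, backend: &str) -> u64 {
    let mut h = DefaultHasher::new();
    session.hash(&mut h);
    backend.hash(&mut h);
    h.finish()
}

/// Rendezvous hashing: the session sticks to whichever live backend
/// scores highest. If a backend dies, only its sessions move.
fn pick<'a>(session: &str, backends: &'a [&'a str]) -> Option<&'a str> {
    backends.iter().copied().max_by_key(|&b| score(session, b))
}

fn main() {
    let backends = ["gpu-0", "gpu-1", "gpu-2"];
    // The same session id always lands on the same backend,
    // with no shared routing table to keep in sync.
    println!("{:?}", pick("session-7", &backends));
}
```

Compared with a ring, rendezvous hashing needs no virtual-node tuning and gives minimal disruption by construction, at the cost of O(N) scoring per lookup, which is usually fine for modest backend counts.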
Other
- We look for people who take ownership of problems end-to-end and are excited to build something foundational to how our models interact with the world.
- OpenAI is an equal opportunity employer and does not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or any other applicable legally protected characteristic.
- Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act.
- We are committed to providing reasonable accommodations to applicants with disabilities.