Docker AI Gateway is our answer to the complexity of taking AI agents from prototype to production. It’s a powerful, intelligent, and secure control point that eliminates the toil of model orchestration, tool management, observability, and governance—so developers can focus on building incredible AI agents, not gluing together infrastructure.
Requirements
- 6+ years of backend engineering experience with production-grade systems
- Deep knowledge of distributed and highly scalable systems, cloud-native infrastructure, and API design
- Experience building secure, high-throughput services (e.g., gateways, proxies, load balancers, policy engines)
- Fluency in Go and/or Rust (both preferred)
- Familiarity with AI/ML platforms or model serving infrastructure
- Prior experience with OpenAI, Anthropic, or similar LLM APIs
- Familiarity with RAG architectures, vector databases, or agent frameworks (e.g., LangChain, AutoGPT, CrewAI)
Responsibilities
- Design and implement core systems powering the AI Gateway, including the model router, MCP gateway, and control plane
- Build infrastructure that supports dynamic model selection, auto-failover, cost-based routing, and policy enforcement
- Own critical capabilities such as secure credential storage, session summarization, caching, and rate limiting
- Develop APIs for developers building with OpenAI-compatible interfaces and the Model Context Protocol
- Build the underlying infrastructure to support evaluation, telemetry, replay, and backtesting for agents and LLM workflows
- Lead architectural decisions and mentor engineers as the team scales
- Contribute to roadmap planning, technical strategy, and cross-functional alignment
Other
- A strong product mindset: you're excited about building developer-facing tools
- Ownership mentality with a bias for shipping, learning, and iterating
- Due to the remote nature of this role, we are unable to provide visa sponsorship.