Maitai manages the LLM stack for enterprise companies, enabling the fastest and most reliable inference. The future of enterprise AI revolves around mosaics of small, domain-specific models powering responsive, capable agents, and Maitai is well positioned to capture the market.
Requirements
- Go
- Python
- Microservices
- PostgreSQL
- Amazon Web Services (AWS)
- Kubernetes
- Terraform
Responsibilities
- You'll collaborate with founders, other engineers, and partners to create the management layer for composable, high-performance agents.
- Expect to tackle the hardest challenges surrounding scaling agents and inference while working at the cutting edge of open-source models and accelerated compute.
- You'll be working on high-performance distributed systems that power our agent infrastructure, ensuring agents run reliably and efficiently.
- You'll optimize base agent functionality such as routing, orchestration, and processing, as well as backend latency, ensuring responses are fast and accurate.
- You'll also work on agent evaluations and composability, ensuring new customers can get started the same day they onboard.
- On the frontend side, you'll contribute to our Portal (React/TypeScript), where customers build agents, configure guardrails, fine-tune models, and run tests.
- Our infrastructure runs on Kubernetes, managed with Terraform, and deployed across AWS and GCP.
Other
- In-person in downtown Redwood City.
- You believe “agent” is a buzzword (and you know what really makes an agent an agent).
- You’ve built and shipped real systems end-to-end.
- You contribute to your side projects in your free time.
- You have strong opinions.