Basis equips accountants with a team of AI agents to take on real workflows. The company has hit product-market fit, has more demand than it can meet, and just raised $34M to scale.
Requirements
- Experience with retrieval, embeddings, and structured context management.
- Familiarity with eval frameworks, vector stores, and experiment tracking.
- Comfort working with observability stacks (metrics/logs/traces).
- Exposure to multi-model routing, guardrails, and cost/latency optimization.
Responsibilities
- Own the architecture for a core ML capability (e.g., agent orchestration, eval systems, or context stack).
- Write and review critical code; establish standards for structure, interfaces, and testing.
- Drive design reviews that clarify trade-offs and ensure long-term coherence across teams.
- Create frameworks and abstractions others can build on confidently.
- Partner with engineers across ML, Research, and Platform to implement robust, observable, and maintainable systems.
- Run high-velocity experiments across models, tools, and architectures; learn fast, share insights, and translate them into production decisions.
- Contribute across the stack: from prompt orchestration and retrieval to evaluation pipelines and observability tooling.
Other
- Prior startup or high-velocity environment experience.
- In-person team.
- You’ll operate as both architect and practitioner: writing, teaching, debugging, and designing systems that shape how AI agents reason and learn.
- You’ll review designs, simplify abstractions, and make sure the codebase stays coherent as we scale.
- Your job is technical leadership: holding a high bar for design, execution, and reasoning, and helping others reach it.