At Letta, we’re building self-improving artificial intelligence: creating agents that continually learn from experience and adapt over time. We’re assembling a world-class team of researchers and engineers to solve AI’s hardest problem: making machines that can reason, remember, and learn the way humans do.
Requirements
- Deep expertise in LLMs and retrieval
- Track record of impactful research (breakthrough publications and/or open-source contributions)
- Ability to balance execution speed with empirical rigor
- Real-world impact beyond pure academic work
Responsibilities
- Defining the key abstractions of the LLM memory layer
- Building memory architectures that support multiple memory types, including temporal sequences, episodic experiences, semantic knowledge, and procedural skills (see the sketch after this list)
- Researching memory sharing between multiple agents to enable effective multi-agent collaboration
- Improving context management techniques that address the long-context / context-derailment problem
- Running evaluations to measure and improve agent memory
- Advancing the field by openly publishing research: papers, technical reports, blog posts, and open-source code
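To give a flavor of the kind of abstraction work involved, here is a minimal sketch of a memory layer that exposes distinct memory types behind one interface. The names (`MemoryStore`, `EpisodicMemory`, `MemoryItem`) are hypothetical illustrations, not Letta's actual API:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class MemoryItem:
    """A single unit of stored memory, timestamped for temporal ordering."""
    content: str
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class MemoryStore(ABC):
    """Hypothetical common interface over memory types (episodic, semantic, ...)."""

    @abstractmethod
    def write(self, item: MemoryItem) -> None: ...

    @abstractmethod
    def search(self, query: str, k: int = 5) -> list[MemoryItem]: ...


class EpisodicMemory(MemoryStore):
    """Stores experiences in arrival order; recall ranks by relevance."""

    def __init__(self) -> None:
        self._items: list[MemoryItem] = []

    def write(self, item: MemoryItem) -> None:
        self._items.append(item)

    def search(self, query: str, k: int = 5) -> list[MemoryItem]:
        # Toy relevance: keyword overlap. A real system would use
        # embedding similarity, recency weighting, etc.
        terms = set(query.lower().split())
        scored = sorted(
            self._items,
            key=lambda m: len(terms & set(m.content.lower().split())),
            reverse=True,
        )
        return scored[:k]


if __name__ == "__main__":
    mem = EpisodicMemory()
    mem.write(MemoryItem("User prefers concise answers"))
    print(mem.search("concise", k=1)[0].content)
```

In practice, much of the research question is exactly what this sketch glosses over: what the right read/write/consolidation operations are, and how they differ across memory types.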
Other
- Note that this role is in-person (no hybrid), 5 days a week in downtown San Francisco.
You may be a good fit if:
- You want to maximize your impact: you want to work on a small, incredibly talented team where every individual plays a huge role in the team's success.
- You are fundamentally opposed to closed frontier AI controlled by a handful of billionaires and private tech companies.

This is probably not the right role for you if:
- You like stability and get stressed out when there is nobody telling you exactly what to do.
- You want to work a 9-to-5 and value a clear separation between work and life.