Goodfire builds advanced AI systems with interpretability at their core, with the goal of making AI safer and easier to understand.
Requirements
- 5+ years of experience in ML infra, research engineering, or systems programming
- Expertise in Python, PyTorch or JAX, and distributed systems
- Experience deploying and maintaining ML systems at scale
- Prior work on model internals, explainability, or interpretability
- Open-source ML infra contributions
- Startup or lab experience in fast-moving teams
Responsibilities
- Develop robust tooling for analyzing and visualizing model internals
- Optimize pipelines and infra for large-scale interpretability workflows
- Partner with researchers to iterate quickly on experimental techniques
- Help deploy interpretability tools into product and production contexts
- Ensure system reliability, reproducibility, and performance
- Build tools and infra to support foundation model probing (a minimal illustrative sketch follows this list)
- Rapidly prototype novel interpretability systems with high upside potential
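Purely as an illustration of the kind of "model internals" tooling this role involves: capturing intermediate activations from a network is often the first building block. The sketch below uses PyTorch forward hooks on a toy transformer; the model, module names, and shapes are hypothetical and are not a description of Goodfire's actual stack.

```python
# Illustrative sketch: capture per-layer activations from a toy transformer
# using PyTorch forward hooks. Hypothetical example, not Goodfire's tooling.
import torch
import torch.nn as nn

model = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)

captured = {}

def make_hook(name):
    # Store a detached copy of each layer's output for later analysis or visualization.
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

handles = [
    layer.register_forward_hook(make_hook(f"layer_{i}"))
    for i, layer in enumerate(model.layers)
]

with torch.no_grad():
    model(torch.randn(1, 16, 64))  # (batch, sequence, d_model)

for name, acts in captured.items():
    print(name, tuple(acts.shape), float(acts.norm()))

for h in handles:
    h.remove()  # clean up hooks so they don't leak into later runs
```

In practice, tooling like this tends to grow into batched, distributed activation-capture pipelines, which is where the infrastructure, reliability, and performance responsibilities above come in.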
Other
- Put mission and team first
- Improve constantly
- Take ownership and initiative
- Take action today
- Market-competitive salary, equity, and benefits