The Center for AI Safety (CAIS) works to reduce societal-scale risks from AI by conducting research and developing solutions in areas such as Trojans, Adversarial Robustness, Power Aversion, Machine Ethics, and Out-of-Distribution Detection.
Requirements
- Are able to read an ML paper, understand the key result, and understand how it fits into the broader literature.
- Are comfortable setting up, launching, and debugging ML experiments.
- Are familiar with relevant frameworks and libraries (e.g., PyTorch).
- Have co-authored an ML paper at a top conference.
Responsibilities
- Plan and run experiments.
- Conduct code reviews.
- Work in a small team to create a publication with outsized impact.
- Leverage our internal compute cluster to run experiments at scale on large language models.
- Set up, launch, and debug ML experiments.
Other
- Communicate clearly and promptly with teammates.
- Take ownership of your individual part in a project.
- This application is for the full-time summer internship.
- Applications are due by December 5, 2025.