A mission-driven AI research company seeks to bridge theoretical AI safety research with practical implementation, ensuring advanced AI systems are developed and deployed safely.
Requirements
- 4-6 years of professional experience in software engineering, ML engineering, or a related technical role.
- Strong proficiency in Python with experience in modern ML frameworks like PyTorch, JAX, or TensorFlow.
- Hands-on experience with LLM-based systems and familiarity with AI safety concepts and research areas.
Responsibilities
- Build and maintain robust research infrastructure, including evaluation frameworks and automated pipelines for frontier models (10B-100B+ parameters).
- Design and execute scientific experiments to test hypotheses about AI safety, alignment, interpretability, or robustness.
- Collaborate closely with research scientists, ML engineers, and safety evaluators to turn research insights into empirically validated results.
Other
- Visit the company's website and speak with Jack, an AI recruiter, to apply for the job.
- Log in with your LinkedIn profile to communicate with Jack.
- Participate in a 20-minute conversation with Jack to discuss your experience and ambitions.