OpenAI is looking to advance its ability to implement robust, safe behavior in AI models and systems, preventing harmful misuse and misalignment as capabilities advance.
Requirements
- Have 4+ years of experience in AI safety, especially in areas such as RLHF, adversarial training, robustness, fairness, and bias.
- Have experience with safety work for deploying AI models.
- Have an in-depth understanding of deep learning research and/or strong engineering skills.
Responsibilities
- Set research directions and strategies to make our AI systems safer, more aligned, and more robust.
- Coordinate and collaborate with cross-functional teams, including the rest of the research organization, Trust & Safety, Policy, and related alignment teams, to ensure our AI meets the highest safety standards.
- Actively evaluate and understand the safety of our models and systems, identifying areas of risk and proposing mitigation strategies.
- Conduct state-of-the-art research on AI safety topics such as RLHF, adversarial training, robustness, and more.
- Implement new methods in OpenAI’s core model training and launch safety improvements in OpenAI’s products.
- Set north-star goals and milestones for new research directions, and develop challenging evaluations to track progress.
- Personally drive or lead research in new exploratory directions to demonstrate the feasibility and scalability of the approaches.
Other
- Are excited about OpenAI’s mission of building safe, universally beneficial AGI and are aligned with OpenAI’s charter.
- Demonstrate a passion for AI safety and making cutting-edge AI models safer for real-world use.
- Hold a Ph.D. or other degree in computer science, machine learning, or a related field.
- Are a team player who enjoys collaborative work environments.
- This role is based in San Francisco, CA. We use a hybrid work model of three days in the office per week and offer relocation assistance to new employees.