OpenAI aims to prevent severe chemical and biological misuse across its products by designing, implementing, and overseeing an end-to-end mitigation stack.
Requirements
- Bring demonstrated experience in deep learning and transformer models.
- Are proficient with frameworks such as PyTorch or TensorFlow.
- Possess a strong foundation in data structures, algorithms, and software engineering principles.
- Are familiar with methods for training and fine-tuning large language models, including distillation, supervised fine-tuning, and policy optimization.
- Have background knowledge in biosecurity, computational biology, or adjacent technical fields.
Responsibilities
- Lead the full-stack mitigation strategy against biological and chemical misuse and implement solutions spanning prevention through enforcement.
- Ensure safeguards integrate seamlessly across OpenAI products and scale with usage.
- Make decisive calls on technical trade-offs within the biological risk domain.
- Partner with risk modeling leadership to align mitigation design with anticipated risks and coverage.
- Drive rigorous safeguard testing by stress-testing the mitigation stack against evolving threats and product surfaces.
Other
- Have a passion for AI safety and are motivated to make cutting-edge AI models safer for real-world use.
- Excel at working collaboratively with cross-functional teams across research, policy, product, and engineering.
- Show decisive leadership in high-stakes, ambiguous environments.
- Have significant experience designing and deploying technical safeguards at scale.