OpenAI's Safety Systems team needs a data-driven approach to understanding, evaluating, and monitoring the safety of its AI models and their deployment in the real world. The team addresses emerging safety issues and develops fundamental solutions for the safe deployment of advanced models and future AGI, ensuring AI is beneficial and trustworthy.
Requirements
- Expertise in defining and implementing metrics, with a track record of operationalizing new feature- and product-level metrics from scratch
- Strong statistical background, including knowledge of sampling, regression, and causal analysis
- Demonstrated prior experience in NLP, large language models, or generative AI
Responsibilities
- Establish the data-driven approach for understanding, evaluating, and monitoring the safety of our production systems.
- Lead our efforts to understand and measure the real-world safety impacts of OpenAI's current and upcoming products.
- Develop, implement, and productionize the statistical methods needed to operationalize safety-related metrics.
- Conduct analysis to understand the impact of our products.
- Establish source-of-truth dashboards that the entire company can use to answer safety-related questions.
- Uncover new ways to improve our approaches to measuring and mitigating harm and abuse.
Other
- 5+ years experience in a quantitative role navigating highly ambiguous environments, ideally as a founding data scientist or team lead at a hyper-growth product company or research org
- Proven leadership skills, including leading multiple data scientists and cross-functional teams
- Excellent communication skills with demonstrated ability to communicate with product managers, engineers, and executives alike
- Ability to deliver strategic insights that extend beyond traditional statistical-significance testing
- Experience in trust and safety, integrity, anti-abuse, or related fields