OpenAI is building next-generation anti-abuse systems to combat abuse and fraud on its platform, particularly following the release of powerful tools like ChatGPT and DALL·E.
Requirements
- Proficiency in programming languages (Python preferred) to programmatically explore large datasets and generate actionable insights
- Proven ability to propose, design, and run rigorous experiments (A/B tests, quasi-experiments, simulations) with clear insights and actionable product recommendations, leveraging SQL and Python
- Bonus: experience deploying scaled detection solutions using large language models, embeddings, or fine-tuning
Responsibilities
- Design and build systems for fraud detection and remediation while balancing fraud loss, cost of implementation, and customer experience
- Work closely with finance, security, product, research, and trust & safety operations to holistically combat fraudulent and abusive actors on our system
- Stay abreast of the latest techniques and tools to remain several steps ahead of determined, well-resourced adversaries
- Utilize GPT-5 and future models to more effectively combat fraud and abuse
Other
- Excellent communication skills with a track record of influencing cross-functional partners, including product managers, engineers, policy leads, and executives
- 5+ years of quantitative experience in ambiguous environments, ideally as a data scientist at a hyper-growth company or research org, with exposure to fraud, abuse, or security problems
- Experience on a highly technical trust and safety team, and/or close collaboration with policy, content moderation, or security teams