OpenAI's Scaled Abuse team is looking for a data scientist with anti-fraud and abuse experience to help architect and build next-generation anti-abuse systems that identify and respond to fraudsters on our platform.
Requirements
- Proficiency with programming languages (Python preferred) for programmatically exploring large datasets and generating actionable insights to solve problems
- Proven ability to propose, design, and run rigorous experiments (A/B tests, quasi-experiments, simulations) with clear insights and actionable product recommendations, leveraging SQL and Python.
- Bonus: experience deploying scaled detection solutions using large language models, embeddings, or fine-tuning
- 5+ years of quantitative experience in ambiguous environments, ideally as a data scientist at a hyper-growth company or research org, with exposure to fraud, abuse, or security problems.
Responsibilities
- Design and build systems for fraud detection and remediation while balancing fraud loss, cost of implementation, and customer experience
- Work closely with finance, security, product, research, and trust & safety operations to holistically combat fraudulent and abusive actors on our system
- Keep abreast of the latest techniques and tools to stay several steps ahead of determined, well-resourced adversaries
- Utilize GPT-5 and future models to more effectively combat fraud and abuse
Other
- Experience on a highly technical trust and safety team, and/or a history of working closely with policy, content moderation, or security teams
- Excellent communication skills with a track record of influencing cross-functional partners, including product managers, engineers, policy leads, and executives