This role ensures that Airbnb's AI-powered systems are reliable, safe, and aligned with trust and governance standards.
Requirements
- PhD or Master's degree, preferably in Computer Science, or equivalent experience
- 7–10+ years of experience developing and deploying machine learning models in production
- Strong understanding of machine learning principles and algorithms
- Hands-on programming experience in Python and in-depth knowledge of machine learning frameworks
- 2+ years of experience with one or more of the following broader areas: Content Safety/Integrity, ML Fairness and Bias, Responsible AI, AI Model Security, or related areas
Responsibilities
- Collaborate with cross-functional teams to identify issues, evaluate risks, design monitoring systems, tailor safeguards, and deploy efficient solutions
- Design and implement appropriate guardrails to mitigate risks such as hallucinations, privacy breaches, prompt injections, harmful responses, and bias
- Set up continuous risk monitoring pipelines and alerting to enable human-in-the-loop feedback and mitigation
- Collaborate with trust, security, legal, and operations teams to enable risk management
- Collaborate with evaluation and data platform teams to design and build a data flywheel for fixing model failure modes and improving guardrails
Other
- Must live in a state where Airbnb, Inc. has a registered entity
- Occasional work at an Airbnb office or attendance at offsites, as agreed to with your manager