The Trust & Safety team at Google is looking to solve problems related to the safety and integrity of their products, particularly those involving AI technologies. They need to assess risks, develop mitigation strategies, and influence product design to incorporate safety principles from the outset.
Requirements
- 4 years of experience in data analytics, Trust and Safety, policy, cybersecurity, or related fields.
- Experience working with large datasets and data analysis tools.
- Understanding of AI systems, machine learning, and their potential risks, or experience working with Google's products and services, including GenAI products.
Responsibilities
- Intake new products with a focus on AI technologies: scope new launches, assess risk, and build a Trust and Safety launch plan.
- Develop and optimize processes to onboard new products to Trust and Safety.
- Develop methodologies to measure risk, and the effectiveness of risk mitigation.
- Influence product design so that safety principles and trust-oriented technologies are baked directly into the product development cycle from the very start.
- Review or be exposed to sensitive or graphic content as part of the core role.
Other
- Bachelor's degree or equivalent practical experience.
- Ability to work independently and as part of a team.
- Excellent written and verbal communication and presentation skills, with the ability to influence cross-functionally at various levels.
- Excellent project management, problem solving, and analysis skills, with effective business acumen.
- You're a big-picture thinker and strategic team player with a passion for doing what's right.