Google is looking to ensure that AI innovations are developed and deployed responsibly, ethically, safely, and in compliance with Google's AI Principles and an evolving regulatory landscape.
Requirements
- Experience with AI/Machine Learning concepts and development lifecycles.
- Knowledge of Google’s model development lifecycle and its tools (e.g., TFHub, ML Asset Hub).
- Knowledge of AI regulations and standards (e.g., NIST AI RMF, ISO/IEC 42001, EU AI Act).
- Experience designing and implementing governance frameworks in a tech or research environment.
- Master's degree or PhD in Computer Science, AI, Data Science, or a related field.
Responsibilities
- Partner with technical teams to implement new AI governance tools, processes, and monitoring.
- Work with compliance, risk, and governance leads to support cross-functional AI governance initiatives.
- Design and document mitigation activities across the model development lifecycle to ensure compliance with new and existing AI regulations and obligations (e.g., the EU AI Act, Google's AI Principles).
- Drive the integration of governance controls throughout the research and development lifecycle.
- Establish metrics and reporting mechanisms to track the effectiveness of the AI Governance program.
Other
- Bachelor’s degree or equivalent practical experience.
- 5 years of experience in program management.
- Travel requirements are not specified. Work location can be one of the following: Mountain View, CA, USA; Kirkland, WA, USA; Seattle, WA, USA; San Francisco, CA, USA.
- The US base salary range for this full-time position is $156,000-$229,000 + bonus + equity + benefits.
- Must be willing to work in a fast-paced environment and collaborate with cross-functional teams.