Google integrates Responsible AI (RAI) education, consultation, and review into all GenAI products and models to ensure adherence to its AI Principles and to mitigate risk.
Requirements
Experience with machine learning.
Knowledge of the key policy issues affecting the internet (e.g., intellectual property, free expression, and online safety).
Knowledge of the ethics and socio-technical considerations of technology and the future of AI.
Ability to lead threat or human rights assessments (i.e., verifying that technology does not produce adverse effects on users).
Responsibilities
Define opportunities throughout the product development, launch, and post-launch process to integrate Responsible AI (RAI) education, consultation, and review for all GenAI products and models.
Maintain and evolve internal and enterprise RAI risk frameworks, taxonomy, and opportunity frameworks to help Google adhere to AI Principles.
Offer subject matter expertise and high-quality judgment on RAI policy and product issues, and advise cross-functional partners accordingly.
Build relationships with key cross-functional (XFN) partners to develop and launch new products and policies.
Other
Escalate risk acceptance discussions to executive leadership and product area stakeholders.
Qualifications
Bachelor's degree or equivalent practical experience.
7 years of experience in data analytics, Trust and Safety, policy, cybersecurity, or related fields.
Master's degree, JD, or PhD in Public Policy, Security, or a related field.
Excellent written and verbal communication and presentation skills, with the ability to influence cross-functional partners at various levels.