Google is looking to address broad-spectrum safety and neutrality risks in GenAI products by designing and developing solutions that prevent abuse of Google DeepMind base models, ultimately improving user safety.
Requirements
- 7 years of experience in data analysis or data science, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
- 5 years of experience in data analysis using languages such as SQL or Python.
- Experience working with Large Language Models, LLM Operations, prompt engineering, pre-training, and fine-tuning.
- Experience in designing and conducting experiments or quantitative research, in a technology or AI context.
- Experience in AI systems, machine learning, and their potential risks.
- Strong technical competency with a data-driven, investigative approach to solving complex problems, including demonstrable proficiency in data manipulation, analysis, and automation using languages such as Python and SQL.
Responsibilities
- Drive structured and unstructured testing of novel model modalities and capabilities while co-located with Google DeepMind.
- Lead platform and tooling development to overcome constraints and scale adversarial testing.
- Design engineering solutions, prompt-generation strategies, and evaluation tooling, leveraging LLMs for analysis.
- Define testing and safety standards, working with cross-functional colleagues in policy and engineering to ensure those standards are met.
- Perform analyses and generate insights that inform model-level and product-level safety mitigations.
- Lead and influence cross-functional teams to implement safety initiatives.
- Represent Google's AI safety efforts in external forums and collaborations, contributing to industry-wide best practices.
Other
- 7 years of experience in managing projects and defining project scope, goals, and deliverables.
- Act as an advisor to executive leadership on complex safety issues.
- Mentor analysts, fostering a culture of excellence and acting as a subject matter expert on adversarial techniques.
- Work with graphic, controversial, or upsetting content.
- Demonstrate an ability to thrive in a fluid, dynamic research and product development environment.