Google DeepMind aims to unblock the strongest and most helpful agentic GenAI capabilities in the real world by making Gemini and other GenAI models as capable as highly experienced privacy and security engineers at handling sensitive user data and permissions. This role focuses on identifying and solving impactful privacy and security problems in generative models.
Requirements
- Experience with JAX, PyTorch, or similar machine learning platforms.
- Demonstrated experience in Python, evidenced by strong artifacts such as readable, scalable, and reusable ML software.
- Demonstrated experience translating research outputs into impactful model improvements in a rapidly shifting landscape, with a strong sense of ownership.
- Research experience and publications in ML security, privacy, safety, or alignment.
Responsibilities
- Identifying unsolved, impactful privacy and security problems in generative models through automated red teaming, with priorities guided by the frontier agentic capabilities being developed in Gemini and other GenAI models.
- Building post-training data and tools hypothesized to improve model capabilities in these problem areas, testing those hypotheses through evaluations and automated red teaming, and contributing successful solutions to Gemini and other models.
- Amplifying this impact by generalizing solutions into reusable libraries and frameworks that protect agents and models across Google, and by sharing knowledge through publications, open source, and education.
Minimum qualifications
- B.S. or M.S. in Computer Science or a related quantitative field, with 5+ years of relevant experience.