Google DeepMind is looking to advance the state of the art in artificial intelligence, with a specific focus on developing innovative defensive and offensive techniques that protect Gemini and other GenAI models from security and privacy threats.
Requirements
- Adversarially robust reasoning, coding, and tool-use capabilities under prompt injection and jailbreak attacks.
- Adherence to privacy norms, both with and without adversarial prompting.
- Adversarial techniques against generative models via multimodal inputs.
- New model architectures that are secure-by-design against prompt injections.
- Detecting attacks in the wild and developing ways to mitigate them.
- Strong research experience with LLMs and publications in ML security, privacy, safety, or alignment.
- Experience with JAX, PyTorch, or similar machine learning frameworks.
Responsibilities
- Identify unsolved, impactful privacy and security research problems, motivated by the need to protect frontier capabilities.
- Research novel solutions by studying related work, running offline and online experiments, and building prototypes and demos.
- Validate research ideas in the real world by driving and growing collaborations with Gemini teams working on safety, evaluations, and other related areas to land new innovations together.
- Amplify impact by generalizing solutions into reusable libraries and frameworks that protect Gemini and product models across Google, and by sharing knowledge through publications, open source, and education.
Other
- Ph.D. in Computer Science or a related quantitative field, or B.S./M.S. in Computer Science or a related quantitative field with 5+ years of relevant experience.
- Self-directed engineer/research scientist who can drive new research ideas from conception through experimentation to productionisation in a rapidly shifting landscape.
- A track record of landing research impact in multi-team collaborative environments with senior stakeholders.