Job Board

Get Jobs Tailored to Your Resume

Filtr uses AI to scan 1,000+ jobs and find postings that closely match your resume


Research Scientist Tech Lead, Contextual Security

DeepMind

$248,000 - $349,000
Dec 16, 2025
Mountain View, CA, US • New York, NY, US • San Francisco, CA, US

Google DeepMind aims to unblock the strongest and most helpful agentic GenAI capabilities in the real world by addressing key challenges in contextual security, particularly prompt injection, so that Gemini and other GenAI models handle sensitive user data and permissions as capably as highly experienced privacy and security engineers.

Requirements

  • Demonstrated experience driving complex research projects in AI security, privacy, or safety
  • Experience managing teams of ~5-10 individual contributors
  • Demonstrated experience driving complex projects to landing in production or adoption in open source
  • Demonstrated experience adapting research outputs into impactful model improvements in a rapidly shifting landscape, with a strong sense of ownership
  • Research experience and publications in ML security, privacy, safety, or alignment
  • Experience working on contextual security or prompt injection

Responsibilities

  • Lead the existing team in addressing key challenges in contextual security, particularly prompt injection, from fundamental research through to delivering solutions for key products.
  • Manage a team of researchers with extensive backgrounds in security and machine learning, and grow the team to keep pace with the rapidly evolving space of contextual security problems.
  • Identify unsolved, impactful privacy and security problems in generative models through auto-red teaming, with priorities guided by securing critical products and product features.
  • Build post-training data and tools hypothesized to improve model capabilities in the problem areas, test those hypotheses through evaluations and auto-red teaming, and contribute successful solutions to Gemini and other models.
  • Amplify the impact by generalizing solutions into reusable libraries and frameworks for protecting agents and models across Google, and by sharing knowledge through publications, open source, and education.

Other

  • PhD in Computer Science or related quantitative field OR 5+ years of relevant experience.