Google needs to enhance its resilience against adversarial attacks on its ML-based products.
Requirements
- 5 years of experience with one or more of the following: Speech/audio, reinforcement learning, ML infrastructure, or specialization in another ML field.
- 5 years of experience leading ML design and optimizing ML infrastructure.
- Experience in adversarial testing, red teaming, GenAI/AI safety, GenAI/AI ethics and responsibility, or a similar area.
- 8 years of experience with data structures/algorithms (preferred).
- Experience in AI/ML security research, including areas like adversarial machine learning, prompt injection, model extraction, or privacy-preserving ML (preferred).
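To make the "adversarial machine learning" requirement above concrete, the following is a minimal, illustrative sketch of one classic attack in that area, the Fast Gradient Sign Method (FGSM), applied to a toy logistic-regression model. All names, weights, and values here are hypothetical and chosen only to show the shape of the technique, not any real product or system.

```python
import numpy as np

def sigmoid(z):
    # Standard logistic function.
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Perturb input x to increase the loss of a logistic-regression model.

    For p = sigmoid(w.x + b), the gradient of the cross-entropy loss
    with respect to x is (p - y) * w; FGSM steps eps in its sign direction.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy weights and a correctly classified input (all values illustrative).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # decision score w.x + b = 1.5 > 0, so class 1
y = 1.0

x_adv = fgsm_attack(x, y, w, b, eps=1.0)
# The perturbed point moves against the weight vector and flips the
# sign of the decision score, changing the predicted class.
```

An ML red team exercise would use far more realistic models and constraints, but the core idea is the same: perturb inputs along the loss gradient to expose how brittle a deployed model's decisions can be.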
Responsibilities
- Develop and expand the machine learning (ML) red team program and its overall impact.
- Plan, lead, and execute realistic ML red team exercises, stepping into the role of an attacker targeting ML deployments in our products.
- Design and build tools and infrastructure to support ML red team exercises.
- Collaborate closely with product teams to help them identify and implement mitigations against successful attacks on ML deployments.
- Test and improve our ability to detect specific classes of attacks.
Other
- Bachelor’s degree or equivalent practical experience.
- Master’s degree or PhD in Engineering, Computer Science, or a related technical field (preferred).
- 8 years of experience in software development.
- 3 years of experience in a technical leadership role leading project teams and setting technical direction (preferred).
- 3 years of experience working in a complex, matrixed organization involving cross-functional or cross-business projects (preferred).