At Google DeepMind, the business problem is to leverage AI to defend against cyberattacks by automatically fixing discovered vulnerabilities and hardening code: AI's offensive cyber capability is advancing rapidly, and human experts are struggling to keep up with the growing backlog of vulnerability reports.
Requirements
- Proven knowledge and experience of C/C++ and Python
- Strong, hands-on experience with LLMs (e.g., fine-tuning, including RL fine-tuning; inference; prompting; agent development)
- Experience designing and developing evaluation benchmarks and implementing scalable evaluation pipelines
- Proven experience with prominent ML frameworks and tools
- Hands-on experience with AI-based code generation or editing tools
- Experience with developer tools (compilers, runtimes, dynamic/static analyzers, web frameworks, etc.)
- Familiarity with established secure-coding practices
Responsibilities
- You will be developing an agent that leverages powerful AI models, compilers, runtimes, static/dynamic analyzers, and formal verification tools to harden code against a wide range of vulnerabilities across different programming languages and frameworks
- You will be rapid-prototyping initial concepts and designing and running experiments to achieve our goals
- Your work will be influential both in the research community and in products poised to have tremendous impact
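To give a flavor of the agent work described above, here is a minimal sketch of a propose-and-verify hardening loop. Everything in it is illustrative: `harden`, `toy_analyzer`, and `toy_model` are hypothetical stand-ins for the real LLM and the static/dynamic analyzers the role involves, not an actual DeepMind implementation.

```python
from typing import Callable, Optional

def harden(code: str,
           propose_patch: Callable[[str, str], str],
           find_vulnerability: Callable[[str], Optional[str]],
           max_rounds: int = 5) -> str:
    """Repeatedly ask a patch-proposing model for a fix until the
    checker (e.g. a static analyzer) no longer reports a finding."""
    for _ in range(max_rounds):
        report = find_vulnerability(code)
        if report is None:  # checker is satisfied: hardening done
            return code
        code = propose_patch(code, report)  # model rewrites the code
    return code  # give up after max_rounds; a real agent would escalate

# Toy stand-ins: the "analyzer" flags unbounded strcpy, and the
# "model" swaps in a bounded strncpy call.
def toy_analyzer(code: str) -> Optional[str]:
    return "unbounded strcpy" if "strcpy(" in code else None

def toy_model(code: str, report: str) -> str:
    return code.replace("strcpy(dst, src)",
                        "strncpy(dst, src, sizeof dst)")

patched = harden("strcpy(dst, src);", toy_model, toy_analyzer)
```

A production agent would replace the stubs with LLM calls and real analyzer/fuzzer/verifier feedback, but the loop structure (generate a candidate patch, check it with tools, iterate) is the core pattern.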
Other
- MSc or PhD/DPhil degree in Computer Science (or relevant majors), with emphasis on ML, or equivalent practical experience
- Independent, self-starter attitude
- Passion for the mission above
- Flexibility and adaptability are a must
- Willingness to help out with whatever moves prototypes forward