Accelerate research on strategic projects that enable trustworthy, robust, and reliable agentic systems, working with a group of research scientists and engineers on a mission-driven team. Together, you will apply ML and other computational techniques to a wide range of challenging problems. Ensuring that such agents are reliable, secure, and trustworthy is a major scientific and engineering challenge with huge potential impact.
Requirements
- Strong programming experience.
- Demonstrated record of implementing LLM pipelines in Python.
- Strong AI and machine learning background.
- Experience applying machine learning techniques to problems around scalable, robust, and trustworthy model deployments.
- Experience with GenAI language models, programming languages, compilers, formal methods, and/or private storage solutions.
Responsibilities
- Invent and implement novel recipes for making agents safer, both by improving the models that power agents and by improving the systems built around them
- Develop strategies to hill-climb leaderboards and to debug performance and safety issues in frontier agents
- Integrate novel agentic technologies into research- and production-grade prototypes
- Work with product teams to gather research requirements and consult on the deployment of research-based solutions to help deliver value incrementally
- Amplify impact by generalizing solutions into reusable libraries and frameworks for safer AI agents across Google, and by sharing knowledge through design docs, open-source contributions, or external blog posts
Other
- PhD in computer science, security, or a related field, or equivalent practical experience
- Passion for accelerating the development of safe agents using innovative technologies, demonstrated through a portfolio of prior projects (GitHub repos, papers, blog posts)
- Demonstrated success in creative problem-solving for scalable teams and systems
- A real passion for AI!