The Developer AI team at Google aims to transform software development by integrating the latest Generative AI research into Google's products and workflows. This involves training generative models, refining their capabilities for code generation and bug fixing, and enabling rapid iteration on new approaches within a live lab environment.
Requirements
- 5 years of experience training and deploying generative models, with a focus on real-world applications.
- Experience working with large datasets, data cleaning, pre-processing, and analysis.
- 5 years of experience with data structures/algorithms.
- Experience contributing to open-source projects or publishing at relevant conferences.
- Experience conducting research (e.g., graduate work or prior projects) and working with Gemini models or machine learning frameworks.
- Understanding of deep learning architectures and related algorithms (e.g., Transformers), and experience deploying machine learning models on Alphabet infrastructure (especially TPUs).
Responsibilities
- Collaborate with DeepMind researchers to train generative models using a unique dataset.
- Partner with Alphabet's internal engineering teams to integrate these models into their workflows, transforming them into a live lab for testing research ideas and rapidly iterating on new approaches.
- Curate and refine software engineering pre-training, instruction tuning, and evaluation datasets.
- Analyze model outputs and user feedback to continuously improve model performance and enable the use of internal software engineering data for training Gemini models.
- Explore and apply Large Language Model (LLM) post-training techniques to improve model quality for code generation, code transformation, and agentic changelist (CL) generation and bug-fixing workflows.
Other
- Bachelor's degree or equivalent practical experience.
- Excellent communication, collaboration, and problem-solving skills with a passion for innovation and generative models.