Advance Meta's device-driven AI Assistant effort (e.g., Ray-Ban smart glasses and future wearable devices) by building state-of-the-art LLMs that support on-device and on-server use cases, optimizing for natural voice interaction, and delivering knowledge-grounded, actionable, personalized AI through a unique voice user interface.
Requirements
- Experience developing and implementing speech recognition algorithms/systems and training models, including LLMs
- Experience with machine learning libraries such as PyTorch and TensorFlow
- Familiarity with scripting languages such as Python and shell scripts
- Experience developing scalable machine learning models in at least one of the following areas: automatic speech recognition, speech synthesis, LLMs, or related areas
- Experience with large-scale model training, implementing algorithms, and evaluating speech-based systems
Responsibilities
- Apply relevant AI and machine learning techniques to build speech/LLM technology that improves Meta Wearables products and experiences
- Develop novel, accurate AI algorithms and advanced systems for large-scale applications
- Directly contribute to experiments, including designing experimental details, developing reusable code, running evaluations, and organizing results
- Work with large-scale data and contribute to the development of large-scale foundation models
- Design methods, tools, and infrastructure to push forward the state of the art in large language models
Other
- 2+ years of work experience in a university, industry, or government lab, with an emphasis on AI research and development in speech recognition, speech synthesis, natural language understanding, machine learning, deep learning, or related fields
- Experience taking ideas from research to production
- Experience solving complex problems and weighing alternative solutions, tradeoffs, and diverse points of view to determine a path forward
- Experience working and communicating cross-functionally in a team environment