Meta is looking to improve its Wearables products and experiences by developing speech LLMs, drawing on expertise in areas such as speech generation and understanding, multilingual modeling, and on-device speech LLMs.
Requirements
- 5+ years of experience in one or more of the following areas: machine learning, large language models, speech and audio processing, speech generation, or related fields
- Experience developing machine learning algorithms or machine learning infrastructure in Python, PyTorch, and/or C/C++
- Technical background in speech processing (ASR, TTS), speech and audio foundation models, and speech LLMs
- Track record of delivering impactful applied research to production
Responsibilities
- Apply relevant AI and machine learning techniques to build speech LLM technology that will improve Meta Wearables products and experiences
- Develop novel, accurate AI algorithms and advanced systems for large-scale applications
- Directly contribute to experiments, including designing experimental details, developing reusable code, running evaluations, and organizing results
- Work with large-scale data and contribute to the development of large-scale foundation models
- Design methods, tools, and infrastructure to push forward the state of the art in large language models
Other
- Master's degree or PhD in a relevant technical field
- 5+ years of experience
- Individual compensation is determined by skills, qualifications, experience, and location
- Meta is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process