Meta is looking to advance the state of the art in multimodal reasoning and generation research to enhance AI Assistants and contribute to its product development.
Requirements
Publications in machine learning, computer vision, NLP, or speech.
Experience writing software and executing complex experiments involving large AI models and datasets.
Experience as a first (or joint first) author on publications at peer-reviewed AI conferences (e.g., NeurIPS, CVPR, ICML, ICLR, ICCV, ACL).
Direct experience in generative AI and LLM research.
Fluency in Python and PyTorch (or equivalent).
Responsibilities
Lead, collaborate on, and execute research that pushes forward the state of the art in multimodal reasoning and generation.
Directly contribute to experiments, including designing experimental details, writing reusable code, running evaluations, and organizing results.
Push the state of the art in multimodal generative AI.
Explore new techniques for advanced reasoning and multimodal understanding for AI Assistants.
Mentor and work with AI/ML engineers to find a path from research to production.
Contribute to publications and open-sourcing efforts.
Prioritize research that can be applied to Meta's product development.
Other
Work towards long-term ambitious research goals, while identifying intermediate milestones.
Work with a large team.
Mentor other team members.
Play a significant role in healthy cross-functional collaboration.
Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment.