Reality Labs Research is looking to develop the next generation of assistance systems that guide users contextually and adaptively in future AR/VR systems.
Requirements
- Proficiency in Python and machine learning libraries (NumPy, scikit-learn, SciPy, Pandas, Matplotlib, TensorFlow, PyTorch, etc.)
- Understanding of at least one of the following areas: transfer, few-shot, zero-shot, continual, and/or online learning; self-supervised learning; or multi- or cross-modal learning
- Experience with deep metric learning / neural net embedding methods
- Experience with vision-based input recognition systems, such as hand tracking or body pose estimation
- Experience working with time-series sensor data, such as IMU and audio signals
Responsibilities
- Develop, implement, and evaluate methods for learning robust representations from multi-modal egocentric data (e.g., video, audio, inertial measurement units)
- Make use of Meta’s large-scale infrastructure to scale and speed up experimentation
- Write modular research code that can be reused in other contexts
- Collaborate with other researchers
- Take on big problems and deliver clear, compelling, and creative solutions that work at scale
- Produce publishable research suitable for a top-tier ML or CV conference (e.g., NeurIPS, ICLR, CVPR, ECCV)
Other
- Currently has, or is in the process of pursuing, a PhD in Machine Learning, Computer Vision, Speech Processing, Applied Statistics, Computational Neuroscience, or a related technical field
- Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment
- Intent to return to the degree program after completion of the internship/co-op
- Research skills, including defining problems, exploring solutions, and analyzing and presenting results