Meta’s Reality Labs Research is seeking to develop novel, state-of-the-art AI algorithms that infer human behavior patterns, with an emphasis on those that inform attention, cognition, or emotion.
Requirements
- 3+ years of experience with Python
- Experience with a common machine learning framework such as PyTorch
- Experience with machine learning for computer vision
- Experience with multimodal sensing platforms, data collection, and multimodal signal processing and analysis, including building robust models that solve complex tasks from raw sensor streams
- Experience with biosignals, behavioral signals, or egocentric data from wearable sensors
- Experience with multimodal deep learning approaches and research
- Experience with large language models (LLMs)
Responsibilities
- Using data from wearable devices, employ state-of-the-art AI algorithms to infer human behavior patterns that inform attention, cognition, or emotion.
- Develop data collection strategies, benchmarks, and metrics to validate and improve efficiency, scalability, and stability of these models.
- Create tools, infrastructure, and documentation to accelerate research.
- Perform code reviews that improve software engineering quality.
- Develop end-to-end experiential platforms for wearable AI that use cutting-edge generative AI and language models to validate the impact of these signals.
Other
- Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience.
- PhD in Computer Science, Human-Computer Interaction, or a related field, plus 2+ years of experience.
- Proven track record of solving complex challenges with multimodal ML, as demonstrated through grants, fellowships, patents, or publications at conferences such as CVPR, NeurIPS, CHI, or equivalent.
- Demonstrated ability to learn constantly, dive into new areas with unfamiliar technologies, and embrace the ambiguity of augmented reality/virtual reality (AR/VR) problem solving.