Meta is looking for research engineers to design and implement models that transform partial human information into realistic VR representations. This work supports a vision of social presence in VR and AR in which people can interact with each other across distances in a way that is indistinguishable from in-person interaction.
Requirements
- Experience with realistic 3D geometry/appearance estimation or generation, or with motion modeling
- Experience with generative models such as Diffusion Models, GANs, or VAEs for image and geometry generation
- Experience with Neural Radiance Fields, Vision Transformers, or Large Language Models
Responsibilities
- Integrate foundation models into telepresence prototypes and live systems
- Build and scale state-of-the-art algorithms and models that can transform information-deficient inputs (e.g. cameras with limited visibility, pose, text, audio) into indistinguishable-from-reality VR representations (e.g. bodies, hair, clothes, motion, ...)
- Build scalable, distributed training algorithms and efficient data loading for large-scale deep learning
Other
- Currently has, or is in the process of obtaining, a Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience. Degree must be completed prior to joining Meta
- First-author publications at peer-reviewed conferences in computer vision, machine learning, or computer graphics (e.g. CVPR, ECCV, ICCV, NeurIPS, ICLR, SIGGRAPH)
- Must obtain work authorization in the country of employment at the time of hire, and maintain ongoing work authorization during employment
- Proven track record of achieving significant research results, as demonstrated by grants, fellowships, and/or patents
- Currently has, or is in the process of obtaining, a PhD in computer vision, computer graphics, machine learning, or a related field