Adapt and deploy advanced real-time smart glasses algorithms on custom embedded computer vision/machine learning processors and make them available to an extensive internal and external user community.
Requirements
Experience in implementing computer vision algorithms for efficient execution on embedded systems
Demonstrated examples of delivering an end-to-end system rather than a single component
Experience developing, debugging, and shipping software products on large code bases that span platforms and tools
Experience with low-level programming (DSPs, MCUs, etc.) and mobile accelerators
Experience with low-latency, device-to-cloud applications
Experience with cloud edge computing
Experience with zero-to-one (blank-slate) projects
Responsibilities
Execute engineering development to advance the state-of-the-art in machine perception research for Contextualized AI, both on-device and in the back end
Collaborate closely with researchers, engineers, and product managers across multiple teams at Meta to design, architect and implement prototypes
Debug system-level, multi-component issues that typically span multiple layers, from sensor to application
Other
Currently has, or is in the process of obtaining, a Bachelor's degree in Computer Science, Computer Engineering, a relevant technical field, or equivalent practical experience. Degree must be completed prior to joining Meta
Master's degree in Computer Engineering, Computer Science, or Electrical Engineering and Computer Sciences, or equivalent practical experience
4+ years of experience working in large-scale C++ and Python code bases
4+ years of experience developing and prototyping machine perception systems