Meta is looking to adapt and deploy advanced real-time smart glasses algorithms on custom embedded computer vision/machine learning processors and make them available to an extensive internal and external user community.
Requirements
- Experience in implementing computer vision algorithms for efficient execution on embedded systems
- Demonstrated examples of delivering an end-to-end system rather than a single component
- Experience developing, debugging, and shipping software products on large code bases that span platforms and tools
- Experience with low-level programming (e.g., DSPs, MCUs) and mobile accelerators
- Experience with low-latency device-to-cloud applications
- Experience with cloud and edge computing
- Experience with zero-to-one, blank-slate projects
Responsibilities
- Plan and execute engineering development to advance the state-of-the-art in machine perception research for Contextual AI, both on-device and on the back end
- Collaborate closely with researchers, engineers, and product managers across multiple teams at Meta to design, architect, and implement prototypes
- Debug complex, system-level, multi-component issues that typically span multiple layers, from sensor to application
- Lead major initiatives, projects, rollouts and phased releases
Other
- Bachelor's degree in Computer Science, Computer Engineering, relevant technical field, or equivalent practical experience
- Master's degree in Computer Engineering, Computer Science, or Electrical Engineering and Computer Sciences or equivalent practical experience
- 5+ years of experience working in large-scale C++ and Python code bases
- 4+ years of experience developing and prototyping machine perception systems
- Doctor of Philosophy (Ph.D.) in Computer Engineering, Computer Science, or Electrical Engineering and Computer Sciences or equivalent practical experience