Apple is looking for a talented engineer to help take its computer vision efforts for future Apple products to the next level, focusing on real-time, low-power world tracking and sensor calibration based on VIO, SLAM, and ML solutions, with contributions to ARKit.
Requirements
- Strong programming skills in C++.
- Solid foundation in computer vision; key areas of interest include multiple view geometry, 3D computer vision, SfM (Structure from Motion), and SLAM (Simultaneous Localization and Mapping).
- Understanding of visual-inertial sensor fusion or general sensor fusion is a plus.
- Experience in developing, training, and tuning ML models related to the above areas is a plus.
Responsibilities
- Create computer vision algorithms and deliver impactful technologies for augmented reality and device localization.
- Develop core algorithms in support of future user experiences.
- Communicate with and support external teams that use our algorithms.
- Support low-level, cross-platform efforts.
- Participate in code reviews.
- Advocate within the team for high-quality results.
Other
- BS and a minimum of 3 years of relevant industry or academic experience.
- MS or PhD in computer vision, machine learning, robotics, or a related field.
- This role is highly multi-functional, and you will work closely with highly skilled software development and ML teams developing groundbreaking algorithms.
- We work closely with Apple’s best-in-class designers to ensure the products we ship are more than technical demos: they resonate with users at a personal level.