Apple is looking to develop a secure software architecture for multi-modal awareness on Apple platforms. By utilizing input from cameras, microphones, and other sensors, this work will enable future Apple products to better understand the world around them while maintaining industry-leading standards for privacy and security.
Requirements
- Experience with on-device ML frameworks and systems
- Experience developing and using performance tracing, profiling, logging tools
- Excellent software design and programming skills in Swift, Objective-C, and/or C/C++
- Experience applying ML to image and video processing
- Understanding of how to develop and debug multi-threaded software
Responsibilities
- Developing an algorithm execution runtime
- Developing real-time algorithms for camera, audio, and other sensors
- Creating a corresponding system framework and APIs
- Integrating the new framework with other system components and applications to enable new experiences on future Apple products
Other
- BS or MS in Computer Science or a related field, or equivalent experience
- A passion for understanding end-to-end systems, from the user experience down to the hardware
- A proactive approach to learning and a passion for new technologies
- Apple is an equal opportunity employer that is committed to inclusion and diversity