Apple is looking to develop a secure software architecture for multi-modal awareness on Apple platforms, enabling future Apple products to better understand the world around them while maintaining industry-leading standards for privacy and security.
Requirements
- Experience with on-device ML frameworks and systems, especially those involving image and video processing
- Excellent software design and programming skills in Swift, Objective-C, and/or C/C++
- Understanding of how to develop and debug multi-threaded software
- Experience developing and using performance tracing, profiling, and logging tools
- Experience with algorithm execution runtimes
- Experience with real-time algorithms for camera, audio, and other sensors
Responsibilities
- Developing an algorithm execution runtime
- Developing real-time algorithms for camera, audio, and other sensors
- Creating a corresponding system framework and APIs
- Integrating the new framework with other system components and applications to enable new experiences on future Apple products
- Developing and using performance tracing, profiling, and logging tools
- Debugging multi-threaded software
- Developing secure Perception Systems software architecture for Apple platforms
Other
- BS or MS in Computer Science or a related field, or equivalent experience
- A passion for understanding end-to-end systems, from the user experience down to the hardware
- A proactive approach to learning and a passion for new technologies
- Apple is an equal opportunity employer that is committed to inclusion and diversity
- Must be authorized to work in the country without sponsorship