Apple is seeking to build next-generation features using multi-modal sensing, with a focus on motion sensing, sensor fusion, and interactive technologies, to create intuitive experiences for customers across Apple products such as iPhone, Apple Watch, AirPods, Apple Vision Pro, and more.
Requirements
- Experience developing for embedded or real-time systems
- Experience leveraging distributed compute/storage models when the scale of data calls for it
- Experience designing and implementing interfaces between algorithms, software, and firmware
- Experience with multi-modal inputs and models, including IMU, images, video, and/or audio
- Strong proficiency in Python and with machine learning tools and frameworks, e.g., PyTorch and TensorFlow
Responsibilities
- Designing and driving end-to-end features with specialists across the company
- Developing machine learning data pipelines for training and testing
- Deploying efficient, low-power models and algorithms
- Delivering software to our customers, from prototype through release
Other
- Results-oriented, with a proven ability to prioritize effectively and deliver on schedule
- Excellent communication and collaboration skills
- Strong product sense, including the ability to balance technical feasibility with user experience
- MS, PhD, or 5+ years of experience in machine learning, computer science, or a related field