Apple is looking to develop innovative, ML-driven product features for Apple Watch that push the boundaries of sensing and human-computer interaction. The goal is to improve lives by using on-device sensing and machine learning to understand people, their activities, connections, and environments.
Requirements
- Solid knowledge of machine learning methods, statistical analysis, and predictive modeling using time-series data
- Strong Python skills with experience writing production-quality code and working with deep learning frameworks such as PyTorch or TensorFlow
- Experience with Swift or Objective-C and developing on Apple platforms
- Proficient in the full ML development cycle: data collection, model training and optimization, defining metrics, evaluation, failure analysis, and model deployment to resource-constrained devices
Responsibilities
- Develop and optimize ML algorithms that leverage multimodal sensor data, such as motion and audio, to detect user activities and contextual situations, deepening our understanding of real-world behavior
- Integrate and deploy ML models on-device, building power-efficient frameworks that encapsulate models, interface seamlessly with sensors, and communicate effectively with UX layers
- Drive innovation from concept to deployment, ensuring promising research ideas evolve into high-impact, user-facing features
- Design and implement tools, analytics, and processes to perform in-depth, hands-on analysis for validating and quantifying algorithm performance both offline and on-device
- Prototype new experiences
- Develop and ship products
- Publish and present our work
Other
- Excellent communication and collaboration skills, with the ability to work independently or in small teams
- M.S. or Ph.D. in Machine Learning, Computer Science, or a related field