At Apple, the goal is to create innovative product features for Apple Watch and other devices that improve lives and the world by using sensing and machine learning to understand people, activities, connections, and environments.
Requirements
- Proficient in the full ML development cycle: data collection, model training and optimization, metric definition, evaluation, failure analysis, and model deployment to resource-constrained devices
- Solid knowledge of machine learning methods, statistical analysis, and predictive modeling using time-series data
- Strong Python skills, with experience writing production-quality code and working with deep learning frameworks such as PyTorch or TensorFlow (an illustrative sketch of this kind of modeling follows this list)
- Experience with Swift or Objective-C and with developing on Apple platforms
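For illustration only, here is a minimal PyTorch sketch of the kind of time-series modeling these requirements describe: a small 1D-CNN classifier over fixed-length motion-sensor windows. The architecture, the four activity classes, and the synthetic accelerometer data are assumptions chosen for brevity, not anything specific to this role or to Apple's models.

```python
import torch
import torch.nn as nn

class MotionActivityClassifier(nn.Module):
    """1D-CNN over fixed-length accelerometer windows (channels = x/y/z axes)."""

    def __init__(self, in_channels: int = 3, num_classes: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global average pooling over time
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        return self.head(self.encoder(x).squeeze(-1))

# Toy training step on synthetic data, just to show the shape of the loop.
model = MotionActivityClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

windows = torch.randn(16, 3, 100)    # 16 windows of 100 samples, 3 axes (synthetic)
labels = torch.randint(0, 4, (16,))  # 4 hypothetical activity classes
loss = criterion(model(windows), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```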
Responsibilities
- Develop and optimize ML algorithms that leverage multimodal sensor data, such as motion and audio, to detect user activities and contextual situations, enhancing our understanding of real-world behavior
- Integrate and deploy ML models on-device (see the deployment sketch after this list), building power-efficient frameworks that encapsulate models, interface seamlessly with sensors, and communicate effectively with UX layers
- Drive innovation from concept to deployment, ensuring promising research ideas evolve into high-impact, user-facing features
- Design and implement tools, analytics, and processes to perform in-depth, hands-on analysis for validating and quantifying algorithm performance both offline and on-device
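As an illustration of the on-device deployment path mentioned above, the following sketch shows one common route: converting a trained PyTorch model to Core ML with coremltools so it can be loaded from Swift on Apple platforms. The placeholder model, input name, shapes, and output filename are assumptions for the example, not the team's actual pipeline.

```python
import torch
import torch.nn as nn
import coremltools as ct

# Placeholder model standing in for a trained activity classifier.
model = nn.Sequential(
    nn.Conv1d(3, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 4),
).eval()

example_window = torch.randn(1, 3, 100)          # (batch, sensor axes, samples)
traced = torch.jit.trace(model, example_window)  # TorchScript graph for conversion

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="motion_window", shape=example_window.shape)],
    convert_to="mlprogram",                      # ML Program format (.mlpackage)
)
mlmodel.save("ActivityClassifier.mlpackage")     # loadable from Swift via Core ML
```

The saved .mlpackage can then be bundled into an Xcode project and invoked through the generated Core ML interface, keeping inference on-device.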
Other
- M.S. or Ph.D. in Machine Learning, Computer Science, or a related field
- Excellent communication and collaboration skills, with the ability to work independently or in small teams
- Ability to work in a highly collaborative environment with cross-functional partners across Apple