Apple is looking to build foundation models with fundamental general capabilities, such as the understanding and generation of text, images, speech, video, and other modalities, and to apply these models to Apple products.
Requirements
- Web-scale information retrieval
- Human-like conversational agents
- Multi-modal perception for existing products and future hardware platforms
- On-device intelligence and learning with strong privacy protections
- Proven track record in training or deployment of large models or building large-scale distributed systems
- Proficient programming skills in Python and at least one deep learning toolkit such as JAX, PyTorch, or TensorFlow
- Ability to work with deep learning models and large-scale distributed systems
Responsibilities
- Building infrastructure, datasets, and models with fundamental general capabilities
- Applying foundation models to Apple products
- Tackling challenging problems in foundation models and deep learning
- Working on natural language processing, multi-modal understanding, and combining learning with knowledge
- Building systems that push the frontier of deep learning in terms of scaling, efficiency, and flexibility
- Identifying and developing novel applications of deep learning in Apple products
- Improving the experience of millions of users with deep learning models
Other
- Ability to work in a collaborative environment
- PhD in Computer Science or a related technical field, or equivalent practical experience
- Apple is an equal opportunity employer that is committed to inclusion and diversity
- We seek to promote equal opportunity for all applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or other legally protected characteristics