The Siri Speech team at Apple creates groundbreaking technology spanning large-scale systems, spoken language, big data, and artificial intelligence, building production-quality models that power natural voice experiences used by millions.
Requirements
- Strong proficiency in Python; good coding skills in Bash scripting and in at least one object-oriented or functional language such as Java, C, C++, Go, or Rust.
- Experience with machine learning algorithms and techniques, including deep learning.
- Hands-on experience with TensorFlow and/or PyTorch; familiarity with scikit-learn.
- Experience with version control systems such as Git.
- Good knowledge of machine learning technologies related to speech and audio processing; experience with image processing is a plus.
Responsibilities
- Design, develop, and implement machine learning models for speech, NLP, and multimodal applications.
- Investigate and fine-tune deep learning architectures for natural voice interaction and speaker recognition.
- Integrate ML solutions into production systems and existing workflows at scale.
- Collaborate with data scientists, software engineers, and product managers to define requirements and deliverables.
- Write clean, efficient, well-documented code and participate in code reviews.
- Analyze large datasets and apply state-of-the-art methods to build production-quality models.
- Stay current with advances in ML, HCI, LLMs, speech recognition, and signal processing and contribute to research.
Other
- Mentor junior team members and contribute to engineering best practices.
- M.S. in Computer Science or a related field, or a Bachelor’s degree with equivalent experience.
- Strong problem-solving skills and ability to work independently as well as in a team environment.
- Excellent written and verbal communication skills.
- Passion for creating and shipping phenomenal products, and the ability to thrive in a fast-paced environment with rapidly changing priorities.