Advance Apple's data infrastructure, driving innovation by implementing cutting-edge capabilities and tools that will power Apple Services Engineering.
Requirements
- Deep experience with scalable stream processing systems.
- In-depth knowledge of distributed data processing frameworks such as Apache Spark and Apache Beam.
- Good knowledge of Apache Iceberg and Apache Kafka.
- Experience building or maintaining Spark connectors to external data sources.
- Knowledge of YARN, Kubernetes, or another compute substrate.
- A successful track record or demonstrated aptitude as an engineer working on distributed systems.
- Experienced power user of Apache Spark (at least 2 years running production applications), or a committer to Apache Spark.
Responsibilities
- Develop and deploy new features for Apple’s internal data platform.
- Regularly contribute to open source.
- Collaborate with cross-functional teams to design and implement automation tools that streamline operations and reduce manual intervention.
- Optimize our cloud-based services to ensure they scale effectively, handling increasing loads while maintaining high performance.
- Innovate on API development and integration, enabling seamless communication between our applications and services.
Other
- Passion for pushing the limits of distributed stateful stream processing frameworks to extract every ounce of performance from them.
- Enthusiasm for collaborating with other groups within Apple as well as with communities outside Apple.
- 8+ years of professional experience.
- BS or MS degree in computer science, or equivalent.
- Apple is an equal opportunity employer that is committed to inclusion and diversity.