Job Board

Get Jobs Tailored to Your Resume

Filtr uses AI to scan 1,000+ jobs and find postings that perfectly match your resume.


Senior Software Development Engineer in Test

Apple

$181,100 - $318,400
Sep 28, 2025
Cupertino, CA, US

Apple Services Engineering is looking for an experienced engineer to lead quality efforts across their machine learning platform, shaping how they test and validate machine learning pipelines, data workflows, and platform services that power AI products at scale.

Requirements

  • Strong programming experience in Java or Python, with ability to write reusable test frameworks.
  • Proven ability to lead testing efforts for large-scale backend or platform systems (ideally including microservices or cloud-based architectures).
  • Deep understanding of test design methodologies, CI/CD practices, and test automation at scale.
  • Experience with test frameworks and tools such as PyTest, JUnit, or equivalent.
  • Experience with performance testing of large scale systems.
  • Skilled in driving multi-functional quality programs and influencing engineering architecture and tooling.
  • Experience testing AI/ML systems or platforms that include ML model training or data pipelines.
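As an illustration of the "reusable test frameworks" the requirements describe, here is a minimal PyTest sketch that validates the output schema of an ML data pipeline. All names here (`run_pipeline`, `sample_rows`) are invented for illustration, not part of any actual Apple codebase:

```python
import pytest

def run_pipeline(raw_rows):
    """Toy stand-in for an ML data pipeline step under test."""
    return [{"id": r["id"], "score": r["value"] / 100.0} for r in raw_rows]

@pytest.fixture
def sample_rows():
    # Reusable fixture: shared test data for every schema test.
    return [{"id": 1, "value": 42}, {"id": 2, "value": 87}]

def test_output_schema(sample_rows):
    out = run_pipeline(sample_rows)
    # One output record per input record.
    assert len(out) == len(sample_rows)
    for rec in out:
        # Each record exposes exactly the expected fields,
        # and the score is normalized to [0, 1].
        assert set(rec) == {"id", "score"}
        assert 0.0 <= rec["score"] <= 1.0
```

In a real framework the fixture would pull representative data from the pipeline's staging environment rather than hard-coded rows.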

Responsibilities

  • Lead Quality Strategy for ML Platform: Own and define the testing strategy for end-to-end ML pipelines, data flows, and AI platform services.
  • Tooling & Infrastructure Influence: Guide the selection and integration of tools and platforms that support scalable test automation, data validation, continuous training (CT), and continuous integration/continuous delivery (CI/CD) in ML workflows.
  • Collaborate with ML Engineers & Data Scientists: Partner closely with AI/ML engineers, MLOps, and data science teams to ensure testability, model governance, and validation of ML outputs.
  • Champion Best Practices: Define and enforce standards for quality in ML systems - including unit, integration, regression, and fairness testing.
  • Measure & Improve Quality: Define and track quality metrics such as test coverage for ML pipelines, test flakiness, and pipeline reliability.
  • Mentor Engineering Teams: Influence ML Engineering and Platform teams to adopt a quality-driven approach in their design and implementation.
  • Stay Ahead of AI Testing Trends: Explore new tools and research in AI quality assurance, ML testing frameworks, and integrate them where beneficial.
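To make the "test flakiness" metric above concrete: one common definition is the fraction of consecutive reruns in which a test changed verdict. A hypothetical sketch (not Apple's actual tooling):

```python
def flakiness_rate(results):
    """Fraction of consecutive reruns where the pass/fail verdict flipped.

    results: list of booleans, one per rerun of the same test
             (True = pass, False = fail).
    """
    if len(results) < 2:
        # A single run (or none) gives no evidence of flakiness.
        return 0.0
    flips = sum(1 for a, b in zip(results, results[1:]) if a != b)
    return flips / (len(results) - 1)

# A test that alternates verdict on every rerun is maximally flaky.
print(flakiness_rate([True, False, True, False]))  # 1.0
```

Tracking this per test across CI runs makes it straightforward to quarantine the flakiest tests first.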

Other

  • 10+ years in software development and/or test automation, with at least 3 years leading complex, distributed system testing.
  • B.S. in Computer Science or similar field, M.S. preferred.
  • A general understanding of the machine learning lifecycle.
  • Experience working on projects using GenAI.
  • Experience working with cloud platforms (AWS/GCP/Azure) and containerized environments (Docker, Kubernetes).