
ML Infrastructure Engineer, Safeguards

Anthropic

$300,000 - $405,000
Nov 5, 2025
San Francisco, CA, US

Anthropic is seeking a Machine Learning Infrastructure Engineer to build and scale the critical infrastructure that powers its AI safety systems, ensuring Claude's safety and making AI systems more trustworthy and aligned with human values.

Requirements

  • 5+ years of experience building production ML infrastructure, ideally in safety-critical domains like fraud detection, content moderation, or risk assessment
  • Proficiency in Python and experience with ML frameworks such as PyTorch, TensorFlow, or JAX
  • Hands-on experience with cloud platforms (AWS, GCP) and container orchestration (Kubernetes)
  • Understanding of distributed systems principles and experience building systems that handle high-throughput, low-latency workloads
  • Experience with data engineering tools and building robust data pipelines (e.g., Spark, Airflow, streaming systems)
  • Experience working with large language models and modern transformer architectures
  • Experience implementing A/B testing frameworks and experimentation infrastructure for ML systems

Responsibilities

  • Design and build scalable ML infrastructure to support real-time and batch classifier and safety evaluations across our model ecosystem
  • Build monitoring and observability tools to track model performance, data quality, and system health for safety-critical applications
  • Collaborate with research teams to productionize safety research, translating experimental safety techniques into robust, scalable systems
  • Optimize inference latency and throughput for real-time safety evaluations while maintaining high reliability standards
  • Implement automated testing, deployment, and rollback systems for ML models in production safety applications
  • Contribute to the development of internal tools and frameworks that accelerate safety research and deployment
  • Partner with Safeguards, Security, and Alignment teams to understand requirements and deliver infrastructure that meets safety and production needs

Other

  • Care deeply about AI safety and the societal impacts of your work
  • Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
  • Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
  • Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, though, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
  • We encourage you to apply even if you do not believe you meet every single qualification.
  • Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.