The Chan Zuckerberg Initiative is looking to secure the systems that enable AI use across the organization, making it safe and easy to incorporate new data and models.
Requirements
- Proficiency in Python and at least one systems-level language (e.g., Go, Rust, or C++).
- Experience with securing cloud-based and on-prem AI infrastructure.
- Strong understanding of authentication, authorization, encryption, container security, and network security.
- Nice to have: background in privacy-enhancing technologies (e.g., differential privacy, federated learning, homomorphic encryption).
Responsibilities
- Design and implement secure-by-default infrastructure and services supporting AI/ML workloads.
- Collaborate with AI/ML engineers, data engineers, and platform teams to integrate security best practices.
- Develop or acquire tooling and automation to detect and mitigate vulnerabilities specific to AI environments (e.g., model poisoning, data leakage, adversarial attacks).
- Stay current on AI threat landscapes, compliance standards (e.g., NIST AI RMF, GDPR), and emerging security frameworks for AI/ML systems.
- Leverage and contribute to open source tools and technologies.
- Monitor for and respond to emerging threats to AI/ML systems, and participate in incident response and root cause analysis.
Other
- 8+ years of experience in software engineering with a focus on security.
- Ability to work cross-functionally across the organization.
- Desire to automate systems to minimize human error.