OpenAI is looking to solve the problem of mapping, characterizing, and prioritizing cross-layer vulnerabilities in advanced AI systems to ensure the security of its technology, people, and products.
Requirements
- Are fluent across AI/ML infrastructure (data, training, inference, schedulers, accelerators) and can threat-model end-to-end.
- Have deep experience with cutting-edge offensive-security techniques.
Responsibilities
- Build an AI Stack Threat Map across the AI lifecycle, from data to deployment.
- Deliver deep-dive reports on vulnerabilities and mitigations for training and inference, focused on systemic, cross-layer risks.
- Orchestrate inputs across research, engineering, security, and policy to produce crisp, actionable outputs.
- Engage external partners as the primary technical representative; align deliverables to technical objectives and milestones.
- Perform hands-on threat modeling, red-team design, and exploitation research across heterogeneous infrastructures (compilers, runtimes, and control planes).
- Translate complex technical issues for technical and executive audiences; brief on risk, impact, and mitigations.
Other
- A current security clearance is not mandatory, but eligibility for clearance sponsorship is required.
- Have led high-stakes security research programs with external sponsors (e.g., national-security or critical-infrastructure stakeholders).
- Operate independently, align diverse teams, and deliver on tight timelines.
- Communicate clearly and concisely with experts and decision-makers.