Fractal is seeking to proactively identify, simulate, and mitigate threats to AI systems, specifically large language models (LLMs), ensuring their robustness, fairness, and security across the enterprise.
Requirements
- 10+ years of experience in machine learning.
- 6+ years of experience in cybersecurity, red teaming, or adversarial testing.
- 3+ years of experience with generative AI systems (e.g., LLMs, agents) and their unique threat surfaces.
- Familiarity with AI safety, fairness, and interpretability frameworks.
- Contributions to open-source tools, academic research, or AI security communities.
- Knowledge of secure model deployment practices in cloud and edge environments.
Responsibilities
- Define and execute the vision and roadmap for AI red teaming initiatives across the enterprise.
- Build, mentor, and lead a high-performing team of AI red teamers, adversarial ML researchers, and security engineers.
- Oversee the design and execution of red teaming exercises targeting AI systems (e.g., prompt injection, jailbreaking, data poisoning, and model extraction).
- Partner with AI/ML engineering, enterprise cybersecurity, product, legal, and compliance teams to embed red teaming insights into model development and deployment lifecycles.
- Stay ahead of the curve on adversarial ML, model exploitation techniques, and AI safety research; foster a culture of continuous learning and innovation.
- Lead investigations into AI-related security incidents and develop mitigation strategies and post-mortem analyses.
- Ensure alignment with internal policies on the responsible use of AI.
Additional Qualifications
- 6+ years of leadership experience managing technical teams and cross-functional initiatives.
- Strong communication skills with the ability to influence executive stakeholders and translate technical findings into business impact.
- Experience working in regulated industries (e.g., finance, healthcare, defense) with AI governance requirements.
- Advanced degree (MS or PhD) in Computer Science, Machine Learning, Cybersecurity, or a related field.