Cisco seeks to advance the safety, reliability, and security of AI systems by identifying, assessing, and mitigating risks across its AI and machine learning products and services.
Requirements
- 5+ years of experience in cybersecurity, red teaming, penetration testing, or identifying security vulnerabilities in complex systems, or a similar background.
- Hands-on experience with adversarial testing of AI/ML systems or deep interest in AI safety and adversarial machine learning.
- Strong proficiency in programming languages such as Python, with the ability to develop tools for vulnerability assessment and automation.
- Advanced degree in Computer Science, Artificial Intelligence, or a related discipline.
- Experience in designing and securing AI/ML pipelines, with expertise in data taxonomy, labeling, and safety mechanisms.
- Familiarity with adversarial machine learning research and hands-on experimentation with AI models.
- Knowledge of regulatory and compliance standards for AI and data security.
Responsibilities
- Lead an advanced red team focused on adversarial testing of AI/ML systems, identifying vulnerabilities across application, infrastructure, and cloud environments.
- Develop and execute sophisticated adversarial tactics, techniques, and procedures (TTPs) to emulate real-world threats targeting AI models and systems.
- Partner with product teams to analyze and mitigate risks in generative AI and language models, ensuring robust safeguards against adversarial manipulation.
- Prototype and deploy tools to automate vulnerability discovery and adversarial emulation, scaling impact across Cisco’s AI platforms.
- Collaborate with blue teams and security engineers to improve detection, investigation, and incident response capabilities for AI-specific threats.
- Drive technical investigations, analyzing AI safety risks and producing actionable insights to enhance system reliability and trustworthiness.
- Design and implement frameworks for secure data pipelines, ensuring quality, scalability, and compliance with customer and regulatory requirements.
Other
- Strong entrepreneurial mindset with a passion for identifying and addressing emerging security challenges in frontier AI technologies.
- Ability to develop and scale analytics frameworks for data-driven decision-making in AI safety and adversarial testing.
- Unlimited PTO; 10 paid volunteering days; paid birthday off; 401(k) match with no vesting; generous health, dental, and vision benefits.