GitLab is seeking a Senior AI Product Security Researcher to proactively identify and validate vulnerabilities in their AI-powered DevSecOps platform, ensuring the security of their platform and customers as they transform software development with AI.
Requirements
- 5+ years of experience in security research, penetration testing, or offensive security roles, with demonstrated expertise in AI/ML security
- Hands-on experience discovering and exploiting vulnerabilities in AI systems and platforms
- Strong understanding of AI attack vectors including prompt injection, agent manipulation, and workflow exploitation
- Proficiency in Python with experience in AI frameworks and security testing tools
- Experience with offensive security tools and vulnerability discovery methodologies
- Ability to read and analyze code across multiple languages and codebases
- Strong analytical and problem-solving skills with creative thinking about attack scenarios
Responsibilities
- Identify and validate security vulnerabilities in GitLab's AI systems through hands-on testing, developing proof-of-concept exploits that demonstrate real-world attack scenarios
- Execute comprehensive penetration testing targeting AI agent platforms, including prompt injection, jailbreaking, and workflow manipulation techniques
- Research emerging AI security threats and attack techniques to assess their potential impact on GitLab's AI-powered platform
- Design and implement testing methodologies and tools for evaluating AI agent security and multi-agent system exploitation
- Create detailed technical reports and advisories that translate complex findings into actionable remediation strategies
- Collaborate with AI engineering teams to validate security fixes through iterative testing and verification
- Contribute to the development of AI security testing frameworks and automated validation tools
Other
- Excellent written communication skills for documenting technical findings and creating security advisories
- Ability to translate technical findings into clear risk assessments and remediation recommendations
- Direct experience testing AI agent platforms, conversational AI systems, or AI orchestration architectures
- Published security research or conference presentations on AI security topics
- Background in software engineering with distributed systems expertise