Microsoft is accelerating its mission to secure digital technology platforms, devices, and clouds in customers' heterogeneous environments, as well as the security of its own internal estate, by developing AI capabilities that automate end-to-end red team engagements.
Requirements
- 4+ years of technical engineering experience with coding in languages such as C, C++, C#, Java, JavaScript, or Python
- 4+ years of experience in red teaming, adversarial testing, and offensive security, including threat emulation, vulnerability discovery, and ethical hacking
- 4+ years of experience in system design and cloud platforms (Azure, AWS, or GCP)
- 1+ years of experience with Large Language Models (LLMs) and agentic AI systems
- Experience with generative AI and agentic systems
- Understanding of attacker techniques and behaviors
- Experience with online services and cloud-based systems
Responsibilities
- Design, implement, and support AI-driven red team services using both generative and traditional AI techniques
- Research, experiment with, and productionize frontier AI capabilities and design patterns
- Research the latest attack techniques used by internal red teams and external threat actors
- Contribute to red team tools for use by both human operators and AI red teaming services
- Support partner development teams in contributing to our services and tools
- Partner with internal defensive security teams to improve their detection, investigation, and response capabilities
- Build strong relationships with your peers through design, code reviews, and peer mentoring
Other
- Bachelor's Degree in Computer Science or related technical field
- Ability to meet Microsoft, customer, and/or government security screening requirements
- Ability to pass the Microsoft Cloud background check upon hire/transfer and every two years thereafter
- Ability to work in a team environment and build strong relationships with peers
- Embody Microsoft's culture and values