Microsoft Security aspires to make the world a safer place for all by reshaping security and empowering every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security AI Research team develops advanced AI-driven security solutions to protect Microsoft and its customers by addressing evolving security challenges across Microsoft’s complex digital environment.
Requirements
- 5+ years of professional experience in software development and applied machine learning, including building and deploying production-quality systems.
- 2+ years of hands-on experience with large language models (LLMs), such as prompt engineering, fine-tuning, or developing and deploying LLM-based applications in production.
- 2+ years of hands-on experience with graph theory, graph algorithms, and graph machine learning, including practical work with large-scale graph data in real-world environments.
- Experience with building, scaling, and deploying graph-based solutions and/or multi-agent frameworks (e.g., AutoGen, LangGraph, crewAI) in cloud environments.
- Ability to translate advanced graph and LLM research into production-grade software that delivers measurable business or security impact at scale.
- Proficiency in Python is required, with significant experience developing robust, production-grade AI/ML systems using object-oriented programming.
- Experience in cybersecurity domains such as red teaming, adversary emulation, or threat intelligence.
Responsibilities
- Research, design, and develop advanced graph-based and LLM-powered AI systems to automate red-teaming and adversarial simulation.
- Build and maintain large-scale knowledge graphs and leverage LLMs for representing, reasoning about, and simulating attack paths, threat relationships, and mitigation strategies within Microsoft’s cloud and enterprise environments.
- Apply state-of-the-art graph algorithms, graph neural networks, and LLM techniques to real-world security data.
- Collaborate with security researchers, applied scientists, and engineers to design autonomous agents and multi-agent frameworks for security testing and incident response.
- Integrate data and insights from Microsoft’s Threat Intelligence Center, Red Team, and security telemetry to inform graph and LLM modeling and simulation.
- Contribute to research prototypes and their operationalization in production systems, with a focus on scalability and robustness.
- Develop and deploy state-of-the-art graph AI models to enhance red teaming automation.
Other
- Although this is an individual contributor (IC) role, the Principal Applied Scientist is expected to provide technical leadership, mentor and support team members on technical matters, and foster a collaborative, team-oriented environment.
- Embody our culture and values.
- Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role.
- Strong written and verbal communication skills; ability to present complex technical concepts clearly.
- Microsoft is an equal opportunity employer.