Microsoft Security aims to protect customers from digital threats, regulatory scrutiny, and the complexity of their digital estates by developing advanced AI-driven security solutions. This role focuses on applying AI to automate and enhance red-teaming operations within Microsoft's complex digital environment.
Requirements
- 5+ years of professional experience in software development and applied machine learning, including building and deploying production-quality systems.
- 2+ years of hands-on experience with large language models (LLMs), such as prompt engineering, fine-tuning, or developing and deploying LLM-based applications in production.
- 2+ years of hands-on experience with graph theory, graph algorithms, and graph machine learning, including practical work with large-scale graph data in real-world environments.
- Experience with building, scaling, and deploying graph-based solutions and/or multi-agent frameworks (e.g., AutoGen, LangGraph, crewAI) in cloud environments.
- Ability to translate advanced graph and LLM research into production-grade software that delivers measurable business or security impact at scale.
- Proficiency in Python is required, with significant experience developing robust, production-grade AI/ML systems using object-oriented programming.
- Experience combining LLMs with knowledge graphs or graph-based data.
Responsibilities
- Research, design, and develop advanced graph-based and LLM-powered AI systems to automate red-teaming and adversarial simulation.
- Build and maintain large-scale knowledge graphs and leverage LLMs for representing, reasoning about, and simulating attack paths, threat relationships, and mitigation strategies within Microsoft’s cloud and enterprise environments.
- Apply state-of-the-art graph algorithms, graph neural networks, and LLM techniques to real-world security data.
- Collaborate with security researchers, applied scientists, and engineers to design autonomous agents and multi-agent frameworks for security testing and incident response.
- Integrate data and insights from Microsoft’s Threat Intelligence Center, Red Team, and security telemetry to inform graph and LLM modeling and simulation.
- Contribute to research prototypes and their operationalization in production systems, with a focus on scalability and robustness.
- Develop and deploy state-of-the-art graph AI models to enhance red teaming automation.
Other
- Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role.
- Although this is an individual contributor (IC) role, the Principal Applied Scientist is expected to provide technical leadership, mentor and support staff on technical aspects, and foster a collaborative, team-oriented environment.
- Embody our culture and values.
- Microsoft is an equal opportunity employer.
- If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.