Microsoft Security aspires to make the world a safer place by reshaping security and empowering users, customers, and developers with a security cloud. The NEXT team, as the incubation and research arm of Microsoft Security AI (MSECAI), is building the next generation of AI-native security products to address evolving digital threats and complexity.
Requirements
- 6+ years of experience driving complex, cross-functional initiatives; experience leading without authority across multiple teams.
- 3+ years working with Machine Learning (ML)/Artificial Intelligence (AI) systems (e.g., Large Language Models (LLMs)/Generative AI (GenAI), retrieval/Retrieval-Augmented Generation (RAG), model serving, experimentation platforms, data pipelines) including establishing evaluation metrics and improving model quality.
- Proven program leadership, communication, and stakeholder management skills, with the ability to influence leaders and make data-informed decisions.
- Proven track record shipping cloud services or platforms at scale (multi-tenant, high-throughput) with measurable customer and business impact.
- Security domain expertise (e.g., threat detection/response, SIEM/SOAR, identity, endpoint, cloud security) and familiarity with analyst workflows.
- Experience with GenAI/LLM techniques and tooling (prompt engineering, retrieval/vector stores, agents/tool use, content safety/guardrails, offline/online evaluation frameworks, vibe coding); a minimal RAG sketch follows this list.
- Hands-on coding ability in one or more languages (e.g., Python, C#, C++, Rust, JavaScript/TypeScript); comfortable prototyping, reading Pull Requests (PRs), and engaging deeply in technical design reviews.
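To make the retrieval/RAG and evaluation expectations above concrete, here is a minimal, illustrative sketch in Python. Everything in it is hypothetical (the corpus, function names, and the stubbed model call are stand-ins, not a Microsoft or product API): it retrieves context with a toy bag-of-words similarity, grounds a stubbed answer in that context, and scores it with a simple offline groundedness metric.

```python
from collections import Counter
import math

# Toy corpus standing in for a security knowledge base (illustrative only).
DOCS = {
    "kb1": "Phishing emails often spoof identity providers to harvest credentials.",
    "kb2": "SIEM rules correlate sign-in anomalies with endpoint alerts.",
    "kb3": "Rotate exposed keys immediately and audit recent access.",
}

def embed(text: str) -> Counter:
    """Bag-of-words 'embedding' -- a stand-in for a real vector store."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most similar docs -- RAG's retrieval step."""
    q = embed(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, embed(DOCS[d])), reverse=True)
    return [DOCS[d] for d in ranked[:k]]

def answer(query: str) -> str:
    """Ground a (stubbed) model call in retrieved context."""
    context = " ".join(retrieve(query))
    # A real system would call an LLM here; we stub it for the sketch.
    return f"Based on context: {context}"

def groundedness(answer_text: str, context: list[str]) -> float:
    """Offline eval metric: fraction of answer tokens found in the context."""
    ctx_tokens = set(" ".join(context).lower().split())
    ans_tokens = answer_text.lower().split()
    return sum(t in ctx_tokens for t in ans_tokens) / len(ans_tokens)

query = "How do I respond to leaked credentials?"
print(groundedness(answer(query), retrieve(query)))
```

A production system would swap the toy embedding for a real vector store, put an LLM behind safety guardrails in place of the stub, and run metrics like this one at scale in an offline/online evaluation framework.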
Responsibilities
- Define the technical vision, strategy, and roadmap for AI-native incubation initiatives; align stakeholders across Security Copilot, Defender, Sentinel, Entra, Purview, Azure AI Foundry, and Microsoft AI to deliver cohesive customer value.
- Lead zero-to-one (0→1) incubation R&D through MVP and private preview, then drive one-to-many (1→N) platformization and scale to GA; make principled tradeoffs across quality, latency, reliability, cost, and safety.
- Provide hands-on technical leadership: prototype in code, review designs and Pull Requests (PRs), define Application Programming Interfaces (APIs)/data contracts, build comprehensive, well-architected systems, and establish evaluation frameworks to de-risk complex systems.
- Set strategy for AI-native security experiences and platform components: where to use Large Language Models (LLMs) versus classical Machine Learning (ML), retrieval/Retrieval-Augmented Generation (RAG) design, grounding, model routing/fallbacks, and safety guardrails to meet customer outcomes and Service Level Objectives (SLOs); a routing sketch follows this list.
- Ensure Responsible AI, privacy, and security guardrails are designed in from day one; coordinate safety reviews, abuse prevention, compliance, and incident readiness.
- Lead v-teams and mentor others; cultivate a builder culture of velocity and quality as a force multiplier.
- Engage directly with enterprise customers and the field to co-design solutions and land adoption; communicate program status and strategy to executives with hands-on, real-code demonstrations.
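As one illustration of the routing/fallback tradeoff named above, here is a minimal sketch. The model names, costs, and success rates are hypothetical stand-ins for any real serving stack: easy prompts try a cheap model first and fall back to a larger one on failure, hard prompts skip straight to the large model, and a canned safe response serves as the last-resort guardrail.

```python
import random

# Hypothetical model tiers: (name, cost per call, simulated success rate).
MODELS = [("small-fast", 0.001, 0.80), ("large-accurate", 0.02, 0.98)]

def call_model(name: str, success_rate: float, prompt: str) -> str | None:
    """Stand-in for a real model call; fails randomly to exercise fallback."""
    return f"{name}: answer to {prompt!r}" if random.random() < success_rate else None

def route(prompt: str, hard: bool) -> str:
    """Route hard prompts straight to the large model; otherwise try the
    cheap model first and fall back on failure (cost vs. quality tradeoff)."""
    spent = 0.0
    tiers = MODELS[1:] if hard else MODELS
    for name, cost, rate in tiers:
        spent += cost  # each attempt costs money, even a failed one
        result = call_model(name, rate, prompt)
        if result is not None:
            return f"{result} (cost=${spent:.3f})"
    return "fallback: safe canned response"  # last-resort guardrail

print(route("summarize this alert", hard=False))
print(route("multi-step incident investigation", hard=True))
```

A shipping design would replace the random stub with real quality signals (confidence, eval scores, SLO budgets) and log every fallback for offline analysis.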
Other
- Ability to meet Microsoft, customer and/or government security screening requirements is required for this role.
- Microsoft Cloud Background Check: This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.
- We embrace a growth mindset, inspire excellence, and encourage teams and leaders to bring their best each day.
- Our culture blends ambition and scientific rigor with curiosity, humility, and customer obsession.