Microsoft Security aspires to make the world a safer place for all by reshaping security and empowering users, customers, and developers with a security cloud that protects them with end-to-end, simplified solutions. The NEXT incubation and research arm of Microsoft Security AI (MSECAI) is building the next generation of AI-native security products, driving the science behind Microsoft Security Copilot and delivering foundational and specialized models.
Requirements
- 8+ years of technical engineering experience with coding in languages including, but not limited to, C, C++, C#, Java, JavaScript, or Python
- 3+ years working with Machine Learning/AI systems (e.g., Large Language Models, Generative AI, retrieval-augmented generation, model serving, experimentation platforms, data pipelines), including establishing evaluation metrics and improving model quality.
- Experience with GenAI/LLM techniques and tooling – e.g., prompt engineering, retrieval/vector stores, agents or tool integration, content safety and guardrails, offline/online evaluation frameworks, vibe coding.
- Hands-on coding ability in one or more languages (e.g., Python, C#, C++, Rust, JavaScript/TypeScript); comfortable prototyping, reviewing code (PRs), and diving deep into technical design discussions.
- Demonstrated success driving 0→1 initiatives from ambiguity to MVP to GA, and then leading 1→N platform adoption across multiple product teams.
- Security domain expertise (e.g., threat detection/response, SIEM/SOAR, identity, endpoint, or cloud security) and familiarity with analyst workflows.
- Proven track record of shipping cloud-based AI or security services or platforms at scale (multi-tenant, high-throughput) with measurable customer and business impact.
Responsibilities
- Define the technical vision, architecture, and roadmap for AI-native security incubation initiatives; align stakeholders across Security Copilot, Defender, Sentinel, Entra, Purview, Azure AI and other groups to deliver cohesive customer value, acting as a diplomat to negotiate priorities and trade-offs among partner teams.
- Lead 0→1 incubation R&D through MVP and private preview, then drive 1→N platformization and scale to General Availability (GA); make principled trade-offs across quality, latency, reliability, cost, and safety when delivering solutions.
- Provide hands-on technical leadership – prototype in code, review designs and Pull Requests (PRs), define APIs/data contracts, build well-architected systems, and establish evaluation frameworks to de-risk complex AI systems.
- Set strategy for AI-first security experiences and platform components – determine where to use Large Language Models (LLMs) versus classical Machine Learning, design retrieval-augmented generation (RAG) pipelines, implement grounding and model routing/fallbacks, and establish safety guardrails to meet customer outcomes and Service Level Objectives (SLOs).
- Ensure a security-centric and Responsible AI approach – design privacy and security guardrails from day one, coordinate security/privacy reviews, abuse prevention, compliance checks, and incident readiness as integral parts of the development process.
- Lead virtual teams (v-teams) and mentor others to cultivate a high-velocity, high-quality engineering culture.
- Engage directly with enterprise customers and field teams to co-design solutions and drive adoption, and communicate program status and strategy to executives through compelling, hands-on demonstrations.
Other
- Ability to meet Microsoft, customer and/or government security screening requirements is required for this role.
- Microsoft Cloud Background Check: This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.
- 6+ years of experience driving complex, cross-functional initiatives; experience leading without authority across multiple teams.
- Program leadership and communication skills with exceptional stakeholder management; proven ability to diplomatically influence technical and product leaders and drive data-informed decisions across organizations.
- Thought leadership and deep industry clout in AI and/or security – recognized for contributions such as patents, published papers, conference talks, or community leadership.