Microsoft Security aspires to make the world a safer place by reshaping security and empowering users, customers, and developers with a security cloud that protects them with end-to-end, simplified solutions. The team builds and operates large-scale AI training and adaptation engines that power Microsoft Security products, turning cutting-edge research into dependable, production-ready capabilities.
Requirements
- Doctorate in Statistics, Mathematics, Computer Science, or a related field.
- Experience with privacy-preserving ML, including differential privacy concepts, privacy risk assessment, and utility measurement on privatized data.
- Proven track record of building and training large language models or multimodal models for production scenarios, including continual pretraining and task-specific adaptation.
Responsibilities
- Lead end-to-end model development for security scenarios, including privacy-aware data curation, continual pretraining, task-focused fine-tuning, reinforcement learning, and rigorous evaluation.
- Deepen model reasoning and tool-use skills, and embed responsible AI and compliance into every stage of the workflow.
- Partner closely with engineering and product to translate innovations into shipped experiences.
- Design objective benchmarks and quality gates.
- Mentor scientists and engineers to scale results across globally distributed teams.
- Combine strong coding and experimentation with a systems mindset to accelerate iteration cycles and improve throughput and reliability.
- Help shape the next generation of secure, trustworthy AI for our customers.
Other
- Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role.
- Microsoft Cloud Background Check: This position will be required to pass the Microsoft background check and the Microsoft Cloud background check upon hire/transfer and every two years thereafter.
- Microsoft is an equal opportunity employer.