Microsoft Security aspires to make the world a safer place for all by reshaping security and empowering every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The team builds and operates large-scale AI training and adaptation engines that power Microsoft Security products, turning cutting-edge research into dependable, production-ready capabilities.
Requirements
- Experience with privacy-preserving ML, including differential privacy concepts, privacy risk assessment, and utility measurement on privatized data.
- Proven track record of building and training large language models or multimodal models for production scenarios, including continual pretraining and task-specific adaptation.
Responsibilities
- Lead end-to-end model development for security scenarios, including privacy-aware data curation, continual pretraining, task-focused fine-tuning, reinforcement learning, and rigorous evaluation.
- Deepen model reasoning and tool-use skills, and embed responsible AI and compliance into every stage of the workflow.
- Translate innovations into shipped experiences, designing objective benchmarks and quality gates.
- Combine strong coding and experimentation skills with a systems mindset to accelerate iteration cycles and improve throughput and reliability.
- Help shape the next generation of secure, trustworthy AI for our customers.
- Define and collect the information needed, and analyze it to gain insight into and address complex security problems and threats.
- Track advances within the industry, identify relevant research, and adapt algorithms and techniques to develop new tools and automations.
Other
- Doctorate in Statistics, Mathematics, Computer Science or related field
- OR 7+ years of experience in the software development lifecycle, large-scale computing, modeling, cybersecurity, and/or anomaly detection.
- Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role.
- Work with others to incorporate findings into future designs and analyses (e.g., by creating working groups).
- Mentor scientists and engineers to scale results across globally distributed teams.