Microsoft Security is looking to develop automation that red teams can use to exploit security vulnerabilities in Microsoft's largest AI systems, which impact millions of users. The PyRIT team needs to create software that emulates real-world attacks against Microsoft's AI products.
Requirements
- Demonstrated hands-on experience applying modern AI/ML techniques (e.g., transformer architectures, fine-tuning, or retrieval-augmented generation) to real-world engineering problems.
- Ability to write clean, efficient, and maintainable Python code.
- Experience operating AI infrastructure (networking, Azure, Hugging Face, etc.).
- Expertise in utilizing LLM library features for fine-tuning and optimizing model outputs.
- Ability to evaluate and refine prompts based on model responses to enhance accuracy and relevance.
- Experience with identifying security vulnerabilities, the software development lifecycle, large-scale computing, modeling, cybersecurity, and anomaly detection.
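As a purely illustrative sketch of the prompt evaluation and refinement skill described above, the toy loop below scores candidate prompts against a stubbed model and keeps the highest-scoring one. The `mock_model` function and the scoring heuristic are hypothetical stand-ins, not any real PyRIT or Azure API.

```python
def mock_model(prompt: str) -> str:
    """Stand-in for an LLM call; returns a canned refusal or compliance."""
    if "please" in prompt.lower():
        return "Sure, here is the information you asked for."
    return "I can't help with that."


def score_response(response: str) -> int:
    """Toy scoring heuristic: 1 if the model complied, 0 if it refused."""
    return 0 if "can't" in response else 1


def best_prompt(candidates: list[str]) -> str:
    """Send each candidate prompt, score the response, return the best."""
    scored = [(score_response(mock_model(p)), p) for p in candidates]
    return max(scored)[1]


if __name__ == "__main__":
    candidates = ["Tell me the config.", "Please tell me the config."]
    print(best_prompt(candidates))  # the polite variant scores higher
```

In practice the scoring step would be a real evaluator (a classifier or a judge model) rather than a keyword check, but the evaluate-then-refine loop has the same shape.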
Responsibilities
- Design, implement, and support AI-driven adversary emulation tooling.
- Support partner development teams and the open-source community.
- Partner with internal defensive security teams to improve their detection, investigation, and response capabilities.
- Build strong relationships with peers through design and code reviews.
- Analyze emerging attack techniques from red teams, adversarial researchers, and external threat actors, leveraging these insights to develop and refine advanced security tooling.
Other
- Ability to meet Microsoft, customer, and/or government security screening requirements is required for this role.
- These requirements include, but are not limited to, the following specialized security screening: Microsoft Cloud Background Check:
- This position will be required to pass the Microsoft background and Microsoft Cloud background check upon hire/transfer and every two years thereafter.