Uber's Engineering Security team is applying AI/ML to transform security and privacy at Uber: strengthening core security capabilities, building secure-by-design AI systems, and defending against emerging AI-based threats. The goal is a next generation of security and privacy platforms that both leverage cutting-edge AI and defend against it, protecting Uber's ecosystem.
Requirements
- Strong understanding of the LLM ecosystem and agentic AI frameworks (e.g., tool use, multi-step reasoning, orchestration), as illustrated in the sketch after this list
- Proficiency in Python and experience with ML frameworks (e.g., PyTorch, TensorFlow)
- Publications in top ML/AI or security venues (e.g., NeurIPS, ICML, ICLR, USENIX Security, CCS, IEEE S&P)
- Familiarity with automated reasoning, program synthesis, or verification methods
- Experience applying LLMs to security and software engineering tasks
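To make "tool use" and "multi-step reasoning" concrete, here is a minimal agent-loop sketch. The `call_llm` stub, the `grep_source` tool, and the JSON action format are hypothetical illustrations, not the API of any specific framework used at Uber; the stub returns canned replies so the sketch runs offline.

```python
import json
from typing import Callable, Dict

# Hypothetical stand-in for a hosted LLM API: returns a canned sequence of
# JSON actions so the sketch runs offline. A real agent would send `messages`
# to a model and parse a structured tool-call reply.
def call_llm(messages: list) -> str:
    canned = [
        '{"tool": "grep_source", "args": {"pattern": "strcpy"}}',
        '{"tool": "final", "args": {"answer": "1 risky call site found"}}',
    ]
    step = sum(1 for m in messages if m["role"] == "assistant")
    return canned[step]

# Tool registry: the "tool use" half of the framework. Each entry maps a tool
# name the model may emit to an ordinary Python function.
TOOLS: Dict[str, Callable[..., str]] = {
    "grep_source": lambda pattern: f"src/io.c:42: {pattern}(buf, user_input)",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    """Multi-step reasoning loop: ask the model for an action, execute it,
    feed the observation back, and repeat until it emits a final answer."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = json.loads(call_llm(messages))
        messages.append({"role": "assistant", "content": json.dumps(action)})
        if action["tool"] == "final":
            return action["args"]["answer"]
        observation = TOOLS[action["tool"]](**action["args"])
        messages.append({"role": "tool", "content": observation})
    return "step budget exhausted"

if __name__ == "__main__":
    print(run_agent("Find unsafe string copies in the codebase"))
```

Orchestration, in this framing, is everything around the loop: which tools are registered, how observations are fed back, and when the step budget cuts the agent off.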
Responsibilities
- Research and prototype AI-driven agents that can discover security vulnerabilities, remediate them, and verify the fixes
- Design and evaluate automated patching and verification workflows using LLMs and agent-based architectures (see the patch-and-verify sketch after this list)
- Investigate reasoning, planning, and tool-use capabilities of LLM-based agents in security contexts
- Collaborate with researchers and engineers to integrate solutions into production systems
- Document findings and contribute to technical reports, publications, or open-source tools
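As a concrete reading of "automated patching and verification workflows", here is a minimal generate-then-check sketch. The `propose_patch` stub, the AST-based scanner, and the CWE-95 example are hypothetical illustrations; a production pipeline would call a real model and also run the project's test suite and the original vulnerability scanner before accepting a patch.

```python
import ast
from typing import Optional

def scanner_fires(source: str) -> bool:
    """Stand-in vulnerability scanner: walk the AST for calls to builtin eval()."""
    return any(
        isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id == "eval"
        for node in ast.walk(ast.parse(source))
    )

# Hypothetical LLM call: a real workflow would prompt a model with the
# vulnerable code plus the scanner finding and parse a patch from its reply.
# Here a canned rewrite stands in for model output.
def propose_patch(source: str, finding: str) -> str:
    return "import ast\n" + source.replace("eval(expr)", "ast.literal_eval(expr)")

def patch_and_verify(source: str) -> Optional[str]:
    """Generate-then-check loop: accept the candidate fix only if it still
    parses as Python and the scanner no longer fires; otherwise reject it."""
    candidate = propose_patch(source, finding="CWE-95: eval() on user input")
    try:
        if not scanner_fires(candidate):
            return candidate
    except SyntaxError:
        pass  # the model produced invalid code; treat as a failed candidate
    return None

if __name__ == "__main__":
    vulnerable = "def calc(expr):\n    return eval(expr)\n"
    fixed = patch_and_verify(vulnerable)
    print("accepted:\n" + fixed if fixed is not None else "rejected")
```

The design point is that the verifier is independent of the generator, so an incorrect candidate patch is rejected rather than silently shipped.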
Other
- Current PhD student in Computer Science, Artificial Intelligence, Security, or a related field
- Candidates must have at least one semester or quarter of their degree program remaining after the internship
- Demonstrated ability to conduct independent research
- Strong problem-solving and communication skills