Uber's Engineering Security team is looking to research and develop AI agents that enhance security and privacy across Uber's systems. This involves creating agents that can reason about complex systems, identify vulnerabilities, propose and implement fixes, and verify security outcomes.
Requirements
- Strong understanding of the LLM ecosystem and agentic AI frameworks (e.g., tool use, multi-step reasoning, orchestration)
- Proficiency in Python and experience with ML frameworks (e.g., PyTorch, TensorFlow)
- Demonstrated ability to conduct independent research
- Publications in top ML/AI or security venues (e.g., NeurIPS, ICML, ICLR, USENIX Security, CCS, IEEE S&P)
- Familiarity with automated reasoning, program synthesis, or verification methods
- Experience applying LLMs to security and software engineering tasks
Responsibilities
- Research and prototype AI-driven agents that can discover, remediate, and verify security vulnerabilities
- Design and evaluate automated patching and verification workflows using LLMs and agent-based architectures
- Investigate reasoning, planning, and tool-use capabilities of LLM-based agents in security contexts
- Collaborate with researchers and engineers to integrate solutions into production systems
- Document findings and contribute to technical reports, publications, or open-source tools
Other
- Current PhD student in Computer Science, Artificial Intelligence, Security, or a related field
- Candidates must have at least one semester/quarter remaining in their degree program following the internship
- Strong problem-solving and communication skills