Uber's AI Security team secures how AI agents and tools interact with Uber's systems and data. This involves building the foundations for agentic identity and risk-based access to ensure safe, observable, and compliant AI adoption at scale.
Requirements
- Strong grounding in LLMs/agent frameworks (tool use, planning, orchestration) and empirical evaluation
- Proficiency in Python and modern ML tooling (PyTorch or TensorFlow)
- Demonstrated ability to conduct independent research and translate ideas into working prototypes
- Publications in top ML/AI or Security venues (e.g., NeurIPS, ICML, ICLR, USENIX Security, CCS, IEEE S&P)
- Experience with identity/authz standards (OAuth2/OIDC), policy engines (e.g., OPA/Rego; see the illustrative policy-query sketch after this list), or program analysis/verification
- Applied security for LLM/agent systems (prompt/tool security, redaction, auditability, explainability)
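For context on the policy-engine requirement above, Open Policy Agent (OPA) exposes policy decisions over a REST Data API that services can query at runtime. The sketch below is illustrative only: the policy package path (`agent/authz`), the input fields, and the local OPA address are assumptions, not part of this role or Uber's infrastructure.

```python
# Illustrative sketch: query an OPA policy decision from Python.
# The policy package "agent/authz" and the input schema are hypothetical.
import requests

OPA_URL = "http://localhost:8181"  # assumes an OPA server running locally


def is_tool_call_allowed(agent_id: str, tool: str, data_sensitivity: str) -> bool:
    """Ask OPA whether an agent may invoke a tool, via the REST Data API."""
    response = requests.post(
        f"{OPA_URL}/v1/data/agent/authz/allow",
        json={"input": {
            "agent_id": agent_id,
            "tool": tool,
            "data_sensitivity": data_sensitivity,
        }},
        timeout=2,
    )
    response.raise_for_status()
    # OPA responds with {"result": <decision>}; treat an undefined result as deny.
    return response.json().get("result", False) is True


if __name__ == "__main__":
    print(is_tool_call_allowed("agent-123", "payments.refund", "high"))
```

A corresponding Rego policy would live in the `agent.authz` package and define an `allow` rule over the same input fields.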
Responsibilities
- Research & prototype identity and attestation for AI agents (e.g., agent-to-agent (A2A) AuthN/AuthZ, context propagation, chain-of-custody verification) and evaluate correctness, robustness, and usability
- Design risk-based access policies and scoring that adapt to actor, tool, data sensitivity, and runtime signals (a toy scoring sketch follows this list); validate via offline/online experiments
- Build evaluation harnesses for agent workflows (tool use, multi-step planning, self-verification) to measure security outcomes (prevent, detect, contain) and guard changes against regressions
- Ship with engineers: integrate prototypes into production gateways/SDKs, add observability (auditing, explanations of allow/deny decisions), and stress-test for scale and latency
- Communicate findings through docs, internal talks, and (where appropriate) publications or open-source contributions
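To make the risk-based access responsibility above concrete, the sketch below combines actor, tool, data-sensitivity, and runtime signals into a single score with allow / step-up / deny thresholds. Every signal name, weight, and threshold here is invented for illustration; it is a toy model, not Uber's scoring system.

```python
# Illustrative sketch: a toy risk score for an agent's tool call.
# All signal names, weights, and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class ToolCallContext:
    actor_trust: float       # 0.0 (unattested agent) .. 1.0 (strongly attested)
    tool_risk: float         # 0.0 (read-only) .. 1.0 (destructive/irreversible)
    data_sensitivity: float  # 0.0 (public) .. 1.0 (restricted)
    anomaly_score: float     # runtime signal, e.g., deviation from past behavior


def risk_score(ctx: ToolCallContext) -> float:
    """Weighted combination of signals; higher means riskier."""
    return (
        0.35 * (1.0 - ctx.actor_trust)
        + 0.25 * ctx.tool_risk
        + 0.25 * ctx.data_sensitivity
        + 0.15 * ctx.anomaly_score
    )


def decide(ctx: ToolCallContext) -> str:
    """Map the score to allow, step-up (extra verification), or deny."""
    score = risk_score(ctx)
    if score < 0.3:
        return "allow"
    if score < 0.6:
        return "step-up"
    return "deny"


if __name__ == "__main__":
    ctx = ToolCallContext(actor_trust=0.8, tool_risk=0.7,
                          data_sensitivity=0.9, anomaly_score=0.2)
    print(decide(ctx), round(risk_score(ctx), 2))  # -> step-up 0.5
```

In practice such a score would be validated through the offline/online experiments mentioned above, and each allow/step-up/deny decision would be logged with an explanation for auditability.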
Other
- Current PhD student in Computer Science, AI/ML, Security, or related field
- Candidates must have at least one semester/quarter of their education left following the internship
- Strong problem-solving and communication skills; comfortable navigating ambiguous, cross-functional spaces