Wake Forest University invites applications for a 3-year postdoctoral research position focused on developing foundational methods for ensuring the safety and interpretability of Multiagent Reinforcement Learning (MARL) systems.
Requirements
- Ph.D. in Computer Science, Electrical Engineering, or a related field
- Strong background in reinforcement learning (preferably MARL)
- Proficiency with machine learning tools (e.g., PyTorch, RL libraries)
- Strong publication record in relevant venues (e.g., NeurIPS, ICLR, AAAI, AAMAS)
- Experience in formal verification, interpretability, or AI safety
- Interest in interdisciplinary research and real-world impact
- Strong programming and software development skills
Responsibilities
- Developing methods for learning robust policies under uncertainty with built-in safety mechanisms
- Developing tools and methods to visualize, explain, and verify MARL policies
- Designing MARL algorithms resilient to adversarial conditions or partial failures
- Advancing theoretically sound and practically applicable MARL algorithms
- Contributing to one or more of the project's research areas: Safe Learning in MARL, Policy Explainability and Testing, and Robustness and Fault Tolerance
- Working with the PI and collaborators to develop MARL algorithms
- Conducting research in MARL with an emphasis on safety, explainability, and robust real-world deployment
Application Materials
- Cover letter describing background and research interests
- Curriculum vitae (CV)
- Two representative publications
- Contact information for 2–3 references
A Ph.D. degree is required (see Requirements above).