A research position at Wake Forest University focused on developing foundational methods for ensuring the safety and interpretability of Multiagent Reinforcement Learning (MARL) systems
Requirements
- Ph.D. in Computer Science, Electrical Engineering, or a related field
- Strong background in reinforcement learning (preferably MARL)
- Proficiency with machine learning tools (e.g., PyTorch, RL libraries)
- Strong publication record in relevant venues (e.g., NeurIPS, ICLR, AAAI, AAMAS)
- Experience in formal verification, interpretability, or AI safety
- Interest in interdisciplinary research and real-world impact
- Proficiency with programming languages and software development
- Strong communication and collaboration skills
Responsibilities
- Conducting research in Safe and Explainable Multiagent Reinforcement Learning, contributing to one or more of the following areas: Safe Learning in MARL, Policy Explainability and Testing, and Robustness and Fault Tolerance
- Learning robust policies under uncertainty with built-in safety mechanisms
- Developing tools and methods to visualize, explain, and verify MARL policies
- Designing MARL algorithms resilient to adversarial conditions or partial failures
- Working with the PI and collaborators to advance theoretically sound and practically applicable MARL algorithms
Application Materials
- Cover letter describing background and research interests
- Curriculum vitae (CV)
- Two representative publications
- Contact information for 2–3 references