Advance visual intelligence in human-AI co-adaptive systems by developing algorithms that enable AI to interpret complex real-world scenes, including objects, people, intentions, and interactions, with the technical strength, reliability, interpretability, and adaptability needed for meaningful human collaboration.
Requirements
- Expertise in machine learning for visual intelligence, perception, spatial reasoning, or behavior understanding.
- Proficiency in deep learning frameworks such as PyTorch or TensorFlow.
- Strong programming skills in Python and C++.
- Experience building scalable, testable research prototypes.
- Hands-on experience with multimodal data (e.g., video, audio, text).
- Demonstrated ability to integrate technical methods with human-centered considerations (e.g., interpretability, fairness, or user trust).
- Publication record in visual intelligence, perception, or machine learning venues.
Responsibilities
- Design and implement algorithms for scene and behavior understanding.
- Develop models that capture attributes, relationships, and contextual cues in human environments.
- Create learning mechanisms to identify critical agents and signals that influence behavior.
- Build and integrate research prototypes and software systems.
- Define and evaluate metrics for accuracy, interpretability, and trustworthiness.
- Contribute to publications, patents, and prototypes demonstrating technical and practical value.
Other
- M.S. in computer science, electrical engineering, robotics, or related field.
- Excellent written and verbal communication skills.
- Experience collaborating in interdisciplinary teams spanning technical and human-centered domains.
- Working experience with human-robot interaction (HRI).
- 3+ years of relevant experience.