Reality Defender is an award-winning cybersecurity company helping enterprises and governments detect deepfakes and AI-generated media. Backed by world-class investors including DCVC, Illuminate Financial, Y Combinator, Booz Allen Hamilton, IBM, Accenture, Rackhouse, and Argon VC, Reality Defender works with leading enterprise clients, financial institutions, and governments to ensure AI-generated media is not used for malicious purposes.
Requirements
- Experience in computer vision.
- Proficient in Python and in building deep learning models with PyTorch.
- Published peer-reviewed research papers in reputable computer vision venues (e.g., CVPR, ICCV, NeurIPS).
- Comfortable working with a modern deep learning stack: Python, PyTorch, and GPU-enabled cloud compute (e.g., AWS/GCP).
Responsibilities
- Investigate new methods for generative image/video detection.
- Conduct research on deepfake image/video detection.
- Write up research results for internal reports and for submission to academic journals and workshops.
- Independently implement and evaluate ideas on a modern deep learning stack: Python, PyTorch, and GPU-enabled cloud compute (e.g., AWS/GCP).
Other
- This 3-month internship is designed for current PhD students and candidates to partner with Reality Defender's AI team, conduct cutting-edge research, and publish peer-reviewed papers.
- Your primary collaborator will be Jacob Seidman, who will guide and advise your efforts within deepfake image and video detection.
- This internship can be performed remotely, although you're welcome to work from our HQ in New York City.
- PhD student in a relevant technical field.
- Excited about Reality Defender's mission to build a best-in-class and comprehensive deepfake and AI-generated media detection platform.
- Available to start a research project in Summer 2026.