Research Scientist - Interpretability (1 Year Fixed Term)

Inside Higher Ed

$156,560 - $180,039
Sep 21, 2025
Palo Alto, CA, US

The Enigma Project at Stanford University School of Medicine aims to understand the computational principles of natural intelligence using AI. The project seeks to create a foundation model of the brain that captures the relationships among perception, cognition, behavior, and brain activity dynamics, providing insight into the brain's algorithms and aligning AI models with human-like neural representations. The role focuses on developing methods for analyzing and interpreting these models to understand how the brain represents and processes information.

Requirements

  • At least two years of practical experience in training, fine-tuning, and using multi-modal deep learning models
  • Strong programming skills in Python and deep learning frameworks
  • Background in theoretical neuroscience or computational neuroscience
  • Experience in processing and analyzing large-scale, high-dimensional data from different sources
  • Experience with cloud computing platforms (e.g., AWS, GCP, Azure) and their machine learning services
  • Familiarity with big data and MLOps platforms (e.g., MLflow, Weights & Biases)
  • Familiarity with training, fine-tuning, and quantization of LLMs or multimodal models using common techniques and frameworks (LoRA, PEFT, AWQ, GPTQ, or similar); see the sketch after this list
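
For illustration only (not part of the posting's requirements), the sketch below shows the kind of parameter-efficient fine-tuning the last bullet refers to: attaching LoRA adapters to a small causal language model with the Hugging Face peft library. The base model (gpt2), adapter rank, and target modules are assumptions made for the example, not values specified by the posting.

```python
# Minimal LoRA setup sketch using Hugging Face transformers + peft.
# Base model, target modules, and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)  # would be used to prepare training batches
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters so only a small fraction of weights are updated during fine-tuning.
config = LoraConfig(
    r=8,                        # adapter rank
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports the small fraction of trainable parameters
```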

Responsibilities

  • Lead research initiatives in the mechanistic interpretability of foundation models of the brain
  • Develop novel theoretical frameworks and methods for understanding neural representations
  • Design and guide interpretability studies that bridge artificial and biological neural networks
  • Apply advanced techniques for circuit discovery, feature visualization, and geometric analysis of high-dimensional neural data (see the sketch after this list)
  • Collaborate with neuroscientists to connect interpretability findings with biological principles
  • Mentor junior researchers and engineers in interpretability methods
  • Help shape the research agenda of the interpretability team
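
Likewise for illustration only, the sketch below shows one simple form of geometric analysis of high-dimensional neural data: projecting simulated activity onto its principal components with scikit-learn. The trial count, unit count, and latent dimensionality are assumptions made for the example.

```python
# Toy geometric analysis of high-dimensional "neural" activity via PCA.
# The simulated data (200 trials x 1,000 units) is an assumption for illustration only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Low-dimensional latent dynamics embedded in a high-dimensional recording.
latents = rng.normal(size=(200, 5))      # 5 latent factors per trial
mixing = rng.normal(size=(5, 1000))      # projection to 1,000 recorded units
activity = latents @ mixing + 0.1 * rng.normal(size=(200, 1000))  # add observation noise

pca = PCA(n_components=10)
scores = pca.fit_transform(activity)     # trials projected onto the top components
print(pca.explained_variance_ratio_[:5].round(3))  # most variance concentrates in ~5 components
```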

Other

  • Ph.D. in Computer Science, Machine Learning, Computational Neuroscience, or related field plus 2+ years post-Ph.D. research experience
  • Demonstrated ability to lead research projects and mentor others
  • Ability to work effectively in a collaborative, multidisciplinary environment
  • Bachelor's degree and five years of relevant experience, or combination of education and relevant experience
  • Demonstrated project leadership experience