Job Board

Get Jobs Tailored to Your Resume

Filtr uses AI to scan 1,000+ jobs and find the postings that best match your resume


Research Scientist - Interpretability (1 Year Fixed Term)

CHEManager International

$156,560 - $180,039
Oct 6, 2025
Palo Alto, CA, US

The Enigma Project at Stanford University seeks to understand the computational principles of natural intelligence using AI. The goal is to create a foundation model of the brain that captures the relationship between perception, cognition, behavior, and brain activity dynamics. This initiative aims to provide insights into the brain's algorithms and to align AI models with human-like neural representations. This role focuses on developing methods and systems for analyzing and interpreting these models to understand how the brain represents and processes information.

Requirements

  • 2+ years of practical experience in training, fine-tuning, and using multi-modal deep learning models
  • Strong programming skills in Python and deep learning frameworks
  • Experience in processing and analyzing large-scale, high-dimensional data from diverse sources
  • Experience with cloud computing platforms (e.g., AWS, GCP, Azure) and their machine learning services
  • Familiarity with big data and MLOps platforms (e.g., MLflow, Weights & Biases)
  • Familiarity with training, fine-tuning, and quantization of LLMs or multimodal models using common techniques and frameworks (LoRA, PEFT, AWQ, GPTQ, or similar)
  • Experience with large-scale distributed model training frameworks (e.g., Ray, DeepSpeed, HF Accelerate, FSDP)

Responsibilities

  • Lead research initiatives in the mechanistic interpretability of foundation models of the brain
  • Develop novel theoretical frameworks and methods for understanding neural representations
  • Design and guide interpretability studies that bridge artificial and biological neural networks
  • Develop advanced techniques for circuit discovery, feature visualization, and geometric analysis of high-dimensional neural data
  • Collaborate with neuroscientists to connect interpretability findings with biological principles
  • Mentor junior researchers and engineers in interpretability methods
  • Help shape the research agenda of the interpretability team

Other

  • Ph.D. in Computer Science, Machine Learning, Computational Neuroscience, or related field plus 2+ years post-Ph.D. research experience
  • Strong publication record in top-tier machine learning conferences and journals, particularly in areas related to multi-modal modeling
  • Demonstrated ability to lead research projects and mentor others
  • Ability to work effectively in a collaborative, multidisciplinary environment
  • Bachelor's degree and five years of relevant experience, or a combination of education and relevant experience