Job Board
LogoLogo

Get Jobs Tailored to Your Resume

Filtr uses AI to scan 1000+ jobs and finds postings that perfectly matches your resume

Centific Logo

AI Safety Research Intern

Centific

$35 - $40
Nov 11, 2025
Redmond, WA, United States of America
Apply Now

Advance the frontiers of AI safety, LLM jailbreak detection and defense, and agentic AI at LinkedIn

Requirements

  • Strong Python and PyTorch/JAX skills; comfort with toolkits for language models, benchmarking, and simulation
  • Demonstrated research in at least one of: LLM jailbreak attacks/defense, agentic AI safety, human-AI interaction vulnerabilities
  • Experience in adversarial prompt engineering, jailbreak detection (narrative, obfuscated, sequential attacks)
  • Prior work on multi-agent architectures or robust defense strategies for LLMs
  • Familiarity with red-teaming, synthetic behavioral data, and regulatory safety standards
  • Scalable training and deployment: Ray, distributed evaluation, CI/telemetry for defense protocols
  • Public code artifacts (GitHub) and first-author publications or strong open-source impact

Responsibilities

  • Advance AI Safety: Design, implement, and evaluate attack and defense strategies for LLM jailbreaks (prompt injection, obfuscation, narrative red teaming)
  • Evaluate AI Behavior: Analyze and simulate human-AI interaction patterns to uncover behavioral vulnerabilities, social engineering risks, and over-defensive vs. permissive response tradeoffs
  • Agentic AI Security: Prototype workflows for multi-agent safety (e.g., agent self-checks, regulatory compliance, defense chains) that span perception, reasoning, and action
  • Benchmark & Harden LLMs: Create reproducible evaluation protocols/KPIs for safety, over-defensiveness, adversarial resilience, and defense effectiveness across diverse models (including latest benchmarks and real-world exploit scenarios)
  • Deploy and Monitor: Package research into robust, monitorable AI services using modern stacks (Kubernetes, Docker, Ray, FastAPI); integrate safety telemetry, anomaly detection, and continuous red-teaming
  • Jailbreaking Analysis: Systematically red-team advanced LLMs (GPT-4o, GPT-5, LLaMA, Mistral, Gemma, etc.), uncovering novel exploits and defense gaps
  • Multi-turn Obfuscation Defense: Implement context-aware, multi-turn attack detection and guardrail mechanisms, including countermeasures for obfuscated prompts (e.g., StringJoin, narrative exploits)

Other

  • Ph.D. student in CS/EE/ML/Security (or related); actively publishing in AI Safety, NLP robustness, or adversarial ML (ACL, NeurIPS, BlackHat, IEEE S&P, etc.)
  • Proven ability to go from concept → code → experiment → result, with rigorous tracking and ablation studies
  • Comprehensive healthcare benefits
  • Full-time Internship - 40 hours per week
  • Duration: 6 months