Researcher, Misalignment Research

CHEManager International

Salary not specified
Dec 27, 2025
New York, NY, US

OpenAI is seeking a Senior Researcher to focus on identifying, quantifying, and understanding future AGI misalignment risks to ensure the responsible development and deployment of safe AGI for the benefit of society.

Requirements

  • Have 4+ years of experience in AI red-teaming, security research, adversarial ML, or related safety fields.
  • Possess a strong research track record (publications, open-source projects, or high-impact internal work) demonstrating creativity in uncovering and exploiting system weaknesses.
  • Are fluent in modern ML / AI techniques and comfortable hacking on large-scale codebases and evaluation infrastructure.

Responsibilities

  • Design and implement worst-case demonstrations that make AGI alignment risks concrete for stakeholders, focused on the high-stakes use cases described above.
  • Develop adversarial and system-level evaluations grounded in those demonstrations, driving adoption across OpenAI.
  • Build tools and infrastructure to scale automated red-teaming and stress testing.
  • Conduct research on failure modes of alignment techniques and propose improvements.
  • Publish influential internal or external papers that shift safety strategy or industry practice.
  • Partner with engineering, research, policy, and legal teams to integrate findings into product safeguards and governance processes.
  • Mentor engineers and researchers, fostering a culture of rigorous, impact-oriented safety work.

Other

  • Are already thinking about these problems night and day, share our mission to build safe, universally beneficial AGI, and align with the OpenAI Charter.
  • Communicate clearly with both technical and non-technical audiences, translating complex findings into actionable recommendations.
  • Enjoy collaboration and can drive cross-functional projects that span research, engineering, and policy.
  • Hold a Ph.D. or master's degree in computer science, machine learning, security, or a related discipline, or have equivalent experience (nice to have but not required).
  • A chance to shape safety practices at the frontier of AGI. Your work will directly lower the chances of catastrophic misalignment.