
Get Jobs Tailored to Your Resume

Filtr uses AI to scan 1,000+ jobs and find postings that closely match your resume.


Senior Security Researcher - GenAI

Datadog

$187,000 - $240,000
Sep 12, 2025
New York, NY, USA

Datadog is looking to solve the problem of security in Generative AI technologies by discovering vulnerabilities and attack vectors and by developing defensive strategies.

Requirements

  • Deep knowledge of common Generative AI technologies and frameworks (MCP, A2A, LangGraph, CrewAI, etc.).
  • Deep understanding of vulnerabilities specific to Generative AI, such as prompt injection, model poisoning, etc.
  • Knowledge of the OWASP Top 10 for LLMs, and/or of how common web security concepts (access controls, input handling, API security) apply to LLM applications.
  • Experience working in offensive security roles (penetration testing, red teaming) or vulnerability research with a focus on cloud or SaaS production environments/technologies.
  • Comfortable taking ambiguous requirements, creating a research plan with measurable and tangible outcomes, and autonomously getting stakeholder buy-in on that plan.
  • Comfortable presenting and documenting your research findings, either internally in your current role or publicly in conference talks and blog posts.
  • Ability to write software to solve problems with and without AI tooling, and to build systems for research purposes (e.g., Go, Python, Rust).

Responsibilities

  • Drive strategic research initiatives on Generative AI security by proposing, validating, and executing innovative projects.
  • Conduct hands-on research to discover and demonstrate vulnerabilities, attack vectors, adversarial methods, and misconfigurations in Generative AI and large language model (LLM) technologies.
  • Create proof-of-concept attacks, simulations, and demonstrations to illustrate vulnerabilities and defensive strategies clearly.
  • Serve as a subject matter expert in Generative AI and collaborate with product management, engineering, and detection engineering teams to translate research findings into actionable product improvements.
  • Author and present impactful blog posts, webinars, and conference presentations to educate the broader security and AI community.
  • Engage closely with cloud providers, AI vendors, and open-source communities to responsibly disclose and remediate identified security issues.
  • Track, research, and experiment with the latest tactics, techniques, and procedures (TTPs) for attacking and defending Generative AI infrastructure.

Other

  • Bachelor's, Master's, or Ph.D. degree in a relevant field.
  • Ability to work in a hybrid workplace and collaborate with team members.
  • Strong communication and presentation skills, both verbal and written.
  • Ability to work in an OKR-driven environment.
  • Experience working in a Security Research organization.