Roblox is looking to proactively moderate content and behavior on its platform to ensure a safe, civil, and inclusive environment for its users. The company aims to lead the industry in building civil immersive online communities by systematically detecting, removing, and preventing problematic content and behavior.
Requirements
- 8+ years of experience designing, developing, and operating large-scale, high-impact machine learning systems in a production environment.
- 5+ years of experience in technical leadership, management, or mentorship roles, ideally having managed Engineering Managers or Principal/Staff-level individual contributors.
- A proven track record of successfully setting the long-term technical direction for an entire ML domain or pillar, demonstrating the ability to take ambiguous problems from concept to scaled production impact.
- Deep expertise in advanced ML architectures and techniques, such as Large Language Models (LLMs), transfer learning, and other foundation model technologies, especially as applied to text or multimodal data.
- Expertise in architecting scalable, real-time ML inference services and robust data pipelines operating at millions of requests per second.
- Demonstrated success in leading and resolving high-stakes, cross-functional conflicts and technical disagreements, with an ability to build consensus among diverse stakeholders.
- Exceptional product sense and strategic planning ability: able to translate platform safety requirements into an achievable, iterative technical roadmap.
Responsibilities
- Define and lead the multi-year technical vision, architectural strategy, and execution for machine learning solutions across Content and Communication Safety, ensuring these systems proactively and effectively detect and prevent high-severity, critical harms at massive scale.
- Act as the highest technical authority for the Content Safety ML domain, guiding the architecture and long-term maintainability of foundational models, data pipelines, and real-time inference services.
- Identify and champion the most ambiguous, high-leverage technical problems, driving alignment and securing investment for organization-wide ML infrastructure and platform development initiatives that benefit all of Trust & Safety.
- Oversee the adoption and safe deployment of innovative technologies (e.g., advanced NLP, self-supervised learning, multimodal LLMs) to anticipate and mitigate novel abuse vectors, moving beyond reactive detection to proactive intervention.
- Collaborate with executive-level Product, Data Science, Policy, and Operations leaders to define and prioritize the strategic machine learning roadmap, influencing product strategy and demonstrating the impact of ML on user trust and safety outcomes.
- Own the execution roadmap and technical planning, directly guiding the launch of high-priority new ML projects.
- Set the standard for innovation in data quality, model robustness, and ethical deployment across the entire Content & Communication Safety ML pillar.
Other
- Capable of synthesizing complex business and safety goals into a clear, compelling, and actionable technical strategy.
- Passionate about developing the next generation of technical leaders, managers, and engineers.
- Able to thrive in undefined or open-ended problem spaces, providing structure, clarity, and decisive direction to your teams.
- Highly effective at communicating complex technical concepts to both engineering teams and non-technical executive leadership.
- Dedicated to building ML systems that are fair, transparent, and operate with the utmost responsibility toward user safety and platform civility.