At Taskify, we believe that AI safety starts with high-quality human data. Advanced AI models rely on human judgment to evaluate nuanced outputs that machines cannot assess alone. We're building a flexible team of Safety Specialists: contributors from diverse backgrounds who annotate and evaluate AI behaviors to ensure they are safe, fair, and aligned with human values.
Requirements
- Experienced in model evaluation, structured annotation, or applied research.
- Skilled at spotting subtle biases, inconsistencies, and unsafe behaviors that automated systems might miss.
- Able to clearly explain and defend your reasoning.
- Comfortable working in a fast-paced, evolving environment where evaluation methods adapt rapidly.
Responsibilities
- Annotate AI-generated content against safety criteria, including bias, misinformation, disallowed content, and unsafe reasoning.
- Apply harm taxonomies and guidelines consistently, even when tasks involve ambiguity.
- Document your decision-making process to help improve annotation guidelines.
- Collaborate with researchers and engineers to enhance AI safety research and model development.
Other
- This role may involve reviewing sensitive content, including biased or harmful material; support and clear guidelines are provided.
- Work is text-based, remote, flexible, and suitable for both full-time and part-time contributors.
- Preferred location: US time zones; open to candidates in the US, UK, and Canada.
- Independent contractor engagement with flexible scheduling.