The AI platform for human intelligence rigorously vets PhD-level experts and scales human-data training for frontier LLMs. They power post-training (RLHF, evals, red-teaming) for top AI labs, ensuring those labs' models are safe, aligned, and performant.
Requirements
- Technical aptitude with demonstrated experience in data analysis, annotation, or workflow optimization.
- Experience with data visualization tools (such as Excel, Tableau, Power BI) or programming languages (such as Python, SQL).
- Internship or project-based experience in data management, AI/ML, or process optimization.
Responsibilities
- Design, build, and manage robust data workflows for accurate annotation, collection, processing, and analysis.
- Establish and monitor KPIs to optimize operational performance and drive data-driven decision-making.
- Analyze datasets to uncover actionable insights and present recommendations that improve efficiency.
- Develop and maintain comprehensive reporting frameworks that track key metrics and workflow improvements.
- Continuously refine processes to ensure scalability, maintain data integrity, and uphold quality standards.
- Demonstrate ownership and a proactive, hands-on approach to identifying and solving operational challenges.
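As a purely hypothetical illustration of the KPI monitoring described above (the record fields, metric names, and helper function are invented for this sketch, not part of the role's actual tooling):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnnotationRecord:
    annotator: str
    label: str
    gold_label: Optional[str]  # reviewer's label, if this item was spot-checked
    seconds_spent: float

def kpi_summary(records: list) -> dict:
    """Compute simple operational KPIs over a batch of annotations."""
    total = len(records)
    reviewed = [r for r in records if r.gold_label is not None]
    agreements = sum(1 for r in reviewed if r.label == r.gold_label)
    return {
        "items_annotated": total,
        "avg_seconds_per_item": sum(r.seconds_spent for r in records) / total if total else 0.0,
        "review_coverage": len(reviewed) / total if total else 0.0,
        "agreement_rate": agreements / len(reviewed) if reviewed else None,
    }
```

A summary like this could feed a recurring report: throughput and time-per-item track operational performance, while review coverage and agreement rate track data quality.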
Other
- Recent Bachelor’s or Master’s graduate in Computer Science, Data Analytics, Industrial Engineering, Operations Management, Economics, Finance, or another related field.
- Familiarity with operational processes and a passion for increasing efficiency.
- Ability to interpret and communicate data-driven insights clearly and persuasively, both in writing and verbally.
- Strong drive to learn, ability to thrive under pressure, and willingness to work hard in a fast-paced environment.
- Excellent collaboration skills and attention to detail in managing data quality and validation.
- Background in engineering or technical fields with operational focus or human data projects.