The company is seeking to evaluate data quality and its return on investment (ROI) in order to improve model performance and maximize client impact.
Requirements
- Strong grasp of LLMs and data-model dynamics
- Knowledge of current trends in generative AI and of the kinds of data that improve foundation models
- Proven track record in benchmark development, model evaluation, or data-centric infrastructure
- Experience designing and interpreting metrics that track delivery performance
- Familiarity with annotation workflows, validation processes, and scalable QA systems
- Solid ML or data science foundation
- Experience with feedback-driven annotation loops and pre-delivery QA
Responsibilities
- Define and implement strategies to assess the ROI of data across training and fine-tuning pipelines
- Build and maintain benchmarks that measure performance across key client and internal objectives
- Develop systems and tooling for continuous data evaluation
- Drive human-in-the-loop quality processes including pre-delivery validation and annotation feedback loops
- Identify data gaps and lead targeted acquisitions or refinements
- Define and/or leverage comprehensive task taxonomy frameworks to structure data annotation efforts
- Translate research insights and data evaluations into client-facing value
Other
- Strategic thinker with a bias toward impact: can connect data quality work directly to client value
- Collaborate with data operations, research, and delivery teams to align on quality standards and data priorities
- Five-day work week
- Flexible working hours
- Full-time remote opportunity
- Competitive compensation