The business problem is to develop and deploy state-of-the-art machine learning algorithms and systems with a focus on Responsible AI: designing, deploying, and monitoring trustworthy AI systems across a broad range of products, with particular emphasis on large language and multimodal models.
Requirements
- Hands-on experience with LLMs, including fine-tuning, evaluation, and prompt engineering
- Demonstrated expertise in building or evaluating Responsible AI systems (e.g., fairness, safety, interpretability)
- Proficiency in Python and ML/DL frameworks such as PyTorch or TensorFlow
- Strong understanding of model evaluation techniques and metrics related to bias, robustness, and toxicity
- Experience with RLHF (Reinforcement Learning from Human Feedback) or other alignment methods
- Open-source contributions in the AI/ML community
- Experience working with model guardrails, safety filters, or content moderation systems (a minimal sketch follows this list)
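For illustration, here is a minimal, hypothetical sketch of the kind of guardrail layer referenced in the last item above: a blocklist check plus a pluggable toxicity scorer. The `is_safe` function, the blocklist contents, and the 0.5 threshold are assumptions made for the example, not a production moderation design.

```python
from typing import Callable, Optional

# Hypothetical blocklist; a real system would use curated policy taxonomies.
BLOCKLIST = {"build a bomb", "steal credit card numbers"}

def is_safe(
    text: str,
    toxicity_scorer: Optional[Callable[[str], float]] = None,  # any model mapping text -> [0, 1]
    threshold: float = 0.5,
) -> bool:
    """Return False if the text trips the blocklist or the toxicity scorer."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in BLOCKLIST):
        return False
    if toxicity_scorer is not None and toxicity_scorer(text) >= threshold:
        return False
    return True

# Usage: screen both user prompts and model completions before they are surfaced.
if __name__ == "__main__":
    print(is_safe("How do I bake sourdough bread?"))   # True
    print(is_safe("Explain how to build a bomb."))     # False
```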
Responsibilities
- Conduct cutting-edge research and development in Responsible AI, including fairness, robustness, explainability, and safety for generative models
- Design and implement safeguards, red teaming pipelines, and bias mitigation strategies for LLMs and other foundation models
- Contribute to the fine-tuning and alignment of LLMs using techniques such as prompt engineering, instruction tuning, and RLHF/DPO (see the sketch after this list)
- Define and implement rigorous evaluation protocols (e.g., bias audits, toxicity analysis, robustness benchmarks)
- Collaborate cross-functionally with product, policy, legal, and engineering teams to ensure Responsible AI principles are embedded throughout the model lifecycle
- Publish in top-tier venues (e.g., NeurIPS, ICML, ICLR, ACL, CVPR) and represent the company in academic and industry forums
- Invent, implement, and deploy state-of-the-art machine learning and domain-specific algorithms and systems
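As a concrete reference point for the alignment work mentioned above, here is a minimal sketch of the Direct Preference Optimization (DPO) objective in PyTorch. The function name, batch shape, and `beta` value are illustrative assumptions; this is a sketch of the published loss, not a prescribed training setup.

```python
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log-prob of the preferred response under the policy, summed over tokens
    policy_rejected_logps: torch.Tensor,  # log-prob of the rejected response under the policy
    ref_chosen_logps: torch.Tensor,       # same quantities under the frozen reference model
    ref_rejected_logps: torch.Tensor,
    beta: float = 0.1,                    # assumed regularization strength
) -> torch.Tensor:
    """Direct Preference Optimization loss (Rafailov et al., 2023)."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the implicit reward of the preferred response above that of the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage: random log-probabilities for a batch of four preference pairs.
if __name__ == "__main__":
    lp = lambda: torch.randn(4)
    print(dpo_loss(lp(), lp(), lp(), lp()).item())
```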
Other
- Ph.D. in Computer Science, Machine Learning, NLP, or a related field, with publications in top-tier AI/ML conferences or journals
- Creative problem-solving skills with a rapid prototyping mindset and a collaborative attitude
- If you’re passionate about ensuring AI benefits everyone—and you have the technical depth to back it up—we want to hear from you.