Character.AI is tackling the critical challenge of AI safety and alignment: ensuring its advanced AI models behave in accordance with human values and intentions, mitigating risks, and making models safer, more robust, honest, and harmless.
Requirements
- Write clear and clean production-facing and training code
- Experience working with GPUs (training, serving, debugging)
- Experience with data pipelines and data infrastructure
- Strong understanding of modern machine learning techniques, particularly transformers and reinforcement learning, with a focus on their safety implications
- Experience with product experimentation and A/B testing
- Experience training large models in a distributed setting
- Familiarity with ML deployment and orchestration (Kubernetes, Docker, cloud)
- Experience with explainable AI (XAI) and interpretability techniques
Responsibilities
- Develop and implement novel evaluation methodologies and metrics to assess the safety and alignment of large language models.
- Research and develop cutting-edge techniques for model alignment, value learning, and interpretability.
- Conduct adversarial testing to proactively uncover potential vulnerabilities and failure modes in our models.
- Analyze and mitigate biases, toxicity, and other harmful behaviors in large language models through techniques like reinforcement learning from human feedback (RLHF) and fine-tuning.
- Collaborate with engineering and product teams to translate safety research into practical, scalable solutions and best practices.
- Stay abreast of the latest advancements in AI safety research and contribute to the academic community through publications and presentations.
Other
- Hold a PhD (or equivalent experience) in a relevant field such as Computer Science, Machine Learning, or a related discipline.
- Are passionate about the responsible development of AI and dedicated to solving complex safety challenges.
- Have conducted research in AI safety, alignment, ethics, or a related area.
- Have knowledge of the broader societal and ethical implications of AI, including policy and governance.
- Have publications in relevant machine learning journals or conferences.