TSMC Arizona is seeking an engineer to fine-tune its AI models, making them more adaptable, intelligent, and aligned with real-world use cases.
Requirements
- 5+ years of experience fine-tuning large-scale models such as GPT, T5, or BERT
- Expertise in advanced model-adaptation methods such as RLHF, prompt engineering, and zero-shot learning
- Experience with popular transformer architectures and frameworks like Hugging Face, TensorFlow, or PyTorch
- Deep understanding of LLM behaviors, including instruction-following, task completion, and ethical considerations in output
- Proficiency in Python and experience with libraries for model fine-tuning (e.g., Transformers, DeepSpeed)
- Experience evaluating model performance using metrics such as BLEU, ROUGE, and perplexity, as well as custom evaluation frameworks
- Experience with model deployment and real-time experimentation (A/B testing)
Responsibilities
- Lead the fine-tuning process for large pre-trained models
- Design and implement prompt engineering strategies
- Apply Reinforcement Learning from Human Feedback (RLHF) and other behavioral fine-tuning methods
- Collaborate with data teams to integrate relevant data
- Conduct model evaluations using various performance metrics
- Iterate and experiment with different fine-tuning methods
- Monitor model drift and ensure model consistency, reliability, and safety
Other
- Bachelor's degree in Computer Science, Data Science, or a related field
- Strong communication and presentation skills
- Active listening and teamwork
- Computer proficiency
- Willing and able to work on-site at our Phoenix, Arizona facility