Fine-tuning VLA (Vision-Language-Action) models for robotic perception and interaction tasks at the company
Requirements
- VLA (Vision-Language-Action) models
- Multi-modal datasets (images, text, sensor data)
- Isaac Sim and Omniverse environments
- AWS
- Robotic platforms
- Simulation and real-world sources
- Model integration workflows
Responsibilities
- Assist in fine-tuning VLA (Vision-Language-Action) models for robotic perception and interaction tasks
- Curate and preprocess multi-modal datasets (images, text, sensor data) from simulation and real-world sources
- Evaluate model performance for tasks such as semantic understanding, object recognition, and scene interpretation
- Collaborate with simulation engineers to validate models in Isaac Sim and Omniverse environments
- Support real-world deployment and testing of VLA models on robotic platforms
- Work with AWS teams to optimize training pipelines and model integration workflows
- Analyze model outputs and provide insights for iterative improvements
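The dataset-curation responsibility above can be sketched in miniature. This is an illustrative example only, not part of the role description: it merges simulation and real-world batches into one training set, drops samples missing a modality, and normalizes sensor readings. All field names (`image_path`, `caption`, `sensor`, `source`) and the normalization scheme are assumptions for the sketch.

```python
from dataclasses import dataclass
from typing import Sequence


@dataclass
class Sample:
    """One multi-modal training record (field names are illustrative)."""
    image_path: str
    caption: str
    sensor: Sequence[float]
    source: str  # "sim" or "real"


def curate(raw: list[dict], source: str) -> list[Sample]:
    """Drop incomplete entries and scale sensor readings into [-1, 1]."""
    out = []
    for r in raw:
        if not r.get("image_path") or not r.get("caption"):
            continue  # skip samples missing the image or text modality
        sensor = r.get("sensor", [])
        peak = max((abs(v) for v in sensor), default=1.0) or 1.0
        out.append(Sample(r["image_path"], r["caption"].strip(),
                          [v / peak for v in sensor], source))
    return out


# Merge a simulation batch and a real-world batch into one dataset,
# tagging each record with its source for later evaluation splits.
sim = curate([{"image_path": "sim/0001.png", "caption": "red cube on table",
               "sensor": [0.2, -0.4]}], source="sim")
real = curate([{"image_path": "real/0001.jpg", "caption": " gripper open ",
                "sensor": [1.5, 3.0]},
               {"image_path": "", "caption": "dropped frame"}], source="real")
dataset = sim + real
```

Keeping a `source` tag on every record makes it easy to measure sim-to-real gaps later by evaluating each subset separately.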
Other
- Bachelor's degree in Computer Science
- Discretionary Annual Incentive
- Comprehensive Medical Coverage: Medical & Health, Dental & Vision, Disability Planning & Insurance, Pet Insurance Plans
- Family Support: Maternity & Parental Leave
- Time Off: Vacation, Sick Leave & Holidays