Drive cutting-edge machine learning solutions in a fast-paced, sensor-driven environment by integrating and fusing multimodal sensor data to enable intelligent, scalable, and seamless product interactions.
Requirements
- 3+ years of experience in sensor fusion, multimodal learning, transformers, video understanding, or activity recognition.
- Strong programming skills in Python, C++, and Java/Kotlin.
- Experience with machine learning frameworks such as PyTorch, TensorFlow, or scikit-learn.
- Familiarity with data science tools such as SQL, Pandas, and similar technologies.
- Proven track record of delivering end-to-end ML solutions from research to production.
- Preferred: experience with large language models (LLMs), vision-language models (VLMs), edge computing, onboard hardware deployment, A/B testing, and translating research into production.
Responsibilities
- Design, implement, and deploy machine learning models that leverage multi-sensor data to improve product interactions.
- Drive end-to-end ML solutions, including model research, training, evaluation, and deployment.
- Build and maintain infrastructure for collecting, processing, and storing sensor data efficiently.
- Deploy models at the edge using platforms such as NVIDIA Jetson.
- Continuously improve ML workflows, architectures, and data pipelines to ensure scalability and performance.
- Mentor team members and share best practices in model development and deployment.
Other
- Collaborate with cross-functional teams to inform system design and contribute to product vision.
- Strong communication, collaboration, and presentation skills to work with diverse stakeholders.
- Master’s or Ph.D. in Computer Science, Electrical Engineering, Applied Mathematics, or related fields.
- Fully remote, flexible work arrangements with a “Flex First” approach.
- Opportunities to lead high-impact ML projects with cutting-edge sensor and AI technology.