Building advanced ML systems that enable precise localization for autonomous technologies.
Requirements
- Hands-on expertise with localization systems such as LiDAR-based localization, SLAM, visual odometry, or map-based pose estimation.
- Strong background building and deploying ML models in perception, localization, or sensor fusion environments.
- Proficiency with PyTorch, modern ML tooling, and large-scale multimodal datasets.
- Solid grounding in 3D geometry, spatial transforms, probabilistic estimation, and robotics fundamentals (see the illustrative sketch after this list).
- Strong Python or C++ engineering skills, with experience delivering maintainable, production-ready code.
- Preferred: experience with distributed computing (e.g., Ray, Kubernetes), simulation or synthetic data generation, uncertainty-aware ML, or open-source contributions.
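
To make the spatial-transforms expectation concrete, here is a minimal, hypothetical sketch of composing rigid-body (SE(3)) transforms and mapping LiDAR points between frames. All function names and frame conventions are invented for illustration; they are not part of any specific codebase.

```python
# Illustrative sketch: composing 4x4 homogeneous (SE(3)) transforms and
# projecting LiDAR points from the sensor frame into the map frame.
import numpy as np

def make_pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def transform_points(T: np.ndarray, points: np.ndarray) -> np.ndarray:
    """Apply a 4x4 transform to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T @ homogeneous.T).T[:, :3]

# Example: chain sensor->vehicle and vehicle->map extrinsics (values invented).
yaw = np.deg2rad(90.0)
R_vehicle_map = np.array([
    [np.cos(yaw), -np.sin(yaw), 0.0],
    [np.sin(yaw),  np.cos(yaw), 0.0],
    [0.0,          0.0,         1.0],
])
T_vehicle_map = make_pose(R_vehicle_map, np.array([10.0, 5.0, 0.0]))
T_sensor_vehicle = make_pose(np.eye(3), np.array([1.2, 0.0, 1.5]))  # LiDAR mount offset

# Compose right-to-left: map <- vehicle <- sensor.
T_sensor_map = T_vehicle_map @ T_sensor_vehicle
points_sensor = np.array([[5.0, 0.0, 0.0]])  # a return 5 m ahead of the sensor
print(transform_points(T_sensor_map, points_sensor))  # point in the map frame
```

Fluency at this level, keeping frames, compositions, and conventions straight, is assumed before any learned component enters the picture.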
Responsibilities
- Design, develop, and optimize ML models for localization, including learned pose estimation, map-matching, and sensor fusion using camera, LiDAR, and radar data (a minimal example follows this list).
- Build high-performance training, evaluation, and optimization workflows using PyTorch, distributed training, and large-scale datasets.
- Collaborate with robotics and mapping teams to integrate localization models into real-time autonomy stacks with strict performance requirements.
- Analyze failure cases, conduct ablations, and improve model robustness to achieve production-grade reliability.
- Contribute to system design, documentation, best practices, and code reviews across ML and autonomy teams.
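
As a flavor of the modeling work described above, below is a small sketch, using only standard PyTorch, of a hypothetical camera/LiDAR fusion head that regresses a 6-DoF pose correction. The module name, feature dimensions, and loss weighting are illustrative assumptions, not a reference implementation.

```python
# Hypothetical sketch: fuse camera and LiDAR embeddings, regress a 6-DoF pose
# correction (translation + axis-angle rotation). Names are invented examples.
import torch
import torch.nn as nn

class PoseFusionHead(nn.Module):
    def __init__(self, cam_dim: int = 256, lidar_dim: int = 256, hidden: int = 512):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(cam_dim + lidar_dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden),
            nn.ReLU(inplace=True),
        )
        self.trans_head = nn.Linear(hidden, 3)  # x, y, z offset in meters
        self.rot_head = nn.Linear(hidden, 3)    # axis-angle rotation correction

    def forward(self, cam_feat: torch.Tensor, lidar_feat: torch.Tensor):
        fused = self.fuse(torch.cat([cam_feat, lidar_feat], dim=-1))
        return self.trans_head(fused), self.rot_head(fused)

# Toy training step with a weighted translation/rotation loss; the 10x rotation
# weight is an arbitrary example choice, not a recommended value.
model = PoseFusionHead()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
cam_feat, lidar_feat = torch.randn(8, 256), torch.randn(8, 256)
gt_trans, gt_rot = torch.randn(8, 3), torch.randn(8, 3)

pred_trans, pred_rot = model(cam_feat, lidar_feat)
loss = nn.functional.l1_loss(pred_trans, gt_trans) \
     + 10.0 * nn.functional.l1_loss(pred_rot, gt_rot)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice this kind of head would sit behind real perception backbones and be trained with distributed data-parallel workflows over large-scale multimodal datasets, per the responsibilities above.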
Other
- Bachelor’s degree with 6+ years or Master’s degree with 3+ years of applied ML engineering experience in autonomous systems, robotics, or related fields.
- Excellent communication skills and the ability to work cross-functionally in fast-paced environments.