Transforming the way goods move and solving the world’s most urgent and complex access challenges
Requirements
5+ years of experience building and deploying deep learning-based perception systems, particularly in 3D geometry, semantic understanding, or mapping from remote sensing data
Strong understanding of classical computer vision (e.g. camera calibration, epipolar geometry, structure-from-motion) and the ability to blend it with modern ML approaches
Hands-on experience training, iterating on, and optimizing CNN and transformer architectures in production environments
Familiarity with building training, data annotation, and evaluation pipelines—not just models
Comfort working across systems: jumping into data pipelines, training infrastructure, or debugging distributed training issues as needed
Responsibilities
Own the design and implementation of cloud-side autonomy pipelines that directly support and scale our onboard perception stack
Leverage satellite imagery, aerial surveys, and structured data to build semantic and geometric world models of customer delivery zones
Design and ship tools that predict deliverability, generate high-fidelity priors, and reduce the operational friction of onboarding new customers in new environments
Train and deploy mid- to large-scale models for semantic segmentation, 3D geometry, and learned preference modeling
Design evaluation and validation infrastructure to ensure models behave reliably in the field
Work across engineering to integrate your work into fleet-facing autonomy systems
Other
Lead architectural decisions, drive experimentation, and help the team push the limits of what’s possible with production-grade perception at scale
An engineering mindset focused on outcomes over experimentation
Comfort working in a fast-paced environment
We are an equal opportunity employer and encourage candidates from historically underrepresented backgrounds to apply