Serve Robotics is looking to improve the efficiency and ubiquity of robotic deliveries by optimizing machine learning models for real-time deployment on edge hardware.
Requirements
3+ years of experience deploying ML models on embedded or edge platforms (preferably in robotics).
2+ years of experience with CUDA, TensorRT, and other NVIDIA acceleration tools.
Proficiency in Python and C++, especially for performance-sensitive systems.
Experience with NVIDIA Jetson (e.g., Xavier, Orin) and edge inference tools.
Familiarity with model conversion workflows (e.g., PyTorch → ONNX → TensorRT); a minimal sketch follows this list.
Experience with real-time robotics systems (e.g., ROS 2, middleware, safety-critical constraints, and embedded Linux).
Knowledge of performance tuning under thermal, power, and memory constraints on embedded devices.
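
For illustration, a minimal sketch of the PyTorch → ONNX → TensorRT workflow named above, assuming the TensorRT 8.x Python bindings; the toy model, input shape, and file names are placeholders, not a real deployment pipeline:

    import torch
    import tensorrt as trt

    # Placeholder model standing in for a trained network.
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, 3, padding=1),
        torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1),
        torch.nn.Flatten(),
        torch.nn.Linear(8, 10),
    ).eval()

    # Step 1: PyTorch -> ONNX.
    dummy = torch.randn(1, 3, 224, 224)  # assumed input shape
    torch.onnx.export(model, dummy, "model.onnx",
                      input_names=["input"], output_names=["output"],
                      opset_version=17)

    # Step 2: ONNX -> serialized TensorRT engine.
    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # FP16 is a common choice on Jetson
    engine = builder.build_serialized_network(network, config)
    with open("model.engine", "wb") as f:
        f.write(engine)

The same conversion can also be driven from the command line with trtexec; which path is used in practice depends on the deployment tooling.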
Responsibilities
Own the full lifecycle of ML model deployment on robots, from handoff by the ML team through complete system integration.
Convert, optimize, and integrate trained models (PyTorch → ONNX → TensorRT) for Jetson platforms using NVIDIA tools.
Develop and optimize CUDA kernels and pipelines for low-latency, high-throughput model inference.
Profile and benchmark existing ML workloads using tools like Nsight Systems, Nsight Compute, and the TensorRT profiler (a latency-timing sketch follows this list).
Identify and remove compute and memory bottlenecks for real-time inference.
Design and implement strategies for quantization, pruning, and other model compression techniques suited to edge inference (a compression sketch follows this list).
Ensure models are robust to the resource constraints of real-time, low-power robotic systems.
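
As a first-pass complement to Nsight profiling, inference latency can be timed directly with CUDA events in PyTorch; a minimal sketch, where the convolution stands in for a real workload:

    import torch

    model = torch.nn.Conv2d(3, 16, 3, padding=1).cuda().eval()  # placeholder workload
    x = torch.randn(1, 3, 224, 224, device="cuda")

    # Warm up so one-time CUDA initialization and autotuning don't skew results.
    with torch.no_grad():
        for _ in range(10):
            model(x)

    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    with torch.no_grad():
        start.record()
        for _ in range(100):
            model(x)
        end.record()
    torch.cuda.synchronize()  # block until both events have completed

    print(f"mean latency: {start.elapsed_time(end) / 100:.3f} ms")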
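
And for the compression item, a sketch of two standard techniques using PyTorch's built-in utilities; the model is a placeholder, and on Jetson INT8 quantization would typically go through TensorRT calibration rather than this CPU-oriented PyTorch path:

    import torch
    import torch.nn.utils.prune as prune

    model = torch.nn.Sequential(  # placeholder model
        torch.nn.Linear(128, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, 10),
    ).eval()

    # Magnitude pruning: zero the 30% smallest-magnitude weights per Linear layer.
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.3)
            prune.remove(module, "weight")  # bake the sparsity into the weights

    # Post-training dynamic quantization: INT8 weights for Linear layers.
    quantized = torch.ao.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8)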
Other
Bachelor’s degree in Computer Science, Robotics, Electrical Engineering, or equivalent field.
Master’s degree in a related field is a plus.
Contributions to open-source ML or CUDA projects are a plus.