Loft is looking to develop, integrate, and optimize its Ultimate Edge SDK, which provides unified compute capabilities across a range of embedded platforms, with a primary focus on NVIDIA Orin-based systems.
Requirements
- Solid experience with C++ and/or Python.
- Familiarity with Linux-based embedded environments.
- Understanding of ML inference frameworks (ONNX Runtime, TensorRT, etc.).
- Strong experience with containerization technologies (e.g., Docker, Kubernetes) and with exposing processing capabilities or services from containerized workloads.
- Experience with hardware-accelerated processing (e.g., GPUs, TPUs) to optimize performance for compute-intensive workloads.
- Experience with the NVIDIA ecosystem: CUDA, Orin, Jetson platforms.
- Knowledge of heterogeneous compute environments and optimization.
Responsibilities
- Integrating ONNX-based inference runtimes and image-processing frameworks (e.g., ONNX Runtime, OpenCV) into Loft’s SDK.
- Configuring, optimizing, and tuning GPU-accelerated and heterogeneous runtime environments on NVIDIA hardware, ensuring efficient use of available resources.
- Profiling, benchmarking, and performance tuning across multiple embedded platforms.
- Collaborating with other teams in Loft to ensure smooth deployment of edge applications.
- Supporting the continuous improvement of Loft’s onboard compute stack through structured testing, documentation, and validation.
Other
- Master's-level background in embedded systems, computer engineering, AI/ML, or software engineering.
- English communication skills (written & verbal) for international collaboration.
- Interest in space technologies and autonomous onboard processing.
- A kind, supportive, and team-oriented collaborator.
- A strong problem solver and a great communicator.