NVIDIA is building the future of real-time AI for sensor-driven applications with its Holoscan Platform. This role extends Holoscan's core mission by integrating generative AI into real-time sensing, simulation, and robotics: enabling GPU-resident generative methods that accelerate development, improve simulation fidelity, and unlock new possibilities for real-time perception.
Requirements
- Strong programming expertise in modern C++, plus proven Python skills for prototyping and tooling.
- Familiarity with multimodal or vision-language models and an understanding of how to adapt them to streaming or real-time workloads is a strong plus.
- Success designing APIs and frameworks that stand the test of scale and that developers love to use!
- 8+ years of experience building and shipping complex, high-performance imaging, sensor, or rendering software.
- Familiarity with GPU processing and rendering pipelines, synchronization, GPU memory management, and multi-GPU rendering is a plus.
- Experience adapting VLMs or multimodal foundation models to real-time sensor or video pipelines.
- Background integrating real-time GPU-accelerated processing and visualization pipelines (e.g., CUDA/Vulkan interop).
Responsibilities
- Architect the next generation of Holoscan SDK by developing intuitive, scalable APIs for real-time sensor, imaging, and multimodal data processing—balancing developer usability with peak GPU performance.
- Prototype GPU-accelerated algorithms for computer vision, imaging, sensor fusion, and low-latency rendering, translating research into production-grade software.
- Build and optimize core GPU libraries for accelerated I/O, streaming, decoding, and visualization, employing CUDA, Vulkan, and GPU-resident data paths.
- Contribute to real-time visualization frameworks for medical, robotic, or industrial applications, integrating Vulkan, OpenGL, or Omniverse/RTX-based rendering back-ends.
- Benchmark performance rigorously, profiling and optimizing across the full pipeline (Sensor → AI → Render → Display, and Sensor → AI → Robotic Control).
- Combine generative models with the Holoscan Sensor Bridge (HSB), Isaac Sim, Isaac Lab, and Omniverse to create real-time “AI-powered virtual sensors” that behave like real hardware, enabling development and testing long before physical sensors exist.
- Prototype and optimize neural field (NeRF/SDF/Gaussian) operators for real-time scene reconstruction, view synthesis, and 3D perception—directly within Holoscan’s streaming architecture.
Other
- A strong communicator and collaborator able to work across multiple domains, from AI and compute to graphics and visualization.
- Deep passion for real-time AI, computer vision, and sensor-driven systems, plus enthusiasm for high-performance visualization and rendering.
- Master’s/PhD or equivalent experience in Computer Science, Applied Math, Electrical or Computer Engineering, or related fields.
- Hands-on expertise with CUDA C/C++ and deep knowledge of GPU architecture and parallel programming paradigms.
- Knowledge of Omniverse Kit or other GPU rendering frameworks for real-time visualization.