
Senior/Staff Machine Learning Engineer, Perception

Agtonomy

Salary not specified
Sep 29, 2025
South San Francisco, CA, US

Agtonomy is looking for a machine learning engineer to develop and refine perception algorithms for autonomous tractors, giving them human-like awareness in rugged environments and helping address challenges like labor shortages, environmental strain, and inefficiencies in agriculture.

Requirements

  • Deep expertise in developing and deploying machine learning models, particularly for perception tasks such as object detection, segmentation, mono/stereo depth estimation, sensor fusion, and scene understanding.
  • Strong understanding of integrating data from multiple sensors like cameras, LiDAR, and radar.
  • Experience handling large datasets efficiently and organizing them for labeling, training, and evaluation.
  • Fluency in Python and experience with ML/CV frameworks like TensorFlow, PyTorch, or OpenCV, with the ability to write efficient, production-ready code for real-time applications.
  • Proven ability to design experiments, analyze performance metrics (e.g., mAP, IoU, latency), and optimize algorithms to meet stringent performance requirements in dynamic settings.
  • Experience architecting multi-sensor ML systems from scratch.
  • Experience with compute-constrained pipelines including optimizing models to balance the accuracy vs. performance tradeoff, leveraging TensorRT, model quantization, etc.
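
For candidates brushing up on the metrics named above, intersection-over-union (IoU) measures the overlap between a predicted and a ground-truth box and underlies mAP matching. A minimal illustrative sketch (not Agtonomy code):

```python
def iou(box_a, box_b):
    """Intersection-over-Union for two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)  # zero if the boxes are disjoint
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```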

Responsibilities

  • Develop computer vision and machine learning models for real-time perception systems, enabling tractors to identify crops, obstacles, and terrain in varied, unpredictable conditions.
  • Build sensor fusion algorithms to combine camera, LiDAR, and radar data, creating robust 3D scene understanding that handles challenges like crop occlusions or GNSS drift.
  • Optimize models for low-latency inference on resource-constrained hardware, balancing accuracy and performance.
  • Design and test data pipelines to curate and label large sensor datasets, ensuring high-quality inputs for training and validation, with tools to visualize and debug failures.
  • Analyze performance metrics and iterate on algorithms to improve accuracy and efficiency of various perception subsystems.
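
To give a flavor of the camera–LiDAR fusion work described above: a common first step is projecting a LiDAR point (already transformed into the camera frame) onto the image plane with a pinhole model. A minimal sketch with illustrative intrinsics; real pipelines also apply extrinsic calibration and lens-distortion correction:

```python
def project_to_image(point_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3D point in the camera frame (x right, y down,
    z forward) to pixel coordinates; returns None for points behind the camera."""
    x, y, z = point_cam
    if z <= 0.0:
        return None
    return (fx * x / z + cx, fy * y / z + cy)

# A point 2 m ahead and 1 m to the right, with illustrative intrinsics:
pixel = project_to_image((1.0, 0.0, 2.0), fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```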

Other

  • An MS or PhD in Computer Science, AI, or a related field, or 5+ years of industry experience building vision-based perception systems.
  • An eagerness to get your hands dirty and agility in a fast-moving, collaborative, small team environment with lots of ownership.
  • Experience with foundation models for robotics or Vision-Language-Action (VLA) models.
  • Experience implementing custom operations in CUDA.
  • Publications at top-tier perception/robotics conferences (e.g., CVPR, ICRA).