Software Engineer - GenAI Inference

Databricks

$142,200 - $204,600
Oct 8, 2025
San Francisco, CA, US

Databricks is hiring an engineer to design, develop, and optimize the inference engine that powers its Foundation Model API, building fast, scalable, and efficient serving systems for large language models.

Requirements

  • Strong software engineering background (3+ years or equivalent) in performance-critical systems
  • Solid understanding of ML inference internals: attention, MLPs, recurrent modules, quantization, sparse operations, etc. (illustrative attention and quantization sketches follow this list)
  • Hands-on experience with CUDA, GPU programming, and key libraries (cuBLAS, cuDNN, NCCL, etc.)
  • Comfortable designing and operating distributed systems, including RPC frameworks, queuing, request batching, sharding, and memory partitioning
  • Demonstrated ability to uncover and solve performance bottlenecks across layers (kernel, memory, networking, scheduler)
  • Experience building instrumentation, tracing, and profiling tools for ML models
  • Ability to work closely with ML researchers and translate novel model ideas into production systems
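
To make the "ML inference internals" bullet concrete, here is a minimal NumPy sketch of single-head scaled dot-product attention, the operation an LLM inference engine spends much of its time optimizing. It is illustrative only (no masking, no batching), not Databricks' implementation.

    import numpy as np

    def scaled_dot_product_attention(q, k, v):
        """Single-head attention; q, k, v are (seq_len, d_head) arrays."""
        d_head = q.shape[-1]
        # Scaled similarity scores keep softmax inputs well-conditioned.
        scores = q @ k.T / np.sqrt(d_head)
        # Row-wise softmax; subtracting the max improves numerical stability.
        scores -= scores.max(axis=-1, keepdims=True)
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ v  # weighted sum of values

    q = k = v = np.random.randn(8, 64).astype(np.float32)
    out = scaled_dot_product_attention(q, k, v)  # shape (8, 64)

Production engines fuse these steps into custom kernels (FlashAttention-style tiling, for example) so the full score matrix is never materialized.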
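
Likewise for quantization: symmetric per-tensor int8 quantization fits in a few lines. Again a sketch, not any particular production scheme.

    import numpy as np

    def quantize_int8(x):
        """Symmetric per-tensor int8: x is approximated by scale * q."""
        scale = np.abs(x).max() / 127.0  # assumes x is not all zeros
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    w = np.random.randn(256, 256).astype(np.float32)
    q, scale = quantize_int8(w)
    w_hat = q.astype(np.float32) * scale  # dequantize
    print("max abs reconstruction error:", np.abs(w - w_hat).max())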

Responsibilities

  • Contribute to the design and implementation of the inference engine, and collaborate on a model-serving stack optimized for large-scale LLM inference
  • Collaborate with researchers to bring new model architectures or features (sparsity, activation compression, mixture-of-experts) into the engine
  • Optimize for latency, throughput, memory efficiency, and hardware utilization across GPUs and other accelerators
  • Build and maintain instrumentation, profiling, and tracing tooling to uncover bottlenecks and guide optimizations (a GPU timing sketch follows this list)
  • Develop and enhance scalable routing, batching, scheduling, memory management, and dynamic loading mechanisms for inference workloads (a batching sketch follows this list)
  • Support reliability, reproducibility, and fault tolerance in the inference pipelines, including A/B launches, rollback, and model versioning
  • Integrate with federated, distributed inference infrastructure: orchestrate across nodes, balance load, and manage communication overhead
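
On the instrumentation bullet above: a common pattern for measuring GPU op latency is PyTorch's CUDA events, which timestamp on the device rather than on the host. A minimal sketch, assuming a CUDA-capable GPU; time_gpu_op is a hypothetical helper name:

    import torch

    def time_gpu_op(fn, *args, warmup=3, iters=10):
        """Average on-device latency of fn(*args) in milliseconds."""
        for _ in range(warmup):
            fn(*args)  # warm up kernels and caches
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        torch.cuda.synchronize()
        start.record()
        for _ in range(iters):
            fn(*args)
        end.record()
        torch.cuda.synchronize()  # wait for all queued work to finish
        return start.elapsed_time(end) / iters

    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")
    print(f"matmul: {time_gpu_op(torch.matmul, a, b):.2f} ms")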
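
And on batching: the core of dynamic batching is a loop that trades a small wait for a larger batch, since one forward pass over many requests amortizes per-request overhead. A stdlib-only sketch; run_model stands in for a model forward pass:

    import queue
    import time

    def batching_loop(requests, run_model, max_batch=8, max_wait_s=0.01):
        """Drain a queue.Queue of requests into batches: block for the first
        request, then collect more until the batch fills or the deadline passes."""
        while True:
            batch = [requests.get()]  # block until at least one request arrives
            deadline = time.monotonic() + max_wait_s
            while len(batch) < max_batch:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break
                try:
                    batch.append(requests.get(timeout=remaining))
                except queue.Empty:
                    break
            run_model(batch)  # single batched forward pass

Production LLM servers typically go further with continuous (iteration-level) batching, admitting and evicting sequences between decode steps rather than per request.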

Other

  • BS/MS/PhD in Computer Science or a related field
  • Ownership mindset and eagerness to dive deep into complex system challenges
  • Bonus: published research or open-source contributions in ML systems, inference optimization, or model serving