Get Jobs Tailored to Your Resume

Filtr uses AI to scan 1,000+ jobs and find the postings that best match your resume

Staff Software Engineer - GenAI Performance and Kernel

Databricks

$190,900 - $232,800
Oct 8, 2025
San Francisco, CA, US

Databricks is optimizing the performance and efficiency of its GenAI inference stack by developing high-performance GPU kernels.

Requirements

  • Deep hands-on experience writing and tuning compute kernels (CUDA, Triton, OpenCL, LLVM IR, assembly, or similar) for ML workloads (an illustrative kernel sketch follows this list)
  • Strong knowledge of GPU/accelerator architecture: warp structure, memory hierarchy (global, shared, register, L1/L2 caches), tensor cores, scheduling, SM occupancy, etc.
  • Experience with advanced optimization techniques: tiling, blocking, software pipelining, vectorization, fusion, loop transformations, auto-tuning
  • Familiarity with ML-specific kernel libraries (cuBLAS, cuDNN, CUTLASS, oneDNN, etc.) or open-source kernel projects
  • Strong debugging and profiling skills (Nsight, nvprof, perf, VTune, custom instrumentation)
  • Experience reasoning about numerical stability, mixed precision, quantization, and error propagation
  • Experience in integrating optimized kernels into real-world ML inference systems; exposure to distributed inference pipelines, memory management, and runtime systems
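
The kernel work these requirements describe is easiest to picture with a concrete example. The following is a minimal, illustrative CUDA sketch of a numerically stable row-wise softmax using warp-shuffle reductions; it is not taken from Databricks' stack, and the kernel name, single-warp-per-row layout, and launch configuration are assumptions made for brevity.

```cuda
// Illustrative sketch only: a numerically stable row-wise softmax of the kind
// this role works on. One 32-thread block (a single warp) handles one row;
// warp-shuffle reductions compute the row max and the normalizing sum.
#include <cuda_runtime.h>
#include <math.h>

__inline__ __device__ float warp_reduce_max(float v) {
    // Tree reduction across the 32 lanes of a warp; lane 0 ends with the max.
    for (int offset = 16; offset > 0; offset >>= 1)
        v = fmaxf(v, __shfl_down_sync(0xffffffff, v, offset));
    return v;
}

__inline__ __device__ float warp_reduce_sum(float v) {
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffff, v, offset);
    return v;
}

// Assumes blockDim.x == 32 and gridDim.x == number of rows.
__global__ void softmax_rows(const float* __restrict__ in,
                             float* __restrict__ out,
                             int cols) {
    const float* x = in  + (size_t)blockIdx.x * cols;
    float*       y = out + (size_t)blockIdx.x * cols;

    // Pass 1: row maximum, subtracted later for numerical stability.
    float m = -INFINITY;
    for (int c = threadIdx.x; c < cols; c += blockDim.x)
        m = fmaxf(m, x[c]);
    m = __shfl_sync(0xffffffff, warp_reduce_max(m), 0);  // broadcast lane 0

    // Pass 2: exponentiate and accumulate the denominator.
    float s = 0.f;
    for (int c = threadIdx.x; c < cols; c += blockDim.x) {
        float e = __expf(x[c] - m);
        y[c] = e;
        s += e;
    }
    s = __shfl_sync(0xffffffff, warp_reduce_sum(s), 0);

    // Pass 3: normalize in place.
    for (int c = threadIdx.x; c < cols; c += blockDim.x)
        y[c] /= s;
}
```

A launch for a rows x cols matrix would look like softmax_rows<<<rows, 32>>>(d_in, d_out, cols). A production kernel along these lines would typically add vectorized loads, shared-memory tiling for longer rows, and mixed-precision input and output handling.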

Responsibilities

  • Lead the design, implementation, benchmarking, and maintenance of core compute kernels (e.g. attention, MLP, softmax, layernorm, memory management) optimized for various hardware backends (GPU, accelerators)
  • Drive the performance roadmap for kernel-level improvements: vectorization, tensorization, tiling, fusion, mixed precision, sparsity, quantization, memory reuse, scheduling, auto-tuning, etc.
  • Integrate kernel optimizations with higher-level ML systems
  • Build and maintain profiling, instrumentation, and verification tooling to catch correctness regressions, performance regressions, numerical issues, and hardware utilization gaps (a minimal harness sketch follows this list)
  • Lead performance investigations and root-cause analysis on inference bottlenecks, e.g. memory bandwidth, cache contention, kernel launch overhead, tensor fragmentation
  • Establish coding patterns, abstractions, and frameworks to modularize kernels for reuse, cross-backend portability, and maintainability
  • Influence system architecture decisions to make kernel improvements more effective (e.g. memory layout, dataflow scheduling, kernel fusion boundaries)
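
To make the verification and benchmarking responsibilities concrete, here is a matching host-side harness, again only an illustrative sketch that reuses the hypothetical kernel above: it times repeated launches with CUDA events and smoke-tests a cheap invariant (every softmax row should sum to roughly 1) instead of a full reference comparison.

```cuda
// Illustrative sketch only: a minimal check-and-time harness for the kernel above.
// Sizes, repeat counts, and the invariant check are arbitrary choices for the example.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <vector>

__global__ void softmax_rows(const float*, float*, int);  // defined in the sketch above

int main() {
    const int rows = 4096, cols = 1024;
    const size_t n = (size_t)rows * cols, bytes = n * sizeof(float);
    std::vector<float> h_in(n), h_out(n);
    for (auto& v : h_in) v = (float)rand() / RAND_MAX - 0.5f;

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in.data(), bytes, cudaMemcpyHostToDevice);

    // Time repeated launches with CUDA events.
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    const int reps = 100;
    cudaEventRecord(start);
    for (int i = 0; i < reps; ++i)
        softmax_rows<<<rows, 32>>>(d_in, d_out, cols);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.f;
    cudaEventElapsedTime(&ms, start, stop);

    // Smoke-test correctness: each softmax row should sum to ~1.
    cudaMemcpy(h_out.data(), d_out, bytes, cudaMemcpyDeviceToHost);
    float worst = 0.f;
    for (int r = 0; r < rows; ++r) {
        float s = 0.f;
        for (int c = 0; c < cols; ++c) s += h_out[(size_t)r * cols + c];
        worst = fmaxf(worst, fabsf(s - 1.0f));
    }
    printf("avg launch: %.4f ms, worst |row_sum - 1|: %g\n", ms / reps, worst);

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
```

Real tooling of the kind described above would extend this with full reference comparisons across dtypes, Nsight-based profiling, and regression tracking in CI.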

Other

  • Mentor and guide other engineers working on lower-level performance, provide code reviews, help set best practices
  • Collaborate with infrastructure, tooling, and ML teams to roll out kernel-level optimizations into production, and monitor their impact
  • Excellent communication and leadership skills — able to drive design discussions, mentor colleagues, and make trade-offs visible
  • A track record of shipping performance-critical, high-quality production software
  • BS/MS/PhD in Computer Science or a related field