Granica is redefining how enterprises prepare and optimize data at the most fundamental layer of the AI stack, where raw information becomes usable intelligence. Our technology operates deep in the data infrastructure layer, making data efficient, secure, and ready for scale. We eliminate hidden inefficiencies in modern data platforms, slashing storage and compute costs while accelerating pipelines.
Requirements
- 4+ years of engineering experience in distributed systems, databases, or low-latency infrastructure at MongoDB, Snowflake, Databricks, or similar companies
- Deep fluency in query execution, compilers, compression, indexing, or vectorized computation
- Experience with data formats and engines such as Parquet, Delta Lake, Iceberg, Spark, Trino, or Flink
- Proven track record of building production-grade infrastructure and delivering measurable performance gains
Responsibilities
- Build intelligent, autonomous data infrastructure that makes storage feel free and analytics feel instant
- Design and implement adaptive layouts, zone maps, and columnar encodings that slash I/O and unlock true vectorized execution (see the illustrative sketch after this list)
- Invent and optimize exabyte-scale engines with self-tuning, zero-human-intervention workflows
- Apply deep research (PhD-level or equivalent) to real-world systems and drive adoption in production
- Evolve resource schedulers, query planners, and shuffle paths for high-throughput, multi-tenant environments
- Build self-healing reliability: chaos-resilient systems with fault injection, retries, and observability by default
- Code across the stack with whatever tool gets the job done: Rust, Go, Spark, CUDA, Java, Scala
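To give a concrete flavor of the data-skipping work described above, here is a minimal, purely illustrative sketch of zone-map pruning in Rust: per-block min/max statistics let a scan skip blocks that cannot satisfy a range predicate, cutting I/O before any data is read. The names (ZoneMap, prune_blocks) and the block statistics are hypothetical and do not reflect Granica's actual implementation.

```rust
/// Per-block min/max statistics (a "zone map") for one column.
struct ZoneMap {
    block_id: usize,
    min: i64,
    max: i64,
}

/// Return the IDs of blocks whose [min, max] range overlaps the query
/// range [lo, hi]; every other block can be skipped without any I/O.
fn prune_blocks(zones: &[ZoneMap], lo: i64, hi: i64) -> Vec<usize> {
    zones
        .iter()
        .filter(|z| z.max >= lo && z.min <= hi)
        .map(|z| z.block_id)
        .collect()
}

fn main() {
    // Hypothetical statistics for four column blocks.
    let zones = vec![
        ZoneMap { block_id: 0, min: 1,   max: 90 },
        ZoneMap { block_id: 1, min: 91,  max: 180 },
        ZoneMap { block_id: 2, min: 181, max: 260 },
        ZoneMap { block_id: 3, min: 261, max: 400 },
    ];

    // Predicate: value BETWEEN 100 AND 200 -> only blocks 1 and 2 are read.
    let to_read = prune_blocks(&zones, 100, 200);
    println!("blocks to read: {:?}", to_read); // prints [1, 2]
}
```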
Other
- PhD in systems, distributed computing, storage, or adjacent field—or equivalent experience with published research or patents
- Excellent communicator with a builder-first mindset and systems-level thinking
- Competitive salary and meaningful equity
- Unlimited PTO + quarterly recharge days
- Premium health, vision, and dental