Hex is building and maintaining a core compute platform: the engine behind all code execution, SQL querying, and data processing across the Hex application. The goal is to make it exceptionally easy to use, develop on, and monitor, while keeping it fast and performant.
Requirements
- Expert-level understanding of Python, SQL, Jupyter kernels, and/or data warehousing systems
- Deeply informed about industry trends in data processing, databases, and performant data architectures
- Experience with TypeScript and React
- Experience with Apollo GraphQL and Redux
- Experience with Express/Apollo GraphQL server
- Experience with Postgres, Redis, and Kubernetes
- Experience with Terraform, Helm, and AWS
Responsibilities
- Using DuckDB to enable seamless data interoperability between Python and SQL code written by our users or AI agents (see the first sketch after this list)
- Building intelligent caching systems to speed up customer data queries
- Implementing a graph-based dependency framework for optimizing multi-step execution plans (see the second sketch after this list)
- Instrumenting complex, multi-language systems to identify performance bottlenecks
- Building visualizations into the product to help users understand the performance characteristics of their code
- Working on projects with high technical complexity that span the stack, from our backend data systems to the frontend of Hex’s collaborative notebook interface
- Designing elegant abstractions that improve correctness and developer productivity
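
To make the DuckDB bullet concrete, here is a minimal, illustrative sketch (not Hex's actual implementation; the table and variable names are hypothetical) of the kind of Python/SQL interoperability involved: DuckDB can query a pandas DataFrame in-process and hand the result back to Python without an export/import step.

```python
import duckdb
import pandas as pd

# Hypothetical DataFrame produced by an upstream Python cell
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "amount": [120.0, 35.5, 99.9, 10.0],
})

con = duckdb.connect()  # in-memory database

# DuckDB can resolve the local DataFrame `orders` by name, so a SQL step
# can consume the output of a Python step with no intermediate copy.
result = con.sql(
    "SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id"
).df()  # back to pandas for the next Python step

print(result)
```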
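And for the graph-based dependency bullet, a minimal sketch of the underlying idea, assuming a hypothetical notebook where each cell's dependencies are already known: topologically ordering the cell graph yields batches of cells with no edges between them, which an executor could run in parallel.

```python
from graphlib import TopologicalSorter

# Hypothetical cell dependency graph: each cell maps to the cells it depends on.
# In a real system these edges would be derived from variable reads/writes.
deps = {
    "load_raw": set(),
    "clean": {"load_raw"},
    "model": {"clean"},
    "chart": {"clean", "model"},
}

ts = TopologicalSorter(deps)
ts.prepare()

# Walk the graph in dependency order; each "ready" batch is independent.
while ts.is_active():
    ready = list(ts.get_ready())
    print("runnable batch:", ready)
    for cell in ready:
        # run_cell(cell)  # hypothetical execution hook
        ts.done(cell)
```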
Other
- 6+ years of engineering experience working on complex, data-driven systems or applications
- Ability to work effectively in a remote-first team, with strong communication skills in that context
- Inexhaustible desire to understand where every millisecond of execution time goes
- Instinct for strategic thinking: aligning with business and product goals while maintaining a healthy balance between velocity and engineering excellence