dbt Labs is solving the problem of building and maintaining a scalable, reliable data ecosystem that enables analytics, accelerates growth, and improves operational efficiency across the business.
Requirements
- Expertise in SQL and Python
- Proficiency in at least one additional core data engineering language, such as Scala, Java, or Rust
- Strong knowledge of data infrastructure and architecture design
- Hands-on experience with modern orchestration tools like Airflow, Dagster, or Prefect
- Experience developing and scaling dbt projects
- Experience working in a SaaS or high-growth tech environment
- Experience working with open table formats (such as Apache Iceberg) for data storage across regions and clouds
Responsibilities
- Design, build, and manage scalable, reliable data pipelines that ingest product and event data into our data stores
- Develop and maintain canonical datasets that track key product and business metrics: user growth, engagement, revenue, and more
- Architect robust, reliable systems for large volume batch data processing
- Drive decisions on data architecture, tooling, and engineering best practices
- Enhance observability and monitoring of existing workflows and processes
- Partner cross-functionally with teams across Infrastructure, Product, Marketing, Finance, and GTM to understand data needs and deliver impactful solutions
- Provide product feedback by 'dogfooding' new data infrastructure and AI technology
Other
- 5+ years of experience as a data engineer and 8+ years of total software engineering experience (including data engineering roles)
- A bias for action: able to stay focused and prioritize effectively
Benefits
- Unlimited vacation (and yes, we use it!)
- Excellent healthcare
- Paid parental leave
- 401(k) matching