Tabs is looking for its first Data Engineer to build the core data infrastructure that powers internal KPIs, customer insights, and AI systems. You will design and implement the foundational data platform around core metrics and ensure long-term analytical scalability.
Requirements
- Solid programming skills in Python.
- Strong SQL skills and experience designing data models for analytics.
- Hands-on experience with a modern cloud data stack, including:
  - Warehouses: Snowflake, BigQuery, Redshift, Databricks, etc.
  - Orchestration/transformation: dbt, Airflow, Dagster, or similar.
  - Ingestion: Fivetran, Stitch, custom ingestion pipelines, etc.
- Experience building and operating production-grade data pipelines (performance, reliability, cost-awareness).
- Strong understanding of data quality, testing, and monitoring practices.
Responsibilities
- Design and implement our first scalable data warehouse/lakehouse to support KPIs.
- Build and maintain reliable data pipelines (batch and/or streaming) from application databases and third-party tools.
- Work with leadership to translate business metrics into concrete data models and schemas.
- Define and own data modeling standards and best practices.
- Implement monitoring, data quality checks, and observability around pipelines and core tables.
- Enable BI and data analysts through well-structured models and self-serve-friendly datasets.
- Document the data platform (lineage, definitions, contracts) and help establish a shared source of truth for metrics.
Other
- 3–5+ years of experience as a Data Engineer, ideally in a mid-stage startup.
- Comfort starting from a relatively greenfield environment and making pragmatic build vs. buy decisions.
- Experience supporting BI tools (Looker, Mode, Tableau, Metabase, etc.) and designing semantic layers.
- Experience in B2B SaaS, especially around revenue, usage, and customer health metrics.
- Prior experience as an early data hire, or at a small or mid-sized, fast-growing startup.