Build the data foundation that powers critical insights across Engineering and Tech Operations at Sigma by architecting, scaling, and optimizing data models and pipelines in Snowflake and Databricks.
Requirements
- Strong experience working with APIs and building pipelines on cloud data platforms (e.g., Snowflake, Databricks)
- Expertise in SQL and dbt; fluency in at least one programming language (e.g., Python, R, Scala)
- Experience implementing data governance frameworks that scale
- 3+ years of experience in a Data Engineering role
- Experience with modern orchestrators (e.g., Astronomer, Airflow, Dagster)
- Experience building scalable ML systems such as recommendation engines, search, or machine translation
Responsibilities
- Design, build, and maintain core data models and visualizations in Sigma to support Engineering and Tech Operations initiatives, ensuring high data accuracy and usability.
- Architect and manage our production data pipelines in Snowflake and their consumption in Sigma (Tech we use: Fivetran, dbt, Snowflake, Sigma, Hightouch, Metaplane).
- Build foundational data assets for Tech Operations, including Support insights and internal telemetry.
- Create observability datasets from Sigma's cloud infrastructure platforms (AWS, GCP, Azure).
- Partner with our infrastructure engineering team to ensure high availability of all key data assets.
- Build internal data products and enable self-service usage across Tech Operations.
- Identify and execute high-impact data projects in ambiguous environments, working independently to define scope, set priorities, and deliver quickly.
Other
- Startup experience is a plus
- A drive to continuously learn (and share those learnings) about the evolving data ecosystem
- The base salary range for this position is $140k–$180k annually.
- This role is eligible for stock options, as well as a comprehensive benefits package.
- Note: This is an in-office role; we have offices in SF, NYC, and London.