Transform raw data into actionable insights that drive business decisions across multiple functions, strengthen operational efficiency, reporting, and analytics, and shape company-wide strategy through reliable, well-structured datasets.
Requirements
- Expert-level SQL skills with the ability to write readable, performant, and maintainable queries.
- Strong experience with dbt in production environments.
- Solid understanding of data modeling principles, including Kimball methodology, star schema, and slowly changing dimensions.
- Familiarity with data architecture concepts, including data warehousing, transformation layers, and analytics pipelines.
- Bonus: Experience with Python for predictive analytics or machine learning.
- Bonus: Familiarity with cloud data warehouses (e.g., Snowflake) and BI tools (e.g., Power BI, Looker).
Responsibilities
- Maintain and optimize data transformation pipelines in dbt, including staging, intermediate, and mart layers.
- Collaborate with teams such as Product, Finance, Operations, and Marketing to define and validate KPIs.
- Translate raw data from cloud warehouses (Snowflake, BigQuery, Redshift) into structured, business-ready datasets.
- Write efficient, modular, and well-documented SQL queries for daily data transformations and ad hoc analysis.
- Participate in code reviews and help enforce data engineering best practices.
- Build dashboards or visualizations in BI tools such as Power BI, Looker, or Mode to support decision-making.
- Assist in the design and evaluation of predictive models and advanced analytics projects.
Other
- 2–5+ years of experience in data analysis, analytics engineering, or a similar role.
- Ability to collaborate cross-functionally and translate business requirements into technical solutions.
- Excellent communication and documentation skills.
- Strong sense of ownership and attention to accuracy, efficiency, and reliability.
- Supportive remote work environment.