The company builds software products that rely on data and needs that data to be clean, reliable, and usable from the start. It is looking for a Data Engineer to own the pipelines, schemas, and infrastructure that make its data trustworthy and easy to work with.
Requirements
- Strong SQL skills and experience with relational databases (Postgres, MySQL, etc.).
- Experience with data warehouses (BigQuery, Snowflake, Redshift, etc.).
- Hands-on experience with ETL/ELT pipelines (Airflow, Kafka, or similar).
- Proficiency in a scripting language (Python preferred).
- Comfort with cloud services (GCP/AWS).
- Bonus: experience setting up analytics stacks (Looker, Tableau, Mode, etc.).
- 2–5 years of related experience in data engineering, data analysis, data warehousing, or data lakes.
Responsibilities
- Design and maintain data pipelines (batch & streaming) to move data between systems.
- Build and manage databases, schemas, and data warehouses.
- Ensure data quality, consistency, and reliability.
- Work with backend engineers to design event streams, APIs, and logging that generate useful data.
- Make it easy for product and business teams to query and analyze data.
- Build an ETL pipeline to pull data from production systems into a warehouse for analytics.
- Design a data model that helps engineers and PMs track key product metrics.
Other
- Education: Bachelor's degree
- Certifications (AWS Big Data, GCP Data Engineer, Snowflake) recommended but optional
- If you want to shape the backbone of our data strategy while working closely with product and engineering, this role is for you.
- If you are a highly motivated individual who wants to grow your career with a fast-paced and progressive company, Granite has countless opportunities for you.
- EOE/M/F/Vets/Disabled