Shelter is looking for a Data Engineer to design, implement, and maintain scalable data pipelines and integration solutions that support efficient data architecture, processing, and analytics.
Requirements
- Proven experience (2+ years) as a Data Engineer, with expertise in Qlik Replicate, Qlik Compose, DBT Labs, Astronomer, and GitHub, combined with hands-on experience in Snowflake, is typically required.
- Proficiency in Qlik Replicate, Compose, and Enterprise Manager for data integration, replication, and transformation.
- Strong understanding of data engineering principles, data modeling, ETL processes, and data warehousing concepts.
- Expertise in Snowflake, including data ingestion, storage, performance optimization, and security.
- Experience with cloud platforms, such as AWS or Azure, and working with cloud-based data solutions and services.
- Familiarity with data quality, data governance, and data lifecycle management practices.
- Solid programming skills in SQL and scripting languages (e.g., Python, PowerShell) for data manipulation and automation.
Responsibilities
- Design, implement, and maintain scalable data pipelines and integration solutions using Qlik Replicate, Compose, Snowflake, and DBT Labs to support efficient data architecture, processing, and analytics.
Other
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field is preferred.
- Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders.
- Ability to perform essential functions of the position, with or without a reasonable accommodation.
- Strong problem-solving and troubleshooting skills, including the ability to diagnose and resolve data engineering issues.