Heritage Bank is looking to optimize its data pipelines and architecture to support data and automation initiatives: ensuring efficient data flow, building systems for new products, and maximizing the use of data assets.
Requirements
- Work with Snowflake or Microsoft Fabric; basic knowledge of SQL is required.
- Implement data security and privacy best practices to protect sensitive data.
- Stay current with emerging trends in data engineering and big data technologies.
- Data modeling and schema design.
- ETL/ELT pipeline development.
- Experience using Snowflake, Microsoft Fabric, and SQL Server.
- Data migration experience in AWS, Azure, and GCP environments.
Responsibilities
- Serve as the primary point of contact for data integration issues, needs, opportunities, and questions.
- Develop and maintain optimal data pipeline architecture and ETL processes.
- Gather and organize data sets to meet business requirements.
- Identify, design, and implement internal process improvements, such as automating manual processes, optimizing data delivery, and redesigning infrastructure for greater scalability.
- Build the infrastructure needed for efficient extraction, transformation, and loading of data from various sources.
- Develop analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business metrics.
- Develop, maintain, and optimize technology to enable cross-platform automation and integration.
Other
- Highly motivated problem-solver with strong communication and leadership skills.
- Able to manage shifting priorities in a fast-changing environment and drive results with a passion for making things happen.
- Collaborate across teams, including data analysts and business stakeholders, to understand data requirements, ensure efficient data flow, and deliver high-quality data solutions.
- Experience engaging with both technical and non-technical stakeholders.