The company is looking to expand and improve its data and data pipeline architecture and to optimize data flow and master data management (MDM) for cross-functional teams.
Requirements
- Implementation experience in Python, Parquet, Spark, Azure Databricks, Delta Lake, Databricks Data Warehouse
- SQL development knowledge – stored procedures, triggers, jobs, indexes, partitioning, pruning, etc.
- ETL/ELT and data warehousing techniques and best practices
- Experience building, maintaining, and scaling ETL/ELT processes and infrastructure
- Implementation experience with various data modelling techniques
- Experience with CI/CD tools (GitLab and Jenkins preferred)
- Experience with cloud infrastructure (Azure strongly preferred)
Responsibilities
- Design, develop, and operate large-scale data pipelines to support internal and external consumers
- Improve and automate internal processes
- Integrate data sources to meet business requirements
- Write robust, maintainable, well-documented code
Other
- 2-4 years of professional data engineering and data warehousing experience
- Able to navigate ambiguity and pivot easily as business priorities shift
- Strong communication, negotiation, and estimation skills
- A team player who collaborates well
- Prior financial industry experience is a plus
Compensation and Benefits
- Salary range of $97,000 to $135,000
- Stock options or other equity-based awards
- Insurance coverage (medical, dental, vision, life, and disability)
- Flexible paid time off
- Paid holidays
- 401(k) plan with company match
- Remote work