Sezzle is growing, and the amount of data we generate and consume is growing even faster. We need to empower the business, engineering, and the rest of the organization to analyze large volumes of data quickly and efficiently.
Requirements
- 9+ years of experience in data engineering, with a strong track record of production-grade systems.
- Deep expertise with AWS Redshift or similar products, including performance tuning, table design, and workload management.
- Strong hands-on experience with ETL/ELT frameworks, especially DBT, AWS DMS, and similar tools.
- Proficiency in SQL (advanced level) and at least one programming language such as Python, Scala, or Java.
- Experience building and maintaining AWS-based data platforms using services such as S3, Lambda, Glue, and EMR.
- Track record of designing scalable, fault-tolerant data pipelines with modern orchestration tools (Airflow, Dagster, Prefect, etc.), processing 100 GB to 1 TB or more of new data per day (see the sketch after this list).
- Strong understanding of data modeling, distributed systems, and warehouse/lake design patterns.
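As a rough illustration of the orchestration experience above, here is a minimal sketch of a daily batch pipeline expressed as an Airflow DAG. The DAG name, task names, and extract/load callables are hypothetical placeholders, not an actual Sezzle pipeline.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


# Hypothetical placeholders; a real pipeline would pull from source systems,
# stage files to S3, and load them into Redshift (e.g. via COPY or AWS DMS).
def extract_orders(**context):
    print("extract new order records for", context["ds"])


def load_to_warehouse(**context):
    print("load staged files into the warehouse for", context["ds"])


with DAG(
    dag_id="orders_daily_load",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    # Fault tolerance comes from retries, idempotent tasks, and backfillable runs.
    extract >> load
```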
Responsibilities
- Design, build, and optimize large-scale, high-performance data pipelines to support analytics, product insights, and operational workflows.
- Architect and evolve Sezzle’s data ecosystem, driving improvements in reliability, scalability, and efficiency.
- Lead development of ETL/ELT workflows using Redshift, DBT, AWS DMS, and related modern data tooling.
- Partner with cross-functional teams (engineering, analytics, product, finance, risk) to gather requirements and deliver robust, high-quality datasets.
- Evaluate and integrate new technologies, guiding the evolution of Sezzle’s data stack and infrastructure.
- Optimize Redshift and warehouse performance, including query tuning, modeling improvements, and cost management (see the table-design sketch after this list).
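For context on the Redshift tuning work above, here is a minimal sketch of one common table-design lever: choosing a distribution key and sort key so large joins stay co-located and date-range scans stay cheap. The cluster endpoint, schema, table, and credentials are hypothetical placeholders.

```python
import psycopg2  # assumes network access and credentials for a Redshift cluster

# Hypothetical DDL: distribute fact rows on the join key and sort by event date.
DDL = """
CREATE TABLE IF NOT EXISTS analytics.fact_payments (
    payment_id   BIGINT,
    order_id     BIGINT,
    amount_cents BIGINT,
    event_date   DATE
)
DISTSTYLE KEY
DISTKEY (order_id)
SORTKEY (event_date);
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="...",  # pull from a secrets manager in practice
)
with conn, conn.cursor() as cur:
    cur.execute(DDL)
```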
Other
- You have relentlessly high standards
- You’re not bound by convention
- You have a bias for action
- You earn trust
- You have backbone; disagree, then commit