


Senior Data Engineer (Poland)

Sezzle

$5,000 - $9,500
Dec 17, 2025
Remote, US

Sezzle is growing rapidly and needs to empower the organization to analyze large volumes of data quickly and efficiently. The company is looking to evolve its data ecosystem, improve its data infrastructure, and integrate new technologies to support its expanding data needs.

Requirements

  • 9+ years of experience in data engineering, with a strong track record of production-grade systems.
  • Deep expertise with AWS Redshift or similar products, including performance tuning, table design, and workload management.
  • Strong hands-on experience with ETL/ELT frameworks, especially DBT, AWS DMS, and similar tools.
  • Proficiency in SQL (advanced level) and at least one programming language such as Python, Scala, or Java.
  • Experience building and maintaining AWS-based data platforms, including S3, Lambda, Glue, or EMR.
  • Track record of designing scalable, fault-tolerant data pipelines with modern orchestration tools (Airflow, Dagster, Prefect, etc.), processing 100 GB to 1 TB of new data per day.
  • Strong understanding of data modeling, distributed systems, and warehouse/lake design patterns.

Responsibilities

  • Design, build, and optimize large-scale, high-performance data pipelines to support analytics, product insights, and operational workflows.
  • Architect and evolve Sezzle’s data ecosystem, driving improvements in reliability, scalability, and efficiency.
  • Lead development of ETL/ELT workflows using Redshift, DBT, AWS DMS, and related modern data tooling.
  • Partner with cross-functional teams (engineering, analytics, product, finance, risk) to gather or adapt requirements and deliver robust, high-quality datasets.
  • Evaluate and integrate new technologies, guiding the evolution of Sezzle’s data stack and infrastructure.
  • Optimize Redshift and warehouse performance, including query tuning, modeling improvements, and cost management.

Other

  • Ability to work in a fast-paced, collaborative environment with excellent communication and documentation skills
  • Preferred experience in high-growth, data-intensive fintech or similar regulated environments
  • Preferred familiarity with streaming technologies (Kafka, Kinesis, Flink, Spark Streaming)
  • Preferred knowledge of lakehouse architectures and modern stacks such as Snowflake, Databricks, Iceberg, or Delta Lake