Patreon's mission is to fund the creative class, and it is investing in building the best creator platform with the best team in the creator economy to support that mission. To that end, Patreon is seeking a Staff Software Engineer to architect and scale the data foundation that underpins its creator analytics product, discovery and safety ML systems, internal product analytics, executive reporting, experimentation, and company-wide decision-making.
Requirements
- 6+ years of experience in software development, including 2+ years building scalable, production-grade data pipelines.
- Expert-level proficiency in SQL and distributed data processing tools such as Spark, Flink, or Kafka Streams.
- Strong programming foundations in Python or a similar language, with sound software engineering practices (testing, CI/CD, monitoring).
- Expertise in modern data lake table formats (e.g., Delta Lake, Iceberg). Familiarity with data warehouses (e.g., Snowflake, Redshift, BigQuery) and production data stores, including relational databases (e.g., MySQL, PostgreSQL), object storage (e.g., S3), key-value stores (e.g., DynamoDB), and message queues (e.g., Kinesis, Kafka).
- Understanding of data modeling and metric design principles.
- Passionate about data quality, system reliability, and empowering others through well-crafted data assets.
- Highly motivated self-starter who thrives in a collaborative, fast-paced environment and takes pride in high-craft, high-impact work.
Responsibilities
- Design, build, and maintain the pipelines that power all of Patreon's data use cases, including ingesting raw data from production databases, object storage, message queues, and vendors into our Data Lake, and building core datasets and metrics on top of it.
- Develop intuitive, performant, and scalable data models (facts, dimensions, aggregations) that support product features, internal analytics, experimentation, and machine learning workloads (see the first sketch after this list).
- Implement robust batch and streaming pipelines using Spark, Python, and Airflow, handling complex patterns such as incremental processing, event-time partitioning, and late-arriving data (see the second sketch after this list).
- Define and enforce standards for accuracy, completeness, lineage, and dependency management. Build monitoring and observability so teams can trust the data they use.
- Work with Product, Data Science, Infrastructure, Finance, Marketing, and Sales to turn ambiguous questions into well-scoped, high-impact data solutions.
- Pay down technical debt, improve automation, and drive best practices in data modeling, testing, and reliability. Mentor peers and help shape the future of Patreon’s data.
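For illustration only, here is a minimal PySpark sketch of the kind of fact/dimension/aggregation modeling the responsibilities above refer to. Every database, table, and column name in it (raw.creators, raw.pledge_events, analytics.agg_daily_category) is a hypothetical placeholder chosen for the example, not Patreon's actual schema or tooling.

```python
# Illustrative sketch only: hypothetical table and column names,
# not Patreon's actual schema or tooling.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dim-model-sketch").getOrCreate()

# Dimension table: one row per creator (slowly changing attributes omitted for brevity).
dim_creator = (
    spark.table("raw.creators")
    .select("creator_id", "category", "country", "created_at")
)

# Fact table: one row per pledge event, keyed to the creator dimension.
fct_pledges = (
    spark.table("raw.pledge_events")
    .withColumn("event_date", F.to_date("event_ts"))
    .select("pledge_id", "creator_id", "patron_id", "amount_cents", "event_date")
)

# Aggregate: daily pledged amount per creator category, a typical reporting rollup.
agg_daily_category = (
    fct_pledges.join(dim_creator, "creator_id")
    .groupBy("event_date", "category")
    .agg(
        F.sum("amount_cents").alias("pledged_cents"),
        F.countDistinct("patron_id").alias("unique_patrons"),
    )
)

agg_daily_category.write.mode("overwrite").saveAsTable("analytics.agg_daily_category")
```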
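Likewise for illustration only, a minimal Spark Structured Streaming sketch of the streaming patterns named above: incremental processing via checkpointed micro-batches, event-date partitioning of the output, and watermark-based handling of late-arriving data. The Kafka topic, event schema, broker address, and S3 paths are hypothetical, and the sketch assumes the spark-sql-kafka connector is available.

```python
# Illustrative sketch only: hypothetical topic, schema, broker, and paths.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType, LongType

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("creator_id", LongType()),
    StructField("event_ts", TimestampType()),
])

# Read raw events from a hypothetical Kafka topic and parse the JSON payload.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# The watermark bounds how long we wait for late-arriving events (here, 2 hours
# past the max observed event time); later events are dropped from the aggregates.
hourly_counts = (
    events.withWatermark("event_ts", "2 hours")
    .groupBy(F.window("event_ts", "1 hour"), "event_type")
    .count()
)

# Incremental processing: the checkpoint tracks Kafka offsets and aggregation
# state, so each micro-batch handles only new data. Output is partitioned by
# event date for efficient downstream reads.
query = (
    hourly_counts
    .withColumn("event_date", F.to_date(F.col("window.start")))
    .writeStream.format("parquet")
    .option("path", "s3://example-bucket/hourly_counts/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/hourly_counts/")
    .partitionBy("event_date")
    .outputMode("append")
    .trigger(processingTime="5 minutes")
    .start()
)
```

The two-hour watermark is an arbitrary choice for the example; in practice the bound is tuned from observed event-delivery latency.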
Other
- Bachelor’s degree in Computer Science, Computer Engineering, or a related field, or the equivalent.
- Excellent collaboration and communication skills; comfortable partnering with non-technical stakeholders, writing crisp design docs, giving actionable feedback, and influencing without authority across teams.
- Expect to travel a handful of times per year for team-building and collaboration offsites.
- Patreon operates under a hybrid work model, where employees based in office locations are expected to come into the office two days per week, excluding sick time and paid leave.
- Patreon offers a competitive benefits package including, but not limited to, salary, equity plans, healthcare, flexible time off, company holidays and recharge days, commuter benefits, lifestyle stipends, learning and development stipends, patronage, parental leave, and a 401(k) plan with matching.