Resend is building the most accessible email platform for developers. As the company grows, so does the pressure to keep the platform safe for legitimate senders, which requires a robust data backbone for Trust & Safety operations.
Requirements
- 3+ years building data pipelines (batch, stream) in production.
- Strong in Python, SQL, and data infrastructure (e.g., Kafka, Airflow, Flink, Spark).
- Experience with data integrity, schema evolution, partitioning, compaction, etc.
- Deep understanding of performance, latency, and indexing.
- Comfortable designing for failure, retries, backfills, idempotency.
- Experience in fraud, security, or abuse domains.
- Familiarity with real-time feature serving or feature store systems.
Responsibilities
- Build and maintain ETL and streaming pipelines for logs, events, metadata, and attribution.
- Work on aggregation, feature construction, smoothing, and correctness layers.
- Ensure data quality: alerting, reconciliation, schema validation, drift detection.
- Optimize for low-latency queries and data access patterns used by enforcement engines and dashboards.
- Partner with software engineers and analytics teams to expose data through APIs and internal tools.
- Instrument logging and observability within your pipelines (monitoring, health dashboards).
- Over time, help build feature stores or real-time feature services for models or rules.
Other
- You treat data as first-class infrastructure.
- Exposure to internal tooling APIs or data mesh architectures.
- Our fully remote team of over 25 humans spans 7 countries... and counting.
- We’re backed by a16z, Y Combinator, Basecase and other top investors.