WRITER is looking for a Data Engineer to help design, build, and scale the data infrastructure that powers our analytics, reporting, and product insights.
Requirements
- Expert-level proficiency in SQL, dbt, and Python
- Strong experience with data pipeline orchestration (Airflow, Prefect, Dagster, etc.) and CI/CD for data workflows
- Deep understanding of cloud-based data architectures (AWS, GCP), including networking, IAM, and security best practices
- Experience with event-driven systems (Kafka, Pub/Sub, Kinesis) and real-time data streaming is a plus
- Strong grasp of data modeling principles, warehouse optimization, and cost management
- Proficiency with Snowflake, BigQuery, Fivetran, Hightouch, and Segment
- Experience with Terraform, GitHub Actions, AWS DMS, and Google Datastream
Responsibilities
- Design efficient and reusable data models optimized for analytical and operational workloads
- Design and maintain scalable, fault-tolerant data pipelines and ingestion frameworks across multiple data sources
- Architect and optimize our data warehouse (Snowflake/BigQuery) to ensure performance, cost efficiency, and security
- Define and implement data governance frameworks, including schema management, lineage tracking, and access control
- Build and manage robust ELT workflows using dbt and orchestration tools (e.g., Airflow, Prefect)
- Implement monitoring, alerting, and logging to ensure pipeline observability and reliability
- Develop comprehensive data validation, testing, and anomaly detection systems
Other
- 5+ years of hands-on experience in a data engineering role, ideally in a SaaS environment
- Excellent communicator who can bridge technical and business perspectives
- Intellectually curious and driven by building scalable systems that last
- Pragmatic problem solver who values simplicity and clarity in design
- Embody our values: Connect, Challenge, Own