The company is looking to enable data-informed decision-making across the organization by leveraging its data assets to analyze and optimize user adoption, growth, and revenue.
Requirements
- Experience in data modeling and building scalable data pipelines involving complex transformations
- Proficiency with data processing and storage technologies such as AWS S3, HDFS, Databricks, and Redshift, as well as Python/Scala/Java, SQL, Spark, and Airflow
Responsibilities
- Design, implement, and scale end-to-end data products that support growing data processing and analytical needs
- Transform raw data into actionable insights that drive product strategy and power in-depth analysis and reporting
- Implement systems that guarantee data quality and availability
- Partner with data scientists, domain experts, and engineering teams to develop a roadmap that aligns with business goals
- Embrace broad ownership to expand skills and influence the company’s data strategy
Other
- 5+ years of experience in Data Engineering or Software Engineering
- Proactive and innovative in identifying and addressing bottlenecks in existing workflows
- Motivated to work closely with cross-functional partners to evolve our analytical data model
- In-office schedule with flexibility to work from home on Wednesdays and potentially Fridays
- Bachelor's, Master's, or Ph.D. in a relevant field (implied, though not explicitly stated)