The Data Engineering team at Chime is building a scalable data platform to serve the company's data plumbing needs, enabling scalable data pipelines and frameworks and potentially setting industry standards for data workflows.
Requirements
- 5+ years of experience transforming raw data into governed, clearly documented datasets
- 5+ years of hands-on experience building and deploying production-quality data pipelines
- 5+ years of experience with Spark, AWS Glue, EMR, Airflow and Python
- 3+ years of hands-on experience with an MPP database system such as Snowflake, AWS Redshift or Teradata
- Understanding of key metrics for data pipelines, with experience building solutions that give partner teams visibility into those metrics
Responsibilities
- Be a hands-on data engineer, building, scaling and optimizing ETL pipelines
- Design data warehouse schemas and scale data warehouse processes to handle 10x data growth
- Own all aspects of data: data quality, data governance, data and schema design, and data security
- Own schema registry and dependency chart for persistent data
- Own the ETL workflows and ensure pipelines meet data quality and availability requirements
- Work closely with partner teams, like Data Science, Analytics and DevOps
Other
- 5+ years of experience working with stakeholders to provide business insights
- Track record of successful partnerships with Analytics, Data Science and DevOps teams
- For those near one of our offices: four days a week in the office and Fridays from home, plus team and company-wide events depending on location
- Competitive salary based on experience
- 401k match plus great medical, dental, vision, life, and disability benefits