Mesa is looking to establish and scale its data function by building and operationalizing data pipelines, standardizing metrics, and enabling data-driven decision-making across the company.
Requirements
- 5+ years of software engineering experience building and operationalizing data pipelines with large, complex datasets
- Experience with data modeling, ETL, and developing patterns for efficient data governance
- Experience manipulating large-scale structured and unstructured data
- Experience working with batch and stream processing
- Strong proficiency with TypeScript is a must
- Strong proficiency with SQL
- Experience with dashboarding tools such as Mode, Tableau, or Looker
Responsibilities
- Lead data engineering at Mesa by developing & operationalizing scalable and reliable data pipelines
- Assemble large, complex data sets that meet functional and non-functional requirements
- Work with product and cross-functional business stakeholders to build visualization layers that empower everyone at Mesa to make data-led decisions
- Drive technical delivery, from architectural design to development to QA
- Participate in customer discovery efforts and initiatives as our beta users help us refine our product
Other
- Ability to thrive in a fast-paced startup environment, handle ambiguity, and bring a strong product-ownership mindset
- Must love dogs 🐶
- This is a hybrid role, requiring four days per week in one of our offices in San Francisco, CA; New York, NY; or Austin, TX