At ARB Interactive, the business problem is to shape and expand the foundation of our modern data stack, turning raw data into business-critical insights to support a high-growth environment. We need to ensure our systems scale with the business and enable self-service analytics.
Requirements
- Strong SQL and Python skills, with a focus on readable and efficient code
- Deep understanding of data warehousing concepts and data modeling best practices
- Hands-on experience with tools in the modern data stack (e.g., dbt, Airflow, Snowflake, BigQuery, Redshift)
- Familiarity with event tracking platforms (e.g., Segment, Amplitude)
Responsibilities
- Design, build, and maintain scalable, efficient ETL/ELT pipelines
- Model clean, trusted datasets to support analytics, experimentation, and reporting
- Optimize our data infrastructure for performance, cost, governance, and maintainability
- Partner with data analysts and product teams to improve data accessibility and accuracy
- Enable self-service analytics by designing intuitive data models and comprehensive documentation
- Implement robust data quality frameworks, monitoring, alerting, and observability to ensure data reliability
- Collaborate with product and engineering on instrumentation of new product features and events
Other
- 5+ years of experience in data engineering or related roles
- Strong communication and collaboration skills; able to work cross-functionally with analysts, PMs, and engineers
- A bias toward action and ownership; you thrive in fast-paced, high-autonomy environments
- Experience in gaming, entertainment, or high-volume consumer applications
- Experience hiring or onboarding engineers in a high-growth environment