The company is seeking to develop and maintain robust ETL pipelines, data warehousing solutions, and marketing performance reporting systems to support data-driven decision-making across marketing and business operations.
Requirements
- Highly proficient with Python, SQLAlchemy, Alembic, and pytest
- Experience with SQL and relational databases such as PostgreSQL
- Experience with database design and optimization
- Experience integrating with marketing APIs (Google Analytics, Facebook Marketing API, Salesforce, HubSpot, etc.)
- Experience with deploying data pipelines in Python
- Experience with containerized architecture (Docker, Kubernetes, Argo Workflows)
- Experience with DevOps practices and CI/CD (GitHub Actions)
Responsibilities
- Work closely with data scientists, product managers, and software engineers to build and maintain data pipelines for popular marketing platforms
- Implement data validation, quality checks, and monitoring systems to ensure data accuracy and reliability
- Support automated reporting solutions using Python
- Deploy data pipelines across multiple environments with Docker, Helm, Terraform, and Kubernetes
- Ensure data warehouse scalability, performance optimization, and cost efficiency
- Integrate new data pipelines into a management UI, ensuring it effectively supports pipeline setup and maintenance tasks
Other
- 2-5 years of experience in data engineering or related roles
- Unlimited paid time off
- 401k with company matching and no vesting period
- Annual bonuses
- Generous medical plan
- Paid parental leave