Tatari is looking to level up its data platform by designing, building, and scaling the infrastructure that powers all of its data engineering, data science, and analytics efforts.
Requirements
- 3+ years of dedicated experience building and scaling data pipelines and data infrastructure
- 2+ years of experience working with cloud-based infrastructure (AWS and Databricks)
- Strong programming skills in Python or a similar language
- Experience with containerization and deployment using Docker
- Hands-on experience developing and orchestrating workflows using Apache Airflow
- Expertise in building and optimizing distributed data processing systems with Spark/PySpark
- Experience deploying and managing open-source frameworks like Kafka, Kubernetes, Terraform, and Helm
Responsibilities
- Designing, building, and scaling the infrastructure that powers all of Tatari's data engineering, data science, and analytics efforts
- Establishing industry-standard data engineering practices and solutions
- Introducing new tools and technologies when necessary
- Scaling the current data infrastructure
- Developing and orchestrating workflows
- Building and optimizing distributed data processing systems
- Setting up observability and monitoring for data platforms
Other
- Total compensation: $140,000-$170,000 annually
- Equity compensation
- Health insurance coverage for you and your dependents
- 401K, FSA, and commuter benefits
- $150 monthly spending account
- $1,000 annual continued education benefit
- $500 WFH reimbursement
- Unlimited PTO and sick days
- Monthly Company Wellness Day Off
- Snacks, drinks, and catered lunches at the office
- Team building events
- Hybrid schedule with 2 in-office days per week
- This is an in-office position