Tatari is looking to level up its data platform by designing, building, and scaling the infrastructure that powers its data engineering, data science, and analytics efforts.
Requirements
- 3+ years of dedicated experience building and scaling data pipelines and data infrastructure
- 2+ years of experience working with cloud-based infrastructure (AWS and Databricks)
- Strong programming skills in Python or a similar language
- Experience with containerization and deployment using Docker
- Hands-on experience developing and orchestrating workflows using Apache Airflow
- Expertise in building and optimizing distributed data processing systems with Spark/PySpark
- Experience deploying and managing open-source tools such as Kafka, Kubernetes, Terraform, and Helm
Responsibilities
- Design, build, and scale the infrastructure that powers data engineering, data science, and analytics efforts
- Establish industry-standard data engineering practices and solutions
- Introduce new tools and technologies as needed
- Scale current data infrastructure
- Develop and orchestrate workflows
- Build and optimize distributed data processing systems
- Set up observability and monitoring for data platforms
Other
- Total compensation: $140,000-$170,000 annually
- Equity compensation
- Health insurance coverage for you and your dependents
- 401K, FSA, and commuter benefits
- $150 monthly spending account
- $1,000 annual continued education benefit
- $500 WFH reimbursement
- Unlimited PTO and sick days
- Monthly Company Wellness Day Off
- Snacks, drinks, and catered lunches at the office
- Team building events
- Hybrid return-to-office schedule of 2 days per week
- This is an in-office position, not a fully remote role