Life360 is looking to enhance and maintain the data processing and storage pipelines that power its robust, secure data lake.
Requirements
- Deep experience with both Databricks and AWS cloud computing platforms
- Proficiency in Python programming
- Proficiency with SQL and the ability to optimize queries
- Experience with large-scale data processing using Spark and/or Presto/Trino
- Experience with streaming data using tools such as Kinesis, Kafka, or Flink
- Experience working with high-volume event data from platforms such as Amplitude or Braze
- Experience with job orchestration tooling such as Airflow or Databricks Workflows
Responsibilities
- Design, implement, and maintain scalable data processing platforms used for real-time analytics and exploratory data analysis.
- Manage various types of data from ingestion through ELT to storage and batch processing.
- Automate, test, and harden all data workflows.
- Architect logical and physical data models to ensure the needs of the business are met.
- Collaborate with analytics and platform teams while applying engineering best practices.
- Architect and develop systems and algorithms for distributed real-time analytics and data processing.
- Implement strategies for acquiring and transforming data to develop new insights.
Qualifications
- Minimum of 5 years of experience working with high-volume data infrastructure
- BS in Computer Science, Software Engineering, Mathematics, or equivalent experience
- Strong communication skills and the ability to work independently
Benefits
- Competitive pay and benefits
- Flexible PTO and 13 company-wide days off throughout the year
- Week-long synchronized company shutdowns in winter and summer