The company is working to improve the performance and capabilities of its mission-critical infrastructure and data platform.
Requirements
- Experience with Python, Django, Celery, Airflow, and Kafka
- Experience with React, Redux, and Mapbox
- Experience with PostgreSQL and Elasticsearch
- Experience with machine learning models hosted on Amazon Bedrock and SageMaker
- Experience with AWS, Pulumi, Terraform, and Kubernetes
- Strong understanding of product development and navigating large codebases
- Ability to write robust, well-tested, and well-designed code
Responsibilities
- Re-architecting the Elasticsearch cluster to achieve order-of-magnitude improvements in performance
- Owning strategic initiatives that deliver industry-leading platform performance and capabilities
- Creating novel capabilities that span the platform
- Optimizing other subsystems to deliver similar order-of-magnitude performance gains
- Collaborating with the deployment team and users to iterate and solve problems
- Taking full responsibility for major features and working closely with other engineers to drive those features to completion
- Pushing the boundaries of what a data platform can do
Other
- Desire and drive to own large portions of the application from start to finish
- Passion for crafting and shipping software solutions that delight users
- Ability to thrive in ambiguity and an eagerness to take on hard problems
- Excellent technical vision with the ability to synthesize product requests into strong and reliable software components
- Degree in Computer Science or a related field, or equivalent experience
- 5+ years of experience working with cross-functional software development teams
- Candidates based in San Francisco or Washington, DC strongly preferred (hybrid work is acceptable)