Eliminate data bottlenecks and accelerate business impact for the world's most innovative companies
Requirements
- Eight or more years of full-time professional programming experience
- Five or more years of experience working with large-scale distributed computing systems on a major cloud provider such as AWS or Azure
- Experience with distributed computing technologies like Spark
- Experience with infrastructure development and deployment using tools like Terraform, CloudFormation, or ARM Templates
- Experience on a platform-level team whose customers are other developers
- Excitement about using functional languages, such as Clojure, in a production environment
- Proficiency with CI/CD tools (e.g., Jenkins, GitLab)
Responsibilities
- Ingest and process massive volumes of data daily, blending batch and real-time events to create accurate, comprehensive customer profiles
- Train machine-learning models to build customer knowledge graphs
- Turn the latest research algorithms into reliable, high-scale production systems
- Empower users with complex, real-time querying capabilities, helping them obtain meaningful insights from aggregated data
- Build deep integrations with industry-leading platforms like Databricks, Snowflake, and multiple paid-media connectors
- Deliver enriched data assets and insights to diverse systems, helping our clients unlock new value from their data
Other
- A Bachelor's degree or above in Computer Science, Software Engineering, or a related STEM field
- Collaborative, inclusive, and inspired by your customers' success
- Self-managed PTO and the flexibility to do your best work in the way that works for you