The company is seeking to improve the accuracy, scalability, and predictive capabilities of its data platform to influence key business decisions for its customers.
Requirements
- 5+ years of engineering experience, including 3+ years working on distributed data and ML systems
- Experience with Databricks SQL, PySpark, and MLflow
- Experience with Dagster for orchestration and lakeFS for data versioning
- Up-to-date knowledge of modern ML techniques
- Experience working as both a software engineer and a data scientist (bonus)
- Experience building data products in the marketing, KYB, or credit underwriting space (bonus)
Responsibilities
- Design, build, and maintain the core small-business data product
- Collaborate with cross-functional teams to build out new product capabilities
- Engineer scalable, maintainable data/ML systems
- Build on top of the company's innovative data infrastructure
- Apply a data-focused, first-principles approach to problem solving
Other
- Understand the unmet needs of customers and create capabilities to address them
- Inspire teammates to perform at their best while fostering a collaborative, supportive team culture
- Strong collaboration skills and a desire to work in a cross-functional environment
- A principled, metrics-driven approach to solving complex data problems
- A deep commitment to creating value, with a drive to learn, ship, and iterate quickly