Federato helps insurers provide affordable coverage to people and organizations facing complex modern risks such as climate change and cyberattacks. Its AI/ML-driven platform optimizes risk portfolios and improves underwriting decisions by combining data with real-time insights, ultimately helping underwriters meet their goals and serve society.
Requirements
- Proven experience designing and maintaining scalable data pipelines (e.g., using Airflow, Dagster, or Prefect); a sketch of the kind of pipeline we mean follows this list.
- Experience applying software engineering practices to data work, including dbt testing strategies.
- Strong proficiency in SQL and Python; TypeScript (or similar) experience is a plus.
- Comfort working with version control, CI/CD systems, and cloud infrastructure (e.g., AWS, GCP, Terraform).
- Prior exposure to MLOps or experience supporting AI/ML-driven products.
- Enthusiasm for building internal tools or frameworks to improve team velocity.
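To make the pipeline requirement concrete, here is a minimal sketch of a daily ETL job, assuming Airflow 2.4+ and its TaskFlow API. The dataset name (`policies_clean`), the inline rows, and the premium-validity rule are hypothetical placeholders, not Federato's actual pipelines or schemas.

```python
# A minimal daily ETL sketch using Airflow's TaskFlow API (Airflow 2.4+).
# Source/target names and the quality rule are illustrative placeholders.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def policies_etl():
    @task
    def extract() -> list[dict]:
        # In practice this would read from a warehouse or API connection;
        # inline rows keep the sketch self-contained.
        return [
            {"policy_id": "P-001", "premium": 1200.0},
            {"policy_id": "P-002", "premium": -50.0},  # fails the check below
        ]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Drop rows that fail a basic data-quality rule (non-negative premium),
        # the sort of rule a dbt test would also enforce downstream.
        return [r for r in rows if r["premium"] >= 0]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder for a warehouse write (e.g., BigQuery or Snowflake).
        print(f"loading {len(rows)} rows into policies_clean")

    load(transform(extract()))


policies_etl()
```

An equivalent job in Dagster or Prefect follows the same extract/transform/load shape, with the quality rule typically pushed into a dbt test.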
Responsibilities
- Collaborate with Data Science, Product Managers, and Software Engineers to build robust ETL pipelines that enable the Product Support team to deliver compelling user-facing features.
- Contribute to architecture decisions, observability tooling, and data quality initiatives that keep our platform robust and maintainable.
- Contribute to a scalable internal framework for managing prompt engineering pipelines and other AI workflows (see the sketch after this list).
- Enforce and elevate engineering best practices across the AI/ML org, including code quality, testing, and documentation.
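As a rough illustration of the prompt-pipeline responsibility, here is a hypothetical sketch of a versioned prompt registry in Python. The class names and the `risk_summary` template are invented for illustration and do not describe an existing Federato API.

```python
# Hypothetical sketch: versioned prompt templates kept in a registry and
# rendered with validated inputs. Names are illustrative only.
from dataclasses import dataclass, field
from string import Formatter


@dataclass(frozen=True)
class PromptTemplate:
    name: str
    version: int
    template: str

    def required_fields(self) -> set[str]:
        # Extract placeholder names from the template string.
        return {f for _, f, _, _ in Formatter().parse(self.template) if f}

    def render(self, **inputs: str) -> str:
        missing = self.required_fields() - inputs.keys()
        if missing:
            raise ValueError(
                f"missing inputs for {self.name} v{self.version}: {missing}"
            )
        return self.template.format(**inputs)


@dataclass
class PromptRegistry:
    _templates: dict[tuple[str, int], PromptTemplate] = field(default_factory=dict)

    def register(self, tpl: PromptTemplate) -> None:
        self._templates[(tpl.name, tpl.version)] = tpl

    def get(self, name: str, version: int) -> PromptTemplate:
        return self._templates[(name, version)]


registry = PromptRegistry()
registry.register(PromptTemplate(
    name="risk_summary",
    version=1,
    template="Summarize the underwriting risk for {account} in {industry}.",
))
print(registry.get("risk_summary", 1).render(account="Acme Co", industry="logistics"))
```

Pinning templates to explicit versions keeps prompt changes reviewable and testable, the same way migrations and dbt models are versioned elsewhere in the stack.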
Other
- 5+ years of experience in data engineering, backend engineering, or related roles with a focus on data infrastructure.
- Comfortable navigating ambiguity and working closely with business stakeholders to understand their data needs.
- Proven track record of designing high-impact data products and pipelines in fast-paced environments.
- Prior experience working in or adjacent to insurance, fintech, or risk modeling domains.
- Contributions to open-source data tools or involvement in the data community.