Speechify aims to solve the problem of reading being a barrier to learning by providing text-to-speech products that convert various forms of text into audio, enabling faster and more effective reading and comprehension.
Requirements
- Experience shipping Python-based services
- Experience being responsible for the successful operation of a critical production service
- Experience with public cloud environments, GCP preferred
- Experience with Infrastructure as Code, Docker, and containerized deployments
- Preferred: Experience deploying high-availability applications on Kubernetes.
- Preferred: Experience deploying ML models to production
Responsibilities
- Work alongside machine learning researchers, engineers, and product managers to bring our AI Voices to customers across a diverse range of use cases
- Deploy and operate the core ML inference workloads for our AI Voices serving pipeline
- Introduce new techniques, tools, and architecture that improve the performance, latency, throughput, and efficiency of our deployed models
- Build tools to give us visibility into our bottlenecks and sources of instability and then design and implement solutions to address the highest priority issues
Other
- This is a key role, ideal for someone who thinks strategically, enjoys fast-paced environments, is passionate about making product decisions, and has experience building user experiences that delight users.
- A strong work ethic, solid communication skills, and an obsession with winning are paramount.
- The United States base salary range for this full-time position is $140,000–$200,000, plus bonus and equity, depending on experience.
- Tell us more about yourself and why you're interested in the role when you apply.
- And don’t forget to include links to your portfolio and LinkedIn.