Speechify is solving the problem of making reading accessible to everyone by building high-quality, petabyte-scale datasets at low cost to support model training for its text-to-speech products.
Requirements
- Proficiency with bash/Python scripting in Linux environments
- Proficiency with Docker and Infrastructure-as-Code concepts, plus professional experience with at least one major cloud provider (we use GCP)
- Experience with web crawlers and large-scale data processing workflows is a plus
Responsibilities
- Be scrappy in finding new sources of audio data and bringing them into our ingestion pipeline
- Operate and extend the cloud infrastructure for our ingestion pipeline, currently running on GCP and managed with Terraform
- Collaborate closely with our Scientists to shift the cost/throughput/quality frontier, delivering richer data at greater scale and lower cost to power our next-generation models
- Collaborate with others on the AI Team and Speechify Leadership to craft the AI Team's dataset roadmap, powering Speechify's next-generation consumer and enterprise products
Other
- BS/MS/PhD in Computer Science or a related field
- 5+ years of industry experience in software development
- Ability to handle multiple tasks and adapt to changing priorities
- Strong communication skills, both written and verbal