Building Zoom's next-generation AI-native knowledge platform, helping organizations easily access and retrieve internal knowledge with the power of LLMs
Requirements
- Have 4+ years of experience in backend or distributed systems engineering
- Have experience designing and operating large-scale data ingestion pipelines (message queues, vector stores, Temporal, Elasticsearch, etc.)
- Track record of building highly available, multi-tenant backend services
- Have experience with document-level permission modeling and secure data handling
- Proficient with cloud-native tools such as Docker, Kubernetes, and AWS
- Experience with Go is a bonus
- Have experience integrating with SaaS platforms (Google Workspace, Microsoft 365, Slack, etc.)
Responsibilities
- Designing and implementing a scalable RAG system for real-time Q&A across internal content (meetings, messages, documents, whiteboards, videos, etc.).
- Building robust ingestion and indexing pipelines for semi-structured data sources with fine-grained, permission-aware access control.
- Developing APIs and backend systems to enable efficient querying, retrieval, and ranking.
- Collaborating with ML/NLP engineers to iterate on embedding models and improve search quality.
- Ensuring reliability, low latency, and scalability across the entire data retrieval and augmentation stack.
- Monitoring system performance and optimizing for high-throughput, low-latency workloads under real-world load.
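To illustrate the kind of permission-aware retrieval the responsibilities above describe, here is a minimal sketch in Go (listed above as a bonus skill). All names (`Doc`, `retrieve`, the ACL-as-map model) are illustrative assumptions, not part of any actual Zoom system: it shows enforcing document-level permissions as a filter before similarity ranking, rather than post-filtering ranked results.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// Doc is a hypothetical indexed document with an embedding and a per-user ACL.
type Doc struct {
	ID        string
	Embedding []float64
	Allowed   map[string]bool // user IDs permitted to see this doc
}

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// retrieve ranks only the documents the user is allowed to see and
// returns the top-k IDs; the permission check runs before scoring.
func retrieve(docs []Doc, query []float64, userID string, k int) []string {
	type scored struct {
		id    string
		score float64
	}
	var candidates []scored
	for _, d := range docs {
		if !d.Allowed[userID] { // fine-grained access control first
			continue
		}
		candidates = append(candidates, scored{d.ID, cosine(d.Embedding, query)})
	}
	sort.Slice(candidates, func(i, j int) bool {
		return candidates[i].score > candidates[j].score
	})
	if len(candidates) > k {
		candidates = candidates[:k]
	}
	ids := make([]string, 0, len(candidates))
	for _, c := range candidates {
		ids = append(ids, c.id)
	}
	return ids
}

func main() {
	docs := []Doc{
		{"meeting-notes", []float64{0.9, 0.1}, map[string]bool{"alice": true}},
		{"hr-policy", []float64{0.8, 0.2}, map[string]bool{"bob": true}},
		{"design-doc", []float64{0.1, 0.9}, map[string]bool{"alice": true}},
	}
	// alice never sees hr-policy, regardless of how well it scores.
	fmt.Println(retrieve(docs, []float64{1, 0}, "alice", 2))
}
```

In a production system the ACL check would typically be pushed down into the vector store's metadata filter so unauthorized documents are never fetched, but the ordering principle (filter, then rank) is the same.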
Other
- 4+ years of experience
- Productivity mindset with experience using AI tools effectively
- Location based compensation structure
- Hybrid, Remote, or In-Person work style
- Benefits program offers a variety of perks, benefits, and options to help employees maintain their physical, mental, emotional, and financial health