Our client is building the enterprise standard for trustworthy AI-powered knowledge systems. The platform delivers fast, verified answers through advanced Retrieval-Augmented Generation (RAG) pipelines. They are scaling their QA function to meet the demands of complex microservice architectures, ML pipelines, and cloud-native deployments.
Requirements
- 4+ years in test automation; 2+ years testing API-first microservices
- Strong understanding of testing distributed systems (integration, mocking, contract tests)
- CI/CD debugging experience
- UI automation with Playwright or Cypress
- Proficiency in Go (Ginkgo/Gomega), Python, or JavaScript/TypeScript (see the sketch after this list)
- Confident navigating cloud-native environments (GCP, AWS)
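To give a flavor of the API-level testing this role centers on, here is a minimal, illustrative Ginkgo/Gomega spec. The package, endpoint, and suite names are hypothetical, and the in-process `httptest` stub stands in for a real deployed microservice; it is a sketch of the style, not the client's actual test code.

```go
package orders_test

import (
	"net/http"
	"net/http/httptest"
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// TestOrders wires the Ginkgo suite into the standard `go test` runner.
func TestOrders(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Orders API Suite")
}

var _ = Describe("GET /orders/{id}", func() {
	It("returns 404 for an unknown order ID", func() {
		// Hypothetical in-process stub; in practice this would target a
		// deployed service, or a contract-test mock of its downstream.
		srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.WriteHeader(http.StatusNotFound)
		}))
		defer srv.Close()

		resp, err := http.Get(srv.URL + "/orders/does-not-exist")
		Expect(err).NotTo(HaveOccurred())
		defer resp.Body.Close()
		Expect(resp.StatusCode).To(Equal(http.StatusNotFound))
	})
})
```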
Responsibilities
- Build and maintain automation suites for API, UI, and system-level tests using Supertest, Pytest, Playwright, Cypress, or Ginkgo/Gomega
- Own end-to-end testing for distributed microservices, focusing on API behavior and data consistency
- Debug CI/CD pipelines (GitHub Actions, Bitbucket Pipelines, Jenkins) across GCP and AWS environments
- Work with ML teams to evaluate generative AI models and validate model output (a sketch follows below)
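On the model-validation side, one common pattern is wrapping generative output in deterministic checks. The sketch below, entirely hypothetical (the `ragResult` type and document IDs are invented for illustration), asserts that a RAG answer cites only documents the retriever actually returned:

```go
package ragqa_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

func TestRAGAnswers(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "RAG Answer Validation Suite")
}

// ragResult is a hypothetical stand-in for one RAG pipeline run: the
// documents the retriever returned and the IDs the generated answer cites.
type ragResult struct {
	retrievedIDs map[string]bool
	citedIDs     []string
}

var _ = Describe("generated answers", func() {
	It("cite only documents the retriever actually returned", func() {
		res := ragResult{
			retrievedIDs: map[string]bool{"doc-17": true, "doc-42": true},
			citedIDs:     []string{"doc-17", "doc-42"},
		}
		for _, id := range res.citedIDs {
			Expect(res.retrievedIDs).To(HaveKey(id),
				"answer cites %s, which the retriever never returned", id)
		}
	})
})
```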
Other
- You take ownership, hunt root causes relentlessly, and write code that others can build on.
- You’re direct, iterative, and not afraid to break brittle systems to build better ones.