Compass Group USA is looking to lead AI adoption across its software development teams by creating shared context (standards, practices, and common services) that enables teams to ship safe, reliable, and cost-effective AI features. This role will define and govern an AI reference architecture, establish platform capabilities, and guide the rollout of agentic AI toward tangible business outcomes.
Requirements
- 5+ years in software/enterprise architecture; 3+ years building LLM/AI solutions at scale.
- Deep knowledge of LLM patterns: RAG, fine-tuning/adapter strategies, function/tool calling, structured output, evaluation, and caching (see the tool-calling sketch after this list).
- Strong platform background: API design, microservices, IAM/zero-trust, observability, CI/CD, IaC (Terraform), and cost management.
- Data foundations: domain modeling, data lifecycle governance, vector/graph stores, catalog/lineage, feature stores.
- Cloud proficiency (AWS/Azure/GCP) and modern data/AI stacks (e.g., Python, TypeScript, Docker/K8s, message brokers).
- Proven ability to drive standards across many teams; excellent facilitation and storytelling skills.
- Experience designing and delivering developer education (curriculum design, hands-on labs, code reviews, certification rubrics, and outcome-based evaluation).
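A minimal sketch of the function/tool-calling and structured-output patterns referenced above, assuming a hypothetical `fake_model_reply` stand-in for a model call and a small in-process `TOOL_REGISTRY`; none of the names refer to a specific vendor SDK.

```python
# Minimal sketch of function/tool calling with structured output.
# `fake_model_reply`, `TOOL_REGISTRY`, and the schema shape are illustrative
# placeholders, not a specific vendor SDK.
import json

# Tool registry: name -> (parameter schema, callable)
TOOL_REGISTRY = {
    "get_menu": (
        {"location": str, "date": str},
        lambda location, date: {"location": location, "date": date,
                                "items": ["soup", "salad"]},
    ),
}

def fake_model_reply(prompt: str) -> str:
    """Stand-in for an LLM call that returns a structured tool invocation."""
    return json.dumps({"tool": "get_menu",
                       "arguments": {"location": "Charlotte", "date": "2024-06-01"}})

def validate_arguments(schema: dict, arguments: dict) -> dict:
    """Reject calls whose arguments are missing or of the wrong type."""
    for name, expected_type in schema.items():
        if name not in arguments or not isinstance(arguments[name], expected_type):
            raise ValueError(f"bad or missing argument: {name}")
    return arguments

def dispatch(model_output: str) -> dict:
    """Parse the model's structured output and run the requested tool."""
    call = json.loads(model_output)
    schema, func = TOOL_REGISTRY[call["tool"]]
    args = validate_arguments(schema, call["arguments"])
    return func(**args)

if __name__ == "__main__":
    print(dispatch(fake_model_reply("What's on the menu in Charlotte?")))
```

The validate-then-dispatch shape is the point: the model proposes a structured call, the platform checks it against the registered schema, and only then executes the tool.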
Responsibilities
- Own the enterprise AI reference architecture (applications, data, security, and operations).
- Define canonical domain models and patterns for shared context: knowledge sources, metadata, embeddings, ontologies, and retrieval.
- Stand up common services: identity-aware retrieval (RAG), vector/graph storage, prompt/skill catalogs, tool/function registries, evaluation harnesses, and LLM observability.
- Publish and govern AI engineering standards: prompt patterns, tool/agent orchestration, structured outputs, evaluation/testing, telemetry, and rollout gates.
- Architect agent workflows (plan–act–observe–reflect loops), tool usage, and multi-agent collaboration for real-world processes (see the agent-loop sketch after this list).
- Implement guardrails for privacy, safety, IP, and data residency; integrate content filtering, PII handling, and audit trails.
- Design and deliver a developer education program on agentic coding: plan–act–observe–reflect loops, tool/skill catalogs, safety guardrails, evaluation and regression testing, cost control, and SDLC integration.
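A minimal sketch of a plan–act–observe–reflect loop of the kind referenced above, assuming a hypothetical `call_llm` placeholder and a stub `TOOLS` registry; it is illustrative, not a production agent framework.

```python
# Minimal sketch of a plan-act-observe-reflect agent loop.
# `call_llm`, `TOOLS`, and the step/trace types are illustrative placeholders.
from dataclasses import dataclass, field
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {
    "lookup_menu": lambda query: f"menu results for {query!r}",  # stub tool
}

@dataclass
class AgentStep:
    plan: str
    action: str
    observation: str
    reflection: str

@dataclass
class AgentTrace:
    goal: str
    steps: list[AgentStep] = field(default_factory=list)

def call_llm(prompt: str) -> str:
    """Placeholder for a model call; returns a canned string."""
    return f"(model output for: {prompt[:40]}...)"

def run_agent(goal: str, max_steps: int = 3) -> AgentTrace:
    trace = AgentTrace(goal=goal)
    for _ in range(max_steps):
        plan = call_llm(f"Plan the next step toward: {goal}")   # plan
        tool_name, tool_arg = "lookup_menu", goal               # act: pick a tool
        observation = TOOLS[tool_name](tool_arg)                # observe: run it
        reflection = call_llm(f"Reflect on: {observation}")     # reflect
        trace.steps.append(AgentStep(plan, f"{tool_name}({tool_arg!r})",
                                     observation, reflection))
        if "done" in reflection.lower():                        # naive stop check
            break
    return trace

if __name__ == "__main__":
    print(run_agent("find allergen-friendly lunch options"))
```

In practice the plan, tool selection, and stop condition come from model output and the evaluation harness rather than the canned strings used here; the trace object is what feeds telemetry and regression testing.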
Other
- Partner with Legal, Risk, and Security on policy, red-teaming, incident response, and vendor due diligence.
- Lead developer enablement: playbooks, training, office hours, internal community of practice.
- Drive citizen intelligence/development governance with safe rails and supported toolchains.
- Advise product leaders on AI opportunity sizing, success metrics, and ethical considerations.
- Bring experience standing up internal AI platforms or Centers of Excellence.