Drive the development of a modular, reusable Gen AI product suite that enables cross-functional teams to deploy AI solutions rapidly without requiring deep business context.
Requirements
Hands-on experience with LLM integration (e.g., OpenAI, Anthropic, Llama 2) and frameworks (LangChain, LlamaIndex).
Expertise in RAG workflows: document chunking (sentence transformers), vector DBs (Pinecone, FAISS), and hybrid search.
Familiarity with text-to-SQL systems, few-shot/chain-of-thought prompting, and traditional ML (e.g., clustering with scikit-learn, neural networks with PyTorch).
Proficiency in Python, API design (FastAPI, Flask), and cloud platforms (AWS SageMaker, Azure AI).
Experience with CI/CD, containerization (Docker), and infrastructure-as-code (Terraform).
Frontend integration (React/Streamlit for config UIs) and middleware (message queues, auth systems such as OAuth 2.0).
Experience with open-source projects (contributor/maintainer).
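To make the RAG expertise above concrete, here is a minimal sketch of the chunk-embed-retrieve loop. It stands in a toy bag-of-words scorer for a real sentence-transformer embedding and an in-memory list for a vector DB such as Pinecone or FAISS; the function names (`chunk`, `embed`, `retrieve`) are illustrative, not from any specific framework.

```python
# Minimal RAG retrieval sketch: fixed-size overlapping chunking plus
# cosine similarity over toy word-count "embeddings". A production
# pipeline would use sentence transformers and a real vector DB.
from collections import Counter
import math

def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts (stand-in for a sentence transformer)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = "Vector databases store embeddings. Hybrid search combines keyword and vector scores."
top = retrieve("vector search", chunk(docs))
```

Swapping `embed` for a sentence-transformer call and `retrieve` for a vector DB query preserves this shape, which is what makes the module reusable across business contexts.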
Responsibilities
Define the product vision and roadmap for reusable Gen AI modules (e.g., RAG, prompting frameworks, hybrid ML/LLM systems).
Architect parameterized, business-agnostic solutions that abstract complexity (e.g., pre-configured prompts, vector DB connectors, chunking logic).
Design APIs and microservices to expose modules as reusable components (e.g., “text-to-SQL service,” “RAG-as-a-service”).
Standardize patterns (e.g., prompt templates, chunking strategies, few-shot training pipelines) across use cases.
Integrate LLM workflows (e.g., OpenAI, Claude) with traditional ML (clustering, classification) and enterprise systems (databases, UI tools).
Optimize performance of Gen AI components (cost, latency, accuracy) and ensure scalability (e.g., load balancing for vector DBs).
Build developer tools (SDKs, UI templates) to help teams self-serve (e.g., drag-and-drop prompt builders, vector DB configurators).
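One way to standardize prompt patterns as reusable components, per the responsibilities above, is a parameterized template registry. The sketch below is illustrative; the names (`PromptTemplate`, `registry`, the `text-to-sql` entry) are hypothetical, not taken from any specific framework.

```python
# Sketch of a business-agnostic prompt-template registry: templates carry
# optional few-shot examples and accept parameters at render time, so
# teams reuse the pattern without knowing its internals.
from dataclasses import dataclass, field
from string import Template

@dataclass
class PromptTemplate:
    name: str
    template: str
    few_shot: list[tuple[str, str]] = field(default_factory=list)

    def render(self, **params: str) -> str:
        shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in self.few_shot)
        body = Template(self.template).substitute(**params)
        return f"{shots}\n{body}" if shots else body

registry: dict[str, PromptTemplate] = {}

def register(tpl: PromptTemplate) -> None:
    registry[tpl.name] = tpl

# Example module registration: a text-to-SQL prompt with one few-shot pair.
register(PromptTemplate(
    name="text-to-sql",
    template="Translate to SQL for table $table:\n$question",
    few_shot=[("How many users?", "SELECT COUNT(*) FROM users;")],
))

prompt = registry["text-to-sql"].render(table="orders", question="Total revenue?")
```

Exposing `registry` behind a FastAPI endpoint would turn the same object into the "text-to-SQL service" style of reusable component described above.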
Other
Partner with business teams to map their needs to pre-built modules.
Foster an open-source-like community: Create contribution guidelines, review external code, and incentivize modular feature additions.
Develop documentation, tutorials, and sandbox environments for testing modules.
Train teams on best practices (e.g., prompt engineering, security for LLM outputs).
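As one example of the "security for LLM outputs" practice mentioned above: model-generated SQL should be treated as untrusted input and validated before execution. The guard below is an illustrative sketch under that assumption, not a complete SQL sanitizer.

```python
# Allow only a single read-only SELECT statement from an LLM's
# text-to-SQL output; reject write/DDL keywords and multi-statement
# payloads before anything reaches the database.
import re

FORBIDDEN = re.compile(r"\b(insert|update|delete|drop|alter|grant|truncate)\b",
                       re.IGNORECASE)

def is_safe_select(sql: str) -> bool:
    """Accept a single SELECT statement with no write/DDL keywords."""
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:  # reject multi-statement payloads
        return False
    if not stripped.lower().startswith("select"):
        return False
    return not FORBIDDEN.search(stripped)
```

A denylist like this is a teaching example only; training material should also cover parameterized queries and least-privilege database roles.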