Natera is looking to build and scale a new Generative AI and ML Solutions team that leverages AI to improve productivity, efficiency, and experience, and to solve the company's AI/ML challenges.
Requirements
- Expertise in Generative AI, including LLMs, prompt engineering, RAG, fine-tuning, training, and evaluation methodologies.
- Proven track record of delivering production-grade AI solutions in customer-facing and internal products, particularly those built on LLMs and ML models.
- Strong understanding of production software engineering best practices, including CI/CD, testing, observability, error handling, and security.
- Experience with AWS-based AI services and other specialized AI platforms (e.g., AWS Bedrock, Snowflake AI, Google AI, OpenAI, xAI).
- Demonstrated ability to optimize AI model performance and costs for large-scale deployments, especially of LLMs.
- Familiarity with AI governance frameworks, bias detection, explainability, and compliance (e.g., HIPAA, CLIA, FDA).
Responsibilities
- Own the end-to-end technical vision for the entire AI/ML platform, spanning data ingestion, MLOps, model serving, fine-tuning, foundation model training, RAG, and agentic applications.
- Make the critical "build vs. buy vs. open-source" decisions that balance speed, cost, and long-term defensibility.
- Design, build, and scale an AI/ML platform that provides standardized tooling, infrastructure, and workflows for LLM training, fine-tuning, retrieval-augmented generation (RAG), AI orchestration, and deployment.
- Develop reusable components and services (e.g., vector databases, prompt libraries, agent frameworks, model registries, evaluation pipelines, safety/guardrail modules) to accelerate delivery of AI solutions across product engineering teams.
- Ensure reliability, scalability, and compliance of the AI/ML platform by implementing robust observability, governance, and cost-optimization strategies tailored for large model serving and API consumption.
- Own the full lifecycle of AI solutions — from prototyping and deployment through ongoing monitoring, maintenance, and enhancements — ensuring solutions remain accurate, performant, and relevant as business needs evolve.
- Continuously improve deployed AI systems by incorporating feedback, retraining models, and updating components to adapt to changing data, regulatory requirements, and operational realities.
Other
- Recruit, hire, mentor, and retain an elite team of T-shaped AI engineers, applied ML engineers, data scientists, and platform engineers.
- Design a rigorous hiring process to find "unicorn" talent and foster a culture of continuous learning and excellence.
- Partner with business and product owners to identify, design, and implement high-impact AI solutions that drive measurable outcomes, ensuring alignment with strategic priorities.
- Implement robust processes for quality assurance, model governance, and performance monitoring.
- Drive adoption of a combination of hyperscaler AI services and specialized cloud-native AI solutions to accelerate time-to-market.