HyperFi is building intelligent systems that plug into a larger product: designing prompts, evaluating them, and wrapping them in reliable workflows that make sense of real-world complexity.
Requirements
- Python (primary language for all LLM + orchestration work)
- LangChain + LangGraph + LangSmith
- Databricks + PySpark for data processing, labeling, and training-context preparation
- Gemini + model routing logic
- Postgres and custom orchestration via MCP
- GitHub Actions, GCP
Responsibilities
- Build agentic LLM pipelines using LangChain, LangGraph, and LangSmith
- Design and iterate on prompt strategies, with a focus on consistency and context
- Construct retrieval-augmented generation (RAG) systems from scratch
- Own orchestration of PySpark and Databricks workflows to prepare inputs and track outputs
- Instrument evaluation metrics and telemetry to guide prompt evolution
- Work alongside product, frontend, and backend engineers to integrate AI tightly into user-facing flows
Other
- 5–7 years building production-grade ML, data, or AI systems
- Must be based in San Francisco, Las Vegas, or Tel Aviv
- Full-time role with competitive comp
- Flexible hours, async-friendly culture, engineering-led environment
- Strong grasp of prompt engineering, context construction, and retrieval design