Dispatch Energy is building a cutting-edge internal platform powered by a large language model (LLM) to augment workflows across the enterprise—from engineering and development to business operations.
Requirements
- Experience using at least 5 of the following technologies, languages, or frameworks: GitHub, TensorFlow, PyTorch, Pandas, NumPy, Django, Hugging Face, Perplexity, Docker, Cursor, SQL, RAGFlow, Pinecone, Spark, Apache Arrow.
- Strong fundamentals in software engineering and a passion for applied machine learning.
- Experience in distributed learning research.
- Familiarity with power markets or energy infrastructure.
Responsibilities
- Work across the entire ML/AI stack supporting the LLM-powered platform.
- Assist with training, fine-tuning, and deploying models using open-source frameworks.
- Support data engineering and orchestration pipelines for LLM inference and retrieval.
- Collaborate on frontend features to integrate the LLM into real-world Dispatch workflows.
- Help build scalable, containerized services using tools like Docker and Kubernetes.
- Contribute to performance benchmarking, GPU optimization, and inference serving.
Other
- Ability to work autonomously in a fast-paced startup environment.
- Curiosity, creativity, and a bias toward building.
- New York-based candidates preferred, but remote applicants are welcome.