Frontier Technology Inc. (FTI) is seeking an AI/ML Engineer to design, build, and deploy advanced machine learning solutions in support of defense and national security missions. The role emphasizes hands-on execution: solving complex technical challenges with direct impact on operational systems.
Requirements
- Strong Python development skills with hands-on experience building AI/ML solutions.
- Direct experience with ML and LLM frameworks such as PyTorch, TensorFlow, scikit-learn, Hugging Face Transformers, or LangChain.
- Proven ability to build and deploy MLOps pipelines using MLflow, Kubeflow, DVC, or equivalent.
- Working knowledge of vector databases (Milvus, Pinecone, Chroma, FAISS) and retrieval-based architectures (RAG, hybrid, graph).
- Familiarity with DoD/IC AI assurance, security, and deployment environments.
- Experience fine-tuning and evaluating LLMs or smaller task-specific models using LoRA, QLoRA, or other PEFT techniques.
- Familiarity with agentic frameworks (LangGraph, AutoGen, CrewAI, DSPy) and multi-agent reasoning.
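To make the retrieval-architecture requirement concrete: the core operation behind the RAG systems named above is nearest-neighbor search over embeddings. The sketch below is illustrative only (toy dimensions, random vectors, plain NumPy standing in for a vector database such as Milvus or FAISS):

```python
import numpy as np

def cosine_top_k(query_vec, doc_vecs, k=3):
    """Return indices of the k document embeddings most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                    # cosine similarity per document
    return np.argsort(scores)[::-1][:k]

# Toy corpus: 5 "documents" embedded in 4 dimensions (illustrative only).
rng = np.random.default_rng(0)
docs = rng.normal(size=(5, 4))
query = docs[2] + 0.01 * rng.normal(size=4)  # query nearly identical to doc 2
top = cosine_top_k(query, docs, k=2)         # doc 2 should rank first
```

In production, a dedicated vector index replaces the brute-force matrix product, but the retrieval contract (query embedding in, ranked document IDs out) is the same.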
Responsibilities
- Design, develop, and deploy AI/ML models and pipelines that meet mission and performance objectives.
- Build, train, and fine-tune models using frameworks such as PyTorch, TensorFlow, scikit-learn, Hugging Face Transformers, and LangChain.
- Develop and operationalize MLOps pipelines (MLflow, Kubeflow, DVC, or custom training/inference orchestration).
- Implement and optimize vector databases (Milvus, Pinecone, Chroma, FAISS) and retrieval architectures (RAG, graph, hybrid).
- Write clean, efficient Python code for data ingestion, feature engineering, embeddings, and inference services.
- Experiment with fine-tuning and optimization of LLMs and task-specific models (LoRA, QLoRA, or other PEFT techniques).
- Contribute to agent-based applications using frameworks like LangGraph, AutoGen, CrewAI, or DSPy.
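The LoRA-style fine-tuning in the responsibilities above can be pictured as adding a trainable low-rank update to a frozen weight matrix. The sketch below uses illustrative dimensions (not tied to any specific model) to show why the trainable parameter count drops so sharply:

```python
import numpy as np

d_out, d_in, r = 4096, 4096, 8           # illustrative layer size and LoRA rank
W = np.zeros((d_out, d_in))              # frozen pretrained weight (stand-in)
A = np.random.randn(r, d_in) * 0.01      # trainable low-rank factor (r x d_in)
B = np.zeros((d_out, r))                 # trainable factor, zero-initialized

# Effective weight during fine-tuning: W + B @ A; W itself never updates,
# and because B starts at zero, training begins from the pretrained behavior.
W_adapted = W + B @ A

full_params = W.size                     # 4096 * 4096 = 16,777,216
lora_params = A.size + B.size            # 2 * 4096 * 8 = 65,536
print(f"trainable params: {lora_params:,} vs {full_params:,} "
      f"({100 * lora_params / full_params:.2f}%)")
```

At rank 8, the adapter trains well under 1% of the layer's parameters, which is what makes fine-tuning large models feasible on modest hardware.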
Other
- Must be a U.S. citizen, willing to obtain and maintain a security clearance as needed.
- 4–6 years of professional experience developing and deploying AI/ML solutions in production environments.
- Understanding of prompt engineering, retrieval quality, and grounding methods.
- Exposure to GPU-based or edge inference environments.
- Experience integrating AI capabilities into production systems or mission applications.