Job Board
LogoLogo

Get Jobs Tailored to Your Resume

Filtr uses AI to scan 1000+ jobs and finds postings that perfectly matches your resume

hackajob Logo

AI/ML Evaluation and Alignment Engineer

hackajob

Salary not specified
Oct 16, 2025
Remote, US
Apply Now

Leo Technologies is seeking to solve the problem of ensuring ethical, safe, and reliable deployment of LLMs and generative AI systems in public safety and intelligence use cases.

Requirements

  • Bachelor's or Master's in Computer Science, Artificial Intelligence, Data Science, or related field.
  • 3-5+ years of hands-on experience in ML/AI engineering, with at least 2 years working directly on LLM evaluation, QA, or safety.
  • Strong familiarity with evaluation techniques for generative AI: human-in-the-loop evaluation, automated metrics, adversarial testing, red-teaming.
  • Experience with bias detection, fairness approaches, and responsible AI design.
  • Knowledge of LLM observability, monitoring, and guardrail frameworks e.g Langfuse, Langsmith
  • Proficiency with Python and modern AI/ML/LLM/Agentic AI libraries (LangGraph, Strands Agents, Pydantic AI, LangChain, HuggingFace, PyTorch, LlamaIndex).
  • Experience integrating evaluations into DevOps/MLOps pipelines, preferably with Kubernetes, Terraform, ArgoCD, or GitHub Actions.

Responsibilities

  • Build and maintain evaluation frameworks for LLMs and generative AI systems tailored to public safety and intelligence use cases.
  • Design guardrails and alignment strategies to minimize bias, toxicity, hallucinations, and other ethical risks in production workflows.
  • Partner with AI engineers and data scientists to define online and offline evaluation metrics (e.g., model drifts, data drifts, factual accuracy, consistency, safety, interpretability).
  • Implement continuous evaluation pipelines for AI models, integrated into CI/CD and production monitoring systems.
  • Collaborate with stakeholders to stress test models against edge cases, adversarial prompts, and sensitive data scenarios.
  • Research and integrate third-party evaluation frameworks and solutions; adapt them to our regulated, high-stakes environment.
  • Work with product and customer-facing teams to ensure explainability, transparency, and auditability of AI outputs.

Other

  • Strong problem-solving skills, with the ability to design practical evaluation systems for real-world, high-stakes scenarios.
  • Excellent communication skills to translate technical risks and evaluation results into insights for both technical and non-technical stakeholders.
  • Understanding of cloud AI platforms (AWS, Azure) and deployment best practices.
  • Provide technical leadership in responsible AI practices, influencing standards across the organization.
  • Document best practices and findings, and share knowledge across teams to foster a culture of responsible AI innovation.