Sev1Tech is seeking an AI Integration Engineer to integrate AI models into production systems with robust performance, real-time monitoring, and secure operations. The role focuses on building dashboards for real-time and historical model health, detecting data drift, and managing AI logging, while following secure-by-design practices and staying aligned with business objectives.
Requirements
- Hands-on experience with dashboarding tools (e.g., Grafana, Kibana) and observability platforms (e.g., Prometheus, Datadog).
- Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) for AI deployment.
- Proficiency in Python; knowledge of JavaScript, C++, or Go is a plus for UI or system-level integration.
- Experience with containerization (Docker, Kubernetes) and API development (REST, GraphQL).
- Expertise in logging frameworks (e.g., ELK Stack, OpenTelemetry) and visualization tools (e.g., Plotly, Chart.js).
- Understanding of AI model metrics (e.g., F1 score, latency) and drift detection techniques such as the Population Stability Index (PSI) and the Kolmogorov–Smirnov (KS) test; a drift-check sketch follows this list.
- Knowledge of AI vulnerabilities (e.g., prompt injection, model inversion) and mitigation strategies (e.g., differential privacy, the Adversarial Robustness Toolbox (ART)).
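To ground the drift-detection requirement above, here is a minimal sketch, assuming a single continuous numeric feature, that computes a PSI over quantile bins and a two-sample KS test with NumPy and SciPy. The 10-bin scheme and the 0.2 PSI / 0.01 p-value alert thresholds are common rules of thumb, not values specified in this posting.

```python
import numpy as np
from scipy.stats import ks_2samp

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index of `current` against `reference`,
    using quantile bins derived from the reference sample."""
    edges = np.quantile(reference, np.linspace(0.0, 1.0, bins + 1))
    # Clip both samples into the reference range so extreme values fall in the outer bins.
    ref_frac = np.histogram(np.clip(reference, edges[0], edges[-1]), bins=edges)[0] / len(reference)
    cur_frac = np.histogram(np.clip(current, edges[0], edges[-1]), bins=edges)[0] / len(current)
    ref_frac = np.clip(ref_frac, 1e-6, None)  # avoid log(0) and division by zero
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    reference = rng.normal(0.0, 1.0, 10_000)   # training-time feature values
    current = rng.normal(0.3, 1.2, 10_000)     # shifted production values

    psi_value = psi(reference, current)
    ks_stat, ks_pvalue = ks_2samp(reference, current)
    print(f"PSI={psi_value:.3f}  KS statistic={ks_stat:.3f} (p={ks_pvalue:.2e})")

    # Rule-of-thumb thresholds: PSI > 0.2 or a very small KS p-value suggests drift.
    if psi_value > 0.2 or ks_pvalue < 0.01:
        print("Drift alert: feature distribution has shifted")
```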
Responsibilities
- Integrate AI/ML models into applications (e.g., web, mobile, IoT) using APIs (REST, gRPC) and serving platforms such as TensorFlow Serving or AWS SageMaker; a REST integration sketch appears after this list.
- Create real-time and historical dashboards in Grafana, Kibana, or Plotly to monitor model health (e.g., latency, accuracy) and data drift; a metrics-instrumentation sketch appears after this list.
- Implement monitoring pipelines with tools like Evidently AI or Weights & Biases to detect data drift and model degradation, triggering alerts as needed.
- Set up logging systems with the ELK Stack, OpenTelemetry, or LangSmith to capture AI events, errors, and traces for debugging and auditing; a tracing sketch appears after this list.
- Apply secure-by-design principles to protect models and data from vulnerabilities (e.g., adversarial attacks, data leakage) using tools such as ART.
- Optimize model inference for performance (e.g., via quantization, edge deployment) and ensure compatibility with cloud (AWS, Azure) or on-premises infrastructure.
- Perform end-to-end testing of AI integrations, including stress testing and validation of dashboard metrics.
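As a sketch of the REST integration path, the snippet below posts a batch of instances to a TensorFlow Serving predict endpoint with the `requests` library and measures request latency. The host, port, model name, and feature shape (`localhost:8501`, `my_model`, four floats) are placeholders; a real integration would match the served model's signature.

```python
import time
import requests

# Placeholder endpoint: TensorFlow Serving's REST predict API is
# POST /v1/models/<model_name>:predict on port 8501 by default.
SERVING_URL = "http://localhost:8501/v1/models/my_model:predict"

def predict(features: list[list[float]], timeout_s: float = 2.0) -> list:
    """Send a batch of feature rows to the model server and return predictions,
    recording wall-clock latency for monitoring."""
    payload = {"instances": features}
    start = time.perf_counter()
    response = requests.post(SERVING_URL, json=payload, timeout=timeout_s)
    latency_ms = (time.perf_counter() - start) * 1000
    response.raise_for_status()              # surface HTTP errors to the caller
    predictions = response.json()["predictions"]
    print(f"served {len(features)} rows in {latency_ms:.1f} ms")
    return predictions

if __name__ == "__main__":
    # Each instance's shape must match the served model's input signature.
    print(predict([[0.1, 0.2, 0.3, 0.4]]))
```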
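For the dashboard duty, one common approach (a sketch, not a prescribed design) is to instrument the integration layer with `prometheus_client` so Prometheus can scrape prediction counts and latency histograms for Grafana panels. The metric names, labels, port, and the simulated model call below are illustrative assumptions.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Metric and label names are illustrative; real names should follow team conventions.
PREDICTIONS = Counter("model_predictions_total", "Predictions served", ["model", "status"])
LATENCY = Histogram("model_inference_latency_seconds", "Inference latency", ["model"])

def run_inference(model_name: str) -> float:
    """Stand-in for a real model call; sleeps to simulate inference work."""
    time.sleep(random.uniform(0.01, 0.05))
    return random.random()

def handle_request(model_name: str = "churn_model") -> float:
    with LATENCY.labels(model=model_name).time():   # records duration into the histogram
        try:
            score = run_inference(model_name)
            PREDICTIONS.labels(model=model_name, status="ok").inc()
            return score
        except Exception:
            PREDICTIONS.labels(model=model_name, status="error").inc()
            raise

if __name__ == "__main__":
    start_http_server(9100)      # metrics exposed at http://localhost:9100/metrics
    for _ in range(1000):        # simulate traffic so the Grafana panels have data
        handle_request()
```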
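For the tracing side of the logging duty, here is a minimal OpenTelemetry sketch that wraps each prediction in a span and prints it with the console exporter; a production setup would typically swap in an OTLP exporter feeding a collector and a backend such as the ELK Stack. The span name, attribute keys, and the stand-in model are assumptions for illustration.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Console exporter keeps the example self-contained; production setups usually
# export via OTLP to a collector that feeds the ELK Stack or another backend.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("ai.integration.demo")

def fake_model(features):
    """Stand-in for a real model call."""
    return sum(features) / len(features)

def predict(features):
    # Each prediction gets its own span so latency and errors are traceable per request.
    with tracer.start_as_current_span("model_inference") as span:
        span.set_attribute("model.name", "churn_model")   # illustrative attribute keys
        span.set_attribute("input.size", len(features))
        try:
            score = fake_model(features)
            span.set_attribute("prediction.value", score)
            return score
        except Exception as exc:
            span.record_exception(exc)                    # attach the error to the trace
            raise

if __name__ == "__main__":
    predict([0.2, 0.4, 0.6])
```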
Other
- 4+ years in software engineering or AI integration, with experience deploying AI models in production.
- Partner with data scientists to understand model requirements, with DevOps teams on infrastructure alignment, and with stakeholders on reporting needs.
- Ensure integrations comply with regulations and frameworks such as GDPR, HIPAA, and the NIST AI RMF for secure data handling.
- Strong problem-solving skills for debugging integration issues and optimizing dashboards.
- Excellent communication skills to translate technical metrics into business insights.