Vectra is looking for an AI/ML Engineer to design, build, and deploy machine learning systems for threat detection, LLM-powered agent reasoning, RAG pipelines, and adversarial behavior modeling within its AI-driven threat detection and response platform.
Requirements
- 3+ years of hands-on experience in applied ML or MLOps (or a PhD with practical implementation experience).
- Strong ML fundamentals (classification, clustering, anomaly detection, embedding techniques).
- Experience working with real-world noisy data, ideally in time series, logs, or graph-structured form.
- Experience with LLMs (e.g., OpenAI, Mistral, Llama), embeddings, and vector databases. Well-versed in long-context techniques (chunking, sliding windows, hierarchical summarization).
- Solid skills in Python, PyTorch or TensorFlow, and ML pipelines.
- Familiarity with cybersecurity data (SIEM logs, alerts, EDR telemetry, threat reports).
- Infrastructure skills for ML (Docker, Kubernetes, GPU scheduling, model serving).
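As a concrete illustration of the long-context techniques named above, a sliding-window chunker splits a document into overlapping segments so no boundary context is lost. This is a minimal sketch, assuming word-based splitting as a stand-in for real tokenization; the window and stride sizes are arbitrary:

```python
def sliding_window_chunks(text: str, window: int = 256, stride: int = 128) -> list[str]:
    """Split text into overlapping chunks of up to `window` words,
    advancing `stride` words per step (overlap = window - stride)."""
    words = text.split()
    chunks = []
    for start in range(0, max(len(words) - stride, 1), stride):
        chunk = words[start:start + window]
        if chunk:
            chunks.append(" ".join(chunk))
    return chunks
```

With `window=256` and `stride=128`, consecutive chunks share 128 words of overlap, which helps a retriever or summarizer see facts that straddle chunk boundaries.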
Responsibilities
- Build and fine-tune models for threat detection, anomaly detection, and behavioral modeling (supervised, unsupervised, and semi-supervised).
- Implement and optimize LLM-powered agents that reason over structured and unstructured security data.
- Develop RAG pipelines that combine embeddings, vector search, and context injection.
- Work with streaming and historical security data (logs, events, alerts) to train and evaluate models.
- Collaborate with backend and platform teams to deploy models in scalable, low-latency environments.
- Continuously improve model performance, robustness, and explainability.
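The RAG pipeline responsibility above (embeddings, vector search, context injection) can be sketched end to end. This is a toy illustration, not Vectra's implementation: the bag-of-words `embed` stands in for a real embedding model, and the brute-force cosine ranking stands in for a vector database:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (placeholder for a real embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Brute-force vector search: rank documents by similarity to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Context injection: prepend the retrieved passages to the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

In a production system the embedding model, the index (e.g., a vector database), and the prompt template would each be swappable components, but the retrieve-then-inject flow is the same.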
Other
- This is a hybrid role with the expectation of working in our San Jose office 3 days per week.
- Strong collaboration and ownership mindset: you write clean code, ask great questions, and iterate quickly.
- High agency, fast learning, direct access to customers and users.
- Backed by seasoned operators and security leaders.