LogicMonitor is looking for a Software Engineer to design, build, and optimize scalable data pipelines, APIs, and retrieval frameworks that power its AI products (Edwin AI, Dexda, and other AIOps offerings) and fuel GenAI-powered insights.
Requirements
- Experience building streaming data pipelines (e.g., Kafka, Spark, or similar technologies).
- Strong programming background in Java and Python, including microservice design.
- Experience with ETL, data modeling, and distributed storage systems.
- Familiarity with LLM pipelines, embeddings, and vector retrieval.
- Understanding of Kubernetes, containerization, and CI/CD workflows.
- Awareness of data governance, validation, and lineage best practices.
- Experience defining schema contracts and implementing service-to-service communication protocols (REST, gRPC, SSE, WebSockets).
Responsibilities
- Design and build streaming and batch data pipelines that process metrics, logs, and events for AI workflows.
- Develop ETL and feature‑extraction pipelines using Python and Java microservices.
- Integrate data ingestion and enrichment from multiple observability sources into AI‑ready formats.
- Build resilient data orchestration using Kafka, Airflow, and Redis Streams.
- Implement retrieval‑augmented generation (RAG) pipelines with vector databases such as Milvus, Qdrant, OpenSearch, and Neo4j Vector.
- Develop data indexing and semantic search for large‑scale observability and operational data.
- Build and maintain Java microservices (Spring Boot) that serve AI and analytics data to Edwin AI and other AIOps applications.
Other
- 3+ years of experience in backend or data systems engineering.
- Bachelor’s degree in Computer Science, Data Engineering, or a related field.
- Strong communication and collaboration skills, working across AI, Data, and Platform teams.
- Candidates must be authorized to work in the United States on a full-time, permanent basis without requiring new or initial employer-sponsored work authorization.
- Work Location: San Francisco, CA