Developing and optimizing anomaly detection algorithms to power a highly scalable stream processing platform and provide actionable insights to customers.
Requirements
- Proficient in building machine learning models, including neural networks, traditional ML methods, and transformer architectures
- Fluent in machine learning frameworks such as scikit-learn, XGBoost, PyTorch, or TensorFlow
- Proficient in Python and able to transform abstract machine learning concepts into robust, efficient, and scalable solutions
- Strong Computer Science fundamentals and object-oriented design skills
- History of building large-scale data processing systems
- Background working in a fast-paced development environment
Responsibilities
- Design, implement, and maintain large-scale AI/ML pipelines for real-time anomaly detection
- Train, tune, and evaluate models, including deep learning models and large language models (LLMs)
- Design and implement sophisticated anomaly detection algorithms, such as Isolation Forests, LSTM-based models, and Variational Autoencoders (see the sketch after this list)
- Create robust evaluation frameworks and metrics to assess the performance of these algorithms
- Implement and optimize stream processing solutions using technologies like Flink and Kafka
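To illustrate the kind of algorithm named above, here is a minimal sketch of batch anomaly scoring with scikit-learn's IsolationForest. The synthetic training data, the three-feature layout, and the score_batch helper are illustrative assumptions, not the platform's actual pipeline; in production this scoring step would sit inside the Flink/Kafka stream processing mentioned in the last item.

```python
# Minimal sketch: scoring micro-batches of events with an Isolation Forest.
# Data, feature count, and helper names are hypothetical examples.
import numpy as np
from sklearn.ensemble import IsolationForest

# Fit on a window of historical feature vectors
# (e.g. latency, error rate, payload size per event).
rng = np.random.default_rng(0)
historical = rng.normal(loc=0.0, scale=1.0, size=(10_000, 3))

model = IsolationForest(n_estimators=200, contamination=0.01, random_state=0)
model.fit(historical)

def score_batch(batch: np.ndarray) -> np.ndarray:
    """Return a boolean mask marking anomalous rows in a micro-batch."""
    # predict() returns -1 for anomalies and 1 for inliers.
    return model.predict(batch) == -1

# Example micro-batch: mostly normal points plus one obvious outlier.
batch = np.vstack([rng.normal(size=(5, 3)), [[8.0, 8.0, 8.0]]])
print(score_batch(batch))
```

In a streaming setting, the same scoring function would typically be applied per window or per micro-batch, with the model retrained periodically on recent history.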
Other
- 3-5 years of software development experience and a minimum of 2 internships, with direct experience building and evaluating ML models and delivering large-scale ML products
- MS or PhD in a relevant field
- Strong team collaboration and communication skills
- US compensation range (message to applicants): 177,600 - 225,900 USD