The company is looking to develop and optimize the anomaly detection algorithms that power its highly scalable stream processing platform.
Requirements
Proficient in building machine learning models, including neural networks, transformers, large language models (LLMs), decision trees, and other classical machine learning models
Fluent in one or more machine learning frameworks such as scikit-learn, XGBoost, PyTorch, or TensorFlow
Proficient in Python and able to transform abstract machine learning concepts into robust, efficient, and scalable solutions
Strong Computer Science fundamentals and object-oriented design skills
Track record of building large-scale data processing systems
Responsibilities
Design, implement, and maintain large-scale AI/ML pipelines for real-time anomaly detection
Train, tune, and evaluate models, including deep learning models and large language models (LLMs)
Design and implement sophisticated anomaly detection algorithms, such as Isolation Forests, LSTM-based models, and Variational Autoencoders
Create robust evaluation frameworks and metrics to assess the performance of these algorithms
Implement and optimize stream processing solutions using technologies like Flink and Kafka
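As a minimal sketch of one of the anomaly detection algorithms named above (Isolation Forests), the following uses scikit-learn's IsolationForest on synthetic data; the dataset, contamination rate, and seed are illustrative assumptions, not details from the posting.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Simulated stream metrics: mostly normal points plus a few injected outliers.
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
outliers = rng.uniform(low=6.0, high=8.0, size=(5, 2))
X = np.vstack([normal, outliers])

# contamination is an assumed prior on the anomaly rate, not a tuned value.
model = IsolationForest(contamination=0.025, random_state=42)
labels = model.fit_predict(X)  # -1 = anomaly, 1 = normal

print(int((labels == -1).sum()))  # count of points flagged as anomalous
```

In production this would run over a feature stream (e.g. out of Flink or a Kafka consumer) rather than an in-memory array, with the model periodically retrained on recent windows.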
Other
3-5 years of software development experience and a minimum of 2 internships, with direct experience building and evaluating ML models and delivering large-scale ML products
MS or PhD in a relevant field
Strong team collaboration and communication skills
Background working in a fast-paced development environment