Abnormal AI is tackling modern cyber threats by leveraging AI-native technologies and building a team of top-tier engineers to redefine how software is built.
Requirements
- Experience building and operating distributed systems and services at high scale (~billions of transactions per day)
- Experience with streaming data systems such as Kafka, Spark, or MapReduce for processing large data sets
- Experience working with large-scale data storage systems such as DynamoDB, Aerospike, Redis, and Databricks
- 3-5 years of overall software engineering experience
- Strong grasp of software development best practices
- Experience with AI-native development tools such as Cursor, GitHub Copilot, and Claude
Responsibilities
- Build out storage and retrieval infrastructure for our Aggregates Platform, which allows our various ingestion pipelines and detection algorithms to send billions of data points per day to be stored and processed
- Ensure this data architecture meets our scale and low-latency requirements and grows with our cellular architecture
- Work on a series of projects focused on scalability, reliability, and cost optimization
- Partner with our various customer teams
- Understand the use cases and scale requirements of our customers and work to meet them
- Help build our group through excellent interview practices
Other
- Excellent verbal and written communication skills
- Ability to work in a remote and distributed team
- Ability to be a talent magnet: someone who, through the interview process, demonstrates their strengths in a way that attracts candidates to Abnormal and to the Aggregates team
- Ability to accurately assess candidates' technical skills, cultural fit, and likelihood of success at Abnormal
- Bachelor's degree or higher in Computer Science or a related field