Abnormal AI is revolutionizing cybersecurity by leveraging AI-native technologies to combat modern cyber threats. We’re building a team of top-tier engineers who are excited to use Generative AI tools like Cursor, GitHub Copilot, and Claude to redefine how software is built – faster, smarter, and more efficiently.
Requirements
- Experience building and operating distributed systems and services at high scale (billions of transactions per day)
- Streaming data systems: experience using Kafka, Spark, MapReduce, or similar to process large data sets
- Experience working on large-scale data storage systems such as DynamoDB, Aerospike, Redis, and Databricks
- 3-5 years of overall software engineering experience
- Strong sense of software development best practices
Responsibilities
- Build out storage and retrieval infrastructure for our Aggregates Platform, which allows our various ingestion pipelines and detection algorithms to send us billions of datapoints per day to be stored and processed
- Ensure this large-scale data architecture meets our low-latency requirements at our scale and grows with our cellular architecture
- Work on a series of projects targeting scalability, reliability, and cost optimization
- Partner with our various customer teams to understand their use cases and scale requirements, and work to implement them
- Help build our group through excellent interview practices
Other
- You are someone who wants to make an impact.
- You are passionate about solving customer problems.
- You want to apply your skills to problems that leave the world in a better place.
- You should be comfortable with a level of uncertainty beyond what you’d find at a more mature company, or even on a more mature team at Abnormal.
- You have excellent communication skills, both verbal and written.