Strider Technologies is looking to develop and optimize large-scale data processing pipelines and machine learning models that transform publicly available data into critical insights.
Requirements
Python
Rust
Ray
vLLM
Flask
SageMaker
GitHub Actions
AWS (ECS, Lambda, Step Functions)
DynamoDB
RDS
Elasticsearch
Responsibilities
Develop large-scale data processing pipelines
Create data assets from unstructured data
Scale batch and online inference (see the illustrative sketch after this list)
Automate and optimize machine learning training workflows
Enhance and expand CI/CD processes
Author design documents
Participate in code reviews
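To give a flavor of the stack listed above, the following is a minimal sketch of Ray-based batch processing of unstructured records. It is illustrative only: the record schema, batch size, and the word-count stand-in for a real model call (for example, a vLLM generate() step) are hypothetical and not part of the role description.

import ray

@ray.remote
def process_batch(records):
    # Stand-in for batch inference over a chunk of unstructured records;
    # a real pipeline would call a model (e.g. via vLLM) here instead.
    return [{"id": r["id"], "tokens": len(r["text"].split())} for r in records]

def chunks(items, size):
    # Yield fixed-size slices of the input list.
    for i in range(0, len(items), size):
        yield items[i:i + size]

if __name__ == "__main__":
    ray.init()
    records = [{"id": i, "text": f"sample document {i}"} for i in range(100)]
    futures = [process_batch.remote(batch) for batch in chunks(records, 10)]
    results = [row for batch in ray.get(futures) for row in batch]
    print(f"processed {len(results)} records")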
Other
5+ years of software engineering experience
Previous experience working in a data-heavy role
Natural problem solver with an affinity for data
Opinionated about how software is built
Proficient at breaking down large, sometimes ambiguous, problems into well-defined tasks
Value shipping code early and often
Well-honed mental model for how software systems execute and interact