Blueprint helps organizations unlock value from their existing assets, applying modern technology to create additional revenue streams and new lines of business.
Requirements
- Strong experience with database query languages, preferably Kusto Query Language (KQL); experience with SQL or a similar language is also acceptable
- Hands-on experience building and troubleshooting data orchestration pipelines
- Experience with deployment automation and infrastructure-as-code tools
- Strong engineering skills in at least one programming language such as C or PowerShell
- Experience designing or supporting scalable, distributed systems
- Familiarity with version control systems (e.g., Git) and collaborative development workflows
- Experience with cloud-native analytics platforms such as Azure Data Explorer (Kusto)
Responsibilities
- Design, develop, and maintain data pipelines for ingesting, transforming, and delivering security and system log data
- Review, optimize, and troubleshoot query logic to ensure accuracy, performance, and reliability of data pipelines
- Implement orchestration workflows using cloud-native data integration tools for ETL/ELT processes
- Define and maintain infrastructure-as-code artifacts for repeatable and automated deployments
- Support deployment pipelines to enable consistent releases across multiple cloud environments
- Monitor pipeline health, investigate failures, and perform data quality analysis
- Contribute to refactoring efforts that reduce platform dependencies and improve scalability
Other
- 3–5 years of experience in data engineering, platform engineering, or DevOps roles
- Strong communication skills and the ability to collaborate across distributed, cross-functional teams
- Bachelor’s degree in Computer Science, Data Science, or a related field
- Active U.S. Government Security Clearance (preferred but not required)