Galaxy is hiring a Data Engineer to work on data pipelining and big data processing frameworks, improving and consolidating the company's data backbone across its diverse business lines, including Trading, Custody, Asset Management, and Mining.
Requirements
- Experience with building data pipelines, APIs, and reporting tools
- Strong knowledge of data structures, their implementations, and optimization techniques
- Experience with backend languages such as Python, Go, or Java
- Knowledge of Kafka, AWS SQS, or MQ
- Knowledge of relational and non-relational databases (Postgres, SQL Server, Databricks)
- Knowledge of Spark
- Knowledge of Vert.x, Spring, or Flask
Responsibilities
- Work across all of Galaxy’s diverse business lines to improve and consolidate the company’s data backbone
- Build high-throughput, low-latency streaming pipelines using message queues
- Handle production support issues and improve CI/CD automation and DevOps practices
- Build deployment processes for applications, including unit testing, integration testing, and deployment to cloud infrastructure (EC2, Kubernetes)
- Work on data pipelining and big data processing frameworks
Other
- 3+ years of professional experience working in software development
- University degree in computer science, physics, engineering, or equivalent programming experience
- Company values: Seek Excellence, Be Selective To Be Effective, Be Highly Aligned, Loosely Coupled, Disagree Transparently, Encourage Independent Decision-Making, Build Dream Teams
- Flexible Time Off (paid)
- Company-paid health and protective benefits for employees, partners, and other dependents