At Steampunk, our goal is to build and execute data strategies for our clients: coordinating data collection and generation, aligning the organization and its data assets in support of the mission, and ultimately realizing mission goals as effectively as possible.
Requirements
- Python
- Big data tools: Hadoop, Spark, Kafka, etc.
- Relational SQL and NoSQL databases: Postgres, Cassandra, etc.
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift (or Azure equivalents)
- Data streaming systems: Storm, Spark Streaming, etc.
Responsibilities
- Lead and architect the migration of data environments with a focus on performance and reliability
- Assess and understand existing ETL jobs, workflows, BI tools, and reports
- Address technical inquiries concerning customization, integration, enterprise architecture, and the general features and functionality of data products
- Support an Agile software development lifecycle
- Contribute to the growth of our Data Exploitation Practice
- Design and develop solutions to high-impact, complex data problems
- Work with the best data practitioners around
Other
- Ability to hold a position of public trust with the US government
- 5-7 years of industry experience coding commercial software and a passion for solving complex problems
- Experience working in an Agile environment
- Excellent communication and customer service skills
- Passion for data and problem solving