Steampunk is looking for a Senior Data Engineer to develop enterprise-grade data platforms, services, and pipelines for their clients.
Requirements
- Python
- Big data tools: Hadoop, Spark, Kafka, etc.
- Relational SQL and NoSQL databases: Postgres, Cassandra, etc.
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift (or Azure equivalents)
- Data streaming systems: Storm, Spark Streaming, etc.
Responsibilities
- Lead and architect migrations of data environments, ensuring performance and reliability
- Assess and understand existing ETL jobs, workflows, BI tools, and reports
- Address technical inquiries concerning customization, integration, enterprise architecture, and general features and functionality of data products
- Support an Agile software development lifecycle
- Contribute to the growth of our Data Exploitation Practice
- Craft database and data warehouse solutions in the cloud (preferably AWS; alternatively Azure or GCP)
Other
- Ability to hold a position of public trust with the US government
- Master's degree in a related program and 7 years of relevant experience; OR Bachelor's degree in a related program and 10 years of relevant experience; OR no degree and 16 years of relevant experience
- At least one professional certification relevant to the technical services provided
- Experience working in an Agile environment
- Excellent communication and customer service skills