Steampunk is looking for a Data Engineer to develop enterprise-grade data platforms, services, and pipelines for their clients. The role focuses on migrating data environments with performance and reliability, and on addressing technical inquiries related to data products.
Requirements
- 2-4 years of direct Data Engineering experience with tools such as:
  - Big data tools: Hadoop, Spark, Kafka, etc.
  - Relational SQL and NoSQL databases, including Postgres and Cassandra
  - Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  - AWS cloud services: EC2, EMR, RDS, Redshift (or Azure equivalents)
  - Data streaming systems: Storm, Spark Streaming, etc.
  - Object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
Responsibilities
- Lead and architect migration of data environments with performance and reliability.
- Assess and understand existing ETL jobs, workflows, BI tools, and reports
- Address technical inquiries concerning customization, integration, enterprise architecture, and general features and functionality of data products
- Craft database / data warehouse solutions in the cloud (preferably AWS; alternatively Azure or GCP)
- Apply key must-have skills: Python and AWS
- Support an Agile software development lifecycle
- You will contribute to the growth of our Data Exploitation Practice!
Other
- Ability to obtain a U.S. government Security Clearance
- Master's Degree and 3 years of relevant experience; OR Bachelor's Degree and 5 years of relevant experience; OR No degree and 9 years of relevant experience
- Possess and maintain at least one professional certification relevant to the technical service provided and to the product being deployed and/or maintained.
- 2-4 years of industry experience coding commercial software, plus a passion for solving complex problems.
- Must be on camera during interviews and assessments.