Steampunk is looking for a Data Engineer to help their clients develop enterprise-grade data platforms, services, and pipelines in Databricks. The goal is to enable clients to become data-driven organizations by leveraging visual analytics platforms to analyze, visualize, and share information.
Requirements
- Key must-have skills: Databricks, SQL, PySpark/Python, AWS
- Big data tools: Databricks, Apache Spark, Delta Lake, etc.
- Relational SQL databases (preferably T-SQL; alternatively pgSQL or MySQL)
- Data pipeline and workflow management tools: Databricks Workflows, Airflow, Step Functions, etc.
- Experience working with database, data warehouse, and data mart solutions in the cloud (preferably AWS; alternatively Azure or GCP)
Responsibilities
- Lead and architect data migrations using Databricks, with a focus on performance, reliability, and scalability
- Assess and understand ETL jobs, workflows, data marts, BI tools, and reports
- Address technical inquiries concerning customization, integration, enterprise architecture, and general features/functionality of data products
- Support an Agile software development lifecycle
- Contribute to the growth of our AI & Data Exploitation Practice
Other
- Ability to hold a position of public trust with the US government
- 2-4 years of industry experience coding commercial software and a passion for solving complex problems
- A seasoned Data Engineer and technologist with excellent communication and customer service skills and a passion for data and problem solving, ready to work with our team and our clients
- Must be on camera during interviews and assessments