The company is seeking a Data Engineer to maintain and enhance its data warehouse and pipelines, and to contribute to data analysis and reporting initiatives. The goal is to build robust data solutions and create actionable insights through compelling visualizations.
Requirements
- Strong proficiency in Python, SQL, and PySpark.
- Experience with cloud data platforms such as Snowflake, BigQuery, or Databricks (Databricks highly preferred).
- Proven experience with workflow orchestration tools (Airflow preferred).
- Experience with AWS (preferred), Azure, or Google Cloud Platform.
- Proficiency in Power BI (preferred) or Tableau.
- Familiarity with relational database management systems (RDBMS).
- Proficiency with Git for version control and collaboration.
Responsibilities
- Maintain, enhance, and optimize existing data warehouse architecture and ETL pipelines.
- Design and implement scalable ETL/ELT processes ensuring data quality, integrity, and timeliness.
- Monitor and improve pipeline performance, troubleshoot issues, and implement best practices.
- Create and maintain comprehensive documentation for data engineering processes, architecture, and configurations.
- Partner with business teams to gather requirements and translate them into technical solutions.
- Build and maintain Power BI dashboards and reports that drive business decisions.
- Develop new data models and enhance existing ones to support advanced analytics.
Other
- 3+ years of experience in data engineering or closely related roles.
- Fluent English communication skills for effective collaboration with U.S.-based team members.
- Demonstrated experience building and maintaining production data pipelines.