The company is seeking a Data Engineer to design, build, and maintain scalable data infrastructure that keeps data clean, reliable, and accessible across the organization, enabling data-driven decision-making. The role covers secure data flow between systems for analytics, maintenance of on-premises databases, and support for key infrastructure functions.
Requirements
Expectations scale with the tier of the role (see Other below):
- Basic SQL and Python skills (a minimal sketch follows this list)
- Familiarity with relational databases, common data formats (CSV, JSON), and query languages (SQL, DAX)
- Proficiency in SQL, Python, DAX, and ETL tools (e.g., Airflow, dbt)
- Experience with cloud platforms (AWS, Azure, GCP) and with Microsoft Fabric
- Expertise in data warehousing (e.g., Snowflake, Redshift)
- Strong understanding of data architecture and performance tuning
- Strong programming skills (Python, Java, or Scala), plus DAX for analytics work
Responsibilities
- Assist in building and maintaining basic ETL pipelines (see the sketch after this list)
- Perform routine data cleaning and validation
- Support data ingestion from internal and external sources
- Design and optimize ETL/ELT workflows from APIs, flat files, and other sources
- Develop and maintain data models and schemas
- Monitor data pipeline performance and troubleshoot issues
- Architect scalable data solutions across cloud and on-prem environments
Other
- This role can be scaled from entry-level (Tier 1) to strategic leadership (Lead), depending on experience and organizational needs.
- Document processes and escalate complex issues
- Collaborate with analysts and developers to meet data needs
- Mentor junior engineers and contribute to technical strategy
- Other duties as assigned