The partner company is looking for a Data Engineer to design and maintain robust data pipelines and integrations that power analytics, reporting, and business intelligence, enabling data-driven decision-making.
Requirements
- 3+ years of Python development experience.
- 5+ years of SQL experience with schema design and dimensional modeling.
- 5+ years of experience with AWS cloud services such as S3, Lambda, RDS, Step Functions, and Redshift.
- 5+ years of experience building data pipelines using PySpark and AWS Glue (AWS certification preferred).
- Experience with Agile software development practices.
- Strong problem-solving, troubleshooting, and analytical skills.
- Preferred: experience with MSSQL, SSIS, SQL Studio, data lakes, data mesh, streaming platforms, and cloud development (AWS or GCP).
Responsibilities
- Design, develop, and maintain scalable data pipelines to support growing data volume and complexity.
- Build and maintain API integrations for internal and external data sources.
- Collaborate with analytics and business teams to improve data models feeding business intelligence tools.
- Implement monitoring systems and processes to ensure data accuracy, integrity, and availability.
- Conduct root cause analysis on data and processes to answer business questions and identify opportunities for improvement.
- Partner with engineering and business teams to plan the long-term data platform architecture.
- Support initiatives across machine learning, AI, reporting, and marketing optimization by delivering high-quality, structured data.
Other
- BS or MS degree in Computer Science, Engineering, or a related technical field.
- Strong technical skills, cloud expertise, and collaboration abilities are essential for success.
- Opportunities to work on innovative data solutions in a collaborative, fast-paced environment.
- Professional development and learning opportunities.