Ibotta is seeking a Data Engineering Intern to contribute to ongoing business initiatives by building, monitoring, and maintaining ETL pipelines and enhancing data infrastructure.
Responsibilities
- Build, monitor, and maintain ETL pipelines
- Use Scala, Java, or Python with Hadoop/Spark to collect and manage data at scale
- Work with upstream Data Producers to automate and standardize ingestion of event data into our data lake
- Contribute to data tier standards and create new bronze or silver datasets (a minimal sketch of this kind of work follows this list)
- Learn and leverage AWS and data lake technologies to enhance and maintain our overall infrastructure
- Work with internal stakeholders to identify pain points and new opportunities for automation
- Evangelize Data Engineering and its supporting capabilities to the Platform and Analytics teams
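
For a concrete sense of this kind of work, here is a minimal PySpark sketch of promoting raw bronze event data to a cleaned silver dataset. The S3 paths, the schema (event_id, user_id, event_ts), and the cleanup rules are hypothetical illustrations, not Ibotta's actual pipeline.

    # Hypothetical bronze -> silver promotion of event data in PySpark.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("bronze-to-silver-events").getOrCreate()

    # Bronze tier: raw event data as ingested from upstream producers (illustrative path).
    bronze = spark.read.json("s3://example-data-lake/bronze/events/")

    # Silver tier: deduplicated, typed, and lightly cleaned records.
    silver = (
        bronze
        .dropDuplicates(["event_id"])                        # drop replayed events
        .withColumn("event_ts", F.to_timestamp("event_ts"))  # enforce a timestamp type
        .filter(F.col("user_id").isNotNull())                # drop records missing a key field
    )

    # Partition by event date so downstream consumers can prune efficiently.
    (
        silver
        .withColumn("event_date", F.to_date("event_ts"))
        .write.mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-data-lake/silver/events/")
    )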
Other
- Juniors working toward a bachelor’s degree with a focus on Computer Science, Engineering, Data Analytics, or a related field
- Proven ability to think creatively and implement ideas from start to finish
- Good written and verbal communication skills
- Hunger to learn and collaborate with your teammates
- Possess a strong work ethic
- Analytical and problem-solving abilities
- Self-directed and self-motivated
- This will be a full-time, 12-week internship during the summer of 2026.
- This is a hybrid position located in Denver, Colorado, and requires 3 days in-office per week.
- Candidates must live in the United States.
- Compensation for this role is $33.46 per hour worked.
- Applicants must be currently authorized to work in the United States on a full-time basis.