Ibotta is seeking to solve real-world business challenges and advance ongoing business initiatives through data engineering.
Requirements
- Scala
- Java
- Python
- Hadoop/Spark
- AWS
- Data lake technologies and tools
Responsibilities
- Build, monitor, and maintain data ETL pipelines
- Use Scala, Java, or Python with Hadoop/Spark to collect and manage data at scale
- Work with upstream Data Producers to automate and standardize ingestion of event data into our data lake
- Contribute to data tier standards and create new bronze or silver data sets
- Learn and leverage various AWS and data lake technologies and tools to enhance and maintain our overall infrastructure
- Work with internal stakeholders to identify pain points and new opportunities for automation
- Evangelize Data Engineering and its supporting capabilities to the Platform and Analytics teams
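
The bronze and silver tiers mentioned above follow the common "medallion" data-lake pattern: bronze holds raw events as ingested from upstream producers, and silver holds cleaned, standardized, deduplicated records. A minimal sketch of that promotion step, using plain Python in place of Spark (the field names and dedup key here are hypothetical, not Ibotta's actual schema):

```python
import json

# Bronze tier: raw event records as ingested from an upstream producer.
# Duplicates and inconsistent formatting are typical at this stage.
bronze_events = [
    '{"event_id": "e1", "user": "ALICE", "amount": "12.50"}',
    '{"event_id": "e2", "user": "bob", "amount": "3.00"}',
    '{"event_id": "e1", "user": "ALICE", "amount": "12.50"}',  # duplicate
]

def promote_to_silver(raw_lines):
    """Parse, standardize, and deduplicate bronze records into a silver set."""
    seen = set()
    silver = []
    for line in raw_lines:
        record = json.loads(line)
        if record["event_id"] in seen:  # drop duplicate events by key
            continue
        seen.add(record["event_id"])
        silver.append({
            "event_id": record["event_id"],
            "user": record["user"].lower(),     # normalize casing
            "amount": float(record["amount"]),  # enforce numeric type
        })
    return silver

silver = promote_to_silver(bronze_events)
print(silver)
```

In a production pipeline the same parse/standardize/deduplicate logic would run as a Spark job over files in the lake rather than an in-memory list, but the tier boundary is the same: bronze stays raw and append-only, silver is the first queryable, quality-checked layer.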
Other
- Juniors working towards a bachelor’s degree with a focus in Computer Science, Engineering, Data Analytics, or a related field
- Proven ability to think creatively and implement ideas from start to finish
- Good written and verbal communication skills
- Hunger to learn and collaborate with your teammates
- A strong work ethic