Transforming financial services worldwide by providing innovative, flexible, and scalable banking solutions
Requirements
- Proven experience in data engineering or a related field
- Proficiency in the Python programming language
- Hands-on experience applying data lake or data warehousing concepts
- Experience working within cloud environments such as AWS, GCP, or Azure
- Strong knowledge of data modeling and building ETL/ELT pipelines
- Experience with Delta Lake or Apache Iceberg (preferred)
- Experience with Java programming (preferred)
- Background in operating large-scale data lakes (preferred)
- Experience with Apache Spark or similar big data frameworks (preferred)
Responsibilities
- Design and develop data processing solutions capable of handling large volumes of structured and unstructured data
- Develop, optimize, and maintain data models within our data lakehouse environment
- Implement and improve infrastructure as code for data platform components to enable automation and scalability
- Monitor the data platform and ensure its operational stability through proactive troubleshooting and performance tuning
- Collaborate with product managers and engineering teams to understand data requirements and deliver effective solutions
- Assist in integrating the data platform with other internal systems to ensure seamless data flow and accessibility
Other
- 30 days of working abroad per year to promote global collaboration
- Four-week paid sabbatical after five years of service to support personal growth
- Hybrid or remote working arrangements based on location
- Equal employment opportunity for all applicants, regardless of race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, disability, or any other protected status