CIBC is building a relationship-oriented bank for the modern world and needs talented professionals to design and develop applications across various technology platforms, solve business problems with technical solutions, and implement database management solutions.
Requirements
- Proficiency in the data technology stack, including ETL, Azure SQL, and REST APIs.
- Expertise in designing and deploying data applications on cloud platforms such as Azure or AWS.
- Hands-on experience performance-tuning and optimizing code running in Databricks, Talend, or similar ETL tools.
- Proficiency in Python and PySpark.
- Solid understanding of SQL, T-SQL, and/or PL/SQL.
- Hands-on experience designing and delivering solutions on the Azure Data Analytics platform (Cortana Intelligence Platform), including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB, and Azure Stream Analytics.
- Significant automation experience.
Responsibilities
- Responsible for the detailed technical design and development of applications using various technology platforms.
- Performs tasks of technical depth and breadth, utilizing a solid understanding of business dynamics to conduct impact analysis and provide feedback on problems with recommended solutions.
- Determines methods and approaches to projects, transforming business requirements specifications into programming instructions, designing, coding and testing programs.
- Plays a key role in the development and implementation of database management solutions, supporting the company’s backup plans.
- Configures and develops custom ETL solutions to ingest data into Azure SQL Data Warehouse, codes data quality and transformation logic for data movement within the data warehouse, and develops code to publish data from the warehouse to data marts for consumption by applications or BI tools.
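As a loose illustration of the kind of data-quality and transformation logic this role codes before data is loaded to the warehouse, here is a minimal Python sketch. The field names and rules (`customer_id`, `amount`, `currency`) are hypothetical, not CIBC's actual pipeline:

```python
def clean_records(rows):
    """Apply simple data-quality rules before loading rows to the warehouse.

    Illustrative rules only:
      - reject rows missing a customer_id
      - coerce amount to float; reject if it is not numeric
      - normalize currency codes to upper case
    Returns (clean_rows, rejected_rows) so rejects can be audited.
    """
    clean, rejected = [], []
    for row in rows:
        if not row.get("customer_id"):
            rejected.append(row)
            continue
        try:
            amount = float(row["amount"])
        except (KeyError, TypeError, ValueError):
            rejected.append(row)
            continue
        clean.append({
            "customer_id": row["customer_id"],
            "amount": amount,
            "currency": str(row.get("currency", "")).upper(),
        })
    return clean, rejected
```

In a production pipeline the same rules would typically be expressed as PySpark transformations, with rejected rows routed to a quarantine table rather than a Python list.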
- Designs and develops SQL Server data objects including tables, schemas, views, functions and stored procedures.
- Designs and implements data ingestion pipelines from multiple sources using Apache Spark on Azure Databricks; develops scalable, reusable frameworks for ingesting data sets; integrates the end-to-end data pipeline to take data from source systems to target data repositories while maintaining data quality and consistency at all times; and works with event-based/streaming technologies to ingest and process data.
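To make the source-to-target pipeline pattern in the last bullet concrete, a minimal Python sketch follows. It uses an in-memory queue and list as stand-ins for the real stream source and data repository; the batching, validation, and dead-letter behaviour are illustrative assumptions, not a description of CIBC's systems:

```python
import json
from queue import Queue, Empty

def run_pipeline(source: Queue, sink: list, batch_size: int = 100):
    """Drain events from `source`, parse and validate them, and load `sink`.

    An in-memory stand-in for the source -> transform -> target pattern;
    in practice the source would be a streaming service and the sink a
    warehouse or lake table written via Spark.
    """
    batch = []
    while True:
        try:
            raw = source.get_nowait()
        except Empty:
            break
        try:
            event = json.loads(raw)
        except json.JSONDecodeError:
            continue  # drop malformed events; real pipelines dead-letter them
        if "id" in event:  # minimal consistency check before loading
            batch.append(event)
        if len(batch) >= batch_size:
            sink.extend(batch)  # write a full batch to the target
            batch = []
    sink.extend(batch)  # flush the final partial batch
```

Batching writes, validating before load, and isolating bad records are the reusable parts of this pattern regardless of which streaming technology sits underneath.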
Other
- You’ll have the flexibility to manage your work activities within a hybrid work arrangement where you’ll spend 2 days per week on-site at our Chicago office, while other days may be remote.
- You need to be legally eligible to work at the location(s) specified above and, where applicable, must have a valid work or study permit.
- We may ask you to complete an attribute-based assessment and other skills tests (such as simulation, coding, MS Office).
- You're driven by collective success.
- You put our clients first.