Enhance BlackRock’s retail sales distribution capabilities and services suite by creating, expanding, and optimizing data and data pipeline architecture
Requirements
- 4+ years of hands-on experience in computer/software engineering, with the majority in big data engineering
- 4+ years of strong Python or Scala programming skills, including hands-on experience creating and supporting UDFs and working with testing frameworks such as pytest (see the sketch after this list)
- 4+ years of experience building and optimizing big data pipelines, architectures, and data sets
- 4+ years of hands-on experience developing on Spark in a production environment
- 4+ years of experience using Hive, YARN, Sqoop, Transact-SQL, NoSQL, and GraphQL
- Strong experience implementing solutions on Snowflake
- Experience with data quality and validation frameworks, especially Great Expectations for automated testing
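A minimal sketch of the kind of Python UDF and pytest coverage referenced above. The column name, normalization rule, and test data are hypothetical and only illustrate the pattern, not any specific BlackRock pipeline.

```python
from typing import Optional

import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType


def normalize_amount(raw: Optional[str]) -> Optional[float]:
    """Parse a currency string such as '$1,234.50' into a float; None if unparseable."""
    if raw is None:
        return None
    try:
        return float(raw.replace("$", "").replace(",", ""))
    except ValueError:
        return None


# Register the plain Python function as a Spark UDF returning a double.
normalize_amount_udf = F.udf(normalize_amount, DoubleType())


@pytest.fixture(scope="module")
def spark():
    # A local single-threaded session is enough for unit tests.
    session = SparkSession.builder.master("local[1]").appName("udf-tests").getOrCreate()
    yield session
    session.stop()


def test_normalize_amount_udf(spark):
    df = spark.createDataFrame([("$1,234.50",), ("not-a-number",), (None,)], ["raw_amount"])
    rows = df.withColumn("amount", normalize_amount_udf("raw_amount")).collect()
    assert rows[0]["amount"] == pytest.approx(1234.50)
    assert rows[1]["amount"] is None
    assert rows[2]["amount"] is None
```

Keeping the UDF body a plain Python function makes it testable with or without a Spark session.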
Responsibilities
- Lead the creation and maintenance of optimized data pipeline architectures for large, complex data sets
- Assemble large, complex data sets that meet business requirements
- Take the lead in identifying, designing, and implementing internal process improvements, and relay them to the relevant technology organization
- Work with stakeholders to resolve data-related technical issues and support their data infrastructure needs
- Automate manual ingest processes and optimize data delivery in line with service level agreements (see the validation sketch after this list)
- Keep data segregated in accordance with relevant data policies
- Collaborate cross-functionally within a complex global team and take ownership of major components of the data platform ecosystem
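A hedged sketch of the kind of automated ingest validation described above, using the legacy Great Expectations PandasDataset interface (ge.from_pandas); newer Great Expectations releases use a context- and suite-based API instead. The file, column names, and checks are hypothetical.

```python
import sys

import pandas as pd
import great_expectations as ge


def validate_ingest(frame: pd.DataFrame) -> bool:
    """Run basic data quality checks before the batch is published downstream."""
    dataset = ge.from_pandas(frame)
    checks = [
        dataset.expect_column_values_to_not_be_null("account_id"),
        dataset.expect_column_values_to_be_unique("trade_id"),
        dataset.expect_column_values_to_be_between("amount", min_value=0),
    ]
    # Each expectation returns a validation result with a boolean success flag.
    return all(check.success for check in checks)


if __name__ == "__main__":
    # Hypothetical staged file; in practice this would come from the pipeline's ingest step.
    batch = pd.read_csv("staged_trades.csv")
    if not validate_ingest(batch):
        sys.exit("Data quality checks failed; halting delivery to downstream consumers.")
```

A gate like this can run as one stage of an automated ingest job so that data failing validation never reaches downstream consumers.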
Other
- 4+ years of overall experience
- Bachelor's degree or higher
- Ability to work in a team and collaborate cross-functionally
- Strong communication and problem-solving skills
- Ability to work in a hybrid work model with at least 4 days in the office per week