The company is looking to transform legacy systems and develop a consumer-centric, low-latency analytics environment leveraging Big Data technologies.
Requirements
- 4+ years of hands-on experience with Hadoop, Spark (Scala and Python), Ab Initio, Kafka, MapReduce, HDFS, and Hive
- Understanding of CI/CD; fluent with Git and Jenkins
- Experience with real-time data ingestion
- Experience sourcing and processing structured, semi-structured, and unstructured data
- Experience in data cleansing/transformation and performance tuning
- Experience with Storm, Kafka, and Flume is a plus
- Hortonworks, Cloudera, or MapR certification is a plus
Responsibilities
- Implement a big data enterprise data lake, BI and analytics system using Hadoop, Hive, Spark, Ab Initio, Redshift, Kafka, and Oracle
- Work closely with the product owner, scrum master, and architects to communicate technical impacts on the development timeline and associated risks
- Coordinate with data engineers and platform administrators to drive program delivery
- Advocate for technical development, DevOps processes, and application standards across the enterprise data lake and enterprise data warehouse
- Perform other duties and/or special projects as assigned
Other
- Bachelor's degree in Computer Science or Engineering; Master's degree preferred
- Excellent written and oral communication skills; adept at presenting complex topics, influencing stakeholders, and executing with timely, actionable follow-through
- Strong analytical and problem-solving skills with the ability to convert information into practical deliverables
- You must be 18 years or older
- You must have a high school diploma or equivalent
- You must be willing to take a drug test, submit to a background investigation, and provide fingerprints as part of the onboarding process
- You must be able to satisfy the requirements of Section 19 of the Federal Deposit Insurance Act
- Legal authorization to work in the U.S. is required