Nike Inc. is seeking a data engineer to help process and store large data volumes by designing and implementing features in collaboration with product owners, data analysts, and business partners.
Requirements
- Programming languages such as Python, Java, and Scala
- Big data frameworks such as Hadoop, Hive, Spark, and Databricks
- ETL tools such as Informatica and PL/SQL
- Scripting languages such as Unix shell and PowerShell
- Databases such as Oracle, MySQL, SQL Server, Teradata, and Snowflake
- Cloud technologies such as AWS, Azure, EC2, S3, Azure Blob Storage, API Gateway, Aurora, RDS, ElastiCache, and Spark Streaming
- Analytics tools such as Tableau and Azure Analysis Services
Responsibilities
- Design and implement features in collaboration with product owners, data analysts, and business partners using an Agile/Scrum methodology
- Contribute to the overall architecture, frameworks, and patterns for processing and storing large data volumes
- Design and implement distributed data processing pipelines using tools and languages prevalent in the Hadoop or Cloud ecosystems
- Build utilities, user-defined functions, and frameworks to better enable data flow patterns
- Build job orchestration and scheduling workflows using Airflow (see the sketch after this list)
- Research, evaluate, and adopt new technologies, tools, and frameworks for high-volume data processing
- Define and apply appropriate data acquisition and consumption strategies for given technical scenarios
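As a rough illustration of the Airflow orchestration work referenced above, a minimal DAG might chain extract, transform, and load steps as shown below. The DAG name, schedule, and shell commands are hypothetical placeholders invented for this sketch, not values taken from this posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Minimal illustrative DAG: the pipeline name, schedule, and commands
# below are hypothetical placeholders, not Nike-specific values.
with DAG(
    dag_id="daily_sales_ingest",      # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ schedule keyword
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_raw_files",
        bash_command="echo 'pull raw files from object storage'",
    )
    transform = BashOperator(
        task_id="run_spark_transform",
        bash_command="echo 'spark-submit transform_job.py'",
    )
    load = BashOperator(
        task_id="load_to_warehouse",
        bash_command="echo 'copy curated data into the warehouse'",
    )

    # Linear dependency chain: extract, then transform, then load
    extract >> transform >> load
```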
Other
- Must have a Master’s Degree in Computer Science, Engineering, Computer Information Systems, Electronics and Communications, or Technology
- 2 years of experience in the job offered or in a data engineering-related occupation
- Telecommuting is available from anywhere in the U.S., except from AK, AL, AR, DE, HI, IA, ID, IN, KS, KY, LA, MT, ND, NE, NH, NM, NV, OH, OK, RI, SD, VT, WV, and WY
- Ability to work across teams to resolve operational and performance issues
- Ability to participate in integration testing efforts