Engineers in this role accomplish business objectives by monitoring system functions across all points of system processing, identifying processing problems, and assisting in their resolution.
Requirements
- 5+ years of experience with Python/PySpark
- 5+ years optimizing Python/PySpark jobs in a Hadoop ecosystem (see the sketch after this list)
- 5+ years working with large data sets and pipelines using Hadoop-ecosystem tools and libraries such as Spark, HDFS, YARN, Hive, and Oozie
- 5+ years designing and developing cloud applications on AWS, OCI, or similar
- 5+ years with distributed/cluster computing concepts
- 5+ years with relational databases such as MS SQL Server
- 3+ years with NoSQL databases; HBase preferred
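
As a rough illustration of the optimization work these requirements describe, here is a minimal PySpark sketch that reads a Hive table, uses a broadcast join to avoid a full shuffle, and writes partitioned Parquet to HDFS. All table, column, and path names are hypothetical; this is a sketch of the kind of work involved, not a Cotiviti implementation.

```python
# Minimal PySpark sketch: read Hive tables, broadcast-join a small
# dimension table against a large fact table, and write partitioned
# Parquet to HDFS. All names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("claims-etl")            # hypothetical job name
    .enableHiveSupport()              # enables reading Hive-managed tables
    .getOrCreate()
)

claims = spark.table("warehouse.claims")        # large fact table (hypothetical)
providers = spark.table("warehouse.providers")  # small dimension table (hypothetical)

# Broadcasting the small dimension table sidesteps a shuffle join,
# a common optimization when joining against a large fact table.
enriched = claims.join(F.broadcast(providers), on="provider_id", how="left")

(
    enriched
    .repartition("service_year")      # align shuffle partitions with output layout
    .write.mode("overwrite")
    .partitionBy("service_year")
    .parquet("hdfs:///warehouse/enriched_claims")  # hypothetical HDFS path
)
```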
Responsibilities
- Optimize Python/PySpark jobs in a Hadoop ecosystem
- Work with large data sets and pipelines using Hadoop-ecosystem tools and libraries such as Spark, HDFS, YARN, Hive, and Oozie
- Design and develop cloud applications on AWS, OCI, or similar
- Develop high-quality software modules for the Cotiviti, Inc. product suite
- Conduct unit and integration testing (a minimal testing sketch follows this list)
- Analyze and resolve software-related issues originating from internal or external customers
- Analyze requirements and specifications and create detailed designs for implementation
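
To illustrate the unit-testing responsibility above, the sketch below tests a small PySpark transformation with pytest against a local Spark session. The transformation, its column names, and the cost threshold are all hypothetical, chosen only to show the pattern.

```python
# Minimal pytest sketch for unit-testing a PySpark transformation.
# The function, columns, and threshold are hypothetical.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def flag_high_cost(df, threshold=10_000):
    """Hypothetical transformation: flag claims above a cost threshold."""
    return df.withColumn("high_cost", F.col("amount") > threshold)


@pytest.fixture(scope="session")
def spark():
    # local[2] keeps the test runnable without a cluster
    return SparkSession.builder.master("local[2]").appName("tests").getOrCreate()


def test_flag_high_cost(spark):
    df = spark.createDataFrame(
        [("c1", 15_000.0), ("c2", 500.0)], ["claim_id", "amount"]
    )
    result = {r["claim_id"]: r["high_cost"] for r in flag_high_cost(df).collect()}
    assert result == {"c1": True, "c2": False}
```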
Other
- Ability to communicate clearly with key stakeholders
- Critical-thinking skills
- Healthcare experience
- Troubleshoot and resolve issues independently, with minimal or no guidance
- Collaborate closely with offshore development teams to translate business requirements into technical terms and ensure software construction adheres to Cotiviti coding best practices