Seeking a Software Engineer (Artificial Intelligence / Machine Learning) to develop AI/ML solutions in Python/PySpark within the Hadoop ecosystem and on cloud platforms, performing complex, specialized work.
Requirements
- 9+ years’ experience in Python/PySpark
- Experience within the Hadoop ecosystem (Spark, HDFS, Hive, YARN, Oozie)
- Strong experience in cloud development (AWS/OCI)
- Experience in distributed computing
- Experience with relational & NoSQL databases
- Experience in API development
- Knowledge of CI/CD tools, microservices, Docker/Kubernetes, and Agile practices preferred
Responsibilities
- Develop AI/ML solutions using Python/PySpark within the Hadoop ecosystem (Spark, HDFS, Hive, YARN, Oozie).
- Develop and deploy cloud-based solutions on AWS/OCI.
- Implement distributed computing solutions.
- Work with relational & NoSQL databases.
- Develop APIs.
Other
- These duties are too complex and specialized to be performed with only a bachelor’s degree in a field related to computer science, computer information systems, or information technology.
- Healthcare domain experience is a strong plus.