Build solutions that combine optimization and analytics to drive business results and digital transformation. Develop digital solutions that help internal and external clients make better decisions with data.
Requirements
- Cloudera Hadoop for data storage; Apache Spark as an analytics engine
- ETL processing with dynamic Apache Airflow pipelines; development in an object-oriented programming language
- database and data warehouse technologies; PyCharm development environment; PySpark; Databricks
- big data systems, including Spark, Hadoop, CDP, and Cloudera; ownership of the full software development cycle, from coding through QA, UAT, and rollout
Responsibilities
- Programmatically author, schedule and monitor data pipelines in Python.
- Ensure that code is secure, stable, and operational before it is released to production.
- Complete the unit testing of components for integration into larger subsystems.
- Implement automated tests and contribute to release and integration planning.
- Resolve high-priority defects and deploy fixes to production systems.
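To illustrate the "programmatically author" and unit-testing responsibilities above, here is a minimal sketch of a pipeline defined in plain Python. It uses only the standard library; the task names and the `run_pipeline` helper are illustrative stand-ins, not part of any specific framework such as Airflow, and the real role would use Airflow operators and PySpark jobs instead.

```python
# Minimal sketch of a programmatically authored ETL pipeline.
# All names here are illustrative, not from any real framework.
from typing import Any, Callable


def extract() -> list[int]:
    # Stand-in for reading from a source system (e.g. HDFS via Spark).
    return [1, 2, 3]


def transform(rows: list[int]) -> list[int]:
    # Stand-in for a PySpark transformation step.
    return [r * 10 for r in rows]


def load(rows: list[int]) -> list[int]:
    # Stand-in for writing to a warehouse table; returns rows for testing.
    return rows


def run_pipeline(steps: list[Callable[..., Any]]) -> Any:
    """Run steps in order, feeding each step's output to the next."""
    result = steps[0]()
    for step in steps[1:]:
        result = step(result)
    return result


result = run_pipeline([extract, transform, load])
```

Each step is a small, independently unit-testable function, which mirrors the expectation that components are tested before integration into larger subsystems.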
Other
- This position requires a Bachelor's degree or foreign equivalent in Electronic Engineering, Software Engineering, or related field of study plus two (2) years of experience in the job offered or as a Software Engineer, Programmer Analyst, Technical Consultant, Consultant, or related occupation.
- This position can be performed remotely.
- We are open to applications from career returners.
- We may request you to complete one or more assessments during the application process.
- We’re committed to disability inclusion; if you need reasonable accommodations or adjustments at any point in our recruitment process, please contact us.