The company is looking to solve big data problems using Hadoop, Spark, and related technologies, specifically in the banking domain.
Requirements
- Big Data: Hadoop and its ecosystem, Spark/PySpark
- Scala, Python
- Oracle PL/SQL, CI/CD, Data Lake, Informatica, Teradata
- SQL
- Unix shell scripting
Responsibilities
- Strong experience in Big Data technologies: Hadoop, Spark, and Python
- Good experience in end-to-end implementation of data warehouses, data lakes, data marts, and CI/CD pipelines
- Strong knowledge and hands-on experience in SQL, Unix shell scripting
- Good understanding of data integration, data quality, and data architecture
- Experience in relational modeling, dimensional modeling, and modeling of unstructured data
- Good understanding of Agile software development frameworks
- Experience in Banking domain
Other
- Bachelor's degree in Computer Science
- Strong communication and analytical skills
- Ability to work in diverse, multi-stakeholder teams comprising business and technology groups
- Experience and desire to work in a global delivery environment