Design and develop data applications using big data technologies (AWS, Spark) to ingest, process, and analyze large, disparate datasets.
Requirements
- AWS
- Spark
- AWS Glue
- Aurora Postgres
- EKS
- Redshift
- PySpark
Responsibilities
- Work with development teams and other project leaders/stakeholders to provide technical solutions that enable business capabilities.
- Design and develop data applications using big data technologies (AWS, Spark) to ingest, process, and analyze large, disparate datasets.
- Build robust data pipelines in the cloud using AWS Glue, Aurora Postgres, EKS, Redshift, PySpark, Lambda, and Snowflake (see the first sketch after this list).
- Build REST-based data APIs using Python and AWS Lambda (see the second sketch after this list).
- Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using SQL and AWS ‘big data’ technologies.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Implement architectures to handle and organize data at large scale.
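
To make the pipeline responsibility concrete, here is a minimal PySpark sketch of the kind of ingest-process-load job described above. The bucket paths, column names, and app name are hypothetical placeholders, not details from this posting.

```python
# Minimal ingest -> process -> load sketch in PySpark.
# All S3 paths and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest: read raw CSV from a landing bucket (hypothetical path).
raw = spark.read.option("header", True).csv("s3://example-landing/orders/")

# Process: type the columns, drop malformed rows, derive a partition key.
cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "order_ts"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet to a curated bucket for downstream
# consumers (e.g., Glue crawlers or Redshift Spectrum).
(cleaned.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated/orders/"))

spark.stop()
```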
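And a minimal sketch of the REST-based data API duty, written as an AWS Lambda handler and assuming an API Gateway proxy integration. The in-memory lookup table and the `order_id` route parameter are hypothetical stand-ins for a real Aurora Postgres, Redshift, or Snowflake query.

```python
# Sketch of a REST-style Lambda handler behind API Gateway (proxy integration).
import json

# Hypothetical stand-in for a database query, kept in-memory so the
# sketch stays self-contained.
_ORDERS = {"1001": {"order_id": "1001", "amount": 42.50}}

def handler(event, context):
    # API Gateway proxy events carry route parameters under "pathParameters".
    order_id = (event.get("pathParameters") or {}).get("order_id")
    record = _ORDERS.get(order_id)
    if record is None:
        return {"statusCode": 404,
                "body": json.dumps({"error": "order not found"})}
    return {"statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps(record)}
```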
Other
- Hybrid work location
- Minimum 10 years of experience in data engineering.