The partner company is looking for a Data Engineer to build, optimize, and maintain data pipelines, cloud-based solutions, and analytics platforms. The role involves delivering scalable, high-quality data solutions, ensuring data quality and security, and supporting data migration and modernization initiatives.
Requirements
- Proficiency with programming languages (Java, Python, SQL, PySpark).
- Experience in data pipeline and ETL development.
- Familiarity with database technologies (Oracle, MySQL, PostgreSQL, SQL Server, MongoDB).
- Experience in data modeling, stored procedures, and API development (RESTful, SOAP).
- Demonstrated experience with cloud platforms: AWS (S3, Lambda, ECS, SQS) and Azure (Cosmos DB, Functions).
- Experience with containerization and orchestration (Docker, Kubernetes).
- Knowledge of CI/CD and build tools (Jenkins, Maven) and version control platforms (Git, Bitbucket, GitHub).
Responsibilities
- Design, develop, and maintain robust data pipelines and ETL processes for ingesting, transforming, and loading data from diverse sources.
- Build and optimize data architectures and models to support analytics, reporting, and operational needs.
- Implement and manage CI/CD pipelines for data engineering workflows using tools such as Jenkins, Maven, and Git.
- Develop and deploy cloud-based solutions leveraging AWS (S3, Lambda, ECS, SQS), Azure (Cosmos DB, Functions), and containerization (Docker, Kubernetes).
- Collaborate with DBAs and application developers to design and integrate data models, stored procedures, and APIs.
- Ensure data quality, integrity, and security through rigorous validation, monitoring, and logging.
- Support data migration, integration, and modernization initiatives, including legacy system upgrades and cloud adoption.
Other
- US Citizenship is required.
- Minimum of five (5) years of experience in data engineering, software development, or related roles.
- Strong analytical, troubleshooting, and communication skills.