The company is looking for a Data Engineer to design and develop scalable data pipelines, ingest data from various sources, and implement data storage solutions.
Requirements
- 5+ years of experience in a Data Engineering role.
- 4+ years of experience with object-oriented or functional scripting languages such as Python.
- Experience with big data tools such as Hadoop, Spark, and Kafka.
- Proven experience with both relational SQL databases (such as PostgreSQL) and NoSQL databases (such as Cassandra or MongoDB).
- Experience with Azure or AWS cloud services is a must.
Responsibilities
- Designing and developing scalable, efficient, and reliable data pipelines to extract, transform, and load data from various sources to a target system.
- Responsible for ingesting data from various sources such as databases, files, APIs, or social media platforms using Python libraries like pandas, NumPy, and requests.
- Designing and implementing data storage solutions using relational databases like MySQL or PostgreSQL, NoSQL databases like MongoDB or Cassandra, or big data frameworks like Hadoop or Spark.
- Working closely with data scientists to understand their requirements and implement data pipelines that meet their needs.
- Ensuring the security of the data pipeline by implementing access controls, encryption, and authentication mechanisms.
- Working closely with DevOps teams to ensure smooth deployment of the data pipeline to production environments such as AWS or Azure.
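To illustrate the kind of pipeline work described above, here is a minimal extract-transform-load sketch using pandas and SQLite. All names (`etl`, `records`, the `sales` table) are illustrative only; a real pipeline would pull from the databases, files, or APIs the role mentions.

```python
# Illustrative ETL sketch: extract raw records, transform with pandas,
# load into a SQLite target. Purely a sketch, not a reference implementation.
import sqlite3

import pandas as pd


def etl(records, conn):
    # Extract: in practice this could come from an API, a file, or a database.
    df = pd.DataFrame(records)
    # Transform: drop incomplete rows and normalize the amount column.
    df = df.dropna(subset=["amount"])
    df["amount"] = df["amount"].astype(float)
    # Load: write the cleaned frame to a target table.
    df.to_sql("sales", conn, if_exists="replace", index=False)
    return len(df)


conn = sqlite3.connect(":memory:")
records = [
    {"id": 1, "amount": "10.5"},
    {"id": 2, "amount": None},   # incomplete row, dropped in transform
    {"id": 3, "amount": "7.0"},
]
rows_loaded = etl(records, conn)
```

Here the transform step is deliberately trivial; the same extract/transform/load shape scales to scheduled jobs orchestrated on AWS or Azure.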
Other
- This opportunity is only available to residents of Africa.
- English level B2 or higher.