Torch is seeking a Data Engineer to design, build, and maintain the data pipelines and systems that enable efficient processing and analysis of critical operational data, ensuring data integrity, scalability, and accessibility across the organization.
Requirements
- Experience with programming languages commonly used in data engineering, such as Python, Java, or Scala.
- Strong understanding of relational (SQL) and non-relational (NoSQL) databases.
- Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud, and related data tools.
- Proven ability to design and implement scalable ETL frameworks and data integration platforms.
- Familiarity with data pipeline and workflow orchestration tools such as Apache Airflow.
- Strong problem-solving skills and a detail-oriented mindset to ensure data accuracy and reliability.
Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes to support operational workflows.
- Collaborate with cross-functional teams to collect, clean, and structure raw data from various sources, ensuring data accuracy and consistency.
- Optimize data systems and architecture for performance and scalability to handle large datasets efficiently.
- Monitor and troubleshoot data processing issues to ensure uninterrupted operational workflows.
- Implement and maintain security best practices to ensure data privacy and compliance with company and regulatory requirements.
- Document systems, processes, and data flow to ensure knowledge sharing and alignment across teams.
- Stay updated on industry best practices, emerging tools, and technologies to continuously drive improvements in data engineering processes.
Other
- Proficiency in English with excellent communication skills to collaborate effectively with team members.
- Prior experience working in operations/processing environments is a plus.
- Bachelor’s degree in Computer Science, Data Science, or a related field, or equivalent practical experience.