The company seeks to support data-driven decision-making, particularly within its insurance-focused business operations, by developing and optimizing data pipelines and ensuring the accuracy and integrity of data flows.
Requirements
- Strong proficiency in PySpark, Python, and SQL
- Experience in data modeling, ETL/ELT pipeline development, and automation
- Hands-on experience with Azure Data Factory, Azure Databricks, Azure Data Lake
- Experience with Delta Lake, Delta Live Tables, Auto Loader, and Unity Catalog (see the ingestion sketch after this list)
- 7–12 years of experience in Data Engineering with Databricks and cloud platforms
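To make the expected stack concrete, below is a minimal sketch of the kind of pipeline this role would own: Auto Loader incrementally ingesting raw files into a Delta Lake table registered in Unity Catalog. It assumes a Databricks runtime; the paths, catalog, and table names (for example `insurance_catalog.bronze.claims`) are illustrative, not taken from the posting.

```python
# Minimal Auto Loader -> Delta Lake ingestion sketch.
# Paths and table names are hypothetical; assumes Databricks with Unity Catalog.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

raw_stream = (
    spark.readStream.format("cloudFiles")                             # Databricks Auto Loader
    .option("cloudFiles.format", "json")                              # incoming file format
    .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/claims") # schema tracking location
    .load("/mnt/lake/raw/claims/")                                    # landing zone in the data lake
)

(
    raw_stream.writeStream
    .option("checkpointLocation", "/mnt/lake/_checkpoints/claims")    # exactly-once progress tracking
    .trigger(availableNow=True)                                       # process new files, then stop
    .toTable("insurance_catalog.bronze.claims")                       # Delta table in Unity Catalog
)
```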
Responsibilities
- Collaborate with data analysts, the reporting team, and business advisors to gather requirements and define data models
- Develop and maintain scalable and efficient data pipelines
- Implement robust data quality checks and validate large datasets (see the validation sketch after this list)
- Monitor data jobs and troubleshoot issues
- Review and audit data processes for compliance
- Work within Agile methodologies, including Scrum ceremonies and sprint planning
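The data validation responsibility typically looks like the sketch below: a few PySpark integrity checks run against a curated table before downstream jobs consume it. The table, column names, and rules (for example `policy_id`, `claim_amount`) are hypothetical assumptions used only for illustration.

```python
# Illustrative data-quality checks for a large claims dataset.
# Column names, table name, and rules are assumptions, not from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
claims = spark.table("insurance_catalog.silver.claims")

total = claims.count()
null_policy_ids = claims.filter(F.col("policy_id").isNull()).count()
duplicate_claims = total - claims.dropDuplicates(["claim_id"]).count()
negative_amounts = claims.filter(F.col("claim_amount") < 0).count()

# Fail the pipeline run if any basic integrity rule is violated.
assert null_policy_ids == 0, f"{null_policy_ids} rows missing policy_id"
assert duplicate_claims == 0, f"{duplicate_claims} duplicate claim_id values"
assert negative_amounts == 0, f"{negative_amounts} rows with negative claim_amount"
```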
Other
- Bachelor’s degree in Computer Science, IT, or a related field
- Strong analytical and communication skills
- Preferred: Knowledge of insurance industry data requirements
- This is a remote position