The company is seeking a Lead Data Engineer to design, build, and optimize data engineering solutions on AWS and Snowflake, delivering scalable, secure, high-performance data platforms with a focus on claims and loss data in the insurance industry.
Requirements
- Strong expertise in Big Data concepts and cloud-based implementations
- Hands-on experience with SQL, Python, and PySpark
- Solid understanding of data ingestion and processing frameworks
- Proven experience with Snowflake, AWS Glue, EMR, S3, Aurora, and RDS
- Ability to code, debug, performance-tune, and deploy to production
- Experience working in Agile/Scrum environments
- Experience with DevOps tools (Jenkins, Git, CI/CD pipelines)
Responsibilities
- Lead the design, development, and implementation of end-to-end data solutions using AWS and Snowflake
- Develop and maintain ETL pipelines, ensuring data quality, integrity, and security
- Optimize data storage and retrieval processes to support warehousing and advanced analytics
- Provide technical leadership and mentorship to junior engineers
- Partner with business stakeholders to deliver actionable, data-driven insights
- Ensure adherence to industry standards and best practices in data engineering
- Apply insurance domain knowledge (claims and loss) to enhance solution effectiveness
Other
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field
- Excellent communication skills, with the ability to explain technical concepts to non-technical stakeholders
- Industry experience in insurance, ideally with claims and loss processes
- Strong attention to detail and ability to thrive in fast-paced environments
- 10+ years of experience in data engineering and solution delivery