The company is looking for an engineer to design, maintain, and optimize scalable data solutions, with a particular focus on ETL processes, big data workloads, and Python/Apache Spark, and with deep expertise in Snowflake.
Requirements
- Demonstrable experience with big data workloads.
- Hands-on expertise with Python and Apache Spark (or equivalent).
- Strong proficiency in SQL, data modeling, and warehouse optimization.
- In-depth experience with Snowflake development and administration.
- Familiarity with data security principles and compliance best practices.
- Ability to design and optimize large-scale data environments.
- Snowflake certification (SnowPro Core or Advanced).
Responsibilities
- Design, build, and optimize ETL pipelines for data ingestion, transformation, and processing (an illustrative sketch follows this list).
- Integrate and process data from multiple sources, including Snowflake, Oracle, and big data platforms.
- Troubleshoot and resolve data-related issues while ensuring consistency, accuracy, and availability.
- Leverage Python and Apache Spark (or equivalent) for data processing at scale.
- Oversee the operation and performance of the Snowflake data warehouse.
- Optimize queries, storage, and compute resources to improve efficiency.
- Design and maintain new tables and data models to meet evolving business needs.
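
To give a concrete flavor of the pipeline work described above, here is a minimal PySpark ETL sketch. It is illustrative only: the source path, target path, application name, and column schema are hypothetical placeholders, not part of the role description.

```python
# Minimal PySpark ETL sketch: ingest raw CSV events, clean and aggregate them,
# and write the result as a partitioned Parquet dataset.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_sketch").getOrCreate()

# Ingest: read raw event data (assumed columns: user_id, event_type, amount, event_ts).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("s3://example-bucket/raw/events/")  # hypothetical source path
)

# Transform: drop malformed rows and aggregate daily totals per user.
daily = (
    raw.dropna(subset=["user_id", "event_ts"])
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(
        F.count("*").alias("event_count"),
        F.sum("amount").alias("total_amount"),
    )
)

# Load: write partitioned Parquet; in a Snowflake-backed pipeline this step would
# typically load into a Snowflake table via the Snowflake Spark connector instead.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_user_totals/"  # hypothetical target path
)
```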
Other
- 3+ years of experience in backend data and Snowflake engineering.
- Strong analytical, problem-solving, and communication skills.
- Ensures Accountability
- Tech Savvy
- Communicates Effectively