Capgemini is looking to hire a Data Engineer with expertise in dbt, Snowflake, and PL/SQL to design, develop, and maintain robust data transformation pipelines that support business intelligence, analytics, and data science initiatives for its clients.
Requirements
- Strong hands-on experience with dbt (modular SQL development, testing, documentation); a model sketch follows this list.
- Proficiency in Snowflake (data warehousing, performance tuning, security).
- Advanced knowledge of PL/SQL and experience with stored procedures, functions, and packages (see the procedure sketch after this list).
- Solid understanding of data modeling concepts (star/snowflake schemas, normalization).
- Experience with version control systems (e.g., Git) and CI/CD practices.
- Familiarity with orchestration tools (e.g., Airflow, dbt Cloud, Prefect) is a plus.
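
To give a concrete sense of the modular dbt work involved, here is a minimal sketch of a staging model. The `raw.orders` source and all column names are hypothetical placeholders, not from an actual client project:

```sql
-- models/staging/stg_orders.sql
-- Hypothetical staging model: source and column names are illustrative.
with source as (

    select * from {{ source('raw', 'orders') }}

),

renamed as (

    select
        order_id,
        customer_id,
        order_date,
        cast(order_total as numeric(18, 2)) as order_total
    from source

)

select * from renamed
```

Downstream marts would then `ref()` this model, keeping each transformation small, testable, and documented.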
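On the PL/SQL side, a typical task is an incremental upsert. A minimal sketch using `merge`, where the `sales_staging` and `sales_summary` tables are hypothetical:

```sql
-- Hypothetical upsert procedure: table and column names are illustrative.
create or replace procedure upsert_daily_sales (
    p_load_date in date
) as
begin
    merge into sales_summary tgt
    using (
        select product_id, sum(amount) as total_amount
        from   sales_staging
        where  sale_date = p_load_date
        group  by product_id
    ) src
    on (tgt.product_id = src.product_id and tgt.sale_date = p_load_date)
    when matched then
        update set tgt.total_amount = src.total_amount
    when not matched then
        insert (product_id, sale_date, total_amount)
        values (src.product_id, p_load_date, src.total_amount);

    commit;
end upsert_daily_sales;
/
```

In practice, such procedures would be grouped into packages alongside related functions.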
Responsibilities
- Design and implement scalable data models and transformation pipelines using dbt on Snowflake.
- Write efficient and maintainable PL/SQL code for complex data processing and transformation tasks.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and deliver high-quality solutions.
- Optimize Snowflake performance through query tuning, clustering, and resource management (illustrated in the sketch after this list).
- Ensure data quality, integrity, and governance through testing, documentation, and monitoring (see the test sketch after this list).
- Participate in code reviews, architecture discussions, and continuous improvement initiatives.
- Maintain and enhance CI/CD pipelines for dbt projects.
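
As a rough illustration of the Snowflake tuning work, assuming a hypothetical `fact_orders` table and `transform_wh` warehouse:

```sql
-- Hypothetical table and warehouse names; columns are illustrative.
-- Define a clustering key so large scans prune micro-partitions effectively.
alter table fact_orders cluster by (order_date, region);

-- Inspect how well-clustered the table is on those columns.
select system$clustering_information('fact_orders', '(order_date, region)');

-- Right-size compute and cap runaway queries.
alter warehouse transform_wh set
    warehouse_size = 'MEDIUM'
    statement_timeout_in_seconds = 3600;
```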
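Data-quality checks in dbt can be expressed as singular tests: plain SQL queries that fail when they return rows. A minimal sketch, reusing the hypothetical `stg_orders` model from the earlier example:

```sql
-- tests/assert_order_totals_non_negative.sql
-- Hypothetical singular test: dbt marks it failed if any rows come back.
select
    order_id,
    order_total
from {{ ref('stg_orders') }}
where order_total < 0
```

Tests like this run in CI alongside dbt's built-in schema tests (`unique`, `not_null`, etc.), catching regressions before deployment.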
Other
- 3+ years of experience in data engineering or a related field.
- Day One Onsite - hybrid work location.
- Capgemini is an Equal Opportunity Employer encouraging diversity in the workplace.