SMBC Group is seeking a Data Engineer to design, build, and maintain scalable data pipelines using Azure services to deliver reliable, data-driven solutions.
Requirements
- Azure Data Factory
- Azure Data Lake Storage
- Azure Databricks
- Azure Logic Apps
- Azure Log Analytics
- Azure Event Hubs
- Proficiency in SQL and experience with NoSQL databases.
- Experience with Spark and streaming technologies (Kafka, Event Hubs).
- Programming skills in Python, Scala, or Java for data processing and automation.
- Experience building and maintaining microservices-based data system infrastructure.
Responsibilities
- Design and implement data pipelines for ingestion, transformation, and storage using Azure Data Factory, Databricks, and related tools.
- Develop custom solutions for job alerting and monitoring.
- Implement ETL processes for batch and streaming services using Azure Databricks and Apache Spark to integrate data from multiple sources into data lakes or warehouses.
- Ensure data quality, security, and compliance with organizational and regulatory standards.
- Collaborate with data analysts and business stakeholders to deliver data-driven solutions.
- Monitor and troubleshoot data pipelines for performance and reliability.
- Identify and implement process improvements such as automation and cost optimization.
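By way of illustration, here is a minimal Python sketch of the kind of transformation-with-data-quality-gate step the responsibilities above describe. This is not SMBC's implementation; the record shape and field names ("id", "amount") are hypothetical, and a production pipeline would run logic like this inside Databricks/Spark rather than plain Python.

```python
# Illustrative sketch only: a batch transform step with a simple
# data-quality gate. Field names ("id", "amount") are hypothetical.

def is_valid(record: dict) -> bool:
    """A record passes the quality gate if it has a non-empty id
    and a numeric amount."""
    return bool(record.get("id")) and isinstance(record.get("amount"), (int, float))

def transform(records: list[dict]) -> list[dict]:
    """Keep only valid records and normalize amounts to two decimal places."""
    return [
        {"id": r["id"], "amount": round(float(r["amount"]), 2)}
        for r in records
        if is_valid(r)
    ]

raw = [
    {"id": "a1", "amount": 10.567},
    {"id": None, "amount": 3.0},      # dropped: missing id
    {"id": "a2", "amount": "oops"},   # dropped: non-numeric amount
]
clean = transform(raw)
print(clean)  # [{'id': 'a1', 'amount': 10.57}]
```

The same validate-then-normalize pattern translates directly to a Spark DataFrame filter followed by a column expression when the data volume calls for it.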
Other
- Proven professional experience as a Data Engineer or similar role.
- SMBC’s employees participate in a hybrid workforce model that provides employees with an opportunity to work from home as well as from an SMBC office.
- SMBC requires that employees live within a reasonable commuting distance of their office location.
- Prospective candidates will learn more about their specific hybrid work schedule during their interview process.
- SMBC provides reasonable accommodations during candidacy for applicants with disabilities consistent with applicable federal, state, and local law.