The company is seeking a Data Engineer to design, build, and maintain robust cloud-based data pipelines and architectures using Microsoft Fabric and the Azure data ecosystem. The role delivers reliable, high-quality data for analytics, reporting, and operational use cases.
Requirements
- Hands-on experience with Microsoft Fabric or Azure data platforms.
- Strong SQL and data modeling skills.
- Experience with ETL/ELT pipelines and orchestration (Azure Data Factory, Fabric pipelines, or similar).
- Programming proficiency in Python and/or PySpark (an illustrative sketch follows this list).
- Familiarity with Azure Data Lake Storage, Azure Synapse Analytics, Azure SQL Database, and Azure Key Vault.
- Exposure to NoSQL databases (e.g., MongoDB Atlas) is a plus.
- Familiarity with DevOps practices, CI/CD, or containerization is a plus.
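To give a sense of the day-to-day PySpark work referenced above, here is a minimal ELT sketch: it reads raw files, applies light cleansing, and writes a Delta table. The source path, column names, and table name are assumptions for illustration, not details from the role description.

```python
# Illustrative sketch only: the path, columns, and table name below are assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_elt").getOrCreate()

# Extract: read raw CSV files landed in the data lake (path is hypothetical).
raw = (
    spark.read
    .option("header", True)
    .csv("Files/raw/orders/")
)

# Transform: basic typing, cleansing, and de-duplication before loading.
orders = (
    raw.withColumn("order_date", F.to_date("order_date"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropDuplicates(["order_id"])
)

# Load: write a managed Delta table for downstream reporting.
orders.write.mode("overwrite").format("delta").saveAsTable("silver_orders")
```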
Responsibilities
- Design, develop, and maintain scalable and efficient data pipelines in Microsoft Fabric and Azure.
- Implement ETL/ELT processes to integrate data from diverse sources.
- Leverage Microsoft Fabric and Azure services to build integrated, cloud-native data platforms.
- Develop data models to support reporting, analytics, and machine learning use cases.
- Optimize data lakehouse/warehouse solutions for performance and cost-efficiency.
- Monitor, troubleshoot, and optimize pipelines for reliability, performance, and data quality (a minimal data-quality check is sketched after this list).
- Apply best practices in data governance, security, and compliance.
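As a sketch of the data-quality monitoring mentioned above, the snippet below validates a loaded table and fails the run if too many rows violate simple rules. The table name, rules, and 1% threshold are assumptions chosen for illustration.

```python
# Minimal post-load data-quality check; table name and threshold are assumed.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_quality_check").getOrCreate()

orders = spark.read.table("silver_orders")

# Count rows violating simple completeness and validity rules.
issues = orders.filter(F.col("order_id").isNull() | (F.col("amount") < 0)).count()
total = orders.count()

# Fail the pipeline run if more than 1% of rows are invalid (assumed threshold).
if total and issues / total > 0.01:
    raise ValueError(f"Data quality check failed: {issues} of {total} rows invalid")
```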
Other
- 3–6+ years of experience in data engineering or a related field.
- Strong problem-solving and analytical mindset.
- Effective communicator who can collaborate across teams.
- Comfortable working in a fast-paced, cloud-first environment.
- Relevant Microsoft Azure certifications (e.g., Azure Data Engineer Associate).