This role transforms data from ERP, MES, supply chain, and quality systems into trusted, actionable insights that support analytics, reporting, and operational visibility across Grain Millers' manufacturing footprint.
Requirements
- Strong proficiency in Python, SQL, and modern big data technologies (e.g., Azure Data Lake, Hadoop, Azure Data Factory).
- Expertise in cloud data platforms (Azure, AWS, or GCP) and orchestration tools (Airflow, Data Factory, Synapse, Databricks).
- Experience with data modeling, warehousing concepts, and distributed systems.
- Familiarity with infrastructure-as-code tools (e.g., Terraform, Bicep, Ansible) and CI/CD pipelines.
- Master's degree in Computer Science, Engineering, Information Systems, or a related field with 5+ years of large-scale data engineering experience; OR a Bachelor's degree with 10+ years of experience.
Responsibilities
- Design, build, and maintain scalable data pipelines using platforms and tools such as Azure, Databricks, Airflow, dbt, and Spark.
- Develop ETL/ELT processes to ingest structured and unstructured data from diverse systems.
- Partner with architects, engineers, and analysts to design and optimize data models for analytics and operations.
- Ensure high data quality, availability, and performance across enterprise systems.
- Collaborate with stakeholders to build dashboards and reporting solutions in BI tools such as Power BI or Tableau.
- Implement strong data governance practices, including data security, privacy, and compliance.
Other
- Strong communication, problem-solving, and collaboration skills.
- Experience in agribusiness or food manufacturing environments preferred.
- Equal Opportunity/Affirmative Action Employer: Race/Color/Gender/Religion/National Origin/Disability/Veteran.