Operationalizing and scaling AI capabilities across Coca-Cola's 17MM+ connected equipment fleet to reduce Total Cost of Ownership (TCO), increase transactions, and provide real-time market insights.
Requirements
- Expert-level proficiency in designing, building, and operating production-grade AI/ML pipelines on Microsoft Azure (e.g., Azure Machine Learning, Azure Kubernetes Service, Azure Functions, Azure Databricks).
- Strong software engineering background with extensive experience in Python, including developing robust, production-quality code and APIs.
- Proficiency with containerization technologies (Docker) and orchestration platforms (Kubernetes).
- Experience with deep learning frameworks (e.g., TensorFlow, PyTorch) and deploying models trained with these frameworks.
- Solid understanding of cloud infrastructure concepts, networking, and security best practices relevant to AI deployments.
- Experience with Git and CI/CD tools (e.g., Azure DevOps, GitHub Actions).
- Familiarity with IoT, telemetry data, and embedded systems (exposure to KOS or similar OS is a plus).
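As a concrete illustration of the CI/CD expectations above, a minimal GitHub Actions workflow that tests and deploys a model to an Azure ML online endpoint might look like the following sketch. The workflow name, the `AZURE_CREDENTIALS` secret, the environment variables, and the `deployment.yml` spec are placeholders, not artifacts defined by this role:

```yaml
name: deploy-model          # illustrative workflow name
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}  # assumed repo secret
      - name: Run model tests
        run: |
          pip install -r requirements.txt
          pytest tests/
      - name: Deploy to Azure ML online endpoint
        run: |
          az ml online-deployment update \
            --file deployment.yml \
            --workspace-name "$AML_WORKSPACE" \
            --resource-group "$AML_RG"
```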
Responsibilities
- Design, build, and maintain robust, scalable MLOps pipelines and AI infrastructure on Azure covering the entire ML lifecycle (data versioning, model training, model versioning, testing, deployment, and monitoring), ensuring reproducibility, reliability, high performance, and multi-tenant capabilities.
- Operationalize models developed by Data Scientists, integrating successful innovations from the AI & Cloud Innovation Engineer into the Unified IoT Ecosystem and KOS.
- Implement automated CI/CD processes for AI artifacts, ensuring rapid and reliable deployment of models into production environments (e.g., Azure ML, Azure Kubernetes Service).
- Work hands-on to containerize (e.g., Docker) and orchestrate (e.g., Kubernetes) AI services for efficient resource utilization and high availability across the global equipment fleet.
- Develop and manage API endpoints for AI models, ensuring secure, low-latency, and high-throughput inference services for consumption by applications and other systems.
- Collaborate with Lead Data Engineers and Digital Technology Solutions (IT) to provision, configure, and optimize cloud-based AI infrastructure (e.g., GPU clusters, specialized compute instances) on Azure.
- Integrate AI capabilities seamlessly into existing GEP applications and platforms, including remote equipment management tools, content management systems, marketing solutions, and analytics dashboards.
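The model-versioning and deployment responsibilities above can be sketched, in highly simplified form, as a registry that tracks candidate versions and promotes one to production behind a quality gate. `ModelRegistry`, `ModelVersion`, and the accuracy threshold are illustrative names for this sketch, not part of any Azure ML API:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ModelVersion:
    """A registered model version with its evaluation metrics."""
    version: str
    metrics: Dict[str, float]

@dataclass
class ModelRegistry:
    """Toy registry illustrating versioning plus gated promotion."""
    versions: Dict[str, ModelVersion] = field(default_factory=dict)
    production: Optional[str] = None

    def register(self, mv: ModelVersion) -> None:
        self.versions[mv.version] = mv

    def promote(self, version: str, min_accuracy: float = 0.9) -> bool:
        """Promote a version to production only if it meets the gate."""
        mv = self.versions.get(version)
        if mv is None or mv.metrics.get("accuracy", 0.0) < min_accuracy:
            return False
        self.production = version
        return True

reg = ModelRegistry()
reg.register(ModelVersion("v1", {"accuracy": 0.88}))
reg.register(ModelVersion("v2", {"accuracy": 0.93}))
print(reg.promote("v1"))  # False: fails the quality gate
print(reg.promote("v2"))  # True: becomes production
print(reg.production)     # v2
```

A production pipeline would back this pattern with the Azure ML model registry and automated evaluation rather than an in-memory dict, but the promotion gate is the same idea.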
Other
- Bachelor's degree in Computer Science, Software Engineering, Data Science, or a related quantitative field. Master's or Ph.D. preferred.
- 7+ years of hands-on experience in AI/ML engineering, MLOps, or productionizing machine learning models in cloud environments.
- Proven ability to work independently and drive technical projects from conception to production.
- Collaborative Integrator: Works effectively across diverse technical teams (Data Science, Data Engineering, IT) and with business stakeholders to ensure seamless AI integration.
- Results-Driven & Accountable: Focuses on delivering tangible business value through deployed AI, taking ownership of the end-to-end operational success of solutions.