Project

Demand forecasting and cost optimization for a chain of 8,000 convenience stores

Decision-making processes for 8,000 shops supported with explainable models

Scalable and 100% reusable cloud ML pipeline

Reduced food waste and increased revenue

Client

Our client is a chain of small and medium-sized modern convenience stores in Poland. Over 20 years of trading, they had earned significant market coverage and brand awareness. The company had established its stable, enduring position primarily through aggressive expansion, and it aims to lead the modern convenience segment through investment in innovation and artificial intelligence.

Challenge

Forecasting the sales of a set of products per store, time-frame, and day

The convenience store chain needed a solution to accurately estimate the sales of a defined group of products for each store, day, and time interval. This information would then feed into other optimization tools as assumptions, helping store owners plan supply and distribution. The core challenge was ensuring supply always met demand.
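For illustration only, the sketch below shows the level of granularity such forecasts work at; the column names and time slots are assumptions rather than the client's actual schema:

```python
import pandas as pd

# Illustrative only: column names and time slots are assumptions, not the client's schema.
# Each forecast row is keyed by store, product, date, and intraday time slot.
forecasts = pd.DataFrame(
    {
        "store_id": [101, 101, 101, 102],
        "product_id": ["SKU-001", "SKU-001", "SKU-002", "SKU-001"],
        "date": pd.to_datetime(["2023-05-02"] * 4),
        "time_slot": ["06:00-12:00", "12:00-18:00", "06:00-12:00", "06:00-12:00"],
        "predicted_units": [14.2, 22.7, 5.1, 9.8],
    }
)

# Downstream supply-planning tools can aggregate or slice at any of these levels.
daily_per_store = forecasts.groupby(["store_id", "date"])["predicted_units"].sum()
print(daily_per_store)
```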

Ensuring business users had confidence in the quality of the output

To support the solution's replicability, deployment, and monitoring, we built a robust end-to-end ML pipeline covering efficient data integration, data validation, model tuning, comprehensive evaluation, experiment tracking, and model explainability.

It was equally important to design solutions that followed standard ML and software engineering best practices to ensure business users had confidence in the quality of the output.

Building a scalable, automated solution

Another challenge was that the client wanted a solution that scaled, with automated model training and deployment and optimized cloud service costs.

Sharing knowledge

Even though we provide ongoing support for every solution, knowledge transfer was a primary requirement in this project. As a result, we delivered explanatory workshops both during and after development, giving the client a comprehensive understanding of our methodology and process. These activities extended into ML workshops for junior data scientists that helped propagate knowledge and best practices throughout the client’s organization.

Solution

Process optimization, sales forecasting, and risk mitigation

First, we reviewed the existing approach, dug into the business problem, and recommended optimizations. The main challenge in tackling the predictions was the level of detail expected from the output: each product in each store required a separate prediction for every time interval and day.

We built a solution that satisfied the business requirements while incorporating the suggested changes. We also proposed several new approaches to tackle problems that might occur in the future (but had yet to materialize).

Rebuilding the existing solution in line with ML & software development best practices

The existing solution did not meet industry standards, so we rebuilt it following ML and software development best practices. Starting with data loading and preprocessing, we fixed performance issues and were able to process years of historical data for the client's entire pool of stores.
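Snowflake appears in the technology stack, so as a rough illustration (not the client's actual code), loading years of history for preprocessing might look like the sketch below; the connection details, table, and columns are hypothetical:

```python
import snowflake.connector  # snowflake-connector-python with the pandas extra

# Hypothetical connection details, table, and columns: illustration only.
conn = snowflake.connector.connect(
    account="my_account",
    user="ml_pipeline",
    password="***",
    warehouse="ANALYTICS_WH",
    database="RETAIL",
    schema="SALES",
)

query = """
    SELECT store_id, product_id, sale_date, time_slot, units_sold
    FROM SALES_HISTORY
    WHERE sale_date >= DATEADD(year, -3, CURRENT_DATE)
"""

# fetch_pandas_all() pulls the full result set into a DataFrame for preprocessing.
with conn.cursor() as cur:
    cur.execute(query)
    history = cur.fetch_pandas_all()

conn.close()
print(history.shape)
```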

Within that part of the pipeline, we reviewed potential data sources and supported the process of analyzing and gathering them. As a team, we balanced the model-centric and data-centric approaches used in ML (we believe investing effort in both streams is the most efficient way to run ML projects).

Starting with a model-centric approach, we reached a solution that all stakeholders were satisfied with, but there was still scope for improvement. When gains began to flatten relative to the time invested, we temporarily switched to a data-centric approach and achieved several significant breakthroughs by acquiring new data sources and fixing data quality issues across the whole dataset.

We also introduced other essential components, including: 

  • Model tuning
  • Custom loss functions (to meet the business requirement of penalizing underestimation more heavily than overestimation; see the sketch after this list)
  • Experiment tracking
  • Model evaluation
  • Model explainability
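
To illustrate the custom loss component, the sketch below shows one way an asymmetric squared-error objective can be plugged into LightGBM so that underestimation is penalized more heavily than overestimation. The 3x penalty factor and the toy data are assumptions for illustration, not the client's configuration:

```python
import numpy as np
import lightgbm as lgb

# The 3x penalty for underestimation is an assumed factor for illustration;
# in practice it would be derived from the business cost of missed sales.
UNDER_PENALTY = 3.0

def asymmetric_l2(y_true, y_pred):
    """Custom LightGBM objective: gradient and hessian of a weighted squared error."""
    residual = y_pred - y_true
    weight = np.where(residual < 0, UNDER_PENALTY, 1.0)  # residual < 0 means underestimation
    grad = 2.0 * weight * residual
    hess = 2.0 * weight
    return grad, hess

# Toy data standing in for the engineered sales features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 10))
y = np.maximum(0, 5 * X[:, 0] + rng.normal(size=1_000))

model = lgb.LGBMRegressor(objective=asymmetric_l2, n_estimators=200)
model.fit(X, y)
```

Because only the gradient and hessian are re-weighted, training proceeds exactly as with the standard squared error, just with a stronger pull upwards whenever the model predicts below observed demand.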

With these components in place, we achieved a fully automated pipeline that covered all the required validation and monitoring aspects. Moreover, the client can now easily track, replicate, and validate all experiments.

To maximize the model's effectiveness, we used a black-box algorithm, which can lead business users to question the model's output. To tackle this challenge, we introduced explainability, which benefited our development team as well as our client.

Our developers could precisely validate the results, understanding patterns and identifying weak spots, while the business had confidence in the solution and could explain predictions to stakeholders. The solution lets users explain the overall model as well as batch and single predictions.
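As a sketch of how such explanations can be produced with SHAP and LightGBM (both in the technology stack), the snippet below generates a global summary for a batch of predictions and a local explanation for a single one; the stand-in model and data are illustrative only:

```python
import numpy as np
import lightgbm as lgb
import shap

# Small stand-in model; in practice this is the trained demand-forecasting model.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = np.maximum(0, 5 * X[:, 0] + rng.normal(size=500))
model = lgb.LGBMRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles such as LightGBM.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global explanation: which features drive the forecasts across the whole batch.
shap.summary_plot(shap_values, X, show=False)

# Local explanation: why one store/product/time-slot prediction came out as it did.
shap.force_plot(explainer.expected_value, shap_values[0], X[0], matplotlib=True, show=False)
```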

Even though the project's scope covered a defined set of product groups, the solution we created was widely applicable: the codebase allowed us to quickly introduce further product groups from other branches of the assortment.

A focus on simple deployment, automated scheduling, and cost optimization

In the next phase, we considered deployment. To preserve the solution’s integrity, we stuck to Microsoft Azure: the client’s existing cloud provider. Besides deployment and automated scheduling, cost optimization was another significant factor to consider. 

Thanks to the clean codebase and the quality of the architecture, deployment was relatively straightforward. We packaged the solution as a Docker image, which we also used to build an Azure ML environment. Within Azure ML, we defined the whole processing graph, covering data loading, model training and evaluation, and registration of each new model version. We also created a similar pipeline to perform the model's inference.
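As a rough sketch (assuming the Azure ML SDK v1 pipeline API; the workspace, compute target, image, and script names are placeholders, not the client's actual setup), such a training graph could be declared along these lines:

```python
from azureml.core import Environment, Experiment, Workspace
from azureml.core.compute import ComputeTarget
from azureml.core.runconfig import RunConfiguration
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

# Workspace, compute target, image, and script names are placeholders.
ws = Workspace.from_config()
compute = ComputeTarget(workspace=ws, name="cpu-cluster")

# Environment based on the same Docker image used to package the solution.
env = Environment(name="forecasting-env")
env.docker.base_image = "myregistry.azurecr.io/forecasting:latest"
env.python.user_managed_dependencies = True
run_config = RunConfiguration()
run_config.environment = env

load_step = PythonScriptStep(name="load_data", script_name="load_data.py",
                             compute_target=compute, source_directory="src",
                             runconfig=run_config)
train_step = PythonScriptStep(name="train_and_evaluate", script_name="train.py",
                              compute_target=compute, source_directory="src",
                              runconfig=run_config)
register_step = PythonScriptStep(name="register_model", script_name="register.py",
                                 compute_target=compute, source_directory="src",
                                 runconfig=run_config)

# Run the steps in order: load data -> train and evaluate -> register the new model version.
train_step.run_after(load_step)
register_step.run_after(train_step)

pipeline = Pipeline(workspace=ws, steps=[load_step, train_step, register_step])
Experiment(ws, "demand-forecasting-training").submit(pipeline)
```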

Each pipeline was easy to control through configuration, including business assumptions, scheduling, and resource usage. Thanks to comprehensive testing and right-sizing the compute resources, we minimized the cost of the whole cloud infrastructure. Moreover, each pipeline ran in a serverless fashion, so resources were released as soon as the process finished, avoiding additional costs.

We optimized computation time through parallelization, with each product processed individually and independently.
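A minimal sketch of that pattern, here using joblib and a placeholder per-product routine rather than the actual training code:

```python
import pandas as pd
from joblib import Parallel, delayed

# Tiny stand-in dataset; in practice this is the preprocessed sales history.
sales_history = pd.DataFrame(
    {
        "product_id": ["SKU-001", "SKU-001", "SKU-002", "SKU-003"],
        "units_sold": [12, 15, 7, 3],
    }
)

def process_product(product_id, product_df):
    """Placeholder for the per-product training / forecasting routine."""
    return product_id, product_df["units_sold"].mean()  # trivial stand-in "model"

# Products are independent, so the work spreads cleanly across all available cores.
results = Parallel(n_jobs=-1)(
    delayed(process_product)(pid, group)
    for pid, group in sales_history.groupby("product_id")
)
print(results)
```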

Knowledge transfer & education workshops

Our collaboration with the client during the development and testing phases led to the delivery of dedicated workshops for the client’s data science team. We shared our team’s and general industry best practices across machine learning and software development during these sessions. 

We used the weekly workshops to explain the end-to-end pipeline in detail, giving the client a clear understanding of the methodology used and the components in each project. 

Moreover, we delivered several hands-on workshops, including live coding sessions, which allowed us to develop a project together and introduce the recommended technology stack and coding standards.

Technologies used

Azure ML & Azure ML Pipelines

SHAP

LightGBM

Python

Docker

Snowflake

Optuna

MLflow

AI SOLUTIONS WE’RE PROUD OF

We have helped clients from various industries achieve their goals and meet even the most ambitious business plans.

An AI-based Assistant Streamlining Meeting Organization At A High-end Staffing Provider

GPT-Based Student Assistant: Professional Guide to Finding the Perfect Educational Path