
S&OP MasterClass™

5 pitfalls of using AI in supply chain planning and how to avoid them

Welcome to this S&OP MasterClass.

These MasterClasses dive into Integrated Business Planning and Supply Chain Planning in general and hopefully give you some useful input along the way.

Read more about PERITO IBP


How to avoid the pitfalls of using AI in the supply chain

AI can certainly improve your supply chain management. But how? And which risks should you avoid?

That is the topic of this episode, where we take a closer look at five pitfalls of using AI and what you can do to mitigate the risks. The pitfalls we cover are:

  1. How do you avoid the black box pitfall?
  2. How do you succeed with limited resources?
  3. How do you strengthen the weakest link in the data chain?
  4. How do you prevent AI from learning the wrong things?
  5. How do you avoid AI conflicting with employee incentives?

Your host is Søren Hammer Pedersen with guest Stephan Skovlund, both from Roima Intelligence.

The podcast is produced by Montanus.

Listen in and get the most out of your AI initiatives in the supply chain.


In this episode

Here you will find key timestamps from the podcast episode, so you can more easily find the topics that interest you.

00:09 – Introduction to the topic

01:36 – Organizational support when adopting AI

06:40 – The critical role of data quality in AI implementation

10:59 – Pitfall 1: The black box and explainability

12:19 – Pitfall 2: Getting started with limited resources

14:28 – Pitfall 3: The strength of the data

17:20 – Pitfall 4: Preventing AI from learning the wrong things

18:26 – Pitfall 5: AI in conflict with employee incentives

23:38 – Conclusion and advice for starting the AI journey

Transcript

Søren Hammer Pedersen (00:09):

Hello everybody. A warm welcome to this S&OP MasterClass from Roima. My name is Søren Hammer Pedersen, and I’ll be your host for today’s session. The purpose of these S&OP MasterClasses is to dive into hot and trending topics within supply chain planning, give you our perspective on these, and hopefully give you some tips and tricks that you can use in your own supply chain plan.

(00:32):

Today’s session is a follow-up to a session we did not long ago on AI and supply chain planning, and today we are continuing that topic. So, in the first podcast, we looked very much into what AI in supply chain planning is; today, we are diving into two main questions. Am I, my organization, my company ready to use AI in supply chain planning? And more importantly, where are the pitfalls when you start to include AI in your planning, and how would you mitigate them?

(01:05):

But, same as last time, I’m not alone here in the studio. Again, I brought my good friend and colleague Stephan Skovlund, who was also part of the first session. Welcome, Stephan.

Stephan Skovlund (01:16):

Thank you.

Søren Hammer Pedersen (01:17):

So, let’s just get into it. You know the audience here is supply chain professionals. They’re thinking about AI, and one of the big questions they have at the moment is, “I want to do this AI thing, but are we even ready for it?” What are your thoughts on this?

Stephan Skovlund (01:36):

It’s a good question—and a big question. One of the difficult things about AI is that it is so many things. It can be language models, machine learning models, or image models. So, I guess because there are so many moving parts when we talk about AI, a good way of approaching AI readiness is to look at it from a framework perspective.

(02:06):

Let me try to create an overview of the AI methods that you can consider applying in your company. Imagine that we have an X-axis. At the left of the X-axis, we have out-of-the-box solutions, and at the far right of the X-axis, we have white-glove solutions that are highly custom. And then on the Y-axis, we have value. So we have low value at the bottom and high value at the top, a very simple framework.

(02:39):

So, if we start by just placing the different AI methods that could be relevant, we can begin in the lower part of this matrix with solutions that are out-of-the-box and deliver relatively low value, but are quite simple to work with. Here we have chatbots that you can easily connect to a database and use to start querying that database.

(03:02):

We discussed this in the last video, but the idea is that it’s relatively simple. It’s something that you can set up and generate value with, but it’s not something where you need to hire a huge consultancy team to operate it. So that is one example. That is one way of framing it.

(03:21):

If we move a bit to the right, say somewhere in the middle of this matrix, we could talk about a machine learning model. A machine learning model also comes, like a language model, as an out-of-the-box, default model. But the thing here is that with machine learning, you need to tweak the parameters more and train it on your data. So here you start thinking about what the relevant data is, how we train it, who will train it, who will answer questions, and so on. So that is slightly more advanced than a chatbot you can just query.

(03:58):

At the far right, you then have the white-glove projects, where you hire a big consultancy team and automate end-to-end workflows. This sounds very futuristic, so let me just give you an example. You could actually deploy what is called an agent framework. This is a method where you use different AI agents that work in a sequence.

(04:23):

One of those agents could be a specialist in forecasting, another could be trained as a specialist in capacity planning, and a third could specialize in assembling all this information and creating a report. So, there are different kinds of agents who do different things, but they all work in a sequence, in a flow.

(04:43):

And this is not very far off. There are a lot of startups that work in this area. One of them is crewAI; another very well-known example is AutoGen, a framework Microsoft has made for this. So this is possible, but of course, it takes a lot more training and a lot more customization, because you need to align it with the rules, the logic, all the things that are very specific to the company, and the language models know nothing about these things. They need to be trained for that, and that takes a lot of time and effort.
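To make the idea of agents working in a sequence a bit more concrete, here is a minimal, hypothetical sketch in Python. It does not use the crewAI or AutoGen APIs; the agent roles, the placeholder logic, and the run_pipeline helper are illustrative assumptions only, standing in for a flow where a forecasting agent, a capacity agent, and a reporting agent each enrich a shared context and pass it on.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict

# Hypothetical illustration of a sequential agent flow. Real frameworks such as
# crewAI or AutoGen wrap LLM calls, tools, and memory around each step, but the
# basic pattern is a chain of specialized agents sharing a context.

@dataclass
class Agent:
    name: str
    role: str
    run: Callable[[Dict[str, Any]], Dict[str, Any]]  # takes and returns the shared context

def forecasting_agent(ctx: Dict[str, Any]) -> Dict[str, Any]:
    # Placeholder: in practice this would call a forecasting model or an LLM tool.
    ctx["forecast"] = {"SKU-1": 120, "SKU-2": 80}
    return ctx

def capacity_agent(ctx: Dict[str, Any]) -> Dict[str, Any]:
    # Checks the forecast against an assumed capacity limit.
    total = sum(ctx["forecast"].values())
    ctx["capacity_ok"] = total <= ctx.get("capacity_limit", 150)
    return ctx

def reporting_agent(ctx: Dict[str, Any]) -> Dict[str, Any]:
    # Assembles the information produced by the previous agents into a report.
    ctx["report"] = f"Forecast: {ctx['forecast']}, within capacity: {ctx['capacity_ok']}"
    return ctx

def run_pipeline(agents, context):
    # Each agent enriches the shared context and hands it to the next one.
    for agent in agents:
        context = agent.run(context)
    return context

agents = [
    Agent("Forecaster", "demand forecasting specialist", forecasting_agent),
    Agent("Capacity planner", "capacity planning specialist", capacity_agent),
    Agent("Reporter", "assembles the final report", reporting_agent),
]

result = run_pipeline(agents, {"capacity_limit": 150})
print(result["report"])
```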

(05:16):

So that’s one way of framing the landscape. Are you going out of the box? Are you going for the middle, or are you going for the far-right white-glove projects?

Søren Hammer Pedersen (05:24):

Yeah. So, a good way to assess whether I’m ready is to start by assessing how many resources I actually have that I can invest in this.

Stephan Skovlund (05:34):

Exactly.

Søren Hammer Pedersen (05:35):

Because that will have a huge effect on which kind of solution you could apply in your-

Stephan Skovlund (05:40):

Exactly. That is how the framework should be used, because it gives you an idea of the resources that are required, and of course also a little bit about the value you can expect from this. As I also mentioned in the previous session, a lot of the companies we are talking with are very much into getting their feet wet: how can we learn to use it, do a small-scope project, and get a feeling for what it can do for us.

(06:07):

So here the chatbot and the machine learning model could be good candidates, because you can take something that has been proven and that can generate a high level of value, but without doing a huge consultancy project. So that could be a good place to start for most companies in the supply chain field.

Søren Hammer Pedersen (06:27):

Yeah, I think, still on this topic, one of the questions I get a lot when I have these discussions is around data. If we talk about “Am I ready for AI?”, we must also talk about data.

Stephan Skovlund (06:40):

Absolutely, yes. That is the next question. Once you have mapped the different relevant AI approaches, the next question is, of course, data. Do we have the data required for this? And I think there is a slight misconception when we talk about AI, that the more the merrier. That is not always the case.

(07:05):

When we talk about AI, it’s of course in most cases quite data hungry, but it’s also very data quality hungry. And if you are experiencing a lot of issues with your master data, for example-

Søren Hammer Pedersen (07:20):

Which they all do.

Stephan Skovlund (07:20):

... which sometimes is the case, you know that you are heading for trouble here. But that is not the same as saying you have issues with all of your master data. That is also a common misconception, I would say; usually some of your master data is perfect, just not everything. So you need a very good idea of the data you have at hand that you can use for this. A good way of thinking about it is to look at the minimum requirements you need to make this work.

(07:52):

So let me take a forecasting example. Of course, you need sales observations, and you need those on a daily level if we are framing it for machine learning. And out of this, the machine learning model will actually be able to generate, I don’t know, 5 or 10 more variables just out of the date variable. It can be week of year; it can be month, et cetera. And on top of this, just by adding, let’s say, product group, you could certainly have a quite capable model, and it’s actually only based on three variables.

(08:27):

It’s a misconception that you need a lot of data. Actually, you can do it with much less if it’s good quality. So I would advise that you think closely about the minimum data requirement for this specific project and start there. And you can always, of course, build after that.
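As a rough illustration of how a handful of extra variables can be derived from the date alone, here is a minimal sketch in Python with pandas. The column names and sample values are assumptions for the example; the point is that daily sales plus a product group already give a small feature table a machine learning model could be trained on.

```python
import pandas as pd

# Minimal sketch: daily sales observations with a product group.
sales = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03"]),
    "product_group": ["cakes", "cakes", "bread"],
    "qty": [120, 95, 210],
})

# Several calendar features can be derived from the date column alone.
sales["weekday"] = sales["date"].dt.dayofweek        # 0 = Monday
sales["week"] = sales["date"].dt.isocalendar().week
sales["month"] = sales["date"].dt.month
sales["quarter"] = sales["date"].dt.quarter
sales["is_weekend"] = sales["weekday"] >= 5

# Together with product_group, this is already a usable feature table for a
# forecasting model, even though it started from only three columns.
print(sales.head())
```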

Søren Hammer Pedersen (08:46):

Now that we are on master data, could you actually imagine that AI could assist you in predicting master data if you have poor quality? Or could it help with that quality? So we would have this in-between step.

Stephan Skovlund (08:57):

Yes. I’m not sure if it could heal the master data. I would not go as far as saying that it would be a self-healing algorithm that you could let loose. But the idea is that AI has this critical reasoning component that a lot of statistics don’t have. And until now, when we face a problem with data issues, we typically use alerts. We set up some screens, and we define some Boolean criteria, yes or no, or numerical criteria.

(09:35):

This is rigid, and it works to a certain extent. What it doesn’t catch is all the gray-zone areas where we are almost at a threshold, almost approaching a criterion, but not quite there. Still, it is a concern, or a flag should be raised. An AI model could be very efficient for these areas, but it would need training. It would need to see some of the cases we would label as gray zones.
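As a sketch of the difference between rigid threshold alerts and a model that can flag gray-zone records, here is a hypothetical example using scikit-learn's IsolationForest. The fields, thresholds, and values are made up for illustration; the idea is that the model scores each record by how unusual its combination of values looks, rather than applying one hard cut-off per field.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical master data records: lead time in days and unit cost.
items = pd.DataFrame({
    "lead_time_days": [14, 15, 13, 16, 14, 45, 15, 2],
    "unit_cost": [10.0, 10.5, 9.8, 10.2, 10.1, 10.3, 55.0, 10.0],
})

# Rule-based check: one hard threshold per field (rigid, misses gray zones).
rule_flag = (items["lead_time_days"] > 60) | (items["unit_cost"] > 100)

# Model-based check: score records by how unusual the combination of values is.
model = IsolationForest(contamination=0.25, random_state=0)
model.fit(items)
items["anomaly"] = model.predict(items) == -1   # True = flagged for review
items["rule_flag"] = rule_flag

# Records like lead_time_days=45 or unit_cost=55.0 pass the hard rules above
# but are likely to be flagged by the model as worth a second look.
print(items)
```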

(10:04):

But I think this could be a valuable area, because many people and companies are dealing with problems with master data and spending a tremendous amount of resources on fixing them.

Søren Hammer Pedersen (10:14):

So, I hear you say that, yes, most companies, if not all, are ready to get their feet wet using AI because, in a simple setup, it might not require that much in terms of resources and data. But of course, there are huge variations here.

(10:36):

But if you are going to get your feet wet, and most of our listeners probably will at some point, a very interesting question is what you should be aware of here. Where are some of the pitfalls? And having named those pitfalls, how can we mitigate them? Could you elaborate a bit on your experience there?

Stephan Skovlund (10:59):

Yes. I think it’s a good question, Søren. Many companies are holding themselves back because of these pitfalls. Many of these are known, but they’re not discussed much. How do we mitigate them? So, let’s start with some of the most common pitfalls.

(11:19):

Of course, explainability is a key issue. If you are going to deploy a model for decision-making that is far more advanced than what you have been using earlier, then you also need somebody who can explain the output when the storm comes. That is just a question of time. In the past few years, we’ve seen many storms in the form of corona and the supply crisis.

(11:47):

Now, if you are in such a situation and you have deployed something very sophisticated, but no one really knows the inner workings of it, chances are that it will not survive very long before you go back to something you really understand. And I think that has been the death of many advanced methods in the past, and it will be the same with AI. You need a level of explainability. So how do you get that? How do you mitigate that risk?

(12:19):

There are two obvious routes for this. One is that you build the in-house competencies by hiring people with skills in this domain. That can be quite a steep cost if you’re a small or medium-sized company. Another way is to team up with a company that offers this as a service, like managed services. We have been doing these managed services for a decade now, not only for AI but mainly statistical services in data analysis and so on.

(12:50):

But just to use that as an example, when corona happened, a lot of our clients were hit very significantly because suddenly sales were falling by, I don’t know, 80%, something like that, from one day to the next. No statistical model can react to that the right way. So here, we needed to do a lot of tweaks and changes to how the models were set up. That was not a skill the companies had in-house themselves.

(13:21):

If we had not supported them in this area, the chances are that they would have returned to purely manual planning, and then all the efforts spent building these models would have been wasted. So, a very critical consideration is to team up with somebody who knows about this or to invest in the skills in-house.

(13:43):

But this goes very much back to what you’re choosing. If you choose a simple, out-of-the-box chatbot for talking with your data, you’ll be fine. If you go with AI that is slightly more advanced and that you’ll be using for your decision-making, it’s absolutely crucial that you have somebody who knows how this is running.
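On the explainability point, one simple, model-agnostic way to see which inputs a forecasting model actually relies on is permutation importance. The sketch below uses scikit-learn on made-up data; it is an illustrative technique of the editor's choosing, not a description of any specific PERITO IBP functionality.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Made-up training data: weekday, month, and a promotion flag driving demand.
n = 500
X = np.column_stack([
    rng.integers(0, 7, n),    # weekday
    rng.integers(1, 13, n),   # month
    rng.integers(0, 2, n),    # promotion flag
])
y = 100 + 15 * X[:, 2] + 3 * np.sin(X[:, 1]) + rng.normal(0, 5, n)  # demand

model = RandomForestRegressor(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["weekday", "month", "promotion"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```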

Søren Hammer Pedersen (14:05):

So, the advice here to mitigate this is that if you don’t have the managed service part, as PERITO IBP offers, then you need to take it in steps that ensure the explainability is there the whole way.

Stephan Skovlund (14:19):

Yes.

Søren Hammer Pedersen (14:20):

So, don’t go for the big bang, but take it in steps you can actually handle.

Stephan Skovlund (14:26):

Yes.

Søren Hammer Pedersen (14:27):

I think that’s good advice.

Stephan Skovlund (14:28):

Yes. I think another very important pitfall, and something that is perhaps not discussed so much, is that AI is very sensitive to data shifts. I’ve just talked about corona, which is a bit of a data shift, but it can also-

Søren Hammer Pedersen (14:46):

Quite a big one.

Stephan Skovlund (14:46):

It can also be more gradual and difficult to see, but the shift still impacts these models a lot. So, for example, not so long after ChatGPT was launched, many people started to experience that the quality of ChatGPT actually got worse, and there was talk of a concept of laziness.

(15:12):

So, for example, say you ask ChatGPT to make a summary of something you find interesting. In the beginning, these summaries were quite elaborate, quite detailed. But over time, there was a tendency for these summaries to get shorter and shorter and more and more general. This has something to do with the amount of data that you ingest into these AI models, and that can actually harm the performance.

(15:40):

So it’s not like ... And this is back to the concept of self-healing, because this is a buzzword you will hear often with AI, and it is completely wrong. There is no AI model so far that is completely self-healing. It doesn’t exist. If that were the case, OpenAI would only have one employee.

(15:59):

So, the whole point with AI is that yes, you need a lot of data for many of the models, but you also need somebody who can validate the data, clean up the data, and make sure that the model doesn’t drift in directions you don’t want. You want it to stay on a neutral course. So that is important. The maintenance aspect is important. It’s not self-healing.

Søren Hammer Pedersen (16:25):

A lot of salespeople will be hurting after that comment, because there are a lot of PowerPoints out there saying that you should just flip the switch. But I hear you say that there still needs to be hands-on work to some extent.

Stephan Skovlund (16:37):

It’s back to psychology; that’s basically what it is. The best story always wins. I would have a much easier time if I went into a room, talked with a company, and said, “It’s self-healing. It just works all the time. You just push play, and you don’t really ... It’s lights-out planning, basically.” But that’s not the case.

(16:56):

I think that attitude and saying things like that actually hurt this business because it’s a fantastic technology, but you just have to acknowledge that it’s not effortless. It’s going to take a lot of effort, and it’s also going to create some new tasks that you’re not doing today but will have to take on. So yes, it can automate many things, but yes, it will also create new tasks.

Søren Hammer Pedersen (17:20):

So, back to the pitfalls, you mentioned that AI models can go in the wrong direction. How do you stop them from learning the wrong things or drifting?

Stephan Skovlund (17:30):

Yeah, that’s a very good question. At Roima, we have set up a standard approach for managing this. One of the things that we track a lot is bias. So, for example, if we see that our model is increasingly overestimating or underestimating, we look into what is causing that.

(17:51):

Typically, it’s because there is an outlier or an anomaly in the data that should have been cleaned out but hasn’t been. That can be one explanation. But it can also be that there has been a sudden shift, and then the whole assumption about the data is something that needs to be questioned. So, having robust governance around the AI is the key to preventing these things from happening, but that is something you need to set up on top of the AI deployment.
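As a rough illustration of tracking bias, here is a minimal Python sketch that computes a rolling mean forecast error and flags when the model is consistently over- or underestimating. The window size, tolerance, and numbers are assumptions for the example, not a description of Roima's actual governance setup.

```python
import pandas as pd

# Hypothetical daily forecast vs. actuals for one item.
df = pd.DataFrame({
    "actual":   [100, 110, 95, 105, 120, 130, 140, 150, 160, 170],
    "forecast": [102, 108, 97, 110, 105, 112, 118, 121, 125, 128],
})

# Signed error: positive means the model underestimated demand.
df["error"] = df["actual"] - df["forecast"]

# Rolling mean error over the last 5 periods as a simple bias measure.
window = 5
df["rolling_bias"] = df["error"].rolling(window).mean()

# Flag when the bias drifts beyond an assumed tolerance of +/- 10 units.
tolerance = 10
df["bias_alert"] = df["rolling_bias"].abs() > tolerance

print(df[["rolling_bias", "bias_alert"]])
```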

Søren Hammer Pedersen (18:26):

I guess also, now you talk about governance, and governance often comes down to people. I guess there’s a pitfall around that as well. We have a community of supply chain professionals who have been working with supply chain planning for many years and maybe have some very fixed ways of working with this. AI will come in and change this a lot, I guess.

Stephan Skovlund (18:52):

Yes. It will require new skills for sure. I mean, there are many things. If you take the whole workflow of working with forecasting, for example, there are many things that AI can help automate, but there are also many areas where you will need to question and validate those decisions. And these new skills, this upgrading of the workforce, are things that need to be assessed together with implementing the new AI model.

Søren Hammer Pedersen (19:24):

But I guess you also need to have a positive culture around it-

Stephan Skovlund (19:29):

Yes.

Søren Hammer Pedersen (19:30):

I guess it’s because I can imagine two kinds of AI projects off the top of my head. One proves that AI is amazing and will help us, and one proves that AI will do all the wrong things, so we should do what we have always done.

Stephan Skovlund (19:48):

It’s not difficult to sell the concept of AI in a boardroom, because automation has so many positive aspects. But as you get closer to the people who are going to use it daily, it’s quite threatening in many ways that you suddenly have something that can maybe do what you’ve been doing for many years, perhaps in some ways better. So it’s also about looking at it from a more holistic perspective and not selling it as either-or, but seeing how it can augment the workforce.

(20:18):

I don’t think you can reason and say, “This is dangerous, so I should not use it.” It’s not going to work that way. It’s going to be, “Yes, this is new. Yes, this can do many of the things I used to do, but combining what I know with the AI will make me much better.” And this is the right way to look at it, I think. That is also the way I think many companies are starting to communicate it, because it has so much potential this way.

Søren Hammer Pedersen (20:44):

I think that’s also a very good point: make it positive for the individual planner that this will help them. And maybe also link it, I guess, to all the other areas where AI comes into our lives and improves things. So, make it a really positive story.

Stephan Skovlund (21:01):

Yeah. But let me say, Søren, there is one more thing that I really want to flag, and that is a pitfall. I would say don’t go for perfection. Go for building a robust model instead. By that, I mean that when you look at AI, it’s very easy to get into the details and all the things you could include because they have explanatory power for what we’re trying to achieve.

(21:33):

I can give you an example. Some years ago, I was helping a food chain, a bakery chain, and that chain needed some kind of forecasting to help them better predict the sales of cakes and so on. And of course, the weather played a key part in this. The more it rains, the worse it is for the baker.

(21:54):

So, the idea was that we had some very clear ideas about what was driving the sales in those bakery stores. Eventually, we started to build the models. We included the temperature, the wind, the rain, and so on. On the first day of using this, we looked at the weather forecast quite intensively: no rain.

(22:18):

But it turned out to be a rainy day after all, and this model failed completely. The reason for that failure was that, yes, the weather forecast can tell you that it’s going to rain, but it cannot tell you whether it’s going to rain exactly on your doorstep. That is an example of something that, yes, has explanatory power, but it is not operational.

(22:44):

I often see companies thinking about this aspect in the wrong way. They’re thinking about all the elements that can explain and predict, but in reality, they cannot use them. If you take a food manufacturer with a two-week manufacturing lead time who is very focused on the weather, maybe it would be better to focus on other things, because you simply cannot react.

(23:10):

Of course, seasonality is another thing, but whether it’s going to be a beautiful summer day in three days has very little value in reality. So, you need to think hard about the data you build your AI around and always try to start with robustness and simplicity. You can always build out from there but keep it robust.

Søren Hammer Pedersen (23:38):

I think that’s excellent advice, and I can see time is running out now. So start simple and make it robust. I think that’s good. I can already see the contours of our next masterclass here, going into practice: how would we actually set this up in the engine room? That could be very interesting to dive into more.

(23:58):

But thanks for all your good advice here. I hope that you listeners out there also got some good tips and tricks and now know what to look for when starting this exciting AI journey. So, thank you for tuning in.

(24:12):

And as always, if you are interested in knowing more about some of the topics we talked about here, supply chain planning in general, or our supply chain planning solution, PERITO IBP, feel free to reach out to Roima. We’re ready to talk to you and hope to hear from you. Hope you tune in next time as well. See you soon.

Learn more about PERITO IBP