Why your forecast doesn't need to be explainable
About Nicolas Vandeput
Demand planning expert, author, and founder of SupChains and SKU Science, with 10 years of experience in inventory management and demand forecasting. Previously a supply chain data scientist at Bridgestone managing inventory across Europe, Nicolas is now deeply committed to teaching supply chain professionals how machine learning can transform forecasting without sacrificing accuracy for storytelling.
The problem with choosing explainability over accuracy
You're building a forecast. Your leadership team wants one they can explain to the board. Sales want to understand why you're predicting what you're predicting. Your team wants to see the logic behind the numbers. So, if you opt for explainability, you choose a simpler statistical model you can decompose into trend, seasonality, and noise.
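For concreteness, here is what the "explainable" route typically looks like: a classical decomposition into the trend, seasonality, and noise you can walk the board through. The sketch below uses statsmodels on a synthetic monthly series; swap in your own demand history.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic monthly demand: trend + yearly seasonality + noise.
rng = np.random.default_rng(42)
months = pd.date_range("2021-01-01", periods=36, freq="MS")
demand = pd.Series(
    100 + 0.5 * np.arange(36)                      # gentle trend
    + 10 * np.sin(2 * np.pi * np.arange(36) / 12)  # yearly seasonality
    + rng.normal(0, 5, 36),                        # noise
    index=months,
)

# The decomposition is the story you can show the board...
parts = seasonal_decompose(demand, model="additive", period=12)
print(parts.trend.dropna().head())
print(parts.seasonal.head(12))
# ...but optimizing for this story is not the same as optimizing accuracy.
```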
Here's the uncomfortable truth: you've just made your forecast worse.
Nicolas Vandeput pushes back hard on this temptation. "More explainability is always better," he acknowledges, "but it usually comes at a tradeoff; you can get more explainability but at the cost of forecasting quality." When you prioritize being able to explain your forecast, you're optimizing storytelling, not accuracy. You're preparing to convince people, not preparing for the reality of what will actually happen.
The real question isn't whether you can explain it. It's whether your forecast scales with your supply chain decisions. Supply chains operate at scale: hundreds, thousands, or tens of thousands of decisions daily. Those decisions can't be made by humans clicking through dashboards all day. They need to be made by algorithms, supervised by people. That changes everything.
"Once you understand that scalability is important," Nicolas explains, "it means what's important is that you make a forecast as good as possible, not as explainable as possible." A good forecast at scale beats a perfect story every time.
Don't start with statistics; start with machine learning.
Most teams approach this backwards. The instinct is to start simple with a moving average, then graduate to ARIMA or exponential smoothing, and only then, if they're feeling brave, try machine learning.
Nicolas's experience suggests that the opposite path is actually easier. "From a user point of view," he says, "moving directly to machine learning is much easier because all these cases are solved."
Here's what he means. With traditional statistical models, you're constantly fighting edge cases. Promotions? Your model won't handle that without manual tweaks. New products? You need to forecast manually until you have enough history. Shortages distort demand signals? You have to manually adjust. Seasonal drops? You're spending time selecting parameters.
With machine learning done right, many of these edge cases are much easier to handle. Promotions, new products, shortages, short histories: you're not constantly fighting parameters and manual workarounds. "You can really just focus on, what do I know as a human that the model is not aware of?" Nicolas explains. The planner's job becomes gathering insights the algorithm can't find: calling customers, talking to sales, and understanding market conditions.
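A rough sketch of what this can look like in practice: one gradient-boosted model trained across all SKUs, with lags as features and promotions entering as just another input column instead of a manual override. The toy data, column names, and model settings below are illustrative assumptions, not Nicolas's actual setup.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingRegressor

# Toy history: 3 SKUs x 36 months, with an occasional promotion flag.
rng = np.random.default_rng(0)
months = pd.date_range("2021-01-01", periods=36, freq="MS")
history = pd.DataFrame(
    [
        {"sku": sku, "month": m, "on_promo": int(rng.random() < 0.1)}
        for sku in ("A", "B", "C")
        for m in months
    ]
)
history["demand"] = (
    50
    + 10 * np.sin(2 * np.pi * history["month"].dt.month / 12)
    + 30 * history["on_promo"]       # promotions lift demand
    + rng.normal(0, 5, len(history))
)

def make_features(df):
    """Lag features per SKU; promotions enter as an ordinary input column."""
    out = df.sort_values(["sku", "month"]).copy()
    for lag in (1, 2, 3, 12):
        out[f"lag_{lag}"] = out.groupby("sku")["demand"].shift(lag)
    return out.dropna()

features = ["lag_1", "lag_2", "lag_3", "lag_12", "on_promo"]
train = make_features(history)
model = HistGradientBoostingRegressor(max_iter=200, random_state=0)
model.fit(train[features], train["demand"])
print(model.predict(train[features].tail(3)))  # in-sample sanity check
```

Because one model serves every SKU, a new product with only a few months of history still gets a forecast informed by the behavior of everything else in the portfolio, which is why those edge cases stop being special.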
You still need to clean data either way. Machine learning doesn't excuse bad data. But if you're going to clean up the data anyway, you might as well get the better forecast on the other side.
Data cleaning is your real bottleneck (and AI won't fix it).
Nicolas delivers a sharp reality check on the AI hype: "I am not aware of a single company, organization, or team that used AI to do data cleaning in the sense that AI spots errors on master data and corrects them on its own."
Data cleaning isn't about removing outliers with algorithms. That's the wrong approach entirely, and Nicolas advises against it. Real data cleaning means setting business rules, talking to colleagues, and deciding: Is this transaction real or an error? Should we keep it or exclude it?
He shares a real example: a project where they found transactions at a zero price. "You don't need a strong algorithm to understand if something is wrong," he explains. Was it a correction? A giveaway? A return? The answer comes from a conversation, not an algorithm. Another case: someone had copied a part number into the price field, showing millions instead of actual prices. You spot it; that's not ML. That's asking, "Does this make sense?"
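A minimal sketch of what rule-based cleaning in this spirit can look like: flag suspect rows and attach the reason, so the output is a list for a conversation rather than an automatic deletion. The columns, thresholds, and toy data are assumptions for illustration.

```python
import pandas as pd

# Toy transactions standing in for an ERP extract; column names are assumptions.
tx = pd.DataFrame({
    "sku": ["A", "A", "A", "B", "B"],
    "unit_price": [10.0, 0.0, 4_715_022.0, 25.0, 24.5],  # zero price? part number pasted in?
    "quantity": [100, 50, 10, -20, 80],
})

def flag_suspect_transactions(tx: pd.DataFrame) -> pd.DataFrame:
    """Return rows that need a business decision, with the reason attached."""
    median_price = tx.groupby("sku")["unit_price"].transform("median")
    checks = {
        "zero_price": tx["unit_price"] == 0,                     # giveaway? return? error?
        "price_outlier": tx["unit_price"] > 100 * median_price,  # pasted part number?
        "negative_qty": tx["quantity"] < 0,                      # correction or mis-key?
    }
    flags = pd.concat(checks, axis=1)
    suspect = tx[flags.any(axis=1)].copy()
    suspect["reason"] = flags.loc[suspect.index].apply(
        lambda row: ", ".join(name for name, bad in row.items() if bad), axis=1
    )
    return suspect  # a list for a conversation, not an auto-delete

print(flag_suspect_transactions(tx))
```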
The pattern Nicolas sees again and again: "It's not so much the algorithm; it's more having a discussion about the business logic." Once you spot the issue, you negotiate with stakeholders about whether to keep or exclude the data.
This is why data science projects drag on. Not because modeling is hard, but because cleaning is an endless, necessary conversation about what's right. There's no AI shortcut here, and honestly, you wouldn't want one.
The role of the planner changes (but doesn't disappear).
Companies sometimes see machine learning forecasting as an automation project, a way to cut headcount. Nicolas doesn't see it play out that way in practice.
"I don't see so many companies after the project reducing the number of planners," he explains. What changes is what planners do. They stop tweaking models and selecting parameters. They start gathering information the model can't see, such as promotional calendars, customer calls, market intelligence, and data quality.
The best analogy: "It's a bit like how you would play on the stock market, but you would have access to insider information. As a planner, you know something that the model doesn't know. And then I think it's a great time to go there and change the forecast."
But this is critical: you have to know something. "If you know something, do something," Nicolas says. "But if you don't know anything special, it's best to just let it be." The worst thing a planner can do is override a good machine learning forecast without having real additional information.
The shift is from execution (building the forecast) to judgment (deciding when to override it). It's a better job, but it's a different job.
Inventory simulation is the only way to know if your policy works.
Here's where Nicolas gets almost evangelical. The world of forecasting is mature and data-driven; you know if your forecast is right or wrong. The world of inventory is harder to evaluate, which is why it often drifts into storytelling and 'trust me' debates.
He worked with a company that had a positive forecast bias of 12 percent. Instead of fixing the forecasts, they asked him to optimize inventory policies with that bias as a constraint. He set up a competition: different inventory policies tested against their actual historical sales data. The winner? A custom safety stock formula the team had created years ago. It worked best not because it was mathematically elegant (it wasn't) but because it solved their actual problem.
"Before it would have been you telling me, 'Nicolas, please believe that what I say is correct, but you have no way to prove it,'" Nicolas says. The simulation proved it. That's the shift inventory optimization needs: stop debating which approach is theoretically best. Run them all against your data and pick the one that wins.
This matters because inventory assumptions are easy to get wrong. A forecast-driven policy might be "keep three weeks of forecast" for a product. Simplistic, right? Not really. When tested against real historical data with a good forecast, it delivers excellent results with minimal complexity. But you only know that if you simulate it, not if you debate it.
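A minimal backtest of that "three weeks of forecast" idea, replayed week by week against historical demand. The order-up-to logic, one-week lead time, lost-sales assumption, and toy numbers below are all illustrative; the point is that the policy gets measured instead of debated.

```python
import numpy as np

def simulate_policy(demand, forecast, cover_weeks=3, lead_time=1):
    """Weekly order-up-to policy with a fixed lead time; unmet demand is lost."""
    on_hand = forecast[0] * cover_weeks
    pipeline = [0.0] * lead_time          # orders placed but not yet received
    filled = total = 0.0
    trace = []
    for d, f in zip(demand, forecast):
        on_hand += pipeline.pop(0)        # receive the order placed lead_time ago
        shipped = min(on_hand, d)
        filled += shipped
        total += d
        on_hand -= shipped
        target = f * cover_weeks          # order up to the coverage target
        order = max(target - on_hand - sum(pipeline), 0.0)
        pipeline.append(order)
        trace.append(on_hand)
    return filled / total, float(np.mean(trace))

# Replay candidate coverage levels over the same "history" and compare.
rng = np.random.default_rng(1)
demand = rng.poisson(100, 52).astype(float)   # 52 weeks of toy actuals
forecast = np.full(52, 100.0)                 # a flat, unbiased forecast
for weeks in (2, 3):
    fill, avg_inv = simulate_policy(demand, forecast, cover_weeks=weeks)
    print(f"{weeks} weeks cover: fill rate {fill:.1%}, avg inventory {avg_inv:.0f}")
```

Running the same loop over every candidate policy turns "trust me" into a table of fill rates and inventory levels.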
Your next steps
If your demand planning team is stuck choosing between explainability and accuracy, between statistical models and machine learning, or between theoretical elegance and practical results, here's where to start:
- Run an accuracy benchmark against the moving average this week. Not against industry benchmarks or internal targets. Just track: can your current process beat a simple moving average? If yes, by how much? A 20-30 percent improvement in forecast error is genuinely excellent. If you're not hitting that yet, that's your real target, not some arbitrary 85 percent accuracy goal (a minimal sketch of this benchmark follows this list).
- Stop debating inventory policy and start simulating it. Take your three best policy ideas. Test them against 12-24 months of your actual historical sales data. See which one actually minimizes inventory while maintaining your service level. Pick that one. That conversation ends.
- If you're considering machine learning, start with the demand forecast, not with data cleaning complexity. Clean your data to the point where it's reliable (talk to your teams, set business rules, and remove obvious errors), then let the model handle the rest. The planner's job becomes adding judgment, not building formulas.
- Ask yourself: Do I know something the model doesn't? Before you override any forecast or policy, ask that question. If the answer is no, trust the algorithm. If yes, act on it. But don't let gut feelings masquerade as knowledge.
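As promised in the first step above, a minimal sketch of the moving-average benchmark. The series and column names are toy placeholders; in practice you would run this per SKU over held-out months, comparing your process forecast's error against the naive baseline's.

```python
import numpy as np
import pandas as pd

def mae(actual, predicted):
    """Mean absolute error, the simplest accuracy yardstick."""
    return float(np.mean(np.abs(actual - predicted)))

# Toy monthly series standing in for one SKU's actuals and your process forecast.
rng = np.random.default_rng(7)
actuals = pd.Series(100 + rng.normal(0, 12, 24))
process_forecast = actuals.shift(1) + rng.normal(0, 8, 24)  # stand-in for your process
moving_avg = actuals.shift(1).rolling(3).mean()             # baseline: only past data

mask = moving_avg.notna() & process_forecast.notna()
mae_process = mae(actuals[mask], process_forecast[mask])
mae_baseline = mae(actuals[mask], moving_avg[mask])
improvement = 1 - mae_process / mae_baseline
print(f"Process MAE {mae_process:.1f} vs moving-average MAE {mae_baseline:.1f}; "
      f"improvement {improvement:.0%}")
```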
Forecasting accuracy and inventory optimization don't require perfect explanations. They require good data, a better model, and human judgment used exactly when it matters. That's what separates good supply chains from great ones.
Start with one change from this list. You'll be surprised how much it clarifies everything else.