Forecast Value Add (FVA) measures whether each step of the forecasting process (statistical baseline, ML adjustment, sales overlay, consensus) actually improves the forecast or makes it worse. It compares the accuracy of each step against the previous step and against a naive baseline (typically last period's actuals).
FVA exists to answer a question most companies avoid asking out loud: are our overlays helping? Demand planning teams spend significant time gathering sales input, marketing intelligence, and management overrides. FVA tells you whether that effort is paying off or whether the final consensus forecast is actually worse than the unmodified statistical baseline.
The answer surprises most teams the first time they measure it. Roughly 40-60% of sales overlays, in our experience and in published industry data, destroy forecast accuracy rather than improve it. FVA is how you find out which ones.
Horizon captures every forecast step automatically (statistical baseline, ML adjustment, each named overlay, final consensus) without the planner needing to manually save snapshots. FVA is calculated as soon as actuals arrive and is visible alongside the next forecast cycle.
The FVA report is sliceable by overlay owner, SKU, product family, and time period. A demand planner can see at a glance which sales reps' inputs improved accuracy this quarter, which destroyed it, and which were neutral. The same view shows whether the statistical model itself is adding value or whether the team should switch to a different method for specific SKU clusters.
Horizon flags overlays that consistently show negative FVA for review in the next planning cycle. This converts FVA from a quarterly report into an in-cycle intervention.
Most forecasting teams measure accuracy and bias. Few measure FVA. The result is that demand planning processes accumulate steps over time (sales review, marketing review, finance overlay, management adjustment) and nobody knows which steps add value and which subtract it. The process gets longer and longer because removing a step requires evidence, and the evidence (FVA) is never collected.
The practical impact is huge. A team running a 5-step forecasting process where 3 steps are value-destroying is doing several days per month of work to make the forecast worse. The hours spent on consensus reviews, the calendar load on senior leaders, the friction with sales: all of it produces a forecast less accurate than what the statistical model would have produced alone. Until FVA is calculated, that's invisible.
FVA also reframes the conversation with sales. Instead of arguing about whether sales overlays are helpful in principle, you have monthly data showing which specific overlays improved accuracy and which hurt it. The conversation becomes data-driven and bilateral.
FVA compares the accuracy of two adjacent forecast steps. The simplest version:
FVA (Step B vs Step A) = Error of Step A − Error of Step B

Where error is typically expressed as MAPE or WMAPE. Because these are error metrics (lower is better), the subtraction is ordered so that a positive FVA means Step B improved accuracy over Step A, and a negative FVA means Step B made the forecast worse.
The standard reference point in FVA reporting is the naive forecast: usually last period's actuals, or a simple moving average. Every step of your process should produce a forecast that beats the naive forecast. If a step underperforms naive, that step is destroying value.
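To make the arithmetic concrete, here is a minimal Python sketch of the step-vs-step and step-vs-naive comparison. The series values are hypothetical, and `mape` is a local helper defined here, not a library function:

```python
import numpy as np

def mape(actuals, forecast):
    """Mean absolute percentage error, in percent."""
    a = np.asarray(actuals, dtype=float)
    f = np.asarray(forecast, dtype=float)
    return 100.0 * np.mean(np.abs(a - f) / np.abs(a))

# Hypothetical 6 months of actuals and forecasts from adjacent steps.
actuals = [100, 120, 90, 110, 105, 130]
naive   = [95, 100, 120, 90, 110, 105]   # last period's actuals, shifted one month
step_a  = [102, 115, 95, 108, 100, 125]  # e.g. the statistical baseline
step_b  = [101, 118, 92, 109, 104, 128]  # e.g. after the ML adjustment

# Positive FVA = improvement: error before the step minus error after it.
print(f"FVA of step A over naive:  {mape(actuals, naive) - mape(actuals, step_a):+.1f} pp")
print(f"FVA of step B over step A: {mape(actuals, step_a) - mape(actuals, step_b):+.1f} pp")
```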
A demand planning team runs a 4-step process: statistical baseline, ML adjustment, sales overlay, final consensus. Measure each step's MAPE over 6 months on a representative SKU, then calculate the FVA of each step against the one before it, as in the sketch below.
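The original step-by-step figures are not reproduced above, so the MAPE values in this sketch are hypothetical placeholders, chosen only to be consistent with the conclusion that follows (a consensus forecast 8 percentage points worse than the ML step):

```python
# Hypothetical 6-month MAPE at each step, in percent (illustrative numbers only).
steps = [
    ("naive",         32.0),  # reference point: last period's actuals
    ("statistical",   26.0),
    ("ML adjust",     22.0),
    ("sales overlay", 27.0),
    ("consensus",     30.0),  # 8 pp worse than after the ML step
]

prev_name, prev_mape = steps[0]
for name, step_mape in steps[1:]:
    fva = prev_mape - step_mape  # positive = this step improved accuracy
    print(f"{name:>13} vs {prev_name:<13} FVA = {fva:+.1f} pp")
    prev_name, prev_mape = name, step_mape
```

With these placeholder numbers, the statistical and ML steps add value (+6.0 and +4.0 points) while the sales overlay and consensus steps destroy it (−5.0 and −3.0 points).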
The team is doing three days of consensus work per month to make the forecast 8 percentage points worse than it was after the ML step. That's the FVA insight in numbers.
The forecast at each step (statistical baseline, after each overlay, final consensus) needs to be saved as a frozen snapshot. Most teams that try to implement FVA discover they only save the final consensus, which makes FVA impossible to calculate retroactively. Save every step from cycle one.
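What you save matters less than saving it immutably, per step, from the start. A minimal sketch of one possible snapshot record in Python; every field name here is an assumption, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)  # frozen: a snapshot must never be edited after the fact
class ForecastSnapshot:
    sku: str
    period: date    # the month being forecast
    made_on: date   # when this step was frozen (gives you the lag)
    step: str       # "statistical", "ml", "sales_overlay", "consensus", ...
    owner: str      # who made this adjustment (enables per-person FVA later)
    quantity: float

# One snapshot per SKU, per period, per step, every cycle:
snapshots = [
    ForecastSnapshot("SKU-001", date(2024, 9, 1), date(2024, 6, 5), "statistical", "system", 1200.0),
    ForecastSnapshot("SKU-001", date(2024, 9, 1), date(2024, 6, 9), "sales_overlay", "rep_jones", 1450.0),
]
```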
FVA needs actual sales to score against. For monthly forecasts at 3-month horizon, you need to wait 3+ months before FVA is meaningful. Don't try to score a forecast until you have a full lag period of actuals.
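One way to enforce this is to score forecasts by joining them to actuals on the target period, so rows without actuals simply drop out until the lag has elapsed. A pandas sketch with hypothetical data and a 3-month lag:

```python
import pandas as pd

# Hypothetical snapshots: forecasts frozen on `made_on` for target month `period`.
fc = pd.DataFrame({
    "sku":      ["SKU-001"] * 4,
    "period":   pd.to_datetime(["2024-04-01", "2024-05-01", "2024-06-01", "2024-07-01"]),
    "made_on":  pd.to_datetime(["2024-01-01", "2024-02-01", "2024-03-01", "2024-04-01"]),
    "quantity": [110, 120, 100, 115],
})
actuals = pd.DataFrame({
    "sku":    ["SKU-001"] * 3,
    "period": pd.to_datetime(["2024-04-01", "2024-05-01", "2024-06-01"]),
    "actual": [105, 130, 95],
})

# Inner join: the July forecast has no actuals yet, so it is not scored.
scored = fc.merge(actuals, on=["sku", "period"], how="inner")
scored["ape"] = 100 * (scored["quantity"] - scored["actual"]).abs() / scored["actual"]
print(scored[["period", "quantity", "actual", "ape"]])
```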
FVA at the aggregate level hides where overlays are adding vs destroying value. Calculate FVA at SKU-month and aggregate up. You'll often find a sales rep's overlays add value for some product families and destroy it for others; that level of detail enables targeted intervention.
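A sketch of the aggregation, with hypothetical per-SKU errors, showing how the same rep's overlays can help one family and hurt another:

```python
import pandas as pd

# Hypothetical scored rows: one per SKU-month, error before and after the overlay.
df = pd.DataFrame({
    "sku":    ["A1", "A2", "B1", "B2"],
    "family": ["Alpha", "Alpha", "Beta", "Beta"],
    "owner":  ["rep_jones"] * 4,
    "ape_before_overlay": [20.0, 18.0, 15.0, 22.0],
    "ape_after_overlay":  [14.0, 15.0, 21.0, 30.0],
})
df["fva"] = df["ape_before_overlay"] - df["ape_after_overlay"]  # positive = overlay helped

# Aggregate up by owner and family:
print(df.groupby(["owner", "family"])["fva"].mean())
# Alpha: +4.5 pp (keep these overlays); Beta: -7.0 pp (intervene here)
```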
FVA only changes behaviour when the data goes back to the person making the overlay. Send each sales rep their personal FVA: which of their overlays improved accuracy, which hurt it, and by how much. The conversation that follows is far more productive than abstract debates about whether sales should be involved in forecasting.
Decide on a threshold below which an overlay layer is removed from the process. Common practice: if an overlay layer averages negative FVA over 6 rolling months, it's removed for 3 months as a trial. If accuracy improves during the trial, the layer is permanently removed.
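A sketch of the rolling-window check, with hypothetical monthly FVA values for a single overlay layer:

```python
import pandas as pd

# Hypothetical monthly FVA for one overlay layer, in percentage points.
fva = pd.Series(
    [-1.0, -2.5, 0.5, -3.0, -1.5, -2.0, -0.5, -1.0],
    index=pd.period_range("2024-01", periods=8, freq="M"),
)

# Flag the layer once its 6-month rolling average FVA turns negative.
rolling = fva.rolling(window=6).mean()
flagged = rolling < 0  # candidates for a 3-month removal trial
print(pd.DataFrame({"fva": fva, "rolling_6m": rolling, "remove_for_trial": flagged}))
```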