What Is Forecast Bias?

The Working Definition

Forecast bias is a systematic tendency for forecasts to be consistently higher or lower than actual demand. A forecast with positive bias is chronically too high (over-forecasting). A forecast with negative bias is chronically too low (under-forecasting). Bias is the directional signal in forecast error: accuracy tells you how big the errors are; bias tells you whether they lean one way.

The reason bias is dangerous: random forecast error averages out over time, but biased error compounds. A forecast that's 10% too high every month doesn't average to zero; it produces a steady buildup of excess inventory. A forecast that's 10% too low every month produces persistent stockouts and lost sales. Bias is the kind of error a business actually feels in the P&L.

This page covers the formula, how to interpret it, where bias usually comes from (the answer is usually human, not algorithmic), and the four-step process to eliminate it.

How Horizon Surfaces and Corrects Bias

Horizon calculates bias automatically at every forecast hierarchy level (SKU, customer-SKU, location, category), alongside MAPE and tracking signal. Planners see bias for every SKU in the same view where they review accuracy, so the directional problem can't hide behind a clean accuracy number.

The FVA (forecast value added) report compares bias at each step of the forecasting process: statistical baseline, ML adjustment, sales overlay, consensus. This is how teams identify whether bias is coming from the model or from human overlays. In most implementations, this single view changes how sales overlays are reviewed: the overlay that consistently adds bias without improving accuracy gets retired.

For SKUs flagged with persistent bias, Horizon surfaces the underlying time series so the planner can identify whether it's a data issue (a one-time spike in history), a trend break, or a known structural change, and apply the right correction.

Why Bias Hurts More Than Random Error

Random error and biased error sound similar but behave very differently. A forecast with high random error but zero bias produces inventory that swings up and down around the right level; safety stock can absorb it. A forecast with low random error but persistent bias produces inventory that drifts steadily in one direction; safety stock cannot fix it, because the problem is not variability but a wrong mean.

A concrete example. Two forecasts for the same SKU, 12 months of history. Forecast A averages 1,000 units, actuals average 1,000 units, but individual months vary by ±15%. Forecast B averages 1,100 units, actuals average 1,000 units, and individual months vary by only ±5%. By MAPE alone, Forecast B looks better (lower variability). But Forecast B is silently producing 10% excess inventory every single month, accumulating to roughly 1.2 months of cover sitting unused, year after year. Forecast A is noisier but unbiased.
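The contrast is easy to verify in code. A minimal sketch, with hypothetical monthly numbers chosen to match the scenario above (A's errors alternate ±15%, B's alternate +15%/+5%):

```python
actuals = [1000] * 12

# Forecast A: unbiased but noisy (errors alternate +150 / -150).
forecast_a = [1150 if m % 2 == 0 else 850 for m in range(12)]
# Forecast B: stable but biased (errors alternate +150 / +50).
forecast_b = [1150 if m % 2 == 0 else 1050 for m in range(12)]

def mape(fc, ac):
    """Mean absolute percentage error: size of errors, ignoring direction."""
    return sum(abs(f - a) / a for f, a in zip(fc, ac)) / len(ac) * 100

def bias_pct(fc, ac):
    """Signed error as a percent of total actuals: direction of errors."""
    return sum(f - a for f, a in zip(fc, ac)) / sum(ac) * 100

print(round(mape(forecast_a, actuals), 1), round(bias_pct(forecast_a, actuals), 1))
print(round(mape(forecast_b, actuals), 1), round(bias_pct(forecast_b, actuals), 1))
```

Forecast B wins on MAPE (10% vs 15%) yet carries a +10% bias that Forecast A doesn't, which is exactly the pattern an accuracy-only scorecard misses.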

The trap: most teams track only accuracy. They reward forecasters whose numbers look stable and don't notice that the stability comes from a consistent over-forecast. Tracking bias alongside accuracy catches this in a way no other metric does.

The Forecast Bias Formula

The basic calculation

Bias is the average signed error, meaning you do not take the absolute value. The sign matters.

Formula: Bias = Σ (Forecast − Actual) / n

A positive number means over-forecasting. A negative number means under-forecasting. Zero means the forecast was unbiased on average.

Worked example

Three months of data: say the forecast is 1,000 units each month and actuals come in at 900.

Bias = (100 + 100 + 100) / 3 = +100 units. The forecast is consistently 100 units high. This is textbook positive bias.
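The worked example can be reproduced in a few lines. A minimal sketch; `bias` here is a hypothetical helper, not a named library function, and the 1,000-vs-900 monthly figures are illustrative:

```python
def bias(forecasts, actuals):
    """Average signed error: do NOT take the absolute value."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

# Three months: forecast 1,000 vs actual 900 each month.
print(bias([1000, 1000, 1000], [900, 900, 900]))  # 100.0 -> +100 units, over-forecasting
```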

Percentage bias

To make bias comparable across SKUs of different scale, express it as a percentage:

Formula: Bias % = (Σ (Forecast − Actual) / Σ Actual) × 100

For the example above: 300 / 2,700 = +11.1% bias. A general rule: bias outside ±5% is worth investigating; bias outside ±10% is a process problem.
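The percentage calculation and the rule-of-thumb thresholds can be sketched together (illustrative code; the ±5% and ±10% cut-offs are the guideline stated above):

```python
def bias_pct(forecasts, actuals):
    """Signed error as a percent of total actuals, comparable across SKUs."""
    return sum(f - a for f, a in zip(forecasts, actuals)) / sum(actuals) * 100

b = bias_pct([1000, 1000, 1000], [900, 900, 900])  # 300 / 2,700 * 100
print(round(b, 1))  # 11.1

if abs(b) > 10:
    print("process problem")       # this example lands here
elif abs(b) > 5:
    print("worth investigating")
```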

Tracking signal

A more sensitive measure is the tracking signal, which is cumulative bias divided by MAD. It flags persistent bias even when the absolute number looks small.

Formula: Tracking Signal = Σ (Forecast − Actual) / MAD

A tracking signal between -4 and +4 is considered normal. Outside that range, the forecast has drifted and needs intervention.
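As a sketch, the tracking signal is straightforward to compute: cumulative signed error divided by the mean absolute deviation of the errors. The monthly figures below are hypothetical, chosen so each error looks small on its own:

```python
def tracking_signal(forecasts, actuals):
    """Cumulative signed error divided by MAD (mean absolute deviation)."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    mad = sum(abs(e) for e in errors) / len(errors)
    return sum(errors) / mad

# Six months of small but persistently positive errors (5, 4, 6, 5, 3, 5).
ts = tracking_signal([105, 104, 106, 105, 103, 105],
                     [100, 100, 100, 100, 100, 100])
print(round(ts, 1))  # 6.0 -> outside the +/-4 band, needs intervention
```

Note how no single month's error exceeds 6 units, yet the signal trips the threshold: this is the "more sensitive" behaviour the metric is designed for.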

Where Bias Usually Comes From

Statistical forecast models, by construction, produce nearly unbiased forecasts on stable history. So when bias appears, it's rarely the algorithm. The four common sources:

1. Sales overlays (most common)

Sales teams, especially in incentive-driven environments, systematically over- or under-forecast depending on quota mechanics. Under-forecasting is common when the forecast becomes the quota. Over-forecasting is common when the forecast triggers manufacturing capacity decisions that sales wants in place.

2. Outdated assumptions in the statistical model

If the historical period being modelled included a one-time spike (a one-off large order, a pandemic-era surge), and the spike isn't excluded from training, the model will project it forward and produce positive bias.
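The mechanism is easy to demonstrate with a naive mean forecast (a deliberately simple sketch; the history values and the 300-unit exclusion cutoff are illustrative, and real systems use proper outlier detection rather than a hard threshold):

```python
# Stable demand around 100 units, with one one-off large order of 500.
history = [100, 102, 98, 101, 500, 99, 100]

naive_fc = sum(history) / len(history)        # spike left in training data
cleaned = [x for x in history if x < 300]     # spike excluded (illustrative cutoff)
clean_fc = sum(cleaned) / len(cleaned)

print(round(naive_fc, 1))  # 157.1 -> projected forward, a large positive bias
print(round(clean_fc, 1))  # 100.0 -> matches the underlying demand level
```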

3. Trend extrapolation in a flattening market

Models that fit a positive trend keep projecting it even after demand stabilises. The bias appears 3-6 months after the trend actually breaks.

4. Failure to update for known structural changes

Lost customer not removed from baseline. New competitor not factored in. Channel shift not modelled. All produce one-directional drift.

Four-Step Process to Eliminate Bias

  1. Measure bias at the SKU level, every cycle. Don't rely on aggregated reporting; bias often hides in the long tail of SKUs.
  2. Identify the source. Compare bias of statistical baseline vs final consensus forecast. If baseline is unbiased and consensus is biased, the bias is being introduced by overlays. If the baseline itself is biased, the model or data needs fixing.
  3. Apply correction. For statistical bias, exclude one-time events from training data or retune the model. For overlay bias, share the bias data back with sales/marketing; visibility usually corrects behaviour faster than process changes.
  4. Re-measure next cycle. Bias correction is iterative. Don't expect one intervention to fix it permanently.
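Steps 1 and 2 can be sketched as a per-SKU check that compares the statistical baseline against the final consensus. The SKU data, field names, and the 5% threshold below are all illustrative assumptions:

```python
# Hypothetical per-SKU forecast data for one cycle.
skus = {
    "SKU-1": {"baseline": [100, 100, 100], "consensus": [120, 118, 122],
              "actual": [101, 99, 100]},
    "SKU-2": {"baseline": [50, 52, 48], "consensus": [50, 52, 48],
              "actual": [50, 51, 49]},
}

def bias_pct(fc, ac):
    return sum(f - a for f, a in zip(fc, ac)) / sum(ac) * 100

for sku, d in skus.items():
    b_base = bias_pct(d["baseline"], d["actual"])
    b_cons = bias_pct(d["consensus"], d["actual"])
    # Unbiased baseline + biased consensus = the overlay is the source.
    if abs(b_cons) > 5 and abs(b_base) <= 5:
        print(f"{sku}: overlay is adding bias ({b_cons:+.1f}%)")
    elif abs(b_base) > 5:
        print(f"{sku}: baseline itself is biased ({b_base:+.1f}%)")
```

Here SKU-1 is flagged as overlay-driven (baseline ~0%, consensus +20%), while SKU-2 passes cleanly, which is the triage logic step 2 describes.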