What Is Forecast Collaboration?

The Working Definition

Forecast collaboration is the structured process by which sales, marketing, product, and finance feed their domain knowledge into the demand forecast, and demand planning consolidates those inputs into a single agreed number that downstream functions execute against. It's the difference between a forecast generated in isolation by a planner and a forecast that reflects what the whole organisation knows.

The word "collaboration" carries some baggage. In many companies, forecast collaboration has degenerated into a negotiation sales fights for a low forecast (to beat quota), marketing fights for a high forecast (to justify investment), finance fights for whatever matches budget. Real collaboration is not negotiation. It's the structured exchange of information so the resulting forecast is more accurate than any single contributor could produce alone.

This page explains what good forecast collaboration looks like, the common failure modes, and the discipline that makes it work.

How Horizon Supports Collaborative Forecasting

Horizon's collaboration model is built around the five principles described below. Each contributor has their own interface for adding overlays: sales sees customer-level views, marketing sees promotion and launch views, each with structured fields for SKU, time period, magnitude, and reason. Every overlay is automatically tagged with its named owner.
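As a concrete illustration, here is a minimal sketch of what a structured overlay record could look like. The Python shape and every field name are illustrative assumptions, not Horizon's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Overlay:
    """One structured overlay on top of the statistical baseline.
    All field names here are illustrative, not Horizon's schema."""
    sku: str           # scope: which SKU the adjustment applies to
    period: str        # scope: time bucket, e.g. "2025-Q3"
    region: str        # scope: market, e.g. "Europe"
    delta_units: int   # magnitude: signed adjustment to the baseline
    reason: str        # the specific claim being made
    owner: str         # named contributor, tagged automatically
    source: str        # contributing function: "sales", "marketing", ...

overlay = Overlay(
    sku="SKU-1042", period="2025-Q3", region="Europe",
    delta_units=+500,
    reason="Customer A confirmed expansion",
    owner="rep.a@example.com", source="sales",
)
```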

Conflicts between overlays (e.g. sales adds +500, marketing adds +800 to the same SKU) are surfaced as exceptions for demand planning to resolve. The consensus forecast is owned by demand planning, with the contributing overlays preserved as an audit trail.
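A sketch of how such conflicts might be detected, assuming overlay records shaped like the Overlay sketch above; the grouping key and the exception shape are assumptions, not a description of Horizon's internals.

```python
from collections import defaultdict

def find_conflicts(overlays):
    """Group overlays by scope; any scope touched by more than one
    contributing function becomes an exception for demand planning."""
    by_scope = defaultdict(list)
    for o in overlays:
        by_scope[(o.sku, o.period, o.region)].append(o)
    return {scope: items for scope, items in by_scope.items()
            if len({o.source for o in items}) > 1}

# e.g. sales +500 and marketing +800 on the same SKU and quarter come
# back as a single exception with both overlays attached for review.
```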

FVA is calculated per contributor and per overlay type, visible in a dashboard where each contributor can see their own inputs. The system also flags overlays from a contributor whose FVA has been persistently negative: not to punish, but to surface a conversation.
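A minimal sketch of that persistent-negative-FVA flag, assuming a per-contributor history of FVA scores from completed cycles; the six-cycle window is an illustrative choice, not a Horizon default.

```python
def flag_persistent_negative_fva(fva_history, window=6):
    """Return contributors whose FVA was negative in every one of the
    last `window` completed cycles: a prompt for a conversation,
    not an automatic sanction."""
    flagged = []
    for owner, scores in fva_history.items():
        recent = scores[-window:]
        if len(recent) == window and all(s < 0 for s in recent):
            flagged.append(owner)
    return flagged
```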

For companies that have been running unstructured collaboration (email, meetings, late adjustments), the transition to structured collaboration is a process change as much as a tool change. Most implementations include 2-4 weeks of process design alongside the system configuration.

Why Collaboration Matters Despite the Mess

Demand planners working alone, even with excellent tools, produce forecasts that are blind to information they cannot access. Sales reps know which customers are planning expansions, which are at risk, and which have promised orders not yet entered. Marketing knows which promotions are landing, which launches are coming, and which competitive shifts are affecting category demand. Product knows which discontinuations are imminent. Finance knows which budget assumptions have changed.

None of this information is in the historical data. A forecast generated only from history will systematically miss it. The choice is not whether to collaborate; it's whether to collaborate in a structured way (overlays with named owners and reasons, captured in the planning system) or in an unstructured way (emails, hallway conversations, late-cycle adjustments that arrive after the plan is published).

The companies that do collaboration well typically run 5-10 percentage points more accurate than companies that don't, on the same underlying statistical baseline. The companies that do collaboration badly often run worse than the statistical baseline alone: collaboration degrades the forecast rather than improving it. The difference is process discipline.

What Good Forecast Collaboration Looks Like

Principle 1: Statistical baseline first, overlays second

The collaboration process starts with the system-generated statistical baseline. Contributors add overlays on top; they don't generate forecasts from scratch. This separates the unbiased machine forecast from human judgment, which is what makes FVA measurement possible later.
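In code terms, the separation might look like the sketch below: baseline and overlay layer are combined into a consensus number but never merged, assuming overlay records like the earlier Overlay sketch.

```python
def consensus_forecast(baseline, overlays):
    """Statistical baseline plus the sum of overlay deltas per scope.
    Overlays remain a separate layer: they are added on top of the
    baseline, never written into it, so each can be scored later."""
    result = dict(baseline)  # {(sku, period, region): units}
    for o in overlays:
        key = (o.sku, o.period, o.region)
        result[key] = result.get(key, 0) + o.delta_units
    return result
```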

Principle 2: Overlays are specific, named, and reasoned

Each overlay has a named owner, a specific reason, and a clear scope. "Customer A confirmed expansion, +500 units in Q3 Europe" is a valid overlay. "Sales feels Q3 looks soft" is not. The discipline matters because vague overlays cannot be evaluated for accuracy and cannot be revisited later when they prove right or wrong.
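An automated check can at least enforce that the structural pieces are present, as in this sketch (same assumed overlay fields as above); whether a reason is genuinely specific still needs human review.

```python
def is_structurally_valid(o):
    """Reject overlays missing an owner, a reason, a concrete scope,
    or a nonzero magnitude. Presence checks only: a vague reason like
    "Sales feels Q3 looks soft" still needs a reviewer to catch."""
    return all([o.owner, o.reason, o.sku, o.period, o.delta_units != 0])
```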

Principle 3: One number, owned by demand planning

After collaboration, demand planning owns the consensus forecast as a single number. Sales, marketing, and finance contribute to it but do not own it. This is the structural choice that prevents collaboration from degenerating into negotiation: there's no "sales forecast vs planning forecast" to argue about; there's one forecast with structured inputs.

Principle 4: Disagreements are surfaced, not buried

When sales and the statistical model disagree on a SKU, the collaboration process should surface that disagreement, not hide it. The disagreement gets adjudicated based on data: what does sales know that's not in the history? Is the information specific enough to act on? The conversation is data-driven, not authority-driven.

Principle 5: FVA closes the loop

Six months after the cycle, FVA measurement reveals which overlays improved accuracy and which destroyed it. Contributors see their own FVA: sales rep A's overlays improved accuracy on average; sales rep B's destroyed it. This feedback loop turns collaboration into a learning system rather than a recurring argument.
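The FVA arithmetic itself is simple. Here is a sketch of its most basic form, baseline error minus adjusted error for a single series; real systems typically aggregate a percentage-error metric across SKUs.

```python
def fva(baseline, adjusted, actual):
    """Forecast value added: absolute error of the baseline minus
    absolute error of the overlay-adjusted forecast. Positive means
    the overlay improved accuracy; negative means it destroyed it."""
    return abs(actual - baseline) - abs(actual - adjusted)

# Rep A: baseline 1000, overlay +500, actual 1450 -> FVA +400 (added value)
print(fva(baseline=1000, adjusted=1500, actual=1450))
# Rep B: same overlay, actual 980 -> FVA -500 (destroyed value)
print(fva(baseline=1000, adjusted=1500, actual=980))
```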

Common Failure Modes

Failure 1: Collaboration becomes negotiation

Without the "one number, owned by planning" principle, each function brings their own forecast and the meeting becomes a debate about whose number wins. Symptoms: meetings run long, decisions get escalated, the final forecast reflects organisational politics rather than information.

Failure 2: Overlays are bulk adjustments

Sales submits "increase Q3 by 8%" rather than specific overlays per SKU/customer with reasons. The forecast moves but accountability evaporates. When Q3 misses, no one owns the adjustment because no one made a specific claim.

Failure 3: No feedback loop

Without FVA, contributors don't see whether their overlays improved or hurt accuracy. The same overlay patterns repeat cycle after cycle. The team is collaborating but not learning.

Failure 4: Late overlays

Overlays arrive after the forecast has been published downstream. Production has already committed schedules; procurement has already placed orders. The late overlay either gets ignored or forces costly replanning. Set a cutoff date and enforce it.
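Enforcing the cutoff can be as mechanical as the sketch below, assuming each overlay carries a submission date; the deferral policy is an illustrative choice.

```python
from datetime import date

def accept_overlay(overlay, submitted_on, cutoff):
    """Enforce the overlay cutoff: anything submitted after the
    cycle's cutoff date is deferred to the next cycle instead of
    forcing a replan of an already-published forecast."""
    if submitted_on > cutoff:
        raise ValueError(
            f"Overlay from {overlay.owner} submitted {submitted_on}, "
            f"after cutoff {cutoff}; defer to next cycle.")
    return overlay

# accept_overlay(overlay, date(2025, 7, 18), cutoff=date(2025, 7, 15))
```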

The Practical Rhythm

Most mature collaborative forecasting runs on a monthly cycle:

- Generate the statistical baseline and publish it to contributors.
- Collect structured overlays, closing the window at a firm cutoff.
- Resolve conflicts and exceptions; demand planning assembles the consensus number.
- Publish the consensus forecast downstream and review FVA from completed cycles.

Within this rhythm, FVA from prior cycles informs which overlays to weight heavily and which to challenge.