This page is for a supply chain leader at a manufacturing company evaluating planning software for the first time, or replacing a tool that has stopped paying back. Most evaluation guides list 30+ features and produce decision paralysis. This one focuses on the eight capabilities that genuinely separate platforms that work from platforms that don't, plus the red flags worth catching early.
The guide assumes you've already concluded that Excel or your ERP's planning module isn't sufficient; if that decision is still open, the move-from-Excel question is a separate conversation. From here on, the question is: what makes one planning platform better than another for a manufacturer?
Horizon was built for mid-market and enterprise manufacturers ($300M-$3B revenue range, 500-5,000 active SKUs), and the eight capabilities covered in this guide are part of the core platform rather than premium modules.
Per-SKU model selection runs every cycle across a candidate ensemble. Multi-echelon inventory optimization is the default for multi-location customers, with risk pooling math at upstream nodes. Finite capacity scheduling handles sequence-dependent setups, parallel resources, and resource calendars. Financial integration is native: the volume plan converts to revenue and margin automatically.
Structured overlay capture with FVA reporting is built in, not on the roadmap. Exception engines flag 10-20% of SKUs per cycle. Pre-built connectors exist for SAP S/4HANA, Oracle NetSuite, D365, and Infor. Implementations run 6-10 weeks for the first module; full deployment takes 4-9 months.
The honest scope: Horizon best fits manufacturers in process, discrete, and CPG categories with the scale described above. Companies needing fashion/apparel SKU complexity, semiconductor-level fab scheduling, or extremely specialized constraints sometimes need different tools; we'll be specific about that fit before any commercial discussion.
Planning software is typically a 5-10 year platform commitment. Total cost over that period (license + implementation + internal team + ongoing consulting) usually runs 3-7x the year-one license cost. The wrong choice doesn't just waste money; it forces a painful migration or expensive workarounds that calcify over time.
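To make the multiple concrete, here is a minimal back-of-the-envelope TCO sketch. All dollar figures are hypothetical illustrations, not Horizon pricing, and the model assumes flat license renewals over the commitment period.

```python
def total_tco(year_one_license: float,
              implementation: float,
              internal_team_per_year: float,
              consulting_per_year: float,
              years: int = 5) -> float:
    """Rough total cost of ownership over the platform commitment.

    Assumes the annual license stays flat at the year-one rate and
    implementation is a one-time cost incurred up front.
    """
    return (year_one_license * years
            + implementation
            + internal_team_per_year * years
            + consulting_per_year * years)

# Hypothetical mid-market numbers (USD)
license_y1 = 400_000
tco = total_tco(license_y1,
                implementation=250_000,
                internal_team_per_year=80_000,
                consulting_per_year=30_000)
print(tco, tco / license_y1)  # total spend and its multiple of year-one license
```

With these illustrative inputs the five-year total lands at 7x the year-one license, at the top of the 3-7x range; the point is that the license line is a minority of the real spend.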
The asymmetry is worse than most software categories because the data complexity is high. Once master data has been migrated, forecasts and overlays have accumulated, and operational rhythms have settled into the platform, switching becomes genuinely difficult. Companies sometimes stay on platforms they regret for years because the cost of leaving is greater than the cost of working around the limitations.
The good news: most of the failure modes are predictable and can be caught during evaluation. The guide below focuses on what to verify before signing, not after.
Different SKUs need different forecasting methods. The platform should run candidate models (Holt-Winters, ARIMA, Croston for intermittent, gradient-boosted trees for volatile) and pick the best per SKU automatically. Platforms that require manual model selection per SKU don't scale.
What to verify: Ask to see a sample SKU's model selection history: which models were considered, why the chosen model was picked, and how often selection re-runs.
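The mechanics behind automatic per-SKU selection can be sketched simply: backtest every candidate on a holdout window and keep the model with the lowest error. This is an illustrative toy with three stand-in models (naive, moving average, simple exponential smoothing), not the candidate set any particular platform uses.

```python
from statistics import mean

def naive(history):            # last observed value
    return history[-1]

def moving_avg(history, k=3):  # mean of the last k periods
    return mean(history[-k:])

def ses(history, alpha=0.3):   # simple exponential smoothing
    level = history[0]
    for y in history[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

CANDIDATES = {"naive": naive, "ma3": moving_avg, "ses": ses}

def select_model(history, holdout=4):
    """Backtest each candidate over a rolling holdout; return (name, MAE)."""
    scores = {}
    for name, model in CANDIDATES.items():
        errors = []
        for t in range(len(history) - holdout, len(history)):
            forecast = model(history[:t])      # fit on data up to t, predict t
            errors.append(abs(history[t] - forecast))
        scores[name] = mean(errors)
    best = min(scores, key=scores.get)
    return best, scores[best]

# A steadily trending SKU: the naive model tracks the trend best here
print(select_model([10, 12, 14, 16, 18, 20, 22, 24]))
```

A real engine would run a richer candidate set (Holt-Winters, ARIMA, Croston, boosted trees) and re-run selection every planning cycle, but the selection logic is the same shape.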
For manufacturers with multiple stocking locations (plants, central DCs, regional DCs), multi-echelon inventory optimization (MEIO) typically releases 15-25% more working capital than single-echelon methods. If the platform only offers single-echelon optimization, that capability gap is significant.
What to verify: Verify the math is genuinely multi-echelon, not a sequence of single-echelon calculations. Ask the vendor to walk through how the math handles risk pooling at upstream nodes.
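The risk pooling effect the vendor should be able to explain is standard inventory math: independent demand variances add at an upstream node, so the pooled standard deviation is the square root of the sum of squares, which is always less than the sum. A minimal sketch under textbook assumptions (normal demand, independent DCs, a simple z * sigma * sqrt(LT) safety stock formula):

```python
from math import sqrt

def safety_stock(sigma_demand: float, lead_time: float, z: float = 1.65) -> float:
    """Safety stock at one node: z * sigma * sqrt(lead time), ~95% service."""
    return z * sigma_demand * sqrt(lead_time)

def pooled_sigma(sigmas):
    """Demand variability seen by an upstream node serving independent DCs:
    variances add, so pooled sigma = sqrt(sum of squares)."""
    return sqrt(sum(s * s for s in sigmas))

dc_sigmas = [40.0, 30.0, 30.0]   # hypothetical weekly demand std dev per regional DC
lead_time = 2                    # weeks

separate = sum(safety_stock(s, lead_time) for s in dc_sigmas)
pooled = safety_stock(pooled_sigma(dc_sigmas), lead_time)
print(separate, pooled)  # pooled stock is materially lower at the same service level
```

If a vendor's "multi-echelon" answer never touches this variance-pooling behavior at upstream nodes, the math is likely sequential single-echelon under the hood.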
Process and discrete manufacturers face sequence-dependent setup times: running product A before product B has a different changeover cost than running B before A. Schedulers that ignore this produce schedules that don't reflect shop-floor reality. Verify the constraint model handles sequence-dependent setups, parallel resources, and resource calendars.
What to verify: Show the vendor a representative changeover matrix and ask them to schedule against it. Generic scheduling demos can hide real constraint-handling limitations.
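The asymmetry is easy to demonstrate on a tiny changeover matrix. The sketch below uses a nearest-neighbour greedy heuristic purely for illustration; production schedulers use exact or metaheuristic solvers, and the matrix values are made up.

```python
def greedy_sequence(jobs, changeover, start):
    """Order jobs to reduce total changeover time: from the current product,
    always run next the job with the cheapest changeover (greedy heuristic)."""
    remaining = set(jobs)
    seq, total, current = [], 0, start
    while remaining:
        nxt = min(remaining, key=lambda j: changeover[(current, j)])
        total += changeover[(current, nxt)]
        seq.append(nxt)
        remaining.discard(nxt)
        current = nxt
    return seq, total

# Hypothetical asymmetric changeover minutes: A -> B is cheap, B -> A is not
changeover = {("A", "B"): 30, ("B", "A"): 90,
              ("A", "C"): 60, ("C", "A"): 20,
              ("B", "C"): 15, ("C", "B"): 75}

print(greedy_sequence(["B", "C"], changeover, start="A"))
```

Starting from product A, running B then C costs 30 + 15 = 45 minutes of changeover; the reverse order costs 60 + 75 = 135. A scheduler that treats changeovers as symmetric cannot see that difference, which is exactly what a demo against your real matrix would expose.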
If you're moving to IBP, not just S&OP, the operational plan must convert to financial outcomes (revenue, margin, working capital) inside the platform. Bolted-on financial reporting is not the same as native financial integration.
What to verify: Change a SKU mix in the demand plan and verify margin projection updates automatically. Compare scenarios side-by-side on financial outcomes.
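The conversion you are testing for is conceptually simple: join the unit-volume plan to per-SKU economics and roll up revenue and margin, so a mix change flows straight through. A minimal sketch with hypothetical SKU prices and costs:

```python
def financials(plan, sku_economics):
    """Convert a unit-volume plan into revenue and margin via per-SKU economics."""
    revenue = sum(units * sku_economics[sku]["price"]
                  for sku, units in plan.items())
    margin = sum(units * (sku_economics[sku]["price"] - sku_economics[sku]["cost"])
                 for sku, units in plan.items())
    return revenue, margin

# Hypothetical SKU economics
skus = {"A": {"price": 10.0, "cost": 6.0},    # 40% margin
        "B": {"price": 25.0, "cost": 20.0}}   # 20% margin

base_plan = {"A": 1000, "B": 400}
mix_shift = {"A": 1200, "B": 300}             # shift toward the higher-margin SKU

print(financials(base_plan, skus), financials(mix_shift, skus))
```

Note the scenario comparison this enables: the mix shift lowers revenue (19,500 vs 20,000) but raises margin (6,300 vs 6,000), which is precisely the side-by-side financial view the demo should update automatically.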
Collaborative overlays need to be specific, named, and reasoned, not bulk adjustments. The platform should capture each overlay with owner, reason, and scope, and calculate FVA so contributors can see which of their inputs improve accuracy.
What to verify: Ask to see an FVA report from a real customer. Vague answers or "FVA is on our roadmap" responses indicate the capability isn't real.
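Under the hood, FVA is a simple comparison: the error of the statistical baseline minus the error of the forecast after a contributor's overlay. A minimal sketch using MAPE as the error measure, with made-up numbers:

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def fva(actuals, statistical, overlaid):
    """Forecast value add: MAPE points the overlay removed.
    Positive means the human input improved on the statistical baseline;
    negative means it made the forecast worse."""
    return mape(actuals, statistical) - mape(actuals, overlaid)

actuals     = [100, 110, 90, 120]
statistical = [ 95, 100, 100, 100]   # baseline model output
sales_team  = [102, 108, 92, 118]    # an overlay that genuinely added signal

print(fva(actuals, statistical, sales_team))
```

A real FVA report runs this per contributor and per touchpoint over many cycles; the question for the vendor is whether that report exists in production today, not whether the formula is hard.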
Reviewing 5,000 SKUs per cycle is impossible. The platform should flag SKUs needing attention based on accuracy drop, bias drift, large cycle-over-cycle change, and other configurable triggers. Without exception management, planner productivity hits a ceiling regardless of other features.
What to verify: Watch a planner walk through their cycle in the tool. If they review every SKU, the exception engine isn't really being used.
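The triggers named above reduce to threshold checks over per-SKU metrics. A minimal sketch with hypothetical thresholds and metric names (any real platform will use its own):

```python
def flag_exceptions(sku_metrics,
                    accuracy_drop=0.10,   # flag if accuracy fell > 10 points
                    bias_limit=0.15,      # flag if systematic over/under-forecast
                    change_limit=0.30):   # flag if plan moved > 30% cycle-over-cycle
    """Return only the SKUs a planner needs to look at, with reasons."""
    flagged = {}
    for sku, m in sku_metrics.items():
        reasons = []
        if m["prev_accuracy"] - m["accuracy"] > accuracy_drop:
            reasons.append("accuracy drop")
        if abs(m["bias"]) > bias_limit:
            reasons.append("bias drift")
        if abs(m["plan_change"]) > change_limit:
            reasons.append("large cycle-over-cycle change")
        if reasons:
            flagged[sku] = reasons
    return flagged

metrics = {
    "SKU-001": {"accuracy": 0.82, "prev_accuracy": 0.84, "bias": 0.02, "plan_change": 0.05},
    "SKU-002": {"accuracy": 0.60, "prev_accuracy": 0.78, "bias": 0.01, "plan_change": 0.10},
    "SKU-003": {"accuracy": 0.75, "prev_accuracy": 0.76, "bias": 0.22, "plan_change": 0.45},
}
print(flag_exceptions(metrics))  # only SKU-002 and SKU-003 reach the planner
```

The useful diagnostic in the demo is the ratio: if most SKUs pass through clean and the planner's queue holds only the flagged minority with reasons attached, the exception engine is real.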
The platform reads master data and transactions from ERP and writes recommended decisions back. Pre-built connectors to your specific ERP (SAP, Oracle, D365, NetSuite, Infor) save 4-12 weeks of implementation time and reduce ongoing integration maintenance.
What to verify: Ask the vendor for their integration documentation for your specific ERP. Generic "we can integrate with anything" is a red flag: pre-built connectors are different from "we'll build it during implementation."
The software is half the equation; the implementation is the other half. Talk to 3+ reference customers, ideally similar in size, industry, and ERP, about their implementation experience. Did the timeline match the original promise? Who from the vendor was on the project? What did they wish they'd known earlier?
What to verify: Insist on speaking to reference customers chosen by you, not only those the vendor pre-selects. Vendors with strong references should be comfortable with this.
If the implementation plan involves significant custom development to handle "your specific requirements," the product probably doesn't fit your business out-of-box. Custom development typically inflates timelines 50-100% and creates ongoing maintenance debt. Standard-configuration deployments are dramatically lower risk.
Vendors that describe their AI without naming algorithms (gradient-boosted trees, LSTM, ARIMA, ensemble methods) usually have less AI than they claim. Substance comes with specificity.
If the vendor's references describe using only basic features (demand planning but not optimization, or S&OP but not IBP), the advanced capabilities may not be production-ready even if they appear in demos.
A 4-8 week paid proof-of-concept on your actual data reveals more than any demo. Vendors confident in their product accept POCs readily. Vendors that resist or only offer scripted demos may be hiding capability gaps.
A 3-year TCO that comes in 30-50% below comparable platforms usually means the implementation cost is understated, services are excluded, or scope has been quietly narrowed. Ask for an explicit assumption sheet behind every TCO number.
For a mid-market manufacturer ($300M-$1B):