How Good is Good?

Forecast accuracy isn’t the issue.


Arbitrary accuracy targets are often handed to demand planning teams with little to no basis in reality. Often, they are based on external benchmarks, a measure that is questionable at best and dangerous at worst. If you’re going to use one, ask yourself these questions before you jump to any conclusions:

  • Are the numbers self-reported as part of a survey, or are they calculated by a single method by whoever is running the benchmark?
  • If they’re self-reported, are they all calculated the same way?
  • Are the forecasting resources at the company comparable?
  • Are they based on the same forecasting lag?
  • Is the distribution channel mix comparable?
  • Are the products comparable?
  • Are there similar promotions?

The list goes on.

If you answered favourably to all of the above questions, still be careful. Benchmarks have to be taken with a grain of salt. Maybe a case. Believe it or not, even seemingly cut-and-dried metrics like wMAPE can be calculated differently from company to company. So who is comparable to you? Only you are.
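To see how even a standard metric drifts apart in practice, here is a minimal sketch in Python, with invented numbers, of how two companies could both report “wMAPE” on the same data and get very different results simply by measuring at a different grain:

```python
import numpy as np

# Hypothetical weekly actuals and forecasts for one SKU over four weeks
actuals   = np.array([120.0,  80.0,  40.0,  60.0])
forecasts = np.array([100.0, 110.0,  30.0,  55.0])

def wmape(a, f):
    # Volume-weighted MAPE: sum of absolute errors over sum of actuals
    return np.abs(a - f).sum() / a.sum()

# Company A measures at the weekly grain
weekly_wmape = wmape(actuals, forecasts)             # 65 / 300 = 21.7%

# Company B sums the same data to a monthly total first; over- and
# under-forecasts net out, so the "same" metric looks far better
monthly_wmape = wmape(actuals.sum(keepdims=True),
                      forecasts.sum(keepdims=True))  # 5 / 300 = 1.7%

print(f"Weekly wMAPE:  {weekly_wmape:.1%}")
print(f"Monthly wMAPE: {monthly_wmape:.1%}")
```

Neither variant is wrong. They just aren’t comparable.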


A better way

Rather than trying to determine an arbitrary upper accuracy target pulled out of who-knows-where, you can establish a lower watermark: one that says “you should be at least as good as, or better than, the naïve forecast”. From there, measure up using forecast value add (FVA). Created by Michael Gilliland, FVA uses simple statistical methods to determine how each step and participant in a forecasting process impacts accuracy. This is better than a benchmark because it is relative to your company’s particular product portfolio and situation. It tells you how well you would be doing if you weren’t forecasting beyond a naïve model.
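As a rough sketch of the mechanics (a later post will dig into the details), the core comparison looks something like this. The series and numbers below are made up for illustration, using a one-period-lag naïve forecast and wMAPE as the error measure:

```python
import numpy as np

# Hypothetical monthly actuals, plus the statistical and planner-adjusted
# ("final") forecasts produced by the process for the same periods
actuals    = np.array([100.0, 120.0,  90.0, 110.0, 105.0])
stat_fcst  = np.array([ 95.0, 115.0, 100.0, 108.0, 100.0])
final_fcst = np.array([ 98.0, 118.0,  95.0, 112.0, 103.0])

def wmape(a, f):
    return np.abs(a - f).sum() / a.sum()

# Naive forecast: each period's forecast is the prior period's actual.
# The first period has no prior actual, so compare from period 2 onward.
naive_err = wmape(actuals[1:], actuals[:-1])
stat_err  = wmape(actuals[1:], stat_fcst[1:])
final_err = wmape(actuals[1:], final_fcst[1:])

# FVA of each step = error removed relative to the step before it.
# Positive means the step added value; negative means it hurt accuracy.
fva_stat    = naive_err - stat_err   # value added by the stat model
fva_planner = stat_err - final_err   # value added by planner overrides

print(f"Naive error:  {naive_err:.1%}")   # the lower watermark
print(f"Stat FVA:     {fva_stat:+.1%}")
print(f"Planner FVA:  {fva_planner:+.1%}")
```

Each step’s FVA is simply the error it removes relative to the step before it; a negative number means that step is making the forecast worse.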

By establishing that lower watermark, you understand a couple of things:
1) Whether the people, tools, and process you are bringing to bear on your demand planning are adding value, and how much.
2) Where things could be improved.

As one director of demand planning at a top food and beverage company put it:

“I can understand where my team’s effort is being spent, and where it is paying off.”

Let’s run through a common situation:
If we’re spending our time constantly adjusting low-volume SKUs that should be easy to forecast statistically, what would happen if we instead reallocated those resources to high-volume SKUs that are tough to forecast? Letting the stat models ride on the low-volume, predictable ones might give you a marginally worse forecast, but stack that up against how much value can be added to the high-volume, unpredictable SKUs. What you’ll find are significant gains in planner effectiveness: doing more with less.
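Here is one hedged sketch of how you might spot that pattern in your own data, assuming you already track stat-forecast and final-forecast error per SKU. All column names and numbers below are invented for illustration:

```python
import pandas as pd

# Hypothetical per-SKU results: annual volume, plus wMAPE for the stat
# forecast and the planner-adjusted (final) forecast. All values invented.
df = pd.DataFrame({
    "sku":         ["A", "B", "C", "D"],
    "volume":      [10_000, 12_000, 400, 350],
    "stat_wmape":  [0.35, 0.40, 0.12, 0.10],
    "final_wmape": [0.28, 0.31, 0.13, 0.11],
})

# Planner FVA per SKU: positive means the override helped
df["planner_fva"] = df["stat_wmape"] - df["final_wmape"]

# Segment by volume and see where the overrides actually pay off
df["segment"] = pd.cut(df["volume"], bins=[0, 1_000, float("inf")],
                       labels=["low volume", "high volume"])
print(df.groupby("segment", observed=True)["planner_fva"].mean())
# In this invented data, overrides add value on high-volume SKUs and
# subtract it on low-volume ones: a signal to reallocate planner time.
```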


Still not convinced?

Remember that benchmarks represent a fixed point in time. If you set a 25% wMAPE target at the SKU level, it doesn’t take change over time into account. It may or may not have been appropriate last year, but is it still appropriate this year? Have circumstances changed?

By continually measuring against a naïve forecast, you capture things like a portfolio getting harder to manage. Maybe you’ve got more promotions, lost SKUs, or have lost or brought on business. None of that is captured in a benchmark.

Measuring FVA can help shift the conversation at the company from an accuracy target that isn’t being achieved to the error that’s being removed from the forecast.



A later post will dig deeper into how you can calculate FVA at your company.


Author: Marcus Rogers


Like what you just read? Sign up for our newsletter.

Our quarterly forecast for the future of supply chain, straight to your inbox. 
