Want to make weather forecasting look good? Compare it to
predicting the economy. So concludes an ABC News Australia story by
finance reporter Sue Lannin, entitled "Economic forecasts no better than a random walk."
The story covers a recent apology by the International Monetary Fund
over its estimates for troubled European nations, and an admission by the Reserve Bank of Australia that its economic forecasts were wide of the mark.
An internal study by the RBA found that 70% of its inflation forecasts were close, but its economic growth forecasts were worse, and its unemployment forecasts were no better than a random walk. (Recall the random walk [or "no change" forecasting model] uses the last observed value as the forecast for future values.) In other words, a bunch of high-priced economists generated forecasts upon which government policies were made, when they could have just ignored (or fired) the economists and made the policies based on the most recent data.
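To make the "no change" model concrete, here is a minimal sketch (with made-up numbers, not RBA data) of a random walk forecast:

```python
# Hypothetical illustration: the random walk ("no change") model simply
# carries the last observed value forward as the forecast for every
# future period. The numbers below are invented for the example.
history = [2.1, 2.4, 2.3, 2.7, 2.5]  # e.g. quarterly inflation readings

def random_walk_forecast(series, horizon=1):
    """Forecast all future periods as the last observed value."""
    return [series[-1]] * horizon

print(random_walk_forecast(history, horizon=3))  # [2.5, 2.5, 2.5]
```

That's the whole model: no parameters to estimate, no economists required.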
Anyone who has worked in (or paid any attention to) business forecasting will not be surprised by these confessions. Naive forecasts like the random walk or seasonal random walk can be surprisingly difficult to beat. And simple models, like single exponential smoothing, can be even more difficult to beat.
While we assume that our fancy models and elaborate forecasting processes are making dramatic improvements in the forecast, these improvements can be surprisingly small. And frequently, due to the use of inappropriate models or methods, and to "political" pressures on forecasting process participants, our costly and time-consuming efforts just make the forecast worse.
The conclusion? Everybody needs to do just what these RBA analysts did, and conduct forecast value added analysis. Compare the effectiveness of your forecasting efforts to a placebo -- the random walk forecast. If you aren't doing any better than that, you have some apologizing to do.
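An FVA check can be as simple as comparing error against the placebo. The sketch below (all numbers invented for illustration) compares the mean absolute error of a set of "expert" forecasts against the random walk:

```python
# A minimal forecast value added (FVA) sketch with made-up numbers:
# compare the mean absolute error (MAE) of your forecasting process
# against the random walk placebo over the same periods.
actuals   = [100, 104, 103, 108, 110, 107]
forecasts = [101, 102, 106, 105, 111, 109]  # hypothetical "expert" forecasts

def mae(preds, obs):
    """Mean absolute error between forecasts and actuals."""
    return sum(abs(p - o) for p, o in zip(preds, obs)) / len(obs)

# The random walk forecast for period t is the actual from period t-1,
# so score both methods over periods 1..n only.
naive = actuals[:-1]
fva = mae(naive, actuals[1:]) - mae(forecasts[1:], actuals[1:])
print(f"FVA (positive = you beat the naive model): {fva:.2f}")  # 0.80
```

If FVA comes out at or below zero over a meaningful sample, the process is adding cost without adding accuracy.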
Original article