How to Predict the Future (and How Not to)

Moody’s and Standard & Poor’s have just been charged with knowingly misrepresenting the credit risk involved in some of the mortgage-backed securities they rated during the run-up to the 2008 financial crisis. The agencies will resist, saying that they simply erred in predicting the future, as anyone could have.
But many people called the housing bubble correctly, long before the collapse in 2008. Several economists had pointed out that overheated home prices were inflating an enormous bubble, and it became a matter of public concern well before the crash. The term “housing bubble” appeared in just eight news accounts during 2000, but in 2005 it appeared nearly 3,500 times, and Google searches for the term grew nearly ten-fold from 2004 to 2005 alone.
“And yet, the ratings agencies—whose job it is to measure risk in financial markets—say that they missed it. It should tell you something that they seem to think of this as their best line of defense. The problems with their predictions ran very deep.” This is the indictment leveled by Nate Silver, in his fresh and readable book The Signal and the Noise: Why So Many Predictions Fail - but Some Don't (Penguin, 2012). It is the subject of this week’s Friday Book Share.
It remains to be seen whether the government will be able to prove its case that the ratings agencies intentionally misled investors with respect to the risks involved in mortgage-backed securities, but what can’t be denied is that the risk of a meltdown was enormous, and it was well known. So why were their predictions so terrible?
One important issue, Silver suggests, is that the agencies were “unable or uninterested in appreciating the distinction between risk and uncertainty.” Risk, he says, is something that can be calculated, so you can put a price on it: you can price the risk of winning or losing at poker or roulette. By contrast, uncertainty is “risk that is hard to measure,” where your best estimate could be off by a factor of 1,000 or more. According to Silver, “Risk greases the wheels of a free-market economy; uncertainty grinds them to a halt. The alchemy that the ratings agencies performed was to spin uncertainty into what looked and felt like risk.”
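To make the distinction concrete: a casino bet can be priced because its probabilities are fully specified, while an uncertain payoff whose probability is only known to within a factor of 1,000 cannot. Here is a minimal sketch in Python (my illustration, with made-up default probabilities, not an example from the book):

    # Risk: an American roulette wheel has 38 pockets, and a single-number
    # bet pays 35 to 1. The probabilities are fully known, so the bet can
    # be priced exactly -- risk in Silver's sense.
    p_win = 1 / 38
    expected_value = p_win * 35 + (1 - p_win) * (-1)
    print(f"Expected value per $1 roulette bet: ${expected_value:.4f}")  # about -$0.0526 (the house edge)

    # Uncertainty: suppose all we know about a default scenario is that its
    # probability lies somewhere between 0.002% and 2% -- a factor of 1,000.
    # The "price" of the exposure swings by three orders of magnitude.
    for p_default in (0.00002, 0.02):
        print(f"P(default) = {p_default:.3%} -> expected loss on $1M: ${p_default * 1e6:,.0f}")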
Overall, the quality of predictions made by experts has been abysmal. The predictions of economic experts, political pundits, and other commentators generally perform at about the rate of random chance. In fact, the most widely cited experts’ predictions are generally the least accurate. This is partly because you can generate media demand for your prognostications if they are entertaining, controversial, or unusual – but those are not qualities usually associated with accuracy.
Nate Silver is the statistician and election forecaster whose “FiveThirtyEight” blog for the New York Times called the 2012 Presidential election flawlessly, correctly predicting the outcome in all 50 states and the District of Columbia. He says one of the most important problems all pollsters have – indeed, all researchers, prognosticators, and pundits – is that they tend to predict things that confirm their own hopes and biases. Moreover, it is very difficult to keep your predictions from being contaminated by your own subjective opinions. “Pure objectivity,” he says, “is desirable but unattainable in this world.” Yet pure objectivity is something he clearly aspires to, and he suggests that you should, too, if you want to predict future events well.
(Personal aside: I read The Signal and the Noise prior to the election, while I was still eagerly anticipating a Romney win, and my friends found it odd that I suddenly began suggesting glumly that Romney probably wouldn’t win, after all. My revised opinion was based on my purposeful adoption of some of the strategies Silver suggests for maintaining one's objectivity.)
Silver’s book reviews the accuracy of forecasting in a wide array of fields, from politics and economics to sports, weather, and terrorist attacks. Economic forecasting is still so bad, he says, that when it comes to recessions, “a majority of economists did not think we were in one when the three most recent recessions, in 1990, 2001, and 2007, were later determined to have begun.” And earthquakes are so difficult to predict that we are nowhere near a meaningful tool for doing so.
By contrast, the accuracy of weather forecasting has improved radically over just the last couple of decades. Today’s forecasters can predict a hurricane’s landfall to within about 100 miles some 72 hours in advance, whereas as recently as 1985 that kind of accuracy wasn’t possible until 24 hours beforehand. Nevertheless, people’s biases are so strong that they will often ignore very good, quantitatively accurate forecasts. A full five days before the Katrina disaster, the National Hurricane Center projected a direct hit on New Orleans, and 48 hours before the storm’s arrival it warned that a “nightmare scenario” might well arise if the levees were breached. Even so, New Orleans’s political leaders were reluctant to act, delaying the call for evacuation until the very last minute. The result was that 80,000 people (20% of the city’s population) didn’t get out, and 2% of them – some 1,600 people – paid with their lives.
One of the most important tools for improving prediction is feedback. When meteorologists make daily predictions, they get daily feedback, and the result – aided by computer modeling – has been dramatic improvement. Business leaders, however, rarely get such immediate feedback, so inaccurate forecasting in business is rarely corrected. Biases intrude, subjectivity reigns, and no one goes back later to check what was correctly foreseen and what was not.
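Anyone can manufacture that kind of feedback loop, though: record your probability forecasts and score them against actual outcomes. A minimal sketch in Python, using the Brier score – the standard calibration measure for weather forecasters – applied here to hypothetical business forecasts of my own invention:

    def brier_score(forecasts, outcomes):
        # Mean squared error between predicted probabilities and what
        # actually happened (1 = event occurred, 0 = it didn't).
        # 0.0 is perfect; a constant 50% guess scores 0.25.
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # Hypothetical track record: probability assigned to "this deal closes
    # this quarter," paired with whether each deal actually closed.
    forecasts = [0.9, 0.7, 0.8, 0.3, 0.6]
    outcomes = [1, 1, 0, 0, 1]
    print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # 0.198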
And while meteorologists’ predictive skills have greatly improved, the same cannot be said of climatologists’ efforts to predict global warming, because meaningful “feedback” about climate change won’t be available for decades or more. Nevertheless, Silver spends several pages evaluating the statistical predictions of the IPCC (the Intergovernmental Panel on Climate Change), and his conclusion is that, while there can be little doubt that the atmosphere is likely to warm gradually with increasing levels of CO2, the IPCC’s own forecasts tend to be more alarmist than necessary and, relative to other forecasts, “might deserve a low but not failing grade.”
The massive quantities of data now available, coupled with the computer processing power to sift through them and subject them to microscopic analysis, can easily give us a false sense of confidence. As op-ed columnist David Brooks said recently, it is as if there were a new “religion” of “data-ism,” leading some to think that “data is a transparent and reliable lens that allows us to filter out emotionalism and ideology; that data will help us do remarkable things — like foretell the future.” But data without common sense and intuitive human judgment can be dangerously misleading. Just ask the ratings agencies.
According to Silver, “our predictions may be more prone to failure in the era of Big Data. As there is an exponential increase in the amount of available information, there is likewise an exponential increase in the number of hypotheses to investigate. For instance, the U.S. government now publishes data on about 45,000 economic statistics. If you want to test for relationships between all combinations of two pairs of these statistics—is there a causal relationship between the bank prime loan rate and the unemployment rate in Alabama?—that gives you literally one billion hypotheses to test.”
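Silver’s arithmetic is easy to verify: with about 45,000 series there are roughly a billion distinct pairs, and even if none of the relationships were real, conventional significance testing would still flag millions of them by chance. A quick check in Python (my calculation, not the book’s):

    from math import comb

    n_series = 45_000              # economic statistics published by the U.S. government
    n_pairs = comb(n_series, 2)    # distinct two-variable hypotheses
    print(f"Pairwise hypotheses: {n_pairs:,}")  # 1,012,477,500 -- "literally one billion"

    # If every relationship were pure noise, a p < 0.05 test would still
    # "find" about 5% of the pairs by chance alone.
    print(f"Expected false positives at p < 0.05: {round(n_pairs * 0.05):,}")  # ~50 million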
And with a billion hypotheses to work with, it isn’t at all difficult to find a few million spurious correlations. In fact, I just wrote about one such spurious correlation earlier this week, when I discussed the Super Bowl Stock Market Indicator (and if you want to see a few human biases up close and personal, just read a few of the irate comments by football fans!).

Original source: http://www.linkedin.com/today/post/article/20130208132421-17102372-how-to-predict-the-future-and-how-not-to
