Strategy

Outlook 2011: Predictions – Seer suckers

Psychologist Philip Tetlock discovers the future isn't unwritten — but what's written isn't always accurate.


What will the future hold? It’s a common question this time of year, and one that finds no shortage of soothsayers squawking expert opinion across TV, radio, Internet and, yes, even magazines.

But how do we know which ones are right or wrong? The predictive difference between the Amazing Kreskin and Maggie the Monkey?

In 2006, psychologist Dr. Philip Tetlock, a professor at the University of California, Berkeley’s Haas School of Business, released the book Expert Political Judgment: How Good Is It? How Can We Know? The research project spanned 18 years and surveyed 284 professional experts, including academics, journalists, intelligence analysts and think-tank members who regularly offer their insights to the media, governments and businesses. Tetlock and his researchers looked at 82,000 forecasts on the future, and what they found was that for all their education, experience and airtime, most experts are about as accurate as the rest of us. Which is to say, not very.

Still, Tetlock did find that certain styles of thinking produce more accurate forecasts than others. Borrowing philosopher Isaiah Berlin’s fox-and-hedgehog analogy, in which the fox knows many things while the hedgehog knows one big thing, Tetlock reasons that fox-like thinkers are better forecasters: they tend to be more self-critical, willing to adjust their stance when confronted with contrary evidence, and modest about their abilities. Less successful forecasters resemble hedgehogs, with one big idea they tend to stretch and mould the facts to fit, albeit in a very articulate and persuasive manner. You don’t have to watch TV for long to realize how much the media loves a hedgehog.

Canadian Business staff writer Jeff Beer spoke to Tetlock about how we mere mortals can tell which experts to listen to, why the more media exposure an expert gets, the less accurate their predictions are, and why more experts aren’t held accountable when they’re wrong.

Q: How can people tell if they’re listening to or reading advice from a fox versus a hedgehog?

A: Well, one of the heuristics I occasionally offer is to look out for qualifiers. The more frequently someone’s speech includes “howevers,” “althoughs” and “buts,” the more it indicates fox-like attributes. The more their speech is riddled with “moreover” and “furthermore,” the more it leans toward a hedgehog.

In one case, you’re putting on the mental brakes with the “buts” and “howevers,” as in, “Here’s an argument, and I think it takes you somewhere but, but, but …,” while the other guy says, “Here’s an argument, and moreover …,” taking it further down a certain path. So there are some linguistic indicators.

Q: What if I want to test my own thought process?

A: Certainly one of the frameworks I use when thinking about cognitive style is how we talk to ourselves. And how we listen to ourselves talk to ourselves — the art of self-overhearing. As you listen to yourself talk to yourself, and you hear a lot of “moreovers” and “furthermores,” do you say to yourself, “Hmm, I’m having a good time building up all this intellectual momentum and attracting attention.” Or do you say, “Hmm, am I running off a cliff here?”

Q: Is there a particular discipline — politics, economics, technology, business — that skews more heavily toward a fox- or hedgehog-like approach?

A: I expected to find one, but I didn’t see a particularly strong relationship there. Part of my own stereotype was that I thought perhaps economists might be more hedgehog-like because they do have a more deductive cognitive style, working off mathematical models, axioms and well-defined premises, more so than other social scientists do. But frankly there are a lot of fox-like economists too, so that was a stereotype of mine that didn’t hold up that well.

I think the relationships emerge within disciplines rather than across disciplines. There certainly are hedgehog-like economists — free market fundamentalists, Marxists, and other various schools of thought that can be quite dogmatic at times. And there are certainly hedgehog political scientists, with realists, institutionalists and various groups like that, but that again is within particular disciplines.

Q: You’ve said that the bigger the media profile of the expert, often the less accurate his predictions. Why would an expert who gains fame for accuracy get less accurate once showered with attention?

A: Well, there are different schools of thought on that. One says the world is really pretty radically unpredictable, and the experts of the moment who happen to be right are largely lucky. Therefore, you should expect some regression back toward the mean, just on purely statistical grounds. You don’t need a fancy psychological argument about that.

The other position says there is more of a hubris developing, in that people get pumped up and believe their own press clippings and can get carried away that way.
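
The regression argument in the first of those positions can be illustrated with a toy simulation. The sketch below is purely illustrative and not drawn from Tetlock’s study: it treats every forecast as a coin flip, and the number of experts, the number of questions and the top-5% cutoff are arbitrary assumptions.

```python
import random

# Toy illustration (not Tetlock's data): if forecasting success is largely luck,
# the round-one "stars" drift back toward the average the second time out.
random.seed(0)

N_EXPERTS, N_QUESTIONS = 1000, 50

def accuracy(n_questions: int) -> float:
    """Fraction of yes/no questions a purely guessing forecaster gets right."""
    return sum(random.random() < 0.5 for _ in range(n_questions)) / n_questions

round_one = [accuracy(N_QUESTIONS) for _ in range(N_EXPERTS)]
round_two = [accuracy(N_QUESTIONS) for _ in range(N_EXPERTS)]

# Take the top 5% from round one and see how the same people score in round two.
cutoff = sorted(round_one, reverse=True)[N_EXPERTS // 20]
stars = [i for i, score in enumerate(round_one) if score >= cutoff]

print("stars' round-one average:", sum(round_one[i] for i in stars) / len(stars))
print("same people, round two:  ", sum(round_two[i] for i in stars) / len(stars))
```

Run repeatedly, the round-one leaders’ second-round scores cluster near 50%, which is all the “regression back toward the mean” claim requires.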

Q: Given we’re in an age where experts’ past predictions can easily be found online, are we seeing any evidence of increased accountability?

A: Mostly not, because experts are just so notoriously difficult to pin down. When you actually look at their predictions carefully, you have to ask yourself if the prediction passes the clairvoyance test. That test, which is completely hypothetical, of course, is where you take the prediction and give it to someone you know for a fact has the ability to see into the future. You ask that person whether the prediction is true or not. So that person can look to the end of 2011 or 2012 and tell you if Kim Jong Il is dead or alive, for example. That would be one that passes the test. But if an expert says there will be increased political instability in North Korea, that’s a more difficult thing to quantify and easy to wriggle out of.

Generally, expert predictions on op-ed pages, on television and in the blogosphere have a certain open-ended quality to them. Occasionally, someone is rash enough to say something that can be falsified — like former director of the CIA George Tenet telling George Bush it was a slam dunk that there were weapons of mass destruction in Iraq. So it’s easy to falsify an expert’s prediction if they say something has either a 100% or 0% chance of happening. But if they say it has an 80% chance, that offers them an out.

Q: Nate Silver gained a certain level of fame during the 2008 election via his FiveThirtyEight blog for his impressive accuracy in predicting the primaries, often contradicting what many TV talking heads had forecast. Does this type of number crunching help make an expert more accurate than those who rely more on experience and “gut” predictions?

A: I met Nate Silver once when we spoke in connection with a book he’s doing, but I don’t know his work all that well. I do know he came from evaluating baseball statistics, and I am very sympathetic to the argument Michael Lewis made in Moneyball that there are baseball managers who outperformed others by using statistical evidence more systematically and relying less on hunches and intuition. Which, of course, runs quite contrary to Malcolm Gladwell’s argument in Blink. People don’t often notice the conflicts between these popular books, but in fact there is a deep one between those who believe in Blink and those who believe in “think.”

Q: Reading your findings is enough to put anyone off expert forecasting. Is there any point listening to them, fox or hedgehog? How can it get better?

A: It’s very hard to imagine the hedgehog style of thinking surviving in professions that get very quick, unambiguous feedback about whether or not they screwed up. Imagine you’re a hedgehog car mechanic and you’re oblivious to certain causes of engine or brake failures. You’re going to go out of business fairly soon. Or if you’re a hedgehog dentist and you think the solution to everything is a root canal. I think the profession of meteorology is one that’s changed over the decades, becoming very good at making clean, clear predictions, and it’s often discussed as one of the best-calibrated professional groups. Meteorologists have gotten into the habit of making quantitative, testable predictions about relatively near-term events, so they get quick, clear feedback on those predictions. That’s what’s essential in learning how to become well-calibrated.
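
The feedback loop described here amounts to a calibration check: score each quantitative forecast against what actually happened, and see whether the events you called at 70% really occurred about 70% of the time. A minimal sketch of such a check follows; the probabilities and outcomes are invented toy data, not records from the article, and the Brier score is one standard way of scoring probability forecasts.

```python
from collections import defaultdict

# Toy calibration check: invented forecasts ("70% chance of rain") scored
# against invented outcomes (1 = it rained). Not data from the article.
forecasts = [0.1, 0.3, 0.3, 0.7, 0.7, 0.7, 0.9, 0.9, 0.1, 0.5]
outcomes  = [0,   0,   1,   1,   1,   0,   1,   1,   0,   1]

# Brier score: mean squared gap between stated probability and outcome (lower is better).
brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Calibration table: of all the times you said "70%", how often did it happen?
buckets = defaultdict(list)
for p, o in zip(forecasts, outcomes):
    buckets[p].append(o)
for p in sorted(buckets):
    hits = buckets[p]
    print(f"said {p:.0%}: happened {sum(hits) / len(hits):.0%} of the time ({len(hits)} forecasts)")
```

Quick, unambiguous scoring of this kind is exactly what the interview credits for meteorologists’ calibration, and what open-ended punditry rarely provides.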

What are the environments where hedgehogs are likely to survive and thrive? Those where there is very little pressure to make crisp, clean, falsifiable predictions, and where, when those predictions do start to get into trouble, you’re surrounded by co-believers who help you make up cover stories. In other words, politics.

Q: What are you working on now, and when can we expect to see it?

A: Oh, I’d say perhaps around 2015 [laughs]. I work on a different time frame. I’m interested in interventions that can induce people to think in more constructively self-critical ways. I think prediction markets are one way of doing that. There are probably ways of organizing group discussions so they don’t degenerate into groupthink and instead encourage more constructively self-critical thought, as opposed to self-justifying thought. I think that’s the style of thinking that helps most. It won’t make your predictions highly accurate, but it certainly helps make them less inaccurate.