Thursday, August 30, 2007

"Expert" Opinion

Remember the book review essay we did way back on Philip Tetlock’s Expert Political Judgment, which called the whole concept of expert judgment into question? Here’s more recent research that reaches basically the same conclusion.

A new study in a publication of the Institute for Operations Research and the Management Sciences found that, when it comes to predicting the outcomes of actual conflicts, the forecasts of experts who use their unaided judgment are little better than those of novices.

When presented with actual crises, such as a disguised version of a 1970s border dispute between Iraq and Syria and an unfolding dispute between football players and management, experts were able to forecast the decisions the parties made in only 32% of the cases, little better than the 29% scored by undergraduate students. Chance guesses at the outcomes would be right 28% of the time.
………
“Accurate prediction is difficult because conflicts tend to be too complex for people to think through in ways that realistically represent their actual progress,” the authors write. “Parties in conflict often act and react many times, and change because of their interactions.”
………
Analysis of additional data produced similar results. In one instance, the authors tried to determine whether veteran experts would make more accurate forecasts than less experienced ones. “Common sense expectations did not prove to be correct,” they write. “The 57 forecasts of experts with less than five years experience were more accurate (36%) than the 48 forecasts of experts with more experience (29%).”

The authors also asked experts about their previous experience with similar conflicts and looked at the relationship with the accuracy of their forecasts. Again, the expected conclusion did not prevail: those who considered themselves to have little experience with similar conflicts produced forecasts just as accurate as those of long-time veterans in the field.

The authors also examined the experts’ confidence in their forecasts by asking how likely it was that they would have changed their forecasts had they spent more time on the task. Another surprise: the 68 high-confidence forecasts were less accurate (28%) than the 35 low-confidence forecasts (41%).
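
The press release doesn’t say whether gaps like these clear the bar of statistical significance given the modest sample sizes. As a rough back-of-the-envelope check (my own, not the authors’), here’s a standard two-proportion z-test in Python, reconstructing approximate hit counts from the reported percentages:

    import math

    def two_prop_z(k1, n1, k2, n2):
        """Two-proportion z-test: is the gap between two hit rates
        larger than sampling noise alone would tend to produce?"""
        p1, p2 = k1 / n1, k2 / n2
        pooled = (k1 + k2) / (n1 + n2)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # Less-experienced vs. more-experienced experts:
    # 36% of 57 forecasts (~21 hits) vs. 29% of 48 (~14 hits)
    print(round(two_prop_z(21, 57, 14, 48), 2))   # ~0.83

    # Low-confidence vs. high-confidence forecasts:
    # 41% of 35 forecasts (~14 hits) vs. 28% of 68 (~19 hits)
    print(round(two_prop_z(14, 35, 19, 68), 2))   # ~1.24

Both z values come in well under the conventional 1.96 cutoff, so with samples this small those particular percentage gaps are suggestive rather than proven.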

Based on this study and earlier research, the authors conclude that there are no good grounds for decision makers to rely on experts’ unaided judgments for forecasting decisions in conflicts. Such reliance discourages experts and decision makers from investigating alternative approaches.

Instead, they recommend that experts use reliable decision-support tools. They cite two examples of decision aids that can improve forecasts. In an earlier study, Green reported that simulated interaction, a type of role playing for forecasting behavior in conflicts, reduced error by 47%.

The authors also found favorable results with another technique, structured analogies, in which experts are asked to recall and analyze information about similar past situations. When experts were able to think of at least two analogies, forecast error was reduced by 39%. Notably, this structured technique does put expertise to work: participants with more expertise contributed much more to making accurate forecasts.

The latest authors apparently didn’t try to replicate Tetlock’s distinction between “fox” and “hedgehog” experts, who differed in their openness to new info and in awareness of their own liabilities as experts. That research showed that the “foxes,” who were more open and more aware of their limitations, did better than the hedgehogs in most cases. Still, all of this should alert us that we need multiple sources of info and triangulated “proofs” before we launch our normal human hubris into the unknown. Not that that’s a big problem anywhere right now.

[Coincidentally, here’s another article, comparing the ordinary Joe’s ability to discern good movies from bad with that of “expert” critics, which shows the gap isn’t all that great if you look closely. Here’s the key conclusion:

"When using sequential and independent measures and when controlling for marketing-related aspects of a film's commercial impact -- our findings support the conclusion that ordinary consumers show "good taste" to a degree not hitherto recognized," the authors write. With proper controls for the contaminating influences of market success they find that "Films of the sort that win favorable evaluations of excellence from expert reviewers also tend to win approval from ordinary consumers and that films of the kind that ordinary consumers consider excellent tend to elicit liking and word-of-mouth or click-of-mouse recommendations."]
