Many fund managers will be aware of Philip Tetlock’s book “Expert Political Judgment”, published in 2005. In it, Tetlock analyzes forecasts collected from 284 experts over twenty years. While he focuses primarily on the ability of political experts to predict future events, his research has clear applicability to making investment decisions and has been widely covered in the financial press. Using Isaiah Berlin’s typology, Tetlock argues that the fox, who knows many little things, is often a better forecaster than the hedgehog, who knows one big thing. More damningly, he finds that the average expert performs only slightly better than random guessing. A more detailed review of the book appeared in the New Yorker.
Some of these results are likely influenced by the incentives that forecasters face. A hedgehog ideologue who holds a strong, non-consensus view is often more useful to a fund manager than a fox who has, perhaps correctly, adjusted his forecast towards the consensus and so adds little new information. The fund manager himself, however, must aggregate viewpoints from various analysts along with other information to arrive at his own forecast, and may be well-advised to act more like the fox: skeptical of grand theories and willing to combine diverse ideas and sources of information.
In 2011, the US Intelligence Advanced Research Projects Activity (IARPA) launched the Aggregative Contingent Estimation (ACE) Program. The objectives of this program were to develop innovations in the following areas:
(a) efficient elicitation of probabilistic judgments, including conditional probabilities for contingent events;
(b) mathematical aggregation of judgments by many individuals, based on factors that may include past performance, expertise, cognitive style, metaknowledge, and other attributes predictive of accuracy; and
(c) effective representation of aggregated probabilistic forecasts and their distributions.
To me, these sound precisely like the issues that investment organizations grapple with every day, although perhaps in a less formalized manner.
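Objective (b), the mathematical aggregation of many individual judgments, is the easiest of the three to make concrete. Below is a minimal sketch in Python of two simple pooling schemes, an equal-weight average and a performance-weighted average; the weights and numbers are purely illustrative assumptions, not the method any ACE team actually used.

```python
def aggregate_mean(probs):
    """Equal-weight average of individual probability judgments."""
    return sum(probs) / len(probs)

def aggregate_weighted(probs, weights):
    """Weighted average of judgments; weighting forecasters by past
    accuracy is one hypothetical scheme, not the ACE teams' method."""
    return sum(p * w for p, w in zip(probs, weights)) / sum(weights)

# Five forecasters' probability judgments for the same event:
probs = [0.60, 0.70, 0.55, 0.80, 0.65]
print(aggregate_mean(probs))                       # ~0.66
# Hypothetical weights reflecting past performance:
print(aggregate_weighted(probs, [2, 1, 1, 3, 1]))  # ~0.6875
```

Weighting by past accuracy is one way an aggregator might privilege its proven forecasters, which foreshadows the “tracking” intervention discussed below.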
As one of five teams participating in the ACE program, a group led by Tetlock, Barbara Mellers and Don Moore launched an effort to survey more than 1,500 people online on a regular basis. Last month, the group published a paper containing results based on the first two years of data. An excerpt from the abstract summarizes their findings as follows:
Our group tested and found support for three psychological drivers of accuracy: training, teaming, and tracking. Probability training corrected cognitive biases, encouraged forecasters to use reference classes, and provided forecasters with heuristics, such as averaging when multiple estimates were available. Teaming allowed forecasters to share information and discuss the rationales behind their beliefs. Tracking placed the highest performers (top 2% from Year 1) in elite teams that worked together.
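Both “tracking” and the tournament’s performance goals presuppose some way to score probabilistic forecasts against realized outcomes. Forecasting tournaments of this kind are commonly scored with the Brier score, the mean squared error between forecast probabilities and outcomes; the sketch below assumes that rule for illustration and is not taken from the paper itself.

```python
def brier_score(forecast, outcome):
    """Squared error between a forecast probability and the realized
    outcome (1 if the event occurred, 0 if not); lower is better."""
    return (forecast - outcome) ** 2

def mean_brier(forecasts, outcomes):
    """Average Brier score across a forecaster's resolved questions,
    one plausible basis for ranking forecasters and spotting a top 2%."""
    return sum(brier_score(f, o) for f, o in zip(forecasts, outcomes)) / len(forecasts)

# One forecaster's probabilities and the eventual outcomes of four questions:
forecasts = [0.80, 0.30, 0.90, 0.60]
outcomes = [1, 0, 1, 0]
print(mean_brier(forecasts, outcomes))  # (0.04 + 0.09 + 0.01 + 0.36) / 4 ≈ 0.125
```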
Individuals who were better forecasters also scored higher on measures of fluid and crystallized intelligence and of open-mindedness. The latest results thus seem to provide further support for the idea that fox-like versus hedgehog-like dispositions influence forecasting skill; these results will be discussed in a forthcoming paper.
Interestingly, they also found that the top 2% of forecasters, whom they called “super-forecasters”, actually got better between Years 1 and 2. Furthermore, when these super-forecasters were grouped into teams, their performance vastly exceeded IARPA’s performance goals. The study concludes that “forecasting is often viewed as a statistical problem, but forecasts can be improved with behavioral interventions.” As the best-performing of the five teams, the Mellers-Moore-Tetlock group continues to receive funding from IARPA, and we should see further results published from their ongoing experiment.