Superforecasting: The Art and Science of Prediction

 


New year, new predictions. Or at least that’s what many economists are doing during these weeks. But a couple of months ago the book Superforecasting: The Art & Science of Prediction by Philip Tetlock and Dan Gardner was published, and I wanted to start the year with the topic of predictions. So economic forecasters, beware.

One reason to read the book is that Kahneman says to do so on the book cover (well, that’s reason enough for me). If that is not enough for you, then you may be thinking that the whole topic of forecasting the future is a bit fanciful and does not deserve any serious attention – even more so in economics. After all, one could argue that we have overwhelming daily evidence of so-called experts making their predictions and failing spectacularly most of the time. If they are experts and cannot get it right, who could? It seems, then, that since the time of Cassandra, the character from Homer’s Iliad, forecasting has been an elusive activity for mankind.

Cassandra and the fall of Troy (Penn Libraries call number: Inc B-720)

On a more serious note, the book is valuable because it goes through the results of the biggest social experiment on forecasting ever made, sponsored by the US agency IARPA – an institution that obviously has real reasons to improve its forecasting skills. On the other hand, Philip Tetlock, one of the authors of the book, was already a name in the field of “expert judgement” well before the IARPA tournament was set up. His earlier research (spanning from 1984 to 2004) showed that, for horizons beyond a year, the average opinion of a group of experts was no better than that of a non-expert group guessing randomly – and the experts’ performance got worse as the time span grew longer – or, in more poetical terms, experts forecasting beyond a year did no better than a group of chimpanzees throwing darts.

The IARPA experiment ran for four years (starting in 2011) and five groups participated. IARPA sent the same questions to all groups, questions with a time horizon between one month and one year. The total number of questions over the four years amounted to 500, yielding around 1,000,000 forecasts – forecasters were allowed to revise their predictions as often as they wished. Spoiler: Tetlock’s team (the Good Judgement Project) won the competition, and by wide margins: their performance was consistently superior not only to the rest of the teams, but also to intelligence analysts with access to confidential information. The best-performing 2% of the GJP team were dubbed “superforecasters“. Furthermore, in the final years of the experiment, superforecasters were grouped into teams, and their performance was even better (which has important implications for the theory of teams).

Such results are not the product of luck: the sample was so big that it would have been very unlikely to sustain that level of performance over so many forecasts by chance. The takeaway is that some sort of superior short-term forecasting can be achieved, or at least it is displayed by a certain group of people.

Therefore, the big question is: is there a way to improve our forecasting skills? If my reading of the book is correct, these are some of the main points:

  • Choose the right questions. Don’t tackle either impossible-to-answer questions (e.g. who will be the president of the US in 2024?) or trivial questions whose answers won’t bring you any benefit for your hard work.
  • Use granular probabilities, not “black or white”. Most superforecasters deliver their predictions as two-digit probabilities (e.g. 57%, 68%, 33%), avoiding the “Homer Simpson style” (umm, 0%, 50% or 100%). Granted, at first it may seem ridiculous to assign such precise probabilities to uncertain events (anyone mentioned Keynesian uncertainty?), but the purpose of the exercise is to introduce rigour into our analysis and to force us to think carefully about the answer. It is also a way to separate our inside view (i.e. beliefs) from the outside view (i.e. a statistical base rate).
  • Keep score. Keep track of your analysis and of the reasons that led you to take certain decisions. Feedback is a fundamental component of the learning process, and it is hard to get any feedback if you have not kept proper records of your actions (besides, our recollections suffer from hindsight bias). The proper use of granular probabilities helps keep that record honest, because it lets you score each forecast against what actually happened (see the first sketch after this list).
  • Fermi-izing. Break a big question down into many smaller sub-questions, so you can see where your ignorance lies and what you really know. The method was popularised by the physicist Enrico Fermi, who liked to go around teasing people with questions that are hard to answer quickly (e.g. how many piano tuners are there in Chicago?) but that can be broken down into smaller ones (how many people live in Chicago? how many pianos are there per person on average? etc.). As an aside, this kind of back-of-the-envelope calculation seems to work quite well in several domains, so superforecasters gain a valuable tool if they master the Fermi method (a worked example follows this list).
  • Update continuously. This is not an “all or nothing” game. Change your predictions and update them as new information comes to light. Some superforecasters updated their predictions several times a week (a small updating sketch closes the examples below).
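
To make “keep score” concrete, here is a minimal sketch of how granular probabilities can be scored once questions resolve. It uses the Brier score, the kind of scoring rule the tournaments described in the book rely on; the log entries below are invented for illustration.

```python
# A minimal forecasting log with Brier scoring; the entries are invented for illustration.

def brier_score(forecasts):
    """Average of (probability - outcome)^2 over resolved binary forecasts.
    0.0 is perfect, 0.25 is what a permanent 50% answer earns, 1.0 is the worst."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# (probability assigned to "yes", what actually happened: 1 = yes, 0 = no)
log = [
    (0.68, 1),  # judged fairly likely, and it happened
    (0.33, 0),  # judged unlikely, and it did not happen
    (0.57, 0),  # leaned towards "yes", but it did not happen
]

print(f"Brier score: {brier_score(log):.3f}")  # lower is better; here about 0.179
```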
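
The Fermi question about piano tuners can likewise be turned into a few lines of arithmetic. Every input below is a rough guess of mine, not a figure from the book; the point of the exercise is that the guesses are out in the open, where they can be challenged and refined.

```python
# Fermi-izing the classic question: how many piano tuners are there in Chicago?
# All inputs are rough guesses; replace any of them as you learn more.

population = 2_700_000                   # people living in Chicago (rough figure)
people_per_household = 2.5
households_with_piano = 1 / 20           # guess: one household in twenty owns a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_year = 2 * 5 * 50  # 2 tunings a day, 5 days a week, 50 weeks a year

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year

print(f"Estimated pianos in Chicago: {pianos:,.0f}")
print(f"Estimated piano tuners: {tuners:,.0f}")  # on the order of a hundred
```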
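
And updating itself can be done with numbers rather than gut feel. The sketch below is a plain application of Bayes’ rule, in the spirit of the belief revision the book advocates; the prior and the likelihoods are invented for illustration.

```python
# Revising a probability estimate with Bayes' rule as new evidence arrives.
# The prior and the likelihoods are invented numbers.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) from the prior and the two likelihoods."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

p = 0.30  # initial estimate that the event will happen

# A piece of news twice as likely to appear if the event is coming than if it is not.
p = bayes_update(p, p_evidence_if_true=0.60, p_evidence_if_false=0.30)
print(f"After the first update: {p:.2f}")   # about 0.46

# A second, weaker signal pointing the same way.
p = bayes_update(p, p_evidence_if_true=0.50, p_evidence_if_false=0.40)
print(f"After the second update: {p:.2f}")  # about 0.52
```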

… And try it

And finally, try to forecast events rigorously on a regular basis (“rigorously” means sticking to the previous points). The authors make clear throughout the book that practice is an essential ingredient if you want to become a seasoned superforecaster. To forecast in an appropriate environment, you can now sign up on the Good Judgement page, which runs several forecasting tournaments in different fields (mostly economics and geopolitics). I have signed up and I am currently participating in the tournament “The Economist’s World in 2016” (sponsored by The Economist), and it is fun. First, reading the questions while comfortably seated on your sofa with the book is one thing; forecasting questions that are unfolding right now is quite another. And second, it is instructive to see the way superforecasters conduct their analysis. Whether I will improve my forecasting abilities in the future I don’t know, but we already knew that forecasting these sorts of questions is tough.
