Superforecasting: The Art and Science of Prediction — Summary

Jak Nguyen
12 min read · Jul 24, 2020

Superforecasting: The Art and Science of Prediction (2015) is a 340-page book by Philip E. Tetlock and Dan Gardner.

Context

Forecasting is the process of making predictions of the future using data analysis.

The Intelligence Advanced Research Projects Activity (IARPA), a US intelligence agency, is very interested in forecasting world events. So from 2010–2015 it ran a tournament called Aggregative Contingent Estimation (ACE), where forecasters competed by making predictions.

This book details the research of the tournament's most notable participant: The Good Judgment Project.

Their focus and findings?

Superforecasters.

Introduction

Superforecasters are people who make consistently and significantly better predictions than the rest of the population.

Even when compared to expert intelligence analysts in their own fields.

So why read this book? This book discusses the process of making accurate forecasts for your own purposes. It lets you know that forecasting is NOT an inherent talent, but a skill ready for you to learn.

Superforecasters possess the following characteristics:

  • Temperament: Cautious, Humble, Non-Determinist.
  • Cognition: Open-Minded, Intelligent, Curious, Reflective, Numerate.
  • Analytical: Pragmatic, Dragonfly-Eyed, Probabilistic, Thoughtful Updaters, Intuitive Psychologist.
  • Work Ethic: Growth Mindset, Resilience.

Reading Style & Tone

The book is thoughtful and accessible. It’s also SLOW. It’s boring and directionless for the first 100 pages. After that it gets more interesting.

This book isn't a how-to guide to Superforecasting. It's full of useful but mostly basic information. If you would like to learn more, check out Further Reading below for more details.

Book Summary by Chapter

  1. Focus your time and effort on forecasts that are rewarding.
  2. Unpack problems: expose assumptions, catch mistakes, and correct biases.
  3. Consider the broader category before inspecting the case details.
  4. Revise your beliefs often, in small increments, to reduce the risks of under- and over-reacting to new information.
  5. Find merit in opposing perspectives.
  6. Reject the fantasy of certainty and learn to think in terms of uncertainty.
  7. Avoid boasting or waffling. Aim to be humble and prudently decisive.
  8. Learn from experience in success and failure.
  9. Use precise questioning to bring out the best in others. And let others bring out the best in you.
  10. Try, fail, analyse, adjust. Try again.
  11. Question everything. Including this list.

Chapter 1 — Focus and Triage

Ask good questions.

Focus on questions where your work will pay off. Don’t waste time on easy questions that can be solved by rules of thumb, or hard questions that are impossible.

Some people are good at predictions. The top 2% of people are REALLY good. These people are Superforecasters. Forecasting is not an innate talent — it’s a skill. You LEARN forecasting.

Lots of professionals make predictions, but their accuracy is never assessed. People such as:

  • Economists.
  • Meteorologists.
  • Policy and Law Makers.
  • Intelligence analysts.
  • Fund Managers.
  • Business analysts.
  • Scientists.
  • Statesmen.
  • Defence Force.
  • Management Consultants.
  • Technologists.

Research shows most experts are about as accurate at making predictions as a chimpanzee throwing darts at a target. In other words: chance. The most important factor you should understand about predictions is time.

The further into the future the outcome, the less accurate predictions become. Predictions 5 years and onwards into the future may as well be chance.

Example

The Arab Spring of 2010–2012 — with uprisings and revolutions across the Arab world — was sparked by the self-immolation protest of a single Tunisian street vendor.

No intelligence organisation could have predicted this, let alone any individual. It is unwise to think you can predict events to this extent.

People used to think that once we understood the rules of reality, we could predict anything and everything. Enter Chaos Theory: the notion that a butterfly flapping its wings could set off a tornado.

The world is less predictable than you think. The world is also more predictable than you think. Both statements are correct.

You must discern which circumstances warrant your approach.

Superforecasters are ordinary people who make good predictions. They use specific techniques to do so, while thinking about and perceiving the world differently from the rest of the population.

Here is how they do it.

Chapter 2 — Simplify Problems

Modern medicine and science are VERY recent. So recent that there are people still alive who lived in a time when this wasn't the case.

Example

The world didn’t have randomised medical trials until 1946, after World War II. That was only 3 generations ago.

Instead of science, decisions were made on tradition and authority. Experts trusted their own abilities and judgement without validation.

That's because our psychology relies on 2 different systems to process information:

  • System 1 is our autopilot — fast, instinctive and emotional.
  • System 2 is our consciousness — slow, deliberate and rational.

The numbering is intentional. System 1 ALWAYS precedes System 2. System 1 is always quietly running in the background, and is what we base most of our daily decisions on. System 2 is more methodical but much slower. And although taking your time to think may yield more accurate results, it isn’t always practical. We don’t have the time.

The best strategy is to use both systems together.

When people don't understand something, they usually make up an explanation. Scientists are different — they always consider that their explanation may be wrong. This runs counter to human nature. Confirmation Bias: we don't like evidence that contradicts our beliefs.

Another important factor to consider is intuition and pattern recognition — System 1. This helps us detect problems quickly, without requiring thinking. While useful, pattern recognition has problems. Like when we see a pattern that isn't really there.

This is such a common effect in science that researchers call it a Type I error: a false positive.

Chapter 3 — Clarity in Inside and Outside Perspectives

To evaluate forecasts for accuracy, we have to understand what the forecast says. This is harder than you may think.

In common forecasts, many details and conditions are left unclear or unstated.

Example

Lots of political forecasts regarding Israeli and Palestinian relations don’t provide a timeframe. Without a timeframe, these forecasts are useless.

There’s also wording and context to consider. Saying something is LIKELY to happen is very different to saying something WILL happen.

For most people, probability is the most difficult concept to grasp.

Example

A meteorologist tells you there's a 70% chance of rain. If it doesn't rain, you may think the forecast was wrong. But really a 70% chance of rain means a 30% chance of NO rain. The weather forecast is still technically correct.

To discern the accuracy of this forecast, look at a large number of forecasts together — track the record of the meteorologist.

Now the question becomes:

“Of all the times you said 70% chance of rain, did it rain 70% of the time?”

This is Calibration. By graphing the probabilities a forecaster states against how often those events actually occur, you can decide if the meteorologist is worth listening to.
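
Here is a minimal sketch of what such a calibration check might look like in code. The meteorologist's track record below is entirely hypothetical, just to illustrate the bucketing:

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Bucket forecasts by stated probability, then compare each bucket's
    stated probability with the observed frequency of the event."""
    buckets = defaultdict(list)
    for prob, happened in forecasts:
        buckets[round(prob, 1)].append(happened)  # bucket to nearest 10%
    for prob in sorted(buckets):
        outcomes = buckets[prob]
        observed = sum(outcomes) / len(outcomes)
        print(f"said {prob:.0%} -> happened {observed:.0%} ({len(outcomes)} forecasts)")

# Hypothetical track record: (stated probability of rain, did it rain?)
record = [(0.7, True), (0.7, True), (0.7, False), (0.7, True),
          (0.3, False), (0.3, True), (0.3, False), (0.3, False)]
calibration_table(record)
# A well-calibrated forecaster's 70% calls come true about 70% of the time.
```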

There are 2 ways experts make predictions.

The first group are Big Idea idealists. They gather information into an existing framework. To make an argument, they accumulate reasons for why their argument is correct. They are confident in their abilities, even when they are wrong.

The second group use a variety of tools to gather information. They are concerned with possibilities and probabilities more than certainties, and they admit to errors.

The second group of experts consistently beats the first group at prediction.

Big Idea idealists organise their thinking around the Big Idea, which distorts their perspective. All the information they gather doesn’t matter — it won’t make their predictions more accurate because they’re organising it to fit in with the Big Idea. These people appear confident, which makes other people more likely to believe them.

It’s difficult to see outside of your own perspective. So it’s good to gather information from many sources, and consider many different perspectives. See the world like a dragonfly.

Chapter 4 — Update Your Beliefs

You can either over-react or under-react to information.

Example

The US invaded Iraq in 2003 on claims that the Saddam Hussein regime was hiding Weapons of Mass Destruction. US intelligence analysts were SO SURE those weapons were there.

They were VERY wrong. Very VERY wrong.

The intelligence analysts never tracked the accuracy of their work. To avoid losing even more credibility, IARPA created a study to learn more about predictions and how to measure them.

The study found that a small minority of people were VERY good at forecasting — they were more accurate than professional analysts.

People don’t understand randomness. Anyone could accidentally be right once. That’s just random odds. But there are people who are better at forecasting the future than others.

Superforecasters.

Chapter 5 — Guesstimates

Accurate forecasting is an unintuitive process, and making an estimate with inadequate or incomplete information is a useful skill taught to science and business students alike.

A Fermi estimate is a good way to break a problem down into smaller components and figure out what you can reasonably estimate: everything left is what you don't know, and you try to break those unknowns down into categories as small as possible.

This process, rudimentary as it is, yields more accurate estimates.

As long as the initial assumptions in the estimate are reasonable quantities, the result will land within the same order of magnitude as the correct answer. And if not, it sets a base for understanding why this is the case.
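
As a toy illustration, here is the classic Fermi question — how many piano tuners are there in Chicago? — worked in code. Every input is an assumed round number I've picked for illustration; the value is in the decomposition, not the inputs:

```python
# Fermi estimate: how many piano tuners are there in Chicago?
# Every input below is an assumed round figure, not researched data.
population = 2_500_000                   # people in Chicago (assumed)
people_per_household = 2.5               # assumed
households_with_piano = 1 / 20           # assumed
tunings_per_piano_per_year = 1           # assumed
tunings_per_tuner_per_year = 2 * 5 * 50  # 2 a day, 5 days a week, 50 weeks

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # ~100: likely the right order of magnitude
```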

Chapter 6 — Remove Uncertainty

The wisdom of the crowd: ask a group of people to predict something, and they will give you a variety of answers. Is that bad? No. The average of their estimates is often a good approximation of the truth.
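
A toy simulation of that effect (the numbers here are made up, not from the book): individual guesses are noisy, but their errors partly cancel in the average.

```python
import random

random.seed(42)
true_value = 1200  # e.g. the weight of an ox at a fair (assumed scenario)

# A crowd of 500 noisy, independent guesses scattered around the truth
guesses = [random.gauss(true_value, 150) for _ in range(500)]

crowd_average = sum(guesses) / len(guesses)
print(f"crowd average: {crowd_average:.0f}, true value: {true_value}")
# Individual errors of roughly +/-150 largely cancel out in the average.
```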

Probability is difficult for most people, because our default mode of thinking is simple. We divide the world into: Yes / No / Maybe. Before the modern world, we didn’t need more options than those 3.

Example

In prehistoric times and even through much of written history, there was no need to solve Fermi problems. Your concerns were far simpler and more immediate:

  • I heard something. Is this a threat? Yes=Run! No=Relax. Maybe=Caution.
  • I'm hungry. Can I eat this? Yes=Yum! No=Discard. Maybe=Caution.

We weren’t sure of most things, so we spent much of our lives in a constant state of awareness around Maybe=Caution.

Example

As with the meteorologist earlier: if today has a 70% chance of rain and it doesn't rain, that doesn't mean the forecast was wrong. A 70% chance of rain also means a 30% chance of NO rain.

But people want to know ‘Yes’ rain or ‘No’ rain and the best a weather forecaster can do is say ‘maybe.’

People like certainty, but there is always uncertainty.

There are two kinds of uncertainty:

  1. Being uncertain about things that are knowable
  2. Being uncertain about things that are unknowable.

With uncertainty about unknowable things, be cautious and keep predictions in the 30–60% range. Estimates of 50% are the least accurate because 50% is used to express uncertainty. A 50–50 chance is just another way of saying ‘maybe’.

People look for meaning. Sometimes when something happens, people say it was meant to happen. What are the odds that you would meet your partner at the time and place you did?

Improbable, yes, but you had to be SOMEWHERE at that time. Not comforting to hear, but correct nevertheless.

Chapter 7 — Prudence vs Decisiveness

There’s no simple mapped method for good forecasts. But there are actions that are helpful:

  • Break the question down into smaller components.
  • Identify the knowns and unknowns.
  • Detail your assumptions and biases.
  • Consider the outside view, and frame the problem not as unique but as part of a larger event.
  • Look at how your opinions match or differ from others.
  • Dragonfly eyes — construct a unified vision. Describe your judgement as clearly and concisely as you can.

Once a prediction is made, the work isn’t over. Predictions are updated with new and additional information. These updated forecasts are often more accurate.

It is difficult to update a forecast because:

  • You can under correct or over correct.
  • When confronted with new information, we may stick to our beliefs. Opinions can be more about self-identity than the event itself.
  • Emotional investment makes it hard to admit you’re wrong.
  • Once people publicly take a stance, it’s hard to get them to change their opinion.
  • It's hard to distinguish important from irrelevant information.

The trick is to update a forecast frequently but in most cases make only small adjustments. Sometimes, you do need to make a dramatic change. If you are really off target, incremental change is useless.
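
The book points to Bayes' theorem as the ideal behind this style of frequent, measured updating. A minimal sketch with made-up numbers:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the updated probability after one piece of evidence,
    via Bayes' theorem."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Made-up example: you give an event a 60% chance, then news arrives
# that is twice as likely to appear if the event really is coming.
belief = bayes_update(0.60, p_evidence_if_true=0.4, p_evidence_if_false=0.2)
print(f"{belief:.0%}")  # 75% -- a real but measured shift, not a lurch
```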

There is much to consider.

Chapter 8 — Growth From Failure and Success

Some people think they are who they are, and that they can never change and grow. Because they believe they can't, it becomes true. This is a self-fulfilling prophecy.

Superforecasters have growth mindsets. They learn from their experiences.

To succeed, we try. To improve, we try, fail, analyse, adjust, and try again. We learn by doing. Repeat it. This is true of every skill. Learning to forecast is the same way.

You don’t learn it by reading. You do. Apply yourself. Practice.

Be okay with being wrong. Mistakes are part of the learning process. To learn from failure, know when you fail. Receive feedback. Without feedback, you will continue thinking inaccurately. Feedback should be timely. Otherwise, hindsight bias sets in.

Debriefing is very important. Deconstruct your forecast. What did you get right? What did you get wrong? Why? Just because what you predicted came to pass doesn't mean your process was solid — it could have been coincidence or chance. It is natural to take credit for correct forecasts and minimise the element of chance, but analysis will help you to improve.

Bring grit, tenacity and resilience to the work.

Chapter 9 — Managing Teams

Example

The Bay of Pigs Invasion in 1961 was poorly planned and executed. The Kennedy administration lost credibility. But they were much better with the Cuban Missile Crisis in 1962.

After the Bay of Pigs Invasion failed, Kennedy launched an investigation to figure out what went wrong. It identified the team's decision-making process as the problem: they were casualties of groupthink. Members adjusted their beliefs to conform with the team, at the expense of making good decisions.

The Kennedy team developed a skeptical method, questioning their assumptions. With the Cuban Missile Crisis, their improved method spared the world a nuclear war. This is an important lesson to learn:

It’s possible for a group to change their decision-making process for the better.

There is no need to search for perfect candidates when a motivated team learns to change.

And despite the risks of groupthink, working in a team sharpens judgement and can achieve greater goals than any individual working alone.

Question: should forecasters work in teams, or individually?

  • Disadvantages: Teams can make people lazy (social loafing).
  • Advantages: People share information when they work in teams. They offer more perspectives. With many perspectives, aggregation becomes more powerful.

Teams are clearly more accurate than individuals. Furthermore, when Superforecasters were put together in teams, they out-forecasted the prediction markets.

These findings, although not an automatic recipe for success, highlight the importance of good group dynamics. Teams should also be open minded; they should have a culture of sharing. Finally, diversity is exceptionally important — even more so than ability.

These are Superteams.

Chapter 10 — Superteams

Superteams operate best outside of hierarchical structures. But business and government organisations are hierarchical.

How do these fit together?

Decentralised Command, or Auftragstaktik.

https://www.youtube.com/watch?v=cQCIpieOkak

Chapter 11 — The Black Swan and Time

Traps:

  • Thinking that what you see is all there is.
  • Forgetting to check your assumptions.
  • Not paying enough attention to the scope of a question.

Example

Will the Assad regime fall in Syria this year?

Your answer will reflect whether you think the Assad reign will EVER fall. This is insensitivity to SCOPE: failing to adjust your answer to the extent of the question, in this case its timeframe.

Superforecasters demonstrate better scope sensitivity. They use System 2 to check on System 1 automatically.

Nassim Taleb’s The Black Swan: The Impact of the Highly Improbable (2007)

For centuries, every swan observed in Europe was white. No one could imagine that black swans existed. They weren't supposed to exist, yet they do.

How do you precisely define something previously inconceivable, let alone predict it?

The further into the future, the less accurate the forecast. Accuracy declines with time until, around 5 years out, predictions are no better than chance.

Long-term forecasts aren't feasible, yet organisations make them anyway, so that long-term plans can be drawn up and resources allocated. The best advice is to prepare for surprise.

Plan for resilience and adaptability — consider scenarios in which unlikely things happen, and project your response.

Chapter 12 — Accountability

People like being told what they want to hear, and they dislike hearing what they don't. Often influencers and strong opinions are more powerful than data analysis.

Of course they are.

People will use information to defend their agendas, and accuracy takes a back seat. The status quo takes charge. Good forecasting therefore becomes very important. It is the difference between success and failure, peace and war.

Tracking results is the best way to evaluate past predictions and improve future forecasts. It will also hold people accountable. The most common check is the Brier score — which measures the gap between forecast probabilities and actual outcomes.
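
For binary questions, the Brier score is just the mean squared difference between stated probabilities and what happened. (Tetlock uses Brier's original formulation, which runs from 0 to 2; the simpler binary form below runs from 0 to 1.) A sketch with hypothetical records:

```python
def brier_score(forecasts):
    """Mean squared difference between forecast probabilities and
    outcomes (1 if the event happened, 0 if not). Lower is better:
    0 is perfect, and always saying 50% scores 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Hypothetical records: (stated probability, what actually happened)
sharp  = [(0.9, 1), (0.8, 1), (0.1, 0), (0.2, 0)]
hedger = [(0.5, 1), (0.5, 1), (0.5, 0), (0.5, 0)]
print(f"{brier_score(sharp):.3f}")   # 0.025 -- confident and right
print(f"{brier_score(hedger):.3f}")  # 0.250 -- never wrong, never informative
```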

However, data analysis is only the beginning. We can't score questions that are too vague to ever resolve. The really important questions are the hardest to score. So how do you look at a complex situation?

Break it down into smaller questions. As I’m sure you must have a lot of questions.
