Review: Superforecasting

The 2015 book Superforecasting: The Art and Science of Prediction, by Philip E. Tetlock and Dan Gardner, is about prediction: specifically, what makes some people unusually good at it.

Like a lot of popular nonfiction, much of its content is now “in the air”, especially if (like me) you’re a consumer of podcasts and blogs about business, psychology, politics, science, and tech. As is often the case, I found returning to the single long-form source more enlightening than piecing it together from comments in a couple dozen secondary sources.

I found the material to be roughly divisible into micro and macro analysis. The micro portion is an exploration of how people who are excellent at making testable predictions do it: tools they use, mindsets they develop, skill sets they leverage. This is the part of greatest interest to me, and I gained a lot.

I’d already read Thinking, Fast and Slow – perhaps the canonical popular book on the psychological systems and failings that govern human thought. The authors refer frequently to its author, Daniel Kahneman, and some of their examples will be familiar from it. Even so, I still gained actionable new insights from this book:

  • break down intractable problems into tractable sub-problems, a lesson that disciplines from computer science to military strategy have also recognized (compare “divide and conquer”)
  • be more precise – superforecasters more often state probabilities to single-digit precision rather than rounding to the nearest ten or even five percent – e.g. 22% is better than 20% or even 25%.
  • to avoid both over-correction and under-correction, make your updates frequent but small – unless new information gives you good reason for a bigger revision
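One way to see why that extra digit of precision can matter: Tetlock’s forecasting tournaments score participants with Brier scores, the mean squared error between probability forecasts and what actually happened. Here’s a minimal sketch (the events and numbers are invented for illustration, not taken from the book) in which a forecaster whose single-digit probabilities track reality slightly better earns a visibly better score:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and binary outcomes.

    0.0 is a perfect score; always guessing 50% earns 0.25.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Five hypothetical events (1 = happened, 0 = didn't) and two forecasters:
outcomes = [1, 0, 0, 1, 0]
coarse = [0.60, 0.40, 0.40, 0.60, 0.40]  # rounds everything to the nearest 10%
fine   = [0.65, 0.35, 0.30, 0.72, 0.28]  # commits to single-digit precision

print(round(brier_score(coarse, outcomes), 4))  # 0.16
print(round(brier_score(fine, outcomes), 4))    # 0.0984 - lower is better
```

Of course, the extra digits only help when they carry real information; the fine-grained forecasts above were constructed to be slightly better calibrated than the coarse ones.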

I anticipate using these strategies to guide me through a number of big projects in the next year or two. [If you read carefully, that sentence is a meta-prediction.]

The macro-level parts are about using predictions to inform decisions, particularly in organizations tackling challenging problems. Here, the point I found most interesting is that predictions are as much about priorities as they are about answers to specific questions. The authors describe America’s old two-war readiness plan as an example:

For decades, the United States had a policy of maintaining the capacity to fight two wars simultaneously. But why not three? Or four? Why not prepare for an alien invasion while we are at it? The answers hinge on probabilities. The two-war doctrine was based on a judgment that the likelihood of the military having to fight two wars simultaneously was high enough to justify the huge expense—but the same was not true of a three-war, four-war, or alien-invasion future.

Predictions, then, are as much about judgment and ordering as they are about answering any one question.

Many anecdotes in this book, and even many of its insights, are now part of how we talk about these topics. The book is valuable primarily for binding them together memorably, in one place, around a common theme.

Some thoughts that didn’t naturally fit the rest of the review:

  • I would read a book that was just the opposite of this: Anti-Superforecasting, going into greater detail about the mistakes of people who fail spectacularly at it – a sort of “cognitive bias, but at scale” book. Superforecasting has plenty of examples, some of which are unpleasantly relevant (“Not that being wrong hurt [Larry] Kudlow’s career”), but I’d love a comprehensive treatment.
  • At a meta-level, one interesting thing this book does is something that seems rare in a media world that lives perpetually in the present: remembering things that happened in the past. It remembers that everyone thought George H.W. Bush was going to win reelection, and that 9/11 followed smaller-scale or foiled terrorist plane hijackings and thus (in the authors’ words) “fail[ed] the unimaginability test”. In this respect, this isn’t just a book for geeky people like me who want to learn more about making decisions or helping our teams get better; it’s for all of us as citizens, so that we can gain a clearer understanding of the world and how we can govern ourselves responsibly.

Review: In the Beginning

I read Neal Stephenson’s 1999 essay bundle In the Beginning… Was the Command Line this weekend. It’s flawed but good. It’s about operating systems and their relationship with users, developers, business, and society – ambitious for such a short book.

First, its flaw: any time it veers into talk of “culture” it gets weaker – the chapter “The Interface Culture”, for example, is a trainwreck. Its analysis of society and history is – and, to be fair, half-admits to being – condescending and painted with too broad a brush. (I haven’t read enough of Stephenson’s more recent work to know how his perspective has evolved since.)

The second half of the book, mostly about the world of Linux operating systems, is much better than the first. In fact, despite its age, it contains chapters I would consider required reading for new tech geeks. When I was learning Linux a few years ago, it would have solved a lot of mysteries at once.

The book does suffer from its own success: much of its message can now be picked up from snippets of blog posts, tweets, and podcasts across the geek Internet. With several years of basic coding and systems-administration experience, I didn’t learn much that was new to me. Still, it’s handy to have it all in one place – and being so influential that it has entered tech’s very bloodstream is hardly a criticism.

And, definitely in spite of myself, I was really delighted by the last chapter.

ITBWTCL is uneven and its cultural commentary doesn’t hold up well, but on net it’s good. I appreciate this book for what it is. New geeks should read the second half for sure.