A Review of the Book, “The Undoing Project,” by Michael Lewis
Linda Brobeck


(author of “Moneyball,” “The Big Short” and “The Blind Side”)

April 11, 2017
The Undoing Project is the story of two Israeli psychologists whose surprising friendship and unusual collaboration enlightened the world on a number of topics, including how we make judgments and the resulting decision-making behavior. It was a somewhat disappointing revelation that humans systematically err in intuitive decision-making. As a predictive modeler, however, I found that it confirmed my long-held suspicion that algorithms outperform our intrinsic judgment.

The book follows the life and professional contributions of Daniel Kahneman and Amos Tversky and though, at times, the biographical portions of the book can be somewhat tedious, their life experiences do play into their eventual studies and conclusions. The foreshadowing of discussions and insights on the mathematics within the psychology of human decision-making behavior kept me highly engaged. 

The lengthy list of Kahneman and Tversky’s published papers will resonate with most actuaries and data scientists. Just perusing the titles below gives us the hint that these psychologists approach their research with a strong mathematical focus.

  • “Belief in the Law of Small Numbers.” Psychological Bulletin 76, no. 2 (1971)
  • “Subjective Probability: A Judgment of Representativeness.” Cognitive Psychology 3 (1972)
  • “Availability: A Heuristic for Judging Frequency and Probability.” Cognitive Psychology 5, no. 2 (1973)
  • “On the Psychology of Prediction.” Psychological Review 80, no. 4 (1973)
  • “Judgment under Uncertainty: Heuristics and Biases.” Science 185 (1974)
  • “Prospect Theory: An Analysis of Decision under Risk.” Econometrica 47, no. 2 (1979)
  • “The Framing of Decisions and the Psychology of Choice.” Science 211, no. 4481 (1981)
  • “Extensional versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment.” Psychological Review 90, no. 4 (1983)
  • “Advances in Prospect Theory.” Journal of Risk and Uncertainty 5 (1992)

The Undoing Project describes the tests performed and the conclusions reached in these articles and papers. At the start of the book, I was ready for their ultimate conclusion: We are very bad at analyzing a situation without the aid of a statistical algorithm. In fact, Kahneman and Tversky asserted, “We study natural stupidity instead of artificial intelligence.”

The distortions of human judgment emanate from multiple sources, including our memories and unstated assumptions. Biases occur when inputs are more recent, more vivid or confirm our prior beliefs. Kahneman and Tversky wrote, “No one ever made a decision because of a number. They need a story.” The psychologists showed that people were blind to logic when it was embedded in a story. The test involved asking subjects which was more likely: 1,000 people drowning in a flood the following year, or an earthquake triggering a massive flood drowning 1,000 people the following year. People overwhelmingly responded that the earthquake was the more likely scenario, even though it is a subset of the first, more general scenario of a flood caused by any event. Kahneman and Tversky believed that more details, essentially a story, increased believability even as they narrowed the true probability.
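The rule the subjects violated is the conjunction rule: for any two events, the probability of both occurring can never exceed the probability of either one alone. A minimal sketch, using made-up illustrative probabilities (none of these numbers come from the book):

```python
# Conjunction rule sketch with hypothetical, illustrative probabilities.
p_flood = 0.01               # hypothetical: a flood from any cause drowns 1,000 people
p_quake = 0.02               # hypothetical: a massive earthquake occurs
p_flood_given_quake = 0.30   # hypothetical: such a quake triggers that flood

# Probability of the "detailed story": an earthquake AND the resulting flood.
p_quake_and_flood = p_quake * p_flood_given_quake  # 0.006

# The conjunction can never exceed the marginal probability of the flood,
# because every earthquake-caused flood is also counted as a flood.
assert p_quake_and_flood <= p_flood
```

Whatever numbers one plugs in, the detailed scenario stays a subset of the general one, so judging it "more likely" is always an error.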

Whenever they thought their study results were due to a sample unfamiliar with the laws of probability, Kahneman and Tversky tested subjects formally educated in probability and statistics. The results were identical. They concluded that the human mind doesn’t reason probabilistically but, rather, deals with uncertainty using “representativeness” (similarity to what is already in one’s mind) and “heuristics” (rules of thumb that replace the laws of chance).

For example, their studies showed that people believe there are more seven-letter English words ending in “-ing” than seven-letter words with “n” in the sixth position, since it is easier to recall words ending in “-ing,” even though every seven-letter “-ing” word has “n” in the sixth position, making the first set a subset of the second. If you are astute enough to catch that, try another test: what do you believe is the ratio of English words that start with “k” to English words with “k” as the third letter? Most people guess the first group is larger, perhaps 2:1. What about “r,” “n,” “l” or “v”? You might be surprised to learn that the true ratio is roughly 1:2 for all of these letters: about twice as many words have them in the third position as start with them. Again, because it is easier to recall words that start with a given letter, most people err in their intuitive estimate.
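One way to see how such a claim could be checked is to count letter positions over a word list. The sketch below uses a tiny invented sample rather than a real corpus, so the counts demonstrate only the counting logic, not the actual English ratio:

```python
# Hypothetical mini word list; a real check would scan a full dictionary file.
sample = ["kite", "king", "know", "ask", "bake", "cake", "lake", "make", "take"]

# Count words starting with "k" versus words with "k" in the third position.
first_k = sum(1 for w in sample if w[0] == "k")
third_k = sum(1 for w in sample if len(w) >= 3 and w[2] == "k")

# Even in this toy sample, third-position "k" outnumbers first-position "k".
print(first_k, third_k)  # 3 6
```

Run against a real word list, the same two comprehensions would settle the first- versus third-letter question by brute force rather than by recall.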

Another insight from Kahneman and Tversky is that people tend to recalculate the odds in light of some recent or memorable experience. They conducted a test of subjects’ recall of the proportion of male to female names from a list read to them. The names were all common and easily distinguishable as either male or female. If the list was actually male-dominated, the psychologists would include more famous women’s names, and vice versa. The test subjects almost always miscalculated the proportion of male to female names. The fallacy they uncovered: the easier something is to retrieve from memory, the more accurate people believe it to be.

They also uncovered people’s tendency to ignore the effect of sample size on sampling variance. If a family’s children had the gender (girl [G]/boy [B]) birth order GBGBBG, was the birth order BGBBBB more or less likely? Most in the study said less likely. Kahneman and Tversky believed this fallacy arose because the second sequence looks less representative of the general population. People know gender at birth is random, and the first sequence seems more random, even though both sequences are equally likely.
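The equal likelihood of the two sequences is easy to verify: assuming independent births with equal gender probability, every specific six-child sequence has probability (1/2)^6 = 1/64. A quick sketch:

```python
from itertools import product

# Assuming independent births with P(boy) = P(girl) = 0.5, every specific
# six-child sequence has the same probability: (1/2)**6 = 1/64.
probs = {"".join(seq): 0.5 ** 6 for seq in product("GB", repeat=6)}

# "GBGBBG" looks more random than "BGBBBB", but the probabilities match.
assert probs["GBGBBG"] == probs["BGBBBB"] == 1 / 64
print(len(probs))  # 64 distinct sequences
```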

In recounting the psychologists’ story, the author touches on several other contributions to the study of judgment and decision-making. For example, Lewis R. Goldberg performed experiments with doctors, psychologists and psychiatrists to determine factors influencing certain diagnoses. Not surprisingly, he found inconsistencies and contradictions among the experts. He also discovered these inconsistencies and contradictions within a single expert’s diagnoses. In other words, Goldberg found situations in which the same facts led an individual expert to contradictory diagnoses. He then demonstrated that a model created from an expert’s actions ultimately diagnosed more accurately than the expert himself.

While The Undoing Project is full of insights, two in particular will stick with me and, I hope, make me a more effective predictive modeler. First, beware of confirmation bias, the higher affinity we feel for a result that conforms to expectation. We must remember that intuitive expectations are governed by a consistent misperception of the world; we should therefore investigate the reasonableness of a model’s result not only when it is unexpected, but also when it is exactly what we expected. Second, when reaching conclusions, think about and communicate the story behind the numbers, not just the results. I leave you with one of my favorite quotes from the book: “Man’s inability to see the power of regression to the mean leaves him blind to the nature around him.”

Linda Brobeck is a Senior Consulting Actuary with Pinnacle Actuarial Resources, Inc. in the San Francisco, California office. She has worked in the property/casualty insurance industry since 1986 and has been providing actuarial consulting services since 2011. Her consulting career has focused on ratemaking and predictive modeling for several lines of insurance including personal and commercial automobile, homeowners, and professional liability. Linda is a Fellow of the Casualty Actuarial Society and a Member of the American Academy of Actuaries. She serves as the Vice Chairperson of the Program Planning Committee for the CAS and is a member of the CAS Examination Committee.