Everyone should read Thinking, Fast and Slow. But journalists should be made to study it.
(Hey, I’m Singaporean. We like edicts.)
The book, by Nobel laureate Daniel Kahneman, is a dazzling – and dizzying – tour of the human mind that’s won great reviews (in the FT, NYT and Guardian, to name a few). It’s a lucid, insightful exploration of how our quick, instinctive mind (which he calls System 1) works with – or against – our slower, more rational brain (System 2). It ain’t always pretty, but it is the way we think.
Kahneman focuses a lot on the irrational elements of our brain, and there’s much to see: from the way we cling to narratives to explain events despite evidence to the contrary, to how forcing us to frown can make us more skeptical, to the way exposure to a handful of age-related words can make us walk slower.
It’s fascinating stuff, and something any student of human behavior should dive into. But for us journalists, whose day job is largely about explaining events and the motivations behind them, this research shouldn’t be optional. It helps us learn how our sources get things wrong, where our own thinking goes astray, and how the words we write affect the audiences we reach.
It makes a good case for data journalism, too, although not in so many words. But I’ll take any argument in favor of better math and statistics.
There’s far too much in the book to summarize it effectively here – just go and read it, already – but what it lays out is the host of ways instinctive System 1 works to undermine rational System 2 in everyday life – and even in what should be carefully considered decisions.
If you take the last two digits of your social security number (or whatever random number comes to mind) and then bid on a bottle of wine, you’ll likely offer more if those digits are high than if they’re low. Experiments on grad students show that those who were asked to work on a puzzle featuring words associated with old age shuffled more slowly to their next class. People asked to focus on one task often miss even a gorilla walking through the room. (Try it, then try the second video too).
In other words, our minds are susceptible to all sorts of outside influences, small and large, accidental and planned. And that should raise some level of concern about what people “know” when they tell us things – even experts. Experienced radiologists who evaluate chest X-rays as “normal” or “abnormal” contradict themselves 20% of the time when they see the same picture on separate occasions. True, you can’t simply distrust all your sources, but that’s why it’s important to corroborate what they say with others – or preferably with documents and statistics.
Another problem System 1 presents is a predilection for a narrative explanation – even in the face of data to the contrary. Or, as Kahneman puts it,
…people are prone to apply causal thinking inappropriately, to situations that require statistical reasoning.
The simplest example is the famous “Linda fallacy,” which I described in an earlier post. But there are lots of other examples that lay bare our inability to handle numbers well in the face of a good story.
Sports fans – and sports writers – still cling to the idea of a basketball player having a “hot hand” when he sinks a number of baskets in a row, despite tons of research showing there’s no such thing; it’s just randomness. Business publications devote covers and miles of column-inches to CEOs despite evidence that they have only a marginal – albeit non-trivial – impact on a company’s fortunes. Not to mention mutual fund managers:
Mutual funds are run by highly experienced and hardworking professionals who buy and sell stocks to achieve the best possible results for their clients. Nevertheless, the evidence from more than fifty years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than playing poker. Typically at least two out of every three mutual funds underperform the overall market in any given year.
More important, the year-to-year correlation between the outcomes of mutual funds is very small, barely higher than zero. The successful funds in any given year are mostly lucky; they have a good roll of the dice.
But calling it luck gets in the way of a good narrative about the fund manager’s carefully constructed strategy, his rags-to-riches story, the way he timed the market, and so on.
Which isn’t to say that great stories don’t matter – they do, hugely. But it’s important to remember that causality and narrative don’t always go hand in hand, and that journalists wanting to uncover the reasons for some event should dig much harder into the documents, data and statistics.
Consider the example Kahneman cites about research into the characteristics of successful schools, which showed that, on average, they’re small institutions. That makes intuitive sense: smaller schools should be able to offer more attention to students, and so on. As a result of that study, $1.7 billion was poured into efforts to make schools smaller. But a closer look at the research would have shown that the worst schools were also the smallest. Oops.
Small samples are prone to huge swings in variability – fantastic and terrible results alike. That’s one reason why anecdote-based reporting – which by definition looks at small samples – can come back with bogus trends or wildly inaccurate explanations for events. You need anecdotes to tell good stories, of course; but you need the data to know which anecdotes reflect reality.
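The schools trap is worth seeing with numbers. Here’s a toy simulation (my own construction, not the cited study): every student draws from the exact same score distribution, so school size is the only thing that varies – and yet the “best” and “worst” schools both turn out to be the small ones.

```python
import random

# Hypothetical setup: identical students everywhere; only the sample
# size (school enrollment) differs.
random.seed(0)

sizes = [20, 20, 20, 500, 500, 500] * 50  # many small and large schools
averages = []
for size in sizes:
    scores = [random.gauss(100, 15) for _ in range(size)]
    averages.append((sum(scores) / size, size))

averages.sort()
worst10 = [size for _, size in averages[:10]]   # lowest-scoring schools
best10 = [size for _, size in averages[-10:]]   # highest-scoring schools
print("Sizes of 10 worst schools:", worst10)
print("Sizes of 10 best schools:", best10)
# Both extremes are dominated by size-20 schools: pure sampling noise,
# no causal story about small schools required.
```

Averages over 20 students simply bounce around far more than averages over 500, so small schools crowd both tails of the ranking – the statistical fact the $1.7 billion overlooked.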
The exaggerated faith in small samples is only one example of a more general illusion – we pay more attention to the content of messages than to information about their reliability, and as a result end up with a view of the world that is simpler and more coherent than the data justify.
…System 1 is not prone to doubt. It suppresses ambiguity and spontaneously constructs stories that are as coherent as possible.
And even within coherent, accurate stories, how they’re presented to us matters hugely. And, by extension, how we present them to readers also matters hugely.
The book cites an example where physicians were asked to evaluate a procedure that had some short-term risk but could lead to important longer-term gains. Some were told that the procedure had a one-month survival rate of 90%, and others learned that there was a 10% mortality rate in the first month. Both mean the same thing, of course, but 84% of the experts in the first group opted for the procedure; only 50% in the second group did.
And those are experts in the field. What about our readers, who we’re (hopefully) trying to responsibly explain important policy and personal issues to? How do we frame the information we give them so that they get as fair and unbiased a report as possible so they can act on it effectively?
If we present numbers as percentages – e.g., there’s a 1% mortality rate – readers take a more clinical view; if we present them in human terms – one in every 100 will die, on average – they fixate, unsurprisingly, on an image of one person dying. And that affects the way they’ll act on the information.
That’s not to say that there’s any “right” way to present that information. But it’s important to know that it sometimes isn’t simply a stylistic decision.
In some ways, much in Kahneman’s book – and there’s lots in there – just reinforces some old journalistic advice: Check it out with a second source; where’s the document; did you look at the data; write it clearly. But it’s nice to know there’s good science behind it as well.
Meanwhile, go read the book. There’ll be a test. Singaporeans love tests.