It’s terrible when we make a mistake in a story. True, that’s an ongoing occupational hazard in a business that’s often rushing to write the first draft of history, so it’s safe to conclude that we’ll never completely eliminate errors. But how can we reduce them?
There’s an interesting piece in Nature about a very similar problem in science – essentially about how too many studies can’t be replicated, indicating there’s an accuracy issue in what should be rigorously reported and peer-reviewed work. What the piece concludes – and it has resonance for journalistic work as well – is that unconscious bias plays a big part in when we get things wrong.
In today’s environment, our talent for jumping to conclusions makes it all too easy to find false patterns in randomness, to ignore alternative explanations for a result or to accept ‘reasonable’ outcomes without question — that is, to ceaselessly lead ourselves astray without realizing it.
That's probably just as much of an issue in some journalism – especially when it comes to figuring out a likely narrative or hypothesis for the facts we've gathered or the data we've crunched. Nature has a name for it: hypothesis myopia.
One trap that awaits during the early stages of research is what might be called hypothesis myopia: investigators fixate on collecting evidence to support just one hypothesis; neglect to look for evidence against it; and fail to consider other explanations.
In other words, only looking for evidence, quotes and data that support one narrative, and not really asking the tough questions that might point to another explanation. The Nature piece cites a case where a woman in the UK was convicted of murdering two of her infant sons because of statistical evidence that the chances of both of them dying of sudden infant death syndrome were only 1 in 73 million – which is a pretty damning statistic, until you consider that the chances of a parent murdering two children are even lower. (The conviction was later reversed.)
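The flaw in that courtroom reasoning – sometimes called the prosecutor's fallacy – becomes clear with a little arithmetic. A rough sketch, using the 1-in-73-million figure from the case and a purely illustrative (hypothetical) rate for double murder that is lower still, as the text suggests:

```python
# Prosecutor's fallacy, sketched with Bayes' rule.
# The question isn't "how unlikely is double SIDS?" but
# "given two deaths occurred, which rare explanation is more likely?"

p_double_sids = 1 / 73_000_000     # figure cited in the case
p_double_murder = 1 / 200_000_000  # hypothetical: assumed even rarer

# Probability the deaths were murder, given that one of the two
# rare explanations must account for them:
p_murder_given_deaths = p_double_murder / (p_double_murder + p_double_sids)

print(f"P(murder | two deaths) = {p_murder_given_deaths:.2f}")
# With these numbers, murder is the *less* likely explanation (~0.27),
# despite the "damning" 1-in-73-million headline figure.
```

The point isn't the exact numbers – the murder rate here is invented for illustration – but that a tiny probability for the innocent explanation proves nothing unless the competing explanation is more probable.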
The trick is to make sure there are skeptical voices at every stage of a story, rather than have everyone so invested in the outcome that they rush to publish. That can be hard on deadline, obviously, but it’s a critical brake on major stories.
(I remember a presentation by a human rights group – I can't remember which now – that explicitly had an internal "devil's advocate" to aggressively vet work their own investigators did before releasing it.)
The Nature piece calls out other common biases and problems, including what it calls the Texas sharpshooter fallacy (seizing on random patterns and mistaking them for interesting findings), asymmetric attention (rigorously checking unexpected results, but giving expected ones a free pass), and just-so storytelling (finding stories after the fact to rationalize whatever the results turn out to be). Not all are applicable to journalism, but the general ideas are.
After all, we all love narratives, as this NYT piece about how we’re “born to be conned” notes:
In a sense, all victims of cons are the same: people swept up in a narrative that, to them, couldn’t be more compelling. Love comes at the exact moment you crave it most, money when you most need it. It’s too simplistic to dismiss those who fall for such wishful-seeming thinking as saps — just as it’s overly neat to dismiss the types of people who would take advantage of them as unfeeling psychopaths.
The trick for journalism is to understand when we’re too invested in a compelling narrative, and get someone else to pick it apart for us before we go too far down what may be a false road.