So Facebook waded into the fact-checking space – more precisely, dipped a toe into the fact-checking pond – last week with an announcement that it would work with a coalition of organizations to tag some articles as “disputed” on its platform.
Which leads to a thought: If we can flag falsehoods when we search for or share news articles, why can’t we do that at an even earlier stage – when people are writing the posts that they plan to share? Here’s what I mean:
(But first, a tip of the hat to friend and colleague Jeremy Wagstaff, who sparked off the thoughts below. You can also blame him if you think it doesn’t make any sense.)
Clearly, Google’s search algorithm has the ability to figure out what information people are looking for, hunt through the thousands of fact checks conforming to an agreed-upon schema from a set of trusted organizations, and present that information alongside other search results. As I noted in an earlier post, that’s significant because it means millions of people will at least see whether a claim has been rated true or false. (Facebook’s system seems to rely more on whether its users dispute an article, which then triggers fact-checking by the coalition – a more manual, less scalable approach.)
So if Google can figure out by the words you’re typing what fact checks to show you, why can’t we harness that in a platform such as, say, Facebook, to bring up those fact checks just as you’re about to type out something like “Look – even the Pope endorses Trump” in your feed? (Leaving aside the fact that Facebook and Google are major competitors, of course; we can all dream of global peace and unity one day.)
OK, so you may type ahead anyway, despite the platform surfacing a fact check disputing your post. But at least you’ll have been forewarned that you’re promoting a falsehood. And if the fact check proves you correct, what if you could embed that link in your post, to forestall your friends disputing it? Or, for that matter, if you published a falsehood anyway, what if the same fact check surfaced every time your friends (or soon-to-be ex-friends) started to comment on your post, and they could, if they wanted to, embed a link to that fact check in their comments?
In other words, why not treat Google’s search algorithm and fact check schema as a core part of a CMS on a number of platforms that distribute the vast majority of news – Google, Facebook, WordPress, Twitter, etc – so as to get fact checks in front of even more people, and at an even earlier stage of information creation and sharing?
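The schema alluded to above is real: Google’s fact-check results are built on schema.org’s ClaimReview markup, which records the claim reviewed, a rating, and a link to the full fact check. As a rough illustration of what a CMS-level check might look like, here is a minimal sketch in Python – the records, the URL, and the crude keyword-overlap matching are all hypothetical stand-ins, not how Google actually finds or ranks fact checks.

```python
# Toy sketch: surface relevant fact checks while a user drafts a post.
# The records below mimic schema.org ClaimReview markup; the matching is
# a deliberately crude keyword overlap, purely for illustration.

FACT_CHECKS = [
    {   # hypothetical ClaimReview-style record
        "@type": "ClaimReview",
        "claimReviewed": "The Pope endorsed Donald Trump",
        "reviewRating": {"@type": "Rating", "alternateName": "False"},
        "url": "https://example.org/fact-checks/pope-trump",  # placeholder
    },
]

STOPWORDS = {"the", "a", "an", "even", "look"}

def tokens(text):
    """Lowercase words, stripped of punctuation and stopwords."""
    return {w.strip(".,!?–-\"'").lower() for w in text.split()} - STOPWORDS - {""}

def fact_checks_for_draft(draft, min_overlap=2):
    """Return fact checks whose reviewed claim shares keywords with the draft."""
    draft_words = tokens(draft)
    return [
        fc for fc in FACT_CHECKS
        if len(draft_words & tokens(fc["claimReviewed"])) >= min_overlap
    ]

# A CMS could run this on every keystroke and show matches beside the editor.
for fc in fact_checks_for_draft("Look – even the Pope endorses Trump"):
    print(fc["reviewRating"]["alternateName"], fc["url"])
```

In a real deployment the lookup would hit a fact-check search index rather than an in-memory list, but the shape of the interaction – draft text in, rated claims out – is the same.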
To be sure, it may well be that people don’t care about being accurate, or that they won’t be influenced by clearly contradictory information as they’re scribbling out posts. But at least they’d have to willfully ignore it. And more importantly, producers of accurate posts can use links to the fact checks that support their statements to bolster the veracity of what they write.
How hard would this be to implement? There would be technical challenges, to be sure – but the (much) harder step would be to get a bunch of huge platforms to agree to rework their CMSs to – in essence – force-feed facts to their users. Which is why this idea is harebrained at best.
And, even if it were to materialize, it would barely make a dent in the torrent of misleading or highly partisan news that isn’t susceptible to classic fact checking.
That said, all of this is a good reminder that we do need to think much more broadly about the news ecosystem we now inhabit, and what “news products” – for want of a better word – are best suited to it, beyond the classic article, video, slideshow or graphic.
Surfacing more facts to more people more often, more earlier would be a good start.
(And yes, I know that last sentence isn’t grammatical.)