Posted by: structureofnews | January 30, 2017

Friends And Enemies

Belatedly – why do so many of my posts involve the word “belatedly,” I wonder? – here’s a quick post about a research study that was written up in the New York Times in December.  (Yes, I know that was last year.)

It’s about implicit (and hidden) bias – the natural prejudices that we all have (and are often blind to), no matter how open-minded or broad-minded we think we are. It’s an important issue, one I’ve touched on before, and it matters especially now for journalists in an increasingly polarized world, where it can be tempting to take a side and see proponents of other views in a less-than-flattering light.

The bad news is that we can’t really avoid our internal biases.  The good news is that, if we work at it, we can mitigate their effects.

The study itself is pretty straightforward:  Experimenters set up a game with volunteers, who would then witness what they thought was another player cheating.  The volunteers then got to decide how harsh a punishment should be meted out to the “cheaters.”  The idea was to see if the volunteers would punish wrongdoers more harshly if they thought they were from a different group – say, supporters of a rival sports team – than if they thought they were “one of them.”

And what happened?  You can pretty much guess the result.

When people made their decisions swiftly — in a few seconds or less — they were biased in their punishment decisions. Not only did they punish out-group members more harshly, they also treated members of their own group more leniently. The same pattern of bias emerged in a pair of follow-up experiments in which we distracted half of the punishers.

Well, duh.  But the fact that this kind of tribalism is deeply ingrained in humanity doesn’t make it any better.  And for journalists, whose job is often to assess people’s credibility quickly and make decisions about how the information they provide will be used – in other words, to figure out how much to trust someone and how to treat them in a story – this should be sobering news.

We often think we’re unbiased or objective, but what the experiment shows is that we can be easily swayed by how similar we think the person we’re interviewing is to us.

But there is good news: Read More…

Posted by: structureofnews | January 5, 2017

Seeing Patterns

What are machines good at, what are people good at, and how can we get the most out of pairing the best of both worlds?

It’s not like I haven’t written about this topic before – not least about not trying to make machines be poor copies of humans – but it’s just that the list of things machines are good at keeps getting longer.  (And so maybe we should be thinking of how to make them better copies of humans, and worry about what jobs will go away – but that’s the subject for another post.)

Exhibit A in how-machines-are-getting-better-at-more-things is an excellent NYT Magazine piece from a couple of weeks ago by Gideon Lewis-Kraus, entitled The Great A.I. Awakening. If you haven’t read it yet, you should stop here and read it.  It’s very good.

At a basic level, it’s the story of how Google used neural networks to push the quality of Google Translate to an astonishingly good level, and in a very short period of time. But the broader story is about how neural networks – essentially, systems for recognizing patterns – have come into their own, and are powering machines to do a host of things never before thought possible: High-quality translations, image recognition, and so on.
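
(For the truly uninitiated, here’s a toy sketch of what “recognizing patterns” means in practice: a single artificial neuron, trained by nudging its weights until its guesses match the examples. It’s nothing like the scale of Google’s system, and the data is made up, but the core idea is the same.)

```python
# Toy sketch of pattern recognition: a single neuron learns to label a
# 3-number input 1 if its sum is positive, else 0. Nothing like Google
# Translate's architecture - just the simplest possible illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))            # made-up training examples
y = (X.sum(axis=1) > 0).astype(float)    # the "pattern" to be learned

w = np.zeros(3)   # weights
b = 0.0           # bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent: repeatedly adjust the weights to shrink the error.
for _ in range(2000):
    pred = sigmoid(X @ w + b)
    w -= 0.5 * (X.T @ (pred - y) / len(y))
    b -= 0.5 * (pred - y).mean()

test = np.array([[0.5, 0.2, 0.1], [-1.0, -0.3, 0.2]])
print(sigmoid(test @ w + b))  # roughly [1, 0]: it has "learned" the pattern
```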

And if they can do all those things well – imagine what they could do for journalism.

Just look at how good Google Translate has become with the help of the neural network built by a team called Google Brain.  First, a passage from Ernest Hemingway’s “The Snows of Kilimanjaro,” translated to Japanese and then back to English via Google Translate, pre-neural networks:

Kilimanjaro is 19,710 feet of the mountain covered with snow, and it is said that the highest mountain in Africa. Top of the west, “Ngaje Ngai” in the Maasai language, has been referred to as the house of God. The top close to the west, there is a dry, frozen carcass of a leopard. Whether the leopard had what the demand at that altitude, there is no that nobody explained.

And now after neural networks:

Kilimanjaro is a mountain of 19,710 feet covered with snow and is said to be the highest mountain in Africa. The summit of the west is called “Ngaje Ngai” in Masai, the house of God. Near the top of the west there is a dry and frozen dead body of leopard. No one has ever explained what leopard wanted at that altitude.

That’s pretty damn good.

And as Gideon notes, it’s not simply about translation:

Once you’ve built a robust pattern-matching apparatus for one purpose, it can be tweaked in the service of others. One Translate engineer took a network he put together to judge artwork and used it to drive an autonomous radio-controlled car. A network built to recognize a cat can be turned around and trained on CT scans — and on infinitely more examples than even the best doctor could ever review. A neural network built to translate could work through millions of pages of documents of legal discovery in the tiniest fraction of the time it would take the most expensively credentialed lawyer. The kinds of jobs taken by automatons will no longer be just repetitive tasks that were once — unfairly, it ought to be emphasized — associated with the supposed lower intelligence of the uneducated classes. We’re not only talking about three and a half million truck drivers who may soon lack careers. We’re talking about inventory managers, economists, financial advisers, real estate agents. What Brain did over nine months is just one example of how quickly a small group at a large company can automate a task nobody ever would have associated with machines.

And worrying about those jobs, and those people who will be automated out of them, is an important, pressing issue.  But so too is thinking about how best to harness all this new-found capability in the service of journalism Read More…

Posted by: structureofnews | December 22, 2016

A Modest Proposal

So Facebook waded into the fact-checking space – more precisely, dipped a toe into the fact-checking pond – last week with an announcement that it would work with a coalition of organizations to tag some articles as “disputed” on its platform.

It’s a small but important step in the campaign to address the much-misnamed “fake news” problem, and follows Google’s move not long ago to promote fact-checks in search results.

Which leads to a thought: If we can flag falsehoods when we search for or share news articles, why can’t we do that at an even earlier stage – when people are writing the posts that they plan to share? Here’s what I mean:

(But first, a tip of the hat to friend and colleague Jeremy Wagstaff, who sparked off the thoughts below.  You can also blame him if you think it doesn’t make any sense.)

Clearly, Google’s search algorithm has the ability to figure out what information people are looking for, hunt through the thousands of fact checks conforming to an agreed-upon scheme from a set of trusted organizations, and present that information along with other search results.  As I noted in an earlier post, that’s significant because it means millions of people will at least see whether a claim has been rated true or false. (Facebook’s system seems to rely more on whether its users dispute an article, which then triggers fact-checking by the coalition – a more manual, less scalable approach.)
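
(To make that concrete, here’s a minimal sketch, emphatically not Google’s or Facebook’s actual system, of what matching a piece of text against a store of structured fact checks might look like. The entries, threshold and overlap measure are all invented for illustration.)

```python
# Match draft text against a (made-up) store of published fact checks
# using simple word overlap. Entries, URLs and threshold are invented.
FACT_CHECKS = [
    {"claim": "the pope endorsed trump for president",
     "rating": "False",
     "url": "https://example.org/fact-check/pope-endorsement"},   # hypothetical URL
    {"claim": "the election results were decided by the popular vote",
     "rating": "False",
     "url": "https://example.org/fact-check/popular-vote"},        # hypothetical URL
]

def tokens(text):
    return set(text.lower().split())

def matching_fact_checks(draft_post, threshold=0.3):
    """Return fact checks whose claims overlap heavily with the draft post."""
    draft = tokens(draft_post)
    hits = []
    for fc in FACT_CHECKS:
        claim = tokens(fc["claim"])
        overlap = len(draft & claim) / len(claim)   # share of claim words present
        if overlap >= threshold:
            hits.append(fc)
    return hits

print(matching_fact_checks("Look - even the Pope endorses Trump"))
```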

So if Google can figure out by the words you’re typing what fact checks to show you, why can’t we harness that in a platform such as, say, Facebook, to bring up those fact checks just as you’re about to type out something like “Look – even the Pope endorses Trump” in your feed?  (Leaving aside the fact that Read More…

Posted by: structureofnews | December 1, 2016

Innovation at Reuters

Just an entirely self-serving shout-out to the nice CJR piece by Jonathan Stray about some of the innovations going on at Reuters, including our Automation For Insight project and Reuters News Tracer – a cool new tool that detects newsworthy events on social media and assigns a confidence score assessing how credible they are.

And as a two-fer, we got a nice piece about News Tracer in Nieman Lab as well.

Not bad for a single day.

(And as a complete side note, if you haven’t been reading Jonathan’s blog, you should.  There’s some really good stuff there.)

Basically – and you can read the pieces for more description – what News Tracer does is find clusters of tweets, clean out spam and other dross, figure out which clusters are “newsworthy,” at least as mainstream news organizations define it, separate assertions of opinion from assertions of fact, and then compute a score for the credibility of each cluster.
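
(Just to make the shape of that pipeline clearer, here’s a hedged skeleton of the stages described above. It is not Reuters’ actual News Tracer code, and every function body below is a crude stand-in for what is, in reality, some very hard work.)

```python
# A hedged skeleton of the News Tracer pipeline as described above - not
# the real implementation; every function body is a toy stand-in.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    tweets: list
    newsworthy: bool = False
    credibility: float = 0.0
    facts: list = field(default_factory=list)
    opinions: list = field(default_factory=list)

def cluster_tweets(tweets):
    """Group tweets that appear to describe the same event (stand-in: one big cluster)."""
    return [Cluster(tweets=list(tweets))]

def remove_spam(cluster):
    """Clean out spam and other dross (stand-in heuristic: drop tweets carrying links)."""
    cluster.tweets = [t for t in cluster.tweets if "http" not in t]
    return cluster

def is_newsworthy(cluster):
    """Does this look like news, as a mainstream newsroom would define it? (stand-in)"""
    return len(cluster.tweets) >= 2

def split_fact_opinion(cluster):
    """Separate assertions of opinion from assertions of fact (stand-in: 'I think' = opinion)."""
    for t in cluster.tweets:
        (cluster.opinions if "i think" in t.lower() else cluster.facts).append(t)
    return cluster

def credibility_score(cluster):
    """Combine signals into one credibility score (stand-in: share of factual assertions)."""
    total = len(cluster.facts) + len(cluster.opinions)
    return len(cluster.facts) / total if total else 0.0

def trace(tweets):
    """Run the full pipeline: cluster, clean, assess, separate, score."""
    results = []
    for cluster in cluster_tweets(tweets):
        cluster = remove_spam(cluster)
        cluster.newsworthy = is_newsworthy(cluster)
        cluster = split_fact_opinion(cluster)
        cluster.credibility = credibility_score(cluster)
        results.append(cluster)
    return results
```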

Loads of kudos to the Thomson Reuters R&D team, and especially Sameena Shah, who led the development team that solved a whole host of very interesting algorithmic challenges over a two-year period.  As Jonathan notes:

Newsroom standards are rarely formal enough to turn into code. How many independent sources do you need before you’re willing to run a story? And which sources are trustworthy? For what type of story? “The interesting exercise when you start moving to machines is you have to start codifying this,” says Chua. Much like trying to program ethics for self-driving cars, it’s an exercise in turning implicit judgments into clear instructions.

Sameena’s team did really smart work figuring out – with help from the newsroom – what “newsworthiness” means, and also how to pull together a basket of factors to help assess credibility.  It’s a never-ending iterative process, of course, but they’ve built up a very impressive capability that extends the reach of the newsroom, improves its speed, and frees reporters up to do more value-added work.
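
(And to illustrate what codifying a newsroom standard might mean in practice, here’s a purely made-up example, not anything News Tracer actually does, of how a sourcing rule might look once you’re forced to write it down as code. The trust ratings and threshold are exactly the kinds of judgment calls a newsroom would have to make explicit.)

```python
# Turning an implicit newsroom rule into explicit code, in the spirit of
# the quote above. All ratings and thresholds are invented for illustration.
TRUST = {"official_statement": 0.9, "eyewitness": 0.6, "anonymous_tip": 0.3}  # hypothetical

def enough_sourcing(sources, required_confidence=0.8):
    """Crude rule: independent sources' trust combines until it clears a bar."""
    doubt = 1.0
    for kind in set(sources):             # count each independent source type once
        doubt *= 1.0 - TRUST.get(kind, 0.0)
    return 1.0 - doubt >= required_confidence

print(enough_sourcing(["anonymous_tip"]))                       # False
print(enough_sourcing(["eyewitness", "official_statement"]))    # True
```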

What’s not to like?

 

Posted by: structureofnews | November 28, 2016

Mea Culpa – And To Making Better Models

This is more than a little belated, so apologies for that.

So the polls got it wrong.

Hillary Clinton did not, as it turns out, have a 75% or 80% or 90% or 99% chance of winning the election, as many of us – including our massive States of the Nation project – predicted up until Election Day.

(You’ll see that we’ve now updated our analysis to reflect the actual turnout, which shows that Donald Trump’s odds of success would have been 75%.  Kinda late, it’s true, but it at least validates the underlying assumptions behind the project.)

So what happened?  And given Brexit, and Colombia, what does this mean for polls more broadly?  And how do we do better in the future?

(A shout out/caveat: I’m cribbing heavily from Mo Tamman‘s smart analysis of what went wrong in the polling this year.  Here’s also another nice, nuanced piece about where polls went wrong.)

To be fair, the polls weren’t actually that far off.  Seriously.  At least the national polls. They showed Clinton leading by a couple of percentage points, and as the final results trickle in, indeed the former Secretary of State is leading in the popular vote. It’s true that the polls may have overstated her appeal, but broadly the numbers are within the margin of error, especially if you assume the likely voter models are somewhat off.

Which they were.  And that’s one critical place where all the polls fell down.  And again, to be fair, predicting who will or won’t go to the polls – a once-every-two-or-four-years event – is a tricky exercise at best.  But it doesn’t help that we tend to present the numbers we come up with as absolutes, rather than give a range of possible outcomes based on a range of possible turnout models.  As we noted when we launched the Read More…
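
(Here’s a minimal sketch of what presenting a range rather than an absolute might look like: sweep a set of plausible turnout assumptions and report how often each candidate comes out ahead. The group shares, support levels and turnout rates below are invented for illustration.)

```python
# Sweep a range of plausible turnout assumptions instead of committing to
# one model. All group shares, support levels and turnout rates are invented.
import itertools

# (share of electorate, % supporting Candidate A) for two made-up groups
GROUPS = {"group_1": (0.55, 0.44), "group_2": (0.45, 0.58)}

turnout_scenarios = [0.45, 0.55, 0.65]   # plausible turnout rates to sweep

scenarios = list(itertools.product(turnout_scenarios, repeat=len(GROUPS)))
wins_a = 0
for turnouts in scenarios:
    votes_a = votes_total = 0.0
    for (share, support_a), turnout in zip(GROUPS.values(), turnouts):
        votes = share * turnout
        votes_total += votes
        votes_a += votes * support_a
    if votes_a / votes_total > 0.5:
        wins_a += 1

print(f"Candidate A wins in {wins_a} of {len(scenarios)} turnout scenarios")
```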

Posted by: structureofnews | November 23, 2016

Content To Distribute

So do we have a journalism problem or a distribution problem?

There’s no question that much of the media missed much of the story of Trump’s rise to power – that’s a journalism problem.  But it’s also clear that even all the great journalism about the campaign – and there was a lot of it – didn’t really factor hugely in many voters’ minds.  And that’s a distribution problem.

That’s not the same as the “fake news” issue – which is yet another real problem. This is really about how, in addition to upending business models, the new digital landscape is also upending the media’s ability to get quality journalism in front of audiences.  And that’s at least as big an issue as fake news.

Not that I have any proposals for solutions; but I thought it might be helpful to try to disaggregate the two issues of journalism and distribution, and point to different groups or approaches that should be tackling them.

To be sure, we all could – and should – do better journalism, and certainly much of the media dropped the ball in this election.  And fake news is a real problem.  But let’s focus for the moment on our distribution problem, our dependence on external platforms to get real news to readers, and the filter bubbles that they inhabit.

As Josh Benton put it very nicely in a Nieman Lab piece right after the elections:

In a column just before the election, The New York Times’ Jim Rutenberg argued that “the cure for fake journalism is an overwhelming dose of good journalism.” I wish that were true, but I think the evidence shows that it’s not. There was an enormous amount of good journalism done on Trump and this entire election cycle, from both old-line giants like the Times and The Washington Post and digital natives like BuzzFeed and The Daily Beast. …

The problem is that not enough people sought it out. And of those who did, not enough of them trusted it to inform their political decisions. And even for many of those, the good journalism was crowded out by the fragmentary glimpses of nonsense.

And tackling this issue isn’t really something that individual journalists – or even large news organizations – are equipped to do well.

There’s an analogy here – a medical one – that I think bears on this.  Stay with me. Read More…

Posted by: structureofnews | November 21, 2016

Bubble Trouble

I was meaning to write this post – honest! – before the election, but procrastination has its benefits: Now the timing seems much more apt, even if the subject – filter bubbles – has been heavily picked over. Which means I can talk about a different kind of bubble instead – the kind in newsrooms.

(So thanks, Donald.)

Much has been said about the phenomenon of fake news on Facebook, not least in this great piece in the NYT Magazine by John Herrman over the summer, so I won’t dive in too much on the subject.  (For loads of context, check out this, this, this, this, this and this, Facebook’s defence.)

There are two pretty big – somewhat unrelated – problems to address on that front: One is how many flat-out untruthful/half-truthful memes are out there, masquerading as real news and crowding out real information; and the other is just how hard it is for good, serious journalism – the kind of work, for example, that the Washington Post did on the Trump Foundation – to actually get in front of audiences that matter. The latter is much more about questions of virality, discovery, platforms, filter algorithms and how to distribute rather than create news – of which more in another post, probably.

But amid all of the angst in the media about how we failed to predict – or even contemplate – the prospect of a Trump victory, there has also been the meme about how the mainstream media inhabits its own bubble with a self-reinforcing worldview.  Which certainly has some truth to it.  As Fortune noted:

In part, that’s because much of the East Coast-based media establishment is arguably out of touch with the largely rural population that voted for Trump, the disenfranchised voters who looked past his cheesy exterior and his penchant for half-truths and heard a message of hope, however twisted.

Or, in a much more direct piece in Cracked magazine, of all places, David Wong writes of the rural poor:

They’re getting the shit kicked out of them. I know, I was there. Step outside of the city, and the suicide rate among young people fucking doubles. The recession pounded rural communities, but all the recovery went to the cities. The rate of new businesses opening in rural areas has utterly collapsed.

So the argument is that the media elite missed a key part of the story because they didn’t have enough insight into the rural heartland; that they sent reporters in to report, but largely as anthropological expeditions rather than as genuine explorations Read More…

Posted by: structureofnews | October 31, 2016

Just The Facts

Coming late to this – but then again, I’ve been late to post pretty much all of this year, so what’s new? – but I wanted to flag Google’s new “fact check” links that come up in Google News.

This is huge – or yuge! – news for both consumers of news and for structured journalism more broadly.

Why, you ask? It’s just a new kind of link.

Well, yes. But it’s three other things as well.

First, it’s a link driven by Google, which means millions – hundreds of millions – of people will see it and use it, and hence drive up the value and importance of fact-checking, at least in theory.

Second, it stems from a recognition by Google – or at least I hope it does – that people’s news needs aren’t driven solely by the freshest story on a subject, but also by a desire to understand that subject in context. That explains, to some extent, why Wikipedia has become a real destination for news searches, and it certainly pushes the value of depth rather than just speed. (Not that speed doesn’t matter as well, of course.)

And third, by highlighting only the fact checks that conform to a certain schema, Google is rewarding the notion of structured journalism, and using the best of what the idea has to offer: Building greater long-term value out of structuring the information journalists collect, analyze and publish every day.
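
(For the curious, here’s roughly what that structure amounts to: a rough Python rendering, with invented values, of the kind of fields a fact check has to expose, loosely modelled on schema.org’s ClaimReview markup, which is the sort of scheme Google keys on. Don’t treat the field names as the exact specification.)

```python
# Roughly the kind of structured record a fact check has to expose for a
# machine to surface it. Modelled loosely on schema.org's ClaimReview
# markup; field names and values here are illustrative, not the spec.
fact_check = {
    "claim_reviewed": "Candidate X was endorsed by the Pope",    # invented claim
    "claimant": "viral Facebook post",
    "rating": {"value": 1, "best": 5, "label": "False"},
    "reviewer": "Example Fact-Check Org",                        # hypothetical org
    "url": "https://example.org/fact-checks/pope-endorsement",   # hypothetical URL
    "date_published": "2016-10-31",
}

# Because every fact check carries the same fields, a search engine can
# match a query against "claim_reviewed" and show "rating" alongside the
# link - which is the long-term value of structuring what journalists report.
print(fact_check["rating"]["label"])
```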

To be sure, some don’t see that as an advantage, as this piece from Slate suggests:

Google seems to have a somewhat narrow view of fact-checking journalism, one that defines it by form as much as by function. It will likely leave out plenty of stories that could merit the tag, while including some others that might not. At least at first, it seems to be surfacing stories mainly from dedicated fact-checking organizations, such as Politifact, rather than articles from mainstream news organizations.

And it’s true that there are fact checks embedded in all sorts of types of journalism that won’t be surfaced by this new link.  On the other hand, it’s just as likely that Read More…

Posted by: structureofnews | August 25, 2016

Biggest Poll Ever – And More

So we launched the Reuters/Ipsos States of the Nation project today.

It’s the biggest presidential tracking poll ever – at least as far as we can figure, and with upwards of 15,000 people surveyed every week, we’re reasonably confident we can make that claim.  But it isn’t just a huge poll, cool though that is.

(And one of the coolest things about a poll this size is that it means we can regularly look at results at the state level, where the U.S. Presidential election is really decided.)

It’s built around the idea that polling accuracy – and election results – hinge on good estimates and predictions of actual turnout on polling day.  So while every pollster has their own model for what percentage of each demographic group will show up to vote – as do we – the site we’ve built lets users create demographic groups on the fly, adjust their predictions of turnout for each group, and see how that would impact the results of the election at the state level, and hence overall in the Electoral College.

So if you think less-wealthy white men will turn out in droves on election day, amp up their predicted turnout and see how the election will turn out.  What if younger voters stay home?  What if women couldn’t vote?  What if only women voted?  What about Hispanic women aged 18-30, making less than $25,000 a year and identifying as Democrats?

(OK, so you can actually create that last filter, but then again that’s not a huge slice of the population, so I’m not sure changing their turnout is going to materially affect the election.  But it’s great that you can do that.)
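
(And if you want a sense of the mechanics under the hood, here’s a toy version, with entirely invented numbers and nothing like the real survey weighting, of what adjusting a group’s turnout and re-projecting a state does.)

```python
# Toy what-if: adjust one demographic group's assumed turnout and see how
# the projected result in a (made-up) state shifts. All numbers invented.
STATE_POLL = [
    # (demographic group, share of the adult population, % supporting Clinton)
    ("college_women",     0.28, 0.62),
    ("college_men",       0.22, 0.48),
    ("non_college_women", 0.26, 0.51),
    ("non_college_men",   0.24, 0.36),
]

def project(turnout_by_group):
    """Project Clinton's two-party share under the given turnout assumptions."""
    clinton = total = 0.0
    for group, pop_share, clinton_share in STATE_POLL:
        votes = pop_share * turnout_by_group[group]
        total += votes
        clinton += votes * clinton_share
    return clinton / total

baseline = {group: 0.55 for group, _, _ in STATE_POLL}
print(f"Baseline turnout everywhere: {project(baseline):.1%} Clinton")

# What if non-college men turn out in droves?
surge = dict(baseline, non_college_men=0.75)
print(f"Non-college-men surge:       {project(surge):.1%} Clinton")
```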

Go ahead.  Check it out.  I’ll wait.

(I’ll be reading the nice CJR piece about it while I wait. Had to get Read More…

Posted by: structureofnews | July 4, 2016

What’s The Use?

Very (very) belatedly – a report from NICAR in Denver, which was a long time ago now.

That was in March. David Caswell, Jacquie Maher and I had a really good, well-attended session on structured journalism – a sign, I hope, that the concept is gaining traction in newsrooms, or at least among the nerdier of the journalism community.

And if the lively discussion at the session, and then at an evening drinks session afterwards, was anything to go by, there’s cause for some optimism. Not that it’s all smooth sailing from here – and certainly one of the bigger questions we have to address as more people try to implement structured journalism sites is: What’s the use? As in: What use do you want the site to serve?

That’s not a question that’s unique to structured journalism, of course – all news organizations need to think about who their intended audience is, and what they bring to them. But structure brings with it much more, well, structure – and that means trying to solve those questions much earlier.

But first, to go back to the session for a minute. It featured a good mix: David talked about the ambitious goals of his structured stories software and template, and how it fared in actual coverage, and Jacquie shared the progress she’s made in developing more standardized templates for turning out BBC stories and explainers on key topics. And I just tried to keep up with them.

The room was pretty full, and there was no shortage of comments and questions from the floor, so towards the end of the session, we extended an invite to get together in the bar (where else?) in the evening to keep the discussion going. By the time I got there, a couple of people had already gathered around David, and before too long there was a solid core of about a dozen of us around a table – sharing ideas for projects, discussing challenges they’d faced.

But a key question that kept surfacing – nicely framed by Jonathan Stray – was about use cases, and how tightly to define them before you set up shop. And it’s a key issue, I think: Deciding what topics and questions we want to throw a light on, designing our information structure for that – and shedding everything else.

It’s a tough thing to do, because we all naturally want to preserve as much flexibility as possible. But – at least so far – it’s very hard to build and maintain Read More…
