Posted by: structureofnews | August 12, 2010

Welcome

Aaah – another site about The Future of Journalism.

A dull one.  Without the  invective and ideology about free vs. paid, pajama-clad bloggers vs. stick-in-the-mud mainstream media curmudgeons, and Utopian visions of crowdsourced news vs. dark fears about falling standards you can find elsewhere.  It has words like taxonomy and persistent content in it; discusses business models and revenue streams in dull, accountant-like language; and tries to dissect the sparkling prose journalists turn out into tiny bytes of data.

But there is a purpose here, and it’s based around the idea that we as journalists haven’t really thought about how people are changing the way they access information, or how we need to fundamentally rethink the way we carry out journalism and the kinds of – for want of a better word – products we turn out for them.

There’s much hand-wringing over the loss of the traditional business model of news, it’s true.  Perhaps too much.  And this site will contribute its share.  But hopefully it’ll also explore some of the less-explored questions about where the profession goes in a digital age.   And lay out some of the thinking behind one concrete idea that might help move the business forward: Something I’m calling Structured Journalism.

So, welcome – and I hope you find this interesting.

Posted by: structureofnews | May 8, 2017

For Or About?

News Jobs

There was a fascinating analysis in Politico recently about how and why the media missed the support for Donald Trump in America’s heartlands. Written by Jack Shafer and Tucker Doherty, it’s a smart look at where journalists have increasingly congregated in the digital age, and – no surprise – it’s not in red states.

And if you’re not where the story is, you’ll miss the story.  And so we all did.

But that’s only half the story.  It’s not just that we – the media – missed the story; we also missed the audience.  And that’s probably as important an issue as how well the country is covered: Do we need better coverage about a group, or better coverage for that group?

In other words, it’s great if major news organizations spend the time and resources to cover the fears, dreams and drivers of rural and rust-belt voters, and so better inform their readers in New York, London or wherever.  But that’s not the same as covering those communities for people in those communities, who doubtless have a whole bunch of issues they care about that people in far-off cities don’t.

To be sure, it’s not the job of the New York Times, or Washington Post, or Guardian, to reach rural readers in Wyoming, and it’s unfair to expect them to do so.  Read More…

Posted by: structureofnews | April 15, 2017

Off to SABEW

A quick post: I’m off to Seattle in two weeks for the Society of American Business Editors and Writers annual conference, and to talk on a panel about automation and the newsroom.

Also on the panel – titled “Robots Will Change Your Newsroom. Are You Ready?” – will be Robbie Allen of Automated Insights and Lisa Gibbs of the AP.  Should be lots of fun, and interesting.  The main point I want to make is that we ought to be looking at the kinds of stories that machines can do well, rather than trying to make machines do what humans do well.  

(Plus, I’ve never been to Seattle, and it looks like a fun town.)

It’s on at 3:45 pm on Friday, April 28.  Come by if you’re in town.  (And, there’s a reception sponsored by Reuters that evening.  Free drinks!)

Posted by: structureofnews | April 8, 2017

Navigating The News

How do you find the news? Or, more to the point, how do you navigate the news?

It’s one thing to be alerted to breaking news or interesting stories – there are recommendations from friends via social media, news alerts on your phone, and other systems that let you know when something interesting is happening.  But what if you want to explore the news, tracking down threads from one story to the next, or pieces that contradict the one you just read, or simply content that’s related?

There are recommendation engines, of course, and some of them even work reasonably well.  But as I noted in a piece way back in 2011, most of them come to you in the form of lists.  Which are great as a means of sorting information, but certainly aren’t the only way to help you understand how stories might be related or how to explore particular trains of thought.

So a new navigation page from the Huffington Post – the Flipside – is an interesting experiment in seeing how people will take to a non-list form of exploring the news.  It’s built around the idea that you can navigate stories by topic and news organization, laid out on a matrix that places them based on how liberal or conservative they are, and how trustworthy or untrustworthy they are.

As the HuffPo notes in a blog post introducing the idea,

The idea is simple: Use this tool to explore the diversity of stories trending on Twitter at any given time on a handful of topics. We’ve chosen to follow links from 14 publications, some mainstream, some from the edges of the political spectrum.

To be sure, while it’s a smart, innovative design, there aren’t a lot of surprises built into the current implementation of the page: Most of the mainstream news organizations it carries are clustered in the trustworthy/liberal end of the scale, with Breitbart occupying the untrustworthy/conservative world by its lonesome.  The value is in the headlines that pop up on the side when you click on a news organization, in the form of a – yes – list.  And to be fair, the point was really to make readers aware of the spread of stories from a range of media on some key topics, and in that regard it works well.
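
By way of illustration, here’s a very rough sketch of the kind of data structure such a page implies – publications given coordinates on lean and trust axes, and trending stories filtered by where their outlet sits on the matrix. The publication names, scores and headlines below are invented, and this is in no way the Huffington Post’s actual implementation.

```python
# Invented sketch of a two-axis news navigator: publications get
# (lean, trust) coordinates, and trending stories can then be pulled
# from any region of the matrix. Names, scores and headlines are made up.

PUBLICATIONS = {
    # name: (lean: -1.0 liberal .. +1.0 conservative, trust: 0.0 .. 1.0)
    "Publication A": (-0.5, 0.9),
    "Publication B": (0.2, 0.8),
    "Publication C": (0.9, 0.2),
}

TRENDING = [
    # (publication, topic, headline)
    ("Publication A", "healthcare", "Senate weighs revised health bill"),
    ("Publication C", "healthcare", "Health bill fight heats up"),
    ("Publication B", "immigration", "Court reviews travel order"),
]

def stories_in_region(topic, lean_range, trust_range):
    """Return trending headlines on a topic from outlets that sit inside
    a given rectangle of the lean/trust matrix."""
    lean_lo, lean_hi = lean_range
    trust_lo, trust_hi = trust_range
    hits = []
    for pub, story_topic, headline in TRENDING:
        lean, trust = PUBLICATIONS[pub]
        if (story_topic == topic
                and lean_lo <= lean <= lean_hi
                and trust_lo <= trust <= trust_hi):
            hits.append((pub, headline))
    return hits

# e.g. conservative-leaning coverage of healthcare, at any trust level:
print(stories_in_region("healthcare", (0.0, 1.0), (0.0, 1.0)))
```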

But there are clearly many other uses for such a layout, Read More…

Posted by: structureofnews | March 31, 2017

What Can’t Google Do?

What can’t Google do?  And why does it matter?

The subject came up while I was talking to a journalism class the other day, discussing an upcoming site that would collate and curate information about a particular topic, bringing in context, related information, documents, and background.  There wouldn’t be a huge amount of original content, with most of the focus being on collecting and organizing information from elsewhere.

To which one student asked: Can’t I get all of that from Google?

And it’s true, you can.  To be sure, there’s already a huge amount of value in curating, verifying and organizing information that’s easily available via a Google search.  Just sifting through the flood of links that come back on any search is a massive public service.  But it does raise the question: How different do you have to be from a Google search to really add value?  Do you have to add a lot more original content – and even if you do, isn’t that content then just available on Google as well?

Or, to put it another way, what is it that Google can’t do that a site – even one that doesn’t create its own content – can do?

And the answer is: A fair amount.  (At least so far, until a bunch of PhDs at Google put their minds to it.)  Google can’t really add or subtract, for example, so if you store information in a structured form, a la Politifact’s fact checks, you can create a page that summarizes and counts how many times Donald Trump has uttered falsehoods.  Google can’t do that.  (Or rather, a Google search can’t return that number.)

Google doesn’t do timelines all that well, either, since it generally prioritizes the most recent story on any given topic.  But if you want to track an event or an issue over time, that’s not too helpful.  So by ensuring that dates and summaries are part of the information structure of a site, it’s relatively easy to generate Read More…
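
To make both points concrete, here’s a minimal sketch – assuming a hypothetical set of structured fact-check records, with invented fields, ratings and dates – of the falsehood-count page and the timeline that fall out of that structure almost for free.

```python
# Hypothetical structured fact-check records; the fields, ratings and
# entries are invented, loosely modeled on a Politifact-style scheme.
from collections import Counter

FACT_CHECKS = [
    {"speaker": "Speaker X", "claim": "Claim A", "rating": "False", "date": "2017-01-15"},
    {"speaker": "Speaker X", "claim": "Claim B", "rating": "Mostly False", "date": "2017-02-03"},
    {"speaker": "Speaker X", "claim": "Claim C", "rating": "True", "date": "2017-02-20"},
]

FALSE_RATINGS = {"False", "Mostly False", "Pants on Fire"}

def falsehood_count(records, speaker):
    """The 'Google can't add' page: how many of a speaker's checked claims were rated false."""
    return sum(1 for r in records
               if r["speaker"] == speaker and r["rating"] in FALSE_RATINGS)

def timeline(records, speaker):
    """The timeline page: the same records, sorted by date
    (ISO dates sort correctly as plain strings)."""
    return sorted((r for r in records if r["speaker"] == speaker),
                  key=lambda r: r["date"])

print(falsehood_count(FACT_CHECKS, "Speaker X"))   # -> 2
print(Counter(r["rating"] for r in FACT_CHECKS))   # breakdown by rating
for entry in timeline(FACT_CHECKS, "Speaker X"):
    print(entry["date"], entry["rating"], entry["claim"])
```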

Posted by: structureofnews | January 30, 2017

Friends And Enemies

Belatedly – why do so many of my posts involve use of the word “belatedly,” I wonder? – here’s a quick post about a research study that was written up in the New York Times in December.  (Yes, I know that was last year.)

It’s about implicit (and hidden) bias – natural prejudices that we all have (and are often blind to), no matter how open-minded or broad thinking we think we are. It’s an important issue, which I’ve touched on before, and especially now for journalists in an increasingly polarized world where it can be tempting to take a side and see proponents of other views in a less-than-flattering light.

The bad news is that we can’t really avoid our internal biases.  The good news is that, if we work at it, we can mitigate their effects.

The study itself is pretty straightforward:  Experimenters set up a game with volunteers, who would then witness what they thought was another player cheating.  The volunteers then got to decide how harsh a punishment should be meted out to the “cheaters.”  The idea was to see if the volunteers would punish wrongdoers more harshly if they thought they were from a different group – say, supporters of a rival sports team – than if they thought they were “one of them.”

And what happened?  You can pretty much guess the result.

When people made their decisions swiftly — in a few seconds or less — they were biased in their punishment decisions. Not only did they punish out-group members more harshly, they also treated members of their own group more leniently. The same pattern of bias emerged in a pair of follow-up experiments in which we distracted half of the punishers.

Well, duh.  But the fact that this kind of tribalism is deeply ingrained in humanity doesn’t make it any better.  And for journalists, whose job is often to assess people’s credibility quickly and make decisions about how the information they provide will be used – in other words, to figure out how much to trust someone and how to treat them in a story – this should be sobering news.

We often think we’re unbiased or objective, but what the experiment shows is that we can be easily swayed by how similar we think the person we’re interviewing is to us.

But there is good news: Read More…

Posted by: structureofnews | January 5, 2017

Seeing Patterns

What are machines good at, what are people good at, and how can we get the most out of pairing the best of both worlds?

It’s not like I haven’t written about this topic before – not least about not trying to make machines be poor copies of humans – but it’s just that the list of things machines are good at keeps getting longer.  (And so maybe we should be thinking of how to make them better copies of humans, and worry about what jobs will go away – but that’s the subject for another post.)

Exhibit A in how-machines-are-getting-better-at-more-things is an excellent NYT Magazine piece from a couple of weeks ago by Gideon Lewis-Kraus, entitled The Great A.I. Awakening. If you haven’t read it yet, you should stop here and read it.  It’s very good.

At a basic level, it’s the story of how Google used neural networks to push the quality of Google Translate to an astonishingly good level, and in a very short period of time. But the broader story is about how neural networks – essentially, systems for recognizing patterns – have come into their own, and are powering machines to do a host of things never before thought possible: High-quality translations, image recognition, and so on.

And if they can do all those things well – imagine what they could do for journalism.

Just look at how good Google Translate has become with the help of the neural network built by a team called Google Brain.  First, a passage from Ernest Hemingway’s “The Snows of Kilimanjaro,” translated to Japanese and then back to English via Google Translate, pre-neural networks:

Kilimanjaro is 19,710 feet of the mountain covered with snow, and it is said that the highest mountain in Africa. Top of the west, “Ngaje Ngai” in the Maasai language, has been referred to as the house of God. The top close to the west, there is a dry, frozen carcass of a leopard. Whether the leopard had what the demand at that altitude, there is no that nobody explained.

And now after neural networks:

Kilimanjaro is a mountain of 19,710 feet covered with snow and is said to be the highest mountain in Africa. The summit of the west is called “Ngaje Ngai” in Masai, the house of God. Near the top of the west there is a dry and frozen dead body of leopard. No one has ever explained what leopard wanted at that altitude.

That’s pretty damn good.

And as Gideon notes, it’s not simply about translation:

Once you’ve built a robust pattern-matching apparatus for one purpose, it can be tweaked in the service of others. One Translate engineer took a network he put together to judge artwork and used it to drive an autonomous radio-controlled car. A network built to recognize a cat can be turned around and trained on CT scans — and on infinitely more examples than even the best doctor could ever review. A neural network built to translate could work through millions of pages of documents of legal discovery in the tiniest fraction of the time it would take the most expensively credentialed lawyer. The kinds of jobs taken by automatons will no longer be just repetitive tasks that were once — unfairly, it ought to be emphasized — associated with the supposed lower intelligence of the uneducated classes. We’re not only talking about three and a half million truck drivers who may soon lack careers. We’re talking about inventory managers, economists, financial advisers, real estate agents. What Brain did over nine months is just one example of how quickly a small group at a large company can automate a task nobody ever would have associated with machines.

And worrying about those jobs, and those people who will be automated out of them, is an important, pressing issue.  But so too is thinking about how best to harness all this new-found capability in the service of journalism Read More…

Posted by: structureofnews | December 22, 2016

A Modest Proposal

So Facebook waded into the fact-checking space – more precisely, dipped a toe into the fact-checking pond – last week with an announcement that it would work with a coalition of organizations to tag some articles as “disputed” on its platform.

It’s a small but important step in the campaign to address the much-misnamed “fake news” problem, and follows Google’s move not long ago to promote fact checks in search results.

Which leads to a thought: If we can flag falsehoods when we search for or share news articles, why can’t we do that at an even earlier stage – when people are writing the posts that they plan to share? Here’s what I mean:

(But first, a tip of the hat to friend and colleague Jeremy Wagstaff, who sparked off the thoughts below.  You can also blame him if you think it doesn’t make any sense.)

Clearly, Google’s search algorithm has the ability to figure out what information people are looking for, then hunt through the thousands of fact checks conforming to an agreed-upon scheme from a set of trusted organizations, and present that information along with other search results.  As I noted in an earlier post, that’s significant because it means millions of people will at least see whether a claim has been rated true or false. (Facebook’s system seems to rely more on whether its users dispute an article, which then triggers fact-checking by the coalition – a more manual, less scalable approach.)

So if Google can figure out by the words you’re typing what fact checks to show you, why can’t we harness that in a platform such as, say, Facebook, to bring up those fact checks just as you’re about to type out something like “Look – even the Pope endorses Trump” in your feed?  (Leaving aside the fact that Read More…
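
To sketch the idea – and only sketch it; the claim records below are invented, and real claim-matching would be far smarter than this naive word overlap – something like the following could surface a relevant fact check while a post is still being drafted:

```python
# Purely illustrative: surface fact checks as a user drafts a post, using
# naive word overlap. The claim records are invented, and real systems
# would use far more sophisticated claim matching.

FACT_CHECKS = [
    {"claim": "the Pope endorsed Donald Trump for president", "rating": "False"},
    {"claim": "millions of people voted illegally", "rating": "Unproven"},
]

STOPWORDS = {"the", "a", "an", "and", "or", "for", "of", "to", "in", "even"}

def content_words(text):
    tokens = (w.strip(".,!?-–\"'").lower() for w in text.split())
    return {w for w in tokens if w and w not in STOPWORDS}

def matching_fact_checks(draft, min_overlap=2):
    """Return fact checks sharing at least `min_overlap` content words with the draft."""
    draft_words = content_words(draft)
    return [fc for fc in FACT_CHECKS
            if len(draft_words & content_words(fc["claim"])) >= min_overlap]

draft = "Look – even the Pope endorses Trump"
for fc in matching_fact_checks(draft):
    print(f'Related fact check ({fc["rating"]}): {fc["claim"]}')
```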

Posted by: structureofnews | December 1, 2016

Innovation at Reuters

Just an entirely self-serving shout-out to the nice CJR piece by Jonathan Stray about some of the innovations going on at Reuters, including our Automation For Insight project and Reuters News Tracer – a cool new tool that detects newsworthy events on social media and assigns a confidence score assessing how credible they are.

And as a two-fer, we also got a nice piece about News Tracer in Nieman Lab as well.

Not bad for a single day.

(And as a complete side note, if you haven’t been reading Jonathan’s blog, you should.  There’s some really good stuff there.)

Basically – and you can read the pieces for more description – what News Tracer does is find clusters of tweets, clean out spam and other dross, figure out which clusters are “newsworthy,” at least as mainstream news organizations define it, separate assertions of opinion from assertions of fact, and then compute a score for the credibility of the cluster.
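
Purely as an illustration of the shape of such a pipeline – this is emphatically not the actual News Tracer code, and every threshold, weight and helper below is invented – the stages might be sketched like this:

```python
# Toy sketch of the stages described above: drop spammy clusters, test the
# rest for "newsworthiness," then score credibility from a basket of factors.
# All logic, thresholds and weights are invented; this is not News Tracer.

from dataclasses import dataclass
from typing import List

@dataclass
class Cluster:
    tweets: List[str]
    independent_sources: int
    has_verified_account: bool
    spam_ratio: float            # 0.0 .. 1.0, from some upstream classifier
    credibility: float = 0.0

def looks_newsworthy(cluster: Cluster) -> bool:
    # stand-in for a trained model encoding what editors would call "news"
    return cluster.independent_sources >= 2 and len(cluster.tweets) >= 5

def credibility_score(cluster: Cluster) -> float:
    # a weighted "basket of factors"; the weights are purely illustrative
    score = 0.0
    score += 0.4 if cluster.has_verified_account else 0.0
    score += 0.1 * min(cluster.independent_sources, 5)
    score += 0.1 if cluster.spam_ratio < 0.2 else 0.0
    return round(min(score, 1.0), 2)

def alerts(clusters: List[Cluster]) -> List[Cluster]:
    """Run the pipeline; the opinion-vs-fact separation step is omitted here."""
    out = []
    for c in clusters:
        if c.spam_ratio > 0.5:   # invented spam threshold
            continue
        if looks_newsworthy(c):
            c.credibility = credibility_score(c)
            out.append(c)
    return out
```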

Loads of kudos to the Thomson Reuters R&D team, and especially Sameena Shah, who led the development team that solved a whole host of very interesting algorithmic challenges over a two-year period.  As Jonathan notes:

Newsroom standards are rarely formal enough to turn into code. How many independent sources do you need before you’re willing to run a story? And which sources are trustworthy? For what type of story? “The interesting exercise when you start moving to machines is you have to start codifying this,” says Chua. Much like trying to program ethics for self-driving cars, it’s an exercise in turning implicit judgments into clear instructions.

Sameena’s team did really smart work figuring out – with help from the newsroom – what “newsworthiness” means, and also how to pull together a basket of factors to help assess credibility.  It’s a never-ending iterative process, of course, but they’ve built up a very impressive capability that extends the reach of the newsroom, improves its speed, and frees reporters up to do more value-added work.

What’s not to like?


Posted by: structureofnews | November 28, 2016

Mea Culpa – And To Making Better Models

This is more than a little belated, so apologies for that.

So the polls got it wrong.

Hillary Clinton did not, as it turns out, have a 75% or 80% or 90% or 99% chance of winning the election, as many of us – including our massive States of the Nation project – predicted up until Election Day.

(You’ll see that we’ve now updated our analysis to reflect the actual turnout, which shows that Donald Trump’s odds of success would have been 75%.  Kinda late, it’s true, but it does validate the underlying assumptions behind the project.)

So what happened?  And given Brexit, and Colombia, what does this mean for polls more broadly?  And how do we do better in the future?

(A shout out/caveat: I’m cribbing heavily from Mo Tamman’s smart analysis of what went wrong in the polling this year.  Here’s also another nice, nuanced piece about where polls went wrong.)

To be fair, the polls weren’t actually that far off.  Seriously.  At least the national polls. They showed Clinton leading by a couple of percentage points, and as the final results trickle in, indeed the former Secretary of State is leading in the popular vote. It’s true that the polls may have overstated her appeal, but broadly the numbers are within the margin of error, especially if you assume the likely voter models are somewhat off.

Which they were.  And that’s one critical place where all the polls fell down.  And again, to be fair, predicting who will or won’t go to the polls – a once every two- or four-year event – is a tricky exercise at best.  But it doesn’t help that we tend to present the numbers we come up with as absolutes, rather than give a range of possible outcomes based on a range of possible turnout models.  As we noted when we launched the Read More…
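
Here’s a trivial sketch of what presenting a range of possible outcomes based on a range of turnout models might look like – the support numbers, demographic groups and turnout shares are all invented:

```python
# Invented illustration: the same poll numbers imply different margins under
# different turnout models, so report the range rather than a single number.

# poll support among two hypothetical demographic groups
SUPPORT = {
    "group_a": {"candidate_1": 0.58, "candidate_2": 0.42},
    "group_b": {"candidate_1": 0.44, "candidate_2": 0.56},
}

# competing turnout models: each group's share of the electorate
TURNOUT_MODELS = {
    "high group_a turnout": {"group_a": 0.55, "group_b": 0.45},
    "even turnout": {"group_a": 0.50, "group_b": 0.50},
    "high group_b turnout": {"group_a": 0.45, "group_b": 0.55},
}

def projected_margin(turnout):
    """Candidate 1's projected margin over candidate 2 under one turnout model."""
    c1 = sum(share * SUPPORT[g]["candidate_1"] for g, share in turnout.items())
    c2 = sum(share * SUPPORT[g]["candidate_2"] for g, share in turnout.items())
    return c1 - c2

for name, model in TURNOUT_MODELS.items():
    print(f"{name}: candidate 1 by {projected_margin(model):+.1%}")
```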

Posted by: structureofnews | November 23, 2016

Content To Distribute

So do we have a journalism problem or a distribution problem?

There’s no question that much of the media missed much of the story of Trump’s rise to power – that’s a journalism problem.  But it’s also clear that even all the great journalism about the campaign – and there was a lot of it – didn’t really factor hugely in many voters’ minds.  And that’s a distribution problem.

That’s not the same as the “fake news” issue – which is yet another real problem. This is really about how, in addition to upending business models, the new digital landscape is also upending the media’s ability to get quality journalism in front of audiences.  And that’s at least as big an issue as fake news.

Not that I have any proposals for solutions; but I thought it might be helpful to try to disaggregate the two issues of journalism and distribution, and point to different groups or approaches that should be tackling them.

To be sure, we all could – and should – do better journalism, and certainly much of the media dropped the ball in this election.  And fake news is a real problem.  But let’s focus for the moment on our distribution problem, our dependence on external platforms to get real news to readers, and the filter bubbles that they inhabit.

As Josh Benton put it very nicely in a Nieman Lab piece right after the elections:

In a column just before the election, The New York Times’ Jim Rutenberg argued that “the cure for fake journalism is an overwhelming dose of good journalism.” I wish that were true, but I think the evidence shows that it’s not. There was an enormous amount of good journalism done on Trump and this entire election cycle, from both old-line giants like the Times and The Washington Post and digital natives like BuzzFeed and The Daily Beast. …

The problem is that not enough people sought it out. And of those who did, not enough of them trusted it to inform their political decisions. And even for many of those, the good journalism was crowded out by the fragmentary glimpses of nonsense.

And tackling this issue isn’t really something that individual journalists – or even large news organizations – are equipped to do well.

There’s an analogy here – a medical one – that I think bears on this.  Stay with me. Read More…
