Remember this advice from way back about writing a story? “Just imagine you’re in a bar, telling someone what happened today….”
Oh, sure. Of course that’s how stories are actually written. Can you imagine actually reading out a newspaper story to someone at a bar and pretending that’s how you talk?
But maybe – in a back-to-an-imagined-past-and-a-new-future kind of way – that’s precisely where news is going. Google just unveiled Duplex, a new voice-controlled AI capability, and it’s… well, amazing, frightening, creepy, and revolutionary. All at the same time. And potentially game-changing for journalism, and possibly the business of journalism as well.
Listen to this: (scroll down a little and click on the first two audio clips.) Go ahead, I’ll wait – it’s essential listening for the rest of this post.
Amazing and creepy, or what? To be sure, this isn’t an all-purpose HAL-like device that can decide to lock you outside the spaceship without a helmet. Yet. As The Verge notes:
Initially Google Duplex will focus on three kinds of task: making restaurant reservations, scheduling hair appointments and finding out businesses’ holiday opening hours.
But you could easily imagine how it becomes less of an assistant for performing tasks, and more of a way for users to explore and get the news or information they want. Or as another story on The Verge points out:
The more technology advances, the clearer it becomes that our smartphones are no longer about conversing but more about transfers of information.
And what is news but the transfer of relevant, useful information? But wait, you say: Don’t we already have news on Alexa and other voice-controlled devices?
Sure we do. But much of voice-enabled news so far has been built around either summarizing the top stories of the day, or pulling together data points – such as stock market performance – and generating sentences. The quality of Duplex – at least as seen in the demo – opens up the possibility of much more fully interactive explorations of information. Much like that mythical chat at the bar.
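To make the contrast concrete, here is a minimal sketch of the kind of data-to-sentence generation today’s voice news mostly relies on: take a couple of data points and fill slots in a canned template. The function name and the numbers are invented for illustration.

```python
# Template-driven voice news: data points in, one spoken sentence out.
# This is the "pulling together data points and generating sentences"
# approach described above, reduced to its simplest form.

def market_summary(index_name: str, change_pct: float) -> str:
    """Fill a slot-based template from two data points."""
    direction = "up" if change_pct >= 0 else "down"
    return f"The {index_name} was {direction} {abs(change_pct):.1f} percent today."

print(market_summary("Dow", 1.0))     # "The Dow was up 1.0 percent today."
print(market_summary("Nasdaq", -0.4))
```

It works, but it can only answer the question the template anticipated – there is no follow-up, no sidebar, no conversation.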
“So what happened today?”
“Well, a bunch of papers reported that Michael Cohen got a lot of money from companies that wanted to understand the new administration.”
“Wait – who’s Michael Cohen?”
“He’s the President’s personal lawyer.”
“Wasn’t he the guy that was involved in paying Stormy Daniels?”
“That’s the one.”
“So what happened next?”
“Well, Michael Avenatti – you know, Stormy Daniels’ lawyer – released this document that laid out all these payments to an account that Cohen controls.”
“How did he get access to that?”
“No one seems to know.”
“And then what happened?”
And so on. OK, so this is made up. (Not the news, just the conversation.) But as a way of giving people the information they need – letting them steer into sidebars and then return to the main narrative, skipping over facts they already know – there’s much to recommend a system like this, invented as it is.
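The bar-chat flow above can be sketched as a tiny dialogue loop: a main narrative the listener steps through, plus “who is X?” sidebars that answer without losing the listener’s place in the story. Everything here – the data structures, the question parsing – is purely illustrative.

```python
# A made-up sketch of the conversation above: sidebar questions answer
# from a lookup, "what happened next?" advances the main thread, and the
# listener's position in the narrative is preserved across sidebars.

MAIN_THREAD = [
    "A bunch of papers reported that Michael Cohen got a lot of money "
    "from companies that wanted to understand the new administration.",
    "Michael Avenatti released a document that laid out payments to an "
    "account that Cohen controls.",
]

SIDEBARS = {
    "Michael Cohen": "He's the President's personal lawyer.",
    "Michael Avenatti": "He's Stormy Daniels' lawyer.",
}

def chat(questions):
    """Answer each question; sidebars don't lose our place in the story."""
    position, replies = 0, []
    for q in questions:
        if q.startswith("who is "):
            name = q[len("who is "):].rstrip("?")
            replies.append(SIDEBARS.get(name, "No one seems to know."))
        else:  # treat anything else as "what happened next?"
            replies.append(MAIN_THREAD[position])
            position += 1
    return replies

for line in chat(["what happened?", "who is Michael Cohen?", "what happened next?"]):
    print(line)
```

The point of the sketch is the state: unlike a template, the system remembers where you are in the narrative while you detour.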
It’s this kind of personalization that’s been tried in one form or another – the late-lamented Circa, or Google’s Living Stories – and something I’ve written a fair amount about, both as where journalism needs to go and as something structured journalism can help power. It moves us away from the notion that the story is the basic building block of journalism, and toward a world where we focus on facts, context, and evolving different narratives. And think much more about product.
Imagine a stock market report like this:
“How did I do in the market today?”
“Well, the Dow was up 1 percent, but your portfolio was down 1 percent.”
“Damn!”
“Sorry. Your media stocks tanked.”
“I knew loading up on them was a bad idea.”
“Well, if you had hung on to your Google shares instead of selling them in February, you would have been up 2 percent.”
The analysis needed to come up with simple counterfactuals like that isn’t all that hard – and in fact, is something we’re working on at Reuters.
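As a hedged sketch of how simple that counterfactual really is: compare a portfolio’s actual return with what it would have been had one trade not happened. All the tickers, prices, and holdings below are invented; only the arithmetic is the point.

```python
# Counterfactual portfolio analysis: the same return calculation run
# twice, once on what you actually held and once on what you would have
# held had you not sold. All numbers here are hypothetical.

def portfolio_return(holdings, prices_then, prices_now):
    """Percent return of a {ticker: shares} portfolio between two snapshots."""
    value_then = sum(shares * prices_then[t] for t, shares in holdings.items())
    value_now = sum(shares * prices_now[t] for t, shares in holdings.items())
    return 100.0 * (value_now - value_then) / value_then

prices_feb = {"GOOG": 100.0, "MEDIA": 50.0}
prices_now = {"GOOG": 110.0, "MEDIA": 45.0}

actual = {"MEDIA": 20}         # sold GOOG in February, loaded up on media stocks
counterfactual = {"GOOG": 10}  # ...had you hung on to your Google shares instead

print(f"Actual: {portfolio_return(actual, prices_feb, prices_now):+.1f}%")
print(f"Kept GOOG: {portfolio_return(counterfactual, prices_feb, prices_now):+.1f}%")
```

The hard part isn’t the math – it’s deciding which counterfactuals are worth telling the reader about, and saying them in plain language.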
So much of this is somewhat fanciful – machines will need to get a lot better at parsing information to pull out short statements that accurately describe Michael Cohen, at understanding how much context you already know, and at filling in the gaps. But what will make it easier is – you guessed it – structured data and information. Rather than have machines – admittedly smarter and smarter machines – pull information out of stories and understand its meaning in context, why couldn’t news organizations help them by providing verified pieces of information in a structured form that applications such as Duplex can interrogate easily – and accurately? And in the process, make money providing that structured information?
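One way that structured feed could look – and this storage scheme is purely illustrative, not any real product of Google’s or anyone else’s – is verified facts stored as subject–predicate–object triples that an assistant queries directly, instead of parsing prose. The facts below paraphrase the story in this post.

```python
# Verified, structured facts an assistant could interrogate directly.
# Subject-predicate-object triples are one common way to structure a
# domain; the exact schema here is an invented example.

FACTS = [
    ("Michael Cohen", "role", "the President's personal lawyer"),
    ("Michael Cohen", "involved in", "paying Stormy Daniels"),
    ("Michael Avenatti", "role", "Stormy Daniels' lawyer"),
]

def who_is(name: str) -> str:
    """Answer 'who is X?' from the structured store, not from a story."""
    for subject, predicate, obj in FACTS:
        if subject == name and predicate == "role":
            return f"{name} is {obj}."
    return f"I don't have a verified fact about {name}."

print(who_is("Michael Cohen"))  # "Michael Cohen is the President's personal lawyer."
```

The accuracy comes from the store being verified by the newsroom up front, so the assistant never has to guess at meaning in context.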
(Google – you know where to contact us!)
True, it would be a huge task to structure the world – but there’s no reason why you couldn’t pick off a domain at a time, such as the stock market, or sports, or polling.
So I admit this is a pretty optimistic view of the world. Maybe this is much harder to do than I’m making out, and it’s many years off. Or maybe we’ll all be locked out of spaceships with no helmets when Skynet comes online. Or get hacked – which, as this scary NYT piece points out, is a very real possibility:
…researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online — simply with music playing over the radio.
So we’re in a wonderful new world of news. Or one where we work for our robot overlords. Or both.