So I’ve been thinking. (I know – a dangerous activity at the best of times, and ideally mitigated by the ingestion of copious amounts of Islay scotch. But I digress.)
Apropos of – well, a couple of things: talking to Bill Adair and Laura and Chris Amico about the precepts of structured journalism, prepping for a (virtual) panel discussion with them next week in Italy, and talking at length with David Caswell of Structured Stories – it seemed like a good time to revisit the broad underpinnings of the ideas behind this blog and structured journalism. And if it isn’t – well, hey, it’s my blog.
It’s not that the basic ideas I wrote down back in 2010 have changed; they haven’t, really. But the world has, and technology has advanced, which makes more things possible and some of the underlying needs that structured journalism addresses perhaps more pressing.
The core underlying idea remains the same as I articulated it back then:
… that the traditional “push” model of news – we tell you the latest happenings – only serves part of audiences’ needs; increasingly people turn to a “pull” model where they look for up-to-date information on specific subjects when they want it, not when events happen.
But it’s already moved on from there: People not only want news/information when they want it; they want it – and will increasingly expect it – to be personalized to them. Whether that’s stock market reports that are built around their portfolio, analyses of school results that focus on where their kids are educated, sports stories that reflect their level of interest in that team, or just the ability to explore in narrative and visualization forms a wealth of data and information, it’s increasingly possible to develop machine systems that can “write” such stories at speed and scale.
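To make that concrete, here is a minimal sketch of what such a system does at its simplest: filling a narrative template from one well-structured record. The record format, field names, and wording here are all hypothetical illustrations; production story-writing systems are far more sophisticated.

```python
def render_story(record):
    """Fill a simple narrative template from one structured record.

    The record schema (ticker, date, change_pct, close) is a made-up
    example of the kind of verified, structured data a newsroom
    might maintain for personalized portfolio reports.
    """
    direction = "rose" if record["change_pct"] >= 0 else "fell"
    return (
        f"{record['ticker']} {direction} "
        f"{abs(record['change_pct']):.1f}% on {record['date']}, "
        f"closing at {record['close']:.2f}."
    )

# One reader's holding, drawn from the structured database:
holding = {"ticker": "ACME", "date": "2016-03-01",
           "change_pct": -2.5, "close": 41.17}
print(render_story(holding))
# prints: ACME fell 2.5% on 2016-03-01, closing at 41.17.
```

The point of the sketch is that the hard part isn’t the template – it’s having the clean, verified record to feed it.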
Not all stories, of course, and certainly not for all circumstances. But still a far advance on the current one-size-fits-all-audiences model. It seems to me just a matter of time before such personalized reports are available and expected. If.
If there’s enough underlying data and information to create such stories. Data and information that journalists need to go out and report, verify and collate.
Some might argue that, given how much data is being created and dispersed into the world each day, there’s no shortage of information for machines to use to build narratives. And maybe that’s true. But there is – at least currently – a shortage of well-structured, clearly verified information that professional journalists are ideally placed to create – mostly because it’s a by-product of our day-to-day activities.
Structured, verified data allows us to create much more robust, detailed narratives than information gleaned from machine analysis, even of very large document sets.
Focusing on creating that structured data plays to the strengths inherent in large, professional newsrooms – discipline, verification, regularity of updates, etc. That offers some competitive advantage in an age where on any given day someone, somewhere, is likely to do a better story than you or your organization can.
Just as importantly, creating that database allows news organizations to constantly repurpose information gathered long ago into new narratives and stories, both creating new content at low cost and extending the lifespan of the company’s core inventory – facts. Not to mention serving audiences and the public better.
To recap: When I first wrote about structured journalism in 2010, I was focused on things like data visualization as the possible outputs of large amounts of structured information. But technology has advanced significantly, so that personalized narratives, exploratory “counterfactuals” and news-on-demand are fast becoming a reality.
So if anything, the need for robust databases of verified, structured information is even greater now. And our ability to create them – with more tools, more flexible CMSes, fewer legacy cultural inhibitions against trying new things – has never been better.
So what are we waiting for?