So let’s say structured journalism makes sense. Why torture the newsroom or build elaborate data structures to implement it? We have great new technologies just over the horizon that can do free-text analysis, extract key concepts, names, companies, and throw it all into the right fields.
And I have a bridge to sell you.
The problem is that even the best free-text engines still have trouble getting all the context right; and even if they did, we would still face the question of which taxonomy to use (probably a deeper and more intractable problem than raw machine processing power). And it will likely be more than a few years before the technology really works as promised (look at Generate, Silobreaker and others).
More importantly, even if the technology is close at hand, why not simply deploy the fastest, cheapest computer around on it – the human brain? If journalists are already working on a story, they should be able to catalog and characterize the elements of their story faster and more accurately than any machine. And, if we build the input structure right, they should be able to do it at only a marginal increase in their workload.
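What might that "input structure" look like? A minimal sketch in Python, purely illustrative: the field names and the example values are assumptions, not any real newsroom schema, but they show how a reporter could characterize a story's elements in a few extra minutes.

```python
from dataclasses import dataclass, field

# Hypothetical structured-story record a reporter fills in while filing.
# Field names are illustrative assumptions, not an actual standard.
@dataclass
class StoryEntry:
    headline: str
    summary: str                                          # reporter-written summary
    people: list[str] = field(default_factory=list)       # named individuals
    organizations: list[str] = field(default_factory=list)
    places: list[str] = field(default_factory=list)
    topics: list[str] = field(default_factory=list)       # taxonomy terms

# Example: cataloging a story takes the reporter a few fields, not an NLP pipeline.
entry = StoryEntry(
    headline="City council approves transit budget",
    summary="The council voted 7-2 to fund the downtown bus expansion.",
    people=["Jane Smith"],
    organizations=["City Council"],
    topics=["transit", "budget"],
)
```

The point of the sketch is the division of labor: the machine stores and indexes the fields; the human, who already understands the story, supplies them.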
In other words, why try to code a program that summarizes a story when we can just ask the reporter to do it? True, it would be hard to get the entire Library of Congress done without employing thousands of people, but that is not the point of the exercise; the point is simply to catalog and structure stories going forward. Maybe over time we can pay thousands of people to summarize old stories – but for the moment, let's focus on building value with new ones.
Sometimes human engineering is a faster, more effective, and cheaper fix than technology.