Posted by: structureofnews | August 12, 2010

Welcome

Aaah – another site about The Future of Journalism.

A dull one.  Without the  invective and ideology about free vs. paid, pajama-clad bloggers vs. stick-in-the-mud mainstream media curmudgeons, and Utopian visions of crowdsourced news vs. dark fears about falling standards you can find elsewhere.  It has words like taxonomy and persistent content in it; discusses business models and revenue streams in dull, accountant-like language; and tries to dissect the sparkling prose journalists turn out into tiny bytes of data.

But there is a purpose here, and it’s based around the idea that we as journalists haven’t really thought about how people are changing the way they access information, or how we need to fundamentally rethink the way we carry out journalism and the kinds of – for want of a better word – products we turn out for them.

There’s much hand-wringing over the loss of the traditional business model of news, it’s true.  Perhaps too much.  And this site will contribute its share.  But hopefully it’ll also explore some of the less-explored questions about where the profession goes in a digital age.   And lay out some of the thinking behind one concrete idea that might help move the business forward: Something I’m calling Structured Journalism.

So, welcome – and I hope you find this interesting.

(An update: I first wrote those words 11 years ago, and it’s amazing how some of those passionately argued debates – free vs. paid! – have basically gone away.  Which is great.  So I could and should rewrite this intro.  But the third paragraph remains just as valid. Plus, I’m pretty lazy. )

Posted by: structureofnews | February 5, 2023

Right Tool, Wrong Job

There’s a joke about a drunk hunting around on the ground beneath a bright street light, and when asked what he’s looking for, says that he dropped his keys somewhere up the road.  So why is he looking here? “The light is better here,” he says.

OK, so you didn’t come here for the humor, but for the sharp analogies that jokes can prompt (self-referential reference here). Although this one isn’t my analogy: it’s cribbed from AI expert Gary Marcus, a professor at NYU, who used it on a smart podcast with Ezra Klein about the promise and limitations of AI – and about our obsession with ChatGPT and our attempts to have it solve any number of problems, not because it’s good at them, but because the light is so much better under it.

There have been thousands of words written about ChatGPT and its miraculous capabilities, massive shortcomings or apocalyptic dangers. This isn’t one of those pieces.  It’s more about what it doesn’t do well, what the things it does do well could do for journalism, and why we should be looking elsewhere to fill the gaps it can’t.

To be sure, ChatGPT is very good at some things.  It’s an astounding language model, which means it can produce human-sounding text in multiple styles at scale, and will doubtless upturn any profession or industry that requires writing as an output – business executives and copywriters, for example.  That doesn’t mean it’ll put people out of work – although it certainly could; it’s more that people who aren’t great at expressing themselves might get a tool to help them on that front, just as a calculator helped people who weren’t great at doing math with pencil and paper. Given the right prompts, it can turn mediocre ideas into acceptable prose – a low bar, perhaps, but then again lots of writing ain’t Shakespeare. (There’s a whole set of other questions about equity and the new “AI divide” between those who have access to such tools and those who don’t, but that’s a topic for another day.)

And, as I noted in an earlier post, it’s uncannily good at “understanding” questions, and if combined with a good backend search engine, could well revolutionize search as well.  (As Microsoft is doubtless thinking with its investment in OpenAI and integration of ChatGPT into Bing.)

What are its weaknesses?  It can’t do math, for one, as CNET’s ill-fated experiment in ChatGPT-generated stories demonstrated.  And that’s a pretty basic skill for journalism.  Nor is it great at discerning fact from fiction, as any number of people have shown. And while it can create new content in multiple styles, all of it is ultimately based, in some broad way, on words that have been written before.  It isn’t original in that sense. And, per Gary Marcus:

What it’s bad at is abstraction. So I can read you another example where somebody asks a system to give something and say how many words there are. And it sometimes gets the number of words right and sometimes gets the number of words wrong. So the basic notion of counting a number of words is an abstraction that the system just doesn’t get. There are lots of abstractions that deep learning, in fact all abstractions of a certain narrow technical sense, these systems just don’t get at all.

So it’s not all that helpful to criticize it for not being original, for not understanding concepts, or for not performing great (or even mediocre) journalism; that’s not what it’s built to do. After all, you don’t complain that Excel does a bad job of writing stories; it’s not supposed to. At heart, ChatGPT is a language model that does an astoundingly good job at putting words one after another that cohere.  It doesn’t “understand” any of them; it doesn’t analyze them, or facts, per se. It’s taking large amounts of data and predicting, based on the words it has ingested, how to create something new.
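
(A quick illustration, for the technically curious: here’s a toy next-word predictor in a few lines of Python. It’s emphatically not how ChatGPT works under the hood – real systems use giant neural networks over vastly more context – but it captures the basic move: predict what plausibly comes next, with no notion of whether it’s true.)

```python
from collections import Counter, defaultdict
import random

# Toy next-word predictor: count which word follows which in a tiny
# "corpus", then generate text by sampling from those counts. The core
# move is the same as a real language model's: predict a plausible
# next word, with no model of whether any of it is true.

corpus = "the bank denied the loan and the bank approved the loan".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = follows[words[-1]]
        if not options:
            break
        # Pick the next word in proportion to how often it followed this one.
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the bank approved the loan and the bank"
```

Scale that up by a few hundred billion words and a lot of clever architecture and you get fluent, coherent text – but the fluency comes from the statistics, not from any grasp of the facts.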

And when those words are largely accurate, it can give a pretty good answer. But when those words are riddled with inaccuracies, not so much. Journalism, though, is often about new words: new facts, new analysis, and new ideas. It’s math, at one level.  It’s analysis.  It’s inferences and weighing of conflicting statements.  It’s verification of new facts.  Which leaves a system like ChatGPT without a lot of training data to work from.

For that – at least in my limited understanding of AI and technology – you need something more like symbolic logic.  You need a system that can take data, analyze it, look for patterns that are “interesting” or “insightful” and surface them, whether in text or some other format.  That’s what we were building when I was at Reuters with Lynx Insight.  Language generation was the least interesting part of it; what we wanted was smart pattern recognition.  Does this financial data suggest some major shift in company strategy?  Are corporate insiders bailing out?  And so on.
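
(Again for the technically curious: Lynx Insight’s actual code isn’t public, and I’m not reproducing it here. But a minimal sketch of the kind of hand-written rule such a system runs might look like this – the threshold, the field names and the rule itself are all hypothetical, just to show the shape of the thing.)

```python
from dataclasses import dataclass

@dataclass
class InsiderTrade:
    insider: str          # hypothetical fields, for illustration only
    shares_sold: int
    shares_held: int

def flag_insider_selling(trades, threshold=0.25):
    """Flag anyone who sold more than `threshold` of their stake.

    The 25% threshold is invented; a real system would tune and layer
    many such rules, but each one is explicit, inspectable logic.
    """
    alerts = []
    for t in trades:
        if t.shares_held and t.shares_sold / t.shares_held > threshold:
            pct = t.shares_sold / t.shares_held
            alerts.append(f"{t.insider} sold {pct:.0%} of their stake")
    return alerts

trades = [InsiderTrade("CFO", shares_sold=90_000, shares_held=200_000),
          InsiderTrade("CEO", shares_sold=5_000, shares_held=1_000_000)]
print(flag_insider_selling(trades))  # flags the CFO (45%), not the CEO (0.5%)
```

Crude, yes – but every flag such a rule raises is traceable to logic a journalist wrote and can defend.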

By itself, such a system based on algorithmic rules has some real limitations, as we learned – and not just with the language generation part of it. Gary Marcus again:

The weakness of the symbol manipulation approach is people have never really solved the learning problem within it. So most symbol manipulation stuff has been hard-wired. People build in the rules in advance. That’s not a logical necessity. Children are able to learn new rules. So one good example is kids eventually learn to count. They’ve learned the first few numbers, 1 and 2 and 3. It’s kind of painful. And then eventually they have an insight, hey, this is something I can do with all numbers. I can just keep going.

And the kind of brute force data analysis we were pushing Lynx Insight into was always going to hit a ceiling at some point. But pairing even something like that with an output generator like ChatGPT? That would have been a very interesting exercise, not least if we wanted to produce, at scale, more customized versions of similar stories, based on verified data analysis. Factual journalism, in other words.
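
(Here’s a rough sketch of that pairing, with a placeholder standing in for whatever language model you’d actually call – the point is the division of labor, not the plumbing: verified analysis supplies the facts, and the model only does the wording, customized per audience.)

```python
def llm(prompt):
    # Hypothetical stand-in for a call to any text-generation API.
    return f"[generated text for prompt: {prompt!r}]"

def write_story(finding, audience):
    # The finding comes from verified data analysis (e.g. a rule like the
    # one sketched above); the model is only asked to word it.
    prompt = (f"Write a two-sentence market brief for {audience}. "
              f"Use only this verified finding: {finding}")
    return llm(prompt)

finding = "CFO sold 45% of their stake this quarter"
for audience in ("a retail investor", "a professional fund manager"):
    print(write_story(finding, audience))
```

The facts never originate with the model, so – at least in principle – there’s nothing for it to make up; it’s only doing the wording.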

And to be sure, there are smarter AI systems out there than what we used with Lynx Insight; Anthropic, which just announced a deal with Google, seems to be streets ahead of ChatGPT in terms of its ability to “understand” broader language concepts and have more of a model of the world. (At least, based on one very quick demo that I saw, and I was drinking at the time.)

All of which is mostly an argument – or plea – that we think more holistically about how AI can help or reshape journalism, beyond a single capability, miraculous though it may seem.  ChatGPT does a couple of things well and other things badly. Other systems can fill in the gaps.  It’s less important to criticize it for shortcomings it wasn’t designed to address, and more important to think about broader systems that can help us do our jobs better, serve communities better, and make our products better.

In other words, to stop just looking under the ChatGPT light – bright though it is.

Posted by: structureofnews | December 28, 2022

Questions

I’m going back to the well of ledes/jokes with this one:

“My dog can play checkers.”

“That’s amazing!”

“Not really – he’s not very good.  I beat him three times out of five.”

Old joke, I know, but I tweeted it recently in connection with the discussion about ChatGPT and its skill – or lack thereof – in creating human-like content in response to prompts. And how maybe we’re looking in the wrong place to understand how groundbreaking it really is. Hint: It’s in the questions, not the answers.

(First, a digression: Yes, it’s been ages since I posted. And who knows when I’ll post next. But I am slowly returning to this world. End of digression.)

So this post is based on no real deep knowledge of ChatGPT – and I’m not sure I would understand it if I had more access to information – but some of the commentary on the launch of what’s a really very impressive AI platform seems to be focused on the wrong thing: the output rather than the input.

Don’t get me wrong: ChatGPT’s output is incredible. And also sometimes incredibly stupid, as this piece in the Atlantic notes. And there’s certainly no shortage of critiques on the interwebs about how it’s not really creative and it simply reworks the huge corpus of writing and data that’s been fed into it.

And all that’s fair. Although, again: “My dog can play checkers” is the achievement, not how many times it wins.

But more importantly, perhaps the most significant achievement in ChatGPT isn’t in how it comes up with answers but in how it understands questions. I’ve been playing with it a bit, and what I find amazing isn’t how well – or badly – it answers my questions, but how it knows what I’m looking for. I asked it who Michael Cohen was, and it figured that out; I asked about Stormy Daniels, and it knew that too. True, not hard. But when I asked whether Donald Trump had paid Stormy Daniels off, it managed to parse what’s a really complicated question – who is Donald, who is Stormy, what is their relationship, did it involve a payoff and why, and who said what – and came back with a reasonable answer (Donald says he didn’t, but Michael said he did).

To be sure, it’s true that as a search engine, ChatGPT has some significant drawbacks, not least that it doesn’t seem to be able to distinguish between what’s true and published and what’s untrue and published.

But Google does a pretty good job of that part of the search experience, while doing a pretty mediocre job of the front end – we all spend far too much time refining our search terms to get it to spit out useful answers. So what if the front end of ChatGPT were paired with the back end of Google?
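
(A sketch of that wiring, with hypothetical stand-ins for the model calls: the language model’s job is to parse the messy question and phrase the reply; the facts come only from an index of verified articles, so there’s nothing untrue for it to repeat.)

```python
# An index of verified reporting; in real life this would be a search
# engine over a news archive, not a two-entry dict.
VERIFIED = {
    "trump stormy daniels payment":
        "Michael Cohen said he made the payment; Donald Trump denied it.",
}

def parse_question(question):
    # Stand-in for the language model's real trick: turning a messy,
    # conversational question into a clean search query.
    return "trump stormy daniels payment" if "Stormy" in question else ""

def answer(question):
    facts = VERIFIED.get(parse_question(question), "")
    if not facts:
        return "We have no verified reporting on that."
    # A second stand-in: the model would reword `facts` conversationally.
    # Crucially, it is handed only verified text to work with.
    return f"Based on our reporting: {facts}"

print(answer("Did Donald Trump pay Stormy Daniels off?"))
```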

Imagine, as I did for a Nieman Lab prediction piece (and years ago, here), that it could be used to power a real news chatbot – one fed by verified, real-time information from a news organization. Talk – literally – about truly personalized information, as I mused in the Nieman Lab article.

How about using ChatGPT‘s powerful language parsing and generation capabilities to turn the news experience into the old saw about news being what you tell someone over a drink at a bar? “And then what happened?” “Well, the FBI found all these documents at Mar-a-Lago that weren’t supposed to be there.” “I don’t understand — didn’t he say he declassified them?” “Actually…”

It would let readers explore questions they have and skip over information they might already have. In other words, use technology to treat every reader as an individual, with slightly different levels of knowledge and interest.

And yet another use case for good structured journalism! (Of course.) Regardless of how it’s ultimately used (and here’s one idea, and here’s another), it’s important to recognize how significant this development is, and how it could truly transform the industry. You read it here first.

(Another digression: You should listen to this Ezra Klein episode as he talks to Sam Altman, the CEO of OpenAI, which created ChatGPT. It’s either fascinating or terrifying. Or both.)

Posted by: structureofnews | December 15, 2022

The Semaform at Semafor

Here’s a short video I did – well, technically I fronted; someone else (Joe Posner!) did all the work – explaining what we’re trying to do with the story form at Semafor. We call it a – doh – Semaform. And here’s the accompanying story.

(And yes, it’s two months old; but when was I ever on time on this blog?)

It’s been an interesting challenge at Semafor to actually implement some of the ideas I’ve been talking about – in mostly theoretical terms – for so long, and to rethink the basic unit of news from the ground up. And now I’m seeing how well some ideas do – or don’t – work in practice.

It’s been a great adventure. I’ll post more as I dig out from under…

Posted by: structureofnews | July 10, 2022

A Talk Or Two

Shameless self-promotion alert: Here are links to two talks I gave recently (“recently” being stretched to mean “three months ago”).

I would summarize them, but that would take work, and it’s the weekend…

More seriously – but it is also the weekend – the first is a talk I gave at the International Symposium on Online Journalism in Austin, Texas, back in April. The topic was “News: What is it, who is it for, and how can we rethink it for the digital era?” and it’s mostly about the need for imagination in journalism innovation, but the Q&A session afterwards, expertly moderated by Neil Chase of CalMatters, roams all over the place. I had fun, but your mileage may vary; there isn’t a transcript, so you’ll have to wade through 45 minutes or so of bad jokes and poor analogies to capture the entire essence of my ramblings.

And a shout out to Rosental Alves, who created the conference two decades or so ago, and who had the bad judgement to invite me to speak.

The second is the lunchtime keynote I gave at this year’s Investigative Reporters and Editors conference in Denver in June; I don’t think I had a title for it, and if I did, I’ve since forgotten. It was mostly about – well, a bunch of things: the need for us to focus as much on communities and audiences and their needs as on the stories we want to write; the importance of diversity in management ranks and greater sensitivity to our blind spots; and why the framing of stories and language are critical. There’s no shortage of bad jokes, too.

The link takes you to the text of the 15-minute talk, and at the bottom of the page is the actual video; my part comes on at about the 50-minute mark. The jokes are better in the video; the text is faster to read. Your call.

Hugely honored to have been asked to give both talks; sorrier for the people who had to sit through them. They gave me a chance to distill a lot of what I’ve been writing about here and thinking about more generally. Would love to hear thoughts and comments.

Posted by: structureofnews | July 5, 2022

Starting Up

So this is old news, but hey, it’s not like I’m the fastest writer in the world – despite having spent more than a decade at one of the fastest real-time news agencies in the world.

Although I’m not at one of the fastest real-time news agencies in the world any more; the news is that I left Reuters in April and joined Semafor, a new news startup founded by Ben Smith and Justin Smith, where I’ll be Executive Editor.

I love Reuters, but this was a too-good-to-turn-down opportunity to help build something from the ground up, as opposed to working within a 175-year-old institution; to “make my own mistakes,” as I’ve said to any number of people. So far it’s been a wonderful adventure – a frenetic, kinetic, full-speed ride. And a ton of fun.

What is Semafor, you ask? Good question. Here’s what we say on our job ads:

Semafor is a global news platform for an increasingly complex world in which consumers are overwhelmed by too many news sources and unsure what to trust.  We are building Semafor from the ground up to enable world-class journalists to deliver reporting and insights with rigor in journalistic forms that ensure a new level of transparency. Our editors will distill the most important stories from all over in formats that uncover the forces shaping the stories, explain the interests behind polarizing narratives, and replenish the stock of shared facts. As a global platform, Semafor recognizes that smart people can disagree and that informed readers need to understand alternative points of view from competing centers of power and culture in a multi-polar world. 

There’s more to come, and we hope you’ll check us out when we launch later this year.

Posted by: structureofnews | November 29, 2021

Fixing Bias

How do you fix bias?

More specifically, how do you fix bias in the systems we interact with every day?  That’s the theme of The End of Bias, an interesting new book by Jessica Nordell. She doesn’t just document all the conscious and unconscious biases we all have, but sets out to look at what works and doesn’t work in trying to address those issues. Well worth reading.

Plus, what’s not to like about a book that starts out with the experiences of a transman as he crosses the gender divide?

I wrote about this book a little while back – even before I read it! – and the simulation described in it; it shows how, even without overt acts of discrimination, a small level of systemic bias will accumulate over time and significantly disadvantage one group or another. 

And that’s a key point:  That you don’t need bad actors, overt discrimination or blatant wrongdoing to suffer from bias; it’s the small things that add up over time.

And key point two: We’re all biased.  None of us can escape the blind spots we have; and we all have plenty.  “Trying harder” isn’t a solution, any more than telling a nearsighted person to “try harder” to read the words across the room.

So if more effort, better intentions and training people to recognize bias aren’t the solutions – or at least not the only solutions – then what is?  That’s what the book’s about.  Bear with me. This is a long-ish post.

And all this matters not just as we try to build newsrooms that are more inclusive and more representative of the communities we cover and serve, but also in how we think about, document and frame the issues that matter to them.

First, it’s important to understand what bias is.  Basically – and I’m sure I’m butchering some definition somewhere – it’s really just substituting our expectations about a group and their shared traits, whether justified or not, for actual detailed findings about an individual in that group.  And we all do it.  It would be hard to get through life without doing it.  In many cases, it’s simply our brain’s way of coming to conclusions more quickly, although of course it can also be the result of out-and-out prejudice.

Or, as the book puts it:

That expectation is assembled from the artifacts of culture: headlines and history books, myths and statistics, encounters real and imagined, and selective interpretations of reality that confirm prior beliefs

Biased individuals do not see a person.  They see a person-shaped daydream

The individual who acts with bias engages with an expectation instead of reality

Posted by: structureofnews | November 3, 2021

Backwards Ran The Sentences

A very short post, sparked by a single paragraph.

It was in a NYT story about the debate over language on the left (BIPOC/Latinx/Microaggression/AAPI/LGBTQIA+ and more); the story overall was smart and interesting, but this paragraph was particularly insightful.

“Saying something like, ‘Black people are less likely to get a loan from the bank,’ instead of saying, ‘Banks are less likely to give loans to Black people,’ might feel like it’s just me wording it differently,” Rashad Robinson, president of the racial justice organization Color of Change, said. “But ‘Black people are less likely to get a loan from the bank’ makes people ask themselves, ‘What’s wrong with Black people? Let’s get them financial literacy programs.’ The other way is saying, ‘What’s wrong with the banks?’”

And Mr. Robinson is absolutely right: Words matter, and how we use them matters. Beyond news judgment – itself a whole area we can and should explore – and the framing of stories (ditto), even simple, declarative, uncontroversial factual statements can affect how readers look at a subject.

You might be thinking: Perhaps he’s overstating the importance of this. And it’s true that a single use of a phrase or a framing probably doesn’t have much impact. But cumulatively, small things do build up and can change our perceptions of the world. We all have biases, based on how we see categories of things and how experience and culture filter into our brains. (This is all from The End of Bias, a book I’m deep into now and plan to write about soon.) And words in stories are part of that experience and culture.

This won’t be easy to address. Undoing decades of writing habits is hard.

It would be easier if it weren’t words. Certainly the graphics and data visualization community has always been sensitive to how information is designed – and writing is a form of information design – perhaps because that visual grammar is in many ways still evolving; as a result, everything is up for discovery and debate. Writing, on the other hand, is a much older technology, and so much of it is embedded in us that we often don’t think as much as we should about how the words go on the page (or screen), and certainly not when we’re on deadline. I confess I haven’t thought about what I write the way Mr. Robinson has, and that’s my bad.

So how can we address this? I’m not sure, other than more scrutiny and awareness. But I’m sure that’s not enough.

Posted by: structureofnews | October 21, 2021

The Arithmetic of Bias

The NYT recently ran a great opinion piece, by Jessica Nordell and Yaryna Serkez, about the long-term impact of bias on women in the workplace. The magic was in the math.

It wasn’t that the piece called out egregious examples of discrimination, or identified companies or people that were really bad actors (although there was some of that); it was that it called attention, via a simple simulation, to how even small levels of bias – whether conscious or unconscious – can accumulate over time and lead to very large effects. In other words, it wasn’t trying to pin issues on particular bad actors or motives, but flagging systemic issues we might otherwise be blind to.

And that’s something we should think about too, as we write about complex systems – to resist the temptation to just look for bad guys, but instead to help readers really understand how the world works, even if terrible outcomes are the result of small flaws or human frailty.

The piece features a simulation of a company, NormCorp, where employees are promoted based on their performance reviews. You know, more or less like a regular company.

NormCorp is a simple company. Employees do projects, either alone or in pairs. These succeed or fail, which affects a score we call “promotability.” Twice a year, employees go through performance reviews, and the top scorers at each level are promoted to the next level.

So if all things are fair, men and women progress at the same rate through the company. But what if there’s some in-built bias in the system that regularly rates women slightly lower than men? It doesn’t have to be intentional, or the result of bad motives, or even conscious. It just has to exist – and it doesn’t even have to be against women. It simply has to be systemic. Maybe managers have an (unconscious) preference for people who look or sound a certain way, or who have gone to a certain university, or come to work early, or socialize after hours. Whatever.

When we dig into the trajectory of individual people in our simulation, stories begin to emerge. With just 3 percent bias, one employee — let’s call her Jenelle — starts in an entry-level position, and makes it to the executive level, but it takes her 17 performance review cycles (eight and a half years) to get there, and she needs 208 successful projects to make it. “William” starts at the same level but he gets to executive level much faster — after only eight performance reviews and half Jenelle’s successes at the time she becomes an executive.
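
(For the curious, here’s a stripped-down simulation in the same spirit – emphatically not Nordell and Serkez’s actual model. The group sizes, promotion rate and number of levels are my own assumptions, made up purely for illustration.)

```python
import random

random.seed(1)
BIAS = 0.03        # the 3 percent from the piece
CYCLES = 20        # ten years of twice-a-year reviews (my assumption)
TOP_LEVEL = 8      # number of rungs on the ladder (my assumption)

# Two equal groups with identical true performance.
employees = [{"group": g, "level": 0}
             for g in ("men", "women") for _ in range(500)]

for _ in range(CYCLES):
    for e in employees:
        score = random.random()              # identical underlying ability
        if e["group"] == "women":
            score *= 1 - BIAS                # the small systemic haircut
        e["score"] = score
    # Promote the top seventh of scorers at each level, working from the
    # highest level down so nobody jumps two rungs in one cycle.
    for level in range(TOP_LEVEL - 1, -1, -1):
        cohort = sorted((e for e in employees if e["level"] == level),
                        key=lambda e: e["score"], reverse=True)
        for e in cohort[: len(cohort) // 7]:
            e["level"] += 1

for g in ("men", "women"):
    senior = sum(1 for e in employees if e["group"] == g and e["level"] >= 5)
    print(f"{g} at level 5 or above: {senior}")
```

With these (made-up) settings, the pipeline of women thins by roughly a fifth at every promotion gate, and five gates compound that into senior ranks that skew heavily male – even though both groups’ underlying performance comes from exactly the same distribution.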

Posted by: structureofnews | October 18, 2021

Impact

Shameless self-promotion post:

Incredibly moved and honored to have been named the inaugural recipient of the Online News Association’s Impact Award, for a “trailblazing individual whose work in digital journalism and dedication to innovation exhibits a substantial impact on the industry.”

I’m not sure I would have chosen myself to get this award, so I’m very glad the ONA board was making that decision and not me. It was nice to see shoutouts in the citation to things I’m really proud to have been involved with, from Connected China to WhoRunsHK to CitizenMap, as well as Reuters’ award-winning graphics and data teams. I’m even happier that it also called out this blog and the Tiny News Collective, a really smart and innovative project that sprang from the fertile mind of Aron Pilhofer, and that I’m currently privileged to be a part of (I’ll have to write about it soon).

And especially happy to have a chance to talk, in short remarks when accepting the award, about how we need to not only work to make sure our newsrooms reflect the communities they serve, but also to ensure that our coverage more accurately reflects the world we live in.

This is an excerpt.

So that was a very nice evening. And now back to the salt mines.

Posted by: structureofnews | September 20, 2021

News Worthy

Pauli Murray. Charcoal on paper, Gina Chua 2020

It’s been a while since I posted, I know. Events got in the way, notably trying to get people out of Afghanistan. But here I am again.

I’m in the middle of reading the autobiography of a remarkable person, Pauli Murray – a pivotal figure in legal and civil rights circles of the 20th century, and yet someone most people haven’t heard about. Why is that? What gets in the way of our ability to see stories that matter, and what stories are we now missing – what events aren’t considered “newsworthy” – because of the blind spots we have?

But first, a plug: I came to know about Pauli Murray via a great documentary about their life, My Name Is Pauli Murray, by Betsy West and Julie Cohen, who also directed RBG. It’s well worth watching. Pauli was a feminist, civil rights pioneer, legal scholar and the architect of the winning arguments behind some critical U.S. Supreme Court rulings. The first African-American woman to be ordained an Episcopal minister. Also, queer, non-binary and possibly transgender. How did someone pack so much into a life that began as a young Black orphan in North Carolina? Watch the documentary, read the book; Pauli is well worth getting to know.

But that’s not what this post is about. It’s a long one, so apologies in advance.

Early in the documentary, you learn that Pauli was arrested and jailed in 1940 for refusing to move back to the rear of a bus, as was required in then-segregated Virginia. That was 15 years before Rosa Parks’s similar, but much more celebrated, act of civil disobedience. During a panel discussion with the filmmakers at Reuters, the question was asked: Why wasn’t there more coverage at the time of Pauli’s action?

The answer: There was. Just not in the mainstream press. Black newspapers covered it, as they did issues like lynchings and everyday discrimination. But this was rarely deemed “newsworthy” by the mainstream – white – press.

And that raises bigger questions about what we deem to be worthy of coverage, what constitutes “news”, whose voices we hear, and who gets to make those decisions.

These aren’t new questions, but they are important ones, not least in the wake of the racial justice protests around the world in the summer of 2020; and while it’s heartening to see that those events have raised newsrooms’ sensitivities to those kinds of stories, it also raises the question of what other stories we might be missing, what other as-yet-undiscovered blind spots we might have.

Sometimes we miss stories because we don’t have connections into communities where things are happening; that’s a problem of a lack of diversity in newsrooms (and not just of gender and ethnicity, but also of class and background). Sometimes editors dismiss ideas because they don’t jibe with their view of what’s important; that’s a problem of mistaking one’s own viewpoint for the most valid one. And Gary Younge, a journalist and academic, notes in an incredibly insightful piece that sometimes we don’t pursue important stories because they are – regrettably – just not out of the ordinary.

