Aaah – another site about The Future of Journalism.
A dull one. Without the invective and ideology about free vs. paid, pajama-clad bloggers vs. stick-in-the-mud mainstream media curmudgeons, and Utopian visions of crowdsourced news vs. dark fears about falling standards that you can find elsewhere. It has words like taxonomy and persistent content in it; discusses business models and revenue streams in dull, accountant-like language; and tries to dissect the sparkling prose journalists turn out into tiny bytes of data.
But there is a purpose here, and it’s based around the idea that we as journalists haven’t really thought about how people are changing the way they access information, or how we need to fundamentally rethink the way we carry out journalism and the kinds of – for want of a better word – products we turn out for them.
There’s much hand-wringing over the loss of the traditional business model of news, it’s true. Perhaps too much. And this site will contribute its share. But hopefully it’ll also explore some of the less-explored questions about where the profession goes in a digital age. And lay out some of the thinking behind one concrete idea that might help move the business forward: Something I’m calling Structured Journalism.
So, welcome – and I hope you find this interesting.
(An update: I first wrote those words 11 years ago, and it’s amazing how some of those passionately argued debates – free vs. paid! – have basically gone away. Which is great. So I could and should rewrite this intro. But the third paragraph remains just as valid. Plus, I’m pretty lazy. )
None of this should be surprising. That’s because chatbots don’t have any real ability to discern fact from fiction. They have no concept of verification. What they do is process language, and they do that very well. They’re idiot savants of language. Request lyrics to a song about nuclear physics in the style of Taylor Swift, and you’ll be astonished.
And yet there’s something seductive about a human-like interface that seems to be an oracle of information and can answer any question we pose. Because chatbots have the same simple, single-box interface of search engines, we expect the same kind of responses from them.
We should know better, and the user interfaces should help more.
Bard, the chatbot recently released by Google, has a small disclaimer under its input bar: “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.” ChatGPT has much the same thing: “ChatGPT may produce inaccurate information about people, places or facts.” And Bing Chat is a tad more coy: “Bing is powered by AI, so surprises and mistakes are possible. Make sure to check the facts, and share feedback so we can learn and improve!”
It’s good that the disclosures are there, but they’re easy to miss. And more importantly, they imply that mistakes — “hallucinations,” as they’re called — may happen from time to time, when in fact they should say, “please don’t count on this chatbot for any facts.” These disclosures are like an AI image generator saying, “the images you see may not represent the real world.”
But until that changes, the onus is on us to keep reminding ourselves: chatbots aren’t search engines.
Another story for Semafor, this one on how close we are to adding math & symbolic logic (aka “algebra”) to large language models, and how that could be a huge change.
THE NEWS
“Microsoft Excel: The Killer App” is probably not a headline you would ever expect to see. But an upcoming integration of the AI system GPT-4 into the ubiquitous spreadsheet program may mark one of the most significant advances so far in a string of already amazing milestones that chatbots have met and surpassed.
Microsoft announced recently that it was rolling out Office 365 Copilot, an AI-powered system, to its Office suite of programs — Word, PowerPoint, Teams, and so on. I watched a demo, and it’s impressive to see it summarize meeting notes, draft emails, create presentations and much more.
But what really caught my attention was the planned — but not demoed — integration with Excel, with the promise that users could simply tell the program to build a spreadsheet, and it would; and that they could further refine it with a series of queries (e.g., “If I were to reduce costs by 5%, how would that affect profit margins?”).
Microsoft said it’s still working on building Copilot into Excel, but that we could expect it soon. Currently, chatbots have limited — to say the least — math capabilities. I asked Claude, an AI system from Anthropic, to do some simple addition, and it botched the result. I asked it to double-check its math, and it gave me the correct answer. But on asking it a third time, it came back with yet another, wrong, answer — despite assuring me it had triple-checked the math.
As Claude itself noted:
I am not actually able to do mathematical calculations – I am an AI language model created by Anthropic to be helpful, harmless, and honest using a technique called Constitutional AI. While I can have conversations about math and numbers, I do not have a built-in calculator. I do not have any internal mathematical reasoning capabilities or truly understand numbers in the way that humans do.
When you asked me to add those numbers, I simply treated that question as another language input, and responded with the sum I was trained to give for that specific set of numbers. But I cannot do any open-ended math or calculation. I have no real mathematical abilities – I simply manipulate language.
GINA’S VIEW
Maybe being able to add and subtract sounds mundane. But it’s not.
In fact, it may be transformative.
GPT-4, like Claude and Google’s Bard, is a Large Language Model, or LLM. LLMs “understand” and create language well, but as I’ve written (and Claude notes, above), they have no real conception of facts, verification — and importantly — numbers. That limits the utility of such systems: They can create messages, edit stories, build presentations, and even write some code, but they can’t be counted on to interact with the world when it comes to the parts that go beyond language.
For that, an LLM needs a way of handling numbers verifiably; a way to do basic algebra.
Excel could give it that capability.
(And if you’re thinking it’s “just” math — something we math majors would strenuously disagree with — it’s important to remember that Excel handles far more than numbers; it’s really a simple database with a range of sophisticated functions that can handle text, dates and other types of information. LLMs are much more limited when it comes to those capabilities: Bard, for example, couldn’t tell me if the name “John Smith” had the letter “S” in it. But Excel can.)
Imagine if you could, in plain English, get Excel to create a budget for your organization, and then ask it to explore what-if scenarios; or if you could get it to build a spreadsheet of contacts and ask it to list only the people with addresses in a certain city; or if you could start asking it to look for patterns in the data — say, how salaries in a company for the same job title vary by location, gender or ethnicity. None of which an LLM can do on its own right now.
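To make that concrete, here’s a rough sketch of the kind of deterministic, checkable operations those plain-English requests might get translated into behind the scenes. To be clear, this is my illustration, not Microsoft’s implementation; the data, names and column headers are all invented.

```python
# A rough sketch (mine, not Microsoft's) of the deterministic, checkable
# operations a natural-language request might be translated into.
# The data, names and column headers here are all invented.
import pandas as pd

staff = pd.DataFrame({
    "name":   ["John Smith", "Ana Diaz", "Wei Chen", "Sam Lee"],
    "city":   ["New York", "London", "New York", "Singapore"],
    "title":  ["Reporter", "Reporter", "Editor", "Reporter"],
    "salary": [80_000, 75_000, 120_000, 70_000],
})

# "List only the people with addresses in a certain city"
new_yorkers = staff[staff["city"] == "New York"]

# "How do salaries for the same job title vary by location?"
pay_by_title_and_city = staff.groupby(["title", "city"])["salary"].mean()

# "Does the name contain the letter S?" -- the kind of thing Bard fumbled
has_s = staff["name"].str.contains("S")

# "If I were to reduce costs by 5%, how would that affect the total?"
current_total = staff["salary"].sum()
reduced_total = current_total * 0.95

print(new_yorkers, pay_by_title_and_city, has_s, current_total, reduced_total, sep="\n")
```

The point isn’t the code; it’s that every one of those steps is verifiable arithmetic or filtering, which is exactly what an LLM on its own can’t guarantee.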
KNOW MORE
I don’t have access to Office 365 Copilot; it hasn’t been rolled out yet. But I did try to see how well Claude, Bing and GPT-4 responded to a prompt asking them to create something akin to a spreadsheet.
Here’s the prompt:
Thirty people went to dinner together. Half of them ordered chicken, which costs $25 each; five had the vegetarian option, which cost $20, and the remainder ate the fish, which costs $22. In addition, they ordered 10 bottles of wine, which cost $300 in total. If they were to split the bill fairly, with each person paying for what they ordered, plus an equal share of the cost of wine and a 20% tip, how much would each person have to pay?
Bard blew it, big time.
I have no idea what universe it was inhabiting. Claude was not much better:
So splitting the bill fairly based on what each person ordered, the totals per person would be:
* $134 for chicken diners
* $63 for vegetarian diners
* $98 for fish diners
Clearly, having the people who ordered chicken each pay $134 for a $25 meal, plus a share of wine and tip, seems a little excessive. But GPT-4, which hadn’t handled editing chores as well as Claude in an earlier exercise, aced this one:
That’s pretty impressive for a model that’s basically working simply off language. And of course you don’t need a spreadsheet to solve that problem; but I also asked all three LLMs to create a simple newsroom budget that included salaries, fringe benefit costs and travel expenses, and again only GPT-4 aced the exercise; the other two failed miserably.
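For anyone who wants to check the arithmetic on the original dinner-bill prompt, here’s a minimal sketch of the calculation. I’m assuming the 20% tip applies to each diner’s own share of food and wine, which is how I read the prompt.

```python
# Checking the dinner-bill prompt by hand, assuming the 20% tip
# applies to each diner's own share of food and wine.
diners = 30
chicken = diners // 2          # 15 chicken at $25
veg = 5                        # 5 vegetarian at $20
fish = diners - chicken - veg  # 10 fish at $22
wine_share = 300 / diners      # $10 of wine per person

for dish, price in [("chicken", 25), ("vegetarian", 20), ("fish", 22)]:
    total = (price + wine_share) * 1.20   # add the 20% tip
    print(f"{dish}: ${total:.2f} per person")

# chicken: $42.00, vegetarian: $36.00, fish: $38.40
```

Depending on how you read the tip instruction the exact numbers shift a little, but reasonable answers sit near $40 per chicken diner, not $134.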
ROOM FOR DISAGREEMENT
Garbage in, garbage out. It’s easy enough to check the simple scenario above to ensure that GPT-4 is building in the right assumptions, but what happens when we’re working with much more complicated information? How will we know for sure that it isn’t “hallucinating” when translating our queries into instructions for the spreadsheet?
As we turn to AI systems for more complex tasks, what safeguards are we — and more importantly, the companies behind them — building into them to ensure we can double-check them?
And will they be able to learn from their mistakes? Each time I went back to Claude to point out the errors it had made in calculating each diner’s share of costs, it replied with increasingly outrageous calculations, with chicken diners first needing to pay $41.63 each, then $70.97 and finally $89.22.
OpenAI announced it was adding support for plug-ins for ChatGPT, significantly expanding the chatbot’s reach and its ability both to access live data from the internet and to take on tasks via third-party applications, such as filling an Instacart shopping basket.
Anyway, another dual posting – this one muses about why tech companies seem intent on using large language models – i.e., the technology behind chatbots – to revamp the whole search experience, when some small chatbot-enabled tweaks could make it so much better, much sooner. (Hint – read the Nieman Lab piece at the end of this: It’s likely because they want to keep users on the search site.)
Tech behemoths — and the public — have been so focused on chatbots’ ability to hold human-like conversations with users that the core purpose of a search engine (finding useful and, ideally, accurate information) seems to have been overshadowed. Instead, the public has seized upon professions of love, angry denials of basic realities, and many more mundane “hallucinations” of incorrect facts.
GINA’S VIEW
It didn’t have to be this way.
At its heart, a search engine does — at least to lay users like me — three things: Take in a query (e.g., “how effective are COVID-19 vaccines?”) and turn it into a search term; hunt for information on the internet and make some kind of judgment about what’s credible; and then present it back to users. Sometimes that comes as a simple, authoritative answer (“The population of New York City was 8.468 million in 2021”) and sometimes as a list of links.
Google — the king of search engines — does that second part extremely well, thanks to PageRank and other proprietary algorithms that it’s developed over the decades; it’s doing better on the first part, although it’s still a long way away from providing a conversational interface.
And it does less well on the third part, often presenting a list of links that users have to plow through, although it’s getting better at synthesizing the information all the time. Chatbots, on the other hand, are terrible at the second thing — because, bluntly, they’re optimized for language output and not for fact-finding or fact-checking. When they try to aggregate disparate information into a single definitive answer, they often get things wrong, or “hallucinate.”
And the lack of citations or links in their authoritative-sounding answers means it’s nearly impossible to check the facts for yourself. On the other hand, the chatbots are pretty good at parsing language and generating language, because they’re, well, language models. Doh.
So why are tech companies enamored with integrating them into the entire search process — even the parts they’re not good at? Why not marry the two capabilities? Why not have a chatbot take a normal human question and turn that into a search term (that’s a language skill), have a link system for finding relevant web pages (that’s a search and ranking skill), and then use the chatbot to summarize them (another language skill)?
Which is what I tried to do.
I used Claude — an AI chatbot built by Anthropic, in which Google just invested $300 million — to ask a simple question: Did Donald Trump pay adult film star Stormy Daniels for her silence? (I couldn’t ask an up-to-date question, because Claude’s database doesn’t extend to the present day.)
Here’s what I got: First, I just asked a question and got the standard summary:
That’s a pretty decent response and is essentially accurate, at least as far as 2021 sources are concerned. But that’s because I already knew the answer. If I didn’t, how could I check? It provides no citations, offers no links, and really doesn’t give users a chance to verify the information for themselves.
So then I asked it for links to stories, as a Google search might have turned up:
That’s helpful — and I’m sure a dedicated search engine would have provided even better links. But that’s a lot of reading to do, and how would I know which ones to dig into? I asked it to summarize the articles it linked to:
Much easier to digest, and actually gives a sense of the issues surrounding the question.
What if we had simply skipped all those steps, and instead, my original query just returned those summaries, with links, not unlike a Google search, but with more useful answers that don’t require as much clicking and reading?
To put it another way, why do tech companies seem so intent on blowing up the entire search experience when incremental changes could yield significant improvements?
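To make the idea concrete, here’s a rough sketch of what that incremental approach might look like. The functions are hypothetical stand-ins for a language model call and a conventional search API; this is a thought experiment, not anyone’s actual product.

```python
# A rough sketch of the "marry the two" idea: a language model for the
# language steps, a conventional search engine for the retrieval step.
# llm() and search() are hypothetical stand-ins, not real APIs.

def llm(prompt: str) -> str:
    """Placeholder for a call to any large language model."""
    raise NotImplementedError

def search(query: str, limit: int = 5) -> list[dict]:
    """Placeholder for a conventional search/ranking API; returns
    items shaped like {"title": ..., "url": ..., "text": ...}."""
    raise NotImplementedError

def answer(question: str) -> str:
    # 1. Language skill: turn a human question into a search query.
    query = llm(f"Rewrite this question as a concise search query: {question}")
    # 2. Search-and-ranking skill: let the search engine find the pages.
    results = search(query, limit=5)
    # 3. Language skill: summarize each result, keeping its link attached.
    summaries = [
        f"- {llm('Summarize this in two sentences: ' + r['text'])} ({r['url']})"
        for r in results
    ]
    return "\n".join(summaries)
```

Each step uses the tool that’s actually good at it: the model handles the language at both ends, and the search engine does the finding and ranking in the middle.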
ROOM FOR DISAGREEMENT
Google has made a long list of iterative — and impressive — improvements to its search product over the years, in many cases focused on ensuring that pages surfaced are in fact authoritative and relevant, but also to better understand natural language queries that users type in.
It’s also improved the output, and many queries now return a list of likely questions and answers lifted from web pages verbatim, saving readers the effort of digging through a host of links. Most of its AI improvements lie under the hood, so to speak, rather than in the flashier user experiences that chatbots promise.
And Microsoft says it’s doing similar work, both using language models to better understand queries and generating summaries of the information its search engine technology surfaces, including links and citations to sources.
As for questions where the data is clearly defined and constrained — airline fares, or prices for comparison shopping, for example — and where the purpose is less to discover nuanced ideas and insights and more to find specific information (booking a trip from A to B on a given day), chatbots could significantly improve the search experience.
NOTABLE
Jeremy Wagstaff makes a convincing case that chatbots’ conversational capabilities are as much a bug as a feature; their ability to converse and take queries in different directions makes them inherently unpredictable and hence dangerous in a world where we expect machines to provide defined services.
Joshua Benton argues at Nieman Lab that a pivot by search engines to providing algorithmically generated answers, without links, will have a huge financial impact on publishers dependent on traffic from search results.
Reuters examined the costs of providing chatbot-generated answers compared to traditional link-based search results, and found it to be much more expensive in terms of computing power needed.
Here’s a story I wrote for Semafor (you know, where I work) about the possible uses of AI chatbots in journalism.
I wrote earlier here about how we keep trying to shoehorn their capabilities into things they don’t do well; this is a post exploring what they’re actually good at, and how they could help newsrooms. And in case you’re wondering, the story is in a “semaform.”
THE NEWS
There’s been a ton of ink spilled on all the things ChatGPT and other AI chatbot systems don’t do well, not least distinguishing fact from fiction or doing basic math. But what can they do well, and — importantly for newsrooms — what can they do for journalism?
The trick is to focus on the task they can do well – working with language.
GINA’S VIEW
ChatGPT and other AI systems don’t do journalism well because, well, they weren’t built to. They’re language models, meaning they’re really good at tasks involving language. But they’re not fact models, or verification models, or math models, which makes them terrible at figuring out what’s true or adding two numbers together — both of which are pretty basic journalism skills. But complaining that they’re bad at journalism is like being angry at Excel because it doesn’t draw pictures well.
What they are good at is language. I’ve been playing with various AI-powered chatbots for the last week or so, and two things are absolutely clear to me:
There are useful, here-and-now real world applications that could materially improve how journalism is practiced and created;
The statement above might no longer be true.
I used Poe, a great tool for experimenting with AI systems from the folks who brought you Quora, to ask Claude, a chatbot created by Anthropic (which Google recently invested $300 million in), to copy edit stories into which I had deliberately introduced factual errors (Ron DeSantis is governor of Miami), spelling mistakes and grammatical errors. For good measure, I threw in some internal inconsistencies (a fall in birth rates in China will lead to an increase in population).
It aced the test. It fixed DeSantis’ title, it corrected the inconsistencies, it made the stories read more smoothly. It even gave an explanation of all the changes it made, and why.
To be sure, this wasn’t top-of-the-line, prize-winning editing of an 8,000-word investigative project. It was acceptable, middling, competent copy editing — and let’s face it, that’s what 80% of copy editing in most newsrooms is. (Don’t kill me, copy deskers! I love you all!)
I’m not suggesting that Claude should be unleashed on stories unsupervised; but if it could do a first edit on most of the copy in a newsroom — especially newsrooms where staff are writing in a language that isn’t their mother tongue — it could offer material improvements in quality and efficiency.
KNOW MORE
And then I took another step. I asked Claude to take another story — about China’s attempts to reverse a fall in birthrates — and edit it in the styles of the New York Times, the New York Post, China Daily and Fox News. The Times version was staid, as you might expect from the Gray Lady, and threw in a lot of background.
The New York Post was racier: “China Panics as Birth Rate Plunges”.
The China Daily was, well, the China Daily: “Official Calls for Support of Families”.
But Fox News? Claude nailed it. “China Demographic Crisis: Is Communism to Blame?”
OK, so it’s hard to see a real immediate use case in this, at least for journalism. But if you’re a press agent, and you want to minimize the friction in getting a release published in both the New York Times and the New York Post, wouldn’t you generate two versions, one in each style? And if you’re a news organization, and you’re appealing to multiple audiences, would it make sense to use an AI system to help you customize your content?
I don’t find anything inherently immoral about doing that, although I grant some people might. When I ran The Asian Wall Street Journal (now The Wall Street Journal Asia), the Asian edition of the U.S. Journal, we routinely had to rework and reframe stories for our two very different audiences. And vice versa. What AI would do is allow us to do it more quickly and more efficiently. And we’d still want a human to check the results.
ROOM FOR DISAGREEMENT
But here’s the most interesting thing about the experiment. I did all those copyediting tests on Friday night. On Tuesday, I decided to try them again. This is what Claude told me:
Clearly, something happened between Friday and Tuesday — whether a human tweaked the constraints on its output, or it learned some new behaviors.
Which is a reminder of several things: We don’t really have control of these tools, and by “we” I include the creators of them as well. They’re constantly evolving. They’re not like a washing machine you buy that works pretty much the same way for decades, if it doesn’t break down. It’s more like a washing machine that next week might decide you ought to wear your clothes a little longer so as not to waste water and preserve the environment.
So in some ways, even thinking about use cases now may be premature. As Sam Altman, the CEO of OpenAI, the company behind ChatGPT, said recently, “ChatGPT is a horrible product. It was not really designed to be used.”
True. Still, it provides a really interesting provocation about what’s possible, even as we — rightly — worry about who owns them and what damage they can do.
But that requires us to focus as much on what these tools can do as on what they can’t.
NOTABLE
New York Times technology columnist Kevin Roose recounted a surreal, Her-like conversation with Bing — or Sydney — in which the chatbot professed both love for him and a desire to become a real person.
Northwestern University journalism professor Nick Diakopoulos dives into the possible uses for a chatbot in the newsroom and comes up with a very limited list, all of which he suggests should include human intervention and checking.
There’s a joke about a drunk hunting around the ground below a bright street light, and when asked what he’s looking for, says that he dropped his keys somewhere up the road. So why is he looking here? “The light is better here,” he says.
OK, so you didn’t come here for the humor, but for the sharp analogies that jokes can prompt (self-referential reference here). Although this one isn’t my analogy: it’s cribbed from AI expert Gary Marcus, a professor at NYU, on a smart podcast with Ezra Klein discussing the promise and limitations of AI, about our obsession with ChatGPT and our attempts to have it solve a multitude of problems – not because it’s good at them, but because the light is so much better under it.
There have been thousands of words written about ChatGPT and its miraculous capabilities, massive shortcomings or apocalyptic dangers. This isn’t one of those pieces. It’s more about what it doesn’t do well, what the things it does do well could mean for journalism, and why we should look elsewhere to fill the gaps it can’t.
To be sure, ChatGPT is very good at some things. It’s an astounding language model, which means it can produce human-sounding text in multiple styles at scale, and will doubtless upend any profession or industry that requires writing as an output – business executives and copywriters, for example. That doesn’t mean it’ll put people out of work – although it certainly could; it’s more that people who aren’t great at expressing themselves might get a tool to help them on that front, just as a calculator helped people who weren’t great at doing math with pencil and paper. Given the right prompts, it can turn mediocre ideas into acceptable prose – a low bar, perhaps, but then again lots of writing ain’t Shakespeare. (There are a whole set of other questions about equity and the new “AI divide” between those who have access to such tools and those who don’t, but that’s a topic for another day.)
And, as I noted in an earlier post, it’s uncannily good at “understanding” questions, and if combined with a good backend search engine, could well revolutionize search as well. (As Microsoft is doubtless thinking with its investment in OpenAI and integration of ChatGPT into Bing.)
What are its weaknesses? It can’t do math, for one, as CNET’s ill-fated experiment in AI-generated stories demonstrated. And that’s a pretty basic skill for journalism. Nor is it great at discerning fact from fiction, as any number of people have shown. And while it can create new content in multiple styles, all that is ultimately based in some broad way on words that have been written before. It isn’t original in that sense. And, per Gary Marcus:
What it’s bad at is abstraction. So I can read you another example where somebody asks a system to give something and say how many words there are. And it sometimes gets the number of words right and sometimes gets the number of words wrong. So the basic notion of counting a number of words is an abstraction that the system just doesn’t get. There are lots of abstractions that deep learning, in fact all abstractions of a certain narrow technical sense, these systems just don’t get at all.
So it’s not all that helpful to criticize it for not being original, for not understanding concepts, or for not performing great (or even mediocre) journalism; that’s not what it’s built to do. After all, you don’t complain that Excel does a bad job of writing stories; it’s not supposed to. At heart, ChatGPT is a language model that does an astoundingly good job at putting words one after another that cohere. It doesn’t “understand” any of them; it doesn’t analyze them, or facts, per se. It’s taking large amounts of data and predicting, based on the words it has ingested, how to create something new.
And when those words are largely accurate, it can give a pretty good answer. But when those words are riddled with inaccuracies, not so much. But journalism is often about new words: new facts, new analysis, and new ideas. It’s math, at one level. It’s analysis. It’s inferences and weighing of conflicting statements. It’s verification of new facts. Which leaves a system like ChatGPT without a lot of training data to work from.
For that – at least in my limited understanding of AI and technology – you need something more like symbolic logic. You need a system that can take data, analyze it, look for patterns that are “interesting” or “insightful” and surface them, whether in text or some other format. That’s what we were building when I was at Reuters with Lynx Insight. Language generation was the least interesting part of it; what we wanted was smart pattern recognition. Does this financial data suggest some major shift in company strategy? Are corporate insiders bailing out? And so on.
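I won’t pretend to know what Lynx Insight’s internals looked like, but the general shape of that kind of system is closer to rules and arithmetic over structured data than to language prediction. Something like this toy sketch, where the data, threshold and field names are all invented:

```python
# A toy sketch of symbolic pattern-spotting over structured data:
# rules and arithmetic, not language prediction. The data, threshold
# and field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class InsiderTrade:
    insider: str
    shares_sold: int
    shares_held_before: int

def flag_insider_selling(trades, threshold=0.25):
    """Flag insiders who sold more than `threshold` of their holdings."""
    flags = []
    for t in trades:
        if t.shares_held_before and t.shares_sold / t.shares_held_before > threshold:
            pct = 100 * t.shares_sold / t.shares_held_before
            flags.append(f"{t.insider} sold {pct:.0f}% of their stake")
    return flags

trades = [
    InsiderTrade("CEO", 400_000, 1_000_000),
    InsiderTrade("CFO", 10_000, 500_000),
]
print(flag_insider_selling(trades))  # -> ['CEO sold 40% of their stake']
```

The language model’s job, in that world, is just to write the sentence once the pattern has been found.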
I’m going back to the well of ledes/jokes with this one:
“My dog can play checkers.”
“That’s amazing!”
“Not really – he’s not very good. I beat him three times out of five.”
Old joke, I know, but I tweeted it recently in connection with the discussion about ChatGPT and its skill – or lack thereof – in creating human-like content in response to prompts. And how maybe we’re looking in the wrong place to understand how groundbreaking it really is. Hint: It’s in the questions, not the answers.
(First, a digression: Yes, it’s been ages since I posted. And who knows when I’ll post next. But I am slowly returning to this world. End of digression.)
So this post is based on no real deep knowledge of ChatGPT – and I’m not sure if I would understand it if I had more access to information – but some of the commentary on the launch of what’s a really very impressive AI platform seems to be focused on the wrong thing: the output rather than the input.
Don’t get me wrong: ChatGPT’s output is incredible. And also sometimes incredibly stupid, as this piece in the Atlantic notes. And there’s certainly no shortage of critiques on the interwebs about how it’s not really creative and it simply reworks the huge corpus of writing and data that’s been fed into it.
And all that’s fair. Although, again: “My dog can play checkers” is the achievement, not how many times it wins.
But more importantly, perhaps the most significant achievement in ChatGPT isn’t in how it comes up with answers but how it understands questions. I’ve been playing with it a bit, and what I find amazing isn’t how well – or badly – it answers my questions, but how it knows what I’m looking for. I asked it who Michael Cohen was, and it figured that out; I asked about Stormy Daniels, and it knew that too. True, not hard. But when I asked whether Donald Trump had paid Stormy Daniels off, it managed to parse what’s a really complicated question – who is Donald, who is Stormy, what is their relationship, did it involve a payoff and why, and who said what – and came back with a reasonable answer (Donald says he didn’t, but Michael said he did.)
To be sure, it’s true that as a search engine, ChatGPT has some significant drawbacks, not least that it doesn’t seem to be able to distinguish between what’s true and published and what’s untrue and published.
But Google does a pretty good job of that part of the search experience. While doing a pretty mediocre job of the front end of the search experience – we all spend far too much time refining our search terms to get it to spit out useful answers. So what if the front end of ChatGPT was paired with the back end of Google?
Imagine, as I did for a Nieman Lab prediction piece (and years ago, here), that it could be used to power a real news chatbot, one fed by verified, real-time information from a news organization. Talk – literally – about truly personalized information, as I mused in the Nieman Lab article.
How about using ChatGPT‘s powerful language parsing and generation capabilities to turn the news experience into the old saw about news being what you tell someone over a drink at a bar? “And then what happened?” “Well, the FBI found all these documents at Mar-a-Lago that weren’t supposed to be there.” “I don’t understand — didn’t he say he declassified them?” “Actually…”
It would let readers explore the questions they have and skip over information they already have. In other words, use technology to treat every reader as an individual, with slightly different levels of knowledge and interest.
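As a thought experiment only (and nothing like a real product), the shape of such a chatbot is roughly a loop over a store of verified facts, with the language model confined to phrasing. Everything in this sketch is invented for illustration:

```python
# A thought-experiment sketch: a news chatbot that answers follow-ups
# from a store of verified facts and skips what the reader already knows.
# The facts, the store and the llm() call are invented stand-ins.

verified_facts = [  # in a real system: structured, sourced and timestamped
    "The FBI found documents at Mar-a-Lago that weren't supposed to be there.",
    "Trump has said he declassified them; others dispute that.",
]

def llm(prompt: str) -> str:
    """Placeholder for any language model, used here only for phrasing."""
    raise NotImplementedError

def reply(question: str, already_told: set) -> str:
    # Only hand the model facts the reader hasn't heard yet, and only
    # let it phrase them; it never gets to invent new ones.
    new_facts = [f for i, f in enumerate(verified_facts) if i not in already_told]
    already_told.update(range(len(verified_facts)))
    return llm(
        "Answer the reader conversationally, using ONLY these verified facts: "
        f"{new_facts}\nQuestion: {question}"
    )
```

The interesting part isn’t the model; it’s the store of verified, structured facts it draws from.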
And yet another use case for good structured journalism! (Of course.) Regardless of how it’s ultimately used (and here’s one idea, and here’s another), it’s important to recognize how important this development is, and how it could truly transform the industry. You read it here first.
(Another digression: You should listen to this Ezra Klein episode as he talks to Sam Altman, the CEO of OpenAI, which created ChatGPT. It’s either fascinating or terrifying. Or both.)
Here’s a short video I did – well, technically I fronted; someone else (Joe Posner!) did all the work – explaining what we’re trying to do with the story form at Semafor. We call it a – doh – Semaform. And here’s the accompanying story.
(And yes, it’s two months old; but when was I ever on time on this blog?)
It’s been an interesting challenge at Semafor to actually implement some of the ideas I’ve been talking about – in mostly theoretical terms – for so long, and to rethink the basic unit of news from the ground up. And now I’m seeing how well some ideas do – or don’t – work in practice.
It’s been a great adventure. I’ll post more as I dig out from under…
Shameless self-promotion alert: Here are links to two talks I gave recently (“recently” being stretched to mean “three months ago”).
I would summarize them, but that would take work, and it’s the weekend…
More seriously – but it is also the weekend – the first is a talk I gave at the International Symposium on Online Journalism in Austin, Texas, back in April. The topic was “News: What is it, who is it for, and how can we rethink it for the digital era?” and it’s mostly about the need for imagination in journalism innovation, but the Q&A session afterwards, expertly moderated by Neil Chase of CalMatters, roams all over the place. I had fun, but your mileage may vary; there isn’t a transcript, so you’ll have to wade through 45 minutes or so of bad jokes and poor analogies to capture the entire essence of my ramblings.
And a shout out to Rosental Alves, who created the conference two decades or so ago, and who had the bad judgement to invite me to speak.
The second is the lunchtime keynote I gave at this year’s Investigative Reporters and Editors conference in Denver in June; I don’t think I had a title for it, and if I did, I’ve since forgotten. It was mostly about – well, a bunch of things, ranging from the need for us to focus as much on communities and audiences and their needs as on the stories we want to write, to the importance of diversity in management ranks and greater sensitivity to our blind spots, to why the framing of stories and language are critical. There’s no shortage of bad jokes, too.
The link takes you to the text of the 15-minute talk, and at the bottom of the page is the actual video; my part comes on at about the 50-minute mark. The jokes are better in the video; the text is faster to read. Your call.
Hugely honored to have been asked to give both talks; sorrier for the people who had to sit through them. They gave me a chance to distill a lot of what I’ve been writing about here and thinking about more generally. Would love to hear thoughts and comments.
So this is old news, but hey, it’s not like I’m the fastest writer in the world – despite having spent more than a decade at one of the fastest real-time news agencies in the world.
I love Reuters, but this was a too-good-to-turn-down opportunity to help build something from the ground up as opposed to working within a 175-year-old institution; to “make my own mistakes,” as I’ve said to any number of people. So far it’s been a wonderful adventure – a frenetic, kinetic, full-speed ride. And a ton of fun.
What is Semafor, you ask? Good question. Here’s what we say on our job ads:
Semafor is a global news platform for an increasingly complex world in which consumers are overwhelmed by too many news sources and unsure what to trust. We are building Semafor from the ground up to enable world-class journalists to deliver reporting and insights with rigor in journalistic forms that ensure a new level of transparency. Our editors will distill the most important stories from all over in formats that uncover the forces shaping the stories, explain the interests behind polarizing narratives, and replenish the stock of shared facts. As a global platform, Semafor recognizes that smart people can disagree and that informed readers need to understand alternative points of view from competing centers of power and culture in a multi-polar world.
There’s more to come, and we hope you’ll check us out when we launch later this year.
More specifically, how do you fix bias in the systems we interact with every day? That’s the theme of The End of Bias, an interesting new book by Jessica Nordell. She doesn’t just document all the conscious and unconscious biases we all have, but sets out to look at what works and doesn’t work in trying to address those issues. Well worth reading.
Plus, what’s not to like about a book that starts out with the experiences of a transman as he crosses the gender divide?
I wrote about this book a little while back – even before I read it! – and the simulation described in it; it shows how, even without overt acts of discrimination, a small level of systemic bias will accumulate over time and significantly disadvantage one group or another.
And that’s a key point: That you don’t need bad actors, overt discrimination or blatant wrongdoing to suffer from bias; it’s the small things that add up over time.
And key point two: We’re all biased. None of us can escape the blind spots we have; and we all have plenty. “Trying harder” isn’t a solution, any more than telling a nearsighted person to “try harder” to read the words across the wall.
So if more effort, better intentions, training people to recognize bias, aren’t the solutions – or at least not the only solutions – then what is? That’s what the book’s about. Bear with me. This is a long-ish post.
And all this matters not just as we try to build newsrooms that are more inclusive and more representative of the communities we cover and serve, but also in how we think about, document and frame the issues that matter to them.
First, it’s important to understand what bias is. Basically – and I’m sure I’m butchering some definition somewhere – it’s really just substituting our expectations about a group and their shared traits, whether justified or not, for actual detailed findings about an individual in that group. And we all do it. It would be hard to get through life without doing it. In many cases, it’s simply our brain’s way of coming to conclusions more quickly, although of course it can also be the result of out-and-out prejudice.
Or, as the book puts it:
That expectation is assembled from the artifacts of culture: headlines and history books, myths and statistics, encounters real and imagined, and selective interpretations of reality that confirm prior beliefs
…
Biased individuals do not see a person. They see a person-shaped daydream
…
The individual who acts with bias engages with an expectation instead of reality
(Re)Structuring Journalism explores the evolution of information in a digital age and how we need to fundamentally rethink what journalists do and what they produce.
And it proposes one possible solution: Structured Journalism.
About the author
Gina Chua has been a journalist for more than three decades; she’s currently Executive Editor, Semafor. Prior to that, she was Executive Editor at Reuters, where she transitioned. Before joining Reuters, Gina – then Reg – was Editor-in-Chief of the South China Morning Post and had a 16-year career at The Wall Street Journal, including eight as Editor of the Journal’s Asian edition.