Posted by: structureofnews | August 12, 2010

Welcome

Aaah – another site about The Future of Journalism.

A dull one.  Without the  invective and ideology about free vs. paid, pajama-clad bloggers vs. stick-in-the-mud mainstream media curmudgeons, and Utopian visions of crowdsourced news vs. dark fears about falling standards you can find elsewhere.  It has words like taxonomy and persistent content in it; discusses business models and revenue streams in dull, accountant-like language; and tries to dissect the sparkling prose journalists turn out into tiny bytes of data.

But there is a purpose here, and it’s based around the idea that we as journalists haven’t really thought about how people are changing the way they access information, or how we need to fundamentally rethink the way we carry out journalism and the kinds of – for want of a better word – products we turn out for them.

There’s much hand-wringing over the loss of the traditional business model of news, it’s true.  Perhaps too much.  And this site will contribute its share.  But hopefully it’ll also explore some of the less-explored questions about where the profession goes in a digital age.   And lay out some of the thinking behind one concrete idea that might help move the business forward: Something I’m calling Structured Journalism.

So, welcome – and I hope you find this interesting.

(An update: I first wrote those words 11 years ago, and it’s amazing how some of those passionately argued debates – free vs. paid! – have basically gone away.  Which is great.  So I could and should rewrite this intro.  But the third paragraph remains just as valid. Plus, I’m pretty lazy. )

Posted by: structureofnews | February 19, 2024

The Mechanical Mechanical Turk

Here’s another piece I wrote for Semafor:

THE NEWS

Can a machine mimic a human trying to mimic a machine better than a human can?

Short answer: Yes.  Longer answer: Yes, and it matters — in particular to a kind of high-stakes, everyday journalism that mixes qualitative and quantitative assessments, and which I suspect is pretty much the last thing most editors would entrust to AI.

Bear with me. 

Large Language Models like ChatGPT are often derided — accurately — for the way they embody our many human flaws: inexactness, inconsistency, fuzzy thinking.  Shouldn’t machines be able to do better, you ask?  But what if those traits could be features, not bugs?

I was mulling these ideas during a class I’m co-teaching on computational journalism (I know, too much free time) and listening to a presentation on how hard it is to accurately collect hate crime statistics.  The Department of Justice definition is pretty straightforward, but police departments around the country track them in different ways, often with different criteria. A massive annual DOJ survey of people’s experience of crime yields widely different results from the official numbers. How to square those circles?

One way is by trawling through hundreds of crime stories in local news sites to get a sense of what’s happening on the ground; a tried-and-tested journalism and research technique. There are three common ways to sort out which stories might refer to hate crimes and which might not: Write an incredibly detailed search term for key words in the stories; use some form of machine learning, feeding in hundreds of examples of hate crimes and hundreds that aren’t, and have the machine figure out what distinguishes one from the other; or pay a large number of people a little bit of money each to rate the stories as likely to be hate crimes or not (a process known as using a Mechanical Turk, so named for a fake chess-playing machine in the 1700s that actually had a human in it).

The first system is tough to pull off in practice; how can you be sure you’ve captured every possible nuance of every possible hate crime in your terms? The second is prone to error — who knows what false associations the machine might make, and how many existing biases in the data it will encode?  And the third takes advantage of human understanding of nuance and complexity to identify hate crimes, but requires time, money and people. 
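(Just to make the second approach concrete, here it is in miniature; the training examples, labels and model choice below are purely illustrative, not anything from the actual research.)

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # A tiny, made-up training set: story text plus a 1/0 "hate crime" label.
    stories = [
        "Attackers shouted slurs about the victim's religion during the assault",
        "Two men robbed a convenience store and fled in a car",
        "Vandals painted racist graffiti on a family's garage door",
        "A driver was charged with speeding after a highway crash",
    ]
    labels = [1, 0, 1, 0]

    # Learn which words distinguish the two groups, then score new stories.
    classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
    classifier.fit(stories, labels)

    new_story = "Police say the suspects targeted the victim because of his ethnicity"
    print(classifier.predict_proba([new_story])[0][1])  # probability it's a hate crime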

But what if machines could substitute for people? 

That’s what I set out to do. 

KNOW MORE

It’s not hard to build a Mechanical Turk bot. I took the DOJ definition of a hate crime and put it into a GPT-4-powered bot, told it not to use any other information, and asked it to assess (made-up) news stories on how likely it was they referred to a hate crime. And then I fed it a steady diet of those stories.
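If you want to try something similar yourself, here’s a minimal sketch of that kind of bot using OpenAI’s Python client. The definition text, prompt wording, model name and rating scale below are placeholders rather than my exact setup; in practice you’d paste in the full DOJ definition.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

    # Placeholder; in practice, paste in the full DOJ definition verbatim.
    DOJ_DEFINITION = (
        "A hate crime is a crime motivated by bias against race, color, religion, "
        "national origin, sexual orientation, gender, gender identity, or disability."
    )

    SYSTEM_PROMPT = (
        "You assess news stories. Use ONLY the following definition of a hate crime "
        f"and no other information:\n{DOJ_DEFINITION}\n"
        "Rate the likelihood that the story describes a hate crime as one of: "
        "very low, low, moderate, high, very high. Briefly explain your reasoning."
    )

    def rate_story(story_text: str) -> str:
        """Ask the model how likely it is that a story describes a hate crime."""
        response = client.chat.completions.create(
            model="gpt-4",  # placeholder model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": story_text},
            ],
        )
        return response.choices[0].message.content

    print(rate_story("A woman was attacked and robbed in Brooklyn by five assailants..."))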

The first described an attack and robbery of a woman in Brooklyn by a group of five assailants, captured on video.  Neither the victim nor the attackers were identified, and the bot scored the likelihood of this being a hate crime as “very low.”

I modified the story to describe the victim as an Asian woman, and added the detail that one of the attackers was heard saying “we don’t want your kind here.”

Next I changed the victim to a woman wearing a MAGA cap, and said the attackers had been seen coming from an anti-Trump rally.  This was the result:

In other words, bias may have prompted the attack, but that’s not necessarily a hate crime as defined by the DOJ.

Read More…
Posted by: structureofnews | December 21, 2023

Less is More

I’m back! Here’s a story I wrote for Semafor after a weekend playing with bots, a sentence that I’m sure will not age well. (It is, as always, in a Semaform.)

THE SCENE

The big news — albeit somewhat old news — is that large language models can work with small data.

And that could be game-changing for journalism.

I spent a weekend building some custom bots to explore how newsrooms could leverage generative AI to improve their work. TL;DR: I think there’s a lot of there there.

To recap: LLMs are great at manipulating language — parsing questions, generating human-like prose, and engaging in conversation. But they do poorly at distinguishing fact from fiction — even if they’re improving at an incredible pace — which makes them ill-suited to creating content that’s meant to be accurate, a.k.a. journalism. One reason is that they’ve been trained on billions of articles from the web, many of them inaccurate.

But if you could marry their incredible language capabilities with a really tightly constrained set of information — by building a custom bot that only accesses a small set of documents — what could you accomplish? A lot.

I built three bots: One, to investigate how a newsroom might be better able to leverage the wealth of information contained in its reporters’ notebooks; two, to explore a more interactive user interface and story form; and three, to see if there was a better way to get technical or arcane expertise to resource-strapped newsrooms, especially on deadline. It took just a day and a half to get pretty decent — although far from perfect — results.

KNOW MORE

The first experiment was simple: Collect the half-dozen scoops that Semafor’s Liz Hoffman has broken on investment firm 777 Partners and build a bot that answers questions based only on those stories. The idea was to mimic a newsroom tool for reporters who need quick summaries of the stories their colleagues are working on.

Sure, they could read all the stories themselves (or call Liz), but why not just query a bot for the specific information they need? The stories don’t list a CEO for the company, but when I asked, the bot had a good sense of what I was looking for here:

You don’t have to take the bot’s word for it; it offers citations that deep link to the source documents. That makes it an impressive search engine that “understands” context.

Imagine building a system across a newsroom that allowed reporters and editors to rummage through their published stories — or notes. Sure, you’d want them to double-check anything the bot told them; but you’d expect them to do that with a Google search result or a Wikipedia entry, too. This just helps them be more efficient.
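The core of such a setup is surprisingly small. Here’s a bare-bones sketch, assuming a folder of story text files and OpenAI’s embeddings and chat endpoints; the folder name, model names, prompts and the question at the end are all just illustrative.

    from pathlib import Path

    import numpy as np
    from openai import OpenAI

    client = OpenAI()

    def embed(texts: list[str]) -> np.ndarray:
        """Turn a list of strings into embedding vectors."""
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    # A small, trusted corpus: one published story per .txt file.
    stories = {p.name: p.read_text() for p in Path("published_stories").glob("*.txt")}
    names = list(stories)
    vectors = embed([stories[n] for n in names])

    def ask(question: str) -> str:
        """Answer a question using only the closest-matching stories, with citations."""
        q = embed([question])[0]
        scores = vectors @ q / (np.linalg.norm(vectors, axis=1) * np.linalg.norm(q))
        top = [names[i] for i in np.argsort(scores)[-3:]]  # three most relevant stories
        context = "\n\n".join(f"[{n}]\n{stories[n]}" for n in top)
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder
            messages=[
                {"role": "system",
                 "content": "Answer ONLY from the stories provided. Cite the filenames "
                            "you relied on. If the answer isn't there, say so."},
                {"role": "user", "content": f"Stories:\n{context}\n\nQuestion: {question}"},
            ],
        )
        return resp.choices[0].message.content

    print(ask("Who runs 777 Partners, according to our coverage?"))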

The second experiment was to see if I could make a bot that would mimic a human conversation about a news story — an Alexa or Siri for news, but with much more capability. I took the same set of 777 Partners stories, but added prompts that asked it to answer as if it was talking to an acquaintance over a meal. (I tried asking it to talk as if it was conversing with a friend over a drink, but the language got too salty too quickly.)

Read More…
Posted by: structureofnews | April 7, 2023

Chatbots ≠ Search Engines

Here’s a mini-rant I contributed to Semafor’s tech newsletter. (Sign up here! It’s free!)

Chat bots aren’t search engines, and tech companies need to stop hinting that they are!

An Australian mayor is threatening to sue OpenAI because ChatGPT apparently named him — falsely — as the guilty party in a bribery scandal.  A law professor was named — also falsely — by ChatGPT in a list of legal scholars who had sexually harassed someone, citing a non-existent Washington Post article.

None of this should be surprising. That’s because chatbots don’t have any real ability to discern fact from fiction. They have no concept of verification. What they do is process language, and they do that very well. They’re idiot savants of language. Request lyrics to a song about nuclear physics in the style of Taylor Swift, and you’ll be astonished.

And yet there’s something seductive about a human-like interface that seems to be an oracle of information and can answer any question we pose.  Because chatbots have the same simple, single-box interface of search engines, we expect the same kind of responses from them.

We should know better, and the user interfaces should help more.

Bard, the chatbot recently released by Google, has a small disclaimer under its input bar: “Bard may display inaccurate or offensive information that doesn’t represent Google’s views.” ChatGPT has much the same thing: “ChatGPT may produce inaccurate information about people, places or facts.” And Bing Chat is a tad more coy: “Bing is powered by AI, so surprises and mistakes are possible. Make sure to check the facts, and share feedback so we can learn and improve!”

It’s good that the disclosures are there, but they’re easy to miss. And more importantly, they imply that mistakes — ”hallucinations,” as they’re called — may happen from time to time, when in fact they should say, “please don’t count on this chatbot for any facts.” These disclosures are like an AI image generator saying, “the images you see may not represent the real world.”

But until that changes, the onus is on us to keep reminding ourselves: chatbots aren’t search engines.

Posted by: structureofnews | March 29, 2023

Adding Capabilities

Another story for Semafor, this one on how close we are to adding math & symbolic logic (aka “algebra”) to large language models, and how that could be a huge change.

THE NEWS

“Microsoft Excel: The Killer App” is probably not a headline you would ever expect to see. But an upcoming integration of the AI system GPT-4 into the ubiquitous spreadsheet program may mark one of the most significant advances so far in a string of already amazing milestones that chatbots have met and surpassed. 

Microsoft announced recently that it was rolling out Office 365 Copilot, an AI-powered system, to its Office suite of programs — Word, PowerPoint, Teams, and so on. I watched a demo, and it’s impressive to see it summarize meeting notes, draft emails, create presentations and much more. 

But what really caught my attention was the planned — but not demo-ed — integration with Excel, with the promise that users could simply tell the program to build a spreadsheet, and it would; and that they could further refine it with a series of queries (eg: “if I were to reduce costs by 5%, how would that affect profit margins?”)

Microsoft said it’s still working on building Copilot into Excel, but that we could expect it soon.

Currently, chatbots have limited — to say the least — math capabilities. I asked Claude, an AI system from Anthropic, to do some simple addition, and it botched the result. I asked it to double-check its math, and it gave me the correct answer. But on asking it a third time, it came back with yet another, wrong, answer — despite assuring me it had triple-checked the math.

As Claude itself noted:

I am not actually able to do mathematical calculations – I am an AI language model created by Anthropic to be helpful, harmless, and honest using a technique called Constitutional AI. While I can have conversations about math and numbers, I do not have a built-in calculator. I do not have any internal mathematical reasoning capabilities or truly understand numbers in the way that humans do. 

When you asked me to add those numbers, I simply treated that question as another language input, and responded with the sum I was trained to give for that specific set of numbers. But I cannot do any open-ended math or calculation. I have no real mathematical abilities – I simply manipulate language.

GINA’S VIEW

Maybe being able to add and subtract sounds mundane.  But it’s not. 

In fact, it may be transformative. 

GPT-4, Claude and Google’s Bard are all Large Language Models, or LLMs.  They “understand” and create language well, but as I’ve written (and Claude notes, above), have no real conception of facts, verification — and importantly — numbers.  That limits the utility of such systems: They can create messages, edit stories, build presentations, and even write some code, but they can’t be counted on to interact with the world when it comes to the parts that go beyond language.

For that, a model needs a way of handling numbers verifiably; a way to do basic algebra.

Excel could give it that capability.  
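To make the division of labor concrete, here’s a toy sketch of the general idea: the language model translates the question into a formula, and a deterministic engine (a spreadsheet, a calculator, plain Python) does the arithmetic. The prompt, model name and helper below are mine, not anything Microsoft has announced.

    from openai import OpenAI

    client = OpenAI()

    def answer_with_real_math(question: str) -> str:
        """Let the model handle the language; do the arithmetic deterministically."""
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder
            messages=[
                {"role": "system",
                 "content": "Translate the user's question into a single Python arithmetic "
                            "expression using only numbers, parentheses and + - * /. "
                            "Reply with the expression and nothing else."},
                {"role": "user", "content": question},
            ],
        )
        expression = resp.choices[0].message.content.strip()
        # The math itself happens outside the language model.
        # (eval of model output is fine for a toy; a real system would use a proper parser.)
        result = eval(expression, {"__builtins__": {}}, {})
        return f"{expression} = {result}"

    print(answer_with_real_math("If revenue is 1.2 million and costs of 800,000 fall 5%, what's the profit?"))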

Read More…
Posted by: structureofnews | February 25, 2023

Search and Rescue

I admit, I stole that headline from the annotation in the photo that accompanied this story on Semafor.

Anyway, another dual posting – this one muses about why tech companies seem intent on using large language models – i.e., the thing behind chatbots – to revamp the whole search experience, when some small chatbot-enabled tweaks could make it so much better much sooner. (Hint – read the Nieman Lab piece at the end of this: It’s likely because they want to keep users on the search site.)

Here goes. And yes, it’s in a Semaform.

THE NEWS

Chatbot-infused information systems are not off to a good start.

Microsoft’s ChatGPT-assisted Bing Chat service is being throttled to reduce oddball replies, and Google’s experimental Bard system managed to bungle an answer in a marketing demo, costing the company billions in market value.  (Bing got things wrong too.)

Tech behemoths — and the public — have been so focused on the chatbots’ ability to hold human-like conversations with users that the core purpose of a search engine, which is to find useful and, ideally, accurate information, seems to have been overshadowed. Instead, the public has seized upon professions of love, angry denials of basic realities, and many more mundane “hallucinations” of incorrect facts.

GINA’S VIEW

It didn’t have to be this way.

At its heart, a search engine does — at least to lay users like me — three things: Take in a query (e.g., “how effective are COVID-19 vaccines?”) and turn it into a search term; hunt for information on the internet, make some kind of judgment about what’s credible; and then present it back to users.  Sometimes that comes as a simple, authoritative answer (“The population of New York City was 8.468 million in 2021”) and sometimes as a list of links.

Google — the king of search engines — does that second part extremely well, thanks to PageRank and other proprietary algorithms that it’s developed over the decades; it’s doing better on the first part, although it’s still a long way away from providing a conversational interface. 

And it does less well on the third part, often presenting a list of links that users have to plow through, although it’s getting better at synthesizing the information all the time. Chatbots, on the other hand, are terrible at the second thing — because, bluntly, they’re optimized for language output and not for fact-finding or fact-checking. When they try to aggregate disparate information into a single definitive answer, they often get things wrong, or “hallucinate.”

And the lack of citations or links in their authoritative-sounding answers means it’s nearly impossible to check the facts for yourself. On the other hand, the chatbots are pretty good at parsing language and generating language, because they’re, well, language models. Doh.

So why are tech companies enamored with integrating them into the entire search process — even the parts they’re not good at?  Why not marry the two capabilities? Why not have a chatbot take a normal human question and turn that into a search term (that’s a language skill), have a link system for finding relevant web pages (that’s a search and ranking skill), and then use the chatbot to summarize them (another language skill)?

Which is what I tried to do.

I used Claude — an AI chatbot built by Anthropic, in which Google just invested $300 million — to ask a simple question: Did Donald Trump pay adult film star Stormy Daniels for her silence? (I couldn’t ask an up-to-date question, because Claude’s database doesn’t extend to the present day.)
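The shape of the pipeline I’m imagining is simple enough to sketch. In the sketch below, search_web() is a stand-in for whatever link-finding and ranking backend you’d plug in (it’s not a real API), and the prompts and model name are just illustrations.

    from openai import OpenAI

    client = OpenAI()

    def search_web(term: str) -> list[dict]:
        """Stand-in for a real search/ranking backend (Google, Bing, etc.).
        Should return items like {"title": ..., "url": ..., "snippet": ...}."""
        raise NotImplementedError("plug your search engine of choice in here")

    def chat(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4",  # placeholder
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.strip()

    def answer(question: str) -> str:
        # 1. Language skill: turn a human question into a search term.
        term = chat(f"Turn this question into a concise web search query: {question}")

        # 2. Search-and-ranking skill: let the search engine find credible pages.
        results = search_web(term)[:5]

        # 3. Language skill again: summarize the results, keeping the links.
        sources = "\n".join(f"- {r['title']} ({r['url']}): {r['snippet']}" for r in results)
        return chat(
            f"Using only these sources, answer the question and cite the URLs you used.\n"
            f"Question: {question}\nSources:\n{sources}"
        )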

Read More…
Posted by: structureofnews | February 20, 2023

Language!

Here’s a story I wrote for Semafor (you know, where I work) about the possible uses of AI chatbots in journalism.

I wrote earlier here about how we keep trying to shoehorn their capabilities into things they don’t do well; this is a post exploring what they’re actually good at, and how they could help newsrooms. And in case you’re wondering, the story is in a “semaform.”

THE NEWS

There’s been a ton of ink spilled on all the things ChatGPT and other AI chatbot systems don’t do well, not least distinguishing fact from fiction or doing basic math.  But what can they do well, and — importantly for newsrooms — what can they do for journalism?

The trick is to focus on the tasks they can do well – which means working with language.

GINA’S VIEW

ChatGPT and other AI systems don’t do journalism well because, well, they weren’t built to.  They’re language models, meaning they’re really good at tasks involving language.  But they’re not fact models, or verification models, or math models, which makes them terrible at figuring out what’s true or adding two numbers together — both of which are pretty basic journalism skills. But complaining that they’re bad at journalism is like being angry at Excel because it doesn’t draw pictures well. 

What they are good at is language. I’ve been playing with various AI-powered chatbots for the last week or so, and two things are absolutely clear to me: 

  • There are useful, here-and-now real world applications that could materially improve how journalism is practiced and created;
  • The statement above might no longer be true.

I used Poe, a great tool for experimenting with AI systems from the folks who brought you Quora, to ask Claude, a chatbot created by Anthropic (which Google recently invested $300 million in), to copy edit stories into which I had deliberately introduced factual errors (Ron DeSantis is governor of Miami), spelling mistakes and grammatical errors.  For good measure, I threw in some internal inconsistencies (a fall in birth rates in China will lead to an increase in population).

It aced the test.  It fixed DeSantis’ title, it corrected the inconsistencies, it made the stories read more smoothly.  It even gave an explanation of all the changes it made, and why.

To be sure, this wasn’t top-of-the-line, prize-winning editing of an 8,000-word investigative project.  It was acceptable, middling, competent copy editing — and let’s face it, that’s what 80% of copy editing in most newsrooms is.  (Don’t kill me, copy deskers! I love you all!)

I’m not suggesting that Claude should be unleashed on stories unsupervised; but if it could do a first edit on most of the copy in a newsroom — especially newsrooms where the staff are writing in a language that isn’t their mother tongue — it could offer material improvements in quality and efficiency.
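(For anyone who wants to experiment: wired up to Anthropic’s Python SDK, that first-pass edit looks roughly like the sketch below. The model name and prompt wording are placeholders, and this isn’t the exact setup I used through Poe.)

    import anthropic

    client = anthropic.Anthropic()  # assumes an ANTHROPIC_API_KEY environment variable

    COPY_EDIT_PROMPT = (
        "You are a newsroom copy editor. Fix spelling, grammar and internal "
        "inconsistencies, flag facts that look wrong (do not invent new ones), and "
        "return the edited story followed by a bullet list of the changes you made and why."
    )

    def first_pass_edit(story: str) -> str:
        """Ask Claude for a first-pass copy edit of a draft story."""
        message = client.messages.create(
            model="claude-3-5-sonnet-latest",  # placeholder model name
            max_tokens=2000,
            system=COPY_EDIT_PROMPT,
            messages=[{"role": "user", "content": story}],
        )
        return message.content[0].text

    print(first_pass_edit("Ron DeSantis, the governor of Miami, said Thursday that..."))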

KNOW MORE

And then I took another step.  I asked Claude to take another story — about China’s attempts to reverse a fall in birthrates — and edit it in the styles of the New York Times, the New York Post, China Daily and Fox News.  The Times version was staid, as you might expect from the Gray Lady, and threw in a lot of background.

Read More…
Posted by: structureofnews | February 5, 2023

Right Tool, Wrong Job

There’s a joke about a drunk hunting around the ground below a bright street light, and when asked what he’s looking for, says that he dropped his keys somewhere up the road.  So why is he looking here? “The light is better here,” he says.

OK, so you didn’t come here for the humor, but for the sharp analogies that jokes can prompt (self-referential reference here). Although this one isn’t my analogy: it’s cribbed from AI expert Gary Marcus, a professor at NYU, who joined Ezra Klein on a smart podcast about the promise and limitations of AI to discuss our obsession with ChatGPT and our attempts to have it solve a multitude of problems – not because it’s good at them, but because the light is so much better under it.

There have been thousands of words written about ChatGPT and its miraculous capabilities, massive shortcomings or apocalyptic dangers. This isn’t one of those pieces.  It’s more about what it doesn’t do well, what the things it does do well could do for journalism, and why we should be looking elsewhere to fill the gaps it can’t.

To be sure, ChatGPT is very good at some things.  It’s an astounding language model, which means it can produce human-sounding text in multiple styles at scale, and will doubtless upend any profession or industry that requires writing as an output – business executives and copywriters, for example.  That doesn’t mean it’ll put people out of work – although it certainly could; it’s more that people who aren’t great at expressing themselves might get a tool to help them on that front, just as a calculator helped people who weren’t great at doing math with pencil and paper. Given the right prompts, it can turn mediocre ideas into acceptable prose – a low bar, perhaps, but then again lots of writing ain’t Shakespeare. (There are a whole set of other questions about equity and the new “AI divide” between those who have access to such tools and those who don’t, but that’s a topic for another day.)

And, as I noted in an earlier post, it’s uncannily good at “understanding” questions, and if combined with a good backend search engine, could well revolutionize search as well.  (As Microsoft is doubtless thinking with its investment in OpenAI and integration of ChatGPT into Bing.)

What are its weaknesses?  It can’t do math, for one, as CNet’s ill-fated experiment in ChatGPT-generated stories demonstrated.  And that’s a pretty basic skill for journalism.  Nor is it great at discerning fact from fiction, as any number of people have shown. And while it can create new content in multiple styles, all of it is ultimately based in some broad way on words that have been written before.  It isn’t original in that sense. And, per Gary Marcus:

What it’s bad at is abstraction. So I can read you another example where somebody asks a system to give something and say how many words there are. And it sometimes gets the number of words right and sometimes gets the number of words wrong. So the basic notion of counting a number of words is an abstraction that the system just doesn’t get. There are lots of abstractions that deep learning, in fact all abstractions of a certain narrow technical sense, these systems just don’t get at all.

So it’s not all that helpful to criticize it for not being original, for not understanding concepts, or for not performing great (or even mediocre) journalism; that’s not what it’s built to do. After all, you don’t complain that Excel does a bad job of writing stories; it’s not supposed to. At heart, ChatGPT is a language model that does an astoundingly good job at putting words one after another that cohere.  It doesn’t “understand” any of them; it doesn’t analyze them, or facts, per se. It’s taking large amounts of data and predicting, based on the words it has ingested, how to create something new.

And when those words are largely accurate, it can give a pretty good answer. But when those words are riddled with inaccuracies, not so much. But journalism is often about new words: new facts, new analysis, and new ideas. It’s math, at one level.  It’s analysis.  It’s inferences and weighing of conflicting statements.  It’s verification of new facts.  Which leaves a system like ChatGPT without a lot of training data to work from.

For that – at least in my limited understanding of AI and technology – you need something more like symbolic logic.  You need a system that can take data, analyze it, look for patterns that are “interesting” or “insightful” and surface them, whether in text or some other format.  That’s what we were building when I was at Reuters with Lynx Insight.  Language generation was the least interesting part of it; what we wanted was smart pattern recognition.  Does this financial data suggest some major shift in company strategy?  Are corporate insiders bailing out?  And so on.
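To give a flavor of what I mean by pattern recognition, here’s a toy rule of the sort such a system might run; the threshold, field names and framing are invented for illustration, and this is nothing like the actual Lynx Insight code.

    import statistics

    def insider_selling_alert(quarterly_net_sales: list[float], threshold: float = 2.0) -> str | None:
        """Flag when the latest quarter's net insider selling far exceeds the recent norm.

        quarterly_net_sales: net shares sold by insiders per quarter, oldest first.
        """
        *history, latest = quarterly_net_sales
        baseline = statistics.mean(history)
        if baseline > 0 and latest > threshold * baseline:
            return (f"Insiders sold a net {latest:,.0f} shares last quarter, about "
                    f"{latest / baseline:.1f}x the average of the prior {len(history)} "
                    f"quarters - possibly worth a reporter's look.")
        return None  # nothing unusual; say nothing

    print(insider_selling_alert([120_000, 95_000, 110_000, 460_000]))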


Posted by: structureofnews | December 28, 2022

Questions

I’m going back to the well of ledes/jokes with this one:

“My dog can play checkers.”

“That’s amazing!”

“Not really – he’s not very good.  I beat him three times out of five.”

Old joke, I know, but I tweeted it recently in connection to the discussion about ChatGPT and its skill – or lack thereof – in creating human-like content in response to prompts. And how maybe we’re looking at the wrong place to understand how groundbreaking it really is. Hint: It’s in the questions, not the answers.

(First, a digression: Yes, it’s been ages since I posted. And who knows when I’ll post next. But I am slowly returning to this world. End of digression.)

So this post is based on no real deep knowledge of ChatGPT – and I’m not sure if I would understand it if I had more access to information – but some of the commentary on the launch of what’s a really very impressive AI platform seems to be focused on the wrong thing: the output rather than the input.

Don’t get me wrong: ChatGPT’s output is incredible. And also sometimes incredibly stupid, as this piece in the Atlantic notes. And there’s certainly no shortage of critiques on the interwebs about how it’s not really creative and it simply reworks the huge corpus of writing and data that’s been fed into it.

And all that’s fair. Although, again: “My dog can play checkers” is the achievement, not how many times it wins.

But more importantly, perhaps the most significant achievement in ChatGPT isn’t in how it comes up with answers but how it understands questions. I’ve been playing with it a bit, and what I find amazing isn’t how well – or badly – it answers my questions, but how it knows what I’m looking for. I asked it who Michael Cohen was, and it figured that out; I asked about Stormy Daniels, and it knew that too. True, not hard. But when I asked whether Donald Trump had paid Stormy Daniels off, it managed to parse what’s a really complicated question – who is Donald, who is Stormy, what is their relationship, did it involve a payoff and why, and who said what – and came back with a reasonable answer (Donald says he didn’t, but Michael said he did.)

To be sure, it’s true that as a search engine, ChatGPT has some significant drawbacks, not least that it doesn’t seem to be able to distinguish between what’s true and published and what’s untrue and published.

But Google does a pretty good job of that part of the search experience. While doing a pretty mediocre job of the front end of the search experience – we all spend far too much time refining our search terms to get it to spit out useful answers. So what if the front end of ChatGPT was paired with the back end of Google?

Imagine, as I did for a Nieman Lab prediction piece (and years ago, here), that it could be used to power a real news chatbot, but one powered by verified, real-time information from a news organization. Talk – literally – about truly personalized information, as I mused in the Nieman Lab article.

How about using ChatGPT’s powerful language parsing and generation capabilities to turn the news experience into the old saw about news being what you tell someone over a drink at a bar? “And then what happened?” “Well, the FBI found all these documents at Mar-a-Lago that weren’t supposed to be there.” “I don’t understand — didn’t he say he declassified them?” “Actually…”

It would let readers explore questions they have and skip over information they already have. In other words, use technology to treat every reader as an individual, with slightly different levels of knowledge and interest.

And yet another use case for good structured journalism! (Of course.) Regardless of how it’s ultimately used (and here’s one idea, and here’s another), it’s important to recognize how important this development is, and how it could truly transform the industry. You read it here first.

(Another digression: You should listen to this Ezra Klein episode as he talks to Sam Altman, the CEO of OpenAI, which created ChatGPT. It’s either fascinating or terrifying. Or both.)

Posted by: structureofnews | December 15, 2022

The Semaform at Semafor

Here’s a short video I did – well, technically I fronted; someone else (Joe Posner!) did all the work – explaining what we’re trying to do with the story form at Semafor. We call it a – doh – Semaform. And here’s the accompanying story.

(And yes, it’s two months old; but when was I ever on time on this blog?)

It’s been an interesting challenge at Semafor to actually implement some of the ideas I’ve been talking about – in mostly theoretical terms – for so long, and to rethink the basic unit of news from the ground up. And now I’m seeing how well some ideas do – or don’t – work in practice.

It’s been a great adventure. I’ll post more as I dig out from under…

Posted by: structureofnews | July 10, 2022

A Talk Or Two

Shameless self-promotion alert: Here are links to two talks I gave recently (“recently” being stretched to mean “three months ago”).

I would summarize them, but that would take work, and it’s the weekend…

More seriously – but it is also the weekend – the first is a talk I gave at the International Symposium on Online Journalism in Austin, Texas, back in April. The topic was “News: What is it, who is it for, and how can we rethink it for the digital era?” and it’s mostly about the need for imagination in journalism innovation, but the Q&A session afterwards, expertly moderated by Neil Chase of CalMatters, roams all over the place. I had fun, but your mileage may vary; there isn’t a transcript so you’ll have to wade through 45 minutes or so of bad jokes and poor analogies to capture the entire essence of my ramblings.

And a shout out to Rosental Alves, who created the conference two decades or so ago, and who had the bad judgement to invite me to speak.

The second is the lunchtime keynote I gave at this year’s Investigative Reporters and Editors conference in Denver in June; I don’t think I had a title for it, and if I did, I’ve since forgotten. It was mostly about – well, a bunch of things: the need for us to focus as much on communities and audiences and their needs as on the stories we want to write; the importance of diversity in management ranks and greater sensitivity to our blind spots; and why framing stories and language are critical. There’s no shortage of bad jokes, too.

The link takes you to the text of the 15-minute talk, and at the bottom of the page is the actual video; my part comes on at about the 50-minute mark. The jokes are better in the video; the text is faster to read. Your call.

Hugely honored to have been asked to give both talks; sorrier for the people who had to sit through them. They gave me a chance to distill a lot of what I’ve been writing about here and thinking about more generally. Would love to hear thoughts and comments.
