Posted by: structureofnews | November 24, 2018

The Algorithms of War

Just a riff on a recent NYT magazine piece about the debate around “autonomous weapons,” or machines that can make decisions about whom to kill and when.  Spoiler alert: there’s no consensus about them.  Not even close.  Which is probably a good thing.

That said, it’s a good entry point to revisit the notion that we as an industry/profession could be doing a better job covering the many algorithms that now govern our lives, even if they aren’t literally designed to kill us.

Algorithms influence what news and information we see, how financial markets behave, where police put their resources, whether we can get loans and at what price, and much more.  And beyond that, they have the power – as do other forms of automation – to reshape how we build and structure our world, not just replace the humans in it.

As the NYT piece notes about the debate over autonomous weapons:

This argument parallels the controversial case for self-driving cars. In both instances, sensor-rich machines navigate a complex environment without the fatigue, distractions and other human fallibilities that can lead to fatal mistakes. Yet both arguments discount the emergent behaviors that can come from increasingly intelligent machines interpreting the world differently from humans.

Precisely.  Autonomous machines can and will go beyond replacing humans, potentially changing our world in fundamental ways – not necessarily for better or worse, but certainly differently.  And even if they don’t – or before they do – there’s a good argument for better understanding how they’re supplementing human decisions.

If a newspaper got a new editor, we would certainly interview them and try to understand their view of news; if your town got a new police chief, you’d want to know what they thought about stop and frisk.  Bringing in algorithms that assist – or make – those decisions isn’t far removed from bringing in new people to make them.  And yet we tend to cover algorithms only when they go wrong – such as when a self-driving car kills someone.  (Of which more later.)

To be sure, it can be hard to understand what’s happening inside a machine, especially one that’s been created via a machine-learning system that’s a black box even to its creators; but that’s all the more reason to expend the effort to dig into it. (Or at least write about it in a witty way, as Patricia Marx recently did in the New Yorker – it’s well worth the read.)

This isn’t a new rant of mine; self-referentially, I made this argument a while back and again not so long ago:

It makes a great case for why we need better coverage and understanding of algorithms, given how big a role they now play in our daily lives and how little transparency there is about how they work. That’s not a new idea – “algorithmic accountability” has been a rallying cry for some people for a while now, not least from Nick Diakopoulos, now at Northwestern University, and Julia Angwin of ProPublica. (I’ve made pitches for it as well – here and here, for example.) And the furor over Facebook’s algorithmically driven news feed, and how it was used to target particular audiences during the 2016 presidential campaign, is breathing new life into that drive.

(As an aside, Julia has left ProPublica and set up a new news organization to cover technology issues, and Nick has a book on algorithms coming out soon.)

But back to self-driving cars and the algorithms in them that determine how they deal with life-and-death ethical questions.  Perhaps, as this fascinating piece by Johannes Himmelreich in The Conversation notes, we’re asking the wrong questions.  Why are we focused on asking how self-driving cars should behave at crosswalks?

For myself, I began to question whether we need places called “crosswalks” at all. After all, self-driving cars can potentially make it safe to cross a road anywhere.

And it is not only crosswalks that become unnecessary. Traffic lights at intersections could be a thing of the past as well. Humans need traffic lights to make sure everyone gets through the intersection without crashes and chaos. But self-driving cars could coordinate among themselves smoothly.
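
Just to make that idea concrete – and this is purely an illustrative sketch of mine, not how any actual autonomous-vehicle system works – one way cars could coordinate without lights is a reservation protocol: each car asks a small piece of software for a time slot through the intersection, and conflicting requests simply wait their turn. Everything named below (the manager, the paths, the conflict rule) is hypothetical:

    # A toy sketch, not any real vendor's system: each arriving car asks a
    # hypothetical intersection manager for a time slot, and requests that
    # would overlap on conflicting paths are queued behind earlier ones.

    class IntersectionManager:
        def __init__(self):
            self.reservations = []  # (path, start, end) tuples already granted

        def request_slot(self, path, arrival, crossing_time):
            """Grant the earliest start at or after `arrival` that doesn't
            overlap an existing reservation on a conflicting path."""
            start = arrival
            for other_path, s, e in sorted(self.reservations, key=lambda r: r[1]):
                overlaps = start < e and start + crossing_time > s
                if paths_conflict(path, other_path) and overlaps:
                    start = e  # wait until the conflicting car has cleared
            self.reservations.append((path, start, start + crossing_time))
            return start

    def paths_conflict(a, b):
        # Toy rule: any two different paths conflict; a real system would
        # check actual geometry (lanes, turning radii, and so on).
        return a != b

    mgr = IntersectionManager()
    print(mgr.request_slot("north-south", arrival=0.0, crossing_time=2.0))  # 0.0
    print(mgr.request_slot("east-west", arrival=1.0, crossing_time=2.0))    # 2.0 - it waits

The point isn’t the code itself; it’s that even a toy like this encodes rules about who waits and who goes – decisions that, scaled up across every intersection, are policy.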

In some ways, this is akin to the question about whether machines can write better stories than humans.  It’s a perfectly valid question, but isn’t the better question how machines could change the way we find and provide information to people?

As the piece goes on to note, why try to have machines copy humans at all?

Furthermore, self-driving cars shouldn’t drive like people. Humans aren’t actually very good drivers. And they drive in ethically troubling ways, deciding whether to yield at crosswalks based on pedestrians’ age, race and income. For example, researchers in Portland have found that black pedestrians were passed by twice as many cars and had to wait a third longer than white pedestrians before they could cross.

Self-driving cars should drive more safely and more fairly than people do.

All of which is true – but too much of this work is being done behind closed doors, with decisions being made that will ultimately affect all of us.

Decisions made by engineers today, in other words, will determine not how one car drives but how all cars drive. Algorithms become policy.

And isn’t one of journalism’s key missions to cover policy?
