Posted by: structureofnews | November 6, 2017

Coding Error

To err is human.  But machines aren’t bad at it, either.

Especially when humans are the ones encoding the mistakes – whether through the flawed assumptions we bake in, or the lack of judgement we show in blindly trusting our creations.

That’s the key thesis of Cathy O’Neil‘s Weapons of Math Destruction – a well-argued book about the dangers of letting algorithms we don’t understand well run large parts of our lives. It’s a good, quick read – I finished it on a five-hour flight from New York to Phoenix – with lots of stark examples, and it’s well worth diving into.

It makes a great case for why we need better coverage and understanding of algorithms, given how big a role they now play in our daily lives and how little transparency there is about how they work. That’s not a new idea – “algorithmic accountability” has been a rallying cry for some time now, not least from Nick Diakopoulos, now at Northwestern University, and Julia Angwin of ProPublica. (I’ve made pitches for it as well – here and here, for example.) And the furor over Facebook’s algorithmically driven news feed, and how it was used to target particular audiences during the 2016 presidential campaign, is breathing new life into that drive.

But there’s still much more that can and should be done.

To be sure, we can’t do without algorithms – they help us sift through tons of data, can bring a level of objectivity to difficult decisions, and can surface insights we’d never have found on our own. Not to mention that the alternative – human judgement – has its own flaws and biases too.

But too often we ascribe a level of certainty or fairness to algorithm-driven systems that isn’t justified, or fail to ask even the most basic questions about how they work.

Not all algorithms are bad, of course, and Cathy develops a nice framework for thinking about the dangers any given one might pose: how opaque it is, how much damage it might cause, and how widespread an impact it has.

Opacity is certainly a red flag – if we don’t know what’s going into the system, how will we know if the results coming out are “right”? In some cases, the math that’s built in is laughably bad, such as when education departments use sample sizes of two or three dozen students to grade teachers; but even when the statistics involved are rigorous, there are questions about what outcomes the systems are optimizing for. For example, should an algorithm that looks for potential terrorist threats be optimized for fairness, or for spotting the tiniest sign of danger?
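To see how little a class-sized sample can tell you, here’s a minimal simulation – the effect sizes and noise levels are invented for illustration, not drawn from the book:

```python
import random

random.seed(42)

def class_score(true_effect, n_students=30, noise_sd=15.0):
    """Average test-score gain for one class of n_students."""
    gains = [random.gauss(true_effect, noise_sd) for _ in range(n_students)]
    return sum(gains) / len(gains)

# One teacher whose true effect never changes, "measured" five years running:
yearly_scores = [class_score(true_effect=5.0) for _ in range(5)]
print([round(s, 1) for s in yearly_scores])
# With only ~30 students per class, the score swings widely from year to
# year on sampling noise alone - yet ratings like these get teachers graded.
```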

And then, to what extent are individuals being treated as a class of people – for example, people who have relatives in the Middle East – or as individual actors – say, people who have traveled to Syria more than twice in the last year? If living in a poor neighborhood is correlated with being a bad credit risk, and that’s an important factor in determining whether someone gets a loan, then how does a hard-working, responsible person who happens to live in the wrong part of town get a break?
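A toy scoring rule makes the tension concrete – the fields, weights and cutoffs here are all invented for illustration:

```python
# Invented scoring rule that treats an applicant as a member of a class
# (their neighborhood) rather than as an individual.
def credit_score(applicant):
    score = 600
    score += 100 if applicant["on_time_payments"] else -100  # individual record
    if applicant["zip_poverty_rate"] > 0.3:                  # class-level proxy
        score -= 250
    return score

responsible = {"on_time_payments": True, "zip_poverty_rate": 0.4}
unreliable = {"on_time_payments": False, "zip_poverty_rate": 0.05}

# The neighborhood penalty outweighs a spotless individual record:
print(credit_score(responsible), credit_score(unreliable))  # 450 500
```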

Equally important, does the system have a feedback mechanism built in? If a loan-approving algorithm mistakenly excludes a whole class of credit-worthy people, but never tracks their progress after they get a loan from somewhere else, how will it ever correct itself? And how then can it adapt to changing circumstances? As Cathy notes:

Big Data processes codify the past.  They do not invent the future. Doing that requires moral imagination, and that’s something that only humans can provide.
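Here’s a hypothetical sketch of that missing feedback loop – the rule and the applicants are mine, not the book’s:

```python
# A lender only observes repayment for the loans it approves, so an
# algorithm that wrongly rejects a group never sees evidence of its error.

def approve(applicant):
    # Flawed rule, assumed for illustration: reject everyone in zip 999.
    return applicant["zip"] != "999"

applicants = [
    {"zip": "999", "repaid_elsewhere": True},  # credit-worthy, but rejected
    {"zip": "123", "repaid_elsewhere": True},
    {"zip": "123", "repaid_elsewhere": False},
]

observed = [a["repaid_elsewhere"] for a in applicants if approve(a)]
# Rejected applicants vanish from the training data, so retraining on
# `observed` can never expose the zip-999 rule as a mistake.
print(observed)  # [True, False] - the rejected borrower's success is invisible
```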

There are many more questions that can and should be asked – what data is the system pulling in, and does it accurately measure what’s in question? Do test scores actually reflect what students learn, or are they just a pretty decent proxy for it?

Yet overcoming opacity will be a huge problem. In the information economy, algorithms are often the key intellectual property that companies own, and few will be willing to allow anyone – least of all journalists – a peek into their systems. Building a framework for allowing that, at least for systems that have a significant impact on the public, will be one of the key public policy challenges ahead.

Beyond opacity, issues of scale and damage are important as well, and they speak less to the accuracy of any particular algorithm than to what we as a society value. What’s an acceptable rate of error in a system that looks for potential terror threats? Is it OK for 1%, or 2%, or 100,000 people in a population of 300 million to be on watch lists even if they’re innocent? What should any system be optimized for?
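The arithmetic behind those percentages is worth doing explicitly. A back-of-the-envelope sketch, with an assumed (not real) threat count and error rate:

```python
population = 300_000_000
true_threats = 3_000        # assumed: genuine threats are vanishingly rare
false_positive_rate = 0.01  # a screen that is "99% accurate" on innocents

innocent = population - true_threats
falsely_flagged = innocent * false_positive_rate
print(f"{falsely_flagged:,.0f} innocent people flagged")  # ~3,000,000
```

Even a 1% error rate puts roughly three million innocent people on the list – which is why the acceptable-error question is a question about values, not just accuracy.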

Of course, humans make mistakes as well, and any number of people have been wrongfully convicted, denied loans or had other injustices done to them.  The difference is that algorithms can operate at vast scale, affecting many more lives.

That said, there are some advantages to a world of algorithms: Because they’re written in code, they require us – or should require us – to think through what was often a mix of human ideals, ideas and biases, and assign weights to different goals and priorities. How should a self-driving car decide between avoiding a pregnant woman and swerving into a group of children? There are scholars studying exactly those questions.

So why aren’t more studying the many other algorithms that are taking over our lives?

This is, ultimately, about how we build systems that will govern much of our lives.  And we have to take responsibility for the choices we make – or the choices we encode. As Cathy notes:

The choices are not just about logistics, profits and efficiency. They are fundamentally moral.
