Posted by: structureofnews | October 29, 2018

Just Following Orders

So it’s been a long time since I posted anything here.  I did think about it a bunch of times (honest!), but there have been more than enough other things going on – from having two colleagues unjustly jailed in Myanmar to a multi-billion-dollar deal that nets Reuters a 30-year, $325 million-a-year contract to supply news – that this blog just hasn’t been a priority.

Still, there’s a lot happening in the world of technology and news, and even in structured journalism – like this nice, short piece in the New York Times on Sunday, riffing on the promise and limitations of machine learning.  And it reminds us, again, that we need to do a much better job of covering the algorithms that govern our lives.

It’s mostly about how textgenrnn, a machine-learning tool that learns to imitate whatever text it’s trained on, came up with a number of creative, funny and quirky new Halloween costume names – Sexy Minecraft Person or Piglet Crayon, anyone? – after being fed a list of existing costume names.  That’s pretty impressive, given that the system wasn’t given any information about words, grammar or spelling – it basically iterated its way towards things that made sense (sort of) to a human.
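
For the technically curious, textgenrnn is an open-source Python library, and an experiment along the lines of the one in the piece might look something like this minimal sketch – with the caveat that the costumes.txt training file and the parameter choices are my hypotheticals, not details from the article:

```python
# Minimal sketch using the textgenrnn library (pip install textgenrnn).
# 'costumes.txt' is a hypothetical file with one costume name per line;
# the library trains a character-level recurrent network on it, learning
# words, spelling and all purely from the examples.
from textgenrnn import textgenrnn

textgen = textgenrnn()
textgen.train_from_file('costumes.txt', num_epochs=10)

# Higher temperature produces quirkier, less conservative names.
textgen.generate(5, temperature=0.8)
```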

Which speaks to some of the power of such algorithms – their ability, in many ways, to come to “reasonable” results with a minimum of human intervention.  And – in a much more troubling way – with a minimum of human understanding as well.  As the piece notes:

Even when we can peer inside the neural network’s virtual brain and examine its virtual neurons, the rules it learns for its prediction-making are usually very hard to interpret.

…for the most part these algorithms are black boxes, producing predictions without explanation.

The key inputs for such systems are the training data – the pool of information the system should learn to emulate – and the goals we set for the algorithm.  But training data can be biased, yielding algorithms that turn out biased results (as Cathy O’Neil’s Weapons of Math Destruction nicely describes), and machines can take orders rather too literally, optimizing for conditions their creators never intended.
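
On the bias point, a toy sketch (my illustration, not from the article or the book) shows how faithfully a model reproduces skew in its training data – here, a classifier fed historical approval decisions that rejected most qualified members of one group:

```python
# Toy illustration: a model trained on biased historical decisions learns
# the bias. All names and numbers here are invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)   # a demographic attribute: 0 or 1
skill = rng.normal(size=n)      # the thing we actually want to reward

# Historical labels: qualified people were approved, except that 70% of
# qualified group-1 applicants were rejected anyway.
approved = ((skill > 0) & ~((group == 1) & (rng.random(n) < 0.7))).astype(int)

model = LogisticRegression().fit(np.column_stack([group, skill]), approved)
print(model.coef_)  # the `group` coefficient comes out strongly negative:
                    # the model has learned to penalize group membership itself
```

The model isn’t malfunctioning; it’s emulating its pool of information exactly as asked.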

The NYT piece points to a couple of hilarious-if-they-weren’t-serious examples: A system that was supposed to reduce sorting errors simply deleted the entire list, leaving nothing to mis-sort.  Another, meant to learn how to land an aircraft gently on an aircraft carrier, solved the problem by crashing the plane so hard that the recorded force overflowed the simulator’s memory and registered as zero.

Both achieved the goals laid out by their designers – but not what their designers intended.
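
Here’s a toy sketch of that literal-mindedness (mine, not code from the article): give a simple hill-climbing optimizer the objective “minimize sorting errors” and let deletion be a legal move, and it will happily discover that an empty list contains no sorting errors at all.

```python
# Toy illustration of specification gaming: an optimizer that may delete
# elements "solves" sorting by emptying the list, since an empty list has
# zero out-of-order pairs.
import random

def sorting_errors(xs):
    """The (badly specified) objective: count adjacent out-of-order pairs."""
    return sum(1 for a, b in zip(xs, xs[1:]) if a > b)

def mutate(xs):
    """Randomly swap two elements or delete one; both are legal moves."""
    xs = list(xs)
    if xs and random.random() < 0.5:
        xs.pop(random.randrange(len(xs)))
    elif len(xs) >= 2:
        i, j = random.sample(range(len(xs)), 2)
        xs[i], xs[j] = xs[j], xs[i]
    return xs

data = random.sample(range(100), 20)
for _ in range(5000):
    candidate = mutate(data)
    if sorting_errors(candidate) <= sorting_errors(data):
        data = candidate

print(data, sorting_errors(data))  # ends as [] with 0 errors:
                                   # goal achieved, intent missed
```

Nothing in that code is broken; the objective simply never said the list had to keep its contents.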

Couple that with a black box system that isn’t truly understandable by humans, and the fact that such algorithms are making more and more decisions about our lives, and it becomes clear we ought to be paying a lot more attention to them.

Not that algorithms are a bad thing; in many cases, they’re better than humans.  And in any case, we can’t manage an increasingly complex world without them.  But we need to understand them better, and understand how, in some ways, they are coming to conclusions we never intended.

The NYT article sums it up very well:

The moral of this story is not to expect artificial intelligence to be fair or impartial or to have the faintest clue about what our goals are. Instead, we should expect AI merely to try its best to give us exactly what we ask for, and we should be very careful what we ask for.
