How moral should a machine be? How moral can a machine be?
Maybe that conjures up science-fiction images of HAL 9000, Cylons, Skynet and even lovable Robby the Robot.
But what it should really do is make us question the assumptions, values and directions embedded in the multiple algorithms that run our lives – not to mention how, increasingly, we won’t really know what assumptions, values and directions are in them.
Consider this scenario, which NYU cognitive science professor Gary Marcus posited in a 2012 New Yorker article:
Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk?
Now, I’m sure we all have different answers to this question, perhaps related to how bratty the 40 kids are. But what if it’s the algorithm that powers a self-driving car that has to make that split-second decision? What are the factors that should be programmed into that piece of software?
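To make the question concrete: any such split-second decision ultimately gets encoded as something like a cost comparison, with the weights chosen by whoever wrote the software. Here is a deliberately crude, purely hypothetical sketch – every name, weight and number is an assumption for illustration, not anyone's real system:

```python
# Hypothetical sketch of how "factors" become code.
# Every weight here is an editorial choice someone had to make.

OCCUPANT_WEIGHT = 1.0      # how much the car values its own passenger
PEDESTRIAN_WEIGHT = 1.0    # how much it values people outside the car

def choose_maneuver(swerve_risk_to_occupant, straight_risk_per_kid, kids_on_bus):
    """Return 'swerve' or 'continue' by comparing expected harm.

    Each risk argument is a probability of fatality (0.0 to 1.0).
    """
    expected_harm_if_swerve = swerve_risk_to_occupant * OCCUPANT_WEIGHT
    expected_harm_if_straight = straight_risk_per_kid * kids_on_bus * PEDESTRIAN_WEIGHT
    return "swerve" if expected_harm_if_swerve < expected_harm_if_straight else "continue"

# e.g. choose_maneuver(0.3, 0.05, 40) -> "swerve" under these (arbitrary) weights
```

The point isn't that any carmaker would write it this simply; it's that somewhere in the stack, somebody's values end up as numbers.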
Welcome to the world of machine ethics, as Prof. Marcus frames it.
Or at least to the need to debate and discuss how morality might be incorporated into machines. Those issues cropped up again in this weekend’s New York Times Magazine, which raised similar questions about machine-delivered medical care and semi-autonomous weapons systems. If machines can quasi-independently choose which targets to attack – as they already can – how should they be programmed to make that selection?
Computer scientists are teaming up with philosophers, psychologists, linguists, lawyers, theologians and human rights experts to identify the set of decision points that robots would need to work through in order to emulate our own thinking about right and wrong.
The NYT article even discusses the concept of programming “guilt” into robots, to help them learn to adapt their actions based on how those actions turn out.
Ronald Arkin, a roboticist at Georgia Tech, has received grants from the military to study how to equip robots with a set of moral rules. “My main goal is to reduce the number of noncombatant casualties in warfare,” he says. His lab developed what he calls an “ethical adapter” that helps the robot emulate guilt. It’s set in motion when the program detects a difference between how much destruction is expected when using a particular weapon and how much actually occurs. If the difference is too great, the robot’s guilt level reaches a certain threshold, and it stops using the weapon.
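Stripped to its essentials, the “ethical adapter” described above is a simple feedback loop: compare expected damage to observed damage, accumulate “guilt” when the outcome overshoots, and lock out the weapon once a threshold is crossed. Here is a minimal, purely illustrative sketch of that idea – the names, scale and threshold are my assumptions, not Arkin’s actual implementation:

```python
class EthicalAdapter:
    """Illustrative guilt-style feedback loop (not Arkin's actual code).

    Guilt accumulates when observed destruction exceeds what was expected
    for a weapon; past a threshold, that weapon is no longer authorized.
    """

    def __init__(self, guilt_threshold=1.0):
        self.guilt = 0.0                  # accumulated "guilt" level
        self.guilt_threshold = guilt_threshold
        self.disabled_weapons = set()     # weapons the adapter has locked out

    def record_engagement(self, weapon, expected_damage, actual_damage):
        # Guilt only grows when the outcome was worse than predicted.
        overshoot = max(0.0, actual_damage - expected_damage)
        self.guilt += overshoot
        if self.guilt >= self.guilt_threshold:
            self.disabled_weapons.add(weapon)

    def weapon_authorized(self, weapon):
        return weapon not in self.disabled_weapons
```

Even in this toy version, the hard questions are obvious: who sets the threshold, and who decides how “expected” damage gets estimated in the first place?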
But a lot of this discussion is based on the not-unreasonable image of a physical robot interacting with us in the “real world.” And certainly the field of robotics has advanced tremendously in the past decade, to the point where self-driving cars, dismissed just 10 or so years ago, are now very close to reality. So it’s pretty likely physical robots will be a much bigger part of our lives soon, and we will need to figure out how they’ll respond to the many ethical decisions their human predecessors have had to deal with – not always well or consistently, granted, but at least with codes, laws and norms to guide them.
Yet that understates the pervasiveness of the non-physical robots in our lives – the algorithms that govern so much of what we already do, that not only often operate below the level of our consciousness, but are sometimes beyond our understanding.
How much are we looking at the moral issues embedded in how predictive policing systems work, for example? How well are regulators coping with the kinds of machine-driven algorithmic trading that now dominate equities markets, and what should their frame of reference for an efficient market be? What are the assumptions built into models that quote different prices to different customers, and are we OK with that? How well do the analytics that determine whether you’re a good candidate for a loan – and what interest rate you’re charged – work, and who programmed them?
More importantly – what if no one programmed them?
As machine learning progresses, and systems increasingly learn from experience, we’re fast approaching the point where we can’t say with any certainty what assumptions – or theories – drive a given algorithm, beyond the fact that it produces a desired output. But moral questions are often as much about the intent of the actor as about the act itself – that’s why self-defence/being in fear for your life (or “stand your ground” laws, to be more controversial about it) can be justifiable grounds for killing someone.
Certainly it’s an area that journalism could do more to delve into.
Still, it’s also important to have a sense of perspective. We need to know more about how algorithms make what would be otherwise human moral choices. But we should also remember that humans are, well, only human. As the Economist notes in a 2012 article:
One way of dealing with these difficult questions is to avoid them altogether, by banning autonomous battlefield robots and requiring cars to have the full attention of a human driver at all times …
But autonomous robots could do much more good than harm. Robot soldiers would not commit rape, burn down a village in anger or become erratic decision-makers amid the stress of combat. Driverless cars are very likely to be safer than ordinary vehicles, as autopilots have made planes safer. Sebastian Thrun, a pioneer in the field, reckons driverless cars could save 1m lives a year.
In other words, machines may make bad moral choices. But so do humans. It’s understanding how decisions come to be made, and what accountability there is for bad ones, that really matters.