Should machines try to stamp out discrimination – or should they just work efficiently? And how should journalists go about tracking the answer?
The question comes up as we increasingly turn to algorithms and digital platforms to manage many of the things we used to do offline. Offline, there are any number of laws and practices that regulate behavior. Online – well, that’s a whole new landscape. And one we ought to cover a lot more.
There was an interesting NYT piece from a little while back about start-ups that are trying to use data analysis to make lending decisions – not so much by looking into people's credit history as by throwing together thousands of seemingly unrelated pieces of information to predict borrowing (and repayment) behavior.
No single signal is definitive, but each is a piece in a mosaic, a predictive picture, compiled by collecting an array of information from diverse sources, including household buying habits, bill-paying records and social network connections. It amounts to a digital-age spin on the most basic principle of banking: Know your customer.
Does it make sense that people who capitalize properly are better credit risks than people who don't? Well, the people who make the software not only don't care, they'd rather not even try to understand it.
“It is important to maintain the discipline of not trying to explain too much,” said Max Levchin, chief executive of Affirm (one of the companies profiled). Adding human assumptions, he noted, could introduce bias into the data analysis.
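To make that "mosaic" idea concrete, here's a minimal sketch of how a lender might pool weak, seemingly unrelated signals into a single repayment score. To be clear, this is not Affirm's actual system – that's proprietary – and every feature, weight and data point below is invented for illustration.

```python
# Toy "mosaic" credit model: many weak signals, one score.
# All features, data and weights are fabricated for illustration;
# this is not any real lender's model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Invented signals of the kind the NYT piece describes - none decisive alone.
X = np.column_stack([
    rng.integers(0, 2, n),    # capitalizes_properly (the odd signal above)
    rng.poisson(3, n),        # on_time_bill_payments_last_6mo
    rng.normal(200, 80, n),   # avg_monthly_household_spend
    rng.integers(0, 500, n),  # social_network_connections
])

# Synthetic "repaid the loan" labels, loosely correlated with the signals.
logits = 0.3 * X[:, 0] + 0.4 * X[:, 1] + 0.002 * X[:, 2] - 2.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Each applicant gets a repayment probability blended from all signals at once.
print("predicted repayment probability:", model.predict_proba(X_test[:1])[0, 1])
```

No single column decides anything; the output is a blend of everything, which is exactly what makes it hard to say afterward why any one applicant was turned down.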
True, it may be great that machine-learning systems crunching lots of data can find correlations that extend credit to more people than the traditional banking system does. But what if it turns out that they're denying credit to certain groups – not intentionally, but simply because that's the way the data correlates?
The danger is that with so much data and so much complexity, an automated system is in control. The software could end up discriminating against certain racial or ethnic groups without being programmed to do so.
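How would you even check? One rough test, borrowed from US employment law, is the "four-fifths rule": if one group's approval rate falls below 80 percent of another's, that's a red flag for disparate impact. Here's a sketch of that check – the groups and approval decisions are fabricated for the example:

```python
# Disparate-impact check using the "four-fifths rule" heuristic from
# US employment law. All decisions here are fabricated.
import numpy as np

rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=10_000, p=[0.7, 0.3])

# Pretend these are the lending model's approve/deny decisions.
approved = np.where(group == "A",
                    rng.random(10_000) < 0.60,
                    rng.random(10_000) < 0.42)

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
ratio = rate_b / rate_a

print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths threshold
    print("red flag: group B is approved at under 80% of group A's rate")
```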
But if you wanted to program in less discrimination, how would you do it – and how much less would you want, if it wound up being less effective at channeling money to people who need it and aren’t being served by the existing banking system?
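There's no single answer, but a toy version of the tradeoff looks like this: force both groups to be approved at the same rate by shifting group-specific score thresholds, then see what it costs in repayment among those approved. The scores, groups and thresholds below are entirely synthetic, and rate-equalizing is just one of several interventions researchers have proposed:

```python
# Toy fairness/effectiveness tradeoff: equalize approval rates across two
# groups by moving decision thresholds, then measure the repayment cost.
# Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
group = rng.choice(["A", "B"], size=n)

# Suppose the model scores group B lower on average via correlated proxies,
# with no explicit rule about group membership anywhere.
score = rng.normal(0.55, 0.15, n) - 0.08 * (group == "B")
repaid = rng.random(n) < score.clip(0, 1)  # ground truth, tied to the score

# Policy 1: one threshold for everyone.
approve_single = score > 0.5

# Policy 2: pick group B's threshold so both groups are approved at A's rate.
rate_a = (score[group == "A"] > 0.5).mean()
thr_b = np.quantile(score[group == "B"], 1 - rate_a)
approve_equal = np.where(group == "A", score > 0.5, score > thr_b)

for name, approve in [("single threshold", approve_single),
                      ("equalized rates", approve_equal)]:
    print(f"{name}: approval A={approve[group == 'A'].mean():.2f} "
          f"B={approve[group == 'B'].mean():.2f} "
          f"repayment among approved={repaid[approve].mean():.3f}")
```

In this toy setup the equalized policy approves more of group B, and repayment among approved borrowers dips a little – which is the question in miniature: how much of that dip is acceptable, and who decides?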
Or what if it isn’t the machine, but just a lot of people who, each acting on their own, wind up exhibiting discriminatory behavior en masse?
Consider this much-cited paper by Harvard Business School professors Benjamin Edelman and Michael Luca, who showed that non-Black hosts in New York City manage to charge, on average, 12 percent more than Black hosts on Airbnb, after correcting for location, ratings, quality and so on. In that case it wasn't so much an algorithm determining prices as thousands of individual hosts and guests, each making their own choices.
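For what it's worth, the headline number in a study like that comes from a standard hedonic regression: regress the log of the listing price on a host-race indicator plus controls, and read the gap off the indicator's coefficient (about -0.12 corresponds to roughly 12 percent lower prices). Here's a sketch on fabricated data – the paper's actual specification, controls and dataset differ:

```python
# Sketch of the kind of regression behind the Airbnb finding:
# log(price) on a host-race indicator plus controls. All data is
# fabricated; the paper's actual specification differs.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 3_000
black_host = rng.integers(0, 2, n)
rating = rng.uniform(3.0, 5.0, n)
bedrooms = rng.integers(1, 4, n)

# Fake log prices with a -0.12 "penalty" baked in, so the regression
# has something to recover.
log_price = (4.0 + 0.10 * rating + 0.30 * bedrooms
             - 0.12 * black_host + rng.normal(0, 0.2, n))

X = sm.add_constant(np.column_stack([black_host, rating, bedrooms]))
fit = sm.OLS(log_price, X).fit()

# The coefficient on black_host estimates the percentage price gap,
# holding the controls fixed.
print(dict(zip(["const", "black_host", "rating", "bedrooms"],
               fit.params.round(3))))
```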