How can you regulate what you can’t see?
How can authorities ensure a fair playing field in a digital age if they can’t be sure what the field looks like? Or if the field looks different to every player? What happens when the ideals of net neutrality meet personalization and the filter bubble? How do you regulate an algorithm – and should you, and can you?
The thought came to me during a presentation by Eli Noam, a smart and articulate business professor at Columbia University, about media ownership concentration globally. It was a great, data-driven analysis of how concentrated ownership of various types of media assets – newspapers, TV stations, ISPs, cable companies, etc – had ebbed and flowed over the decades. It was striking how ownership patterns seemed to converge, perhaps reflecting the increasingly dominant economic imperatives of the digital age. But what was even more striking – but not surprising – was the huge dominance of a single player (Google) in search.
Which led me to wonder about how public policy deals with questions about fair play when a single company embeds algorithms and personalization into its core product.
You can certainly make a case that monopolies shouldn’t be allowed to push people towards favored products – so the only grocery in the state may need to be forced to stock a wide range of cereals, say, and not just the house brand. Or that the dominant cable company should allow equal access to all sites, not just the ones that pay it more.
But what if the whole point of the monopoly is to serve you a personalized service – as Google aims to do? How could you tell if it was or wasn’t steering you to a preferred site?
Try this with a friend, as I did once: Type the same search term into your browser and theirs. You’ll almost certainly get different results, based on your past search history.
Is that a good thing? In many ways it is – since you get results that are more tailored to what you looked for in the past, and are hence more likely to be useful to you. But how – without unpicking and reverse-engineering the search algorithm – would you ever know if you got the “fair” result? How can governments regulate a proprietary algorithm – and how would they know if it was being abused?
There’s a fascinating discussion over at codinghorror about the practice of “hellbanning,” or building, in effect, a separate reality for people who engage in bad behavior in forums. Hellbanning consists of making a banned person invisible to the rest of the community – but not to himself. In other words, it looks to him like he’s taking part in the life of the community, posting comments as usual. But no one else can see him. And so after a while, when he gets no responses, he melts away. Or that’s the theory. (Shades of Bruce Willis in The Sixth Sense, who thinks he’s talking to people but doesn’t realize he’s dead. Sorry. Spoiler alert.)
There are other ways discussed there to make life difficult for difficult people, such as introducing random error messages or slowing the site’s performance – just for him. But the broader point is that these are designed not to let him know that he’s being targeted for special treatment. As far as he’s concerned, the site is just buggy.
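The mechanics are simple enough to sketch. The following is a minimal, hypothetical illustration – not code from any real forum – of the two ideas above: a hellbanned user’s posts are shown only to himself, and his experience alone is randomly degraded. All names here (`Forum`, `hellbanned`, `should_degrade`) are invented for illustration.

```python
import random

class Forum:
    """Toy forum illustrating a 'hellban' (shadow ban)."""

    def __init__(self):
        self.posts = []          # list of (author, text) tuples
        self.hellbanned = set()  # users who are shadow-banned

    def post(self, author, text):
        # The post is accepted normally, so the banned user gets no
        # signal that anything is wrong.
        self.posts.append((author, text))

    def visible_posts(self, viewer):
        # A hellbanned author's posts are visible only to that author;
        # everyone else sees a feed with him silently removed.
        return [
            (author, text)
            for author, text in self.posts
            if author not in self.hellbanned or author == viewer
        ]

    def should_degrade(self, viewer, error_rate=0.3):
        # The 'slow-ban'/'error-ban' variant: randomly serve errors or
        # sluggish pages, but only to the banned user.
        return viewer in self.hellbanned and random.random() < error_rate


forum = Forum()
forum.hellbanned.add("troll")
forum.post("alice", "Hello!")
forum.post("troll", "Me me me")

print(forum.visible_posts("alice"))  # [('alice', 'Hello!')] -- troll hidden
print(forum.visible_posts("troll"))  # both posts -- he sees himself as usual
```

The key design point is that every code path returns a plausible, normal-looking result to the banned user; the “separate reality” is built entirely out of filtering on the read side.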
So how will we know if we’re being targeted for special treatment? What benchmarks can we compare ourselves to when everyone’s experience is supposed to be personal? There are lots of reasons already to worry about how all of us are constructing our own personalized realities in the digital world, but another one is the question of how monopolies and monopoly power can be regulated in such a world.
Can we? Should we? How much does it matter?