So much for the long tail (of search). At least according to a great graphic at SEO Book that digs into how adjustments to Google’s algorithm are killing off the days of misspelled and muddled search terms.
In essence, it says, Google’s effectiveness at finding what we want – or what it thinks we want – prevents us from stumbling down accidental, incorrect or quirky paths. And that’s bad news for sites built around getting that traffic. It’s less clear whether that’s good or bad for us consumers. It’s like having a really smart personal shopper thrust upon us who steers us away from serious fashion mistakes; it’s probably a net plus for us, but it certainly cuts down on the chances we’ll experiment with offbeat designs, for better or for worse.
But more importantly, it highlights the increasing role that algorithms play in our daily lives – and yet how little attention we pay to them.
But we should focus much more on them – as the graphic shows, even small changes can remake the way we see the world. They have a huge impact on the value of things we own; they can affect the services we’re offered. And we’re largely unaware of how they work and how prevalent they are in our lives. (Not to mention figuring out new ground rules on how they should be regulated.)
For a quick overview of the topic, you could do worse than watch this TED talk by entrepreneur Kevin Slavin on how algorithms shape our world; he frames it nicely in terms of how, as they embed themselves in our daily processes, they play an increasing role in creating the reality around us.
…I want to propose today that we rethink a little bit about the role of contemporary math — not just financial math, but math in general. That its transition from being something that we extract and derive from the world to something that actually starts to shape it — the world around us and the world inside us. And it’s specifically algorithms, which are basically the math that computers use to decide stuff. They acquire the sensibility of truth because they repeat over and over again, and they ossify and calcify, and they become real.
Algorithmic trading, for example, now accounts for more than 70% of equities trading in developed markets; human traders have less and less influence on prices of stocks and the value of companies. Of course, human programmers/developers have increasing influence on how stocks are valued. It’s just that they – and their work – aren’t as visible, or as regularly covered.
This isn’t to say that the age of algorithms is universally a bad thing. It isn’t. We get better services, more optimized traffic flow, increased energy efficiency and lots more from the smart crunching of big data. Amazon is pretty good at figuring out what we might like based on our shopping and browsing behavior, and few of us complain about that. In effect, it’s making predictions about us based on what people like us like; and if it screws up now and then, that’s OK.
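The "people like us" prediction described above can be sketched in a few lines. This is a hypothetical toy, not Amazon's actual system: it scores items owned by users whose purchase histories overlap with yours, which is the basic intuition behind item recommendation from aggregate behavior. All user names and items are invented for illustration.

```python
# Toy sketch of "people like you" recommendation: score the items
# owned by users with overlapping purchase histories.
# Hypothetical data -- not any real recommender's method.
from collections import defaultdict

# user -> set of items bought
purchases = {
    "alice": {"sci-fi novel", "telescope", "star atlas"},
    "bob":   {"sci-fi novel", "telescope", "drone"},
    "carol": {"cookbook", "stand mixer"},
}

def recommend(user, purchases):
    """Rank items this user doesn't own, weighted by taste overlap."""
    mine = purchases[user]
    scores = defaultdict(int)
    for other, theirs in purchases.items():
        if other == user:
            continue
        overlap = len(mine & theirs)   # shared purchases = similarity
        for item in theirs - mine:     # items the user doesn't have yet
            scores[item] += overlap
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice", purchases))  # "drone" ranks first: bob shares 2 items
```

The point the toy makes is the one in the paragraph: nothing here knows anything about Alice herself; it only knows what people who shop like her went on to buy. That's harmless for book suggestions, and far less so when the same logic sets a credit limit.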
But what happens when predictions like that are used to constrain our options? What if there’s data showing that people who, say, drink a lot and visit casinos in the wee hours of the morning turn out to be bad credit risks on average? (Which I’m guessing they are, but I confess I don’t have the data to back me up.) Should a bank be able to use that information to deny loans to someone with that profile? Where’s the line between building interesting correlations and smart predictions, and discrimination?
It isn’t like this is all that new a phenomenon – insurance companies have made a fine art of figuring out health and other risks for various populations for decades. But the explosion of data that’s now available on our behavior means that much, much more can – and almost certainly will – be done with it.
In a recent New York Times piece, law professor Lori Andrews drew an analogy to the old practice of companies “redlining” some inner-city districts where they wouldn’t provide services – essentially discriminating against huge groups of people – to a modern practice of “weblining.”
When an Atlanta man returned from his honeymoon, he found that his credit limit had been lowered to $3,800 from $10,800. The switch was not based on anything he had done but on aggregate data. A letter from the company told him, “Other customers who have used their card at establishments where you recently shopped have a poor repayment history with American Express.”
Even though laws allow people to challenge false information in credit reports, there are no laws that require data aggregators to reveal what they know about you.
But beyond learning what they know about us, we need to find out much more about how that data is being crunched; we need to know what the algorithms in our lives are.