Just a riff on a recent NYT magazine piece about the debate around “autonomous weapons,” or machines that can decide whom to kill and when. Spoiler alert: There’s no consensus about them. Actually, not even close to one. Which is probably a good thing.
That said, it’s a good entryway to revisit the notion that we as an industry/profession could be doing a better job covering the many algorithms that now govern our lives, even if they aren’t literally designed to kill us.
Algorithms influence what news and information we see, how financial markets behave, where police put their resources, whether we can get loans and at what price, and much more. And beyond that, they have the power – as do other automations – to reshape how we build and structure our world, not just to replace humans.
As the NYT piece notes about the debate over autonomous weapons:
This argument parallels the controversial case for self-driving cars. In both instances, sensor-rich machines navigate a complex environment without the fatigue, distractions and other human fallibilities that can lead to fatal mistakes. Yet both arguments discount the emergent behaviors that can come from increasingly intelligent machines interpreting the world differently from humans.
Precisely. Autonomous machines can and will go beyond replacing humans, potentially changing our world in fundamental ways – not necessarily for better or worse…