I Don't Want To Be A Nerd!

The blog of Nicholas Paul Sheppard

Do super-intelligent machines have a purpose, and is it a good one?

2015-02-26 by Nick S., tagged as artificial intelligence

Over the past month, I happened to read a few books in which machine intelligence plays a big part: Nicholas Agar's Humanity's End (2010), Frank Pasquale's The Black Box Society (2015) and Tyler Cowen's Average is Over (2013).

Cowen is by far the most sanguine, if only because he takes a firmly amoral view that only an economist could love. He presents as inevitable a future of super-intelligent calculating machines tended by a few elite humans able to work with them, while the remaining workforce finds itself of little value. Agar, on the other hand, doubts that augmenting humans beyond their natural abilities has any real benefits, and Pasquale fears that the secret algorithms behind search engines, computer trading and the like will stymie the public's understanding and control of the information that is presented to them.

While there are many small points on which I find Cowen's logic impenetrable, I did appreciate his characterisation of super-intelligent machines. Rather than having a human-like intelligence appear fully formed at some choice moment, as it does in so much science fiction, he sees machine intelligence emerging gradually, appearing alien and unintelligible to humans. If it takes eighteen years for a human to become fully developed in the legal sense, why expect that a machine (especially the first one ever built, presumably the most primitive of its kind) could achieve the same immediately upon being switched on? And why expect a computer to behave like a human when it is an entirely different sort of construction?

Agar raises the question: if the behaviour of super-intelligent machines is incomprehensible to us, what interest would we have in anything they do? Cowen observes that few people are interested in watching computers play chess against each other, precisely because human watchers don't understand what the computer players are doing. Yet, if machine intelligence emerges gradually, at what point might we decide to stop because we're no longer interested?

Pasquale suggests a more sinister possibility. How do we know that secret or incomprehensible behaviour is in our best interests? I'm sure plenty of people would regard Cowen's world as dystopian without any further elaboration, and it's easy to think up even worse dystopias in which the elite (Google et al. in Pasquale's book) enrich themselves while keeping everyone else ignorant of the real state of affairs, or in which machines become trapped in an echo chamber processing only data created or influenced by themselves.

Cowen seems confident that his super-intelligent machines will be able to get good results even if we don't understand why, citing examples like the ability to win chess games and match successful romantic partners without any human being able to understand how the decisions were made. For problems with narrow and well-defined goals (like winning games and, at least to a crude approximation, marriage), it's easy to verify that a solution is correct even if we don't know how the solution was arrived at. But computers are already superb at narrow and well-defined goals, and no one would suggest that we allow them to rule over us on that basis, because such goals are only crude approximations of what we actually want.
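To make the verification point concrete, here's a minimal sketch in Python (my own illustration, not an example from any of these books): a dozen lines suffice to confirm that a completed Sudoku grid is a correct solution, regardless of whether it was produced by a human, a brute-force search or an inscrutable learning system.

    def is_valid_sudoku_solution(grid):
        """Return True if grid (a 9x9 list of lists of ints) is a
        complete, valid Sudoku solution. Checking needs no knowledge
        of how the solver arrived at the answer."""
        digits = set(range(1, 10))
        # Every row, column and 3x3 box must contain the digits 1-9 exactly once.
        rows = [set(row) for row in grid]
        cols = [set(grid[r][c] for r in range(9)) for c in range(9)]
        boxes = [set(grid[r][c]
                     for r in range(br, br + 3)
                     for c in range(bc, bc + 3))
                 for br in (0, 3, 6) for bc in (0, 3, 6)]
        return all(unit == digits for unit in rows + cols + boxes)

Finding a solution is a far harder search problem than checking one, which is precisely why we can trust the answer to a well-defined problem without trusting, or understanding, the solver.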

Pasquale's solution is to expose the algorithms to scrutiny. Perhaps no human could follow the detailed execution of an algorithm, because a human cannot keep track of so many variables as quickly as a computer can. But we must understand the algorithms on some level in order to build them in the first place, and to judge whether or not they are good algorithms. And if we can't judge whether or not the algorithms are good, what is our purpose in creating them?