I Don't Want To Be A Nerd!

The blog of Nicholas Paul Sheppard
Posts tagged as artificial intelligence

On science fiction and political economy

2015-07-28 by Nick S., tagged as artificial intelligence, prediction

Continuing the science fiction theme from my previous entry, I recalled an interview in which Iain Banks described "the Culture", the society in which most of his science fiction novels are set, as his utopia. One of the distinguishing features of the Culture is its population of artificial "Minds" that perform tasks ranging from waiting on its biological citizens to commanding mammoth spaceships to governing the whole society.

At first I wasn't convinced — I have no desire to add any extra arms, as does the protagonist of The Hydrogen Sonata (2012), for one — but, having considered some of the alternatives over my past few entries, I'm coming around to the idea. Banks' Minds are a pretty friendly, helpful and cooperative bunch, far from the totalitarian overlords featured in the Terminator movies and the incomprehensible tools of an in-the-know aristocracy imagined by Tyler Cowen and Frank Pasquale. The human(-like) characters don't need to work, but give purpose to their lives through elaborate hobbies like games of strategy (The Player of Games, 1988), absurd musical instruments (requiring those extra arms in The Hydrogen Sonata), and carrying out the alien missions that drive most of the novels' plots. (They also take plenty of time out from their hobbies for parties, sex and drugs.)

Of course Banks doesn't describe the economic or political mechanisms by which all this comes about. The same could be said of Star Trek, in which future humans are imagined to spend their time "improving themselves" rather than working for more material wealth.

Come to think of it, I can't recall science-fiction-inspired technology punditry like Project Hieroglyph or Brian David Johnson's "Science Fiction Prototyping" column in IEEE Computer saying much about economic or political mechanisms, either. Like most people, perhaps, their authors are primarily interested in how particular imagined technologies might impact society. This might be a fine thing to do, but the thoughts above lead me to wonder whether the world could also use some "political economy fiction" that explores something broader than adventures with a particular technology or scientific theory.

Perhaps any such fiction is destined to sound like an old-fashioned utopia, and "utopia" has become something of an insult, describing a narrow idealistic vision that suits the interests of its proposers, ignores the interests of everyone else, and is generally impractical. My differences with those I describe as "techno-utopians" in particular were a large part of my motivation in beginning this blog. Still, in the essay that inspired Project Hieroglyph, Neal Stephenson laments what he perceives as a failure to pursue big technological ideas like space travel and robots. But if pursuing space travel and robots is interesting and important, why not pursue big ideas about our political and economic institutions as well?

Some thoughts on the Butlerian Jihad

2015-07-21 by Nick S., tagged as artificial intelligence, employment

Continuing to think about automation and employment while constructing my last entry, I recalled the "Butlerian Jihad" that Frank Herbert imagines in the history of Dune (1965). In the far distant future in which the novel is set, the Jihad has resulted in a ban on machines that replicate human mental functions. This ban manifests itself in Dune in the form of human "mentats" trained to perform the computational work that we now associate with machines.

It's been some time since I read Dune, and I don't remember why the Butlerians went on their Jihad, or if Herbert gives a reason at all. But if they feared that thinking machines might make humans redundant, or at least spawn the monumental inequality envisaged by thinkers like Tyler Cowen, Erik Brynjolfsson and Andrew McAfee, could the Butlerians have a point? I imagine that orthodox economists and technologists, including those I've just mentioned, would simply dismiss the Butlerians as a form of Luddite. But why should we accept machines if they're not doing us any good?

Part of the problem with any such jihad, aside from the violence associated with it in the novels, is that what makes us human is not so clear-cut or obvious as traditionally presumed. Evolutionary biology argues that we are not so different from other animals, work in artificial intelligence is continually re-drawing the line between computation and what we think of as "intelligent", and neurologists are yet to identify a soul. The introduction of mentats illustrates the computational part of the difficulty: in ridding the galaxy of machines with human-like capabilities, the Butlerians introduced a need for humans with machine-like capabilities. Brynjolfsson and McAfee (I think) also make the point that it isn't just in mental powers that humans distinguish themselves from machines: humans remain better at tasks requiring fine manual dexterity, meaning that robots aren't yet ready to replace pickers and packers, masseurs, and all manner of skilled tradespeople. Any would-be Butlerians have some work to do in defining exactly what it is that they object to.

A second problem is that people differ in what they want to do themselves, and what they want automated. I enjoy making my own beer, for example, but plenty of other people are happy to buy it from a factory that can make it much more efficiently. On the other hand, I'm usually happy to have my camera choose its own settings for focus, shutter speed and the like, whereas I imagine a photography enthusiast might be appalled to leave such things to a machine. Should I smash breweries, or photographers smash my camera, to preserve the need for the skills that we like to exercise ourselves?

Of course I don't need to smash breweries in order to brew my own beer: I have a non-brewing-related income that leaves me with the time and resources to brew my own beer even if no one else will pay for it. This brings me back to a point I've already come to several times in thinking about automation and work: to what degree should our worth and satisfaction depend on paid employment at all? If machines allowed us to reduce the amount of work we do, freeing up more time and resources to do what we actually want to do, would we have any reason to fear the machines?

Do super-intelligent machines have a purpose and is it a good one?

2015-02-26 by Nick S., tagged as artificial intelligence

Over the past month, I happened to read a few books in which machine intelligence plays a big part: Nicholas Agar's Humanity's End (2010), Frank Pasquale's The Black Box Society (2015) and Tyler Cowen's Average is Over (2013).

Cowen is by far the most sanguine, if only because he takes a firmly amoral view that only an economist could love. He presents as inevitable a future of super-intelligent calculating machines tended to by a few elite humans able to work with them, while the remaining workforce finds itself of little value. Agar, on the other hand, doubts that augmenting humans beyond their natural abilities has any real benefits, and Pasquale fears that the secret algorithms behind search engines, computer trading and the like will stymie the public's understanding and control of the information that is presented to them.

While there are many small points on which I find Cowen's logic impenetrable, I did appreciate his characterisation of super-intelligent machines. Rather than have a human-like intelligence appear fully-formed at some choice moment as it does in so much science fiction, he sees machine intelligence emerging gradually and appearing alien and unintelligible to human intelligence. If it takes eighteen years for a human to become fully developed in the legal sense, why expect that a machine — especially the first one ever built, presumably the most primitive of its kind — could achieve the same immediately upon being switched on? And why expect a computer to behave like a human when it is an entirely different sort of construction?

Agar asks: if the behaviour of super-intelligent machines is incomprehensible to us, what interest would we have in anything they do? Cowen observes that few people are interested in watching computers play chess against each other, precisely because human watchers don't understand what the computer players are doing. Yet, if machine intelligence emerges gradually, at what point might we decide to stop because we're no longer interested?

Pasquale suggests a more sinister possibility. How do we know that secret or incomprehensible behaviour is in our best interests? I'm sure plenty of people would regard Cowen's world as dystopian without any further elaboration, and it's easy to think up even worse dystopias in which the elite (Google et al. in Pasquale's book) enrich themselves while keeping everyone else ignorant of the real state of affairs, or in which machines become trapped in an echo chamber processing only data created or influenced by themselves.

Cowen seems to be confident that his super-intelligent machines will be able to get good results even if we don't understand why, citing examples like the ability to win chess games and match successful romantic partners without any human being able to understand how the machines made their decisions. For problems with narrow and well-defined goals — like winning games and, at least to a crude approximation, marriage — it's easy to verify that a solution is correct even if we don't know how it was arrived at. But computers are already superb at narrow and well-defined goals, and no one would suggest that we allow them to rule over us, because such goals are only crude approximations of what we actually want.

Pasquale's solution is to expose the algorithms to scrutiny. Perhaps no human could follow the detailed execution of an algorithm, because a human cannot keep track of so many variables as quickly as a computer can. But we must understand the algorithms on some level in order to build them in the first place, and to judge whether or not they are good algorithms. And if we can't judge whether or not the algorithms are good, what is our purpose in creating them?

Why talk to a computer?

2014-08-05 by Nick S., tagged as artificial intelligence, user interfaces

I recently caught the movie Her (2013), whose story of a man falling in love with an "operating system" (actually what is more commonly called an "artificial intelligence") seemed like it should provide plenty of material for commentators on humans' relationships with technology. But apart from the prevalence of pastel shirts and bad moustaches in this imagined future, I was most forcefully struck by the constant use of voice interfaces. The main character makes his living by dictating letters to a computer that prints them out in faux handwriting, and, once his new operating system is installed, constantly chats away to it without any apparent regard for what might be overheard by the people around him. Nor do the people around him pay him any regard.

I've long suspected that talking computers are part of The Amazing Science Fiction Future That Never Arrived. Not because they don't work — though my limited personal contact with them suggests that voice recognition is still not particularly good — but because they aren't nearly as useful as many a science fiction writer has supposed them to be. Is it really so hard for an able-bodied person to push a button or touch an icon on a screen? Can't writers, well, write? And would any real writer (or anyone else needing to concentrate) want to work in an office where everyone was babbling at their computers all day?

A week after seeing Her, I happened to read a quote from one John R. Pierce in the August 2014 edition of IEEE Spectrum: "Many early computer enthusiasts thought that computers should resemble human beings and be good at exactly the tasks that human beings are good at" (p. 8). He goes on to describe the pursuit of human-like computers as "facing the future with one's back squarely towards it", that is, looking at the past and assuming that the future will be a technologised version of the same.

I take Pierce to be making a point similar to one I've already discussed a couple of times in this blog: what use would a human have for a computer that did something that he or she is already good at? Computers are so useful precisely because they're good at things at which humans are not — most fundamentally, the rapid and reliable carrying out of minute instructions.

When I was (much) younger, I think I supposed that we'd one day be able to program our computers using English instead of the difficult-to-learn formal languages that we use now. Or at least I assumed that everyone else was pining for that day, as evidenced by depictions like Her. But greater experience tells me that the reason we don't use English to program computers isn't that they can't understand it (though they can't); it's that English isn't actually a particularly good tool for describing data or issuing instructions. That's why lawyers and philosophers spend so much time debating the precise interpretation of observations and phrases, and why scientists and others resort to mathematics when they want their meaning to be indisputable.

I'm not sure where the idea that computers should or would be like humans came from. They neither look nor act anything like humans, and I'm pretty sure that most psychologists would laugh at the idea that humans behave like neat information-processing machines. And humans have plenty of trouble talking to each other — Her illustrates this itself — so why expect talking to a computer to be any better?

On being replaced by technology

2014-06-08 by Nick S., tagged as artificial intelligence, employment, prediction

By way of celebrating fifty years of IEEE Spectrum, the June 2014 issue investigates some technological trends that it hopes will bring us "the future we deserve". Tekla S. Perry (pp. 40-45) describes a part of this future in which computer-generated humans become indistinguishable from actors captured on film. Explaining why we need to create fake humans when we already have seven thousand million real ones — and plenty of them out-of-work actors to boot — takes some doing. Perry makes some interesting points in this direction, but I nonetheless winced on behalf of all of those already-underemployed actors who might be wondering if Perry's future leaves them with anything to do.

Fears that we'll all be put out of work by automation go back a long way. Contemptuous dismissals of such fears, and attendant references to Luddism, probably go back nearly as far. The really interesting thing about replacing the work of actors (if it were to happen) is that we'd be replacing something that people actually enjoy doing, not just some tedious chore that they do for the money. An anti-Luddite might assure me, for example, that the growing economy would find me a new job if university teaching were to be replaced by technology — but would I find the new job as inspiring as the old one?

One solution for those who enjoy now-automated tasks is to simply continue to do them as a hobby, just as I and other mediaevalists hand-make costumes, beer, embroidery, and other things even though machines can make the same with much less effort. But that does seem to doom us to spending the best eight hours of every day in uninspiring work done just for the money, fitting our passions into our spare time.

By coincidence, The Drum had Alan Kohler take on automation and unemployment in the same week that I read Spectrum. According to Kohler, "automation is suppressing employment, wages and inflation and will do so for a decade or more to come", giving headaches to central bankers attempting to set policies that increase employment while controlling inflation. This is all great for the owners of said machinery, though, who can obtain all of the revenue from their output without having to pay any workers.

Kohler's argument is too sketchy, and my knowledge of economics too weak, for me to say much about his claim. But the potential for automation to create inequality is also a recurring theme in Spectrum's examination of the possible downsides of its futures: those who control technology can use that power to create even more technology and gain even more power, while the rest languish in technological powerlessness.

The threat in Kohler's and Spectrum's dystopias isn't that automation will one day throw masses of people out of work, as the archetypal Luddites might have feared. It's that automation will slowly transfer dignity and power from the broad mass of people to an elite few who control the system. I doubt that many people miss the drudgery faced by mediaeval peasants, whose labour has now been largely replaced by machinery in developed nations. But will we be so glad to give up the passion, autonomy and self-respect that inspire artistic and professional lifestyles?