I Don't Want To Be A Nerd!

The blog of Nicholas Paul Sheppard
Archive for July 2015

On science fiction and political economy

2015-07-28 by Nick S., tagged as artificial intelligence, prediction

Continuing the science fiction theme from my previous entry, I recalled an interview in which Iain Banks described "the Culture", the society in which most of his science fiction novels are set, as his utopia. One of the distinguishing features of the Culture is its population of artificial "Minds" that perform tasks ranging from waiting on the biological citizens of the Culture, to commanding mammoth spaceships, to governing the whole society.

At first I wasn't convinced — for one, I have no desire to add any extra arms, as does the protagonist of The Hydrogen Sonata (2012) — but, having considered some of the alternatives over my past few entries, I'm coming around to the idea. Banks' Minds are a pretty friendly, helpful and cooperative bunch, far from the totalitarian overlords featured in the Terminator movies and the incomprehensible tools of an in-the-know aristocracy imagined by Tyler Cowen and Frank Pasquale. The human(-like) characters don't need to work, but give purpose to their lives through elaborate hobbies like games of strategy (The Player of Games, 1988), absurd musical instruments (requiring those extra arms in The Hydrogen Sonata), and carrying out the alien missions that drive most of the novels' plots. (They also take plenty of time out from their hobbies for parties, sex and drugs.)

Of course Banks doesn't describe the economic or political mechanisms by which all this comes about. The same could be said of Star Trek, in which future humans are imagined to spend their time "improving themselves" rather than working for more material wealth.

Come to think of it, I can't recall science-fiction-inspired technology punditry, like Project Hieroglyph or Brian David Johnson's "Science Fiction Prototyping" column in IEEE Computer, saying much about economic or political mechanisms, either. Like most people, perhaps, the pundits are primarily interested in how particular imagined technologies might impact society. This might be a fine thing to do, but the thoughts above lead me to wonder whether the world could also use some "political economy fiction" that explores something broader than adventures with a particular technology or scientific theory.

Perhaps any such fiction is destined to sound like an old-fashioned utopia, and "utopia" has become something of an insult, describing a narrow idealistic vision that suits the interests of its proposers while ignoring everyone else's and being generally impractical. My differences with those I describe as "techno-utopians" in particular were a large part of my motivation in beginning this blog. Still, in the essay that inspired Project Hieroglyph, Neal Stephenson laments what he perceives as a failure to pursue big technological ideas like space travel and robots. But if space travel and robots are worth pursuing, why not big ideas about our political and economic institutions as well?

Some thoughts on the Butlerian Jihad

2015-07-21 by Nick S., tagged as artificial intelligence, employment

Continuing to think about automation and employment while constructing my last entry, I recalled the "Butlerian Jihad" that Frank Herbert imagines in the history of Dune (1965). In the far distant future in which the novel is set, the Jihad has resulted in a ban on machines that replicate human mental functions. This ban manifests itself in Dune in the form of human "mentats" trained to perform the computational work that we now associate with machines.

It's been some time since I read Dune, and I don't remember why the Butlerians went on their Jihad, or whether Herbert gives a reason at all. But if they feared that thinking machines might make humans redundant, or at least spawn the monumental inequality envisaged by thinkers like Tyler Cowen, Erik Brynjolfsson and Andrew McAfee, could the Butlerians have a point? I imagine that orthodox economists and technologists, including those I've just mentioned, would simply dismiss the Butlerians as Luddites. But why should we accept machines if they're not doing us any good?

Part of the problem with any such jihad, aside from the violence associated with it in the novels, is that what makes us human is not as clear-cut or obvious as traditionally presumed. Evolutionary biology argues that we are not so different from other animals, work in artificial intelligence is continually re-drawing the line between computation and what we think of as "intelligence", and neurologists are yet to identify a soul. The introduction of mentats illustrates the computational part of the difficulty: in ridding the galaxy of machines with human-like capabilities, the Butlerians introduced a need for humans with machine-like capabilities. Brynjolfsson and McAfee (I think) also make the point that it isn't just in mental powers that humans distinguish themselves from machines: humans remain better at tasks requiring fine manual dexterity, meaning that robots aren't yet ready to replace pickers and packers, masseurs, and all manner of skilled tradespeople. Any would-be Butlerians have some work to do in defining exactly what it is that they object to.

A second problem is that people differ in what they want to do themselves and what they want automated. I enjoy making my own beer, for example, but plenty of other people are happy to buy it from a factory that can make it much more efficiently. On the other hand, I'm usually happy to have my camera choose its own settings for focus, shutter speed and the like, whereas I imagine a photography enthusiast might be appalled to leave such things to a machine. Should I smash breweries, or photographers smash my camera, to preserve the need for the skills that we like to exercise ourselves?

Of course I don't need to smash breweries in order to brew my own beer: I have a non-brewing-related income that leaves me with the time and resources to brew my own beer even if no one else will pay for it. This brings me back to a point I've already come to several times in thinking about automation and work: to what degree should our worth and satisfaction depend on paid employment at all? If machines allowed us to reduce the amount of work we do, freeing up more time and resources to do what we actually want to do, would we have any reason to fear the machines?

How can engineers approach a race against the machine?

2015-07-19 by Nick S., tagged as dependence, philosophy

Not long after struggling with how to approach work and automation last month, I happened to pick up Nicholas Carr's The Glass Cage (2014) and Erik Brynjolfsson and Andrew McAfee's Race Against the Machine (2011), which cover some of the same territory. My perspective is similar to Carr's in that we both acknowledge that machinery has brought us many benefits — I even make my living from building more machines and teaching other people to do the same — but remain wary nonetheless of the uncritical adoption of machines that at first seem like handy helpers, but ultimately prove to be inadequate replacements for human skills and/or straitjackets from which we cannot extricate ourselves.

So what should we be automating, what should we be leaving alone, and how do I reconcile my profession with the possibility that the machines I build will transfer wealth and dignity from the people who used to do the work to the owners of the machines? As Brynjolfsson and McAfee note, the orthodox economic view is that new jobs have appeared to replace the automated ones — and we've done pretty well by this in the long run — but there's no known principle of economics that assures us that this will continue forever.

The first principle that occurred to me was to recommend that we adopt machines only when they enable us to do things that could not have been done without them: new technologies must be more than faster ways of performing existing work. This also fits with my doubts about the pursuit of fast and easy as a path to satisfaction.

This principle has at least one flaw, obvious to anyone familiar with arguments in favour of economic growth: automating a specific task that could be done by a human may free that human to do something that he or she couldn't do before for lack of time, energy or resources. The orthodox view I mentioned earlier depends on exactly this kind of process. For this reason, I don't think the principle could be sensibly applied on a task-by-task basis.

Nonetheless, the principle gets at what we surely want from machines in general: why bother with them if they simply leave us doing the same things as before (even if we can do them faster)? What's more, Carr points out that being "freed up" isn't much consolation if it means being unemployed and without access to resources that might enable the victim to make use of their notional freedom.

That I can't apply the principle on a task-by-task basis, however, makes pursuing it very difficult: I have no way of determining the worth of any particular engineering project in light of it. (Not that I often get to make such determinations: my need to pay my bills means that what I do is dictated as much by what other people are willing to pay for as by my private views of what would make the world a better place.) Perhaps the principle isn't hopeless, but it requires a better formulation than I'm able to come up with at the moment.