Not long after struggling with how to approach work and automation last month, I happened to pick up Nicholas Carr's The Glass Cage (2014) and Erik Brynjolfsson and Andrew McAfee's Race Against the Machine (2011), which cover some of the same territory. My perspective is similar to Carr's in that we both acknowledge that machinery has brought us many benefits — I even make my living from building more machines and teaching other people to do the same — but remain nonetheless wary about uncritical adoption of machines that at first seem handy helpers, but ultimately prove to be inadequate replacements for human skills and/or straitjackets from which we cannot extricate ourselves.
So what should we be automating, what should we be leaving alone, and how do I reconcile my profession with the possibility that the machines I build will transfer wealth and dignity from the people who used to do the work, to the owners of the machines? As Brynjolfsson and McAfee note, the orthodox economic view is that new jobs have appeared to replace the automated ones — and we've done pretty well by this in the long run — but there's no known principle of economics that assures us that this will proceed always and forever.
The first principle that occurred to me was to recommend that we adopt machines only when they enable us to do things that could not have been done without them: new technologies must be more than faster ways of performing existing work. This also fits with my doubts about the pursuit of fast and easy as a path to satisfaction.
This principle has at least one flaw, obvious to anyone familiar with arguments in favour of economic growth: automating a specific task that could be done by a human may free that human to do something that he or she couldn't do before for lack of time, energy or resources. The orthodox view I mentioned earlier depends on exactly this kind of process. For this reason, I don't think the principle could be sensibly applied on a task-by-task basis.
Nonetheless, the principle gets at what we surely want from machines in general: why bother with them if they simply leave us doing the same things as before (even if we can do them faster)? What's more, Carr points out that being "freed up" isn't much consolation if it means being unemployed and without access to resources that might enable the victim to make use of their notional freedom.
That I can't apply the principle on a task-by-task basis, however, makes pursuing it very difficult: I have no way of determining the worth of any particular engineering project in light of it. (Not that I often get to make such determinations: my need to pay my bills means that what I do is dictated as much by what other people are willing to pay for as by my private views of what would make the world a better place.) Perhaps the principle isn't hopeless, but it requires a better formulation than what I'm able to come up with at the moment.
I was recently without Internet access at home for a week, apparently due to flooding at my local telephone exchange. I've heard that some people get very upset at losing their connectivity even for periods much shorter than a week, most recently in a Conversation article from Michael Cowling claiming that "we are all connected, every minute of every day, and without your phone you are on the outskirts of everybody else’s new, more digital, world." The local newspaper also ran a suitably angry headline on a stand outside my local newsagent towards the end of the outage. (I didn't read the newspaper itself.)
Frustrating as the lack of connectivity might have been on occasions, I actually found myself enjoying the adventure of a daily trip to the local library or city mall, where I could check my e-mail using WiFi services provided by the local council. (I used to wonder what use public WiFi would be given that we all have Internet connections at home anyway, but now I know.) I was reminded of the days of dial-up modems, when connecting to the Internet was a minor treat, and I maintained a list of Internet-things-to-do to be serviced by dialling in for a couple of hours every day or two. The only really annoying thing, in fact, was that I fell behind in my Coursera studies due to an inability to download course videos over the public WiFi network. I was almost disappointed when the fault came to an end and the adventure was over (though I did catch up on my studies).
One might suppose that I'm quite a different person to the smartphone-driven folk that inhabit the world described in Cowling's article. I'm certainly older. On the other hand, I presume that the video that Cowling presents to support the quote at the beginning of this entry is staged — not even the youngest and most gadget-conscious of my acquaintances or students behaves anything like the folks shown in it, and I'm sure that most people would regard those folks' behaviour as anti-social and obnoxious.
I recently went on a camping trip during which I was told that a young camper fitting Cowling's description had, over the course of the camp, discovered that she could, in fact, enjoy time without her gadget. One can speculate that I've just had twenty years longer than her to find this out, not to mention first-hand experience of a time when everybody went without a mobile phone all the time.
Perhaps being without the Internet appeals to the same part of us that camping does. I don't suppose I'd want to be camping indefinitely, though maybe I could if I had to, given that I'm of the same species as ancestral humans who reached every scrap of land except Antarctica without motorised transport, electricity, or even agriculture. Similarly, my younger acquaintances can surely go without their phones for a bit, and might even enjoy it up to a point, given that all of us did just that only twenty years ago. We just need to remember that there's more to us than the fashion of the day.
This quarter's issue of IEEE Technology and Society presents the results of a survey of blog entries concerning tablet PCs conducted by Efpraxia D. Zamani and colleagues. Two things about the survey struck me immediately: that "most of the bloggers hold upper level managerial positions", and that nearly all of the material quoted from their blogs seems rather inane. Zamani et al. could almost be writing a parody of Dilbert's Pointy-Haired Boss.
To be fair to upper-level managers, Zamani and colleagues compiled their material in a way that seems likely to select only the most inane stuff: they searched for "blog" and "iPad", and discarded any technical reviews. That is, they sought out casual writing about iPads and explicitly ignored rigorous reviews of the technology. We can still hope that most upper level managers actually have better things to do than write uninsightful observations like "it was ultra-convenient to just flip out the iPad ... without having to whip out a laptop or projector" (p. 76), which I would otherwise expect to find only in mediocre undergraduate essays. What, exactly, is the difference between "flipping out" and "whipping out"?
Whatever the merits of the bloggers' own observations, Zamani and colleagues identify a euphoric attachment to technological devices that seems totally alien to me. There are plenty of devices that I find useful for one purpose or another, and occasionally I might even say so in conversation or on this blog. But I don't think I'd ever use words like "love", "passion" and "excitement", as Zamani and colleagues do in the last part of their article (p. 78). I don't, for example, feel any loss when going for days or even weeks without my phone or computer when camping or travelling. (I do eventually wonder if any of my friends sent me any e-mail, though.)
In part, I guess this reflects my engineering background. For me and other engineers, technological artifacts are simply the end result of sound engineering principles. If we get excited about anything, it's the cleverness and power of the principles themselves. For non-engineers, however, technological artifacts can be magic boxes to be marvelled at in their own right.
Perhaps I'm also just not one to feel attached to non-human objects. Similar enthusiasm about cars and pets, for example, leaves me cold. I do feel sentimental about objects I've owned for a long time, and I hate throwing things out. But I can't imagine myself writing a blog entry praising the guitar I've owned for twenty years (but rarely play), or chronicling the life and times of my once-sturdy pair of cargo pants that were torn beyond repair during a hike last month.
I certainly can't imagine anyone wanting to read such a blog entry. But I guess iPad enthusiasts probably don't find my actual blog very interesting either.
Apparently deciding to take a break from electronics for the month, IEEE Spectrum takes a look at agricultural technology for its June 2013 issue. Spectrum is sufficiently impressed with what it sees to predict the coming of an age of plenty with food for all, whatever food crises and starvation might be feared by less optimistic forecasters.
Keith Fuglie (pp. 20-26) leads the optimism with an article explaining his supreme confidence that agricultural technology will provide nutrition for everyone into the foreseeable future. Whether or not we're going to starve is a topic for a different blog, but I do want to comment on the technology-bound world-view apparent in Fuglie's article and many of the others that follow it.
From the standpoint of technological optimism taken by Spectrum's contributors, all problems can, must and will be solved by technology. While a technology magazine like Spectrum could be expected to focus on the technological aspects of its subject matter, technology-bound articles like Fuglie's do not even appear to imagine that solutions might also come from policy, design, economics, culture and other areas. It's technology or bust (but of course there will be no bust because technology is presumed capable of solving any problem).
One can imagine an engineer who, upon seeing a piece of litter beside the road, sees an opportunity to develop an army of rubbish-collecting robots. A city taking up this army could spend millions of dollars to free its citizens from the trivial hassle of putting their litter in a bin. Pro-robot councillors, I suppose, might argue that litterbugs will drop litter regardless of how cheap and easy the bin seems to tidier citizens, and the robots will completely solve the problem where civic virtue might only partially solve it. But that tells a pretty sad story of the cost of laziness and irresponsibility: one might say that the technology has improved, but the citizens haven't.
Over the past couple of months, I've come across a few stories of misadventures with maps. The first involved a man who blamed his GPS for guiding him to the wrong side of the road. The second involved the discovery that an island appearing on several maps in the Coral Sea does not appear to exist. The third involved "Apple Map Disasters" reported in the February 2013 edition of APC Magazine (p. 15). The first two of these stories amazed me for different, but perhaps related, reasons, while the third provides something of an explanation.
The driver involved in the wrong-side-of-the-road episode presumably allowed his technological assistance to override his pre-GPS-navigator skills of reading road signs and following road markings. One or two of the commentators in the story also blame "distraction", which I also believe to be a hallmark of poor user interfaces. Either way, technology has frustrated a skill possessed by any competent driver.
An unnamed APC staff member seems to have suffered a similar lapse when Apple Maps' guidance led him to lug his equipment for ten minutes in the wrong direction down a street. On any ordinary Australian street, a simple glance at the street numbers would have told him the correct direction in which to go. Here, indeed, seems to be a pair of cases in which technology has made us stupid by causing its users to overlook their own skills in favour of technology that is not, in fact, adequate to replace them.
The existence or not of obscure islands sounds like a problem out of the seventeenth century, except that we now have Google Maps to blame. The Sydney Morning Herald, which seems to have broken the story, made much of the fact that Google Maps records a "Sandy Island" in the Coral Sea that could not be found by a recent scientific expedition. The story was consequently picked up as "IT news" by The Register and IEEE Spectrum. Shaun Higgins of the Auckland Museum (among others), however, points out that the supposed island pre-dates Google Maps, and, indeed, any computerised mapping system. It seems that Google Maps was simply repeating an error made by cartographers for a hundred years or more, yet news outlets interpreted the whole thing as an "IT glitch". (I should point out that all is not lost: the Sydney Morning Herald itself followed up with Shaun Higgins' explanation, and numerous commenters on The Register offered plausible suggestions on how the error might have come about without Google's intervention.)
APC quotes an explanation of Apple Maps' problems given by Mike Dobson. Apple, he thinks, relied on computerised quality-assurance algorithms without any human oversight to check that the algorithms themselves were correct. News outlets presuming Google Maps to be the source of all cartographic knowledge, I think, risk falling into a similar trap.
Ordinary users, I suppose, could arguably be forgiven for presuming that the products of big-name companies like Google, Apple and in-car navigation manufacturers meet certain standards of quality. Yet we all know that technology makers are fallible, and that even a device that performs one task well might not perform a related one at all. Perhaps "trust, but verify" would be better advice?