I recently caught the movie Her (2013), whose story of a man falling in love with an "operating system" (actually what is more commonly called an "artificial intelligence") seemed like it should provide plenty of material for commentators on humans' relationships with technology. But apart from the prevalence of pastel shirts and bad moustaches in this imagined future, I was most forcefully struck by the constant use of voice interfaces. The main character makes his living by dictating letters to a computer that prints them out in faux handwriting, and, once his new operating system is installed, he constantly chats away to it without any apparent regard for what might be overheard by the people around him. Nor do the people around him pay him any regard.
I've long suspected that talking computers are part of The Amazing Science Fiction Future That Never Arrived. Not because they don't work — though my limited personal contact with them suggests that voice recognition is still not particularly good — but because they aren't nearly as useful as many a science fiction writer has supposed them to be. Is it really so hard for an able-bodied person to push a button or touch an icon on a screen? Can't writers, well, write? And would any real writer (or anyone else needing to concentrate) want to work in an office where everyone was babbling at their computers all day?
A week after seeing Her, I happened to read a quote from one John R. Pierce in the August 2014 edition of IEEE Spectrum: "Many early computer enthusiasts thought that computers should resemble human beings and be good at exactly the tasks that human beings are good at" (p. 8). He goes on to describe the pursuit of human-like computers as "facing the future with one's back squarely towards it", that is, looking at the past and assuming that the future will be a technologised version of the same.
I take Pierce to be making a point similar to one I've already discussed a couple of times in this blog: what use would a human have for a computer that did something that he or she is already good at? Computers are so useful precisely because they're good at things at which humans are not — most fundamentally, the rapid and reliable carrying out of minute instructions.
When I was (much) younger, I think I supposed that we'd one day be able to program our computers using English instead of the difficult-to-learn formal languages that we use now. Or at least I assumed that everyone else was pining for that day, as evidenced by depictions like Her. But greater experience tells me that the reason that we don't use English to program computers isn't that they can't understand it (though they can't), it's that English isn't actually a particularly good tool for describing data or issuing instructions. That's why lawyers and philosophers spend so much time debating the precise interpretation of observations and phrases, and why scientists and others resort to mathematics when they want their meaning to be indisputable.
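The point can be illustrated with a trivial sketch (my own contrived example, not anything from the film or from Pierce). Even an instruction as simple as "add up the numbers from 1 to 10" is ambiguous in English, while a formal language forces the ambiguity out into the open:

```python
# The English request "add up the numbers from 1 to 10" leaves room for
# argument: does "to 10" include 10? A formal language makes us say.
exclusive = sum(range(1, 10))   # 1 + 2 + ... + 9
inclusive = sum(range(1, 11))   # 1 + 2 + ... + 10

print(exclusive, inclusive)
```

The two readings give different answers (45 and 55), and the code admits exactly one of them per line. Lawyers and mathematicians solve the same problem with careful definitions; programming languages simply make the discipline compulsory.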
I'm not sure where the idea that computers should or would be like humans came from. They neither look nor act anything like humans, and I'm pretty sure that most psychologists would laugh at the idea that humans behave like neat information-processing machines. And humans have plenty of trouble talking to each other — Her illustrates this itself — so why expect talking to a computer to be any better?
I recently found myself with contradictory reactions after reading an article on "citizen developers" on the ABC's Technology & Games site. Peter Fuller writes about the potential for ordinary users to develop their own software using "application platform-as-a-service" (aPaaS) technology. This technology is supposed to allow what software engineers call "domain experts" to construct their own domain-specific software without recourse to professional analysts and developers, at least for relatively simple applications.
My first reaction was that Fuller might be making a misguided attempt to promote software development as fun and easy. This reaction was probably also influenced by another recent ABC report, in which Australia's Chief Scientist bemoaned an alleged fall in education standards, and in which it was asserted that maths and science ought to be "fun" in order to attract school students. I've long wondered if such advice might be misguided: mastering mathematics, science and engineering requires substantial effort, and anyone expecting fun and games is surely kidding themselves. Mastery might be all of rewarding, interesting and useful, but it's not fun in any conventional sense. As one of my harder-partying friends observed during our undergraduate days: "I can't really call myself a hedonist; I'm studying engineering."
Upon further reflection, I began to wonder if aPaaS or similar technology might also provide an opportunity for users to take control of their computers where they are willing to make an effort, but don't have the time to turn themselves into professional software developers. Could aPaaS be one approach to the critical computing that I pondered last month?
Having not investigated any aPaaS software myself, or observed any non-developers using it, I can't say for sure which reaction comes closer to the truth. My experience with integrated development environments hasn't been encouraging: they seem to encourage even professional software developers, let alone students and amateurs, to produce lazy code that satisfies the formal syntax of the language but omits error handling, meaningful comments and other qualities of well-made software. And I'm pretty sure I've heard similar claims about non-programming developers before — most recently, in the form of "mash-ups" — but I'm yet to see much useful software that is actually made this way.
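To make the complaint concrete, here is a contrived example of my own. Both functions satisfy the language's syntax and both "work" in a demonstration, but only one anticipates the ways the world can fail:

```python
def read_config_lazy(path):
    # Crashes with an unhandled exception on a missing or unreadable file,
    # and relies on the garbage collector to close the file handle.
    return open(path).read()

def read_config_careful(path):
    """Read a configuration file, returning None if it cannot be read."""
    try:
        with open(path) as f:   # the file is closed even if reading fails
            return f.read()
    except OSError:
        return None
```

A tool that merely helps users produce the first kind of code faster hasn't made them developers; it has made them faster producers of fragile software.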
Most likely, though, aPaaS can be used in both modes (as can integrated development environments): careful users can use them to increase their productivity and the control that they have over their computers, while superficial observers confuse cobbling together a few Lego blocks with engineering. Fuller makes a similar point with a cooking analogy: many of us can put together a satisfying meal for a few friends, but we employ professional caterers when it comes to preparing a six-course meal for a hundred guests at a big event.
Since writing my previous entry on positive computing, I've pondered how software might promote my well-being beyond its traditional promise to make things faster and easier. I've struggled. Perhaps I'm just not particularly creative when it comes to positive ideas, or maybe I'm not sufficiently well-versed in the theory of subjective well-being to know what might be helpful.
I've found myself thinking more about the consumer side of the question, which I left unanswered in my previous entry. Having made connections to some earlier complaints about lazy use of communication tools and e-mail, I realise that I've begged the question: how should we be using our computers?
On one hand, I've been critical of blind acceptance of trendy devices and services, and of lazy submission to user interfaces developed by misguided software designers. On the other hand, I don't think it's reasonable to expect every user to possess the deep technical understanding of computers required to control every detail of his or her experience. Even the most sophisticated users simply don't have the time to build every item of hardware and compose every item of software to meet their precise needs, even if they have the theoretical ability to do so.
The first approach that occurred to me would be to demand that we make our "best effort", that is, do as much as we can within the constraints of our time and technical ability, and always strive to improve. Whenever I'm particularly irritated by a feature that isn't meeting my needs, for example, I'll do a quick search for how to modify that feature. And when I've got more time, I'll invest that time in customising my computer to meet my needs.
The second approach that occurred to me was suggested by Richard Thaler and Cass Sunstein's book Nudge (2008), in which they discuss the development of "choice architectures" that encourage people to make good choices when unable to think the matter through carefully. The basic idea is to think carefully about the desired outcomes during the design phase, and design the system to make it easy to make the choices leading to those outcomes. Such thinking is (I hope) amongst the bread and butter of software designers, and Thaler and Sunstein specifically mention the example of e-mail clients that pop up warnings when the user asks to send an e-mail that contains the word "attach" but does not have any attachments. But software users can apply the same idea, as is occasionally suggested in advice columns like Cassie White's article on digital overload that happened to appear on the ABC's Health & Wellbeing site while I was working on this entry.
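The attachment warning that Thaler and Sunstein mention amounts to a very small piece of logic. A minimal sketch might look like this (the function name and signature are my own invention, not any real mail client's API):

```python
def should_warn_missing_attachment(body: str, attachments: list) -> bool:
    """Nudge the sender: the message mentions attaching something,
    but nothing is actually attached."""
    mentions_attachment = "attach" in body.lower()
    return mentions_attachment and not attachments

# A mail client would run this check just before sending and, if it
# returns True, pop up a confirmation dialog instead of sending silently.
print(should_warn_missing_attachment("Please see the attached report.", []))  # True
print(should_warn_missing_attachment("See you tomorrow.", []))                # False
```

What makes this a nudge rather than a restriction is that the user can still click through the warning and send anyway; the choice architecture merely makes the likely mistake harder to commit by accident.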
Lastly, I wonder if we need a concept of "critical consuming" analogous to the "critical reading" concept that teachers try to impart to students. In a computing context, we need more than the mere ability to move a mouse or touch a screen to get the goods on offer; we need also to think about which goods we want, why we want them, and whether or not they're really the best goods for our purposes.
Shortly after writing my entry on the joys of engineering and the banality of products, I found that Rafael A. Calvo and Dorian Peters had addressed much the same issue in the Winter 2013 issue of IEEE Technology and Society (pp. 19-21). In fact, they say they're soon to publish a whole book on the subject, to be called Positive Computing. I've added it to my reading list.
In the meantime, the book's title comes from a 2011 position paper written by Tomas Sander, who looks at the role of information technology in pursuing the "positive psychology" proposed by Martin Seligman. The basic idea is to create computer applications that promote what psychologists and economists call "subjective well-being", rather than applications that merely allow us to do things faster. Tibor Scitovsky might have had exactly this in mind if he were writing The Joyless Economy today.
I'm sure that plenty of applications already exist that promote well-being in one way or another. Calvo and Peters specifically mention SuperBetter, bLife and the Mindfulness App, which seem to implement ideas from the positive psychology school. The promises made by these applications might be a little saccharine for my tastes, and I have certain misgivings about aspects of Seligman's ideas, but I think there's reason to believe that great games, for example, can provide meaningful and satisfying challenges.
On the other hand, I'm sure that there is plenty of software out there that promises meaningful and satisfying experiences, but ultimately provides only superficial simulacra of such experiences. The development and use of such software might be driven, in part, by a wish for fast and easy access to desirable experiences.
Whatever the motives of producers in creating the products that they do, Scitovsky calls for consumers to become more sophisticated in their choice and use of products. In a computing context, for example, word processing software may make it fast and easy to edit and format documents, but it's still up to writers to strive for meaningful words, and up to designers to strive for attractive pages. If they don't, word processors are just a fast and easy way to produce unsatisfying junk. I've previously made similar comments about communication technology that I can now interpret as a need to be more sophisticated about the communication tools that we use.
Seeing that Scitovsky and others were writing about these notions back in the 1970s, I wonder why we still appear to be prioritising fast and easy over meaningful and satisfying. I suppose that Scitovsky's critics might argue that history has shown him wrong, and that the majority of people really do value fast and easy products over what a few elitists think are more worthy pursuits, Maslow's hierarchy of needs be damned. But when I see the degree to which Australians appear to have convinced themselves that we're "doing it tough" despite enjoying one of the highest levels of material wealth that has ever existed in the world, I suspect that the critics and their followers might just have chosen to pursue the fast and easy path because it is itself the fast and easy choice.
This month, a couple of the magazines to which I subscribe presented some challenges to the black box fallacy. February's issue of IEEE Spectrum (p. 23) outlines Jakob Nielsen's views on Windows 8; his lab testing of the new operating system leads him to suppose that "Microsoft tried to almost optimize for the mobile scenario, and that's why their desktop design falls through so bad." APC Magazine's extensive review in November 2012 came to a similar conclusion, albeit without the precise terminology that Nielsen uses to explain his views in the full interview about his experiments. In the March 2013 issue of the same magazine, Tony Sarno's editorial (p. 3) lambasts the view that "the PC is dying", to be replaced by the present fashion for tablets and smartphones.
Now, I suppose that a mobile computing enthusiast might assert that the PC is indeed dying and that Microsoft is therefore doing exactly the right thing in optimising its operating systems for mobile computing devices. More accurately, if Microsoft sees its market as consisting largely of mobile devices (rather than office computers and server farms), then maybe it is indeed doing the right thing for its commercial purposes.
Yet, if for no other reason than the size of the screens involved, tablets and smartphones are surely not going to replace server farms, home theatres, and probably not even office computers in the foreseeable future. Obviously it's Microsoft's own business as to what market they want their products to serve, but I haven't heard the company announce an end to its interest in the PC market, and all of the reviewers mentioned above clearly expect Windows to continue to serve this market. Unless we've all completely misunderstood Microsoft's intentions for Windows 8, it seems that Microsoft may have fallen victim to a form of black box fallacy in which there is one grand unified interface suitable for interacting with all kinds of devices, regardless of the device's purpose or form factor.
Considering APC's Future of the PC Poll, however, did give me some first-hand experience with why someone might forget about PCs: they are, it has to be said, pretty boring. I could, no doubt, amply answer the poll's question about what can be done with a PC, but word processing, software development and the hosting of web sites don't much inspire me to "suggest a positive marketing message or slogan that PC makers can use in their marketing". Then again, maybe that's exactly how you'd expect an engineer to answer. I find soap, say, pretty boring too, but my local supermarket still moves shelves full of the stuff.