I Don't Want To Be A Nerd!

The blog of Nicholas Paul Sheppard
Archive for November 2012

Re-inventing the wheel, I mean, human

2012-11-29 by Nick S., tagged as artificial intelligence, user interfaces

Over the weekend I read David F. Dufty's Lost in Transit: The Strange Story of the Philip K Dick Android, whose subject matter is plainly described by its sub-title. I found the book informative and entertaining in its own right, but reading about androids also reminded me of a rhetorical question I asked in the first entry in this blog: what is the purpose of creating human-like artificial intelligence when we already have seven thousand million human intelligences?

I, and most other academics, could probably produce a long list of intellectual reasons for such a pursuit, ranging from better understanding of how humans interact with other animate objects to illuminating the concept of "intelligence". But some of the folks in Dufty's book (and elsewhere) clearly think that human-like artificial intelligences have more immediate practical uses.

David Hanson, the sculptor who championed the project and built the android's head, argues that "when we interact with things in our environment we interact more naturally, and form more natural relationships, with things that look like us" (p. 73). The surrounding text suggests that Hanson was also thinking about the more academic reasons outlined above, but I think the assumption in this quote deserves some scrutiny.

An essay that I read some years ago, and whose citation I now forget, disputed this kind of thinking using the example of cars. Nearly everyone can learn to drive a car, and we think nothing of it once we've got our licence. This is not, the essay points out, because cars look or behave anything like people, but because they have an interface suited to the task of controlling a motorised vehicle. Why expect that computers (or robots) should be any different?

Hanson might be correct in surmising that we interact more naturally with devices that look like us, at least in the sense that such interaction requires no skills beyond those we acquire informally at a very early age — though I suspect that many people (Sherry Turkle, for one) would consider the concept of a natural relationship with an artificial being to be an oxymoron. But that's not to say that a hammer or refrigerator, for example, would necessarily be easier to use if it looked like a person.

I think Hanson means to imply that human-like interfaces are worth pursuing because they seem likely to be the most appropriate ones in at least some situations. I'm not yet convinced that human-like interfaces are the best way of interacting with anything other than humans, but maybe that's because I don't have any particular uses for androids.

On the structure of computing revolutions

2012-11-23 by Nick S., tagged as buzzwords, mobile computing, prediction

I recently read an article mocking its own authors for failing to recognise that the iPhone (or some particular version of it) would instigate a revolution. Unfortunately I didn't record where I read it, and I haven't been able to find it again since I later began thinking about what constitutes a "revolution", and what it might feel like to live through one.

My immediate reaction upon reading the article was: are you sure you weren't right the first time? I, at least, don't feel like I've been through a revolution any time in the past ten years, or, indeed, my entire life. Sure, technology has steadily improved, but I've only ever perceived it as "evolution". I have no doubt that someone catapulted into 2012 from the time of my birth in the 1970s would find much to be amazed about. But, having lived through all of the intervening years myself, I had the much more mundane experience of seeing the changes one product at a time.

This raises the question: how much change is required, and how sudden does it need to be, to constitute a "revolution"? When discussing the history of computing with my computer systems students, I often speak of "trends" from analogue to digital, from stand-alone computers to networked ones, and from single-core to multi-core CPUs. I say "trend" because I perceive the changes as a gradual process of older products being replaced one-by-one by newer ones. But proponents of the iPhone (or digital or network or multi-core) revolution presumably perceive the changes as one big leap from existing products to a spectacular new entrant. (Either that, or they use the word "revolution" to mean "any perceptible change".)

Now, many small changes may add up to a big one. Someone of my mind born in Britain in 1800, say, might have observed machines or factories appearing one at a time over his or her lifetime. But that person's lifetime now seems short compared to the span of human history, and we consequently refer to that period as the Industrial Revolution. Still, I suspect that future historians will be looking at more than iPhones when they decide what to call the current period.

One of my students foreshadowed the taxonomic problems awaiting future historians when he observed to me that the articles he had been reading disagreed about which era of computing we currently enjoy. I forget the exact list of candidate eras, but one might have been the "mobile era" and another the "network era", and so on. Off the cuff, I suggested two explanations: firstly, that his sources were talking crap, and, secondly, that his sources were talking about two different aspects of computing.

The two explanations might not be mutually exclusive. Perhaps the iPhone revolutionised mobile telephony/computing for some definition of "revolution", but I didn't notice this revolution because I do relatively little telephony and mobile computing. But the iPhone didn't revolutionise other aspects of computing -- let alone biotechnology or space travel or any of numerous other technologies of the modern period -- so attributing a broader revolution to it would seem to be a load of crap.

Why is it so boring to use the right tool for the job?

2012-11-18 by Nick S., tagged as buzzwords, mobile computing

In thinking about both tablet PCs and Alone Together over the last month or so, I noted the paradigm of using the right tool for the job. The recommendation seems fairly banal, but I wondered if my perceived need to make it reflects the apparent existence of a contrary view in which there exists, or will shortly exist, some universal tool appropriate to all uses.

Henry Jenkins refers to this contrary view as "the black box fallacy" in his book Convergence Culture. I find it hard to identify any particular person who propagated the black box fallacy -- or dream, if you disagree with Jenkins and me -- and I can't imagine anyone owning up to a statement as simplistic as "device X is all we will ever need". Yet the black box idea seems implicit in utopian (and dystopian) narratives like the one implied by the question "Have digital tablets become essential?"

To be fair to anyone anticipating the arrival of a black box, there are presumably some limits in mind, albeit unstated and vague. Surely no one foresees a single black box performing all the functions of a computer, a vehicle, an oven, a refrigerator and a washing machine! But, even if we restrict the imagined functions of a black box to those currently performed by microelectronics, why expect a single box when there is plainly a whole host of different boxes on the market?

I suppose that the hype and excitement surrounding a new device tend to drown out news of existing devices, giving a false and unintended impression that the new device is far more important and interesting than the old ones. Presumably not even the most enthusiastic supporters of smartphones or tablet PCs believe that such devices are about to replace server farms or home theatres, for example. But the features of server farms and home theatres are likely to be far from the mind of someone enthusing over the latest mobile device.

The gradations between phones, smartphones, tablets, netbooks, laptops and desktops are more subtle, though. If desktop computers had only been introduced in 2012, after we had become accustomed to mobile telephony and portable computing, would we be so amazed by their computing power, large screens and keyboards as to forget that they aren't very mobile?

Alone together and feeling used by communication tools

2012-11-09 by Nick S., tagged as communication, experience, social networks

My recent difficulties with social networking inspired me to read Sherry Turkle's Alone Together: Why We Expect More from Technology and Less from Each Other. The book's subtitle neatly captures my dissatisfaction with LinkedIn and other supposedly social media: it's very easy to click a button that creates a record in a database stating that I'm "connected" with someone, but there's a whole lot more to do if I want to form and maintain a significant and effective relationship with that person.
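
The "record in a database" really is about as thin as that sounds. The sketch below is purely illustrative (the table layout and the names in it are my own invention, not any real social network's schema), but it shows how little a "connection" record needs to contain compared with everything a relationship demands.

    # A minimal, hypothetical sketch of a "connection" record, using Python's
    # standard sqlite3 module; nothing here reflects LinkedIn's actual schema.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE connections (person_a TEXT, person_b TEXT, connected_on TEXT)"
    )

    # "Connecting" with someone amounts to one trivial insert...
    conn.execute(
        "INSERT INTO connections VALUES (?, ?, ?)",
        ("Nick S.", "A. Colleague", "2012-11-09"),
    )
    conn.commit()

    # ...and the resulting record says nothing about trust, shared history or
    # obligation, which is rather the point.
    print(conn.execute("SELECT * FROM connections").fetchall())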

Turkle makes a distinction between "performance" and "friendship". In the first half of the book, "performance" refers to robotic toys that are programmed to enact rituals that children expect from conscious beings: the robots say they are happy, hungry, etc. even though they (presumably) don't experience such emotions the way humans do. In the second half of the book, "performance" refers to manipulating text messages and Facebook profiles to present the desired standards of coolness, connection and caring. She believes that

sociable technology will always disappoint because it promises what it cannot deliver. It promises friendship when it can only deliver performances (p. 101).

Turkle acknowledges critics who point out that we are always performing to one degree or another, in that we craft different personae for friends, family, work, school and so on. And how does one distinguish "authenticity" from a highly sophisticated and nuanced performance anyway? Of course Turkle contends that the performances exhibited by current robots and social networking sites are hopelessly inadequate to fully capture human emotion and relationships, and I find it hard to disagree.

It is, of course, conceivable that improvements in technology will one day overcome such inadequacy. But what to do in the meantime? Turkle doesn't recommend eliminating robots and social media and, indeed, seems to be quite comfortable with handing them out by the dozen as part of her research.

For me, the answer has to be about recognising the capabilities and limitations of particular media, employing them for what they are good at and dispensing with them for what they are not. The saddest stories in Turkle's book involve people feeling psychologically or socially compelled to use some tool despite its evident incapacity to meet the person's needs. Someone whose only tool is a hammer, as the saying goes, struggles with tasks that don't involve nails.

Most of the people in Turkle's studies are young -- children or teenagers -- and it could be that they simply haven't yet learned which tools work best for which tasks. Even older people struggle with how best to use new tools. Perhaps it isn't so surprising that things go awry in these situations.

Towards the end of the book, Turkle writes about people who have realised that the tools they have been using aren't working for them, and have consequently developed strategies like scheduling one-on-one phone conversations and deleting their Facebook profiles. Some of these strategies are fairly crude, but I think they demonstrate an important (and possibly under-rated) mind-set: a determination to make technology serve one's needs in place of passive acceptance of whatever technology happens to be in vogue.

Hackers? In this day and age?

2012-11-06 by Nick S., tagged as freedom, hackers, law

The (Australian) ABC's news web site recently featured a radio discussion between two unidentified persons regarding anonymous publication of material on the Internet. I'm not familiar with the story that sparked the discussion, but the conversation caught my attention for two reasons. Firstly, one of the participants referred several times to classical computer hacker attitudes that I had thought had vanished, or at least been seriously marginalised, with the popularisation of the Internet. Secondly, the other participant noted that certain "rights" supposed by such hackers to exist (in this case, to anonymity and to taking any file available for download) do not actually exist in law.

My graduate certificate in communications included a lecture that, in part, presented the romantic ideal of computer hackers as freedom-loving individuals bent on understanding, using and, if necessary, subverting computer technology for some greater purpose. I gather that many of the students were not particularly impressed with this portrayal, possibly because they identified "hackers" with virus-writers, identity thieves and spammers. While I don't think either the lecture or the original users of the word "hacker" intended it to mean "computer criminal", I also think it's very naïve to equate freedom with the power to use technology in whatever way one is capable of.

My own response to the lecture described the hacker mentality as a "might-makes-right philosophy that equates freedom with one's technological power to exercise it". Inspired by a related observation in David Brin's The Transparent Society, I postulated that competitions of technological power would, in fact, be won by well-resourced organisations rather than a few lone hackers.

Sure, classical hackers have won the occasional battle, like reverse-engineering the Content Scrambling System for DVDs or jailbreaking iPods. But I'm pretty sure that Google, Apple, Microsoft and the rest ultimately have a far mightier influence over our electronic devices than Jon Lech Johansen, Richard Stallman or even Linus Torvalds. Meanwhile, the public's image of a "hacker" is largely informed by the kind of lawless computer whizzes it encounters most often: spammers, phishers, data thieves and authors of malware.

The law recognises that power and right are not the same thing, and curtails rights like freedom of action and freedom of speech where, in the view of law-makers, one person's exercise of those freedoms would interfere with someone else's freedom or well-being. So my freedom and ability to write e-mail software, for example, do not entail the right to e-mail fraudulent advertisements for Viagra to every e-mail address I can download.

Perhaps an honest-to-God cyberlibertarian would say that I should have the right to send whatever e-mail I like to whomever I like. But would he or she appreciate the same activity from Google, say, which possesses vastly greater reserves of information and software development skill than I do?