I Don't Want To Be A Nerd!

The blog of Nicholas Paul Sheppard
Archive for January 2013

Terrorist, freedom fighter, or hacktivist?

2013-01-29 by Nick S., tagged as hackers

I read a collection of articles today confirming that the traditional hacker ethic isn't quite as dead as I might have thought it to be. Firstly, David Glance tells the story of a Canadian student "expelled for idealistically pointing out security flaws" in The Conversation, while The Register ran articles on the prosecution of Aaron Swartz for fraudulently obtaining access to scientific articles on JSTOR, and on an attack on the University of Western Sydney criticising its decision to purchase iPads for its student population.

The recent Whitehaven Coal hoax spawned a lot of comment in Australia on civil disobedience versus vigilantism. I don't think I've seen the same terminology applied to computer-based protest — "hacktivism" seems to be the preferred term — but it's surely much the same issue. When is defying the law nobly standing up for a cause, and when is it attempting to get one's way by force?

The cynical answer is that it's "civil disobedience" when one agrees with the political view being expressed, and "vigilantism" otherwise. More nuanced answers involve the availability of alternative methods of protest, the level and kind of harm resulting from the action, and the perpetrator's willingness to brave the prescribed punishment for his or her actions.

Hacktivism typically seems to me to fail most or all of the above tests, starting with the cynical one. The "rights" championed by the hacker ethic are frequently of little interest to anyone other than computing experts, and some of them (such as free access to private and commercial information) would come at the expense of other people and industries. Of course computing experts might have genuine rights that are particular to their profession, but is anyone outside the hacker community impressed by the vandalisation of web sites, much less persuaded of the need to establish a right to it?

The Western liberal democracies that hosted all of the events listed in the first paragraph of this entry, and many others besides, provide numerous avenues through which people can make known their opinion on iPads, open access and just about anything else without needing to resort to fraud and the commandeering of other people's computer equipment. Librarians and academics, for example, are already making significant strides towards open access to scientific literature, so what need is there for vigilantism? Sure, your opinion may not be as well-known or as influential as you'd like it to be, but just about everyone else would probably say the same thing.

Gaming, on and off the armchair

2013-01-17 by Nick S., tagged as games

Andy Ruddock's article on violent computer games on The Conversation last week mentions Henry Jenkins' opinion, expressed after the Columbine Massacre in the US in 1999, that "the trouble with most gaming violence ... was that it was boring".

Having myself tired of yet another first-person shooter around the same time, I'm inclined to agree with Jenkins. To judge by the popularity of games like World of Warcraft and the endless stream of blowing-stuff-up that appears in games reviews in APC Magazine and the conversations of my gaming acquaintances, however, millions of gamers disagree.

Reading through said games reviews, and enduring such conversations, it's easy for scholarly types to dismiss computer games as the most repetitive and unimaginative form of art ever devised. The cricket season in Australia, however, reminded me that games like cricket, baseball and various forms of football have been played by more or less the same rules for 150 years or so, and yet people (including me) still find them interesting to both play and watch.

So why shouldn't computer games have the same longevity? If playing a constantly-evolving roster of opponents at cricket and football can keep us entertained for 150 years, why not a constantly-evolving roster of computerised space aliens, fantastical creatures, and terrorists?

Some classic computer games may conceivably have this sort of longevity: I still think fondly of games like Pacman, Tetris and Bubble Bobble long after I lost interest in Doom, Quake and all their clones. Jenkins' and my complaints of repetitive violence might just be symptoms of Theodore Sturgeon's classic observation that "ninety percent of everything is crap" — it's not like every film, book or piece of music released is a masterpiece of inspiration and originality, either.

Game enthusiasts of the 1990s and 2000s often seemed to me to be preoccupied with the quality of sound and graphics, rather like cricketers being preoccupied with the construction of bats and balls. A visit to the International Cricket Hall of Fame (formerly the Bradman Museum of Cricket) earlier this week, however, reminded me that the rules and equipment used in cricket developed for a century or more before what we now recognise as the first Test match in 1877. In a hundred years' time, will we look back on the gamers of the 1990s in the same way we look back on those who experimented with the lengths of pitches, the construction of bats, and styles of bowling in the 1800s?

On near-replacements for navigational skills

2013-01-16 by Nick S., tagged as dependence, transport

Over the past couple of months, I've come across a few stories of misadventures with maps. The first involved a man who blamed his GPS for guiding him to the wrong side of the road. The second involved the discovery that an island shown on several maps of the Coral Sea does not appear to exist. The third involved the "Apple Map Disasters" reported in the February 2013 edition of APC Magazine (p. 15). The first two of these stories amazed me for different, but perhaps related, reasons, while the third provides something of an explanation.

The driver involved in the wrong-side-of-the-road episode presumably allowed his technological assistance to override his pre-GPS-navigator skills of reading road signs and following road markings. One or two of the commentators on the story also blame "distraction", which I believe to be a hallmark of poor user interfaces. Either way, technology has frustrated a skill possessed by any competent driver.

An unnamed APC staff member seems to have suffered a similar lapse when Apple Maps' guidance led him to lug his equipment for ten minutes in the wrong direction down a street. On any ordinary Australian street, a simple glance at the street numbers would have told him the correct direction in which to go. Here, indeed, seems to be a pair of cases in which technology has made us stupid by causing its users to overlook their own skills in favour of technology that is not, in fact, adequate to replace them.

The existence or not of obscure islands sounds like a problem out of the seventeenth century, except that we now have Google Maps to blame. The Sydney Morning Herald, which seems to have broken the story, made much of the fact that Google Maps records a "Sandy Island" in the Coral Sea that could not be found by a recent scientific expedition. The story was consequently picked up as "IT news" by The Register and IEEE Spectrum. Shaun Higgins of the Auckland Museum (among others), however, points out that the supposed island pre-dates Google Maps, and, indeed, any computerised mapping system. It seems that Google Maps was simply repeating an error made by cartographers for a hundred years or more, yet news outlets interpreted the whole thing as an "IT glitch". (I should point out that all is not lost: the Sydney Morning Herald itself followed up with Shaun Higgins' explanation, and numerous commenters on The Register offered plausible suggestions on how the error might have come about without Google's intervention.)

APC quotes an explanation of Apple Maps' problems given by Mike Dobson. Apple, he thinks, relied on computerised quality-assurance algorithms without any human oversight to check that the algorithms themselves were correct. News outlets presuming Google Maps to be the source of all cartographic knowledge, I think, risk falling into a similar trap.
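
Dobson's point is easier to see with a toy example. The sketch below (in Python, and entirely hypothetical: the datasets and the check are my own invention, not anything Apple or Google actually runs) shows an automated consistency check that "verifies" a map feature by comparing nominally independent references, and so cheerfully confirms a phantom island when those references all inherit the same cartographic error.

    # Illustrative sketch only: a toy quality-assurance check for map data.
    # The reference datasets are hypothetical, but imagine that both descend
    # from the same nineteenth-century chart and so share its phantom island.
    REFERENCE_A = {"Sandy Island": (-19.22, 159.93)}
    REFERENCE_B = {"Sandy Island": (-19.22, 159.93)}

    def automated_qa(feature, coords, tolerance=0.5):
        """Pass a feature if at least two reference datasets agree on it."""
        agreements = 0
        for reference in (REFERENCE_A, REFERENCE_B):
            known = reference.get(feature)
            if known is None:
                continue
            if abs(known[0] - coords[0]) <= tolerance and abs(known[1] - coords[1]) <= tolerance:
                agreements += 1
        return agreements >= 2

    # The check happily confirms an island that a ship could sail straight
    # through, because nobody verified that the references were independent.
    print(automated_qa("Sandy Island", (-19.22, 159.93)))  # True

The only real defence is something outside the loop, whether a human reviewer or an independent measurement, checking the checker.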

Ordinary users, I suppose, could arguably be forgiven for presuming that the products of big-name companies like Google and Apple, and of in-car navigation manufacturers, meet certain standards of quality. Yet we all know that technology makers are fallible, and that even a device that performs one task well might not perform a related one at all. Perhaps "trust, but verify" would be better advice?

The predictable top ten predictions of 2013

2013-01-12 by Nick S., tagged as prediction

Around this time of year, many commentators on technology (and probably other things) like to offer their "top ten predictions" for the coming year. I've recently skimmed through IEEE Spectrum's 2013 Tech to Watch and the top ten tech predictions of The Conversation's Mark Gregory.

I say "skimmed through" because I'm doubtful that it would be worth my time to examine such predictions in detail. For a start, it isn't clear to me how the commentators define "top ten". Do they mean their ten most confident predictions? Or, since this criterion would result in unhelpful predictions like "computers will represent numbers in binary", maybe they mean their ten most confident predictions of what will change? Or, do they mean the ten most profitable technologies? Or the ten most influential? Or the ten most interesting (which is surely subjective)?

I recently read Duncan Watts' Everything is Obvious, which, among many other things, makes the point that commentators making these sorts of predictions are rarely held to account for what they say. Commentators set out their predictions for the year in December or January, but, so far as I can tell, they're largely forgotten come February. The predictions have no obvious consequences for either their makers or their users, and, indeed, seem to amply satisfy Harry Frankfurt's rigorous definition of "bullshit" as speech made without any concern as to whether it is true or not.

Watts observes that not only is it difficult to predict the fate of current trends, but we don't even know what to predict. Perhaps, in this case, Spectrum's contributors can make some informed guesses about electric vehicles or computer displays that they happen to have heard about, but numerous technologies of the future are likely being developed in currently unheard-of lab experiments or software development houses where neither the IEEE nor anyone else knows to look.

To be fair to Spectrum, I don't think the editors necessarily mean to make any grand statements about what technology will or won't be popular or profitable or influential, but only to draw the reader's attention to some technology that the editors think is interesting. The writers do acknowledge the doubters and pitfalls of technology like Google Glass, for example. I wonder if they and their fellow prognosticators ought to dispense with the "top ten" and the "predictions", and use a more modest "ten interesting things"? After all, that's effectively what the editors do when they put together an ordinary issue of Spectrum or The Conversation.

The University of Western Sydney set to deploy black boxes

2013-01-06 by Nick S., tagged as education

The University of Western Sydney ("UWS") recently announced that it would give all new students an iPad. Numerous commentators on The Conversation and elsewhere have — probably rightly, in my view — panned the initiative as an example of marketing over substance.

UWS' own information on the initiative provides a vague assurance that "the iPad initiative will assist academic staff in the delivery of cutting edge learning and teaching." The concrete examples that follow are limited to online lectures and library services, which have been available for a decade or more at universities around the world, and work fine with devices that existed long before the iPad.

The Conversation quotes one Phillip Dawson observing that the iPad may help bridge the "digital divide" (though he thinks it is an expensive option). I can certainly see a lot of sense in providing facilities that ensure that students, no matter what their background, are able to participate in their courses and complete the work required of them. UWS, however, seems to have fallen victim to the black box fallacy in thinking that iPads are the solution for all courses. Given that much university work involves writing essays, doing mathematics and (in the courses that I teach) writing computer programs, what are students expected to do with a device without a keyboard?

Dawson goes on to observe that students can expect "this sort of technology will be an integral part of the learning experience at UWS", which seems consistent with UWS' own announcements as well as the comments of Simon Pyke on a similar initiative at the University of Adelaide. If so, I pity the academics at UWS (and the University of Adelaide) who, I suppose, are being asked to teach to the technology instead of being offered the technology that best supports their teaching. I fear to write what I would think if someone told me that I had to teach programming using an iPad, which I understand to have no keyboard, no compiler, and no ability to run programs until they have been approved by Apple.

I'm pretty sure that Apple will be the biggest winner out of UWS' purchase. Apple will sell thousands of devices, and add UWS' imprimatur to its educational credentials. Maybe the students will get a piece of equipment with some value as a content delivery and communications tool, but to what are they going to turn when they want to practise the critical thinking, scientific skills, art and communication skills that they actually came to university to develop?