Shortly before I hit the "publish" button on my previous entry, I read through the November 2014 issue of APC Magazine (yes, in October). The feature article, How to Hack Everything (p. 29 ff.), espoused a view aligned with mine in that it encouraged computer owners to put in the effort required to understand and customise their devices, but also quite different in that such hacking was promoted with "unlock extra features, better performance and more with these hardware secrets". Customisations of the latter sort are of great interest to computer technologists — especially when it comes to learning "how Wi-Fi and the Web are hacked" with packet sniffers, cross-site scripting and the like — but surely of little direct interest to anyone else.
This is all to be expected from a magazine whose main business is reviewing and investigating computer technology for readers with a high level of technical expertise. But it did cause me to pause before I published my entry, add "Part 1" to the title, and make a note to come back for a "Part 2" that contrasted the hacker's view of customisation with what I was imagining.
For APC and others wanting to assert the more mythologised meaning of the word, "hacking" is about understanding computer technology and bending it to one's will, more or less for its own sake. The goals of APC's hacks, for example, include overclocking CPUs, installing software on WiFi routers, and automating one's "online life" to no clear purpose.
Some of these, such as obtaining root access to smartphones, may be precursors to achieving something that is (or perhaps should be) of interest to ordinary people. I recently installed CyanogenMod, for example, in order to remove the numerous applications that my phone's manufacturer had pre-installed, but for which I have no use. Surely no one (apart from a phone manufacturer, I suppose) would say that such a situation is ideal: the folks who produced CyanogenMod, and the people who use it, need to employ a deep understanding of computer technology in order to achieve something that anyone can do using the standard install/de-install facility of a desktop operating system. (In fact, while searching for the reasons that phone manufacturers install these applications in the first place, I discovered that South Korea has recently issued guidelines forcing manufacturers to make almost all apps removable, which may do more for ordinary users than any amount of hacking.)
So all this may be a means to an end, even if it's an awkward one used only because better means aren't available. (This is actually the sense in which I most often use the word "hack" in describing a piece of engineering.) But what is the end? Understanding how technology works is a fine thing for engineers, and I'm sure no one would complain if others understood something of it as well. But people ultimately build technology to be used, not merely to be understood.
With this in mind, I can refine my concept of "critical computing" to be concerned primarily with the use of technology rather than its construction. Hacking of the APC sort isn't incompatible with this, and is perhaps even complementary. But I don't expect that there will ever be a day in which we all build our own hardware and software, any more than we build our own cars and bridges. We can nonetheless think about how we choose and use the technology that engineers make available to us: do we blindly pick up the latest product and join the latest web site, or do we think through what we want from our devices and how to best achieve it?
I read a collection of articles today confirming that the traditional hacker ethic isn't quite as dead as I might have thought it to be. Firstly, David Glance tells the story of a Canadian student "expelled for idealistically pointing out security flaws" in The Conversation, while The Register ran articles on the prosecution of Aaron Swartz for fraudulently obtaining access to scientific articles on JSTOR and on an attack on the University of Western Sydney criticising its decision to purchase iPads for its student population.
The recent Whitehaven Coal Hoax spawned a lot of comment on civil disobedience vs vigilantism in Australia. I don't think I've seen the same terminology applied to computer-based protest — "hacktivism" seems to be the preferred term — but it's surely much the same issue. When is defying the law nobly standing up for a cause, and when is it attempting to get one's way by force?
The cynical answer is that it's "civil disobedience" when one agrees with the political view being expressed, and "vigilantism" otherwise. More nuanced answers involve the availability of alternative methods of protest, the level and kind of harm resulting from the action, and the perpetrator's willingness to brave the ascribed punishment for his or her actions.
Hacktivism typically seems to me to fail most or all of the above tests, starting with the cynical one. The "rights" championed by the hacker ethic are frequently of little interest to anyone other than computing experts, and some of them would come at the expense of other people and industries (such as free access to private and commercial information). Of course computing experts might have genuine rights that are particular to their profession, but is anyone outside the hacker community impressed by the vandalisation of web sites, much less see the need to establish a right to it?
The Western liberal democracies that hosted all of the events listed in the first paragraph of this entry, and many others besides, provide numerous avenues through which people can make known their opinion on iPads, open access and just about anything else without needing to resort to fraud and the commandeering of other people's computer equipment. Librarians and academics, for example, are already making significant strides towards open access to scientific literature, so what need is there for vigilantism? Sure, your opinion may not be as well-known or as influential as you'd like it to be, but just about everyone else would probably say the same thing.
The (Australian) ABC's news web site recently featured a radio discussion between two unidentified persons regarding anonymous publication of material on the Internet. I'm not familiar with the story that sparked the discussion, but the conversation caught my attention for two reasons. Firstly, one of the participants referred several times to classical computer hacker attitudes that I had thought had vanished, or at least been seriously marginalised, by the popularisation of the Internet. Secondly, the other participant noted that certain "rights" supposed to exist by such hackers (in this case, anonymity and taking any file available for download) do not actually exist in law.
My graduate certificate in communications included a lecture that, in part, presented the romantic ideal of computer hackers as freedom-loving individuals bent on understanding, using and, if necessary, subverting computer technology for some greater purpose. I gather that many of the students were not particularly impressed with this portrayal, possibly because they identified "hackers" with virus-writers, identity thieves and spammers. While I don't think either the lecture or the original users of the word "hacker" intended it to mean "computer criminal", I also think it's very naïve to equate freedom with the power to use technology in whatever way one is capable of doing.
My own response to the lecture described the hacker mentality as a "might-makes-right philosophy that equates freedom with one's technological power to exercise it". Inspired by a related observation in David Brin's The Transparent Society, I postulated that competitions of technological power would, in fact, be won by well-resourced organisations rather than a few lone hackers.
Sure, classical hackers have won the occasional battle like reverse-engineering the Content Scrambling System for DVDs or jailbreaking iPods. But I'm pretty sure that Google, Apple, Microsoft and the rest ultimately have a far mightier influence over our electronic devices than Jon Lech Johansen, Richard Stallman or even Linus Torvalds. Meanwhile, the public's image of a "hacker" is largely informed by the kind of lawless computer whizzes they encounter most often: spammers, phishers, data thieves and authors of malware.
The law recognises this, and curtails rights like freedom of action and freedom of speech where, in the view of the law-makers, one person's exercise of those freedoms would interfere with someone else's freedom or well-being. So my freedom and ability to write e-mail software, for example, does not entail the right to e-mail fraudulent advertisements for Viagra to every e-mail address I can download.
Perhaps an honest-to-God cyberlibertarian would say that I should have the right to send whatever e-mail I like to whomever I like. But would he or she appreciate the same activity from, say, Google, which possesses vastly greater reserves of information and software development skill than I do?