I've already written one entry inspired by a recent Conversation article in which Graham Murdock suggested that "surveillance threatens us with a new serfdom". It's not an easy article to understand, and I'm still uncertain if he is trying to cover too much in too little space, or has just mashed choice bits of history, politics and modern technology into an incoherent fantasy of totalitarian government. Whatever Murdock's intent, an alien reading the comments on the article would be certain that Australia and countries like it are totalitarian states.
I dithered for a while over whether I'd bother to write a comment of my own, in part because I wasn't sure I understood Murdock's point, in part because I wasn't sure I had anything (new) to say, and in part because I feared that questioning anti-surveillance rhetoric would have me perceived as a champion of totalitarian surveillance. The last motivation is the most interesting to me now, and ultimately led me to decide that I should comment by way of accepting my own criticism of the idea that secrecy protects us from discrimination. The same might be said of my previous blog entry, which contained a few rhetorical questions that I can imagine being answered with contemptuous and/or incredulous rants from anti-surveillance commenters and cyberlibertarians.
As it turned out, no one replied to my comment at all, so I either didn't offend as many people as I feared, wasn't as interesting as I'd hoped, or simply didn't make enough sense. So what did I have to fear, if not ridicule from commenters whom I've dismissed as ranters and fantasists anyway?
In working through my previous blog entry, I came to realise that a large part of my difficulty came from trying to confront anti-surveillance rhetoric on its own terms, in which "surveillance" is presumed to imply arbitrary discrimination and persecution, and "privacy" is presumed to imply freedom. But the whole purpose of my critique is that this view is confused and unhelpful, not to mention absurd if its adherents really hold that Australia is a totalitarian state or anything close to one.
One cure might then be to eschew terms like "surveillance" and "discrimination", and instead draw on terminology developed within a more nuanced worldview. Of course I can't make everyone else adopt whatever terminology or worldview I choose, certainly not within the scope of a comment on an article. But this challenge does encourage me to think carefully about how I present my ideas.
Responding to Murdock's article, I cobbled together something about control of information, which is a bit of a mish-mash of an idea that appeared (somewhat vaguely) in the article and the use-centred view of privacy that we adopted when I was developing experimental privacy protection systems. I'm ultimately not all that happy with this response, though I hope I at least indicated that "privacy", "secrecy" and "liberty" might not have quite the straightforward relationship that anti-surveillance rhetoric supposes.
In responding to a recent Conversation article on surveillance, I drew an analogy between the wearing of ninja costumes and secrecy-centred approaches to privacy. I've used this analogy several times elsewhere on this blog, but writing on The Conversation — where even comments probably receive far more attention than this blog — forced me to focus on the quality of the analogy.
In trying to portray rants about surveillance as simplistic and beside the point, I worked out a brief description of a world in which we used ninja costumes and ID numbers to prevent anyone finding out about us. I later wondered, off-line, if the imagined world was not just as absurd as I intended it to be, but also just as bad for freedom as the totalitarian state being imagined by the author of the article (Graham Murdock) and most of the other commenters. On the face of it, feeling compelled to wear ninja costumes and answer to ID numbers sounds very much like the dehumanised totalitarianism that Murdock and commentators say they fear.
Of course surveilling electronic networks doesn't work quite the same way as surveilling the streets: the kind of information involved is quite different, and computers can process and record much greater quantities of information than street-walking spies. But nor is it entirely different: both forms of surveillance can support the kind of arbitrary discrimination that anti-surveillance rants presume to be the goal of surveillance systems, and both can be combatted by a tell-nobody approach.
Re-reading my previous blog entries on privacy, I realised that I'd come across the key point in a Conversation article from Ashlin Lee and Peta Cook: a large part of freedom concerns the freedom to express oneself, and it is exactly this freedom that would be threatened by the ninja-costume state. Sure, the government would be unable to persecute any of its ninja-citizens, but no one would be able to do what they wanted to do anyway (unless all they wanted to do was to dress as ninjas).
Now suppose I encrypted and anonymised all of the entries in the blog, and all of my comments on the Conversation, just to make sure that no government or corporate overlords could pick me up in a crackdown on people with English names, long hair, or sceptical views on technology boosterism. I'd be free to have all the views I liked, but no one would be able to read them (they're encrypted), and no one would know that I'm a person who identifies with these characteristics. Is this a freedom worth having?
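The futility of that arrangement is easy to make concrete. As a toy sketch (a stdlib-only one-time pad with a hypothetical blog entry, not a production cipher), encrypting an entry renders it unreadable to everyone who lacks the key — including, of course, the very audience I supposedly wanted to reach:

```python
import secrets

def xor_pad(message: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each message byte with the corresponding key byte.
    # Applying the same key twice recovers the original message.
    return bytes(m ^ k for m, k in zip(message, key))

entry = b"I hold sceptical views on technology boosterism."
key = secrets.token_bytes(len(entry))  # random key, same length as the entry

ciphertext = xor_pad(entry, key)
print(ciphertext.hex())                # gibberish to anyone without the key
assert xor_pad(ciphertext, key) == entry  # only the key-holder can read it
```

Without the key, the ciphertext reveals nothing about my views; with the key distributed to no one, my "expression" reaches no one either.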
I suppose that critics of Lee and Cook's idea might argue that they want to express themselves to certain chosen people, but not to the world at large for fear of embarrassment or persecution. I can see a certain amount of pragmatic appeal in this position under various circumstances, but how does one identify those chosen people in the first place? And would not limiting our expression to only a select few like-minded fellow citizens leave us in a filter bubble from which we were unable to see perspectives other than our own?
Or perhaps we'd like to express ourselves to the world at large, but have the government and corporations politely ignore us. But, again, how do we choose those whose attention we want to attract and those by whom we wish to be politely ignored, and how do those parties know what we want of them? And, come to think of it, do we really want a government that ignores us anyway?
I've just finished reading Evgeny Morozov's To Save Everything, Click Here (2013), which is something of a rant against what he calls "technological solutionism", or what I might otherwise call "techno-utopianism". Morozov is against a lot of things — so many and in such wide variety that it's hard to know what he is actually for — but one of them is technological systems designed to encourage or coerce good behaviour. Being a researcher in information security, the entire purpose of which might be said to be to coerce behaviour, I felt this idea required closer examination.
Morozov fears that deploying technological and psychological tools (he seems to find Richard Thaler and Cass Sunstein's Nudge (2008) at least as disagreeable as techno-utopians) that affect behaviour might rob humans of their moral responsibilities. Not only might such systems deprive humans of the ability to engage in civil disobedience, he imagines, but they might cause our moral sense to wither away altogether from lack of any opportunity to apply it.
Thaler and Sunstein themselves offer what I think is the most devastating critique of this line of reasoning: the designers of any system, technological or otherwise, cannot choose not to choose. The designer(s) of a system can make various things more or less difficult, or more or less prominent, or more or less valued, and so on, but they cannot design a system with no design. (And refusing to design anything is just accepting whatever choices are embodied in the status quo.)
Deep down, Morozov probably knows this, and he does make a few suggestions as to how he thinks certain systems might be improved. But what about the danger that our moral senses will atrophy through lack of exercise?
I heard a similar thought expressed with regard to digital rights management during a seminar in about 2009. The speaker (whose name I forget) told us that certain critics of digital rights management claim that it inhibits the moral expression of media users by not allowing them to decide for themselves whether or not to obey copyright law. This might sound noble enough, the speaker noted, but not many of us worry that the locks on our doors might inhibit the moral expression of burglars. Most people really do want to inhibit moral expressions that they deem harmful; they just disagree over what is harmful, or what is the most effective way of dealing with any particular harmful expression.
In any case, I was recently wondering if establishing a prohibition might exercise our moral sense just as much as (or even more than) not establishing one. When confronted with a rule that I don't understand, I ask: why does this rule exist? The answer may enlighten me about the point of view of the person who made the rule, or may cause me to suggest an improvement to the rule. Perhaps this is my engineering brain trying to figure out how things work. But I generally only feel comfortable with breaking the rule if I've consciously determined it to be a bad one, or me to be in an exceptional situation.
No one is likely to advocate establishing prohibitions on everything just to make people think harder before they do something. But nor is anyone likely to advocate removing all rules in order to provide everyone with the opportunity to think about the same. For a start, what guarantee is there that they will think about whatever moral principles might be at stake? And what if someone (such as a burglar) exercises his or her freedom to impose rules on other people?
A better answer is that we need to think when we design the system, which is surely what any good engineer or lawmaker strives to do. There are numerous examples of designers getting it wrong — but also many examples of designers getting it right, or at least better than not doing anything at all. Because refusing to design anything is surely abandoning our moral sense just as thoroughly as unthinking submission to someone else's design.
I've been reading quite a bit about Bitcoin and other anonymisation technologies over the past week or so, partly driven by the recent shutdown of an anonymous marketplace known as Silk Road. David Glance has a bit to say about Bitcoin, Silk Road and Liberty Reserve on The Conversation, while Jonathon Levin discusses possible directions for Bitcoin and Nigel Phair ponders likely replacements for Silk Road in the same venue. G. Pascal Zachary comes at similar issues from the point of view of surveillance in the October 2013 issue of IEEE Spectrum (p. 8).
Levin opens with a statement about Bitcoin enthusiasts and libertarians being confused by the slow take-up of what, to them, is a tremendous advance in anonymity and freedom from Big Bad Government. I don't know which, if any, specific libertarians are being referred to by Levin, but Levin's statement certainly seems consistent with traditional cyberlibertarian thinking that anonymity and secrecy are the path to the protection of rights and freedom.
Non-libertarians, of course, probably think more like Nigel Phair and G. Pascal Zachary, who accept that there are certain behaviours deemed to be illegal for good reason, and that law enforcement agencies must therefore have some sort of power to detect and arrest those who engage in those behaviours. Assuming that the non-libertarians aren't doing any of these illegal things themselves, they perceive somewhat less need for anonymity. For that matter, even libertarians agree that the state should enforce property rights and contracts, and one wonders if even they would be pleased with a technology that allowed anonymous miscreants to steal property and dishonour contracts.
Anti-surveillance commentators love to mock the surveillers' defence that "you've got nothing to worry about if you're not doing anything wrong", but the surveillers may be perfectly correct if they're referring to what the surveillers consider wrong. Why waste time persecuting behaviour with which one has no problem, after all? The problem is, not everyone agrees with the surveillers' vision of wrongness, and anti-surveillers fear persecution for behaviours that they consider acceptable, but which the surveillers consider wrong.
The dealing of drugs, identities and violence alleged to be taking place on Silk Road and its like probably doesn't do much for the anti-surveillers' case. Apparently Silk Road users really do have something to hide under the law of most countries, and I doubt many people are shedding a tear for those poor old criminal gangs who've just lost one of their meeting places.
Hal Berghel's take on PRISM in the July 2013 issue of IEEE Computer asks that politicians not take the "trust me" approach to defending government surveillance apparatus, in which politicians ask us to trust that said apparatus is only being used to apprehend genuine criminals. Simply hearing "trust me" is certainly dissatisfying. Said politicians need to prove their trustworthiness by demonstrating that, if you're not doing anything wrong, you really do have nothing to fear. But anti-surveillers have a similar problem: why accept a statement of "trust us" from a shadowy on-line marketplace any more than a statement of "trust us" from a shadowy government department?
The (Australian) ABC's news web site recently featured a radio discussion between two unidentified persons regarding anonymous publication of material on the Internet. I'm not familiar with the story that sparked the discussion, but the conversation caught my attention for two reasons. Firstly, one of the participants referred several times to classical computer hacker attitudes that I had thought had vanished, or at least been seriously marginalised, by the popularisation of the Internet. Secondly, the other participant noted that certain "rights" supposed to exist by such hackers (in this case, anonymity and taking any file available for download) do not actually exist in law.
My graduate certificate in communications had me studying a lecture that, in part, presented the romantic ideal of computer hackers as freedom-loving individuals bent on understanding, using and, if necessary, subverting computer technology for some greater purpose. I gather that many of the students were not particularly impressed with this portrayal, possibly because they identified "hackers" with virus-writers, identity thieves and spammers. While I don't think either the lecture or the original users of the word "hacker" intended it to mean "computer criminal", I also think it's very naïve to equate freedom with the power to use technology in whatever way one is capable of doing.
My own response to the lecture described the hacker mentality as a "might-makes-right philosophy that equates freedom with one's technological power to exercise it". Inspired by a related observation in David Brin's The Transparent Society, I postulated that competitions of technological power would, in fact, be won by well-resourced organisations rather than a few lone hackers.
Sure, classical hackers have won the occasional battle like reverse-engineering the Content Scrambling System for DVDs or jailbreaking iPods. But I'm pretty sure that Google, Apple, Microsoft and the rest ultimately have a far mightier influence over our electronic devices than Jon Lech Johansen, Richard Stallman or even Linus Torvalds. Meanwhile, the public's image of a "hacker" is largely informed by the kind of lawless computer whizzes they encounter most often: spammers, phishers, data thieves and authors of malware.
The law recognises this, and curtails rights like freedom of action and freedom of speech where, in the view of the law-makers, one person's exercise of those freedoms would interfere with someone else's freedom or well-being. So my freedom and ability to write e-mail software, for example, does not entail the right to e-mail fraudulent advertisements for Viagra to every e-mail address I can download.
Perhaps an honest-to-God cyberlibertarian would say that I should have the right to send whatever e-mail I like to whomever I like. But would he or she appreciate the same activity from Google, say, which possesses vastly greater reserves of information and software development skill than I?