Kat Krol and Sören Preibusch discuss "effortless privacy negotiations" (pp. 88-91) in the May/June 2015 issue of IEEE Security & Privacy. In doing so, they (inadvertently) address some of the questions I wondered about in an article for The Social Interface last year — most notably, whether or not people would be willing to pay for services of the sort now provided by advertising, if it meant that they could obtain the services without handing over data to advertisers.
According to the research cited by Krol and Preibusch, most people would not, but a significant number of people would. I think I suspected as much when I wrote my article, but Krol and Preibusch propose a slightly different (but perhaps complementary) explanation for why they wouldn't: most people value the tangible and immediate gain of access to a service more than the nebulous and future risks of handing over private data.
In the same issue of Security & Privacy, Angela Sasse scolds security nerds for "scaring and bullying people into security" (pp. 80-83) with fearsome dialogues intended to warn people of the risks — again, mostly distant and nebulous — that they face in clicking on links that don't meet the approval of the security community. The same might be said of privacy nerds who demand that privacy policies be read and rejected if readers can imagine misuse of the policy.
Whatever the explanation for people who won't pay, those who would pay might wonder: where do I go if I want to search the web or join social media, but I don't want the ads? None of Google, Facebook, or Twitter will take my money!
Krol and Preibusch mention one (experimental) solution from Google, for whom Preibusch works: Google Contributor. According to Contributor's web page, subscribers to the service will see "pixel patterns" or "thank you messages" instead of ads on participating web sites. (This sounds a bit kludgy, but I guess it's a start.) The article itself, however, focuses on negotiation between users and service providers.
I've seen proposals for negotiating privacy settings before, but never found them particularly convincing: why would anyone agree to anything other than handing over the minimum amount of information required to get the job done? Krol and Preibusch identify the point I was missing: the participants need to negotiate not just the privacy settings, but the service they get in return for them. So those who'd rather pay than see targeted ads, for example, could negotiate untargeted service in return for a subscription. (This might not just be about privacy: my main objection to advertising isn't that I'm worried about the data collection involved, it's that I find it irritating.)
The title of Krol and Preibusch's article identifies the obvious weakness in all this negotiation: it takes a lot of effort to both provide and use such a flexible service. Of course, reading and understanding current privacy policies requires a fair bit of effort too, which is partly why they remain largely unread and poorly understood. (The other part is that the reader can't do anything about them anyway, for which negotiation might offer some remedy.)
Still, well-designed computer systems can take a lot of the effort out of things that might otherwise be tedious and time-consuming. Krol and Preibusch don't describe any particular solutions; their article is more of a call to arms. I don't know if negotiation is the solution — I'm at least as interested in Google Contributor, which has the advantage of existing — but Krol and Preibusch have at least renewed my interest in something I'd previously dismissed.
I've already written one entry inspired by a recent Conversation article in which Graham Murdock suggested that "surveillance threatens us with a new serfdom". It's not an easy article to understand, and I'm still uncertain if he is trying to cover too much in too little space, or has just mashed choice bits of history, politics and modern technology into an incoherent fantasy of totalitarian government. Whatever Murdock's intent, an alien reading the comments on the article would be certain that Australia and countries like it are totalitarian states.
I dithered for a while over whether I'd bother to write a comment of my own, in part because I wasn't sure I understood Murdock's point, in part because I wasn't sure I had anything (new) to say, and in part because I feared that questioning anti-surveillance rhetoric would have me perceived as a champion of totalitarian surveillance. The last motivation is the most interesting to me now, and ultimately led me to decide that I should comment by way of accepting my own criticism of the idea that secrecy protects us from discrimination. The same might be said of my previous blog entry, which contained a few rhetorical questions that I can imagine being answered with contemptuous and/or incredulous rants from anti-surveillance commenters and cyberlibertarians.
As it turned out, no one replied to my comment at all, so I either didn't offend as many people as I feared, wasn't as interesting as I'd hoped, or didn't make enough sense of my own. So what did I have to fear, if not ridicule from commenters whom I've dismissed as ranters and fantasists anyway?
In working through my previous blog entry, I came to realise that a large part of my difficulty came from trying to confront anti-surveillance rhetoric on its own terms, in which "surveillance" is presumed to imply arbitrary discrimination and persecution, and "privacy" is presumed to imply freedom. But the whole point of my critique is that this view is confused and unhelpful, not to mention absurd if its adherents really hold that Australia is a totalitarian state or anything close to one.
One cure might then be to eschew terms like "surveillance" and "discrimination", and instead draw on terminology developed within a more nuanced worldview. Of course I can't make everyone else adopt whatever terminology or worldview I choose, certainly not within the scope of a comment on an article. But this challenge does encourage me to think carefully about how I present my ideas.
Responding to Murdock's article, I cobbled together something about control of information, which is a bit of a mish-mash of an idea that appeared (somewhat vaguely) in the article, and the view of privacy as being about use of information that we adopted when I was developing experimental privacy protection systems. I'm ultimately not all that happy with this response, though I hope I at least indicated that "privacy", "secrecy" and "liberty" might not have quite the straightforward relationship that anti-surveillance rhetoric supposes.
In responding to a recent Conversation article on surveillance, I drew an analogy between the wearing of ninja costumes and secrecy-centred approaches to privacy. I've used this analogy several times elsewhere on this blog, but writing on The Conversation — where even comments probably receive far more attention than this blog — forced me to focus on the quality of the analogy.
In trying to portray rants about surveillance as simplistic and beside the point, I worked out a brief description of a world in which we used ninja costumes and ID numbers to prevent anyone finding out about us. I later wondered, off-line, if the imagined world was not just as absurd as I intended it to be, but also just as bad for freedom as the totalitarian state being imagined by the author of the article (Graham Murdock) and most of the other commenters. On the face of it, feeling compelled to wear ninja costumes and answer to ID numbers sounds very much like the dehumanised totalitarianism that Murdock and commentators say they fear.
Of course surveilling electronic networks doesn't work quite the same way as surveilling the streets: the kind of information involved is quite different, and computers can process and record much greater quantities of information than street-walking spies. But nor is it entirely different: both forms of surveillance can support the kind of arbitrary discrimination that anti-surveillance rants presume to be the goal of surveillance systems, and both can be combatted by a tell-nobody approach.
Re-reading my previous blog entries on privacy, I realised that I'd come across the key point in a Conversation article from Ashlin Lee and Peta Cook: a large part of freedom concerns the freedom to express oneself, and it is exactly this freedom that would be threatened by the ninja-costume state. Sure, the government would be unable to persecute any of its ninja-citizens, but no one would be able to do what they wanted to do anyway (unless all they wanted to do was to dress as ninjas).
Now suppose I encrypted and anonymised all of the entries in the blog, and all of my comments on the Conversation, just to make sure that no government or corporate overlords could pick me up in a crackdown on people with English names, long hair, or sceptical views on technology boosterism. I'd be free to have all the views I liked, but no one would be able to read them (they're encrypted), and no one would know that I'm a person who identifies with these characteristics. Is this a freedom worth having?
I suppose that critics of Lee and Cook's idea might argue that they want to express themselves to certain chosen people, but not to the world at large for fear of embarrassment or persecution. I can see a certain amount of pragmatic appeal in this position under various circumstances, but how does one identify those chosen people in the first place? And would not limiting our expression to only a select few like-minded fellow citizens leave us in a filter bubble from which we were unable to see perspectives other than our own?
Or perhaps we'd like to express ourselves to the world at large, but have the government and corporations politely ignore us. But, again, how do we choose those whose attention we want to attract and those by whom we wish to be politely ignored, and how do those parties know what we want of them? And, come to think of it, do we really want a government that ignores us anyway?
Imagine that, every time you bought an item of food, you were expected to peruse the grower's and/or cook's "edibility policy" to determine whether or not it was up to your personal standards of non-poisonousness. (People with allergies do do something like this, and I don't envy them.) Personally, I much prefer the system of regulation by which eaters can trust that all food offered for sale is edible.
I suspect that most of us are hoping that privacy works much the same way when we click through privacy agreements: we presume that any reputable company is only going to use data in ways that we'd expect. Maybe they actually do, most of the time, but no one would ever know because it's buried in legalese.
Not long after reading Evgeny Morozov's complaints about more or less everything technological, I happened to pick up Dave Eggers' latest novel, The Circle (2013), which features a utopian-minded Internet corporation dedicated to exactly the kind of "technological solutionism" that Morozov derides. One of the most prominent features of said company is "transparency", something like that envisaged by David Brin in The Transparent Society (1998), in which everyone's activities are open to an unstoppable wave of recording devices.
Eggers has his company promote transparency as the ultimate weapon for the accountability of public figures (and, eventually, everyone else). Morozov, however, asserts that such transparency leads to public decision-making becoming bogged down in the pursuit of trivial misdemeanours. I certainly found myself wondering if Eggers' characters could ever achieve anything given the amount of time they spend on commenting on each other, and perhaps Eggers in part intended to make us wonder exactly this. Morozov even goes so far as to claim that certain amounts of duplicity and hypocrisy are necessary to public decision-making, though I can't recall him giving any specific examples.
I doubt that many people would find the world of The Circle very appealing. For one, many would likely recoil in horror at the thought of being subject to some of the revelations made public about its characters, and at the unthinking vigilantism that sometimes follows. Many would also be very disturbed by the amount of power ceded to the private company at the heart of The Circle (though one might take the point of The Circle to be that we are presently handing this sort of power to real Internet companies of our own free will).
I nonetheless found a few things to question about The Circle's and Morozov's portrayals of transparency. For a start, current public debate, at least in Australia, is hardly a model of nuanced thinking and intellectual rigour, and one wonders if a transparent society would actually have any depths of triviality left to plumb.
Eggers and Morozov both seem to neglect the possibility that trivialisers and vigilantes would themselves be watched and criticised. I don't suppose that the public and the media outlets that serve them will leave off their pursuit of triviality after being scolded by the scholars that watch them — plenty of scholars have already done plenty of scolding — but those who make decisions do nonetheless have the choice to listen to the scholars rather than the trivialisers. And debates over the behaviour of public figures in venues like The Drum and The Conversation suggest to me that there are, indeed, watchers prepared to argue both sides.
I consequently wondered: is The Circle's problem transparency per se, or the trivialisation, discrimination and point-scoring to which people apply it? After I'd been studying privacy seriously for a while, I came to suspect that the privacy debate was bogged down in debating the collection of data, a debate that can only lead to absurd extremes of either transparency (from pro-surveillers) or opacity (from anti-surveillers). If we were to be confronted by a real-world Circle — and some might argue that we already are — is the solution secrecy, or a bit of maturity?