I Don't Want To Be A Nerd!

The blog of Nicholas Paul Sheppard
Archive for October 2014

On embedding morality

2014-10-27 by Nick S., tagged as freedom, philosophy

I've just finished reading Evgeny Morozov's To Save Everything, Click Here (2013), which is something of a rant against what he calls "technological solutionism", or what I might otherwise call "techno-utopianism". Morozov is against a lot of things — so many and in such wide variety that it's hard to know what he is actually for — but one of them is technological systems designed to encourage or coerce good behaviour. Being a researcher in information security, the entire purpose of which might be said to be to coerce behaviour, I felt this idea required closer examination.

Morozov fears that deploying technological and psychological tools (he seems to find Richard Thaler and Cass Sunstein's Nudge (2007) at least as disagreeable as techno-utopians) that affect behaviour might rob humans of their moral responsibilities. Not only might such systems deprive humans of the ability to engage in civil disobedience, he imagines, but they might cause our moral sense to wither away altogether from lack of any opportunity to apply it.

Thaler and Sunstein themselves offer what I think is the most devastating critique of this line of reasoning: the designers of any system, technological or otherwise, cannot choose not to choose. The designer(s) of a system can make various things more or less difficult, or more or less prominent, or more or less valued, and so on, but they cannot design a system with no design. (And refusing to design anything is just accepting whatever choices are embodied in the status quo.)

Deep down, Morozov probably knows this, and he does make a few suggestions as to how he thinks certain systems might be improved. But what about the danger that our moral senses will atrophy through lack of exercise?

I heard a similar thought expressed in regard to digital rights management during a seminar in about 2009. The speaker (whose name I forget) told us that certain critics of digital rights management claim that it inhibits the moral expression of media users by not allowing them to decide for themselves whether or not to obey copyright law. This might sound noble enough, the speaker noted, but not many of us worry that the locks on our doors might inhibit the moral expression of burglars. Most people really do want to inhibit moral expressions that they deem harmful; they just disagree over what is harmful, or what is the most effective way of dealing with any particular harmful expression.

In any case, I was recently wondering if establishing a prohibition might exercise our moral sense just as much as (or even more than) not establishing one. When confronted with a rule that I don't understand, I ask: why does this rule exist? The answer may enlighten me about the point of view of the person who made the rule, or may cause me to suggest an improvement to the rule. Perhaps this is my engineering brain trying to figure out how things work. But I generally only feel comfortable with breaking the rule if I've consciously determined it to be a bad one, or myself to be in an exceptional situation.

No one is likely to advocate establishing prohibitions on everything just to make people think harder before they do something. But nor is anyone likely to advocate removing all rules in order to give everyone the opportunity to do that thinking for themselves. For a start, what guarantee is there that they will think about whatever moral principles might be at stake? And what if someone (such as a burglar) exercises his or her freedom to impose rules on other people?

A better answer is that we need to think when we design the system, which is surely what any good engineer or lawmaker strives to do. There are numerous examples of designers getting it wrong — but also many examples of designers getting it right, or at least better than not doing anything at all. Because refusing to design anything is surely abandoning our moral sense just as thoroughly as unthinking submission to someone else's design.

How critical can we expect computer users to be? (Part 2)

2014-10-22 by Nick S., tagged as experience, hackers

Shortly before I hit the "publish" button on my previous entry, I read through the November 2014 issue of APC Magazine (yes, in October). The feature article, How to Hack Everything (p. 29 ff.), espoused a view aligned with mine in that it encouraged computer owners to put in the effort required to understand and customise their devices, but also quite different in that such hacking was promoted as a way to "unlock extra features, better performance and more with these hardware secrets". Customisations of the latter sort are of great interest to computer technologists — especially when the article got to learning "how Wi-Fi and the Web are hacked" with packet sniffers, cross-site scripting and the like — but surely of little direct interest to anyone else.

This is all to be expected from a magazine whose main business is reviewing and investigating computer technology for readers with a high level of technical expertise. But it did cause me to pause before I published my entry, add "Part 1" to the title, and make a note to come back for a "Part 2" that contrasted the hacker's view of customisation with what I was imagining.

For APC and others wanting to assert the more mythologised meaning of the word, "hacking" is about understanding computer technology and bending it to one's will, more or less for its own sake. The goals of APC's hacks, for example, include overclocking CPUs, installing software on WiFi routers, and automating one's "online life" to no clear purpose.

Some of these, such as obtaining root access to smartphones, may be precursors to achieving something that is (or perhaps should be) of interest to ordinary people. I recently installed CyanogenMod, for example, in order to remove the numerous applications that my phone's manufacturer had pre-installed, but for which I have no use. Surely no one (apart from a phone manufacturer, I suppose) would say that such a situation is ideal: the folks who produced CyanogenMod, and the people who use it, need to employ a deep understanding of computer technology in order to achieve something that anyone can do using the standard install/de-install facility of a desktop operating system. (In fact, while searching for the reasons that phone manufacturers install these applications in the first place, I discovered that South Korea has recently issued guidelines forcing manufacturers to make almost all pre-installed apps removable, which may do more for ordinary users than any amount of hacking.)

So all this may be a means to an end, even if it's an awkward one used only because better means aren't available. (This is actually the sense in which I most often use the word "hack" in describing a piece of engineering.) But what is the end? Understanding how technology works is a fine thing for engineers, and I'm sure no one would complain if others understood something of it as well. But people ultimately build technology to be used, not merely to be understood.

With this in mind, I can refine my concept of "critical computing" to be concerned primarily with the use of technology rather than its construction. Hacking of the APC sort isn't incompatible with this, and is perhaps even complementary. But I don't expect that there will ever be a day in which we all build our own hardware and software, any more than we build our own cars and bridges. We can nonetheless think about how we choose and use the technology that engineers make available to us: do we blindly pick up the latest product and join the latest web site, or do we think through what we want from our devices and how to best achieve it?

How critical can we expect computer users to be? (Part 1)

2014-10-16 by Nick S., tagged as experience

I've been reading Michael Pollan's The Omnivore's Dilemma (2006) this week, in which Pollan investigates the way in which food is produced in America. Arguing that much of this food is produced in brutal industrial settings that are good for neither farmers nor animals nor the people who eat them, he calls on his readers to take a deeper interest in the way food is produced, and to look for qualities beyond the lowest price. The fastest way to end factory farming, he suggests, might be to require that feedlots and slaughterhouses be built with transparent walls because no one would want to eat anything from a factory farm after seeing what goes on there.

This doesn't have much to do with computing, but I nonetheless saw some parallels with what I ended up calling "critical computing" around the beginning of the year. Just as Pollan calls for eaters to better understand the origins and qualities of the food they are eating, I called for computer users to better understand their relationship with the technology they use.

The obvious problem with all this, of course, is that we each have a limited amount of time and resources to apply to improving this understanding. Perhaps it's all well and good for me, an experienced software developer, to customise my computing devices to meet my exact needs, but what about someone who doesn't have a degree in computer engineering and twenty years' experience with the things? Thinking about Pollan's call for me to take a comparable interest in the food I eat put me in a better position to answer a question of this sort.

Obviously I must have some interest in the preparation of food to have picked up Pollan's book in the first place. I cook my own food and I've grown a few herbs in pots on my balcony, but I have no plans to take up farming or to slaughter my own meat. When I think about following Pollan's suggestion, I think: how on Earth am I going to find out so much about what I eat, let alone take action on it from the highly urbanised locale in which I live?

Pollan goes to quite some effort to procure the food that he does, far beyond what I think almost anyone would find practical on a day-to-day basis, and I'm sure he himself would be among the first to acknowledge that there's no immediate prospect for a food chain free of factory farms and other industrial baddies. But that doesn't mean that the whole exercise is hopeless: eaters can make a decision to choose factory-free food whenever it is available — even if it costs a bit more — and eaters can put in a modest effort to seek out food-conscious farmers instead of uncaring industrial food conglomerates. And with continual modest effort, perhaps farmers and eaters (and maybe even conglomerates) can improve the food production system over time.

A comparable pursuit of understanding of computing would probably look very different — computers can't be made other than in factories, for a start, and they have no "natural" lifestyle bequeathed to them by evolution — but perhaps it's reasonable to ask for a comparable level of effort and continual improvement.

If an e-mail lies on a hard drive, does it make a sound?

2014-10-06 by Nick S., tagged as privacy

In recently attempting to expand my thoughts on what we imagine surveillers might do, I considered starting with the question: if an e-mail lies on a hard drive, does it make a sound? My purpose was to challenge the centrality of data collection in debates about privacy and surveillance. If data about someone is collected on a computer, but no human ever looks at it, is that person's privacy invaded?

I happened to be reading Steve Talbott's Devices of the Soul (2006) this week, which gives an answer in a chapter entitled Privacy in an Age of Data. Talbott argues that privacy is properly conceived as being a property possessed by a person, and that the privacy of data is therefore meaningless or at least beside the point. He goes on to say that

the ideal of privacy gains substance only in those primary contexts where we know each other well enough to care (p. 233; emphasis in original).

Read in the context of the question above, I take this as a "no".

Talbott takes a somewhat mystical view of humanity, and elsewhere lambasts scientific materialists like Richard Dawkins and Rodney Brooks as "reductionists" for holding that humans are made up of chemicals. In this view, maybe machines can never invade privacy because they don't "care". But, shorn of any mysticism, I take the point to be that privacy only has meaning amongst entities that interact with each other and can make choices about the relationship. Mere knowledge of someone else, without any capacity to have an effect on that person, is simply data.

This tallies with what I experience when I read sordid news stories. Given that I've never met the people involved, nor am I likely to, I simply take on board the information that people did the things that they're reported to have done. Not because I'm a machine (even if Rodney Brooks et al. are right), but because I have no relationship with those people and no direct capacity to either influence them or be influenced by them. But I doubt that I would react the same way if someone I knew was involved in the same sorts of activities.

But do the subjects of sordid news stories, being on the other side of the experience, feel the same way? I read in the Sydney Morning Herald this weekend that Britain's Prince William and Kate Middleton have accused two photographers of "surveilling" their son (Paparazzi warned off pursuing George, 4 October 2014, p. 26). Prince George presumably isn't involved in anything more sordid than dirty nappies, but his parents clearly aren't happy with some of the attention he's been getting.

I'm not a royal-watcher and I can't speak for what kind of relationship royal-watchers think they're in with the family. The royals themselves, I suppose, are in some sort of relationship with the public or at least the media, and perhaps this relationship is the source of their frustration. They are, after all, affected by the public's and the media's treatment of them. (I've often wondered what I'd feel like upon reading about myself in the news but have never had the opportunity to find out.)

Getting back to my hypothetical e-mail, one can imagine a computer system that collects e-mails but takes no action unless a human explicitly asks for it. In fact, traditional e-mail systems work something like this, and I've never heard anyone complain that their privacy has been invaded by an SMTP or IMAP server. I doubt that even Google or the NSA pays human voyeurs to dig through the stuff that they collect.
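To make the distinction between collection and action concrete, here is a minimal sketch in Python. It is purely illustrative (the MailSpool class and its store and fetch methods are names I've invented for this post, not part of any real SMTP or IMAP server): the system writes every message it receives to disk, and nothing is ever read until a human explicitly asks.

    import os
    import time

    class MailSpool:
        """A toy message store: it keeps everything it is given,
        but never reads, analyses or acts on any of it by itself."""

        def __init__(self, directory="spool"):
            self.directory = directory
            os.makedirs(directory, exist_ok=True)

        def store(self, recipient, message):
            # Collection without action: the message just lies on the disk.
            filename = os.path.join(self.directory,
                                    f"{recipient}-{time.time_ns()}.eml")
            with open(filename, "w") as f:
                f.write(message)

        def fetch(self, recipient):
            # Nothing happens to the stored data until someone calls this.
            messages = []
            for name in sorted(os.listdir(self.directory)):
                if name.startswith(recipient + "-"):
                    with open(os.path.join(self.directory, name)) as f:
                        messages.append(f.read())
            return messages

    spool = MailSpool()
    spool.store("alice", "Subject: hello\n\nDoes this make a sound?")
    # Until fetch() is called, the message is only data on a hard drive.
    print(spool.fetch("alice"))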

Of course Google and other ad-supported services do take action on the data they collect; they use it to select the advertisements to be shown to each user. Intelligence agencies use selected information to pursue investigations and make arrests. I'm nonetheless pretty sure that the computer systems involved don't "care" in any human sense, but I'm also sure that critics would say that this is not the point. So does my e-mail make a sound?