I Don't Want To Be A Nerd!

The blog of Nicholas Paul Sheppard
Posts tagged as philosophy

Summary and Conclusion

2015-10-15 by Nick S., tagged as philosophy

I've gone two whole months now without adding any new entries, partly because I've had a busy semester and partly because I haven't come upon any topics on which I've felt I had anything new to say. I'm also about to take up a new position, at the Singapore Institute of Technology, which will mean a change in country and in my day-to-day work. In view of all this, I've decided it's time to close the blog.

I don't mean to forget about the things I've written about here, and I hope I'll be able to continue thinking and writing on them in future. But I expect that the new position will give me plenty to do in other areas as well, and it may take me some time to settle into a new pattern of endeavour. If I do return to blogging, or any other form of writing, I expect I'll be doing it as a faculty member at the Singapore Institute of Technology.

Writing the blog has refined my thinking on a lot of topics, led me to some interesting ideas, and spun off a couple of articles for The Social Interface and The Conversation. There's a list of categories (as my blogging software calls them) on the right-hand side of the screen, but I'd also like to note a few of the major points here, in no particular order, along with links to some of the entries that have had the most impact on my thinking.

The computer industry is not special. Computer technology is just one of many modern and ancient technologies that can empower us and make our lives more comfortable, and its industry has no claim to special favours from other industries, or to special exemptions from economics, society or law. I've most often written about its interaction with the creative industries, which are frequently portrayed as greedy and/or clueless stick-in-the-muds frustrating an imagined right of helpless consumers to have entertainment delivered to their computers on their own terms.

Critical computing. Both techno-utopian and techno-dystopian narratives portray humans as helpless tools of computer technology, one for the better and the other for the worse. In reality, we do have choices about the way that we interact with computing devices, but it is easy to forget to exercise them, whether because we're besotted with the latest fad or pursuing a fast-and-easy route that ends in superficial simulacra of what we actually want to achieve. We need to put effort into learning what our devices can and can't do, and how best to apply them to our needs.

Privacy is not secrecy is not freedom. The freedoms enjoyed by the citizens of liberal democracies are protected by pluralistic societies and a rule of law upholding the values of such societies. The freedom to be oneself in secret is not freedom at all; both states and citizens are accountable for what they do; and much ink continues to be spilt indulging fantasies of totalitarianism that make little contribution to the debate.

Where to now?

While I'd like to think that this blog and its spin-off popular articles have gone a small way towards the kind of contribution meaningful to people outside computer science that I wrote about when I created the blog, I've yet to publish a peer-reviewed article or anything with similar kudos. I still like to think I could do this, and I have a few ideas drafted, but I expect I'll take some time to settle into my new position before pursuing them further.

I also hope to continue practising computer technology, not just writing about it. Reading some reviews of the latest technology in a recent edition of APC Magazine, I was struck by how many of the products seemed like toys for the wealthy, likely to make only a marginal difference to the quality of life of people who can afford to buy things like augmented-reality goggles and high-definition televisions. But of course the products mentioned were only a small subset of all the products out there, and I'd like to think that there are less banal uses for technology that I could turn my hand to.

And so here I hang up my keyboard, until that next article or program.

How can engineers approach a race against the machine?

2015-07-19 by Nick S., tagged as dependence, philosophy

Not long after struggling with how to approach work and automation last month, I happened to pick up Nicholas Carr's The Glass Cage (2014) and Erik Brynjolfsson and Andrew McAfee's Race Against the Machine (2011), which cover some of the same territory. My perspective is similar to Carr's in that we both acknowledge that machinery has brought us many benefits — I even make my living from building more machines and teaching other people to do the same — but remain nonetheless wary of the uncritical adoption of machines that at first seem handy helpers, but ultimately prove to be inadequate replacements for human skills and/or straitjackets from which we cannot extricate ourselves.

So what should we be automating, what should we be leaving alone, and how do I reconcile my profession with the possibility that the machines I build will transfer wealth and dignity from the people who used to do the work, to the owners of the machines? As Brynjolfsson and McAfee note, the orthodox economic view is that new jobs have appeared to replace the automated ones — and we've done pretty well by this in the long run — but there's no known principle of economics that assures us that this will proceed always and forever.

The first principle that occurred to me was to recommend that we adopt machines only when they enable us to do things that could not have been done without them: new technologies must be more than faster ways of performing existing work. This also fits with my doubts about the pursuit of fast and easy as a path to satisfaction.

This principle has at least one flaw, obvious to anyone familiar with arguments in favour of economic growth: automating a specific task that could be done by a human may free that human to do something that he or she couldn't do before for lack of time, energy or resources. The orthodox view I mentioned earlier depends on exactly this kind of process. For this reason, I don't think the principle could be sensibly applied on a task-by-task basis.

Nonetheless, the principle gets at what we surely want from machines in general: why bother with them if they simply leave us doing the same things as before (even if we can do them faster)? What's more, Carr points out that being "freed up" isn't much consolation if it means being unemployed and without access to resources that might enable the victim to make use of their notional freedom.

That I can't apply the principle on a task-by-task basis, however, makes pursuing it very difficult: I have no way of determining the worth of any particular engineering project in light of it. (Not that I often get to make such determinations: my need to pay my bills means that what I do is dictated as much by what other people are willing to pay for as by my private views of what would make the world a better place.) Perhaps the principle isn't hopeless, but it requires a better formulation than what I'm able to come up with at the moment.

On embedding morality

2014-10-27 by Nick S., tagged as freedom, philosophy

I've just finished reading Evgeny Morozov's To Save Everything, Click Here (2013), which is something of a rant against what he calls "technological solutionism", or what I might otherwise call "techno-utopianism". Morozov is against a lot of things — so many and in such wide variety that it's hard to know what he is actually for — but one of them is technological systems designed to encourage or coerce good behaviour. Being a researcher in information security, the entire purpose of which might be said to be to coerce behaviour, I felt this idea required closer examination.

Morozov fears that deploying technological and psychological tools that affect behaviour (he seems to find Richard Thaler and Cass Sunstein's Nudge (2008) at least as disagreeable as techno-utopians) might rob humans of their moral responsibilities. Not only might such systems deprive humans of the ability to engage in civil disobedience, he imagines, but they might cause our moral sense to wither away altogether from lack of any opportunity to apply it.

Thaler and Sunstein themselves offer what I think is the most devastating critique of this line of reasoning: the designers of any system, technological or otherwise, cannot choose not to choose. The designer(s) of a system can make various things more or less difficult, or more or less prominent, or more or less valued, and so on, but they cannot design a system with no design. (And refusing to design anything is just accepting whatever choices are embodied in the status quo.)

Deep down, Morozov probably knows this, and he does make a few suggestions as to how he thinks certain systems might be improved. But what about the danger that our moral senses will atrophy through lack of exercise?

I heard a similar thought expressed in regard to digital rights management during a seminar in about 2009. The speaker (whose name I forget) told us that certain critics of digital rights management claim that it inhibits the moral expression of media users by not allowing them to decide for themselves whether or not to obey copyright law. This might sound noble enough, the speaker noted, but not many of us worry that the locks on our doors inhibit the moral expression of burglars. Most people really do want to inhibit moral expressions that they deem harmful; they just disagree over what is harmful, or over the most effective way of dealing with any particular harmful expression.

In any case, I was recently wondering if establishing a prohibition might exercise our moral sense just as much as (or even more than) not establishing one. When confronted with a rule that I don't understand, I ask: why does this rule exist? The answer may enlighten me about the point of view of the person who made the rule, or may cause me to suggest an improvement to the rule. Perhaps this is my engineering brain trying to figure out how things work. But I generally only feel comfortable breaking the rule if I've consciously determined it to be a bad one, or myself to be in an exceptional situation.

No one is likely to advocate establishing prohibitions on everything just to make people think harder before they do something. But nor is anyone likely to advocate removing all rules in order to provide everyone with the opportunity to think about the same. For a start, what guarantee is there that they will think about whatever moral principles might be at stake? And what if someone (such as a burglar) exercises his or her freedom to impose rules on other people?

A better answer is that we need to think when we design the system, which is surely what any good engineer or lawmaker strives to do. There are numerous examples of designers getting it wrong — but also many examples of designers getting it right, or at least better than not doing anything at all. Because refusing to design anything is surely abandoning our moral sense just as thoroughly as unthinking submission to someone else's design.

On the myth of the machine

2014-03-19 by Nick S., tagged as philosophy

I've recently been reading a bit about science vs humanities, having worked my way through Neil Postman's Technopoly (1993), Joseph Weizenbaum's Computer Power and Human Reason (1976), Lewis Mumford's The Myth of the Machine (1967, 1970) and finally something of a rant about the alleged STEM crisis from Hal Berghel in the March 2013 issue of IEEE Computer (pp. 70-73). Each complains about what its author sees as a "mechanisation" (as suggested by Mumford's title) of society, driven by a narrow pursuit of economic efficiency and technological progress at the expense of real human interests.

I've never quite understood some of the antagonism that seemed to exist between disciples of the sciences and the humanities around the middle of the twentieth century, and arguments over the merits of quantitative vs qualitative research. Maybe everyone was over it by the time I began studying for my undergraduate degree in the 1990s, having finally accepted that there are many interesting fields of endeavour and many valid approaches to research with their own strengths and weaknesses.

Like a lot of other scientists and engineers, I have considerable affinity for hierarchical reductionism, in which any particular system is studied and explained in terms of its immediate sub-components. So biology is explained in terms of biochemistry, which is explained in terms of chemistry, which is explained in terms of physics, for example. As far as any credible science can tell, humans are indeed made up of sub-atomic particles and the forces that act on them, but hardly anyone supposes that sub-atomic physics is an effective tool for describing or understanding, say, art or politics. At the same time, to claim that humans somehow transcend or defy the laws of physics is a likely recipe for bullshit.

The real problem for humanists, perhaps, is that few people feel the need to hire historians, philosophers and art critics in the way that they hire accountants, physicians and engineers. Yet almost everyone is interested in art and history to some degree, and meaningful participation in society surely requires some knowledge of that society's culture, history, philosophy and much else besides. In a sense, we're all amateur humanists, but we leave science and engineering to the professionals. Consequently, the humanities become invisible to narrow economic analyses that track only the transfer of material wealth from one person to another.

The real enemy here is narrowness, whether it be an economist's pre-occupation with material wealth, an engineer's pre-occupation with machines, or a humanist's pre-occupation with soul. If you want to achieve some narrow task, a machine is indeed likely to be an excellent tool for performing it efficiently and well. But who wants to be a machine?