Continuing the adventure of train timetables from my previous entry, I recently asked a similar group of friends if anyone could give me a lift to the nearest railway station. I was astonished to find that none of them appeared to know the location of the railway station, even though all of them drove right past it on their way home.
I suppose that my own perspective is biased by my preference for travelling by foot and public transport: my car-driving friends might be similarly astonished that I can't comprehend the desire to live in or visit a place like XXXX Heights, which has never seen a bus or train, and which I think might as well be on the moon.
Nonetheless, I've long felt that car-dependent people have a somewhat less intimate understanding of geography than those of us who take the time to walk through it. I get a similar feeling after travelling by underground train in cities like London and Tokyo, where I'm effectively teleported from one part of the city to another without any experience of what lies between.
More generally, it seems that technology-dependent people have a less intimate understanding of the world around them -- at least in the sense that they don't experience it directly, even if people with modern educations have a better theoretical understanding of physics, biology, etc. than their Stone Age forebears.
This seems like a bad thing, but it is surely inevitable: no one person could experience everything we know about physics, chemistry, biology, psychology and all the rest. My mode of travel aside, I don't farm my own food, prepare my own medicines, or even build my own computers.
An early chapter of Sherry Turkle's Life on the Screen talks about people with "transparent" and "opaque" views of computers. People with transparent views are interested in how computers work, while those with an opaque view are only interested in what they can do. Before reading Life on the Screen, I sometimes characterised non-technical people as having a "magic box view" of computers.
I found Turkle's discussion refreshing in that she doesn't hold one view superior to the other: computer nerds might consider the opaque view stupid and ignorant, while woollier minds might disparage engineers as boring and inhuman. I've since come to think of good engineering and design, at least in part, as using the transparent view to enable the opaque view (or to enable the "magic" in my former terminology).
Perhaps the transparent view of technology (and geography) provides a more direct and complete experience of the world, and the person who took an opaque view of everything would surely be a supremely ignorant and uninquisitive one. But it clearly isn't practical to take a transparent view of everything all the time, and the opaque view is a very practical one.
I've long carried a printed timetable for the railway line that I use most often. A couple of weeks ago, I pulled it out in order to check when the next train home left.
The friends that I was with asked me why I didn't have a more modern appliance for doing such things. I said, "It works and it's free." I then went on to my standard explanation that, as someone who sits at a computer all day at work and who also has a computer at home, I don't feel the need to have one while I'm walking around the place as well. (I don't actually sit at a computer all day now that I'm a teacher, but this explanation comes from when I worked as a programmer.)
One of the friends reminisced about the days in which she had a complete collection of printed timetables for Sydney's rail network. I have no doubt that an electronic device containing all of this information would be more convenient than such a collection, and one of the few mobile apps I've seen that actually seemed interesting to me is one that provides timetables for public transport in various cities (including Sydney).
Still, I have no plans to replace my printed timetable. For one, it does work quite well for all of the routine trips that I take, and it is free, which cannot be said about mobile devices and mobile data plans. I use the CityRail and TransportInfo sites from my home computer to plan non-routine trips, but I find these sites to be a little clumsy compared to looking up my printed timetable for routine trips.
More importantly, perhaps, I also enjoy the challenge of working out the most efficient public transport route for myself. For me, an app that works out how to get from A to B would be like an app that solves crosswords or games of patience: efficient, maybe, but not very entertaining.
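An app like that is, at heart, solving a shortest-path problem. As a toy illustration only (the stop names and travel times below are invented, and a real journey planner would also account for timetables and transfers), Dijkstra's algorithm finds the quickest route through a network of weighted connections:

```python
import heapq

def shortest_route(graph, start, goal):
    """Find the quickest route between two stops using Dijkstra's
    algorithm. `graph` maps each stop to a list of
    (neighbour, minutes) pairs."""
    # Priority queue of (total_minutes, stop, path_so_far);
    # heapq always pops the route with the smallest total time.
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        minutes, stop, path = heapq.heappop(queue)
        if stop == goal:
            return minutes, path
        if stop in seen:
            continue
        seen.add(stop)
        for neighbour, cost in graph.get(stop, []):
            if neighbour not in seen:
                heapq.heappush(queue, (minutes + cost, neighbour, path + [neighbour]))
    return None  # no route exists

# A hypothetical miniature network for illustration.
network = {
    "Home": [("Station", 10), ("BusStop", 5)],
    "BusStop": [("Station", 12)],
    "Station": [("City", 25)],
}
print(shortest_route(network, "Home", "City"))  # (35, ['Home', 'Station', 'City'])
```

The direct walk to the station (10 minutes) beats the bus detour (5 + 12 minutes), which is exactly the kind of comparison I enjoy doing in my head.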
The Conversation recently published a couple of articles on "Massive Open Online Courses" (MOOCs), firstly by David Glance and later by Simon Marginson. These courses, created and made available by various prestigious universities in the US, are supposed to enable any willing student to undertake a subject for free and obtain a "statement of accomplishment", or similar document. Both Glance and Marginson appear to believe that MOOCs are "disruptive", and wonder if such things will mean the end of university education as we know it.
As someone employed to teach in a university, I might, of course, have little to look forward to if I'm to be replaced by on-line courseware produced by teachers with far more prestige than mine. As Gavin Moodie documents in his comments on the above articles, however, various sorts of courses have been made available before -- all the way back to textbooks -- and people like me are yet to be replaced.
Much of what I read about technology in education, particularly at the pop level, says a lot about technology and not much about education. I consequently found a lot to like in Tony Harland's critique of the supposed rise of a "net generation" amongst students and the resulting technophilia in Chapter 6 of his recent book University Teaching: An Introductory Guide. Harland quite rightly points out that it is highly unlikely that "students have undergone rapid evolution into some new type of hominid" whose learning needs are radically different from those of students of previous generations.
Why teach when I could be developing software?
Applying for lecturing positions, I've sometimes found myself responding to selection criteria like "An interest in developing the use of new technologies and approaches in teaching and learning" (this particular example comes from a position description for a Lecturer in Computer Science at Charles Sturt University in 2011). At the risk of making myself unemployable at such institutions, I'll admit to feeling unsure of how best to answer criteria that seem to me to make technology an end in itself.
Of course I use technology in my teaching where appropriate technology is available and I believe it will help the students learn or meet the administrative needs of the university. I'm sure I'd have a very tough time teaching programming without a computer lab. I've even had a thought or two about how I might write some software to provide some useful teaching tool. But, as a teacher, am I supposed to be focusing on the development of technology, or the development of teaching?
A few years ago, I went to a research seminar presented by a mathematician friend of mine, in which he took the now-extraordinary step of writing out his material on a whiteboard instead of bringing a computer pre-loaded with slides. I thought it worked fantastically well: I, at least, find it much easier to follow mathematics by watching someone write it out line-by-line rather than being confronted with a slide full of equations.
Should my friend be chastised for failing to develop technology to support his presentation, or for ignoring disruptive trends in presentation technology? Or did he just use the best technology for the job?
I'm still thinking about Future Imperfect. Friedman appears to be broadly optimistic about our chances of successfully negotiating all of the technologies that he discusses. He clearly believes in the power of the market to resolve resource shortages, for example, and he happily points out the unfulfilled predictions of The Population Bomb and Limits to Growth.
Technological optimists might well feel justified by a history of "so far, so good". Despite the creation of potentially catastrophic technology like nuclear weapons, and numerous localised mishaps like oil spills and factory explosions, the human race is still here. And, while occasional shortages of commodities might cause temporary price spikes and the world is right out of dodos, our material wealth continues to increase.
"So far, so good" is a fairly shallow analysis, though. I recently read a joke about an economist falling from an aeroplane without a parachute. The economist has no fear because "the market will provide a solution". Perhaps it will, given that markets have provided solutions in the past, but does this tell us much about who, specifically, is going to realise that a parachute is necessary, and by what mechanism the parachute is to be created and delivered in time to save the economist from an abrupt end on the ground?
Here, perhaps, is a job for pessimists. Predictions of doom can fail to materialise because, being made aware of the danger, we can change our behaviour. Why haven't nuclear weapons made the world uninhabitable? Because we saw how much destruction they could cause and refrained from using them. Why hasn't the global ecology collapsed? Because we realised that cutting down every tree and eating every fish means there won't be any trees or fish left.
Predictions of global catastrophe and the collapse of civilisation no doubt attract much more attention than sober and careful examination of the dangers that attend some technology. Perhaps doom-sayers have a bad reputation as a result. Yet I think few people would say that blind optimism alone has ensured that "so far" has been "so good".
I sat on this blog for a very long time before writing this, its first entry. I drafted the "About the Blog" page back in April, when I first had a bit of time and thought the time might have come to start a blog. But I didn't have any immediate idea for things to write about, and my teaching load shortly increased so that I didn't have so much spare time to fill. So the blog was stillborn.
It's a new semester now, and this weekend I happened to read David D. Friedman's Future Imperfect. The recent paperback edition happened to be on display at my local library, and there are few enough books on this topic at the library that I feel I might as well read them all.
I gather that the book is intended as a discussion-starter for a university course in the legal implications of technology. I've read enough literature in this area to have seen all the same topics before, so the essays didn't hold much new for me. Nonetheless, it seemed like an opportune time to start on the blog.
When I read futurological discussions like those in Future Imperfect, I often ask myself: is there any point in speculating about this stuff? Aside from the science fiction fun to be had, is it really possible to form a meaningful view of how society or the law should treat technologies that we not only don't know how to build, but don't even know what their precise capabilities and limitations might be? Imagine engineers and lawyers of the Victorian era speculating about the appropriate setting of speed limits for motorised vehicles, without any knowledge of how a car is operated, how it functions, or what it is capable of!
Of course some technologies are more predictable than others: the cryptographic technologies discussed in the first few chapters of Future Imperfect already exist (and Friedman's discussion is arguably already a bit dated), and we're probably near enough to genetic screening to be able to -- and, arguably, ought to -- have a meaningful discussion about it. But why even ask what rights should be accorded to human-like artificial intelligences when we don't even have a meaningful definition of "intelligence", let alone any machines that even vaguely approach human capabilities of speech, creativity and emotion?
If you get your speculation right, you might be hailed as visionary in fifty or a hundred years' time. If not, you can disappear amongst the plentiful ranks of those who thought that heavier-than-air flight would never work, that we'd now be working just 15 hours per week, or that we'd be living in underwater cities. Either way, it won't make any immediate difference to you or anyone else.
Of course, some speculative questions generate more immediate questions, like "what is intelligence?" and "what is the purpose of creating human-like artificial intelligence when we have seven thousand million human intelligences already?". (Curiously, perhaps, I don't think I've ever heard the latter question asked.)
I don't suppose there's any hard-and-fast way of determining what's a meaningful and important question that needs answering in order for us to be adequately prepared for technological developments, and what's idle (if potentially amusing) speculation about the unknowable. So I won't tell Friedman's students to stop wasting their time. If nothing else, they're probably getting some good philosophical exercise.