I Don't Want To Be A Nerd!

The blog of Nicholas Paul Sheppard

Opening gambit

2012-08-05 by Nick S., tagged as prediction

I sat on this blog for a very long time before writing this, its first entry. I drafted the "About the Blog" page back in April, when I first had a bit of spare time and thought the moment might have come to start a blog. But I didn't have any immediate ideas for things to write about, and my teaching load shortly increased so that I no longer had much spare time to fill. So the blog was stillborn.

It's a new semester now, and this weekend I happened to read David D. Friedman's Future Imperfect. The recent paperback edition happened to be on display at my local library, and there are few enough books on this topic at the library that I feel I might as well read them all.

I gather that the book is intended as a discussion-starter for a university course in the legal implications of technology. I've read enough literature in this area to have seen all the same topics before, so the essays didn't hold much new for me. Nonetheless, it seemed like an opportune time to start on the blog.

When I read futurological discussions like those in Future Imperfect, I often ask myself: is there any point in speculating about this stuff? Aside from the science fiction fun to be had, is it really possible to form a meaningful view of how society or the law should treat technologies that we not only don't know how to build, but don't even know what their precise capabilities and limitations might be? Imagine engineers and lawyers of the Victorian era speculating about the appropriate setting of speed limits for motorised vehicles, without any knowledge of how a car is operated, how it functions, or what it is capable of!

Of course some technologies are more predictable than others: the cryptographic technologies discussed in the first few chapters of Future Imperfect already exist (and Friedman's discussion is arguably already a bit dated), and we're probably near enough to genetic screening to be able to -- and, arguably, ought to -- have a meaningful discussion about it. But why even ask what rights should be accorded to human-like artificial intelligences when we don't even have a meaningful definition of "intelligence", let alone any machines that even vaguely approach human capabilities of speech, creativity and emotion?

If you get your speculation right, you might be hailed as visionary in fifty or a hundred years' time. If not, you can disappear amongst the plentiful ranks of those who thought that heavier-than-air flight would never work, that we'd now be working just 15 hours per week, or that we'd be living in underwater cities. Either way, it won't make any immediate difference to you or anyone else.

Of course, some speculative questions generate more immediate ones, like "what is intelligence?" and "what is the purpose of creating human-like artificial intelligence when we have seven thousand million human intelligences already?". (Curiously, perhaps, I don't think I've ever heard the latter question asked.)

I don't suppose there's any hard-and-fast way of determining what's a meaningful and important question that needs answering in order for us to be adequately prepared for technological developments, and what's idle (if potentially amusing) speculation about the unknowable. So I won't tell Friedman's students to stop wasting their time. If nothing else, they're probably getting some good philosophical exercise.