I Don't Want To Be A Nerd!

The blog of Nicholas Paul Sheppard
Posts tagged as prediction

Prediction, being popular and being right

2013-03-11 by Nick S., tagged as prediction

Looking at the categories listed on the right-hand side of this blog, I noticed that the two I've used most are "prediction" and "mobile computing". This seemed ironic given that prediction and mobile computing are things that I mostly avoid. Of course the tags reflect the number of posts in which I've taken issue with buzzword-driven predictions and the black box fallacy. Reading the works of Duncan Watts and Daniel Kahneman recently gave me further justification for staying out of the prediction business, refined my dim view of it, and offered some constructive thoughts on how to deal with forecasting.

According to both Watts and Kahneman, predictions made by supposed experts in a variety of fields are no better than random guesses. This, they suggest, is due to the prevalence of "unknown unknowns" in complex systems like politics, human societies, the economy, and the environment: in systems like this, there are invariably numerous unforeseeable influences that would-be forecasters can't even know that they ought to be thinking about, let alone predict. Experts nonetheless make bold predictions because confidence sells: few people have much interest in "experts" who can only give them uncertain and equivocal advice, so experts are under great social and financial pressure to make bold and confident predictions that, by and large, they won't be held accountable for anyway.

Reading the chapter of Kahneman's Thinking, Fast and Slow on this, I was reminded of a misgiving that I had about Martin Seligman's work on positive psychology. Seligman observes that optimists frequently blame failure on specific, changeable circumstances, giving the optimist hope that he or she can still succeed when trying again under (hopefully) other circumstances. Seligman contends that this is good for the optimist, who will persist, but seems to be largely uninterested in its consequences for everyone else. His favourite example seems to be the selling of insurance through cold-calling, in which the heroes are those optimists who persist despite being knocked back by nine out of ten potential customers. Seligman shows no interest in the fate of the nine out of ten people who had to endure an unasked-for sales pitch for a product they didn't want.

So it seems that a bold and confident personality might benefit an individual while being detrimental to the societies or organisations to which he or she belongs. Kahneman cites the example of optimistic CEOs who make bold and impressive-sounding decisions that wind up losing money for their companies. (He also observes that the opposite might happen: the economy as a whole benefits from the optimism of would-be business owners who start ventures even though statistics indicate that only one-third of such ventures actually succeed.)

Reading Kahneman's book (and maybe this blog), it's easy to feel like all prediction is hopeless. Of course there are plenty of things we can predict with great confidence: engineers, for example, routinely make accurate predictions about the behaviour of structures, machines and computers based on well-tested models of how the world works. But few people find such predictions very interesting, perhaps because they are so routine. There's simply more glory to be had in making hopelessly inaccurate predictions and ignoring their consequences.

The predictable top ten predictions of 2013

2013-01-12 by Nick S., tagged as prediction

Around this time of year, many commentators on technology (and probably other things) like to offer their "top ten predictions" for the coming year. I've recently skimmed through IEEE Spectrum's 2013 Tech to Watch and the top ten tech predictions of The Conversation's Mark Gregory.

I say "skimmed through" because I'm doubtful that it would be worth my time to examine such predictions in detail. For a start, it isn't clear to me how the commentators define "top ten". Do they mean their ten most confident predictions? Or, since this criterion would result in unhelpful predictions like "computers will represent numbers in binary", maybe they mean their ten most confident predictions of what will change? Or, do they mean the ten most profitable technologies? Or the ten most influential? Or the ten most interesting (which is surely subjective)?

I recently read Duncan Watts' Everything is Obvious, which, among many other things, makes the point that commentators making these sorts of predictions are rarely held to account for what they say. Commentators set out their predictions for the year in December or January, but, so far as I can tell, they're largely forgotten come February. The predictions have no obvious consequences for either their makers or their users, and, indeed, seem to amply satisfy Harry Frankfurt's rigorous definition of "bullshit" as speech made without any concern as to whether it is true or not.

Watts observes that, not only is it difficult to predict the fate of current trends, but we don't even know what we ought to be predicting. Perhaps, in this case, Spectrum's contributors can make some informed guesses about electric vehicles or computer displays that they happen to have heard about, but numerous technologies of the future are likely being developed in currently unheard-of lab experiments or software development houses where neither the IEEE nor anyone else knows to look.

To be fair to Spectrum, I don't think the editors necessarily mean to make any grand statements about what technology will or won't be popular or profitable or influential, but only to draw the reader's attention to some technology that the editors think is interesting. The writers do acknowledge the doubters and pitfalls of technology like Google Glasses, for example. I wonder if they and their fellow prognosticators ought to dispense with the "top ten" and the "predictions", and use a more modest "ten interesting things"? After all, that's what the editors do effectively when they put together an ordinary issue of Spectrum or The Conversation.

Extreme prophecy and unsubtle predictions

2012-12-14 by Nick S., tagged as prediction

I've recently read a couple of smug comments from technology enthusiasts lambasting what they perceive as Luddism from sceptics of some recent technological adventure. One comment on The Conversation equated doubters of massive on-line open courses with a newspaper executive insisting that people would never want to read classifieds anywhere other than a printed newspaper. And, having worked in copyright protection for many years, I'm well-acquainted with the hacker triumphalism that follows the breaking of some rights management scheme.

The technology enthusiasts involved are, of course, cherry-picking the failed predictions of their opponents (or, in the case of the comment cited above, describing a caricature that probably doesn't represent the opinion held by any actual person). The Register, for example, recently described ten technology fails illustrating that technology enthusiasts can be just as mistaken in their views of the future as anyone else.

I guess I'm predisposed to doubt apocalyptic predictions like the notions that universities will be replaced by massive on-line open courses or that iPhones have already brought about a revolution. Aside from the fact that I'm yet to experience any such apocalypse despite the numerous technological changes that have occurred over the course of my life, extreme predictions of this sort are inevitably simplistic.

For one, the world is vastly bigger than any single technology or product, and the changes brought about by any one of them are always going to be tempered by numerous other influences. So the iPhone is a very successful mobile computing product -- but what did it do for refrigeration, power generation or surgery?

For another, existing institutions don't just sit back and wait for their demise when a new technology comes along, even if technology enthusiasts would rather that they did. The music and newspaper industries, for example, may have struggled to re-organise their businesses around electronic media, but they never just packed up and walked away, and they continue to try things even now. And far from planning either to shut down their universities in the face of massive on-line open courses or to pretend that such things don't exist, vice-chancellors Ed Byrne and Margaret Gardner offer some more measured thoughts about how existing universities might work with on-line courses.

Publishing executives, vice-chancellors and others in their position probably won't be correct in every detail — but could they be as wrong as a prediction that technology X will overwhelm everything?

On the structure of computing revolutions

2012-11-23 by Nick S., tagged as buzzwords, mobile computing, prediction

I recently read an article mocking its own authors for failing to recognise that the iPhone (or some particular version of it) would instigate a revolution. Unfortunately I didn't record where I read it, and I haven't been able to find it again now that I've come to think about what constitutes a "revolution", and what it might feel like to live through one.

My immediate reaction upon reading the article was: are you sure you weren't right the first time? I, at least, don't feel like I've been through a revolution any time in the past ten years, or, indeed, my entire life. Sure, technology has steadily improved, but I've only ever perceived it as "evolution". I have no doubt that someone catapulted into 2012 from the time of my birth in the 1970's would find much to be amazed about. But, having lived through all of the intervening years myself, I had the much more mundane experience of seeing the changes one product at a time.

This raises the question: how much change is required, and how sudden does it need to be, to constitute a "revolution"? When discussing the history of computing with my computer systems students, I often talk of "trends" from analogue to digital, from stand-alone computers to networked ones, and from single-core to multi-core CPUs. I say "trend" because I perceive the changes as a gradual process of older products being replaced one-by-one by newer products. But proponents of the iPhone (or digital or network or multi-core) revolution presumably perceive the changes as one big leap from existing products to a spectacular new entrant. (Either that, or they use the word "revolution" to mean "any perceptible change".)

Now, many small changes may add up to a big one. Someone of my mind born in Britain in 1800, say, might have observed machines or factories appearing one at a time over his or her lifetime. But that person's lifetime now seems short compared to the span of human history, and we consequently refer to that period as the Industrial Revolution. Still, I suspect that future historians will be looking at more than iPhones when they decide what to call the current period.

One of my students foreshadowed the taxonomic problems awaiting future historians when he observed to me that the articles he had been reading disagreed about what era of computing we were currently enjoying. I forget the exact list of candidate eras, but one might have been the "mobile era" and another the "network era", and so on. Off the cuff, I suggested two explanations: firstly, that his sources were talking crap, and, secondly, that his sources were talking about two different aspects of computing.

The two explanations might not be mutually exclusive. Perhaps the iPhone revolutionised mobile telephony/computing for some definition of "revolution", but I didn't notice this revolution because I do relatively little telephony and mobile computing. But the iPhone didn't revolutionise other aspects of computing -- let alone biotechnology or space travel or any of numerous other technologies of the modern period -- so attributing a broader revolution to it would seem to be a load of crap.

Tablets and the elusive black box of computing

2012-09-30 by Nick S., tagged as buzzwords, mobile computing, prediction

The Conversation last week included Roland Sussex wondering if digital tablets have become essential. His answer seems to be "no", since he observes they aren't very good for textual input and aren't sufficiently robust for use by children. He eventually comes to the rather inane conclusion that "for what they do well they are fine." For things they do badly, we still need other devices.

For my part, the answer is obviously "no" since I don't have a tablet and have yet to drop out of society, or even be inconvenienced in any way. I have a netbook that I find quite useful for reviewing lecture material and drafts while I'm on the train. Perhaps a tablet would be better for reading books and magazines (which I also do), but I do enough typing to feel that a device with a keyboard is the most appropriate tool for the job. See my comments on the article for my full argument.

I sometimes wonder if all the fuss about tablets (and cloud computing and any number of other buzzwords that have appeared over the years) is driven by a self-fulfilling prophecy in which everyone buys tablets because everyone says tablets are the way of the future. Did Steve Jobs anticipate the market for tablets when he introduced the iPad, or did the market buy tablets because a charismatic and influential figure anticipated them? After all, tablets of various sorts existed a long time before the iPad: Apple itself released the Newton in 1992, while Microsoft introduced Windows for Pen Computing in 1991 and Windows XP Tablet PC Edition in 2002.

Now, it could well be that technology just wasn't up to the task of making tablets work in the days of the Newton and Windows for Pen Computing, and advances in technology have now made it the right time to try again. Sun Microsystems and others were eager promoters of "network computing" and "thin clients" in the 1990's, for example, but the networks of the time didn't have the capacity or connectivity to support what we now call "cloud computing". With increased network capacity and connectivity, the 2010's might be a more fertile ground for similar ideas.

Whatever the case, it's hard to see us escaping from the boring old paradigm of using the right tool for the job. In Convergence Culture, Henry Jenkins refers to the dream of a single, unified computing/communications device as "the black box fallacy". As several of the comments on Sussex's article observe, it's not that tablets lack the technological sophistication of Jenkins' black box, it's that ergonomics dictates different tools for different tasks. And even if some future tablet -- or smartphone or wearable computer or microchip implant -- could somehow meet all of one's computing and communications needs, would it make a very good refrigerator?