I Don't Want To Be A Nerd!

The blog of Nicholas Paul Sheppard
Archive for February 2015

Do super-intelligent machines have a purpose and is it a good one?

2015-02-26 by Nick S., tagged as artificial intelligence

Over the past month, I happened to read a few books in which machine intelligence plays a big part, namely Nicholas Agar's Humanity's End (2010), Frank Pasquale's The Black Box Society (2015) and Tyler Cowen's Average is Over (2013).

Cowen is by far the most sanguine, if only because he takes a firmly amoral view that only an economist could love. He presents as inevitable a future of super-intelligent calculating machines tended to by a few elite humans able to work with them, while the remaining workforce finds itself of little value. Agar, on the other hand, doubts that augmenting humans beyond their natural abilities has any real benefits, and Pasquale fears that the secret algorithms behind search engines, computer trading and the like will stymie the public's understanding and control of the information that is presented to them.

While there are many small points on which I find Cowen's logic impenetrable, I did appreciate his characterisation of super-intelligent machines. Rather than have a human-like intelligence appear fully-formed at some choice moment as it does in so much science fiction, he sees machine intelligence emerging gradually and appearing alien and unintelligible to human intelligence. If it takes eighteen years for a human to become fully developed in the legal sense, why expect that a machine — especially the first one ever built, presumably the most primitive of its kind — could achieve the same immediately upon being switched on? And why expect a computer to behave like a human when it is an entirely different sort of construction?

Agar points out that, if the behaviour of super-intelligent machines is incomprehensible to us, we can have little interest in anything they do. Cowen observes that few people are interested in watching computers play chess against each other, precisely because human watchers don't understand what the computer players are doing. Yet, if machine intelligence emerges gradually, at what point might we decide to stop because we're no longer interested?

Pasquale suggests a more sinister possibility. How do we know that secret or incomprehensible behaviour is in our best interests? I'm sure plenty of people would regard Cowen's world as dystopian without any further elaboration, and it's easy to think up even worse dystopias in which the elite (Google et al. in Pasquale's book) enrich themselves while keeping everyone else ignorant of the real state of affairs, or in which machines become trapped in an echo chamber processing only data created or influenced by themselves.

Cowen seems to be confident that his super-intelligent machines will be able to get good results even if we don't understand why, citing examples like the ability to win chess games and match successful romantic partners without any human being able to understand how they made their decisions. For problems with narrow and well-defined goals — like winning games and, at least to a crude approximation, marriage — it's easy to verify that a solution is correct even if we don't know how the solution was arrived at. But computers are already superb at narrow and well-defined goals, and no one would suggest that we allow them to rule over us on that basis, because such goals are only crude approximations of what we actually want.
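To make the verification point concrete, here is a minimal sketch in Python using an invented example (a seating puzzle of my own, not anything from Cowen's book): the constraint stands in for a narrow, well-defined goal, and a simple checker can confirm a proposed answer without knowing anything about how the opaque solver found it.

```python
# A toy illustration: the "goal" is seating guests so that no two people who
# dislike each other sit next to one another. The solver is treated as a black
# box; only the checker needs to be understood to trust its answer.
from itertools import permutations

def is_valid_seating(seating, dislikes):
    """Return True if no mutually disliked pair of guests sit side by side."""
    for left, right in zip(seating, seating[1:]):
        if (left, right) in dislikes or (right, left) in dislikes:
            return False
    return True

def black_box_solver(guests, dislikes):
    """Stand-in for an opaque solver: here, brute force over all orderings."""
    for candidate in permutations(guests):
        if is_valid_seating(candidate, dislikes):
            return list(candidate)
    return None

guests = ["Alice", "Bob", "Carol", "Dave"]
dislikes = {("Alice", "Bob"), ("Carol", "Dave")}

proposal = black_box_solver(guests, dislikes)
print(proposal, is_valid_seating(proposal, dislikes))
```

The brute-force solver here could be swapped for any black box; the checker is all we need in order to trust the answer, which is precisely why such verification only works when the goal is this tidy.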

Pasquale's solution is to expose the algorithms to scrutiny. Perhaps no human could follow the detailed execution of an algorithm, because a human cannot keep track of so many variables as quickly as a computer can. But we must understand the algorithms on some level in order to build them in the first place, and to judge whether or not they are good algorithms. And if we can't judge whether or not the algorithms are good, what is our purpose in creating them?

Should universities lead or follow technological trends?

2015-02-20 by Nick S., tagged as education

The Australian's Higher Education section this week either presented some very strange research, or made a very strange presentation of some research, in claiming that Twitter is the least used online resource (18 February 2015, p. 30). (Less used than www.nps.id.au, are you sure? I laughed.) The article doesn't clearly identify the study alleged to have discovered this and I wasn't able to find it via a search engine, so I can only go by the article's presentation here.

As the article has it, Twitter is "the social media platform of choice for academics, journalists and a host of other professionals" but "barely rates as an educational tool". This is based on a survey showing that only 15% of participating students found Twitter useful in their university studies.

To my mind, the most obvious explanation for this is that Twitter just doesn't meet the needs of university education. As far as I know, it was never designed for this purpose, so it's hardly surprising that people don't use it as such. Refrigerators, say, probably get even less use in university courses, and no one would expect anything else, given that refrigerators were never designed for educating people.

The article instead quotes the study's lead author, Neil Selwyn, speculating that the finding "could be seen as a negative for universities [since Twitter] is where the technological generations are having conversations and finding stuff out." The underlying assumption seems to be that the Cool Kids are using Twitter, and universities might not be cool if they don't use it too.

Well, students probably use refrigerators quite a bit too, but does that mean that it would be useful to have one in my classroom? If Twitter is to be accepted as an educational tool, educators need to be convinced of some educational purpose in using it. Those who do things in order to be cool are more likely to be described as "try-hards" than "innovators".

And are the Cool Kids really using Twitter anyway? According to the article, nearly all students are actually using learning management systems, on-line libraries and on-line videos — and why wouldn't they, given that all these tools have well-established educational uses? The article itself acknowledges that the students are all aware of Twitter, they just don't use it for this particular purpose. Maybe the article could just as meaningfully have read "Twitter barely rates as an educational tool, yet is the social media platform of choice for academics, journalists and a host of other professionals."

Chasing wild geese in search of privacy

2015-02-20 by Nick S., tagged as privacy

There's been a bit of a stir recently concerning the behaviour of Samsung televisions. Samsung's privacy policy for its Smart TVs was reported to allow voice captured by the television to be sent to a third party for processing. The open-ended wording of the policy led to some speculation that the television could be used like the "telescreens" that watch over citizens in Nineteen Eighty-Four.

According to The Conversation's David Glance, there's really nothing to worry about; the television just sends the recording to an on-line system able to perform voice recognition, which the television does not have sufficient resources to do itself. Other well-known voice recognition systems for consumer electronic devices do the same. Samsung itself quickly revised its privacy policy to clarify this point.
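For readers curious about what that offloading looks like, here is a rough sketch of the pattern Glance describes, assuming a hypothetical cloud speech-to-text service: the URL, credential and response field below are placeholders of my own, not Samsung's (or anyone else's) actual API.

```python
# A sketch of the offloading pattern: a device with limited processing power
# captures audio locally, posts it to a remote speech-to-text service, and
# acts on the transcript it gets back. The endpoint, credential and response
# field are hypothetical placeholders, not any vendor's real API.
import requests

RECOGNITION_URL = "https://speech.example.com/v1/recognise"  # hypothetical endpoint
API_KEY = "device-credential"                                # hypothetical credential

def transcribe(audio_bytes: bytes) -> str:
    """Send raw audio to the remote recogniser and return its transcript."""
    response = requests.post(
        RECOGNITION_URL,
        headers={"Authorization": "Bearer " + API_KEY,
                 "Content-Type": "audio/wav"},
        data=audio_bytes,
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("transcript", "")

# On the television, the transcript might then drive a command, e.g.:
#   command = transcribe(captured_audio)
#   if "volume up" in command.lower():
#       increase_volume()
```

Nothing in this pattern requires the audio to be retained or shared beyond the recognition step, which is exactly the sort of thing a privacy policy ought to make clear.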

The episode illustrates weaknesses in two very different, but well-publicised, approaches to privacy: the privacy policy, and privacy-as-secrecy. The first is notorious for going unread by users who have no real choice but to accept it anyway. The second, which focuses on secrecy as the proper way to deal with data, led to fantasies of Big Brother setting up shop in a Korean television factory.

I suppose that folks who hold that privacy is secrecy see themselves as ever-vigilant against the kind of abuse that might result from exploiting loopholes like that created by Samsung's open-ended wording. This might be fair enough as far as it goes. But a large part of the problem with the original privacy policy was a preoccupation with where data is stored rather than what is done with it. The original wording told us that data would be sent to a third party, permitting everyone to imagine the third party that most exercised their minds, instead of explaining the actual functioning of the system.

So far as I know, no one has suggested that anyone at Samsung exploited the loophole, only that Samsung's privacy policy needed clarification. But since hardly anyone reads or takes action on privacy policies anyway, will anyone benefit from the clarification? What users really need is trust that data will only be used to provide the service they've asked for, not a technical guide to distributed computing.

Imagine that, every time you bought an item of food, you were expected to peruse the grower's and/or cook's "edibility policy" to determine whether or not it was up to your personal standards of non-poisonousness. (People with allergies do do something like this, and I don't envy them.) Personally, I much prefer the system of regulation by which eaters can trust that all food offered for sale is edible.

I suspect that most of us are hoping that privacy works much the same way when we click through privacy agreements: we presume that any reputable company is only going to use data in ways that we'd expect. Maybe they actually do, most of the time, but no one would ever know because it's buried in legalese.

A funny sort of progress

2015-02-05 by Nick S., tagged as commerce, employment

The Conversation's David Glance outlined a curious theory this week, suggesting that "part of Apple's success comes from giving us a sense of progress". Glance conjectures that providing workers with updated hardware and software every year might give them a sense of progress that contributes to job satisfaction, and suggests that companies might even consider paying their staff bonuses with which they can upgrade their own devices in bring-your-own-device schemes.

Glance doesn't address the question of whether or not upgrading devices makes any actual progress towards the goals of either a company or an individual worker. For Apple's purposes, it's enough to give a sense of progress if it keeps the customers coming back for more upgrades. As Erich Heinzle's comment points out, this strategy is generally known as planned obsolescence, an old trick that serves car and computer manufacturers well but has questionable benefits for the rest of us.

A student once told me that he'd grown tired of constantly updating his phone to the latest model, and had given up doing it. I told him, slightly tongue-in-cheek, that it was a sign of maturity. Where a child might grasp for the latest toy, an adult chooses the device that best meets his or her needs at a price that he or she is able to pay. (Indeed, he was studying a subject in which students are supposed to learn how to make informed judgements about what kind of computer equipment meets a set of needs.)

Matthew Tucker's comment alludes to what psychologists call a hedonic treadmill (though Tucker doesn't use the term), in which people chase goals and possessions in the expectation that achieving them will improve their lot, only to find that their happiness soon returns to its usual level. My student recognised that he was on a hedonic treadmill, and got off it.

I can nonetheless see where Glance is coming from when he writes about the feeling of being left behind when one has to use old equipment while everyone else has, or is presumed to have, the latest model. And upgrading hardware and software can lead to progress if the new versions increase productivity, improve reliability and/or create new opportunities.

Still, serious companies and mature individuals probably want to exercise some caution in interpreting Glance's advice lest they end up on a corporate version of the hedonic treadmill. Glance's article is, after all, mostly about how Apple succeeds, not how its customers succeed. Suppose a company has some money to spend on bonuses. Would the company prefer its bonuses be spent by staff who rush out and buy the latest gadget, or by staff who carefully choose tools that improve the quality, breadth and ease of their work?