I Don't Want To Be A Nerd!

The blog of Nicholas Paul Sheppard
Archive for September 2013

What's a STEM crisis?

2013-09-30 by Nick S., tagged as education, employment

I've recently been reading a bit about a possible "STEM crisis", or lack of one, mostly in IEEE Spectrum, but also on The Conversation. "STEM" is an acronym for "Science, Technology, Engineering and Mathematics", and the crisis, if it exists, is supposed to be caused by a shortage of graduates in STEM disciplines.

The disputants seem to me to be asking two somewhat different questions. STEM enthusiasts like professional societies and chief scientists start with the assumption that STEM is a good thing that we should be doing more of, and argue that we should therefore have more STEM graduates to do it. Economists and out-of-work STEM graduates start with the observation that there are already numerous un- and under-employed STEM graduates, and argue that we should therefore have fewer of them.

These two views are perfectly consistent if one accepts that we, as a society, ought to be doing more STEM. If so, the enthusiasts are really saying that there is a crisis in the amount of STEM being undertaken. STEM graduates experience this crisis as an inability to find work.

How much STEM should we be doing? In the Conversation article cited above, Andrew Norton assumes that we should be doing exactly that STEM for which buyers are willing to pay (manifested in Norton's article by how many STEM graduates employers are willing to hire). Taken at face value, this is more or less the answer one gets from basic free market economics: if doing some STEM gets the greatest value out of all the ways buyers could use the resources involved, the buyers will pay for that STEM. If doing something else with those resources gives the buyers a greater benefit, the buyers will do something else.

I think that most people, however, would agree that a significant proportion of STEM has the character of what economists call a "public good". Public goods are items like defence forces and street lighting for which it is difficult or impossible to charge people according to their individual use. Markets may under-invest in public goods because would-be providers can't extract payment from the people who benefit: no individual resident will pay for a street light that everyone else can then use for free, even if the residents collectively value the lighting more than it would cost to provide.

Norton implicitly assumes that the government has estimated the value of public STEM and invested a suitable amount of tax money into it, creating a matching demand for STEM graduates in government-funded programmes. I suspect that the enthusiasts, however, place more or less infinite value on STEM. For them, there will always be a "STEM crisis" because no amount of government or industry investment can ever realise such a value.

The rise/fall/whatever-you-call-it of civilisation

2013-09-25 by Nick S., tagged as history, prediction

I read a couple of articles this week that, without being specifically directed at technological optimists, seemed at odds with the technology-is-advancing-faster-than-ever-before narrative that I've become accustomed to in publications like IEEE Spectrum. The Australian (18 September 2013, p. 29) had Peter Murphy contending that "big ideas in art and science seem like a thing of the past", while Radio National had Ed Finn lamenting that current science fiction typically portrays a pretty grim future.

Peter Murphy sounds like the kind of person who contributes to a narrative that I once saw described (I forget where) as "civilisation has been declining since it started". For him, the good old days were left behind somewhere in the middle of the twentieth century, and we no longer have anything interesting to say. Ed Finn is not such a curmudgeon himself, but draws attention to the trend from the largely utopian science fiction of the mid-twentieth century to the dystopian sort now enjoying popularity. Finn proposes to encourage more inspirational science fiction through a programme known as Project Hieroglyph. I presume that neither of them has been reading IEEE publications, Ray Kurzweil, Kevin Kelly, or any of their ilk, for whom things are (mostly) quite the opposite.

I was struck by the degree to which Murphy's article used the same technique as more euphoric accounts of technology, however much their conclusions might differ: make a set of assertions about the importance of certain artworks or technologies that are at best subjective and at worst arbitrary, then draw a conclusion according to whether you like the older ones or the newer ones better. Whether things are getting better or worse thus seems to depend largely on whether you prefer Daniel Defoe or Stephen King, or whether you happen to see more ploughs or iPhones.

The most convincing analysis of this sort that I've encountered is the one in Jaron Lanier's You Are Not a Gadget (2010). Writing about music, he argues that no new genres of music have appeared since hip hop in the 1980s, and that no one could tell whether a pop song from the past twenty years was released in the 1990s or the 2000s. In another section, he argues that open source software consists largely of clones of prior commercial software. I'm sure there are plenty of musicians and open source software developers who might argue otherwise, but Lanier's points are at least testable hypotheses.

Given that the importance of any particular technology or piece of art is so subjective, I'm not sure it's very meaningful to make sweeping statements about whether art or technology is getting better or worse, or faster or slower. The Lord of the Rings, for example, has been immensely influential for generations of fantasy writers and readers, but I don't imagine it means much to writers and readers of, say, romantic comedies. There might nonetheless be more specific statements that could be made, but they would need to rest on something much more robust than a list of what one individual likes and doesn't like.

Social experience machines

2013-09-20 by Nick S., tagged as experience, social networks

While expanding my thoughts on synthetic worlds for The Social Interface, I made a connection between Edward Castronova's concept of migration to synthetic worlds, and Robert Nozick's experience machine. Nozick postulates a machine able to give its user any experience he or she desires, but argues that no one would actually want to live in such a machine. Therefore, he concludes, people do not subscribe to the utilitarian notion that we care only about the pain and pleasure we experience.

It's important for Nozick's argument that potential users of the experience machine are aware that it simulates experiences, since he argues that potential users would find this simulation dissatisfying irrespective of how good the experiences were. Castronova's synthetic worlds satisfy this criterion since their users are aware of entering and leaving their worlds, and this would be the case even if virtual reality technology advanced to the point that it could provide perfectly realistic experiences.

Assuming that Nozick is correct about a fully-informed person not wanting to live in an experience machine, the question remains as to what might happen were someone to enter an experience machine without knowing it. Fully-functioning experience machines don't exist, but I think an argument can be made that certain aspects of them do. Would a person tricked into entering one feel cheated?

During the discussion that led to my dangerous idea last week, one of my colleagues observed that it felt rewarding to accept connection requests, and rude to decline them. I countered that this was exactly why I'd deleted my LinkedIn profile: accepting connection requests seemed superficially rewarding, and at first I thought they might lead to something, but this quickly turned to disappointment when I realised that I wasn't actually connected to these people in any meaningful way, and that the connections never led to anything.

For me, LinkedIn was a primitive experience machine that (momentarily) provided the experience of being connected. As Nozick predicted, I got myself out of it once I'd decided that the experience was, in fact, simulated. As Sherry Turkle puts it, it promised friendship but delivered only a performance — and a particularly crude one at that.

I suppose that people who use LinkedIn and other networks might contend that that was my particular experience, that they have built genuine connections with it, and that maybe I wasn't using the tool correctly in order to benefit from it. Or maybe it's just not my thing, in the same way that stamp-collecting and dog ownership aren't my thing.

This all sounds plausible enough, and I can neither prove nor disprove it. When pressed, I guess I find the "not my thing" explanation most convincing. Going back to experience machines, though, I only felt cheated once I'd compared the LinkedIn experience with my physical world experience. If I were still in LinkedIn's experience machine, and ignorant of the physical world, might I not be as happy as everyone else in that machine?

On the dangers of social networking, and of not social networking

2013-09-12 by Nick S., tagged as communication, social networks

Last week, I happened across an essay collection by the name of What Is Your Dangerous Idea? (2007), edited by John Brockman. The eponymous question, originally posed by Steven Pinker, asked contributors to Edge for ideas that "are felt to challenge the collective decency of an age".

Many of the contributors discuss ideas that they themselves appear to be comfortable with, but might seem threatening to more traditional thinkers. Scientific materialists, for example, have long been used to the idea that there is no soul, however terrible this might seem to more spiritualist thinkers. So I got to wondering not just what ideas might seem dangerous to society at large, but also what ideas might seem dangerous to me.

I'm sure there are plenty of ideas that threaten both society and me (God exists and he's not very happy with what we're doing, for example), but I'd like to stick to the topic of this blog. As it happens, I found myself in a discussion about social networks, primarily LinkedIn, with some work colleagues at around the same time I read the book.

My dangerous idea in this respect is that social networks support an illusion of connection that represents nothing more than the mindless clicking of buttons. Facebook and LinkedIn build an audience based on our need to feel connected, and on the feeling that it is rude to say "no" to connection requests. They sell this audience to their advertisers, and the advertisers sell their products to us, all without actually connecting anyone.

The dangerous idea for me is the converse: that users of social networks are, in fact, using these tools to build significant relationships, and that I've cut myself off from society and opportunity by refusing them. One of my colleagues, for example, claimed that many jobs are advertised only on LinkedIn, and I've read elsewhere that (some) recruiters rely on LinkedIn to fill positions.

Probably — and possibly hopefully — the truth lies somewhere in between. Perhaps some people successfully create or maintain relationships using Facebook (probably in conjunction with other tools), and perhaps some people find jobs using LinkedIn. But not all on-line connections are equal, and some are surely so superficial as to be meaningless. Nor is Facebook the only way of maintaining a relationship, or LinkedIn of finding a job, allowing each of us at least some freedom to choose the tools that best suit our individual needs. If it were otherwise, I think the only people who wouldn't be endangered might be Facebook and LinkedIn.