While expanding my thoughts on synthetic worlds for The Social Interface, I made a connection between Edward Castronova's concept of migration to synthetic worlds, and Robert Nozick's experience machine. Nozick postulates a machine able to give its user any experience he or she desires, but argues that no one would actually want to live in such a machine. Therefore, he argues, people do not subscribe to the hedonistic notion that we care only about the pain and pleasure we experience.
It's important for Nozick's argument that potential users of the experience machine are aware that it simulates experiences, since he argues that potential users would find this simulation dissatisfying irrespective of how good the experiences were. Castronova's synthetic worlds satisfy this criterion since their users are aware of entering and leaving their worlds, and this would be the case even if virtual reality technology advanced to the point that it could provide perfectly realistic experiences.
Assuming that Nozick is correct that a fully-informed person would not want to live in an experience machine, the question remains as to what might happen were someone to enter an experience machine without knowing it. Would a person tricked into entering one feel cheated? Fully-functioning experience machines don't exist, but I think an argument can be made that certain aspects of them do.
During the discussion that led to my dangerous idea last week, one of my colleagues observed that it felt rewarding to accept connection requests, and rude to decline them. I countered that this was exactly why I'd deleted my LinkedIn profile. Accepting connection requests seemed superficially rewarding, and at first I thought they might lead to something, but this quickly turned to disappointment when I realised that I wasn't actually connected to these people in any meaningful way, and that nothing ever came of it.
For me, LinkedIn was a primitive experience machine that (momentarily) provided the experience of being connected. As Nozick predicted, I got myself out of it once I'd decided that the experience was, in fact, simulated. As Sherry Turkle puts it, it promised friendship but delivered only a performance — and a particularly crude one at that.
I suppose that people who use LinkedIn and other networks might contend that that was my particular experience, that they have built genuine connections with it, and that maybe I wasn't using the tool correctly in order to benefit from it. Or maybe it's just not my thing, in the same way that stamp-collecting and dog ownership aren't my thing.
This all sounds plausible enough, and I can neither prove nor disprove it. When pressed, I guess I find the "not my thing" explanation most convincing. Going back to experience machines, though, I only felt cheated once I'd compared the LinkedIn experience with my physical world experience. If I were still in LinkedIn's experience machine, and ignorant of the physical world, might I not be as happy as everyone else in that machine?
Last week, I happened across an essay collection by the name of What Is Your Dangerous Idea? (2007), edited by John Brockman. The eponymous question, originally posed by Steven Pinker, invited contributors to Edge to propose ideas that "are felt to challenge the collective decency of an age".
Many of the contributors discuss ideas that they themselves appear to be comfortable with, but might seem threatening to more traditional thinkers. Scientific materialists, for example, have long been used to the idea that there is no soul, however terrible this might seem to more spiritualist thinkers. So I got to wondering not just what ideas might seem dangerous to society at large, but also what ideas might seem dangerous to me.
I'm sure there are plenty of ideas that threaten both society and me — like, God exists and he's not very happy with what we're doing — but I'd like to stick to the topic of this blog. As it happens, I found myself in a discussion about social networks — primarily LinkedIn — with some work colleagues at around the same time I read the book.
My dangerous idea in this respect is that social networks support an illusion of connection representing nothing more than the mindless clicking of buttons. Facebook and LinkedIn build an audience based on our need to feel connected, and the feeling that it is rude to say "no" to connection requests. They sell this audience to their advertisers, and the advertisers sell their products to us, all without actually connecting anyone.
The dangerous idea to me is the converse one that users of social networks are, in fact, using these tools to build significant relationships, and that I've cut myself off from society and opportunity by refusing them. One of my colleagues, for example, claimed that many jobs are advertised only on LinkedIn, and I've read elsewhere that (some) recruiters rely on LinkedIn to fill positions.
Probably — and possibly hopefully — the truth lies somewhere in between. Perhaps some people successfully create or maintain relationships using Facebook (probably in conjunction with other tools), and perhaps some people find jobs using LinkedIn. But not all on-line connections are equal, and some are surely so superficial as to be meaningless. Nor is Facebook the only way of maintaining a relationship, or LinkedIn of finding a job, allowing each of us at least some freedom to choose the tools that best suit our individual needs. If it were otherwise, I think the only people who wouldn't be endangered might be Facebook and LinkedIn.
My recent difficulties with social networking inspired me to read Sherry Turkle's Alone Together: Why We Expect More from Technology and Less from Each Other. The book's subtitle neatly captures my dissatisfaction with LinkedIn and other supposedly social media: it's very easy to click a button that creates a record in a database stating that I'm "connected" with someone, but there's a whole lot more to do if I want to form and maintain a significant and effective relationship with that person.
Turkle makes a distinction between "performance" and "friendship". In the first half of the book, "performance" refers to robotic toys that are programmed to enact rituals that children expect from conscious beings: the robots say they are happy, hungry, etc. even though they (presumably) don't experience such emotions as humans do. In the second half of the book, "performance" refers to manipulating text messages and Facebook profiles to present the desired standards of coolness, connection and caring. She believes that
sociable technology will always disappoint because it promises what it cannot deliver. It promises friendship when it can only deliver performances (p. 101).
Turkle acknowledges critics who point out that we are always performing to one degree or another, in that we craft different personae for friends, family, work, school and so on. And how does one distinguish "authenticity" from a highly sophisticated and nuanced performance anyway? Turkle contends, of course, that the performances exhibited by current robots and social networking sites are hopelessly inadequate to capture human emotion and relationships fully, and I find it hard to disagree.
It is, of course, conceivable that improvements in technology will one day overcome such inadequacy. But what to do in the meantime? Turkle doesn't recommend eliminating robots and social media and, indeed, seems to be quite comfortable with handing them out by the dozen as part of her research.
For me, the answer has to be about recognising the capabilities and limitations of particular media, employing them for what they are good at and dispensing with them for what they are not. The saddest stories in Turkle's book involve people feeling psychologically or socially compelled to use some tool despite its evident incapacity to meet the person's needs. Someone whose only tool is a hammer, as the saying goes, struggles with tasks that don't involve nails.
Most of the people in Turkle's studies are young (children or teenagers), and it could be that they simply haven't yet learned which tools work best for which tasks. Even older people struggle with how best to use new tools. Perhaps it isn't so surprising that things go awry in these situations.
Towards the end of the book, Turkle writes about people who have realised that the tools they have been using aren't working for them, and have consequently developed strategies like scheduling one-on-one phone conversations and deleting their Facebook profiles. Some of these strategies are fairly crude, but I think they demonstrate an important (and possibly under-rated) mind-set: a determination to make technology serve one's needs in place of passive acceptance of what technology happens to be in vogue.
Today I received an invitation to join ResearchGate, which I gather to be a kind of social network for scientists. I'd never previously heard of ResearchGate and, almost certainly, they'd never heard of me. I nonetheless warranted an invitation because I co-authored a number of papers with someone who had already enrolled.
I mostly reject automated invitations of this sort, in part because I resent web sites expanding their business by taking advantage of my relationship with a third party, and in part because I like to think that my real friends would bother to write real e-mails. But my experience of LinkedIn is my greatest motivator.
At the time I received my LinkedIn invitation, I had no experience of such sites and it seemed worth a try. But I never found anything useful I could do with it, and I gradually realised that my LinkedIn page was a graveyard of ex-colleagues who had sent me connection invitations but with whom I no longer actually communicated (via LinkedIn or otherwise). I began to wonder if sending a LinkedIn invitation was a tacit declaration that "I will never talk to you again."
After a few years of this, I began replying to invitations with a personal e-mail explaining that I don't really use LinkedIn. In response, one of my would-be connections admitted that she didn't really use LinkedIn either, but she just felt compelled to click on the "Do you know?" buttons. I was already pretty sure that my own LinkedIn connections were a fraud, and my friend's message suggested to me that I'm not the only one. I've since deleted my LinkedIn profile, and I refuse all new invitations with an e-mail explaining that I don't use LinkedIn.
It still feels slightly rude to reject invitations, and perhaps LinkedIn members feel it would be rude to ignore the question "Do you know?" when they do, indeed, know that person. I wonder, though, whether we ought instead to feel rude for allowing Internet companies to exploit our relationships in order to build their customer bases, and to present false social networks built up by automated messaging and idle button-clicking.