I recently read Eli Pariser's The Filter Bubble (2011), which discusses the potential for highly personalised news feeds and search results to trap users in a "filter bubble" from which they can see only the news and results that support their existing world-view. Cass Sunstein postulated that this might happen some time ago in Republic.com (2002), but Pariser updates the argument for ten years of advances in recommendation and personalisation technology.
By coincidence, my local university library happened to have Tim Dunlop's The New Front Page (2013) on its "new books" shelf around the same time. Dunlop's book is primarily a chronicle of his adventures in political blogging and the traditional media since the word "blog" was coined, but he does spend a little time discussing Sunstein's thesis. Dunlop points out that Sunstein's original analysis was conjectural, and that political bloggers since 2002 have, in fact, read and linked to the blogs of their political opponents.
To judge by the comments sections of opinion sites like The Drum and The Conversation, Dunlop is probably right as far as he goes: whatever the political alignment of an article, plenty of commenters of a competing alignment can always find time to criticise the article. That's not to say that the comments are necessarily insightful or constructive, or even that the commenters have actually read and understood the article: The Drum, especially, features plenty of mindless repetition of party lines and dogma. One is tempted to observe that, while Dunlop might be right about the motions, Sunstein was right about the end result.
Thinking about the news that arrives in my inbox, and some of the thoughts I've had about the computer industry in writing this blog, I wonder if politics is actually the least likely subject to end up in a filter bubble. Opposing political forces are at least aware of each other's existence, even if it's only to hold each other up as bogeymen. But a computer scientist (for example) constantly surrounded by news about computers can easily forget that the computer industry is but one of numerous industries and agencies that contribute to modern society being what it is. Hence the mutual incomprehension that arises when one industry's orthodoxy conflicts with another industry's orthodoxy.
So I think there's an argument that we're as likely to build a filter bubble for ourselves as we are to have one built for us by technology. All that dogma on The Drum is a case in point: the critics have the opportunity to engage with an article in a meaningful way, but many simply choose to re-state a party line. Even supposedly sophisticated communications theorists sometimes like to interpret the world through a one-dimensional lens, be it class or race or gender or sexuality or technology. I'm yet to meet a communications theorist offering an "industryist" analysis of the media, but many of us might be doing it in our own amateur way by being bound to the fate of the industry in which we work.
I've just finished reading Edward Castronova's Synthetic Worlds (2006), which is something I probably ought to have done some time ago. Reading it seven years after its publication, however, reminded me that synthetic worlds — notably Second Life — seemed like big news in the computer community at around the time that Castronova was writing. I remember being told that major companies were opening stores in Second Life, luminaries were holding press conferences there, entrepreneurs were making money there, and that anyone who was anyone would shortly be living, at least in part, in a synthetic world. Yet I don't hear much about Second Life or any similar world anymore.
The worlds themselves are still there and, presumably, making a living for the companies that develop them. But neither the media nor the conversations in which I'm involved have much to do with them. Was I, in 2006, hanging around a bunch of starry-eyed gamers unaware that not everyone was interested in their hobby? Is the media still not taking computer games seriously, as Castronova suggests in his introduction to Part II? Has everyone disappeared into a synthetic world, leaving me wandering alone on the outside?
In both the mainstream media and in conversations of which I'm a part, the giants of the computer industry aren't synthetic worlds of the kind that Castronova wrote about, but web-based tools like Facebook, Twitter and Google. And, to go by the numbers, rightly so: according to Statistic Brain, Facebook has over 1100 million accounts, Twitter has over 550 million accounts, and Google responds to over 5000 million searches per day. The largest synthetic world, World of Warcraft, had a comparatively measly 12 million subscribers at its height.
If there are synthetic worlds to which humanity is migrating, as Castronova puts it, they're surely Facebook and Twitter. The home pages of Second Life and World of Warcraft themselves sport those ubiquitous offers to "like" them on Facebook and follow them on Twitter.
I can think of several possible explanations. Firstly, Facebook and Twitter are free, where the synthetic worlds studied by Castronova ask for subscriptions. Secondly, the user base of game-like synthetic worlds is fragmented into numerous followers of different worlds, while Facebook and Twitter completely dominate their markets.
Lastly, though, I wonder if synthetic worlds have themselves met the fate of virtual reality identified in the appendix to Castronova's book. As Castronova has it, the researchers behind virtual reality originally supposed that virtual worlds would be created by completely immersing the users' senses in computer-generated stimuli. But it turns out that relatively crude representations of characters and landscapes on an ordinary computer are good enough to keep users' minds in a synthetic world. But maybe most of us don't even need those crude representations, at least not most of the time: our needs are adequately met by augmenting the real world with web profiles and instant messaging. After all, it's the one world from which we cannot migrate.