The Conversation began a series on creativity this week with Dan Hunter complaining that copyright is a poor mechanism for encouraging creativity, since rewarding effort with money is known to reduce the intrinsic desire to make that effort. Many of the commenters were not impressed, pointing out that this is easy to say for those in publicly-funded university positions; that the grant system Hunter seems to favour has its own problems; that Hunter uses very selective examples to assert the supposed success of amateur creation; and, perhaps most importantly, that copyright has never been about encouraging creativity in itself anyway, but about protecting artists from exploitation.
A simple experiment, similar to one I've previously proposed on this blog, might illuminate the last two points. Consider one of the many media users who complain that the cost of blockbuster films and television series is too high or unfair. How would such a user respond to being told to just watch YouTube and the like instead?
I doubt that many such users would find this a very satisfactory suggestion. If it were, surely they'd already be watching YouTube instead of Hollywood blockbusters. The point is that, for better or worse, copyright rewards not just any creativity, but only creativity that has value to people other than the artist.
If we leave copyright out of it, Hunter is probably correct to reason that many people enjoy creating for its own sake, and that lawmakers therefore don't need to provide any extrinsic incentive for such people to express themselves. Supporting the intrinsic desire to create is more about providing citizens with reasonable access to the time, materials and skills required to pursue their creative interests. Some of Hunter's suggestions seek to do more or less this, and, indeed, governments already have plenty of programmes seeking to do things of this sort.
Coming back to copyright, perhaps the real question is: how and to what degree should the law encourage artists to create works that are of interest to other people? Would society lose anything if art were produced only to satisfy the creative (and possibly exhibitionist) urges of artists?
Those who complain about lack of access to blockbusters presumably believe that society would lose something if for-profit art were to cease being provided, though I don't know if they would recognise it. Of course it is not easy to know how we'd fare if the kind of art supported by copyright did not exist at all, since we have no recent experience of such a world or any obvious way of simulating one. But nor is it easy to dismiss a question important enough to have warranted fifteen years of loud debate.
The Register, the ABC and The Conversation all recently reported the European Parliament's "Resolution on the Digital Single Market", which seeks to "unbundle search engines from commercial services". The resolution is presumed to target Google, and to address allegations that its search results might favour its own services over services from other providers. No one seems to expect that the resolution will have any practical effect, and a good thing too according to technology enthusiasts like David Glance at The Conversation and Marty Gauvin, interviewed for the ABC piece.
I'm not familiar with European institutions, haven't read the resolution, and so can't comment on its particular merits. The dismissals offered by Glance and Gauvin, however, seem to be underpinned by a presumption that technology makers know best and that silly uninformed lawmakers should keep out of their way.
I find it a little depressing to think that "the technology landscape fundamentally can't be shaped by politics or the law", as Glance claims. The defeatism that follows this claim doesn't explicitly acknowledge it, but the alternative seems to be to sit back and allow technology — and the companies that control it — to have its way with us. Technology companies and their cheerleaders may be comfortable with this, but are the rest of us?
Bill McKibben, in Enough (2004), points out that claims that some technology or another (he is writing about biotechnology) is "inevitable" represent attempts to sidestep debate over the merits of the claimants' technology. He notes that the Amish, for one, are famous for demonstrating that societies do have a choice to accept or reject technology. Even more mainstream societies routinely govern car technology by road rules, food technology by health regulations, and construction technology by building standards.
Enthusiasts for trendier technologies like computing and biotechnology might like to think that they are uniquely placed to understand said technologies and their effects on society, if they accept that it is possible for their favourite technologies to have a negative impact at all. But why should we believe that technologists, let alone companies with vested interests in selling technology, know any more than lawmakers about what society wants or how to achieve it?
Glance seems to accept towards the end of his article that there are, in fact, things that lawmakers can and should do to address "small stuff" like privacy, intellectual property and misuse of market power. I suppose that he means to say that the law can fiddle around the edges, but that technologies themselves appear and disappear without the input of lawmakers. Lawmakers didn't choose to invent search engines, for example. Yet lawmakers are able to decide to respond to them, and it's not clear to me why doing this should be "large stuff" beyond our ability to address if we determine a need for it.
Mark Rix opens a recent Conversation article on Australia's proposed metadata retention laws with a couple of paragraphs asserting that "privacy and individuals' ability to remain anonymous are important protections against persecution, bullying, intimidation and retaliation." As I understand it, the idea here is that privacy and anonymity provide a kind of first line of defence against unfair discrimination, by depriving would-be discriminators of the knowledge on which their discrimination is based. Such an approach seems superficially appealing, and I'm sure I've used it myself when don't-ask-don't-tell seemed like the easiest way of avoiding an unpleasant confrontation.
When I think it through more carefully, however, I see a number of problems with this view. For a start, there are many situations in which it seems hopelessly impractical: is anyone likely to suggest, for example, that we defeat racial discrimination by donning ninja costumes or applying make-up that obscures the colour of our skin?
Supposing that secrecy is feasible, however, is hiding beneath it really the ideal outcome in the long run? Many years ago, I read a newspaper article (whose citation I sadly forget) making the point that many of our modern freedoms have been won by people who stood up against being driven underground. Would homosexuality, say, be as widely accepted as it is in liberal democracies today if the homosexuals of yesteryear had simply remained out of sight? I'm sure it wasn't easy for those people who did speak out — but the secrecy solution would have them even now cowering in anonymity instead of finding social acceptance.
"Discrimination" and the other words in Rix's assertion are often used in a pejorative sense, referring to unjust discrimination on the basis of race, gender and so on, but a broader interpretation shows that secrecy in fact cuts both ways. Law enforcement agencies want access to metadata, among other things, precisely because our law "discriminates" against drugs, violence, money laundering and other activities deemed harmful by lawmakers and the people who vote for them. To law enforcement agencies, secrecy is just an impediment to carrying out the discrimination delineated by the law. The real question is not whether or not to discriminate, but what ought to be discriminated against.
The main reason that I don't feel threatened by my government or anyone else isn't that I'm secure in the knowledge that the police can never find me — they probably can — it's that I'm fortunate enough to live in a country that respects a broad range of views amongst its citizens, and will punish anyone who refuses to extend that respect to others. If the government decides to start rounding up computer scientists, mediaeval re-enactors, or bearded men, well, I'll have a problem — not because I don't have a ninja costume and batcave in which to hide, but because my government has ceased to respect my personal choices. And if the government ever did decide to do such a thing, would I be best served by going into hiding, or by standing up for my choices?
A recent article in IEEE Spectrum Tech Talk claims that regulation will lag developments in self-driving cars. This is a familiar theme amongst pundits of all kinds of technology, but why would anyone expect regulation to be ahead of technology? What kind of lawmaker would bother to write laws about technology that doesn't yet exist, let alone presume to have the 20/20 foresight required to make sensible ones? And what technologist would applaud lawmakers for making law without first developing an understanding of the technology involved?
I also happened to be reading Jack Goldsmith and Tim Wu's Who Controls the Internet? (2006) this week. A section of Chapter 7 portrays the history of copyright law and attendant media industries as a series of equilibria punctuated by new technologies. The arrival of a new technology heralds a confrontation between established players and new players, frequently loud and ill-tempered. But eventually everyone settles into a new equilibrium that allows life to go on. They conclude the section with an argument that the court cases surrounding Grokster and the like did not come about because the technology had over-run the government's ability to control it, but because the government was simply taking its time to determine the best way forward, just as it had for technologies like vinyl records, radio, cable television and video recorders.
Put that way, having regulation lag technology sounds like eminent good sense. Goldsmith and Wu themselves refer to it as "business as usual". Jonathan Zittrain, writing on related issues in The Future of the Internet and How to Stop It (2008), proposes that lawmakers regulate Internet technologies using more or less this strategy: spend time watching how the technology plays out, then act if some harm becomes evident. And the time spent working out what to do about video recorders, for example, now seems pretty minor compared to the three decades we've spent enjoying rental videos since.
Technologists, I suppose, might like to think that they already know all about the technology and are therefore in a position to set appropriate rules for their inventions right away — if, indeed, they feel the need for any rules at all. I suppose similar thinking underlies the calls for "self-regulation" that feature in much industry input into public policy.
Technologists may well know the most about the technology, and would surely be high on any capable lawmaker's list of people to speak to in drafting legislation. But technologists have some fairly obvious conflicts of interest in developing rules for technology that might make them wealthy or powerful, and even the most disinterested technologist is as subject to the law of unintended consequences as anyone else. As Zittrain suggests, then, perhaps the humble and wise technologist ought to embrace the lag of legislation behind technology rather than expressing constant amazement at the hopeless laggards in parliament, who might just be taking the same care in their job as we do in ours.
I've been reading quite a bit about Bitcoin and other anonymisation technologies over the past week or so, partly driven by the recent shutdown of an anonymous marketplace known as Silk Road. David Glance has a bit to say about Bitcoin, Silk Road and Liberty Reserve on The Conversation, while Jonathon Levin discusses possible directions for Bitcoin and Nigel Phair ponders likely replacements for Silk Road in the same venue. G. Pascal Zachary comes at similar issues from the point of view of surveillance in the October 2013 issue of IEEE Spectrum (p. 8).
Levin opens with a statement about Bitcoin enthusiasts and libertarians being confused by the slow take-up of what, to them, is a tremendous advance in anonymity and freedom from Big Bad Government. I don't know which, if any, specific libertarians Levin is referring to, but his statement certainly seems consistent with traditional cyberlibertarian thinking that anonymity and secrecy are the path to the protection of rights and freedom.
Non-libertarians, of course, probably think more like Nigel Phair and G. Pascal Zachary, who accept that there are certain behaviours deemed to be illegal for good reason, and that law enforcement agencies must therefore have some sort of power to detect and arrest those who engage in those behaviours. Assuming that the non-libertarians aren't doing any of these illegal things themselves, they perceive somewhat less need for anonymity. For that matter, even libertarians agree that the state should enforce property rights and contracts, and one wonders if even they would be pleased with a technology that allowed anonymous miscreants to steal property and dishonour contracts.
Anti-surveillance commentators love to mock the surveillers' defence that "you've got nothing to worry about if you're not doing anything wrong", but the surveillers may be perfectly correct if they're referring to what the surveillers consider wrong. Why waste time persecuting behaviour with which one has no problem, after all? The problem is, not everyone agrees with the surveillers' vision of wrongness, and anti-surveillers fear persecution for behaviours that they consider acceptable, but which the surveillers consider wrong.
The dealing of drugs, identities and violence alleged to be taking place on Silk Road and its like probably doesn't do much for the anti-surveillers' case. Apparently Silk Road users really do have something to hide under the law of most countries, and I doubt many people are shedding a tear for those poor old criminal gangs who've just lost one of their meeting places.
Hal Berghel's take on PRISM in the July 2013 issue of IEEE Computer asks that politicians not take the "trust me" approach to defending government surveillance apparatus, in which politicians ask us to trust that said apparatus is only being used to apprehend genuine criminals. Simply hearing "trust me" is certainly dissatisfying. Said politicians need to prove their trustworthiness by demonstrating that, if you're not doing anything wrong, you really do have nothing to fear. But anti-surveillers have a similar problem: why accept a statement of "trust us" from a shadowy on-line marketplace any more than a statement of "trust us" from a shadowy government department?