A Mutual Aid Treaty for the Internet

Introduction

By 2030 we will have all of humanity’s books online.  Google’s ambitious book scanning project – or something like it – will by then have generated high-quality, searchable scans of nearly every book available in the world.  These scans will be available online to library partners and individual users with certain constraints—what those will be, we do not know yet.  It will be a library in the cloud, one far larger than any real-world library could hope to be.  It will make no sense for a library to store thousands of physical books in its basement.  Rather, under a Google Books plan, there will be one master copy of each book in Google’s possession.[1]  Library partners will display and access it according to particular privileges, and a user will be able to access it from anywhere.  One master book shared among many drastically lowers the costs of updates – or censorship.  For example, if one book in the system contains copyright-infringing material, the rights-holder can get a court order requiring the infringing pages to be deleted from the Google server.  Google has no choice but to comply, at least as long as it continues to have tangible interests within the country demanding the change.  This vulnerability affects every text distributed through the Google platform.  Anyone who does not own a physical copy of the book—and a means to search it to verify its integrity—will now lack access to that material.  Add in orders arising from perceived defamation or any other cause of action, and holes begin to appear in the historical record in a way they did not before.

Some people – and I am in this camp – are alarmed by this prospect; others regard it as important but not urgent. Still others see this as a feature, not a bug. What’s the constitutional problem, after all?  Court orders in the U.S. are subject to judicial review (indeed, they issue from judges), so can’t they be made to harmonize with the First Amendment?  Not so easily.  Current constitutional doctrine has little to say about redactions or impoundment of material after it’s had its day in court.  What has protected such material from thoroughgoing and permanent erasure is the inherent leakiness of a distributed system where books are found everywhere: in libraries, bookstores, and people’s homes.  By centralizing (and, to be sure, making more efficient) the storage of content, we are creating a world in which all copies of once-censored books like Candide, The Call of the Wild, and Ulysses could have been permanently destroyed at the time of the censoring and could not be studied or enjoyed after subsequent decision-makers lifted the ban.[2]  Worse, content that may be every bit as important—but not as famous—can be quietly redacted or removed without anyone’s even noticing. Orders need only be served on a centralized provider, rather than on one bookstore or library at a time.

The systems to make this happen are being designed and implemented right now, and could become fully dominant over the decades this volume asks us to chart.  One helpful thought experiment flows from an incident that could not have been invented better than it actually happened.  Somebody offers, through Amazon, a Kindle version of 1984 by George Orwell.[3]  People buy it.  Later, Amazon has reason to think there is a copyright issue that was not cleared by the source who put it on Amazon.  Amazon panics and sends a signal that actually deletes 1984 from all the Kindles.  It is as if the user never bought 1984.  It is current, not future, technology that makes this possible.  The only reason this is not a major issue is that other copies of 1984 are so readily available – precisely because digitally centralized copies have yet to fully take root.  This is not literally cloud computing; for the period of time the user possessed 1984, it technically resided physically on his or her Kindle.  But because it is not the user’s to copy or to process, and it is within Amazon’s power to reach in and revise or manipulate it, it is as good as a Google Books configuration—or, in this case, as bad.

By 2030, a majority of global communications, commerce, and information storage will take place online. Much of this activity will be routed through a small set of corporate, governmental, and institutional actors. For much but not all of our online history, a limited number of corporate actors have framed the way people interact online.  In the 1990s, it was the online service providers such as Prodigy and AOL that regulated our nascent digital interactions. Today, most people have direct access to the Web, but now their online lives are shaped by consolidating corporate search engines, content providers, and social networking sites.

With greater online centralization comes greater vulnerability, whether the centralization is public or private. Corporations are discrete entities, subject to pressures from repressive governments and criminal or terrorist threats. If Google’s services were to go offline tomorrow, the lives of millions of people would be disrupted.

This risk grows more acute as both the importance and centralization of online services increase. The Internet already occupies a vital space in public and private life. By 2030, that place will only be more vital. Threats to cybersecurity will thus present threats to human rights and civil liberties. Disruptions in access to cloud-hosted services will cut off the primary and perhaps the only socially safe mode of communication for journalists, political activists, and ordinary citizens in countries around the world. Corrupt governments need not bother producing their own propaganda if selective Internet filtering can provide just as sure a technique of controlling citizens’ access to and perception of news and other information. This is the Fort Knox problem: a single bottleneck in the path to data, or a single logical trove where we put all our eggs in one basket.

This scenario has implications for both free speech and cybersecurity.  The Fort Knox mentality exposes vulnerable speech to unilateral and obliterating censorship: losing the inherent leakiness of the present model means losing the benefits of the redundancies it creates.  These redundancies protect our civil liberties and security in ways as important as any constitutional rights scheme.  Indeed, the Constitution is interpreted with that reality in mind.  The ease with which an order can be upheld to impound copyright-infringing materials, or to destroy defamatory texts, can only be understood in the context of how difficult such actions are to undertake in the world of 2010.  Those difficulties make such actions rare and expensive.  Should the difficulties in censorship diminish or evaporate, there is no guarantee that compensating protections would be enacted by Congress or fashioned by judges.

Moreover, threats to free speech online come not only from governments wishing to censor through the mechanisms of law but from anyone wishing to censor through the mechanisms of cyberattack, such as denial of service.  A site with unpopular or sensitive content can find itself brought down – forced either to abandon its message or to seek shelter under the umbrella of a well-protected corporate information hosting apparatus.  Such companies may charge accordingly for their services – or, fearing that they will be swamped by a retargeted attack, refuse to host at all.  This is why the more traditional government censorship scenarios are best understood alongside their cybersecurity counterparts.

What appears safer in the short term for cybersecurity – putting all our bits in the hands of a few centralized corporations – makes traditional censorship easier.

The key to solving the Fort Knox problem is to make the current decentralized Web a more robust one.  This can be done by reforging the technological relationships sites and services have with each other on the Web, drawing conceptually from mutual aid treaties among states in the real world.  Mutual aid lets us envision a new socially and technologically based system of redundancy and security.
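
To make the idea concrete, here is a minimal sketch, in Python, of how such a treaty might operate at the technical level: each participating site keeps a copy of its partners’ content and serves that copy when a partner is knocked offline.  The site names, data structures, and functions below are hypothetical illustrations under those assumptions, not part of any actual proposal or deployed system.

    # A hypothetical sketch of mutual aid among websites: each signatory mirrors
    # its partners' content and serves the mirror if the partner goes down.
    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class Site:
        name: str
        content: Dict[str, str] = field(default_factory=dict)             # path -> document
        mirrors: Dict[str, Dict[str, str]] = field(default_factory=dict)  # partner -> its content
        online: bool = True

        def join_treaty(self, partner: "Site") -> None:
            # Exchange copies, as a mutual aid treaty would oblige both parties to do.
            self.mirrors[partner.name] = dict(partner.content)
            partner.mirrors[self.name] = dict(self.content)

    def resolve(sites: Dict[str, Site], origin: str, path: str) -> Optional[str]:
        # Try the origin first; if it is unreachable, fall back to any treaty partner.
        site = sites.get(origin)
        if site and site.online:
            return site.content.get(path)
        for other in sites.values():
            if other.online and origin in other.mirrors:
                return other.mirrors[origin].get(path)
        return None

    if __name__ == "__main__":
        a = Site("alpha.example", {"/essay": "an unpopular text"})
        b = Site("beta.example", {"/news": "local reporting"})
        a.join_treaty(b)
        sites = {s.name: s for s in (a, b)}
        a.online = False  # alpha is knocked offline, e.g. by a denial-of-service attack
        print(resolve(sites, "alpha.example", "/essay"))  # still readable via beta's mirror

The point of the sketch is only the shape of the relationship: redundancy arises from reciprocal agreement among peers rather than from concentrating content in one fortified host.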



[1] See generally Google Books Settlement Agreement, http://books.google.com/googlebooks/agreement/ (last visited Apr. 7, 2010).

[2] See John M. Ockerbloom, Books Banned Online, http://onlinebooks.library.upenn.edu/banned-books.html; Jonathan Zittrain, The Future of the Internet – And How to Stop It 116 (Yale Univ. Press 2008).

[3] See Brad Stone, Amazon Erases Two Classics from Kindle. (One Is ‘1984.’), N.Y. Times, July 18, 2009, at B1.