Innovation’s Darker Future: Biosecurity, Technologies of Mass Empowerment, and the Constitution

Introduction

Using gene-splicing equipment purchased online, along with other common laboratory tools and materials, a molecular biology graduate student undertakes a secret project to recreate the smallpox virus. Not content merely to bring back an extinct virus to which the general population is now largely naïve, he uses public source material to enhance the virus’s lethality, enabling it to infect even those whom the government rushes to immunize. His activities raise no eyebrows at his university lab, where synthesizing and modifying complex genomes is even more commonplace and mundane by 2025 than it is today. While time-consuming, the task is not especially difficult. And when he finishes, he infects himself and, just as symptoms begin to emerge, proceeds to have close contact with as many people, from as many walks of life, as he can in a short time. He then kills himself before becoming seriously ill and is buried by his grieving family, neither they nor the authorities having any idea that he was infected.

The outbreak begins just shy of two weeks later and seems to come from everywhere at once. Because of the virus’s long incubation period, it has spread far by the time the disease first manifests itself. Initial efforts to immunize swaths of the population prove of limited utility because of the perpetrator’s manipulations of the viral genome. Identifying the perpetrator alone takes many months of forensic work. In the meantime, authorities have no idea whether the country—and quickly the world—has just suffered an attack by a rogue state, a terrorist group, or a lone individual. Dozens of groups around the world claim responsibility for the attack, several of them plausibly.

The government responds on many levels: It moves aggressively to quarantine those infected with the virus, detaining large numbers of people in the process. It launches a major surveillance program targeting the enormous number of people with access to gene synthesis equipment and the capacity to modify viral genomes, in an effort to identify future threats from within American and foreign labs. It attempts to restrict access to information and publications about the synthesis and manipulation of pathogenic organisms—suddenly classifying large amounts of previously public literature and blocking publication of journal articles that it regards as high-risk. It requires that gene synthesis equipment electronically monitor its own use, report to the government any attempt to construct sequences of concern, and create an audit trail of all synthesis activity. And it asks scientists all over the world to report on one another when they see behaviors that raise concerns. Each of these steps produces significant controversy and each, in different ways, faces legal challenge.
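It is worth pausing on the monitoring mandate, the most mechanically concrete of these measures. The sketch below, in Python, suggests roughly what on-device screening and audit logging could look like; the watchlist contents, the fixed 30-base exact-match rule, and the log format are illustrative assumptions of mine, not any real screening standard.

    # A minimal sketch of screening-and-audit logic of the sort the scenario
    # imagines being built into gene synthesis equipment. Everything here is
    # assumed for illustration: the watchlist, the 30-base exact-match rule,
    # and the JSON-lines log format.
    import hashlib
    import json
    import time

    K = 30  # match window size; real screening protocols would differ

    def kmers(seq, k=K):
        """Return every k-base window of a DNA sequence."""
        seq = seq.upper()
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    # Hypothetical "sequences of concern"; a real list would hold curated
    # fragments of select-agent genomes and would itself be tightly controlled.
    CONCERN_SEQUENCES = ["ATGACCGTTAGA" * 5]
    WATCHLIST = set()
    for s in CONCERN_SEQUENCES:
        WATCHLIST |= kmers(s)

    def screen_and_log(order_id, sequence, log_path="synthesis_audit.jsonl"):
        """Screen one synthesis request and append an audit record.
        Returns True if the run may proceed, False if it should be
        blocked and reported."""
        flagged = bool(kmers(sequence) & WATCHLIST)
        record = {
            "time": time.time(),
            "order": order_id,
            # Log a hash rather than the sequence itself, so the audit
            # trail does not become a library of dangerous constructs.
            "seq_sha256": hashlib.sha256(sequence.encode()).hexdigest(),
            "flagged": flagged,
        }
        with open(log_path, "a") as log:
            log.write(json.dumps(record) + "\n")
        return not flagged

A synthesizer running such a check would refuse any order for which screen_and_log returned False and transmit the flagged record to the authorities, while the append-only log would supply the audit trail the scenario describes.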

The future of innovation has a dark and dangerous side, one we dislike talking about and often prefer to pretend does not, in fact, loom before us. Yet it is a side that the Constitution seems highly likely to have to confront—in 2025, at some point later, or tomorrow. There is nothing especially implausible about the scenario I have just outlined—even based on today’s technology. By 2025, if not far sooner, we will likely have to confront the individual power to cause epidemics, and probably other catastrophic capabilities as well.

Technologies that put destructive power traditionally confined to states in the hands of small groups and individuals have proliferated remarkably far. That proliferation is accelerating at an awe-inspiring clip across a number of technological platforms. Eventually, it’s going to bite us hard. The response to, or perhaps the anticipation of, that bite will put considerable pressure on constitutional norms in any number of areas.

We tend to think of the future of innovation in terms of intellectual property issues and such regulatory policy questions as how aggressive antitrust enforcement ought to be and whether the government should require Internet neutrality or give carriers latitude to favor certain content over other content. Broadly speaking, these questions translate into disputes over which government policies best foster innovation—with innovation presumed to be salutary and the government, by and large, in the position of arbiter between competing market players.

But confining the discussion of the future of innovation to the relationship among innovators ignores the relationship between innovators and government itself. And government has unique equities in the process of innovation, both because it is a huge consumer of products and because it has singular responsibilities in society at large. Chief among these is security. Quite apart from the question of who owns the rights to certain innovations, government has a stake in who is developing what—at least to the extent that some innovations carry significant capacity for misuse, crime, death, and mayhem.

This problem is not new—at least not conceptually. The character of the mad scientist muh-huh-huhing to himself as he swirls a flask and promises, “Then I shall destroy the world!” is the stuff of old movies and cartoons. In literature, versions of it date back at least to Mary Shelley in the early 19th century. Along with literary works set in technologically sophisticated dystopias, it is one of the ways in which our society represents fears of rapidly evolving technology.

The trouble is that it is no longer the stuff of science fiction alone. The past few decades have seen an ever-growing ability of relatively small, non-state groups to wage asymmetric conflicts against even powerful states. The groups in question have been growing smaller, more diffuse, and more loosely knit, and technology is both facilitating that development and dramatically increasing these groups’ ultimate lethality. This trend is not futuristic. It is already well under way across a number of technological platforms—most prominently the life sciences and computer technology. For reasons I shall explain, the trend seems likely to continue, probably even to accelerate.

The technologies in question, unlike the technologies associated with nuclear warfare, were developed not in a classified setting but in the public domain. They are getting cheaper and proliferating ever more widely for the most noble and innocent of reasons: the desire to cure disease and to increase human connectivity, efficiency, and capability. As a global community, we are becoming ever more dependent upon these technologies for health, agriculture, communications, jobs, economic growth and development, even culture. Yet these same technologies—and these same dependencies—make us enormously vulnerable to bad actors with access to them.

Whereas once only states could contemplate killing huge numbers of civilians with a devastating drug-resistant illness or taking down another country’s power grids, now every responsible government must contemplate the possibility of ever smaller groupings of people undertaking what are traditionally understood as acts of war. We have already seen the migration of the destructive power of states to global non-state actors, particularly Al Qaeda, and we can reasonably expect that migration to progress still further. It ultimately threatens to give every individual with a modest education and a certain level of technical proficiency the power to bring about catastrophic damage. Whereas governments once had to contemplate as strategic threats only one another and a select group of secessionist militias, and could engage with individuals as citizens or subjects, this trend ominously promises to force governments to regard individuals as potential strategic threats. Think of a world composed of billions of people walking around with nuclear weapons in their pockets.

If that sounds hyperbolic, it is probably only a little bit hyperbolic. As I shall explain, the current threat landscape in the life sciences—the area I use in this paper as a kind of case study—is truly terrifying. (No less so is the cyber arena, an area Jack Goldsmith is treating in detail and where attacks are already commonplace.) The landscape is likely to grow only scarier as the costs of gene synthesis and genetic engineering technologies more generally continue to plummet, as their capacity continues to grow, and as the number of people capable, individually or in small groups, of deploying them catastrophically continues to expand. The more one studies the literature on biothreats, in fact, the more puzzling it becomes that a catastrophic attack has not yet happened.

Yet biothreats alone are not the problem; the full problem is the broader category of threats they represent. Over the coming decades, we are likely to see other areas of technological development that put enormous power in the hands of individuals. The issue will not simply be managing the threat of biological terrorism, or biosecurity more broadly. It will be defining a relationship between the state and individuals, with respect to the use and development of such dramatically empowering new technologies, that both permits the state to protect security and insists that it do so without becoming oppressive.

To state this problem is to raise constitutional questions, and I’m not entirely sure that a solution to it exists. Governments simply cannot regard billions of people around the world as potential strategic threats without that fact fundamentally changing the way states and those individuals interact. If I am right that the biotech revolution potentially allows individuals to stock their own WMD arsenals, and that other emergent technologies will create similar opportunities, government will eventually respond—and dramatically. It will have no choice.

But exactly how to respond—either in reaction or in anticipation—is far from clear. Both the knowledge and the technologies themselves have already proliferated so widely that the cat really is out of the bag; even the most repressive measures won’t suffice to stuff it back in. Indeed, the options seem rather limited and all quite bad: intrusive, oppressive, and unlikely to do much good.

And it is precisely this combination of a relatively low probability of policy success, high costs to other interests, and constitutional difficulties that will produce, I suspect, perhaps the most profound change to the Constitution emanating from this class of technologies. This change will not, ironically, be to the Bill of Rights but to the Constitution’s most basic assumptions with respect to security. That is, the continued proliferation of these technologies will almost certainly precipitate a significant erosion of the federal government’s monopoly over security policy. It will tend to distribute responsibility for security to thousands of private sector and university actors whom the technology empowers every bit as much as it does would-be terrorists and criminals. This point is perhaps clearest in the context of cybersecurity, but it is also true in the biotech arena, where the best defense against biohazards, man-made and naturally occurring alike, is good public health infrastructure and more of the same basic research that makes biological attacks possible. Most of this research is going on in private companies and universities, not in government; the biotech industry is not composed of defense contractors accustomed to acting as private sector arms of the state.

Increasingly, security will thus take on elements of the distributed application, a term the technology world uses for programs that rely on large numbers of networked computers working together to perform tasks to which no one system could or would devote adequate resources. While state power will certainly have a role here—and probably an uncomfortable one involving a lot of intrusive surveillance—it may not be the role that government has played in security in the past.