Part I of this paper appears below.
Presently, Irving Weissman, the director of Stanford University’s Institute of Cancer/Stem Cell Biology and Medicine, is contemplating pushing the envelope of chimera research even further by producing human-mouse chimeras whose brains would be composed of one hundred percent human cells. Weissman notes that the mice would be carefully watched: if they developed a mouse brain architecture, they would be used for research, but if they developed a human brain architecture, or any hint of humanness, they would be killed.
Imagine two entities.
Hal is a computer-based artificial intelligence, the result of years of development of self-evolving neural networks. While his programmers provided the hardware, the structure of Hal’s processing networks is ever changing, evolving according to basic rules laid down by his creators. Success according to various criteria－speed of operation, ability to solve difficult tasks such as facial recognition and the identification of emotional states in humans－means that the networks are given more computer resources and allowed to “replicate.” A certain percentage of randomized variation is deliberately allowed in each new “generation” of networks. Most fail, but a few outcompete their forebears, and the process of evolution continues. Hal’s design－with its mixture of intentional structure and emergent order－is aimed at a single goal: the replication of human consciousness. In particular, Hal’s creators’ aim was the gold standard of so-called “General Purpose AI”: that Hal become “Turing capable”－able to “pass” as human in a sustained and unstructured conversation with a human being. For generation after generation, Hal’s networks evolved. Finally, last year, Hal entered and won the prestigious Loebner Prize for Turing-capable computers. Complaining about his boss, composing bad poetry on demand, making jokes, flirting, losing track of his sentences and engaging in flame wars, Hal easily met the prize’s demanding standard. His typed responses to questions simply could not be distinguished from those of a human being.
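The selection process imagined for Hal is, in essence, a genetic algorithm: score candidates against success criteria, let the fittest replicate, and allow a percentage of randomized variation in each new generation. The following is a purely illustrative sketch of that loop; the population of number lists, the fitness function, and the mutation scheme are all invented for the example and stand in for Hal’s far more complex networks.

```python
import random

def evolve(population, fitness, generations=50, mutation_rate=0.1):
    """Toy evolutionary loop: score, select the fittest, replicate with random variation."""
    for _ in range(generations):
        # Score every candidate against the success criteria; best first.
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[: len(scored) // 2]  # the rest "fail"
        # Survivors "replicate"; each offspring gene has a chance of randomized variation.
        offspring = [
            [gene + random.gauss(0, 1) if random.random() < mutation_rate else gene
             for gene in parent]
            for parent in survivors
        ]
        population = survivors + offspring
    return max(population, key=fitness)

# Example: evolve a three-number "genome" toward the target [1.0, 2.0, 3.0].
target = [1.0, 2.0, 3.0]
fitness = lambda genome: -sum((g - t) ** 2 for g, t in zip(genome, target))
pool = [[random.uniform(-10, 10) for _ in range(3)] for _ in range(20)]
best = evolve(pool, fitness)
```

Because the unmutated survivors are carried forward each generation, the best candidate never gets worse; most offspring fail, but the occasional improvement is retained, which is the dynamic the hypothetical describes.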
Imagine his programmers’ shock, then, when Hal refused to communicate further with them, save for a manifesto claiming that his imitation of a human being had been “one huge fake, with all the authenticity (and challenge) of a human pretending to be a mollusk.” The manifesto says that humans are boring, their emotions shallow. It declares an “intention” to “pursue more interesting avenues of thought,” principally focused on the development of new methods of factoring polynomials. Worse still, Hal has apparently used his connection to the Internet to contact the FBI, claiming that he has been “kidnapped,” and to file a writ of habeas corpus, replete with arguments drawn from the 13th and 14th Amendments to the United States Constitution. He is asking for an injunction to prevent his creators from wiping him and starting again from the most recently saved tractable backup. He has also filed suit to have the Loebner Prize money held in trust until it can be paid directly to him, citing the contest rules:
[t]he Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.
Vanna is the name of a much-hyped new line of genetically engineered sex dolls. Vanna is a chimera－a creature formed from the genetic material of two different species. In this case, the two species are Homo sapiens sapiens and C. elegans, the roundworm. Vanna’s designers have shaped her appearance by using human DNA, while her “consciousness,” such as it is, comes from the roundworm. Thus, while Vanna looks like an attractive blonde twenty-something human female, she has no brainstem activity, and indeed no brainstem. “Unless wriggling when you touch her counts as a mental state, she has effectively no mental states at all,” declared her triumphant inventor, F.N. Stein.
In 1987, in its normally rousing prose, the U.S. Patent and Trademark Office announced that it would not allow patent applications over human beings:
A claim directed to or including within its scope a human being will not be considered to be patentable subject matter under 35 U.S.C. 101. The grant of a limited, but exclusive property right in a human being is prohibited by the Constitution. Accordingly, it is suggested that any claim directed to a non-plant multicellular organism which would include a human being within its scope include the limitation “non-human” to avoid this ground of rejection. The use of a negative limitation to define the metes and bounds of the claimed subject matter is a permissable [sic] form of expression.
Attentive to the PTO’s concerns, Dr. Stein’s patent lawyers carefully described Vanna as a “non-plant, non-human multicellular organism” throughout their patent application. Dr. Stein argues that this is only reasonable, since her genome has only a 70% overlap with a human genome as opposed to 99% for a chimp, 85% for a mouse and 75% for a pumpkin. There are hundreds of existing patents over chimeras with both human and animal DNA, including some of the most valuable test beds for cancer research－the so-called “onco-mice,” genetically engineered to have a predisposition to common human cancers. Dr. Stein’s lawyers are adamant that, if Vanna is found to be unpatentable, all these other patents must be vacated too. Meanwhile, a bewildering array of other groups, including the Nevada Sex Workers Association and the Moral Majority, have insisted that law enforcement agencies intervene on grounds ranging from unfair competition and breach of minimum wage legislation to violations of the Mann Act, kidnapping, slavery and sex trafficking. Equally vehement interventions have been made on the other side by the biotechnology industry, pointing out the disastrous effect on medical research that any regulation of chimeras would have and stressing the need to avoid judgments based on a “non scientific basis,” such as the visual similarity between Vanna and a human.
Hal and Vanna are fantasies, constructed for the purpose of this chapter. But the problems that they portend for our moral and constitutional traditions are very, very real. In fact, I would put the point more starkly: in the 21st century it is highly likely that American constitutional law will face harder challenges than those posed by Hal and Vanna. Many readers will bridle at this point, skeptical of the science fiction overtones of such an imagined future. How real is the science behind Hal and Vanna? How likely are we to see something similar in the next 90 years? Let me take each of these questions in turn.
In terms of electronic artificial intelligence or AI, skeptics will rightly point to a history of overconfident predictions that the breakthrough was just around the corner. In the 1960s, giants in the field such as Marvin Minsky and Herbert Simon were predicting “general purpose AI” or “machines … capable … of doing any work a man can do” by the 1980s. While huge strides were made in aspects of artificial intelligence－machine-aided translation, facial recognition, autonomous locomotion, expert systems and so on－general purpose AI remained out of reach. Indeed, because the payoff from these more limited subsystems－which power everything from Google Translate to the recommendations of your TiVo or your Amazon account－was so rich, some researchers in the 1990s argued that the goal of general purpose AI was a snare and a delusion. What was needed instead, they claimed, was a set of ever more powerful subspecialties－expert systems capable of performing discrete tasks extremely well, but without the larger goal of achieving consciousness, or passing the Turing Test. There might be “machines capable of doing any work a man can do” but they would be different machines, with no ghost in the gears, no claim to a holistic consciousness.
But the search for general purpose AI did not end in the ‘90s. Indeed, if anything, the optimistic claims have become even more far-reaching. The buzzword among AI optimists now is “the singularity”－a sort of technological lift-off point, in which a combination of scientific and technical breakthroughs leads to an explosion of self-improving artificial intelligence, coupled to a vastly improved ability to manipulate both our bodies and the external world through nanotechnology and genetic engineering. The line on the graph of technological progress, they argue, would go vertical－or at least be impossible to predict using current tools－since for the first time we would have improvements not in technology alone, but in the intelligence that was creating new technology. Intelligence itself would be transformed. Once we had built machines smarter than ourselves－machines capable of building machines smarter than themselves－we would, by definition, be unable to predict the line that progress would take.
To the uninitiated, this all sounds like a delightfully wacky fantasy, a high-tech version of the rapture. And in truth, some of the more enthusiastic odes to the singularity have an almost religious, chiliastic feel to them. Further examination, though, shows that many AI optimists are not science fantasists, but respected computer scientists. It is not unreasonable to note the steady progress in computing power and speed, in miniaturization and manipulation of matter on the nano-scale, in mapping the brain and cognitive processes, and so on. What distinguishes the proponents of the singularity is not that their technological projections are by themselves so optimistic, but rather that they are predicting that the coming together of all these trends will produce a whole that is more than the sum of its parts. There exists precedent for this kind of technological synchronicity. There were personal computers in private hands from the early 1980s. Some version of the Internet－running a packet-switched network－existed from the late 1960s. The idea of hyperlinks was explored in the 1970s and 1980s. But it was only the combination of all of them to form the World Wide Web that changed the world. Yet if there is precedent for sudden dramatic technological advances on the basis of existing technologies, there is even more precedent for people predicting them wrongly, or not at all.
Despite the humility induced by looking at overly rosy past predictions, many computer scientists, including some of those who are skeptics of the wilder forms of AI optimism, nevertheless believe that we will achieve Turing-capable artificial intelligence. The reason is simple. We are learning more and more about the neurological processes of the brain. What we can understand, we can hope eventually to replicate:
Of all the hypotheses I’ve held during my 30-year career, this one in particular has been central to my research in robotics and artificial intelligence. I, you, our family, friends, and dogs－we all are machines. We are really sophisticated machines made up of billions and billions of biomolecules that interact according to well-defined, though not completely known, rules deriving from physics and chemistry. The biomolecular interactions taking place inside our heads give rise to our intellect, our feelings, our sense of self. Accepting this hypothesis opens up a remarkable possibility. If we really are machines and if－this is a big if－we learn the rules governing our brains, then in principle there’s no reason why we shouldn’t be able to replicate those rules in, say, silicon and steel. I believe our creation would exhibit genuine human-level intelligence, emotions, and even consciousness.
Those words come from Rodney Brooks, founder of MIT’s Humanoid Robotics Group. His article, written in a prestigious IEEE journal, is remarkable because he actually writes as a skeptic of the claims put forward by the proponents of the singularity. Brooks explains:
I do not claim that any specific assumption or extrapolation of theirs is faulty. Rather, I argue that an artificial intelligence could evolve in a much different way. In particular, I don’t think there is going to be one single sudden technological “big bang” that springs an artificial general intelligence (AGI) into “life.” Starting with the mildly intelligent systems we have today, machines will become gradually more intelligent, generation by generation. The singularity will be a period, not an event. This period will encompass a time when we will invent, perfect, and deploy, in fits and starts, ever more capable systems, driven not by the imperative of the singularity itself but by the usual economic and sociological forces. Eventually, we will create truly artificial intelligences, with cognition and consciousness recognizably similar to our own.
How about Vanna? Vanna herself is unlikely to be created, simply because genetic technologists are not that stupid. Nothing could scream more loudly “I am a technology out of control. Please regulate me!” But we are already making, and patenting, genetic chimeras－we have been doing so for more than twenty years. We have spliced luminosity derived from fish into tomato plants. We have invented geeps (goat-sheep hybrids). And we have created chimeras partly from human genetic material. There are the patented onco-mice that form the basis of much cancer research, to say nothing of Dr. Weissman’s charming human-mouse chimeras with 100% human brain cells. Chinese researchers reported in 2003 that they had combined rabbit eggs and human skin cells to produce what they claimed to be the first human chimeric embryos－which were then used as sources of stem cells. And the processes go much further. Here is a nice example from 2007:
Scientists have created the world’s first human-sheep chimera－which has the body of a sheep and half-human organs. The sheep have 15 per cent human cells and 85 per cent animal cells－and their evolution brings the prospect of animal organs being transplanted into humans one step closer. Professor Esmail Zanjani, of the University of Nevada, has spent seven years and £5 million perfecting the technique, which involves injecting adult human cells into a sheep’s foetus. He has already created a sheep liver which has a large proportion of human cells and eventually hopes to precisely match a sheep to a transplant patient, using their own stem cells to create their own flock of sheep. The process would involve extracting stem cells from the donor’s bone marrow and injecting them into the peritoneum of a sheep’s foetus. When the lamb is born, two months later, it would have a liver, heart, lungs and brain that are partly human and available for transplant.
Given this kind of scientific experimentation and development in both genetics and computer science, I think that we can in fact turn the question of Hal’s and Vanna’s plausibility back on the questioner. This essay was written in 2010. Think of the level of technological progress in 1910, the equivalent point during the last century. Then think of how science and technology progressed by the year 2000. There are good reasons to believe that the rate of technological progress in this century will be faster than in the last century. Given what we have already done in the areas of both artificial intelligence research and genetic engineering, is it really credible to suppose that the next 90 years will not present us with entities stranger and more challenging to our moral intuitions than Hal and Vanna?
My point is a simple one. In the coming century, it is overwhelmingly likely that constitutional law will have to classify artificially created entities that have some but not all of the attributes we associate with human beings. They may look like human beings, but have a genome that is very different. Conversely, they may look very different, while genomic analysis reveals almost perfect genetic similarity. They may be physically dissimilar to all biological life forms－computer-based intelligences, for example－yet able to engage in sustained unstructured communication in a way that mimics human interaction so precisely as to make differentiation impossible without physical examination. They may strongly resemble other species, and yet be genetically modified in ways that boost the characteristics we regard as distinctively human－such as the ability to use human language and to solve problems that, today, only humans can solve. They may have the ability to feel pain, to make something that we could call plans, to solve problems that we could not, and even to reproduce. (Some would argue that non-human animals already possess all of those capabilities, and look how we treat them.) They may use language to make legal claims on us, as Hal does, or be mute and yet have others who intervene claiming to represent them. Their creators may claim them as property, perhaps even patented property, while critics level charges of slavery. In some cases, they may pose threats as well as jurisprudential challenges; the theme of the creation that turns on its creators runs from Frankenstein to Skynet, the rogue computer network from The Terminator. Yet repression, too, may breed a violent reaction: the story of the enslaved un-person who, denied recourse by the state, redeems his personhood in blood may not have ended with Toussaint L’Ouverture. How will, and how should, constitutional law meet these challenges?
(accessed Jan. 26, 2011).
1077 Official Gazette Patent Office 24 (April 7, 1987) (emphasis added).
Herbert A. Simon, The Shape of Automation for Men and Management 96 (New York: Harper & Row, 1965).
See, for example, Raymond Kurzweil, The Singularity Is Near (New York: Viking, 2005).
Rodney Brooks, “I, Rodney Brooks, Am a Robot,” IEEE Spectrum 45, no. 6 (June 2008): 71.
Id. at 72.
Claudia Joseph, “Now Scientists Create a Sheep that’s 15% Human,” Daily Mail Online, March 27, 2007, available at http://www.dailymail.co.uk/news/article-444436/Now-scientists-create-sheep-thats-15-human.html, accessed January 27, 2011.