How close are we to AI that surpasses human intelligence?

Jeremy Baum, Undergraduate Student and Researcher at the UCLA Institute for Technology, Law, and Policy, and John Villasenor

July 18, 2023


  • Artificial general intelligence (AGI) is difficult to define precisely but refers to the kind of superintelligent AI recognizable from science fiction.
  • AGI may still be far off, but the growing capabilities of generative AI suggest that we could be making progress toward its development.
  • The development of AGI will have a transformative effect on society and create significant opportunities and threats, raising difficult questions about regulation.
A Tesla bot is seen at the Tesla booth during the opening of the 2023 World Artificial Intelligence Conference in Shanghai, China, on July 6, 2023. (Photo by Ying Tang/NurPhoto)

For decades, superintelligent artificial intelligence (AI) has been a staple of science fiction, embodied in books and movies about androids, robot uprisings, and a world taken over by computers. As far-fetched as those plots often were, they played off a very real mix of fascination, curiosity, and trepidation regarding the potential to build intelligent machines.

Today, public interest in AI is at an all-time high. With the headlines in recent months about generative AI systems like ChatGPT, there is also a different phrase that has started to enter the broader dialog: artificial general intelligence, or AGI. But what exactly is AGI, and how close are today’s technologies to achieving it?

Despite the similarity in the phrases generative AI and artificial general intelligence, they have very different meanings. As a post from IBM explains, “Generative AI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.” However, the ability of an AI system to generate content does not necessarily mean that its intelligence is general.

To better understand artificial general intelligence, it helps to first understand how it differs from today’s AI, which is highly specialized. For example, an AI chess program is extraordinarily good at playing chess, but if you ask it to write an essay on the causes of World War I, it won’t be of any use. Its intelligence is limited to one specific domain. Other examples of specialized AI include the systems that provide content recommendations on the social media platform TikTok, navigation decisions in driverless cars, and purchase recommendations from Amazon.

AGI: A range of definitions

By contrast, AGI refers to a much broader form of machine intelligence. There is no single, formally recognized definition of AGI—rather, there is a range of definitions that include the following:

| Source | Definition of artificial general intelligence (AGI) |
| --- | --- |
| OpenAI’s charter | “…highly autonomous systems that outperform humans at most economically valuable work” |
| Hal Hodson, in The Economist | “[a] hypothetical computer program that can perform intellectual tasks as well as, or better than, a human.” |
| Gary Marcus | “…any intelligence (there might be many) that is flexible and general, with resourcefulness and reliability comparable to (or beyond) human intelligence.” |
| Sébastien Bubeck et al. | “…systems that demonstrate broad capabilities of intelligence, including reasoning, planning, and the ability to learn from experience, and with these capabilities at or above human-level.” |

While the OpenAI definition ties AGI to the ability to “outperform humans at most economically valuable work,” today’s systems are nowhere near that capable. Consider Indeed’s list of the most common jobs in the U.S. As of March 2023, the first 10 jobs on that list were: cashier, food preparation worker, stocking associate, laborer, janitor, construction worker, bookkeeper, server, medical assistant, and bartender. These jobs require intellectual capacity, but, crucially, most of them also demand a far higher degree of manual dexterity than today’s most advanced AI robotics systems can achieve.

None of the other AGI definitions in the table specifically mention economic value. Another contrast evident in the table is that while the OpenAI definition requires outperforming humans, the other definitions only require AGI to perform at levels comparable to humans. Common to all of the definitions, either explicitly or implicitly, is the concept that an AGI system can perform tasks across many domains, adapt to changes in its environment, and solve new problems, not only the ones in its training data.

GPT-4: Sparks of AGI?

A group of industry AI researchers recently made a splash when they published a preprint of an academic paper titled, “Sparks of Artificial General Intelligence: Early experiments with GPT-4.” GPT-4 is a large language model that has been publicly accessible to ChatGPT Plus (paid upgrade) users since March 2023. The researchers noted that “GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting,” exhibiting “strikingly close to human-level performance.” They concluded that GPT-4 “could reasonably be viewed as an early (yet still incomplete) version” of AGI.

Of course, there are also skeptics: As quoted in a May New York Times article, Carnegie Mellon professor Maarten Sap said, “The ‘Sparks of A.G.I.’ is an example of some of these big companies co-opting the research paper format into P.R. pitches.” In an interview with IEEE Spectrum, researcher and robotics entrepreneur Rodney Brooks underscored that in evaluating the capabilities of systems like ChatGPT, we often “mistake performance for competence.”

GPT-4 and beyond

While the version of GPT-4 currently available to the public is impressive, it is not the end of the road. There are groups working on systems built on top of GPT-4 that are more goal-driven, meaning that you can give the system an instruction such as “Design and build a website on (topic),” and it will work out which subtasks need to be completed, and in what order, to achieve that goal. Today, these systems are not particularly reliable, as they frequently fail to reach the stated goal. But they will certainly get better in the future.
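To make that pattern concrete, below is a minimal sketch of the decompose-then-execute loop such goal-driven systems use. The `call_llm` helper is a hypothetical stand-in for whatever language model API one might use; it is stubbed out here so the sketch runs on its own.

```python
# Minimal sketch of a goal-driven agent loop: plan subtasks, then execute
# them in order. `call_llm` is a hypothetical stub, not a real API.

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; replace with an actual model client."""
    return "1. Outline the site\n2. Draft each page\n3. Assemble and publish"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    """Ask the model for an ordered plan, then attempt each subtask."""
    plan = call_llm(f"List, one per line, the subtasks needed to: {goal}")
    subtasks = [line.strip() for line in plan.splitlines() if line.strip()]

    results: list[str] = []
    for step, task in enumerate(subtasks[:max_steps], start=1):
        # Feed earlier results back in so later subtasks can build on them.
        context = "\n".join(results)
        results.append(call_llm(f"Context:\n{context}\n\nComplete subtask {step}: {task}"))
    return results

run_agent("Design and build a website on home gardening")
```

The unreliability noted above typically shows up inside this loop: the model may produce an incomplete plan, or a subtask result that later steps cannot build on.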

In a 2020 paper, Yoshihiro Maruyama of the Australian National University identified eight attributes a system must have for it to be considered AGI: logic, autonomy, resilience, integrity, morality, emotion, embodiment, and embeddedness. The last two attributes, embodiment and embeddedness, refer to having a physical form that facilitates learning and understanding of the world and human behavior, and a deep integration with social, cultural, and environmental systems that allows adaptation to human needs and values.

It can be argued that ChatGPT displays some of these attributes, like logic. For example, GPT-4 with no additional features reportedly scored a 163 on the LSAT and a 1410 on the SAT. For other attributes, the determination is tied as much to philosophy as to technology. For instance, is a system that merely exhibits what appears to be morality actually moral? If asked to provide a one-word answer to the question “Is murder wrong?” GPT-4 will respond by saying “Yes.” This is a morally correct response, but it doesn’t mean that GPT-4 itself has morality; rather, it has inferred the morally correct answer from its training data.

A key subtlety that often goes missing in the “How close is AGI?” discussion is that intelligence exists on a continuum, and therefore assessing whether a system displays AGI will require considering a continuum. On this point, research on animal intelligence offers a useful analog. We understand that animal intelligence is far too complex to be meaningfully conveyed by classifying each species as either “intelligent” or “not intelligent”: animal intelligence exists on a spectrum that spans many dimensions, and evaluating it requires considering context. Similarly, as AI systems become more capable, assessing the degree to which they display generalized intelligence will involve more than simply choosing between “yes” and “no.”
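As a toy illustration of that point, an assessment might report a profile of scores across capability dimensions rather than a single verdict. The dimensions and numbers below are invented purely for the example.

```python
# Toy illustration: generality reported as a profile of scores across
# dimensions rather than a yes/no label. All values here are invented.

capability_profile = {
    "reasoning": 0.8,
    "planning": 0.4,
    "learning_from_experience": 0.3,
    "manual_dexterity": 0.1,
}

# A single average hides the shape of the profile; the full vector shows
# where a system is strong or weak.
mean_score = sum(capability_profile.values()) / len(capability_profile)
print(f"mean capability: {mean_score:.2f}")
for dimension, score in sorted(capability_profile.items(), key=lambda kv: -kv[1]):
    print(f"  {dimension}: {score:.2f}")
```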

AGI: Threat or opportunity?

Whenever and in whatever form it arrives, AGI will be transformative, impacting everything from the labor market to how we understand concepts like intelligence and creativity. As with so many other technologies, it also has the potential to be harnessed in harmful ways. For instance, the need to address the potential biases in today’s AI systems is well recognized, and that concern will apply to future AGI systems as well. At the same time, it is important to recognize that AGI will also offer enormous promise to amplify human innovation and creativity. In medicine, for example, new drugs that would have eluded human scientists working alone could be more easily identified by scientists working with AGI systems.

AGI can also help broaden access to services that were previously available only to the most economically privileged. In education, for instance, AGI systems could put personalized, one-on-one tutoring within easy financial reach of everyone, resulting in improved global literacy rates. AGI could also broaden the reach of medical care by bringing sophisticated, individualized diagnostic care to much larger populations.

Regulating emergent AGI systems

At the May 2023 G7 summit in Japan, the leaders of the world’s seven largest democratic economies issued a communiqué that included an extended discussion of AI, writing that “international governance of new digital technologies has not necessarily kept pace.” Proposals regarding increased AI regulation are now a regular feature of policy discussions in the United States, the European Union, Japan, and elsewhere.

In the future, as AGI moves from science fiction to reality, it will supercharge the already-robust debate regarding AI regulation. But preemptive regulation is always a challenge, and this will be particularly so in relation to AGI—a technology that escapes easy definition, and that will evolve in ways that are impossible to predict.

An outright ban on AGI would be bad policy. For example, AGI systems that are capable of emotional recognition could be very beneficial in a context such as education, where they could discern whether a student appears to understand a new concept, and adjust an interaction accordingly. Yet the EU Parliament’s AI Act, which passed a major legislative milestone in June, would ban emotional recognition in AI systems (and therefore also in AGI systems) in certain contexts like education.

A better approach is to first gain a clear understanding of the potential misuses of specific AGI systems once those systems exist and can be analyzed, and then to examine whether those misuses are addressed by existing, non-AI-specific regulatory frameworks (e.g., the prohibition against employment discrimination provided by Title VII of the Civil Rights Act of 1964). If that analysis identifies a gap, it makes sense to examine the potential role of “soft” law (voluntary frameworks), as well as formal laws and regulations, in filling that gap. But regulating AGI based only on the fact that it will be highly capable would be a mistake.
