How insurance can mitigate AI risks

Editor's note:

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI Governance,” a series that identifies key governance and norm issues related to AI and proposes policy remedies to address the complex challenges associated with emerging technologies.

Introduction

There is a growing consensus that artificial intelligence (AI) will fundamentally transform our economy and society. A wide range of commercial applications are being used across many industries. Among these are anomaly detection (e.g., for fraud mitigation), image recognition (e.g., for public safety), speech recognition and natural language generation (e.g., for virtual assistants), recommendation engines (e.g., for robo-advice), and automated decision-making systems (e.g., for workflow applications).

While AI’s potential benefits are huge, the concerns are substantial as well. Fears center on discrimination, safety, privacy, ethics, and accountability for undesired outcomes. There are concerns that AI will usurp human decision-making, endanger cherished values, and further entrench bias and discrimination.

The goal of this policy brief is to outline the risks of AI and discuss how insurance can mitigate these risks. Insurance is a common way to protect people through the creation of risk pools. It works by spreading risks around and helping people deal with potential harms. Along with other steps, insurance can be part of the way we mitigate AI risks.
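To make the pooling mechanism concrete, the short simulation below (a minimal sketch, not part of the original brief) shows how the volatility of each member’s share of losses shrinks as a risk pool grows. The 2 percent loss probability and $100,000 claim size are illustrative assumptions.

```python
import random
import statistics

random.seed(0)

def annual_loss() -> float:
    """One policyholder's loss in one year: a 2% chance of a $100,000 claim."""
    return 100_000.0 if random.random() < 0.02 else 0.0

# Compare the volatility of per-member costs as the risk pool grows.
for pool_size in (1, 100, 10_000):
    # Simulate 1,000 years of pooled experience and track each member's share.
    per_member_cost = [
        sum(annual_loss() for _ in range(pool_size)) / pool_size
        for _ in range(1_000)
    ]
    print(
        f"pool of {pool_size:>6}: "
        f"mean ${statistics.mean(per_member_cost):>7,.0f}, "
        f"std dev ${statistics.stdev(per_member_cost):>7,.0f}"
    )
```

As the pool grows, the per-member cost converges toward the expected loss of about $2,000, which is what allows an insurer to charge a stable, predictable premium.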

What are AI risks?

AI applications use large amounts of data to recognize patterns and learn from datasets. There are three major types of AI, clustered by degree of intelligence: narrow, general, and super AI. Narrow AI systems are trained to perform very specific physical or cognitive tasks and operate within a limited context and a pre-defined range. In contrast, general AI is applicable to broader problem areas, has the capacity to assess its surroundings, and can respond to situations in an emotionally driven way, as humans do. Super AI systems, which would have the potential to outperform humans across a wide range of disciplines, have yet to be developed and are very likely still decades away.

Nearly all existing applications are narrow AI based on data analytics and standard machine-learning techniques. This means they focus on specific tasks, often with an eye toward improving customer value and operational efficiency. With increased deployment, a number of negative consequences are already visible today. Table 1 summarizes six categories of risks and illustrates how AI risks can arise from manifold sources. Some of these risks must be deemed realistic already today. More extreme scenarios are those described under control risks, such as the destruction of society by uncontrollable AI robots. Hypothetical existential risks of this sort could be triggered by general or super AI. These risks require an in-depth discussion at the national and international levels, including how insurance can mitigate them. As the table below shows, a number of risks linked to AI deployments have already become apparent.

Table 1: Potential negative consequences through the use of AI

Performance risks
  • Risk of errors: An AI model or algorithm might produce false predictions or large over- or under-predictions for a future event.
  • Risk of bias: AI can be manipulated to perpetuate societal or political biases (e.g., searching “hands” on Google shows only white hands).
  • Risk of opaqueness/“black box”: AI models and algorithms are not always fully transparent and can be difficult to understand.
  • Risk of explainability: The outcomes of AI models and algorithms are difficult to explain to non-experts.
  • Risk of unstable performance: AI that is not designed to deliver stable performance can disrupt operations.

Security risks
  • Cyber intrusion risks: AI-driven malware can disrupt business or society by attacking critical infrastructure or paralyzing systems.
  • Privacy risks: AI can be used to track and analyze an individual’s every move online and offline (e.g., China’s social credit system).
  • Open-source software risks: Open-source platforms that are no longer supported or updated by their creators can be highly vulnerable to AI-driven breaches.

Control risks
  • Risk of AI going “rogue”: A super-intelligent AI machine could threaten humanity through malfunctioning decision-making.
  • Inability to control malevolent AI: An error in a centralized system controlling AI algorithms could leave no way to stop AI machines.

Societal risks
  • Autonomous weapons proliferation: AI can be programmed to optimize autonomous weapon systems.
  • Risk of an “intelligence divide”: Inequity between groups due to differing access to the data, algorithms, and hardware that promote health, prosperity, and safety.

Economic risks
  • Job displacement risks: AI machines can replace human workers in many industries, leading to unemployment.
  • “Winner-takes-all” risk: A nation or company that is ahead of others in AI technology takes advantage of its position to dominate others.
  • Liability risk: A flawed AI model might trigger large losses for a user’s or developer’s business partners or customers.
  • Reputation risk: A flawed AI model that causes bad outcomes (e.g., economic losses or discriminatory results) damages the organization’s reputation.

Ethical risks
  • “Lack of values” risk: AI might be programmed without values, so that its decisions run in the opposite direction of what humans want.
  • Value/goal alignment risk: If the values or goals we set for AI are unclear, AI might operate with values or goals different from our own.

How insurance could help mitigate AI risks

The insurance industry plays an important role in modern economies and societies, especially when it comes to detecting and evaluating risks. Insurance companies put a price tag on risks and help protect people from possible harms. They compile data on losses and damages and create risk pools designed to socialize risk. As has been the case with other emerging problems, such as data breaches and cyber-intrusions, insurance helps people and businesses deal with problematic developments and protects them from the associated financial costs.
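As a rough sketch of what “putting a price tag on risks” involves, a premium can be decomposed into the expected loss (claim frequency times average severity) plus loadings for expenses and a risk margin. This is a standard textbook decomposition, not a method described in this brief, and the figures below are illustrative assumptions only.

```python
def pure_premium(claim_frequency: float, average_severity: float) -> float:
    """Expected annual loss per policy: claim frequency times average severity."""
    return claim_frequency * average_severity

def gross_premium(pure: float, expense_loading: float, risk_loading: float) -> float:
    """Add expense and risk loadings on top of the expected loss."""
    return pure * (1.0 + expense_loading + risk_loading)

# Illustrative figures only: a 2% annual claim probability, a $100,000 average
# claim, a 25% expense loading, and a 10% risk loading.
pure = pure_premium(claim_frequency=0.02, average_severity=100_000.0)
print(f"pure premium:  ${pure:,.0f}")                             # $2,000
print(f"gross premium: ${gross_premium(pure, 0.25, 0.10):,.0f}")  # $2,700
```

The hard part for AI risks is the first step: without loss data, the claim frequency and severity inputs to this calculation are unknown.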

We are still at the very beginning of understanding what the potential AI risks are, so there are neither empirical data nor theoretical models that estimate the potential loss frequency and magnitude of AI risks. Importantly, AI-related losses may well not be independent of one another, and independence of losses is an important prerequisite of insurability. AI risks might spread across the globe within short time periods, so the potential to geographically diversify risks (which is fundamental for the insurance industry) is in doubt. AI knows no geographical boundaries, and the systemic impact of particular losses on the global economy must be much better understood to improve the insurability of the AI risks noted above.
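A small simulation illustrates why independence matters. It compares a pool whose members suffer independent losses with one exposed to a common shock, such as a flaw in a single widely deployed AI model that harms all insureds at once. The probabilities and claim sizes are illustrative assumptions, not estimates of actual AI loss experience.

```python
import random
import statistics

random.seed(1)

POOL_SIZE = 10_000
CLAIM = 100_000.0   # loss per affected member
P_EVENT = 0.02      # annual probability of a loss event

def per_member_cost(common_shock: bool) -> float:
    """Average loss per pool member for one simulated year.

    With common_shock=True, a single systemic event (say, a flaw in a widely
    deployed AI model) hits every member at once, so losses are not independent.
    """
    if common_shock:
        # One global event determines everyone's loss together.
        total = POOL_SIZE * CLAIM if random.random() < P_EVENT else 0.0
    else:
        # Each member's loss event is independent of all the others.
        total = sum(CLAIM for _ in range(POOL_SIZE) if random.random() < P_EVENT)
    return total / POOL_SIZE

for shock in (False, True):
    yearly = [per_member_cost(shock) for _ in range(1_000)]
    label = "correlated (common shock)" if shock else "independent losses"
    print(f"{label:>25}: per-member std dev ${statistics.stdev(yearly):>8,.0f}")
```

With independent losses, pooling 10,000 members makes the per-member cost highly predictable; under the common shock, the pooled cost is exactly as volatile as a single policy, which is why correlated AI losses strain insurability.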

Those points notwithstanding, insurance companies have been at the forefront of detecting and mitigating various kinds of risks. With respect to cyber-risks, for example, the industry is actively working on standardized terminologies and policies for data breaches, hacking, and identity theft. Insurers are seeking to understand the risks that individuals and companies face and how to price products that would protect people from these developments.

The insurance industry needs to provide solutions for risks that are not well understood today. The table above lists many risks, some of which are more suitable for insurance protection than others. For example, “performance risks” are suitable for insurance and are typically covered by business interruption policies. The “security risks,” and the “liability risks” listed under economic risks, are covered by liability insurance.

But not all risks are insurable. Despite the many policies on the market today, the cyber-insurance market is extremely small, with low insured sums and narrow coverage restrictions that do not really address AI risks. Some business interruption risks (e.g., those arising from unstable AI systems) might be covered by existing business interruption policies, but AI risks are not explicitly covered in most insurance policies today. Moreover, many substantial risks, such as the “winner-takes-all” risk, cannot be addressed by insurance. The same is true for societal and ethical risks.

Conclusion

There is a need to better understand how AI changes our work and lives, and what corresponding risks arise. We need to collect data and develop scenarios to better understand the transformational process triggered by AI implementation, as well as how insurance can mitigate the resulting harms. The industry has a strong interest in the early detection of potential risks arising from new technologies, and, in the case of AI, the potential risks are extremely diverse. Some might be insurable, while others, such as existential risk, are outside the scope of the private sector.

Insurance firms should also consider their own role as early adopters and users of AI. Their own algorithms might malfunction, so that risks are wrongly assessed or potential insureds are left uncovered. Insurers might also overuse information from their AI systems, which can hurt customers’ privacy. Here, the insurance industry must take an active role in developing responsible AI and ensuring the confidential use of data.

Despite all the concerns noted in this essay, it is imperative to deal with the byproducts of this technology. AI will significantly change many sensitive sectors, from health care to education to e-commerce. Which data should be used for which purposes, and where should the boundary lie between meaningful optimization and unethical application? A broad discussion of these issues is needed to move forward.


The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative, and Google provides general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.


Footnotes

1. At the 2018 World Economic Forum, Google CEO Sundar Pichai stated, “Artificial intelligence is probably the most important thing humanity has ever worked on. I think of it as something more profound than electricity or fire.”
2. The categorization is adapted from Rao, A., 2019, “Gaining National Competitive Advantage through Artificial Intelligence,” PwC White Paper.
3. An often-cited quote attributed to Henry Ford, referring to New York City in the early 20th century, illustrates this role well: “This has only been made possible by the insurers. They are the ones who really built this city. With no insurance, there would be no skyscrapers. No investor would finance buildings that one cigarette butt could burn to the ground.”