
Why people mistrust AI advancements


Imagine reading about a new medical breakthrough made possible by artificial intelligence (AI). What is your first reaction? Do you feel excited about how it could improve health and well-being? Or do you worry it might cause harm or replace skilled medical staff? Perhaps you simply feel uncertain because you do not fully understand or trust AI. As AI continues to evolve and its applications spread throughout society, understanding how we build trust in this technology is key to fostering its acceptance and adoption. Arguably, adopting AI won’t be optional in the near future. Like the steam engine, electricity, and computers, AI is a general-purpose technology: one with broad uses that keeps improving and drives new innovations. It is already transforming fields like medicine, logistics, and finance, as well as everyday tasks like writing emails or searching for information. Ignoring it will be as impractical as ignoring electricity.

Generative AI is unlike the technologies of the past. It is not a static tool with fixed capabilities; it is a continuously advancing frontier. The core capabilities of the printing press and the mechanical clock, once invented, changed little. AI, by contrast, learns, adapts, and keeps expanding what it can do: it now exceeds human performance on many tasks, including language understanding, image classification, coding, and even competition-level mathematics. Unlike earlier technologies, AI overlaps with the capabilities of high-skilled workers. It is also being adopted faster than other recent technologies, such as the personal computer and the internet.

Generally, as people learn more about new technologies or scientific discoveries, their familiarity grows, which can shape how much they trust them. But do people’s reactions and trust differ when they learn about AI as opposed to non-AI developments, and if so, what shapes these differences?

In a newly published paper, we study how people respond when exposed to information about AI versus non-AI developments. We conducted this experiment as part of the Understanding America Study (UAS), a probability-based internet panel representative of the U.S. adult population. We randomly assigned a “treatment” group of 744 individuals to read newspaper article excerpts about recent AI advancements in linguistics, medicine, and personal relationships, while the “control” group (747 people) read about non-AI advancements on the same topics. For example, for the topic of linguistics, respondents in the treatment group read a short excerpt from a BBC article about ChatGPT, while respondents in the control group received information from Science News Daily about how babies learn a language. The two groups were statistically indistinguishable on characteristics such as age, gender, income, marital status, education, race, and prior knowledge of AI and scientific advancements.

Our main finding is that people trust AI developments much less than non-AI progress in the same domain, with the biggest gap in personal relationships (Figure 1). In dating and relationships, a deeply personal and emotional area, AI advancements can feel unnatural or even unsettling. In contrast, the trust gap is smallest in medicine, a high-stakes field where AI might provide solutions that humans would not think of on their own. This is illustrated by the first use of machine learning to discover a new antibiotic that kills resistant bacteria: halicin. In that search, the AI not only sifted through candidate compounds faster than human researchers but also identified a compound they had never imagined, one whose effectiveness they could not fully explain.

There are also notable demographic patterns in AI trust. Mistrust of AI is higher among women, possibly because they tend to have greater exposure to AI through their jobs, meaning they may face greater risks but also greater opportunities from AI assistance and augmentation. Women’s aversion may also stem in part from concerns that AI reinforces existing biases against women and girls. Older individuals likewise tend to be more skeptical of AI, perhaps because they have historically found it harder to adapt to technological change.

Next, we looked at why these trust differences exist. Key factors are how well people understand AI, how useful they think it is for society, and the emotions it triggers, such as fear or excitement. Together, these factors explain almost all of the trust gap in medicine (90%), about two-thirds in dating (64%), and roughly half in linguistics (46%). However, the single most important factor differs by domain. In linguistics, fear is the biggest factor lowering trust. In medicine, the gap is driven mainly by people feeling less excitement about AI rather than by fear or a lack of understanding. In dating, the main issue is that people see less societal benefit from AI, and this perception drives most of the trust gap. Why these emotional responses differ is not yet clear, but AI’s unfamiliar (or “alien”) and still-evolving capabilities may play a role, and emerging research can help us understand them.

Our findings have clear implications for targeted communication about AI. Even though news coverage of AI is mostly neutral, our experiment shows that people still tend to mistrust AI when they read about it. And we get a glimpse into why this may be the case: Trust is shaped by emotions (such as fear or excitement), perceived societal benefit, and, to a lesser extent, by how much people understand what they are reading about. Importantly, our results do not depend on how much people knew about AI or science before participating in the experiment.

The next step is to consider how our findings matter for AI adoption and governance. As AI spreads across societal domains, low trust could fuel techno-anxiety: fears of losing human control, widespread job loss, or harm from social surveillance, deepfakes, malicious content, and algorithmic bias. Such anxiety may in turn drive political pressure for hasty or overly restrictive regulations that slow innovation or even lead to bans. Better communication of AI’s progress, focused on transparency, societal benefits, and realistic expectations, could narrow the gap between perceived and actual risks, fostering both trust and acceptance.

