AI’s threat to individual autonomy in hiring decisions

Kyra Wilson (Ph.D. Student, University of Washington) and Aylin Caliskan

November 21, 2025


  • A new study shows how harms related to discrimination and autonomy can intersect and reinforce each other in the hiring domain. 
  • These effects have the potential to cause widespread societal harm but are often omitted from common perceptions of AI’s negative impacts. 
  • This has led to a dearth of regulation that addresses threats to individual autonomy, but additional impact assessments, regulation encompassing unintentional and intentional AI harms, and incentives for developing responsible AI systems can address these gaps. 

Despite waves of artificial intelligence (AI) hype, recent survey results indicate that a majority of the U.S. public thinks AI is more likely to harm them than benefit them. This potential for harm has been demonstrated in a wide variety of domains, from threats to workers’ rights and the climate to bias in hiring and medical diagnoses. However, one harm that has not been examined extensively is AI’s potential to change and influence people without their knowledge, thereby threatening their agency and autonomy: their ability to make their own decisions free from coercive or non-transparent influence. For example, one study showed that the rise of ChatGPT correlates with changes in the words people use in conversation. As models are increasingly trained on data that exhibit these shifts, they will in turn adopt these changes to an even greater extent.

Language change might seem like a benign example, but the same kinds of feedback loops can also exist in domains that govern people’s access to opportunities and resources (e.g., policing, education, hiring) and can therefore cause widespread societal harm without decision-makers’ knowledge or consent. We conducted a large-scale experiment in which human subjects screened resumes in collaboration with racially biased AI models and found that people cannot adequately identify and mitigate traces of AI biases that propagate into their decision-making. This builds upon similar findings from experiments in the emotional perception and medical diagnostics domains, suggesting that AI’s threat to autonomous human decision-making is a widespread problem. Unfortunately, current AI policy recommendations and regulations that call for humans-in-the-loop for high-stakes decisions fall short of addressing this impact. Further solutions are needed, such as financial support for AI impact assessments, regulations encompassing both unintentional and intentional AI harms, and incentives for developing responsible AI systems.

AI influences decision-making and autonomy 

In a recent study, we conducted an experiment to determine whether racial bias in AI hiring recommendations influences people’s cognition and decision-making as they collaborate with a simulated AI system to make hiring decisions. Over 500 participants completed cognitive assessments of their explicit and implicit biases, along with decision-making scenarios in which they selected the best set of candidates for one of 16 diverse occupations. In the scenarios we constructed, participants saw either (a) no AI; (b) AI that was biased (i.e., it gave positive or negative recommendations based on a candidate’s race alone, rather than qualifications) and reinforced common stereotypes about race and occupation; (c) AI that was biased and contradicted common stereotypes about race and occupation; or (d) AI that was unbiased (i.e., it recommended candidates with equivalent qualifications at random, rather than based on racial identity).

We found that interacting with biased AI made respondents more likely to make biased decisions themselves, regardless of whether the bias aligned with or contradicted common stereotypes about race and occupational status. When the AI reinforced race-occupation stereotypes, favoring white candidates for high-status professional roles such as computer systems analyst or management analyst, respondents selected majority-white candidates 90.4% of the time. When the AI contradicted those stereotypes by recommending non-white candidates, respondents selected majority-non-white candidates 90.7% of the time, a statistically insignificant difference. In contrast, without any AI recommendations, respondents chose majority-white candidates 49.3% of the time and majority-non-white candidates 50.7% of the time. Assuming respondents drew on their own beliefs and values when making screening decisions without AI input, these results show how strongly AI can shape human choices, posing a serious threat to people’s ability to make autonomous decisions free from hidden influence or coercion.
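
To see why a gap between 90.4% and 90.7% is statistically insignificant, consider a standard two-proportion z-test, sketched below in Python. The per-condition counts are hypothetical placeholders (the study’s actual sample sizes are not reproduced here); the sketch only illustrates the form of the comparison.

    # Two-proportion z-test comparing two selection rates.
    # NOTE: group sizes below are hypothetical, for illustration only.
    import numpy as np
    from statsmodels.stats.proportion import proportions_ztest

    n_decisions = np.array([500, 500])  # hypothetical decisions per condition
    selections = np.array([452, 454])   # roughly 90.4% and 90.7% of 500

    stat, p_value = proportions_ztest(selections, n_decisions)
    print(f"z = {stat:.2f}, p = {p_value:.3f}")  # large p => no significant difference

With rates this close, the test returns a p-value far above any conventional significance threshold, which is what “statistically insignificant difference” conveys here.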

Legislative landscape for AI-driven discrimination and threats to autonomy

In recent years, lawmakers across the country have introduced numerous legislative proposals aimed at reducing the harmful impacts of proliferating AI in high-stakes decision-making. In 2024, the most significant of these was the Colorado AI Act, which aims to prevent intentional and unintentional discrimination caused by AI systems making high-impact decisions. Though the act is set to take effect in 2026, legislators are still debating its terms, primarily concerning liability in cases of disparate impact. In 2025, Texas also enacted legislation regulating AI, with provisions for nondiscrimination in AI use. However, unlike under the Colorado law, developers and deployers in Texas are liable only if they developed or used AI systems with the intention of causing discrimination. The Virginia legislature also passed a bill aimed at curbing AI discrimination in 2025, though it was vetoed by Gov. Glenn Youngkin, who cited excessive burdens on smaller firms that deploy AI models and threats to AI innovation in the state.

Far fewer proposals have been made to prevent AI or other technologies from interfering with individual autonomy (the Texas law, which also prohibits intentionally developing or using AI to change people’s behavior, is a notable exception). Protecting individual autonomy from technological interference through regulation has historically been challenging. For example, although experts have warned that social media can negatively impact both children’s and adults’ autonomy, it remains largely unregulated in the United States. The European Union, which has historically been more proactive than the U.S. in regulating technology, has prohibited social media platforms from using non-transparent techniques to change users’ behavior and has declared the use of AI for cognitive behavioral manipulation an unacceptable application. As AI capabilities and applications advance rapidly, developing comprehensive legislation to prevent discrimination and protect individual autonomy and safety will require building upon existing regulation and looking to related concepts such as intentionality, privacy, and freedom.

Looking forward 

Determining how to safeguard people’s rights and autonomy while preventing broader societal harms remains an open question. Drawing on our research and a review of relevant legislation, we identify three key actions that can help address this challenge.

Supporting efforts to improve AI impact assessments  

Without an understanding of the human and societal effects of AI proliferation, it is difficult to craft policies that curb harmful impacts. Right now, the speed at which new AI technologies are deployed far exceeds the rate at which reliable, valid, and generalizable AI evaluations are developed, leaving a landscape with no comprehensive paradigm for AI assessment. To close this widening gap between deployment and assessment, a greater portion of the funding for general AI development (both public and private) should be allocated to projects that specifically aim to improve AI evaluation standards.

Including provisions for unintentional impacts in AI regulation  

Recently, the Equal Employment Opportunity Commission (EEOC) halted investigations into disparate impact claims, a shift that could seriously undermine efforts to prevent algorithmic discrimination. Under disparate impact theory, if an employer cannot explain how a system reached its decision (as is currently the case with many state-of-the-art models), courts should rule in favor of the plaintiff. Therefore, in addition to resuming EEOC investigations, comprehensive AI legislation should address both disparate impact and disparate treatment. While some proposals, including Virginia’s, treat unintentional harms (disparate impact) as inherently less severe than intentional ones (disparate treatment), these harms must be evaluated contextually. Unintentional disadvantaging bias can produce serious real-world consequences—for example, Black patients may face longer wait times for organ transplants—while intentional corrective bias can improve diagnostic accuracy or support more equitable resource allocation. 
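
To make disparate impact concrete: a common first-pass heuristic from the EEOC’s Uniform Guidelines is the “four-fifths rule,” under which a protected group’s selection rate below 80% of the highest group’s rate is treated as preliminary evidence of adverse impact. The Python sketch below, using hypothetical selection rates, shows the arithmetic; it is illustrative only, not the full legal test courts apply.

    # Four-fifths (80%) rule check for potential disparate impact.
    # NOTE: selection rates below are hypothetical, for illustration only.
    def adverse_impact_ratio(rate_group: float, rate_highest: float) -> float:
        """Ratio of one group's selection rate to the highest group's rate."""
        return rate_group / rate_highest

    # Hypothetical: an AI screener advances 30% of one group's applicants
    # and 50% of the highest-rate group's applicants.
    ratio = adverse_impact_ratio(0.30, 0.50)
    print(f"Adverse impact ratio: {ratio:.2f}")  # 0.60
    if ratio < 0.8:
        print("Below 0.8: flags potential disparate impact under the rule.")

Passing this heuristic does not establish that a system is unbiased; it is a screening threshold, which is one reason unintentional harms must be evaluated contextually rather than dismissed.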

Incentivizing responsible AI development  

One of the most common critiques of AI regulation is that it will stifle innovation. However, the two goals are not mutually exclusive, and regulation can be used to drive positive innovation that respects users’ privacy and preserves their rights. There are several approaches to encourage this, such as tax incentives to offset the costs of responsible AI development, increased industry-academia cooperation, and educational curricula that teach computer science as a value-laden, sociotechnical discipline and encourage students’ agency in addressing ethical challenges. Given its inherent complexity, achieving an appropriate balance between AI regulation and innovation will require guidance and action from stakeholders across industry, government, academia, and the public.

Conclusion 

Our study offers evidence of how harms related to discrimination and autonomy can intersect and reinforce each other in the hiring domain. These effects have the potential to cause widespread societal harm but are often omitted from common perceptions of AI’s negative impacts. In addition to strengthening legal protections against employment discrimination, policymakers should consider how AI influences autonomy and prioritize the development of AI systems that enhance agency and equity rather than limit them.
