Commentary

AI labor displacement and the limits of worker retraining

Julian Jacobs, Doctoral Student, University of Oxford, Department of Politics and International Relations

May 16, 2025


  • Worker retraining programs are often proposed as a policy response to AI-driven labor displacement, but there is little evidence that they have worked well in the past. 
  • These programs typically focus on providing new skills to individuals who have lost their jobs, are at risk of job loss, or are struggling to find new jobs due to factors like technological change, trade exposure, or industry shifts.
  • Methodological challenges make it hard to draw conclusions on the efficacy of retraining, especially using datasets without all the necessary information.
A person enters a jobs center in San Francisco, California, on February 4, 2010. REUTERS/Robert Galbraith

As artificial intelligence (AI) marches forward, a common refrain has emerged: We need to retrain workers, “upskilling” them to better meet the demands of the modern economy. Yet there has been comparatively little discussion about what these programs look like and their feasibility. The evidence that does exist, however, provides reasons for policymakers to be skeptical of retraining as a means of supporting labor adjustment to AI-enabled automation. For retraining to keep up with AI advancements, we may need to fundamentally rethink how we provide it, study its effects, formulate its overarching goals, and understand its limitations. 

The United States first implemented worker training programs as a policy tool to support labor market adjustment during the Great Depression. And since 1961, these retraining programs have become a core component of American active labor market policies. This includes the Manpower Development and Training Act of 1962 (MDTA), which established the first federal program to provide training on a large scale (1.9 million participants between 1963 and 1972) with a focus on helping workers navigate technological automation by training “the hundreds of thousands of workers who are denied employment because they do not possess the skills required by our constantly changing economy.” Through MDTA programming, men were typically trained into work as machine shop workers, auto mechanics, and welders, while women were typically trained into clerical and administrative roles.  

In 1982, the Reagan administration signed the Job Training Partnership Act (JTPA), which was a bipartisan effort to provide federal funding for decentralized worker retraining programs run by local organizations. Specifically, the JTPA mandated the establishment of Private Industry Councils in each local training jurisdiction to provide direction and oversight for training initiatives. Relative to the MDTA, the JTPA decreased the overall scale of federal employment initiatives and narrowed their focus. It is often viewed today as a public policy failure, and it was eventually replaced by the Workforce Investment Act (WIA) of 1998, which mandated localized retraining to pair workers with local industry needs. The WIA significantly widened the scope of who could participate in public retraining programs in the United States. While the JTPA focused just on the most disadvantaged workers, WIA made core services, such as job search assistance, career counseling, and labor market information, available to all individuals, regardless of income or employment status. 

Today, U.S. federal training is primarily run through the Workforce Innovation and Opportunity Act (WIOA), which streamlined the WIA by more firmly integrating existing programming, emphasizing private sector engagement, and further widening access. WIOA comprises six core programs: the Adult, Dislocated Worker, and Youth programs; Wagner-Peyser Employment Services; Adult Education and Family Literacy Act programs; and the Vocational Rehabilitation State Grant Program. And it recognizes an additional 13 programs as partners in WIOA’s “one-stop” delivery system.  

What is retraining? 

Public worker retraining programs typically focus on providing new skills to individuals who have lost their jobs, are at risk of job loss, or are struggling to find new jobs due to factors like technological change, trade exposure, or industry shifts. In practice, the majority of participants in U.S. public retraining are classified as low income. Under WIOA, the adult funding stream largely serves individuals with limited work history, low earnings, and, at times, deficient basic skills. Meanwhile, the dislocated worker stream focuses on those who have lost stable employment through layoffs, plant closures, or foreign competition. 

Demographically, dislocated workers tend to be somewhat older and are more likely to have substantial prior work experience, while participants in the adult stream may have lower formal education credentials and come from households with incomes below or near the poverty line. Burt Barnow and Jeffrey Smith provide an effective breakdown of the kinds of activities that occur in these retraining programs. We can identify at least seven core types of programming spanning from classroom trainings to apprenticeships. 

These methods of supporting worker reskilling and labor adjustment vary considerably across U.S. states. This may be a function of divergent needs—or local perceptions of needs—across geography, but it may also reflect significant cross-state variation in retraining providers. Since programs bid for federal funding, the nature of training can vary based on existing local capacity and political considerations, as opposed simply to labor demand. For instance, just under half of training program participants across the U.S. participate in classroom training, yet this ranges across states from a low of 14% of participants to a high of 96%. 

There are similar variations in the kinds of occupations workers retrain into. Across the country, the top destinations for WIOA enrollees pursuing occupational skills training are health care and transportation. A significant portion of retrainers also enter skilled trades while others enter office administration, accounting, and IT, depending on local industry needs. Still, these broad clusters consistently dominate WIOA training enrollments nationwide. 

Evidence on the effectiveness of worker retraining programs is mixed 

Drawing conclusions on the effectiveness of worker retraining programs and their use amid rapid technological change remains difficult because of the significant methodological and data challenges we face. The key problem: In standard government datasets, we do not know whether outcomes are a consequence of a retraining program or of something specific about the people who self-select into retraining (i.e., we have a problem of non-random selection into training). Positive outcomes could be the result of a program being run effectively, or they could be a consequence of characteristics of the retrainers themselves (for example, a willingness to take initiative).  

The best research on U.S. public worker retraining programs consequently relies on occasional randomized controlled trials and quasi-experiments, or studies that attempt to approximate randomized controlled trials, often through matching. Matching involves the creation of an artificial control group that didn’t receive training. For instance, if we had data on 10,000 middle-aged men from rural districts who participated in retraining, we would attempt to find data on people with the same characteristics who did not participate in retraining during that same period. Through this method, though, we don’t know how many characteristics are relevant and therefore worth matching. For instance, does prior education background predict retraining success? What about proximity to a nearby city? And what about the plethora of social, demographic, and psychological characteristics that simply aren’t captured in any datasets? Meanwhile, the significant variation in programming quality, style, and resources across the U.S. provides an additional confounder. These challenges can partly explain why many researchers contend that existing evidence on public training is unable to sufficiently overcome this problem of non-random selection through matching. The consequence is a body of literature on retraining that is mixed and often unable to produce decisive conclusions. 
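The matching logic described above can be sketched with a toy example. The code below is purely illustrative, using invented synthetic worker records (the covariates, field names, and numbers are all hypothetical, not drawn from any actual evaluation). Each trainee is paired with the observationally closest non-trainee; notice that the match, and any "effect" estimate built on it, depends entirely on which covariates appear in `keys`, which is exactly the fragility discussed above.

```python
def nearest_neighbor_match(treated, controls, keys):
    """For each treated unit, pick the untrained unit closest on the
    observed covariates in `keys` (squared Euclidean distance).
    Unobserved characteristics are, by construction, ignored."""
    matches = []
    for t in treated:
        best = min(controls, key=lambda c: sum((t[k] - c[k]) ** 2 for k in keys))
        matches.append(best)
    return matches

# Synthetic example: hypothetical workers described by age and years of schooling.
treated = [{"age": 45, "schooling": 12, "earnings": 31000},
           {"age": 52, "schooling": 10, "earnings": 28000}]
controls = [{"age": 44, "schooling": 12, "earnings": 30000},
            {"age": 53, "schooling": 11, "earnings": 27000},
            {"age": 30, "schooling": 16, "earnings": 55000}]

matched = nearest_neighbor_match(treated, controls, ["age", "schooling"])

# Naive "effect" estimate: mean earnings gap between trainees and matched controls.
gap = sum(t["earnings"] - m["earnings"] for t, m in zip(treated, matched)) / len(treated)
```

If an unmeasured trait (say, initiative) drives both program take-up and earnings, this gap will be biased no matter how many observed covariates are added to `keys`.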

Nonetheless, programs exert considerable effort to capture data that may point to potential positive outcomes. Historically, these metrics include employment rates, earnings, job retention, and earnings replacement (i.e., how much of one’s former wage is recouped upon reemployment). Under WIA and WIOA, measures such as entered employment rate (the fraction of participants who find work shortly after exit), retention rate (the fraction of those who remain employed for at least six months), and average earnings (mean earnings for employed participants) are common. In doing so, these programs aim to measure job quality for retrainers, as opposed to simply job status.  
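The three WIA/WIOA-style measures just listed can be made concrete with a short sketch. The records and field names below are synthetic and hypothetical, not an actual agency schema; the point is only to show how each metric is defined from participant-level data.

```python
# Hypothetical participant records after program exit (synthetic data).
participants = [
    {"employed_after_exit": True,  "retained_6mo": True,  "earnings": 18000},
    {"employed_after_exit": True,  "retained_6mo": False, "earnings": 15000},
    {"employed_after_exit": False, "retained_6mo": False, "earnings": 0},
    {"employed_after_exit": True,  "retained_6mo": True,  "earnings": 21000},
]

employed = [p for p in participants if p["employed_after_exit"]]

# Entered employment rate: share of all exiters who find work shortly after exit.
entered_employment_rate = len(employed) / len(participants)

# Retention rate: of those who entered employment, share still employed at six months.
retention_rate = sum(p["retained_6mo"] for p in employed) / len(employed)

# Average earnings: mean earnings among employed participants only.
average_earnings = sum(p["earnings"] for p in employed) / len(employed)
```

Note that none of these quantities involves a comparison group, which is why, as the article argues, they cannot by themselves establish that a program caused the outcomes observed.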

I believe the evidence suggests we should be skeptical of retraining programs as an effective policy response to support labor force adjustment to AI.  

The National JTPA Study, conducted from 1987 to 1992, involved the construction of a genuine randomized controlled trial. It showed that JTPA participants did not see a statistically significant improvement in employment rates, earnings, or continuous employment. And any gains that did appear were generally short lived. A national randomized evaluation of WIA found that while intensive one-on-one job counseling improved employment and earnings outcomes, WIA Adult and Dislocated Worker training services did not have positive impacts on earnings or employment in the 30 months after participant enrollment—though the evidence remains inconclusive. In 2023, 70% of core WIOA program participants were employed in the second and fourth quarters after exiting programming, yet these outcomes are not measured against a control group, and so we are reliant on previous studies to identify potential causal impacts.  

Meanwhile, a national quasi-experimental study of Trade Adjustment Assistance (TAA) found that TAA participants had significantly lower employment in the first couple of years after layoffs compared, through matching, to similar dislocated workers not in TAA. The most obvious explanation for this is that TAA participants incurred significant shorter-term costs of retraining into sectors like health care, trucking, or IT. Yet, even four years after job loss, TAA participants remained underemployed relative to non-TAA workers and earned slightly less. These apparent challenges supporting labor adjustment for displaced workers are inauspicious signs for managing potential AI shocks. And they provide our best guess as to how existing U.S. public policy infrastructure might be able to assist workers whose labor is replaced by technological automation.  

The limits of retraining 

Even if one were to discount the weak record of U.S. public retraining programming, social science provides at least three very strong theoretical reasons for policymakers to look beyond these kinds of programs.  

First, there is simply the question of whether there will be enough jobs for workers to retrain into. Although there is no evidence to suggest that technological shocks result in a systemic and prolonged increase in the unemployment rate, there is evidence suggesting that they may result in short- to medium-term reductions in the number of “skilled” occupations for workers to retrain into. The question for workers looking to retrain is typically not, “How do I find employment?” but rather, “How do I upskill to access more desirable employment with job security?”  

Literature suggests that technological change can result in a scenario where the supply of “skilled workers” is higher than the number of “skilled” middle-wage jobs that are available. For instance, there is evidence that workers displaced by robotics ended up in lower-paid service jobs. These periods of mismatch can be damaging to livelihoods, leaving long-lasting impacts on families and communities. This is also why it is important not to read too much into the evidence on private sectoral training programs. Although the evidence on these sectoral initiatives is overwhelmingly positive, we do not know if their impacts will generalize. And while they may shift who has access to available “good jobs,” they may not have any impact on the availability of “good work.”  

A second challenge for worker retraining is that many people simply may not be able or willing to reskill. Worker training invariably involves workers incurring the costs of technological change, in terms of time and/or the opportunity cost of forgoing labor income. Consider, for instance, the multiyear costs TAA participants experience. For vulnerable families, such as those with no savings and a member still employed, even government payment for retraining participation may be insufficient to warrant leaving the labor market. This is particularly problematic for workers who are vulnerable and who would theoretically benefit from retraining yet believe it is too risky to leave a current labor arrangement.  

And indeed, participants in worker retraining programs in the U.S. are disproportionately likely to come from the most vulnerable backgrounds, including homelessness, criminal records, and/or single parenthood. Evidence suggests classroom time learning new skills may not be an effective policy response, particularly if workers experience other serious social and health concerns. And older people—a group very likely to be overrepresented in jobs at risk of automation due to digitalization—may not be interested in retraining, particularly those closer to retirement. Moreover, literature has previously suggested these obstacles to retraining only increase as the economy becomes more specialized, as this entails greater barriers to entry in new professions as well as skillsets that may not be as easily transferable to other occupations. 

A final challenge confronting worker retraining programs is uniquely problematic amid AI. Reskilling program organizers frequently cite difficulty anticipating future labor market demands; very often, workers appear to retrain from one automation-susceptible occupation to another. This problem is ubiquitous in research attempting to anticipate AI’s impacts. Measurements of potential automation or “routine task intensity” are notable examples of how social scientists have previously attempted to estimate the possible trajectory of AI-enabled labor replacement. Yet such measurements remain highly speculative, often relying on matching occupation task descriptions with task-level estimates of what occupations could, in theory, be automated. And the capability to automate work does not mean employers will be able or willing to automate work.  

Genuine empirical evidence on AI’s economic impacts currently takes two forms: industrial robot data and, more recently, large language model (LLM) workplace exposure. Industrial robots are generally not driven by frontier AI, and in cases where AI is present, it involves much earlier forms of machine learning and digital automation. And LLM diffusion in the economy remains in its early stages, confined to experimental contexts that may not generalize to the broader economy. And so there simply is not enough broad macroeconomic data pointing to a diffuse impact of LLMs, even if one exists.  

As a result, even retraining organizations have a foggy understanding of AI’s future economic impact. This makes it difficult for them to identify areas on which to focus reskilling efforts, potentially leading to investment in the wrong types of training. Indeed, local retraining programs consistently confront the perilous task of needing to work with employers to anticipate the jobs that will be in demand, not just today and in coming months, but also in the next several years.  

As AI advances, things are likely to get only more complex, especially if one is willing to entertain the notion that AI could present a uniquely disruptive shock. Although previous technologies induced higher inequalities and often painful transitory periods, they ultimately created more jobs than they destroyed. This may not be the trajectory that AI follows. The potential for more advanced machine learning systems to automate core human cognitive functions could kindle extremely rapid labor substitution, resulting in a world where there is a dramatic machine replacement of decent work. Even if new good work were to eventually emerge, a scenario with rapid labor substitution could make retraining extraordinarily difficult. 

Beyond retraining: 4 lessons from 63 years of US programming 

Policymakers should view retraining programs as only one part of the broader economic response needed to support workers as AI continues to develop. And I believe literature provides four lessons for us to build upon. 

First, we should avoid assuming public retraining programs will be useful in helping people protect their current roles or find new sources of work. The evidence on retraining efficacy is, at best, inconclusive. Policy memos, consulting reports, and other documents on AI economic impacts should refrain from clichés about the need to “reskill” workers and root prescriptions in genuine empirical evidence about the policies that work. 

Second, we should recognize the vast uncertainty about how AI will diffuse in the economy and what its effects will be. There are scenarios where widespread adoption could lead to rapid labor displacement. And there are scenarios where adoption of AI lags, and the effects on workers are more muted than often predicted. Policy responses should be in place to protect workers in all these potential outcomes.  

Third, there is a clear need for better data, both on AI economic impacts and on how retraining can help workers adjust to disruption. Beyond the limitations of existing evidence mentioned in this piece, there is currently scant empirical evidence on how retraining specifically impacts workers displaced by technological automation. Future randomized experiments should focus on targeting this group, indicating to policymakers how technology-displaced workers are currently served by existing programming. This includes also gathering evidence on the kinds of programming that best serve particular demographics. 

And finally, it may be time to reevaluate the role of work altogether. This includes challenging our conception of employment as a requirement for government benefits as well as the primary means through which people create value in society. In scenarios where AI has more dramatic economic effects, it will be especially important for workers, policymakers, and community leaders to think about ways people may be able to contribute. Perhaps this could take the form of helping improve American community life, warding against the uptick in atomization and loneliness of modern life. Or perhaps it might involve supporting more personalized medical, educational, or psychological care.  

By having an active societal discourse on the type of society we would like to live in, we will stand the best possible chance of navigating the AI economic transition in a way that preserves worker livelihoods, opportunity, and dignity.   

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).