Every day brings new breakthroughs in artificial intelligence—and new fears about the technology’s potential to trigger mass unemployment. CEOs predict white collar “bloodbaths.” Headlines warn of widespread job losses. With public anxiety growing, it can feel like the economy is already hemorrhaging jobs to AI. But what if, at least for now, the data are telling a different story?
To find out, we measured how the labor market has changed since ChatGPT’s launch in November 2022. Specifically, we analyzed the change in the occupational mix across the labor market over the past 33 months. If generative AI technologies such as ChatGPT were automating jobs at scale, we would expect to see fewer workers employed in jobs at greatest risk of automation.
Our data found the opposite. In a new report from the Budget Lab at Yale, we share our findings of a labor market characterized broadly by stability, rather than disruption, since ChatGPT's release. Despite fears of an imminent AI jobs apocalypse, the overall labor market shows more continuity than immediate collapse. The share of workers in jobs with high, medium, and low AI "exposure" has remained remarkably steady over time. (Jobs that are highly "exposed" to generative AI are those with the highest percentage of tasks for which ChatGPT can save significant time.)
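To make the underlying calculation concrete, here is a minimal sketch of the kind of check involved, not the Budget Lab's actual code. It assumes a hypothetical monthly employment file with columns month, occupation, exposure_tier, and employment, and tracks the share of workers in each exposure tier over time:

```python
import pandas as pd

# Hypothetical input: one row per occupation and month, with total employment
# and an AI exposure tier ("high", "medium", "low") assigned to each occupation.
df = pd.read_csv("employment_by_occupation_month.csv")  # assumed columns: month, occupation, exposure_tier, employment

# Share of total employment in each exposure tier, by month.
tier_totals = df.groupby(["month", "exposure_tier"])["employment"].sum()
month_totals = df.groupby("month")["employment"].sum()
tier_shares = tier_totals.div(month_totals, level="month").unstack("exposure_tier")

# If generative AI were automating exposed jobs at scale, the "high" column
# should drift downward after November 2022; a flat series indicates stability.
print(tier_shares.round(3))
```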
Similarly, we looked at whether AI-displaced workers were visible in unemployment statistics. Again, we found no pattern of increasing AI exposure among the unemployed.
These findings do not suggest that AI hasn't had any impact at all over the last three years. Our analysis complements and is consistent with emerging evidence that AI may be contributing to unemployment among early-career workers. (That evidence is also consistent with the possibility that a weakening labor market is hurting those same workers.) Our paper differs from more granular analyses that detect occupation-level impacts on isolated jobs or subpopulations (for example, studies of whether writers or translators have lost jobs). There is still considerable uncertainty about AI's early impact on these narrower sets of jobs and workers, which might be harbingers of wider labor market disruption in the future.
Overall, our approach takes a broader lens and looks for economy-wide turbulence. Our methodology might miss the labor market equivalent of a small fire starting on the stove, but it would clearly detect if the house were burning down.
Generative AI’s workplace impacts are comparable to earlier technological shifts
These patterns might surprise those expecting rapid labor displacement due to AI. While the results contradict the most alarming headlines, they are strikingly consistent with past precedent. Even transformative technologies such as the computer and the internet took decades, not mere months, for their impacts to fully materialize in the workplace. That’s because technology adoption requires complementary investments, cultural shifts, and regulation.
Generative AI has followed a similar trajectory in its first few years. We compared the pace of occupational change since ChatGPT's launch to similar periods of change following the introduction of computers and the internet. We found that the occupational mix has changed marginally faster during the early ChatGPT era than during previous technological shifts. However, this faster pace predates ChatGPT's launch, suggesting AI may not be the primary driver.
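One standard way to quantify the "pace of occupational change" is a dissimilarity index: half the sum of absolute differences between each occupation's employment share at the start and end of a window. This is a common choice in the literature, offered here as an illustration rather than the report's exact specification:

```python
import pandas as pd

def occupational_dissimilarity(shares_start: pd.Series, shares_end: pd.Series) -> float:
    """Half the sum of absolute differences in occupational employment shares.
    Returns 0 for an identical occupational mix and 1 for a completely different one."""
    aligned = pd.concat([shares_start, shares_end], axis=1, keys=["start", "end"]).fillna(0.0)
    return 0.5 * (aligned["end"] - aligned["start"]).abs().sum()

# Hypothetical usage: compute the index over the 33 months after ChatGPT's
# launch and over equally long windows following the spread of the PC and
# the internet, then compare the three values.
```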
Overall, the pace of labor market change following ChatGPT’s launch nearly three years ago appears consistent with historical trends. AI’s early stages look less like a revolution and more like a familiar, gradual evolution. This suggests that the trajectory of AI more closely resembles Arvind Narayanan and Sayash Kapoor’s thesis of “AI as normal technology,” rather than the more dramatic predictions of rapid takeoff that the AI 2027 project and others have forecasted.
Why generative AI workplace adoption has been uneven
Why have AI's workplace impacts been so different from what was predicted? At first glance, generative AI seems better positioned for faster workplace diffusion than past technologies. In contrast to the incremental spread of computers, access to ChatGPT and other models such as Google Gemini, Microsoft Copilot, and Anthropic's Claude feels instantaneous, more akin to the diffusion of social media. There are far fewer frictions in deployment: Access to cutting-edge generative AI models doesn't require buying an expensive piece of hardware or installing cables, or, for some users, even paying any money at all. When new generative AI capabilities are released, users can access them immediately, like flipping a light switch.
Despite this ubiquity and ease of access, actual workplace adoption to date has been highly uneven across sectors and occupations. The best data source for generative AI usage among workers comes from Anthropic, one of the leading AI labs. We compared Anthropic’s data on usage of its Claude chatbot with data on AI job exposure, and found little correlation between the two metrics. Our analysis revealed striking gaps between where AI could be useful (exposure) and where it is actually being used (Claude data).
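For readers curious about what such a comparison looks like mechanically, the sketch below (with hypothetical file and column names, not the actual datasets) merges occupation-level Claude usage shares with occupation-level exposure scores and reports a rank correlation:

```python
import pandas as pd

# Hypothetical inputs: one row per occupation in each file.
usage = pd.read_csv("claude_usage_by_occupation.csv")     # assumed columns: occupation, usage_share
exposure = pd.read_csv("ai_exposure_by_occupation.csv")   # assumed columns: occupation, exposure_score

merged = usage.merge(exposure, on="occupation", how="inner")

# Rank correlation between where Claude is actually used and where AI could
# in principle be useful. A value near zero reflects the gap described above
# between potential exposure and actual usage.
rho = merged["usage_share"].corr(merged["exposure_score"], method="spearman")
print(f"Spearman correlation: {rho:.2f}")
```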
Claude chats were dominated by coding tasks, with writing tasks also overrepresented. This pattern reflects two realities: Claude's reputation for being particularly strong at coding and writing, and how readily those tasks lend themselves to AI products in general. Recent ChatGPT usage data from OpenAI also tell a story of lagging and leading sectors, with information technology at the top and uneven usage across other sectors.
These patterns reveal the messy reality of workplace technology adoption. Generative AI workplace adoption has been constrained by a range of practical hurdles, from privacy and security concerns to liability risks, data availability, and governance challenges. These pain points vary in degree by sector: While software developers and freelance editors can quickly turn to AI tools with relative ease, professionals in highly regulated and risky sectors—such as law, finance, and medicine—are more constrained in their usage. Take radiology, a medical discipline that seems tailor-made for AI: While AI can already outperform doctors, radiologists today are busier and better compensated than ever. Across sectors, employers are now confronting the move from isolated pilots to large-scale adoption, requiring time- and resource-intensive change management.
Future labor market impacts may stem not only from wider adoption, but from different types of adoption. The early, low-hanging fruit of chatbot usage reflected in today’s data may not be the biggest risk to jobs. Indeed, much of this chatbot usage may reflect augmentation more than automation, with workers turning to chatbots and finding ways to improve their efficiency and performance.
The biggest risks may not come from chatbots speeding up tasks, but from firms re-engineering entire workflows to automate them. New enterprise (i.e., business) usage data from Anthropic illustrate this risk. While about half of Claude chatbot usage was for augmentation, the overwhelming majority (77%) of the tasks that business clients deployed through Claude's API were aimed at automation.
We need to be vigilant about monitoring AI’s ongoing impacts
Our data show stability, not disruption, in AI’s labor market impacts—for now. But that could change at any point. Generative AI may well join the ranks of transformative, general purpose technologies, but it is too soon to know how disruptive it will be, or at what pace. Without clear, timely analysis on AI’s impacts, we risk both overreacting to imagined crises and underreacting to real disruptions. Policymakers need evidence, not speculation, to steer the future of work.
While our early findings are reassuring, the future requires vigilance. That’s why the Budget Lab at Yale will monitor AI’s labor market impacts on a monthly basis. But vigilance requires better data. We need comprehensive usage information from all major AI companies at both the individual and enterprise level. Anthropic has set a precedent by releasing Claude usage data, including at the enterprise level, and OpenAI has shared summary statistics. But these offer only a partial view.
To truly understand AI’s trajectory, Google, Microsoft, OpenAI, and other leading AI labs should share usage data transparently and responsibly. Without this, policymakers, researchers, and the public will be flying blind into one of the most significant technological shifts of our time.