Recent advances in systems based on large language models (LLMs), such as ChatGPT, have generated immense excitement about the possibilities they offer. However, these developments have also triggered renewed concerns about job displacement and a potential decline in the demand for human labor across a wide variety of occupations, from software developers to screenwriters. A recent study by researchers at OpenAI and the University of Pennsylvania highlights how extensively white-collar jobs are exposed to automation driven by LLMs and related tools. And yet, to the best of our knowledge, these growing concerns have not prompted the inclusion of risks to labor demand in the AI impact assessment frameworks being developed by standard-setting and international governmental organizations, nor in the “responsible AI” practices followed by AI companies.
Neglecting the risks to labor demand in AI governance frameworks is perilous. AI at times “frees” workers from the most fulfilling parts of their jobs rather than from the dull, dangerous, or repetitive tasks many hoped it would take on, leaving workers to correct machine errors in highly surveilled, algorithmically managed workplaces. Leading economists have warned that AI could exacerbate the hollowing out of the middle class by reducing the availability of stable, well-paying jobs that do not require advanced degrees, and by accelerating task automation while lagging in the creation of new tasks for human workers. Both trends predate the recent advances in AI and could worsen because of them.
Before proposing interventions to address AI’s labor market risks and harness its opportunities, we must understand why the current AI trajectory leans towards excessive automation and worker displacement. The incentives for AI innovators and path dependency in AI research provide insights into this issue.
Policies often unrelated to AI regulation indirectly incentivize overly rapid labor automation. For instance, the U.S. tax code favors investments in software and automation over investments in labor. High barriers to labor mobility artificially limit labor supply in the U.S., encouraging “so-so technology”: tools, such as self-service checkouts or robotic customer support, that shift tasks onto consumers and eliminate jobs without significantly increasing productivity. The global spread of such automation risks reversing decades of poverty reduction by eliminating formal-sector jobs in regions where jobs, not labor, are scarce.
Historically, worker collectives helped balance the effects of technological change when it worsened working conditions. Their struggle contributed to the eventual sharing of the benefits of the Industrial Revolution, which initially impoverished workers. Today, however, only 6% of U.S. private-sector workers belong to a union. In combination, massively distorted incentives and a weakened union movement make it more likely that AI will be put to uses that emphasize cutting labor costs over improving job quality.
This context makes the role of AI developers, and the choices they make, all the more important. Motivated by this thought, the Partnership on AI recently released the Guidelines for AI and Shared Prosperity, to which we have both contributed. The Guidelines are intended to equip interested stakeholders with the conceptual tools they need to steer AI in service of shared prosperity and improved job access and job quality. They can inform both voluntary labor risk management in AI-creating or AI-using organizations and policy frameworks for AI governance. They can help unions and worker advocacy organizations pinpoint the key risks and opportunities presented by the introduction of an AI system into a workplace. With those identified, the Guidelines can then suggest provisions to be included in collective bargaining agreements.
A comprehensive analysis of AI’s risks and opportunities for the labor market can help stakeholders build or vet a job creation strategy for a given region; evaluate the merits of tax breaks that promise to create sustainable employment; and identify when a detailed scenario analysis and quantification of AI’s effects on wages and employment would be particularly useful.
In particular, the Guidelines for AI and Shared Prosperity offer two tools intended to help close the gap most AI governance frameworks have around labor risk assessment and mitigation: (1) a high-level Job Impact Assessment tool that can be used by any interested stakeholder, and (2) a collection of responsible practices specific to AI-creating and AI-using organizations. The Job Impact Assessment offers a way to systematically evaluate both signals of opportunity, indicating that an AI system could improve workers’ well-being and advance shared prosperity, and signals of risk, indicating that its introduction may harm workers. The collection of responsible practices suggests, for example, giving workers a real say in an AI system’s origination, development, and deployment; providing meaningful explanations of the system’s functioning; ensuring transparency about worker data collection and use; and committing to neutrality toward worker organizing, including not using AI to identify possible organizing efforts.
Importantly, signals of opportunity to advance shared prosperity with AI do not necessarily “offset” the risks posed by an AI system. The reason is simple: All too often, the benefits and costs are borne by different communities. Through adoption of responsible practices, organizations can maximize the likelihood that the benefits AI brings will be broadly shared¹ and put in place mitigation strategies for each of the risks identified. Risk mitigation strategies could range from eliminating the risk or reducing the severity of potential impact to ensuring access to remedy or compensation for affected groups.
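To make this assessment logic concrete, here is a minimal sketch of how a signal checklist with that no-offsetting property could be represented in code. The signal names, the example system, and the flagging rule are our own illustrative assumptions; the actual Guidelines define their own signal lists and process.

```python
from dataclasses import dataclass, field

@dataclass
class JobImpactAssessment:
    """Sketch of a signal checklist for an AI system's expected job impact.

    The signal names and flagging rule are illustrative assumptions, not
    the actual lists or process defined in the Guidelines for AI and
    Shared Prosperity.
    """
    system_name: str
    opportunity_signals: list[str] = field(default_factory=list)
    risk_signals: list[str] = field(default_factory=list)

    def flag(self) -> str:
        # Opportunity signals do not offset risk signals: any identified
        # risk calls for a mitigation plan, regardless of opportunities.
        if self.risk_signals:
            return "risk signals present: mitigation plan required"
        if self.opportunity_signals:
            return "opportunity signals present, no risk signals found"
        return "no signals identified"

# Hypothetical example of applying the checklist to a workplace AI system.
assessment = JobImpactAssessment(
    system_name="warehouse scheduling assistant",
    opportunity_signals=["reduces physically hazardous tasks"],
    risk_signals=["expands worker surveillance", "deskills core tasks"],
)
print(assessment.flag())  # -> risk signals present: mitigation plan required
```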
Predicting the effects of a new technology on labor demand is difficult and involves significant uncertainty. Some would argue that, given this uncertainty, we should let the “invisible hand” of the market decide our technological destiny. But we believe that the difficulty of answering the question “Who is going to benefit and who is going to lose out?” should not serve as an excuse for never posing the question in the first place. As we emphasized, the incentives for cutting labor costs are artificially inflated. Moreover, the invisible hand theorem does not hold for technological change. A failure to investigate the distribution of AI’s benefits and costs therefore invites a future with too many “so-so” uses of AI: uses that concentrate gains while distributing the costs. Although predictions about the downstream impacts of AI systems will always involve some uncertainty, they are nonetheless useful for spotting the applications of AI that pose the greatest risks to labor early on and for channeling the potential of AI where society needs it most.
In today’s society, the labor market serves as a primary mechanism both for distributing income and for providing people with a sense of meaning, community, and purpose. It is well documented that job loss can lead to regional decline, a rise in “deaths of despair,” addiction, and mental health problems. The path we lay out aims to prevent abrupt job losses or declines in job quality at the national and global scale, providing an additional tool for managing the pace and shape of AI-driven labor market transformation.
Nonetheless, we do not want to rule out the possibility that humanity may eventually be much happier in a world where machines do far more of the economically valuable work. Even with our best efforts to manage the pace and shape of AI labor market disruption through regulation and worker-centric practices, we may still face a future with significantly reduced demand for human labor. Should that demand decrease permanently as AI advances, timely policy responses will be needed to address both the lost incomes and the lost sense of meaning and purpose. Absent significant efforts to distribute the gains from advanced AI more broadly, the possible devaluation of human labor would deeply affect income distribution and the sustainability of democratic institutions. While a jobless future is not guaranteed, its mere possibility, and the potential societal repercussions, demand serious consideration. One promising proposal is to create an insurance policy against a dramatic decrease in the demand for human labor that automatically kicks in if the share of income received by workers declines: for example, a “seed” Universal Basic Income that starts at a very small level, remains unchanged if workers continue to prosper, and automatically rises if there is large-scale worker displacement.
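As an illustration of how such an automatic trigger might work, here is a minimal sketch of a payment rule indexed to the labor share of national income. The functional form, the baseline share, and the dollar amounts are our own assumptions for illustration, not part of the proposal itself.

```python
def seed_ubi_payment(labor_share: float,
                     baseline_share: float = 0.60,
                     seed_payment: float = 10.0,
                     sensitivity: float = 5_000.0) -> float:
    """Hypothetical monthly payment (in dollars) indexed to the labor share.

    The payment stays at a small "seed" level while workers' share of
    national income remains at or above a reference baseline, and rises
    linearly as the labor share falls below it. All parameter values are
    illustrative assumptions, not part of the original proposal.
    """
    shortfall = max(0.0, baseline_share - labor_share)
    return round(seed_payment + sensitivity * shortfall, 2)

# Workers continue to prosper: the payment stays at the seed level.
print(seed_ubi_payment(labor_share=0.62))  # -> 10.0
# Large-scale displacement drives the labor share down: the payment rises.
print(seed_ubi_payment(labor_share=0.50))  # -> 510.0
```

The key design property is that the policy requires no new legislative action at the moment of crisis: the payout scales automatically with the measured decline in workers’ share of income.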
We welcome the White House Office of Science and Technology Policy’s request for public suggestions on best practices that can help mitigate risks to workers in the context of AI-enabled workplace surveillance. However, we want to point out that there is an urgent need to bring AI’s broader labor impacts into the scope of both voluntary and legally mandated AI development and deployment norms. We hope that initiatives like the Guidelines for AI and Shared Prosperity will help change today’s dangerous status quo, in which AI’s jobs-related risks are in practice often overlooked. Beyond that, we must also investigate how to address a possible permanent decrease in the demand for human labor and its potential impact on income distribution and the sustainability of democratic governance.
Acknowledgements and disclosures
The Brookings Institution is financed through the support of a diverse array of foundations, corporations, governments, individuals, as well as an endowment. A list of donors can be found in our annual reports published online here. The findings, interpretations, and conclusions in this report are solely those of its author(s) and are not influenced by any donation.
Footnotes
1. A number of influential AI-creating organizations declared intentions to ensure their AI is broadly socially beneficial or benefits “all of humanity.” See, for example: https://ai.google/responsibility/principles/, https://openai.com/charter, https://www.deepmind.com/about