Commentary

A people-first vision for the future of work in the age of AI

Sorelle Friedler, Serena Booth, Andrew Schrank, and Susan Helper

Serena Booth, Assistant Professor of Computer Science, Brown University
Andrew Schrank, Olive C. Watson Professor of Sociology, Brown University; Co-Director, CIFAR Innovation, Equity, and the Future of Prosperity Program
Susan Helper, Carlton Professor of Economics, Case Western Reserve University

March 25, 2026


  • While many Americans associate AI with mass layoffs and less satisfying work, an AI future that puts people first and supports workers is possible.
  • Work has gradually become “enshittified” as employees are routinely underpaid and overworked. Confronting an AI future allows an opportunity to grapple with these realities and meet the moment with a transformative vision.
  • Policies to support this future can include developing institutions to support training, protecting and increasing the role of people in the care workplace, and creating tripartite institutions that encourage the co-design of AI.
Within the frames are people bound to their office cubicles; beyond them, individuals work freely from diverse locations, connected through digital signals.
Yutong Liu & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Many American workers associate artificial intelligence (AI) with layoffs, less satisfying work, and tech billionaires becoming ever more wealthy at their expense. They may be right.

But one can also imagine a world in which the fruits of AI are instead invested in society. What if every K-12 student were taught by well-trained teachers in small classes, every patient interacted with unharried nurses, every elder had the opportunity to age with dignity at home or in high-quality residential care, and everyone could find an affordable therapist when needed? And what if care workers were all people, not AI systems or robots, doing meaningful, well-paid work to rebuild our communities and health, supported and paid for by the fruits of the AI economy? While tech companies target already underfunded care professions for replacement by AI, undermining our shared humanity and connections, we describe below a vision for an AI future that puts people first.

On March 26, the AFL-CIO will host a national Workers First AI Summit to ensure workers help shape the policies governing AI in the workplace. The summit arrives at a moment when debates about AI and jobs often center on a single question: Will AI destroy jobs, change jobs, create new jobs, or leave work largely unchanged? There are doomsayers and champions on both sides of the debate. 

But this framing misses a more immediate reality that was self-evident at a regional AI summit sponsored by the Cleveland AFL-CIO, Case Western Reserve University, and the Canadian Institute for Advanced Research (CIFAR) last month. Americans are already experiencing a steady decline in job quality: increased monitoring and surveillance, algorithmic scheduling, and declining autonomy—all against the backdrop of stagnant wage growth and a growing affordability crisis. Whether or not AI ultimately eliminates millions of jobs, many workers already feel that their work is being degraded or, to use the language of the day, “enshittified.”

AI may accelerate this process, but it’s not the root cause. The decline in worker bargaining power, enervation of enforcement agencies like the National Labor Relations Board (NLRB), and collapse of union membership all began decades ago, long before modern AI. Meanwhile, institutions like schools and hospitals—that both employ and serve millions of Americans—remain chronically understaffed. The result is a system with strong protections on paper but limited impact in practice.

As a society, we have gradually come to accept the enshittification of work and associated degradation of both public and private services. Public servants are routinely underpaid and overburdened, working in institutions that lack the people and resources they need to deliver key services. Teachers, nurses, and social workers face growing administrative burdens and constant monitoring, while receiving stagnant or declining pay and benefits.

Even sectors once known for worker autonomy are beginning to feel the pain. Silicon Valley was long seen as a bastion of professional agency, as tech companies routinely offered software engineers generous salaries and perks in an effort to attract and retain top talent. But those days are ending. Instead, tech companies are using both the fruits and the threat of AI to push workers harder, resulting in longer hours, fewer jobs, and higher expectations for those who remain. Some firms are even flirting with a version of China’s infamous “996” schedule, in which work is required from 9 a.m. to 9 p.m., six days a week. Engineers are thus gripped by anxiety about AI’s potential to transform or eliminate their jobs.

They’re hardly alone. We’re all confronting a future in which many jobs fall victim to the AI revolution and many remaining roles are rendered unappealing by the ongoing demise of regulatory institutions, growth of insecurity, and normalization of management practices that would have seemed unthinkable to our grandparents. It’s no wonder that more than half of surveyed Americans fear that AI will take their jobs and replace their face-to-face relationships. We must not accept these imagined futures. 

People-first labor policy proposals for the age of AI

Protect and increase the role of people in the care workplace

Imagining a better future for workers in the age of AI means protecting the role of people in the workplace. Some jobs should be done by humans: jobs that build human relationships and are important to society, like teaching and care-economy professions. These professions have long suffered from the enshittification of work, facing overcrowded classrooms, hospital closures, and outsized caseloads that are good for neither providers nor the people they serve. Instead, people-first policies, such as licensure and minimum staffing requirements, could expand the professional workforce and provide fruitful employment while improving job quality.

Teaching offers an example. A key lever of student success is a small teacher-to-student ratio; small class sizes are a critical differentiator at most private schools. Yet public schools remain chronically underfunded with far too many students per teacher, leading to degraded learning experiences and teacher–student relationships. AI companies did not create this problem, but they could exacerbate it. A recent report found that students in classrooms using AI felt less connected to their teachers and peers. This is especially concerning given that teacher–student relationships are vital to many student outcomes, and many educators are raising concerns about the risks AI in the classroom presents to student learning.

Rather than replacing teachers with AI, we should boost their numbers. Laws that mandate minimum staffing levels and allocate funding to train the school workforce could bring more teachers into the classroom and lower class sizes. This is not unprecedented: Minimum staffing levels are required in air traffic control and nuclear power plants, and many states already mandate a maximum number of students per teacher in child care and K-3. Similar requirements could be put in place for other care workers. Such workers could still use AI, but in ways they control and that benefit both workers and the people they serve. Indeed, many health care systems already use AI systems to aid notetaking—a use that has the potential to help nurses spend more time with patients.

Develop institutions to support training, professionalism, and worker rights

It’s not enough to create and protect these people-first jobs; if supply is to meet demand, we need training pathways that allow more people to enter these careers at different points in their working lives. One hypothetical example of a mid-career transition involves software engineers. Many engineers, whose jobs are currently threatened by AI, have the knowledge and degrees required for math and science education, where there are already persistent teacher shortages. With targeted retraining programs and public funding, experienced engineers could transition into teaching careers that draw on their expertise.

But training alone also isn’t enough. Engineers went into tech rather than teaching for many reasons, including differences in income, prestige, and autonomy. A serious effort to build a pipeline from engineering to teaching would have to address these gaps by elevating the incomes and status of people-first professions. That means raising salaries, guaranteeing adequate staffing levels, and restoring professional autonomy so teachers and other care professionals are trusted as experts. By simultaneously investing in retraining pathways to expand the supply of qualified workers and strengthening these professions to increase demand for their expertise, policymakers could turn the threat of AI displacement into an opportunity to address longstanding shortages in critical public-facing fields. In interventions like these, we find an optimistic vision for the AI future. 

Some of the institutions needed to facilitate such a transition already exist, albeit in diminished form, and could be adapted to serve this new vision. A revitalized NLRB, for instance, could once again help workers negotiate “the terms and conditions of their employment or other mutual aid or protection,” in line with its original mandate. The Fair Labor Standards Act might be amended to include minimum staffing levels in some industries, and the Wage and Hour Division of the Department of Labor could be tasked with their enforcement. We could even use AI to support these enforcement efforts, treating it as a force multiplier in agencies that have always been understaffed.

But other institutions will likely have to be built from scratch. The U.S. has never embraced post-employment training, let alone developed institutions to link the supply of trained workers to the demand for their skills. On the contrary, training in the U.S. has typically occurred either in schools, prior to employment, or on the job; neither approach works for mid-career workers in transition.

Fortunately, other countries offer models of lifelong learning on which to draw, and we have many local experiments that might be scaled. If properly governed, new information technologies could facilitate the adaptation of these models, especially in conjunction with greater worker organization. It’s no coincidence, after all, that the most successful models of lifelong learning are found in European countries with powerful trade unions: By aggregating the interests of their members and carrying out productive dialogue with employers, European unions serve as indispensable partners in the training process, keeping their workers secure and their countries competitive. Reinvigorated regulatory and collective bargaining structures could thus go hand-in-hand with the development of a more robust approach to lifelong learning.

Create tripartite institutions that encourage the co-design of AI

Tripartite institutions that bring government, business, and labor unions to the table could support interventions like these by identifying additional jobs that need minimum staffing. But they could also serve as venues for the productive, participatory design of AI systems themselves. When AI systems are introduced from above, they tend to degrade working conditions to everyone’s detriment. At last month’s Cleveland AFL-CIO convening, for example, utility workers described client management software that reduces both their productivity and quality of work life by sending them on inefficient routes down unsafe streets. Their employer’s general-purpose systems simply weren’t designed for their use cases; employers by themselves are insufficiently familiar with the specifics of downstream use cases to design for them. If management and labor could collaborate on the design of bespoke software, however, they could achieve mutual gains.

This is not wishful thinking. Computer scientists at Carnegie Mellon University and the UNITE HERE union have co-designed an app that facilitates communication and record-keeping by guest room attendants in ways that ease communication about issues like missing supplies and minimize labor-management conflict. The result is a win-win that speaks to broader possibilities, like shop-floor problem-solving by trained manufacturing workers armed with data flows from automated equipment.

Consultation of communities affected by AI use is already required for high-impact systems used by the federal government, including procured systems. But the post-hoc consultations required by the current rule do not give workers the opportunity to shape these systems before deployment. Effective participatory design requires recognizing and compensating that participation as work, giving workers voice, and balancing the tension between co-design for specific use cases and the scale at which AI systems are deployed. Government facilitation of tripartite participation (through grants and demonstration projects in the public sector) can help stakeholders achieve these goals and encourage the development of worker-centered AI.

Addressing AI and labor displacement requires ambitious policy

The American public is worried about the impact of AI in the workplace. But industry leaders are touting the technology’s potential while making enormous profits and doing everything in their power to bring innovations to market as fast as possible, with little regard for the people who may be harmed. The gap between the wealth of the elite and the average American is large and growing; AI has the potential to make this disparity even greater. Policymakers must meet the moment with a transformative vision for the future of work that puts people first.


The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).