Commentary

We should all be Luddites

October 6, 2025


  • Though the term is often used to invoke technological backwardness, Luddites were actually concerned with the consolidation of control and the impact of technology on people.
  • As AI becomes more integrated into every part of society, we should not accept that its deployment be dictated by corporations, whether acting unilaterally or in cahoots with the government.
  • Those whose work shapes public understanding or policy—such as journalists, academics, educators, or lawmakers—have a special responsibility to demand that technology serves everyone, not just corporate or state interests.
Image credit: Clarote & AI4Media / https://betterimagesofai.org / CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

For the past two centuries, invoking the term “Luddites” has been shorthand for technological backwardness or fear of innovation—a sneer aimed at anyone who dared question the march of progress. But the real Luddites weren’t afraid of machines; they were afraid of the social and economic impacts of new technology on people—and of who controlled the terms of technological change.

They were skilled workers: Craftsmen and artisans with deep technical expertise. The Luddites smashed the industrial weaving machines, not because they hated technology, but because they saw it being used to extract wealth, consolidate control, and concentrate power over their livelihoods in the hands of a few business owners backed by the state. In “Blood in the Machine,” Brian Merchant reminds us: The Luddites were not fighting technology but the enclosure of their future.  

We are now facing a similar moment. As artificial intelligence reconfigures every dimension of our societies—from labor markets to classrooms to newsrooms—we should remember the Luddites. Not as caricatures, but in the original sense: People who refuse to accept that the deployment of new technology should be dictated by corporations, whether acting unilaterally or in cahoots with the government, especially when it undermines workers’ ability to earn a living, social cohesion, public goods, and democratic institutions.

Journalists, academics, policymakers, and educators—people whose work shapes public understanding or steers policy responses—have a special responsibility in this moment: To avoid reproducing AI hype by uncritically acquiescing to corporate narratives about the benefits or inevitability of AI innovation. Rather, they should focus on human agency and what the choices made by corporations, governments, and civil society mean for the trajectory of AI development. 

This isn’t just about AI’s capabilities; it’s about who decides what those capabilities are used for, who benefits, and who pays the price.

Journalists: Stop reporting from inside the hype machine 

Too much AI coverage focuses on the milestones of innovation, reciting hallucination rates or benchmark scores. But journalism shouldn’t merely chronicle innovation; it should interrogate it.

Journalists must help people and policymakers understand technology. This means avoiding AI industry jargon that lets those in power control the narrative and escape scrutiny. The term “hallucination” is one example; “error” would be more accurate. Hallucinations are predictive errors, and error rates can be measured, debated, and regulated. Just as journalists avoid euphemisms like “collateral damage” when referring to civilian victims of armed conflict, they should avoid the anthropomorphic industry term “hallucination” so that we can better understand and govern AI systems.

Every time a news story frames AI competition as a race between companies or countries, it obscures the fact that ordinary people have been drafted into that race without their consent. Their jobs, data, and cognitive space have all been put on the line. The new weaving machines are being installed—machines capable of performing a far vaster set of tasks than any prior technology—at the behest of the state and the tech corporations whose platforms we use every day. Meanwhile, workers in white- and blue-collar jobs alike are watching the terms of their employment erode. Entire sectors of the economy risk being devalued, destroying livelihoods and upending families.

AI is not just a tech story; it’s a story about people, their choices, who benefits, and why. For these reasons, journalists must recognize and resist the pressure to normalize corporate narratives about “inevitability” and to parrot nebulous claims about innovation without asking about its objectives and human impacts.

Academics: Look beyond the sector to the system 

Academics studying AI’s labor effects often focus on narrow metrics: firm-level productivity, task replacement probabilities, skill-biased technical change. But these frames miss the forest for the trees. They treat technology as something that “happens to” firms and workers, rather than something that is strategically deployed by capital. 

What if the questions were instead: What kinds of labor markets are being designed around AI? What is shaping choices about adoption? Who has bargaining power in those decisions? How will the value from still-unproven promises of productivity growth be distributed? Who holds power, and how is it wielded to promote certain interests and foreclose other futures?

The Luddites understood that mechanization wasn’t just an economic shift; it was a political one. The loss of control over their tools meant the loss of autonomy over their livelihoods. It meant more monitoring, less agency, and new precarity for laborers. It also meant alliances between business owners and the state that sentenced those who protested to death. If academics are to offer useful insights today, they need to ask not just how technology is transforming work, but also how it will affect employer-provided health insurance, the social safety net, and profit-sharing arrangements.

Policymakers: Regulate deployment, not just development 

Despite the push by some Republican members of Congress to restrict regulations on AI, state and foreign policymakers are scrambling to write rules for AI development, including transparency mandates, safety protocols, and risk-assessment frameworks. These are necessary, but they don’t touch the core challenge: What is the system that we want to create, and who has power in that system? Today, AI is being integrated into workplaces, schools, and public services with little democratic oversight and even less attention to economic justice or environmental consequences.

The real policy questions aren’t just technical; they are distributive. Will the value generated by AI-driven productivity gains be used to deskill and disempower workers, or to enable new ways of valuing data labor and new forms of collective governance? Will it centralize decision-making, or support pluralism and human autonomy? Will the U.S. and the global community make choices about AI that deepen inequality or help rectify it?

Regulation in the public interest already applies to deployed technologies, from zoning laws to environmental impact assessments. The same should hold for AI, starting with clear rules about where and how workers can be monitored and where AI systems can replace human decision-makers or creatives, along with strong protections for workers in sectors facing AI-driven disruption.

Educators: Don’t hand the mind to the machine 

The explosion of AI in education—tools promising personalized tutoring, automated grading, and AI-generated curricula—has been driven by tech companies like Google and Microsoft, which already dominate the EdTech market, and by presidential proclamations promoting rapid integration and adoption in K-12 classrooms. But what’s at stake isn’t just educational outcomes: It’s the formation of minds and how we learn to reason, to discern truth, and to engage with one another.

When human inquiry and creativity are offloaded to anthropomorphic AI bots, there is a risk of devaluing critical thinking while promoting cognitive offloading. If we turn the intellectual development of the next generation over to opaque, probabilistic engines trained on a slurry of scraped content, with little transparency and even less accountability, we are not enhancing education; we are commodifying it, corporatizing it, and replacing pedagogy with productivity. 

What’s lost is not efficiency, but encounter. Not content, but context. A generation educated by AI may gain convenience, but at the risk of losing curiosity and creativity. 

A Luddite ethic for the 21st century 

It’s time to rehabilitate the Luddites as guides for the present. They understood that the future is not written by the machine, but by those who wield it. To be a Luddite today is to refuse the fatalism of techno-inevitability and to demand that technology serve the many, not just the few. It is to assert that questions of labor, agency, and justice must come before speed, efficiency, and scale. 

Journalists, academics, policymakers, and educators must stop asking only what AI can do and start asking what it should do and for whom. 

If they don’t, someone else will answer for them. And as the Luddites knew too well, they may not like the answer. 

Acknowledgements and disclosures

Google and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.
