Commentary

We should all be Luddites

September 3, 2025


  • Though the term is often used to invoke technological backwardness, the Luddites were actually concerned with the consolidation of control and the impacts of technological change on people.
  • AI is becoming more integrated into every part of society, and we should not accept that the deployment of new technologies be dictated unilaterally by corporations, alone or in cahoots with the government.
  • Those whose work shapes public understanding or policy, such as journalists, academics, and lawmakers, have a special responsibility to demand that technology serve everyone, not just corporate or state interests.
Image credit: Clarote & AI4Media / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

For the past two centuries, invoking the “Luddites” has been shorthand for technological backwardness or fear of innovation, a sneer aimed at anyone who dared question the march of progress. But the real Luddites weren’t afraid of machines. They were afraid of the social and economic impacts of the new technology on people, and of who controlled the terms of technological change.

They were skilled workers. Craftsmen. Artisans. People with deep technical expertise who smashed the industrial weaving machines not because they hated technology, but because they saw it being used to extract wealth and consolidate control. To concentrate power over their livelihoods in the hands of a few business owners backed by the state. In “Blood in the Machine,” Brian Merchant reminds us: The Luddites were not fighting technology. They were fighting the enclosure of their future.  

We should be, too. 

Especially now, as artificial intelligence (AI) reconfigures every dimension of our societies, from labor markets to classrooms to newsrooms, we need more Luddites. Not in the caricatured sense, but in the original sense: people who refuse to accept that the deployment of new technologies should be dictated unilaterally by corporations, alone or in cahoots with the government, especially when it undermines people’s ability to earn a living, erodes social cohesion, and weakens public goods and democratic institutions.

If you’re a journalist, an academic, a policymaker, or an educator, someone whose work shapes public understanding or steers policy responses, you have a special responsibility in this moment. Because this isn’t just about AI’s capabilities. It’s about who decides what those capabilities are used for, who benefits, and who pays the price.

Journalists: Stop reporting from inside the hype machine 

Too much AI coverage reads like breathless dispatches from a trade show: GPT-this, Gemini-that, hallucination rates, benchmark scores. But journalism shouldn’t merely chronicle innovation; it should interrogate it. 

Every time a news story frames AI competition as a race, between companies or between countries, it obscures the fact that ordinary people have been drafted into that race without their consent. Their jobs, their data, their cognitive space have all been put on the line. The new weaving machines are being installed, machines capable of a far vaster set of tasks than any prior technology, at the behest of the state and of the tech corporations whose platforms we use every day. Meanwhile, workers in white- and blue-collar jobs alike are watching the terms of their employment erode. Entire sectors of the economy risk being devalued, destroying livelihoods and upending families.

AI is not just a tech story. It’s a story about people, about their choices, and about who benefits and why. We need journalists who understand that and who resist the pressure to normalize corporate narratives about “inevitability.” They should not just parrot nebulous claims about innovation without asking what the objectives and political-economic implications of that innovation are.

Academics: Look beyond the sector to the system 

Academics studying AI’s labor effects often focus on narrow metrics: firm-level productivity, task replacement probabilities, skill-biased technical change. But these frames miss the forest for the trees. They treat technology as something that “happens to” firms and workers, rather than something that is strategically deployed by capital. 

What if we asked instead: What kinds of labor markets are being designed around AI? What is shaping the choices regarding adoption? Who has bargaining power in those decisions? How will the value created by as-yet-unproven promises of productivity growth be distributed? Who has power, and how is it wielded to promote certain interests and foreclose other futures?

The Luddites understood that mechanization wasn’t just an economic shift; it was a political one. The loss of control over their tools meant a loss of autonomy over their livelihoods, more monitoring, less agency, and new precarity for laborers. It meant new alliances between business owners and the state that sentenced protesters to death. If academics are to offer useful insights today, they need to ask not just how technology changes work, but what will be done about health insurance, the social safety net, and profit sharing.

Policymakers: Regulate deployment, not just development 

Policymakers are scrambling to write rules for AI development: transparency mandates, safety protocols, red-teaming exercises. These are necessary, but they don’t touch the core challenge: What system do we want to create, and who holds power within it? AI is being integrated into workplaces, schools, and public services with little democratic oversight and even less attention to economic justice, to say nothing of the environmental trajectory we are on.

The real policy questions aren’t just technical. They are distributive. Will the value generated by AI productivity be used to deskill and disempower workers, or to enable new approaches to how we value data labor and forms of collective governance? Will it centralize decision-making, or support pluralism and human autonomy? Will we make choices about AI that deepen inequality or help rectify it? 

We already regulate how technologies are deployed in the public interest, from zoning laws to environmental impact assessments. We should do the same for AI, starting with clear rules about where and how workers can be monitored and where AI systems can replace human decision-makers or creative professionals, along with strong protections for workers in sectors facing AI-driven disruption.

Educators: Don’t hand the mind to the machine 

The explosion of AI in education, from tools that promise personalized tutoring and automated grading to AI-generated curricula, has been driven by tech companies like Google and Microsoft, which have already cornered a big part of the EdTech market, and by presidential proclamations promoting rapid integration and adoption. But what’s at stake isn’t just educational outcomes. It’s the formation of minds. It’s how we learn to reason, to discern truth, to engage with one another.

When we offload human inquiry and creativity to anthropomorphic corporate AI bots, we risk devaluing critical thinking even as we encourage cognitive offloading. If we turn the intellectual development of the next generation over to opaque, probabilistic engines trained on a slurry of scraped content, with little transparency and even less accountability, we are not enhancing education. We are commodifying it, corporatizing it, and replacing pedagogy with productivity.

What’s lost is not efficiency, but encounter; not content, but context. A generation educated by AI may gain convenience, but risks losing curiosity and creativity.

A Luddite ethic for the 21st century 

It’s time to rehabilitate the Luddites—not as heroes of the past, but as guides for the present. They understood that the future is not written by the machine, but by those who wield it. 

To be a Luddite today is to refuse the fatalism of techno-inevitability. It is to demand that technology serve the many, not just the few. It is to assert that questions of labor, agency, and justice must come before speed, efficiency, and scale. 

In journalism, academia, policy, and education, we must stop asking only what AI can do and start asking what it should do—and for whom.

If we don’t, someone else will answer for us. And as the Luddites knew too well, we may not like the answer. 

Acknowledgements and disclosures

    Google and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the authors and are not influenced by any donation.
