Researchers United? If you care about the future of AI, focus more on the people who actually build it, less on their CEOs

January 27, 2025

In a rerun of a Silicon Valley fable, the leaders of companies building new AI now have celebrity status. They’re magazine-cover fodder, testifying before Congress, penning their takes on how to govern AI for all our benefit, appearing on Oprah, even winning Nobel Prizes. In the first week of the new Trump administration, they’re already beside the president at the podium.
There’s an assumption here: that the power to shape AI’s impacts rests overwhelmingly in the hands of those at the top of the corporate org chart, and sometimes the investors who back them. What if that’s not true?
What if the power actually rests more in the hands of the people building the technology—the talent or, if you prefer, the workers—as opposed to the CEOs? As a venture capitalist who’s been investing in AI for more than a decade, I see exactly that happening.
At the labs creating the foundation of modern artificial intelligence—startups like OpenAI, Anthropic, Mistral, Databricks, and xAI, and groups within big tech companies including Google, Microsoft, and Meta—the talent regularly does things that are rare in the rest of Silicon Valley. They push back against military contracts, ask the U.S. Securities and Exchange Commission to investigate their own company, write open letters requesting more whistleblower protection, and revolt to get their executives to create new oversight bodies. When their board fires the boss, they threaten to leave en masse and succeed in bringing the boss back. (Of course they also regularly leave and start their own companies, sometimes to compete with their prior employer.)
Why does AI talent hold such power?
The answer is simple: supply and demand. The risk of losing staff is potentially fatal for AI labs, much more so than for the rest of the tech industry. At the Airbnbs, Snowflakes, and Snaps of the world, there are thousands, even tens of thousands, of people who could perform most roles.
AI is different. An accomplished AI technologist once told me there are maybe 100 people in the world who know how to make a state-of-the-art AI model. And while that number has grown with the greater popularity of AI research and open-source model technology, there are still very few people who can extend the frontier of what a model can do—which is, at the moment, where the most value seems to lie. Star AI talent earns pay significantly higher than others in tech, with standout researchers receiving packages of $5 million or more a year.
The work at an AI lab is different from that at other tech companies: it’s more like being a scientist than an expert craftsperson. At an AI lab, you may need a Ph.D.-level academic background. The teams do more than traditional software engineering. They manage never-before-seen clusters of tens or hundreds of thousands of chips, master the precise workings of hardware accelerators, draw on deep mathematical intuition, and more.
And if you need the best Ph.D.’s in a certain discipline, it’s hard to recruit more talent quickly. Those doctorates take years to earn. The job titles at AI labs—“research scientist” or “research engineer,” as opposed to “software developer”—reflect this difference. So the supply of talent remains small, and the demand is enormous, giving an unusual degree of influence to these researchers, members of technical staff, and others at the AI labs.
Why does it matter that these AI talents hold more power than workers at other tech companies?
At any company, talent often cares about different things than leadership does. Employees are more exposed to critical voices, have less ego at stake in changing their minds, and have less incentive to maximize the value of company stock at all costs. They often have reason to think more critically about the ethical dimensions of their work. They come from a wider range of backgrounds than company leadership and may spot risks and flaws before leaders do. (To be sure, at AI labs, as at all tech companies, the line between “worker” and owner or “boss” is more of a gradient.)
To put it simply, when it comes to addressing the risks of AI, these employees may care more—and may be more willing to make sacrifices for their values.
These talents are closest to the technology, so they know which fixes might help and which will backfire. They can—and do!—move much faster than government can legislate or regulate. Even relatively new hires, from a variety of disciplines, can have an immediate effect on everything from which features to prioritize to how a new model gets tested.
For all these reasons, the talent at AI labs is, ultimately, both society’s first line of defense and its last. Either they shape the future of AI to be as safe as possible, or we may have no protection at all. They may be the most powerful workforce in modern history. In private conversations, the executives at the AI labs agree.
What could these technical staff do with their power?
The talents building AI technology can shape its nature in a thousand subtle ways—and some less subtle, such as deciding what to build in the first place. They can prioritize among concerns including discrimination, privacy, misinformation, job loss, and national security. They can influence the hiring of new colleagues, and they can resist the urge at some AI labs to hire only “true believers” who will be less critical. They can apply pressure on leadership to deliver on their promises, which is already happening. And because creating anything new requires many hands, it takes only a small number of committed employees to influence development (if, say, they worried their next large language model might unleash a science fiction-inspired doom scenario).
Of course, their power has limits. There may be no employee who can single-handedly stop the release of a new model they worry carries too many risks. The costs to train a single model can run to hundreds of millions or even billions of dollars, and those costs continue to grow—ultimately, the leaders of these companies control those budgets. Some people at the AI labs have resigned after discovering they lacked the influence to change their company’s direction, though others will take their place, often with similar ideas.
Power does not deploy itself; it takes effort to exercise. These highly prized AI builders could organize (as has happened at moments at Amazon and Google, and elsewhere in tech) and together insist on structures beyond whistleblower protection. Together, employees can shape how models get evaluated, what research directions get resources, and more. So far, that potential has barely been recognized, let alone tapped.
Imagine how differently the conversation about firing a CEO might unfold if talent at AI labs lobbied leadership to elect a representative to their company’s board. Imagine if these workers had formal structures—sometimes called “works councils” in other countries—for advising executives. They could insist on consultation with affected parties at the right moments, or on technical procedures that could safeguard against some harms. They could shape the culture of ethics within their company.
Indeed, the people already at AI labs who care deeply about AI having as positive an impact as possible, and about managing its risks, can share those beliefs with new hires, who may be more enamored with their new employers and more hesitant to raise critical questions and concerns. They can also work across companies to create a shared set of approaches that might influence an entire industry.
Some of the talent may worry that wielding more power would slow their company down, making it less likely to succeed. These companies are in a race for survival, after all. But as we’ve seen in other industries, worker power doesn’t inevitably slow companies: when talent shares power effectively with leadership, companies can become more competitive—even in tech. Companies that include their workforces in a thoughtfully structured way can accelerate the pace at which they build.
What’s more, the talent at today’s AI labs will lead tomorrow’s AI companies. Some have already left to build their own new labs and startups. Witness those who quit OpenAI to found Anthropic and, more recently, Safe Superintelligence and others. Some say they left specifically to create AI that would reflect different values around safety, and designed their new startups accordingly. These startups demonstrate that new approaches are viable, and they put pressure on the companies their founders left.
These departing talents are living out a longstanding Silicon Valley tradition of valuable employees leaving to create their own startups—starting with the so-called “Traitorous Eight,” who left Shockley Semiconductor in 1957 to found Fairchild Semiconductor and lay the foundation for modern computing.
The hype around generative AI would have us believe that this technology changes everything. Depending on who you ask, artificial intelligence might be either a disaster or our deliverance. But as more and more observers are coming to realize, many of the fundamentals—from the dynamics of competition to the expansive role of top talent—still apply.
Today’s talent shapes tomorrow’s technology, and that’s likely to be even more true for AI builders—in addition to the workers who, more broadly, harness AI (as our colleagues at Brookings Metro have written in TIME). These AI talents already have their hands on the wheel of a class of technologies that is, no exaggeration, remaking our world. Their power will only grow over time as the power of these technologies grows. So if we want to safeguard AI, it’s time to focus a bit less on the boldfaced names leading these companies, and more on the people working at them.
Acknowledgements and disclosures
Thank you to Zak Stone, Divya Siddarth, Amanda Ballantyne, James Cham and the team at Bloomberg Beta, and the talents I spoke with at several of the AI labs I mention here. Those talents confirmed the thrust of this piece, though of course some people who read drafts disagreed with some parts of my perspective, and the views, warts and all, are my own.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).