Why humans matter most in the age of AI: Jacob Taylor on collaboration, vibe teaming, and the rise of collective intelligence

September 9, 2025

Artificial intelligence dominates today’s headlines: trillion-dollar productivity forecasts, copyright lawsuits piling up in court, regulators scrambling to tame frontier models, and warnings that white-collar work could be next. Yet behind the headlines sits a bigger question: not what AI replaces, but what it can amplify.
Jacob Taylor, once a professional rugby player and now a fellow at the Brookings Center for Sustainable Development (CSD), argues that the 21st century may be less about machines outpacing us and more about how humans and digital algorithms learn to work together. In this conversation, we explore how pairing human insight with artificial intelligence could reshape collaboration and help organizations large and small—from the World Bank to local NGOs—tackle complex global issues. And we ask, at the end, what it means to be human in the age of AI.
Frankly, I think we’ll see that being human is going to matter more than ever in an age of AI. It’s going to force us to really clarify what being human really means. For the hopeful among us, it’s time to really speak out for what those human characteristics are.
Jacob Taylor
From the rugby scrum to the policy scrum
Junjie Ren: Jacob, you’ve had one of the more interesting career arcs I’ve seen, from pro rugby to cognitive anthropology. Now you’re shaping how we think about collaboration itself. Let’s start with the thread that ties together performance, teams, and meaning. Tell us more about that.
Jacob Taylor: I’m someone who’s been on an endless search for the holy grail of team performance. Athletes and other elite performers can feel when something bigger than them is happening, when the team is producing what no individual could achieve alone. I’ve also been in teams where the opposite has been true: performance has completely fallen apart.
These experiences have driven my research into the science of team performance and collective intelligence. I spent several years doing ethnographic research with professional rugby teams in China, trying to figure out if and how formal models of group performance hold across cultures. Rugby served as a controlled field experiment. Watching vastly different teams across cultures play the same game taught me a lot about the constant and variable ingredients of human behavior and performance.
Junjie Ren: How did that experience in China shape your view of how humans coordinate meaning across context, whether these teams are on the field, in policy rooms, or in digital ecosystems?
Jacob Taylor: I learned that teams are ultimately very similar in their structure, but that structure plays out in different shapes and sizes in different cultures or contexts. Following my PhD research, my interest in China led me to do some policy work in Australia on multilateral trade and security cooperation in Asia. That all sounds a bit wonky, but for me, intuitively it became a question of: Where is the “team” in Asia? How can different countries in the region collaborate toward shared outcomes that align with—and maybe even exceed—the self-interest of all countries?
One way to pare it back is to think about a canonical experiment in social psychology called the hidden profile task. In a small team of four to six people, each individual has a unique piece of information needed to solve a shared puzzle. For the team to solve the puzzle, each person must bring their piece forward into the team context, thereby surfacing the team’s “hidden profile.” International cooperation is rarely framed so explicitly in terms of performance or collective intelligence, but I believe this “hidden profile” logic of performance applies across scales, from sports teams to policymaking bodies to digital networks.
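To make that logic concrete, here is a toy simulation in Python. It is an illustration of the paradigm only, not code from the research, and the clue names are invented: the shared puzzle becomes solvable only once every member’s privately held piece of information has surfaced into the common pool.

```python
# Toy model of the hidden profile task: a four-person team where each
# member privately holds one unique clue, and the shared puzzle can only
# be solved once every clue has surfaced into the common pool.
# Clue and member names are invented for illustration.

REQUIRED_CLUES = {"budget", "timeline", "site", "partner"}

# Each member privately holds one unique piece of information.
team = {
    "alice": "budget",
    "ben": "timeline",
    "cara": "site",
    "dev": "partner",
}

def puzzle_solved(shared_pool: set) -> bool:
    """The team solves the puzzle only if all required clues are shared."""
    return REQUIRED_CLUES <= shared_pool

shared = set()
for member, clue in team.items():
    print(f"Before {member} speaks, solved: {puzzle_solved(shared)}")
    shared.add(clue)  # the member brings their piece into the team context

print(f"After everyone contributes, solved: {puzzle_solved(shared)}")
```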
Junjie Ren: What sparked your interest in AI and team collaboration?
Jacob Taylor: In my PhD research, I applied new algorithms for understanding brain activity to model team interaction and performance. From there, I went to work on a DARPA (Defense Advanced Research Projects Agency) program developing an AI teammate, which drew me deep into the technical side of artificial intelligence and how it could be designed to enhance team performance and collaboration. That work shaped many of my current ideas on how to design both the technical systems and policy incentives needed to strengthen collective intelligence across scales.
The hour of collective intelligence
Junjie Ren: You’ve said that if the 20th century was the economists’ hour, the 21st may be the hour of collective intelligence. What do you mean by that?
Jacob Taylor: It’s an idea that builds on a great book called “The Economists’ Hour” by New York Times journalist Binyamin Appelbaum. He charts how economists went from being largely absent from political conversations in the 1950s to becoming the primary evidence base for policymaking by the century’s end. That expertise was well-suited to the challenges nations and firms were facing then.
But today, the issues we face are multidimensional and span communities of every scale. They can’t be solved by economics alone. Nor by law alone. Nor by any single discipline. What’s needed is a collective, transdisciplinary effort that draws on multiple evidence bases and scientific approaches. And that’s where the emerging science of collective intelligence comes in. It’s an unusually diverse field, bringing together computer scientists, social scientists, behavioral scientists, and anthropologists to understand how different mechanisms of collaboration and collective action can produce outcomes greater than any individual or institution could achieve alone.
I see a real opportunity to pull these insights and innovations together, not only to inform policy and accelerate progress on issues embodied in the Sustainable Development Goals (SDGs), but also to advance other areas of human flourishing and societal value creation.
Junjie Ren: You have been a driving force in the 17 Rooms initiative at Brookings. Tell us about the 17 Rooms approach, and specifically, how the “teams of teams” approach shifted your focus toward collective intelligence as a framework, or even a new science for solving global problems?
Jacob Taylor: The basic premise embedded in 17 Rooms is that the world’s toughest challenges—from eliminating extreme poverty to preserving ecosystems, advancing gender equality, and ensuring universal education—are problems no single actor can solve alone.
17 Rooms is a practical response to this challenge of how to catalyze new forms of collaboration that cut across institutions, sectors, and silos. It uses the SDGs to create a “team of teams” problem-solving methodology: Participants first gather into small teams, or “Rooms,” to collaborate on ideas and actions within an issue area. Proposals are then shared across Rooms to spot opportunities for shared learning and—where appropriate—shared action.
So, 17 Rooms aligns perfectly with my intuition that change often boils down to people collaborating and connecting in small, mission-driven teams. And with the right infrastructure, it might be possible to scale teaming as a powerful unit of action for driving societal-scale outcomes.
Why AI alone won’t save us
Junjie Ren: AI now sits at the center of how we think about scaling ideas, innovations, decisions, or even creativity. How do you see AI both amplifying and complicating our ability to solve problems collectively?
Jacob Taylor: Generative AI is exciting because it combines generalized intelligence with natural language capability. You can now just talk or type to a generative AI system and expect a legible response. This has drastically reduced the friction of human-machine interaction and massively lowered the barrier to human participation in AI systems. And because these models are generalizable, they can be applied to many different problems at once, offering huge potential for a full range of challenges facing people and planet.
But there’s a big “but.” Realizing the positive societal impact of these technologies will depend a lot on how we design these systems and to what end. As I’ve written recently with Tom Kehler, Sandy Pentland, and Martin Reeves, for AI to work for people and planet—and not the other way around—we need to talk about AI as social technology built and shaped by humans and figure out how to use AI to amplify—rather than extract—human agency and collaboration. The design choices we make today will determine whether AI strengthens collective problem-solving or deepens existing divides.
Junjie Ren: Could you tell us more about the schisms or gaps you see in current AI discourse?
Jacob Taylor: Current AI conversations tend to split in two. One side is tech-first—focused on algorithms, frontier model capabilities, and conjecture around Artificial General Intelligence (AGI) and whether it will save us or take all our jobs. The other is policy-first—centered on risk and rights, aimed at protecting humans from AI’s harms. Both leave out the bigger question—and the bigger opportunity—which is how to combine human and artificial intelligence to unlock new forms of collective intelligence.
Some colleagues of mine have suggested reframing generative AI as “generative collective intelligence,” or GenCI, because at its core, there’s a human story throughout. Foundation models are trained on the human collective intelligence embedded across the internet. They’re refined through reinforcement learning from human feedback and through hours of human labor spent curating data, training, and conditioning these systems. Even after deployment, much of their improvement comes from ongoing human user feedback. At every stage, humans are part of the value chain.
Yet that story is not being articulated in public discourse or policy debate. If we position these frontier AI systems correctly, they can elevate and amplify human potential in teams, in organizations, and in communities. Yes, there may be labor market disruptions and creative destruction, but there’s also the possibility of new ways of working and expanding human potential. That’s the part of the conversation we need to develop with innovative approaches and the right policy incentives.
When humans and AI team up: Vibe teaming defined
Junjie Ren: Let’s shift to vibe teaming, a term you coined with Kershlin Krishna. What is it? How does it work in practice, and how does it differ from traditional prompt and response or copilot models?
Jacob Taylor: Vibe teaming is a new approach to what we call human-human-AI collaboration. It’s a way to combine AI tools with human teamwork to create better outputs. In our case, we’ve been exploring its application to challenges embedded in the SDGs, asking: How could a new model of human-AI teaming help advance progress on something like ending extreme poverty globally?
The idea came from “vibe coding,” a term popularized earlier this year by software engineer Andrej Karpathy. He described a workflow in which he talks to an AI model, describing the “vibe” of an idea for a software product, and the model produces the first draft. The human expert then iterates on that draft with the model—giving feedback on bugs or tweaks—until the product is complete. The process is quick, conversational, and low-friction, with the AI handling much of the lower-level work.
We wondered: What if we did this collaboratively? So Kershlin and I sat down together in front of a phone, talked through what we wanted to create (in this case, a PowerPoint presentation), and ended up with a 20-minute transcript. We fed that into our AI model, and it quickly produced a draft presentation. That was the starting point for vibe teaming, and it felt like we were onto something.
Pairing decades of human expertise with AI’s speed feels like a special sauce worth understanding.
Jacob Taylor
When world-class strategy takes hours, not years
Junjie Ren: Walk us through a concrete use case—like the SDG 1.1 experiment with Homi Kharas?
Jacob Taylor: We wanted to test vibe teaming on a real outcome, so we brought in our colleague Homi—a leading expert on global poverty eradication—and asked: What if we used this approach to design a global strategy for ending extreme poverty by 2030?
In a single 90-minute session, we produced what we considered a “Brookings-grade” strategy—high enough quality to publish, which we did, along with a related blog. Our 17 Rooms team spent a fair amount of time thinking about what sequence of questions might get the most out of an expert conversation. Then the process was straightforward: start with rich human input, in this case a 30-minute recorded conversation with one of the world’s leading thinkers on global poverty. Feed that transcript into our customized AI models. Then engage in a careful, iterative process of human review and validation—you were part of that, Junjie—to refine the output for publication.
The AI played a supportive role, handling tasks like transcription and first-draft generation, but the quality came from the depth of the human input and the decades of expertise behind it. Homi has been working in this space for over 40 years; we were drawing on his lifetime of insight and combining it with our own. Pairing that kind of wisdom with AI’s speed in iterating, automating, and structuring outputs feels like a “special sauce” worth understanding.
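The interview doesn’t specify the team’s tooling, but the loop described here (rich human conversation in, transcript to a model, first draft out, then iterative human review) can be sketched. Below is a minimal, illustrative Python version assuming the OpenAI SDK; the model names, prompts, and file paths are placeholders, not the team’s actual setup.

```python
# Illustrative sketch of the vibe-teaming loop described above, assuming
# the OpenAI Python SDK. Model choices, prompts, and file paths are
# placeholders; the team's actual tooling is not specified in the interview.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def transcribe(audio_path: str) -> str:
    """Step 1: turn the recorded expert conversation into a transcript."""
    with open(audio_path, "rb") as f:
        return client.audio.transcriptions.create(model="whisper-1", file=f).text

def first_draft(transcript: str) -> str:
    """Step 2: ask a model for a first draft grounded in the human input."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model
        messages=[
            {"role": "system", "content": (
                "Draft a strategy outline grounded strictly in the team "
                "conversation below. Flag gaps instead of inventing facts.")},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

def revise(draft: str, feedback: str) -> str:
    """Step 3 (repeated): apply the human team's review feedback."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": (
            f"Current draft:\n{draft}\n\nTeam feedback:\n{feedback}\n\n"
            "Revise the draft accordingly.")}],
    )
    return response.choices[0].message.content

# Humans supply the rich input up front and the judgment at every pass.
draft = first_draft(transcribe("expert_conversation.m4a"))
draft = revise(draft, "Tighten the framing; the poverty figures need sourcing.")
```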
Junjie Ren: What’s next for vibe teaming? Is it validation or scaling?
Jacob Taylor: So far, we’ve had positive engagement with the approach—from AI teams at major U.S. automakers to government agencies around the world, and of course our colleagues here at Brookings, who are excited to experiment with it. We think it could become a practical tool for helping people integrate AI into the knowledge work they’re already doing.
Since these initial tests, we’ve been exploring how to scale up and validate the approach in different contexts. On one hand, that means bringing more people into policy conversations to inform the strategies and outputs that come from processes like this. On the other, it means testing whether the method itself can be validated as a source of enhanced collaboration, creativity, and even team flow—relative to more individual work or other team formats.
Why ‘team human’ still matters
Junjie Ren: In policymaking spaces, where AI can already synthesize, summarize, and even simulate, what exactly is the role of humans?
Jacob Taylor: There are a few parts to that. Big picture, what we were able to produce in 90 minutes (or a few hours total) was, by all accounts, world-class work. One of our Brookings colleagues thought it compared favorably with anything the World Bank has published on the topic. That raises big questions: If a small group of humans, plus AI, can produce something like this so quickly, what does that mean for large institutions and the traditional process of knowledge creation?
This could signal an early disruption to policymaking. AI isn’t replacing knowledge creation; it’s an amplifier, handling lower-level work (transcribing, drafting) so humans can focus higher up the value chain: judgment, collaboration, decisionmaking, brainstorming, creativity.
That shift frees up capacity for the real game, which is building the architectures that let people work across silos, translate between institutional languages, and act collectively on big challenges. In our team’s anecdotal experience, through vibe teaming, we’re already spending less time buried in spreadsheets or documents and more time in conversation and quality control.
Junjie Ren: What does success look like in practice when AI is a cognitive amplifier and not a replacement for humans?
Jacob Taylor: Success is when we can measure human-AI collaboration actually improving collective intelligence. The science here is advancing fast. We can now identify causal mechanisms of collective intelligence in groups, ecosystems, and organizations.
One simple framework breaks it into three components: collective memory (what we know together), collective attention (what we’re focused on together), and collective reasoning (what we have the potential to act on together). The question is: Can we use these factors to assess the outputs of human-AI systems? Can we say, “this collaboration increased our collective attention on a problem” or “this process expanded what we know together”?
That’s the next frontier: tying experiments with these tools directly to measurable outcomes, especially on real-world challenges like the SDGs, so it’s not just a novel process but progress we can track and prove.
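One speculative way to operationalize the three components is as a session-level score. In the Python sketch below, the 0-to-1 scales, example measures, and equal weighting are all assumptions for illustration, not a validated instrument from the field.

```python
# Speculative sketch: scoring a human-AI session on the three components
# named above. The 0-to-1 scales and equal weighting are assumptions for
# illustration; a validated instrument would need real measures and weights.
from dataclasses import dataclass

@dataclass
class SessionCIScore:
    collective_memory: float     # what we know together (e.g., shared facts retained)
    collective_attention: float  # what we're focused on together (e.g., topic overlap)
    collective_reasoning: float  # what we can act on together (e.g., options generated)

    def overall(self) -> float:
        """Unweighted mean of the three components."""
        return (self.collective_memory
                + self.collective_attention
                + self.collective_reasoning) / 3

# Example: compare a vibe-teaming session against a solo-work baseline.
session = SessionCIScore(0.8, 0.7, 0.6)
baseline = SessionCIScore(0.5, 0.6, 0.4)
print(f"Session {session.overall():.2f} vs. baseline {baseline.overall():.2f}")
```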
Human embodiment and cognitive atrophy
Junjie Ren: You’ve talked about cognitive atrophy as a risk. How do we guard against this trend in high-AI environments?
Jacob Taylor: Obviously, with any new technology like this, humans and technology co-evolve, and cognition co-evolves. We are going to see atrophy in certain skills overall, and this is a particular risk for younger staff entering the workforce or anyone earlier in their skill development for knowledge work.
But there’s also the opportunity to develop new cognitive competencies, skills, and attributes. Human-AI interaction—vibe coding, vibe teaming—is, over time, going to become a new muscle in itself, a bit like writing or reading, with its own set of commands. So there’s a balance to strike here: what needs protecting, and what we should lean into. In that spirit, I’m very much a “team human” kind of guy in the age of AI, and what is most human, meaningful, and core to us is our embodiment.
Junjie Ren: Do you see embodied practices (such as Tai Chi, which you are known to lead at our staff retreats) having an active role in shaping how we design and interact with technologies like AI?
Jacob Taylor: You know, the fact is that we’re in a physical body, and we use that to navigate the world, relate to others, and cultivate energy, creativity, and connection. I think that coming back, literally, to the in-breath and the out-breath that we as biological creatures have uniquely, and can share with others, is key to grounding the human ingredients in the AI story.
Frankly, I think we’ll see that being human is going to matter more than ever in an age of AI. It’s going to force us to really clarify what being human really means. For the hopeful among us, it’s time to really speak out for what those human characteristics are. I think a lot of them are embodied in our most visceral, grounded practices that we enjoy together in community with others.
One big takeaway
Junjie Ren: Last question: If you were talking to a policymaker, an NGO leader, or a CEO tomorrow, what is the one principle of vibe teaming you think they should try?
Jacob Taylor: Yeah, there’s no free lunch. That’s the basic upshot with AI, I think. Humans shape the inputs and outputs of AI systems at every step. With this in mind, it’s so important to capture and elevate what makes us human—ingredients of shared purpose, story, motivation, and priorities—and build hybrid human-AI systems and tools with these ingredients as starting points.