Commentary

Three principles for growing an AI ecosystem that works for people and planet

Jacob Taylor, Thomas Kehler (CEO, CrowdSmart.ai), Sandy Pentland (Senior Research Fellow and Professor, Stanford Center for Human-Centered Artificial Intelligence), and Martin Reeves (Chairman, BCG Henderson Institute)

August 1, 2025


  • An Artificial Intelligence (AI) ecosystem that works for people and the planet would complement “LLMs” with “LocalMs” that give communities equity in AI and agency to self-organize around shared priorities.
  • The authors present three actionable principles for this alternative ecosystem: frame AI as a social technology, design AI that is loyal to human agency, and coordinate around big-bet applications.
  • A moonshot-style effort for a breakthrough application could prove the existential need for a different approach to building AI.

By now, many investors, organizations, and entrepreneurs are deeply committed to building an Artificial Intelligence (AI) ecosystem that prioritizes agency, equity, and sustainability for people and the planet. Yet current investment options remain limited. Governments and funders looking to support public-benefit AI face an unsatisfying choice: invest in costly “sovereign AI” infrastructure (high-end compute, foundation models, energy) with no clear path to strategic autonomy or to matching the frontier capabilities and applications of U.S. or Chinese hyperscalers, or fund a scattered portfolio of downstream “AI for good” applications, many of which feel like solutions in search of problems. Neither option helps communities generate the context-rich data needed to tackle shared challenges, and both fail to adequately address the concentration of power in a handful of private companies.

An alternative approach to building AI systems, grounded in the science of collective intelligence (CI), can address these shortcomings at once.

As we explored together recently in discussions with entrepreneurs and investors at Human+Tech Week, it is now technically feasible (thanks to advances in privacy-preserving modeling techniques) and inexpensive (due to ever-decreasing costs of computation and software) to shift from building centralized compute clusters and large language models (LLMs) to building smaller, decentralized local language models (or “LocalMs”) that capture and amplify, rather than extract, the intelligence of individuals, teams, and communities. The ultimate vision for this approach is to complement efforts to achieve monolithic Artificial General Intelligence with a bottom-up movement to grow ecosystems of intelligence: thousands or even millions of intelligent communities with sovereignty over their data and culture, meaningful equity in AI infrastructure and applications, and agency to use these systems to self-organize around shared priorities.

Elements of this vision already exist in technical prototypes, policy proposals, and committed communities of technology entrepreneurs, scientists, and investors. What is needed is a concerted effort across these actors to stitch together a more coherent field capable of innovating impactful use cases that cut through the noise. To this end, we distilled three guiding principles for framing, designing, and coordinating an AI ecosystem that works for people and planet.

1. Frame AI as a social technology

How we talk about AI is important. AI that works for people must recognize the human contributions that make it possible. LLMs are pre-trained on vast troves of human-created and human-tagged internet content, refined through reinforcement learning from human feedback (RLHF), and further tuned through human usage patterns. AI consistently performs best as a hybrid system combining machine speed and scale with collective human expertise and know-how.

And yet, these human contributions to AI are largely absent from mainstream conversations. New narratives are needed to shift AI policy debates from a paradigm of protecting humans from AI to ensuring that human contributions to hybrid human-AI systems (data, deliberation, judgment, intuition, and social context) are protected and fairly valued. As some of us (TK, SP, MR) have recently proposed, there is an opportunity to reframe generative AI as generative collective intelligence or “GenCI”: a social technology that combines algorithmic capacity with human expertise to address complex, real-world challenges that neither humans nor machines could address alone.

2. Design AI that is loyal to human agency

Humans are agents, and human agency should be the central concern in AI development. Yet amid today’s enthusiasm for autonomous AI systems or “agentic AI,” these fundamental truisms require explicit defense. Investors, policymakers, and end-users should insist on AI algorithms, architectures, and approaches that amplify rather than extract human agency and social intelligence.

It is possible to build algorithms that capture and elevate shared beliefs, purpose, and action potential in groups and organizations, as platforms like Common Good AI demonstrate. In a similar vein, approaches like Pol.is or Deliberation.io use summarization models and adaptive polling to scale inclusive, grounded dialogue while preserving nuance and diversity of voices. Approaches to human-AI teaming like vibe teaming can position AI tools to support the creativity and quality of human-to-human problem-solving.
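
To make this concrete, the sketch below illustrates the general technique behind opinion-mapping tools like Pol.is: dimensionality reduction plus clustering over a participant-by-statement vote matrix, which surfaces distinct opinion groups while preserving their differences. It is a toy illustration of the approach, not Pol.is’s actual pipeline, and the vote data are randomly generated stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical vote matrix: rows are participants, columns are statements
# (1 = agree, -1 = disagree, 0 = pass). Real deliberation platforms collect
# this through adaptive polling; here the votes are random stand-ins.
rng = np.random.default_rng(42)
votes = rng.choice([-1, 0, 1], size=(60, 12)).astype(float)

# Project each participant's voting pattern into a low-dimensional
# "opinion space", then cluster to find groups of like-minded voices.
coords = PCA(n_components=2).fit_transform(votes)
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(coords)

# Within each opinion group, report the statement with the strongest
# average agreement, keeping the diversity of views across groups visible.
for g in range(3):
    members = votes[groups == g]
    consensus = members.mean(axis=0)
    print(f"group {g}: {len(members)} participants, "
          f"strongest agreement on statement {int(np.argmax(consensus))}")
```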

Emerging AI agents, meanwhile, can and should be “loyal by design”—treated as fiduciaries for human individuals, teams, or communities (rather than for companies alone)—curating data and training LocalMs on their behalf. Innovative data governance (following models like the Human Genome Project) and privacy-preserving machine learning techniques can help aggregate LocalMs into larger, community-governed ensembles and enterprises like trusts or cooperatives.
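
To illustrate one way such aggregation could work, here is a minimal sketch of federated averaging (FedAvg), one well-known privacy-preserving technique consistent with, though not prescribed by, the proposals above. The communities, parameter vectors, and record counts are hypothetical; only model parameters are shared, never raw data.

```python
import numpy as np

def federated_average(local_params, record_counts):
    """Aggregate LocalM parameters into a community-governed model.

    Only model parameters leave each community; the raw, context-rich
    data never does. Weighting each LocalM by the volume of data it
    was trained on follows the standard FedAvg rule.
    """
    total = sum(record_counts)
    return sum(p * (n / total) for p, n in zip(local_params, record_counts))

# Hypothetical example: three communities, each holding its own LocalM.
rng = np.random.default_rng(0)
local_models = [rng.normal(size=8) for _ in range(3)]  # stand-in parameter vectors
counts = [1200, 450, 3000]                             # records held locally by each

community_model = federated_average(local_models, counts)
print(community_model.round(3))
```

In a full system, this averaging step would itself run under secure aggregation or differential privacy so that no individual community’s parameters are exposed; the sketch shows only the weighting logic.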

3. Coordinate around big-bet applications

Innovative applications can demonstrate why growing an alternative AI ecosystem matters. AI systems grounded in CI science and design principles will have a natural competitive advantage on challenges that no single actor can solve alone: a regional or global green-energy trading platform that uses LocalMs to transparently validate and exchange carbon-intensity data from the ground up; trusted AI-driven public services that use LocalMs to federate sensitive personal and government datasets; or the scale-up of pioneering prototypes like Interspecies Money, which uses CI design principles to build AI that represents and values the agency of non-human life. To mobilize the scale of infrastructure (high-end compute, data, and talent) required to develop world-leading use cases, an “Airbus for AI” model, a public-private consortium of middle powers’ national AI labs, could collaborate to take these ideas to market as public utilities (see initial proposals for Asia and Europe).

Where to next?

Robust social and policy collaboration is needed to support this vision. A neutral hub could unite academic and practitioner communities at the frontier of AI and collective intelligence to develop tools and benchmarks that assess not only performance and safety, but also the quality of human-AI collaboration and community outcomes at all scales. A moonshot-style effort for a compelling breakthrough application could prove the existential need for an AI ecosystem that serves people and planet—not the other way around.

Acknowledgements and disclosures

Sandy Pentland is a leader of Deliberation.io, a nonprofit research platform developed in collaboration with the Stanford Digital Economy Lab and the MIT Gov Lab. He is also affiliated with Loyal Agents, a research partnership between Stanford and the Consumer Reports Innovation Lab. He does not hold a formal governance role in either initiative.

Thomas Kehler is a co-founder of Common Good AI (CGAI), a registered 501(c)(3) organization. He does not currently hold a paid or unpaid position within the organization. Common Good AI is a fiscally sponsored project of Aspiration, a registered 501(c)(3) non-profit organization.
