Commentary

How AI’s future will echo the rise of the PC

Michael J. Ahn, Associate Professor, Department of Public Policy and Public Affairs, University of Massachusetts Boston

January 8, 2026


  • While discussions around AI tend to revolve around scale, a growing share of intelligence is beginning to shift outward toward personal devices.
  • Personal AI reflects an architecture where core capabilities remain centrally trained, while meaningful personalization occurs locally on user devices.
  • More personalization can create a more distributed, resilient landscape with models that reflect different perspectives and training environments.
[Image: A brightly colored office of people working at connected desks, with screens and networks in the air, illustrating a digitally transformed workplace. Credit: Jamillah Knowles & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/]

Public discussions about artificial intelligence (AI) today often focus on scale—massive data centers, rapidly expanding energy demands, and a race for ever-greater computing capacity. This narrative conveys an image of AI as a technology that will remain concentrated in the hands of major companies operating sophisticated, power-intensive facilities. In many ways, this mirrors the earliest era of computing, when mainframes defined the limits of what was possible and who could participate. 

While this centralized model reflects the current phase of AI development, history suggests that it is unlikely to be the final stage. Technologies that begin in highly centralized forms often evolve toward more personal and distributed ones. The rise of the personal computer (PC) is a good example of this progression. Once computing power moved from institutions to individual devices, innovation accelerated, access broadened, and society integrated technology in new and meaningful ways. AI may now be approaching a similar turning point. 

A growing share of intelligence is beginning to shift outward toward personal devices. This emerging trend signals the rise of personal AI (PAI), in which AI becomes more localized, context-aware, and tailored to individual users. Like the PC revolution, this transition carries significant implications for innovation, governance, privacy, and the broader AI ecosystem.

Mainframes to PCs: A useful historical parallel 

The evolution from centralized mainframes to personal computers helps illuminate where AI may be headed next. Mainframes concentrated computational power in specialized environments, and users accessed these systems through terminals that relied entirely on central resources. This model enabled powerful capabilities, but it limited participation and constrained the direction of innovation. Personal computers fundamentally changed that dynamic. With advances in microprocessors, memory, and software design, computing power shifted directly into the hands of individuals. Users gained agency, creativity expanded, and entire industries grew up around software and applications developed closer to the people who used them.

Today’s AI resembles the mainframe phase of that earlier era. Training large models requires vast data sets, high-performance accelerators, and tightly managed data center environments. These demands naturally steer AI development toward centralized systems. Yet recent advances point toward a diffusion of intelligence. Devices increasingly include neural processing units capable of supporting sophisticated on-device inference. Smaller models are becoming more capable, and retrieval-augmented generation allows devices to use local information to enhance model performance. These developments echo the historical moment when PCs began supplementing, and eventually transforming, the mainframe-centered computing world. 
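These building blocks can be illustrated with a simple sketch. The example below is a minimal, illustrative Python sketch of retrieval-augmented generation running entirely on a device: notes stored locally are ranked by word overlap with a query, and the best matches are folded into a prompt for a small local model. The LOCAL_NOTES data, the retrieve helper, and the on_device_model stand-in are all hypothetical names used for illustration; a real deployment would use an embedding index and a model runtime supplied by the device or platform vendor.

```python
import re
from collections import Counter

# Personal notes that never leave the device (illustrative examples).
LOCAL_NOTES = [
    "Dentist appointment moved to Friday at 3pm.",
    "Project review with the city planning office is next Tuesday.",
    "Prescription refill is due at the end of the month.",
]

def tokenize(text: str) -> list[str]:
    """Lowercase and split into words so punctuation does not block matches."""
    return re.findall(r"[a-z0-9]+", text.lower())

def retrieve(query: str, notes: list[str], top_k: int = 2) -> list[str]:
    """Rank local notes by word overlap with the query; keep the best matches."""
    query_words = set(tokenize(query))
    scored = []
    for note in notes:
        counts = Counter(tokenize(note))
        score = sum(counts[w] for w in query_words)
        scored.append((score, note))
    scored.sort(reverse=True)
    return [note for score, note in scored[:top_k] if score > 0]

def on_device_model(prompt: str) -> str:
    """Hypothetical stand-in for a compact model running on a neural processing unit."""
    return f"[on-device model response to a {len(prompt)}-character prompt]"

def answer(query: str) -> str:
    """Retrieve personal context locally, then run inference locally."""
    context = "\n".join(retrieve(query, LOCAL_NOTES))
    prompt = f"Context from this device:\n{context}\n\nQuestion: {query}"
    return on_device_model(prompt)

print(answer("When is my dentist appointment?"))
```

The essential point of the sketch is that both the personal documents and the inference step stay on the device; only the underlying model originates elsewhere.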

What personal AI means 

Personal AI does not imply that individuals will train foundation models on their smartphones or laptops. Instead, it reflects an architecture in which core capabilities remain centrally trained, while meaningful personalization occurs locally on user devices. The AI that understands a person’s habits, preferences, and routines is increasingly shaped by data that never leaves the device. Personal AI relies on fine-tuning, task-specific adapters, retrieval of personal knowledge, and context-aware inference powered by on-device hardware. 
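One way to picture this division of labor is a thin personalization layer wrapped around a centrally trained base model, as in the minimal Python sketch below. The PersonalAI class, its preference store, and the base_model stub are hypothetical and purely illustrative; the point is simply that personal context is read from, and written to, storage that never leaves the device, while the heavyweight model itself is trained and updated centrally.

```python
from dataclasses import dataclass, field

def base_model(prompt: str) -> str:
    """Hypothetical stand-in for a centrally trained model shipped to the device."""
    return f"[base model output for: {prompt[:60]}...]"

@dataclass
class PersonalAI:
    """Wraps a shared base model with personalization that lives only on this device."""
    preferences: dict = field(default_factory=dict)  # e.g., tone, commute, schedule
    history: list = field(default_factory=list)      # recent requests, kept local

    def remember(self, key: str, value: str) -> None:
        """Store a personal fact locally; nothing is transmitted off the device."""
        self.preferences[key] = value

    def respond(self, request: str) -> str:
        """Fold local context into the prompt, then run inference on the device."""
        context = "; ".join(f"{k}={v}" for k, v in self.preferences.items())
        prompt = f"[user context: {context}] {request}"
        self.history.append(request)
        return base_model(prompt)

assistant = PersonalAI()
assistant.remember("tone", "concise")
assistant.remember("commute", "bus line 47")
print(assistant.respond("Plan my morning"))
```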

This approach offers clear benefits. Local processing improves responsiveness, strengthens privacy by minimizing the transmission of sensitive information, and reduces dependence on continuous connectivity. In fields such as health care, education, and public services, this combination of privacy and context-awareness creates opportunities for more trustworthy and effective applications of AI.

Yet the broader implications of PAI extend beyond convenience. As intelligence becomes more distributed, we should expect a significant increase in algorithmic diversity—a natural consequence of individuals and organizations shaping their own models. This diversification is healthy. Centralized AI systems tend toward uniformity: one model, one worldview, one set of assumptions shaping behavior across millions of users. Personal AI introduces variation, nuance, and differentiation, much like the explosion of creativity that followed the arrival of the personal computer. 

The financial sector offers an example. If every investment firm were to rely on the same “best” centralized AI model to make portfolio decisions, markets could become highly correlated and unusually fragile. A shared blind spot or flawed assumption could propagate instantly across the system, amplifying volatility and increasing the likelihood of large-scale breakdowns. In contrast, an ecosystem of diverse PAI systems—each shaped by different data sets, risk tolerances, and institutional priorities—would produce a healthier spread of strategies and perspectives. Just as ecological diversity strengthens resilience, algorithmic diversity can help stabilize financial markets and reduce systemic risk.

This diversification will matter just as much at the organizational and individual levels. For organizations, competitive advantage will increasingly hinge on the ability to cultivate effective AI systems of their own, the organizational counterpart of personal AI: models tailored to their operations, data environments, and institutional knowledge. The capability to train, refine, and govern these personalized systems will play an important role in determining organizational success.

At the individual level, PAI may become a source of agency and creativity. Professionals, students, and creators will be able to shape AI systems that reflect their work patterns and personal preferences. Over time, individuals may come to view their AI not as a generic tool but as a partner embedded in their daily routines—a companion tuned to their knowledge, style, and objectives. In this sense, PAI represents not merely a technical evolution but a structural one. It shifts AI from a centralized utility toward a distributed landscape where learning, creativity, and competition occur at the edges—closer to where people live, work, and innovate. 

Implications for governance, privacy, and the AI ecosystem 

The shift toward PAI also raises important considerations for governance, privacy, and the broader AI ecosystem. On the governance side, a decentralized model reduces the risks that come from relying too heavily on a small number of large, centralized, and dominant AI systems. When most institutions depend on the same models, any blind spot or design assumption embedded in those systems can spread widely and create vulnerabilities at scale. Personal AI offers a different path by allowing agencies, organizations, and communities to adapt AI to their own needs, including situations where sensitive or mission-critical data cannot easily be shared. In this way, PAI can strengthen institutional autonomy and reduce the concentration of power that often accompanies centralized intelligence. 

Privacy may also be enhanced in a PAI environment. Much of today’s AI depends on sending personal or organizational data to remote servers for processing, but PAI reduces this need by keeping more computation and learning on the device itself. Instead of transmitting data to a central system, AI is brought to where the data resides. When personalization and inference happen locally, individuals and organizations maintain greater control over their information. This is especially important in sectors such as health care, education, and public services, where privacy expectations are high and the risks of exposure are significant. Local processing aligns with data-minimization principles—the idea that only the data strictly necessary for a task should be used—and can help create a more trustworthy foundation for AI adoption. 

The shift toward PAI may also reshape the broader AI ecosystem by supporting a healthier level of diversity. Centralized AI tends to encourage uniformity: one dominant model, optimized for a global average, shaping decisions across millions of users. Personal AI, by contrast, creates room for variation as users and organizations adapt systems to their own data, contexts, and priorities. This diversity can serve as a stabilizing force. As the financial-market example earlier suggests, reliance on a single dominant model can create systemic fragility. A more distributed landscape, in which different models reflect different perspectives and training environments, helps avoid that outcome. In this respect, PAI mirrors the rise of the personal computer, which expanded computing from a single centralized architecture to a more diverse, innovative, and resilient ecosystem. 

Conclusion 

AI is often viewed through the lens of centralized data centers and rapidly expanding infrastructure. While this remains an essential element of AI development, it reflects only one phase of the technology’s evolution. Just as the personal computer reshaped computing by shifting capability closer to individuals, PAI has the potential to redefine how intelligence is experienced, governed, and integrated into daily life. 

Centralized AI will continue to be vital for training and updating the most advanced models. But the intelligence people rely on every day—the intelligence that understands personal routines, engages in context-specific tasks, and supports individual decision-making—will increasingly reside on personal devices. Policymakers who recognize this development will be better prepared to guide AI’s evolution in ways that enhance privacy, strengthen trust, and expand the benefits of AI across society. 

The next chapter of AI may not be written solely in large data centers. It will emerge across billions of devices, in homes, workplaces, and communities. This is the promise of PAI—a future in which intelligence becomes more capable, more accessible, and more deeply connected to the people it serves. 
