As the world reflects on Pope Francis’ legacy following his death last week, his leadership on social and economic justice is rightly being celebrated. But far less known—yet still profoundly prescient—is his visionary leadership on artificial intelligence. At a time of rapid technological advancement with few guardrails, Pope Francis emerged as the world’s leading moral voice, insisting that AI be developed and deployed with human dignity at its core.
Today, AI is poised to reshape nearly every dimension of life—from work to governance to human relationships—at a pace few institutions are prepared to confront. In this context, Pope Francis’ moral guidance spoke beyond religious boundaries. Rooted in a deep concern for the common good, his message has resonated with policymakers, technologists, and citizens alike, regardless of religious belief or views on organized religion. Leading architects of the AI era—including Google DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman, and xAI CEO Elon Musk—have sought audiences with the pope, recognizing the need for ethical frameworks that match the power and reach of the technologies they are building.
This essay focuses on one critical dimension of Pope Francis’ ethical approach to AI: his vision for the future of work. I draw on remarks that I delivered last month at a global convening at the Vatican. For two days, I joined 60 participants—judges, former government ministers, cardinals, academics, and policymakers—in the Vatican gardens to examine AI through the lens of justice, democracy, and ethics.
As a researcher focused on AI’s impact on work and workers, I consider how Pope Francis’ teachings can guide a future where technological progress enhances, rather than diminishes, human dignity. His teachings challenge us to evaluate progress not by technical capability or profit, but by how well technology serves humanity—placing people at the center of innovation, not its margins.
Molly Kinder presenting at the Vatican’s workshop on Artificial Intelligence, Justice and Democracy on March 4.
An unlikely visionary for the digital age
It may seem counterintuitive to look to an 88-year-old religious leader at the helm of one of the world’s oldest institutions for guidance on today’s most cutting-edge technologies. And yet it is Pope Francis and the Holy See, the headquarters of the nearly 2,000-year-old Catholic Church, that have issued some of the most thoughtful and morally grounded reflections on humanity in the age of AI.
During his papacy, Pope Francis brought this moral perspective on AI to the world stage, shaping dialogue well beyond the Vatican. Last year, he delivered a powerful address on AI to world leaders at the G7 summit, noting the dawn of a “cognitive industrial revolution” and calling for “ethical inspiration” to guide AI. In remarks to the World Economic Forum in Davos, he urged the global elite to prioritize human dignity over efficiency, and to ensure AI progress benefits all. And through thought-provoking essays such as Antiqua et Nova (Latin for “old and new”), the Vatican under Pope Francis emphasized that AI must serve humanity, not substitute for it.
It is not surprising for a faith leader to emphasize the human stakes of change. What set Pope Francis apart was his embrace of AI’s full complexity. At a time when many debates narrow AI to either boundless opportunity or inevitable harm, Pope Francis called for a more nuanced understanding that recognized both the promise and the peril, and underscored society’s ability to shape the direction of AI.
In his speeches and writings, he made clear that the Church welcomes the potential of AI to enhance opportunity and well-being. He also invoked the history of human progress. The pope challenged the framing of man versus machine, and especially machine over man, arguing that AI can be a useful tool for humans to deploy to enhance their God-given capabilities, just as other tools have done across centuries. He questioned the term “artificial” intelligence, noting that AI is not so much mimicking human intelligence as derived from it—built by human hands and trained on the vast corpus of human creativity and knowledge.
But Pope Francis also warned of the dangers of AI if used for negative ends. He was especially critical of what he called in his encyclical Laudato Si’ (“On Care for Our Common Home”) the “technocratic paradigm”—a worldview that treats technology as the solution to every problem, subordinates human beings to efficiency, and views labor merely as a cost to be minimized rather than an essential expression of human dignity.
Ultimately, Pope Francis framed the Church’s objective not as stopping AI’s advances, but as harnessing its extraordinary potential to serve humanity, especially the most vulnerable. He stressed that AI’s success should be measured not in conventional technological benchmarks, market share, or productivity gains, but in whether it improves the quality of life for all humanity.
Wonder and worry: AI’s dual promise for workers
How might AI live up to this potential? What are the risks and opportunities for workers? Last month, I had the opportunity to explore these questions firsthand at the Vatican. I was invited to participate in a workshop organized by the Pontifical Academy of Social Sciences (PASS), a body of top experts and scholars convened by the Church.
I had the privilege of leading the workshop’s session on AI and the dignity of work. Drawing on my research at Brookings, I framed both the wonder and worry of AI, highlighting two key opportunities and two significant risks.
First, AI holds real potential to enhance human capabilities. Thirty years ago, Apple CEO Steve Jobs famously described computers as “bicycles of the mind”—an apt metaphor for AI today. Jobs referenced a study showing that humans, compared to animals, are woefully inefficient at movement—except when they are riding a bicycle. Just as bicycles dramatically increase human efficiency for travel compared to walking, AI can greatly enhance human cognition, creativity, and abilities—if it is designed to augment humans, not simply replace us. That is the essential opportunity of AI.
Second, AI could—if deployed with intention—deliver special gains to workers long excluded from technological progress and economic opportunity. Unlike previous waves of innovation such as computerization, this technology could potentially benefit workers with less education—if AI systems are designed to lower barriers to opportunity rather than raise them.
But the risks are real and growing, particularly around job displacement and de-skilling. Continuing a decades-long trajectory, AI is poised to automate middle-wage, repetitive work, especially in office settings. In advanced economies, this threatens clerical and administrative jobs that have long been stable employment for millions of high-school-educated women. In countries such as India and the Philippines, AI could replace humans in call centers, IT support, and business processing—jobs that have powered economic growth and lifted families into the middle class.
Strikingly, AI also presents new, unexpected risks. In research I conducted with my Brookings colleagues Mark Muro and Xavier de Souza Briggs, we analyzed data from OpenAI and found a reversal of past patterns: Today’s generative AI is likely to most impact knowledge-intensive jobs in law, business, finance, and creative fields—the very professions that were previously considered resistant to automation. While the technology today mostly augments human work, a shift toward substitution, if it comes, could carry sweeping social, political, human, and economic consequences.
The second risk lies in the quality of what remains: Even when jobs aren’t eliminated, the conditions of work could be degraded. Historically, vulnerable workers have borne the brunt of workplace harms, including surveillance, discrimination, work intensification (being expected to do more work in less time), and bias. While we don’t yet have good evidence of the impact of newer AI technologies, it’s easy to see how risks such as surveillance could be significantly exacerbated. The challenge is not just preserving jobs, but preserving—and more than that, enhancing—their quality and meaning.
The balance between these competing forces—automation and augmentation, job loss and growth, innovation and fairness, and, fundamentally, capital and labor—will be determined not by the technology itself but by our market and policy choices in the years ahead.
A fundamental tension: Artificial general intelligence versus the dignity of work
In addition to these broader risks and opportunities, in the workshop I highlighted a particularly stark tension: between the central role of work in human dignity and AI labs’ pursuit to create an artificial general intelligence (AGI) that could replace human work entirely.
The world’s top AI labs are engaged in fierce competition to create ever more powerful AI systems. Their end goal is described in several terms (AGI, superintelligence, transformative AI), but they share a common aspiration: to develop AI that surpasses human capabilities across nearly all forms of work.
Technology leaders have presented this future not merely as a bold aspiration, but as an imminent reality. Figures such as Elon Musk and Bill Gates envision a world where AI systems outperform humans at virtually every task, rendering much of human labor obsolete. Silicon Valley visionaries often speak of explosive growth and a new era of “unparalleled abundance,” where goods and services are essentially free and work is no longer a necessity. Yet amid these sweeping predictions, they offer strikingly little reflection on what becomes of human purpose, contribution, or fulfillment in such a radically transformed world.
Today, while much of society debates whether—or when—AI will replace humans at work, Catholic teaching pushes us to ask a deeper question: What are the risks if AI deprives humans of the work that makes us human?
This vision of human-replacing technology stands in direct conflict with centuries of Catholic teaching on the necessity of work for humanity. The Church emphasizes the “dignity of work”—the notion that work is not just a means to a paycheck, but a source of meaning, personal fulfillment, and contribution to family and the common good.
In the Judeo-Christian tradition more broadly, the theological value of work has roots in the opening pages of the Bible, in the story of Genesis, when man was placed in the Garden with the express purpose to “till it and keep it.” Over centuries of writings and encyclicals—notably, Pope Leo XIII’s Rerum Novarum (“On the Condition of Labor”) in 1891 and Pope John Paul II’s Laborem Exercens (“On Human Work”) in 1981—the Church emphasized that work is fundamental to the human experience and “man’s existence on earth.” Pope John Paul II called work an “obligation” and duty to both God and humanity. It is through work, he asserted, that we become more human.
In his 2020 encyclical Fratelli Tutti, Pope Francis warned about the potential harms, writing, “There is no poverty worse than that which takes away work and the dignity of work.” In a 2024 speech, he cautioned that such displacement leaves those without a job lacking the dignity that comes through work and risks concentrating wealth “for the few” while impoverishing the many. He urged a different goal for AI development: not creating AI that competes with and even surpasses humans, but creating an AI that serves “our best human potential and our highest aspirations.”
At stake: An age-old cornerstone of life well lived
Pope Francis’ emphasis on work’s centrality stems from both Church doctrine and lived experience. In his memoir, “Hope,” published earlier this year, he recounts his family’s immigrant journey from Italy to Argentina, where economic mobility shaped their lives. He witnessed firsthand how lost jobs during wartime and economic crises devastated families, while his grandparents’ hard work and commitment to education opened new possibilities. Through his own lifelong work ethic—evident until the final hours of his life—Pope Francis embodied the personal fulfillment that meaningful work can provide.
These stories reflect a truth at the heart of Catholic teaching: For centuries, work has been the foundation of how we care for families, cultivate our gifts, and build lives of dignity and purpose.
Writing in his encyclical letter, Fratelli Tutti (“Brothers and Sisters All”), Pope Francis captured work’s multidimensional meaning:
In a genuinely developed society, work is an essential dimension of social life, for it is not only a means of earning one’s daily bread, but also of personal growth, the building of healthy relationships, self-expression and the exchange of gifts. Work gives us a sense of shared responsibility for the development of the world, and ultimately, for our life as a people.
That story resonates powerfully with my family’s immigrant journey. My grandmother Mary left Ireland at 16 with limited formal education, working as a domestic worker in Chicago to send money home. Her earnings saved her family’s farm when her parents died and kept her 13 younger siblings together. My grandfather Coleman’s proudest achievement wasn’t his own status or wealth—it was that his blue collar wages as an elevator operator and barber helped send his children to college to become lawyers and teachers. What gave their work meaning was the possibilities it created for the next generation.
My identity is inseparable from that legacy. It was my grandparents’ grit and sacrifice that opened doors for my mother, and eventually for me—doors that led to professional opportunities they could never have imagined. A century after my grandmother arrived in America, work provides me with a vocation, a source of personal growth, creative expression, a chance to make a difference, and the foundation on which I am raising my three children.
But this vision remains out of reach for far too many. Across the globe, millions of people face exploitation, instability, or exclusion in their working lives. Even in wealthy nations, dignified, decent work is not equally accessible. The Church has long stood not only for the dignity of work, but for the dignity of workers—and for the moral imperative to expand access to decent, meaningful work for all. In the age of AI, this remains the Church’s North Star: not to eliminate work in the name of progress, but to extend its promise and ensure that more people, not fewer, can find purpose, stability, and hope through work.
Meaning, mobility, and dignity in a world without work
This context raises profound questions about the trajectory of AI development. Billions of dollars are now being invested in AI companies whose stated end goals could fundamentally reshape one of society’s oldest structures: human work. If the visions of leading AI labs are realized, they might not just change how we work, but whether we work at all.
In a society where AI systems can perform many traditional functions of human labor, what would take the place of work in providing purpose, dignity, structure, advancement, and fulfillment? Commonly proposed solutions such as a universal basic income address material needs, but cannot replace what work has long offered: a sense of competence and contribution, the discipline of daily effort, the hope that our children will do better, and the relationship and community that form around a shared endeavor.
To be sure, AI holds the potential to enhance human dignity by freeing up time for care, creativity, civic engagement, and personal growth. But that future is neither clear nor guaranteed. Realizing it would demand far-reaching changes to how we distribute resources, design institutions, and define success—not to mention a cultural shift in how we understand identity and contribution.
As we navigate these complex questions, the teachings of Pope Francis offer a compass: the conviction that human dignity must remain the central consideration, even as AI transforms the landscape of work.
From possibility to responsibility: Charting a path forward
A future without human work is not inevitable, but a world where AI has transformed the nature of work is already underway. The real question is not whether AI will reshape our economy, but how we will ensure that the dignity of workers remains at the center of these changes. What kind of future do we want to build, and how do we get there? What values will guide us?
There is no single solution to the challenges ahead. But there is a path—one that keeps human dignity at the center of technological change. Whether AI becomes a force for shared prosperity or deepening inequality will depend not on the technology itself, but on the choices we make now.
There are models we can build on. Guided by key examples, I propose five priorities to help ensure AI serves workers—and not the other way around. Many of the examples focus on work in America, but the lessons are applicable more broadly.
‘Genius in their pocket’: Uplifting marginalized workers who are excluded from opportunity
Pope Francis consistently called for AI to benefit all of humanity, but he placed particular emphasis on ensuring that it uplifts the most vulnerable workers—noting that the “true measure of our humanity” will be whether the use of AI includes the “least of our brothers and sisters.” This emphasis reflects a long Catholic tradition of special concern for the poor and marginalized, captured in the words of Jesus: “As you did it to one of the least of these my brethren, you did it to me” (Matthew 25:40).
But meeting this standard presents a profound challenge for AI. For decades, technology has widened inequality—rewarding those with education and capital while leaving other workers and even entire regions behind. Generative AI risks follow this same troubling trajectory—or, with intention, investment, and innovation, it could help reverse it.
Currently, the technology points to reinforcement, not reversal. In the U.S., workers with a bachelor’s degree or higher are twice as likely to use generative AI at work as those without. Meanwhile, the lowest-paid workers—in fast food, care work, and other essential roles—are adopting it the least. These patterns of inequality are mirrored globally. Research from the World Bank found that usage of ChatGPT skewed toward higher-income workers, and that low-income nations—where significant digital divides limit access to technology—accounted for just 1% of global usage.
Dario Amodei, CEO of Anthropic, describes a future where AI creates a “country of geniuses in a data center.” Yet the opportunity for workers lies in something even more radical: distributing that genius to empower humans, including those too often excluded from technological progress and decent work.
What if AI could equip even the least-skilled workers with a “genius in their pocket”—an on-demand coach, world-class expert, and guide to help master new skills, expand their roles, and pursue opportunities once reserved for the privileged few? What new possibilities might have opened for my grandparents—who built better lives in America through manual, blue-collar work—if they had carried that kind of support with them?
With the right design and investment, a line cook could move beyond food preparation to inventory management or menu development. A home care worker might use AI to receive live coaching, medical expertise, translation support, and emotional care strategies—becoming a more integral and valued part of a clinical team and opening new career growth opportunities. And in developing regions, a street vendor might harness AI to access business advice, marketing strategies, or supply chain guidance to grow their earnings and scale their enterprise.
Today, little of the energy, investment, or innovation in AI is directed toward realizing this possibility. The current trajectory overwhelmingly favors knowledge workers and high-skilled industries, risking a deepening of old divides rather than bridging them.
Seizing the opportunity of a genius in every worker’s pocket demands urgent, deliberate action. It will require designing AI tools for—and with—those most often left out. It will require investing in inclusive innovation: closing digital divides, expanding access to training and learning, and rethinking the structure of jobs to enable greater opportunity and mobility. As Pope Francis reminded us, the measure of success will not be how powerful technologies become, but how widely and justly their benefits are shared.
Enhancing worker voice and agency
Too often, technology is something that happens to workers, not with them. But workers should not just be passive recipients of AI’s impact; they must have agency in shaping how these tools are designed, deployed, and integrated into their jobs and lives. Ensuring worker voice is not only a strategy for more equitable outcomes, but also a moral imperative rooted in dignity, participation, and justice.
Pope John Paul II affirmed this principle in his influential encyclical Laborem Exercens (“On Human Work”), emphasizing that when they act together to advance justice, “workers will not only have more, but above all be more: in other words, that they will realize their humanity more fully in every respect.”
The consensus statement from the March workshop captured this imperative: “The future of work will depend partly on whether AI is designed with and for workers, enhancing human dignity rather than diminishing it.” Giving workers a seat at the table isn’t just good design—it’s a foundation for building a future of work that reflects shared values and shared power.
The 2023 Hollywood writers’ strike offered a glimpse of what’s possible when workers organize around AI. Writers secured groundbreaking protections not by banning the technology, but by negotiating guardrails that preserve livelihoods while enabling creative professionals and studios to responsibly benefit from AI. It was a rare and hard-won example of workers shaping the future of their own industry.
But such success will be difficult to scale without systemic reforms. My colleagues Xavier de Souza Briggs and Mark Muro and I identified a troubling “great mismatch”: The occupations most vulnerable to generative AI have the lowest union density—as low as 1%. Without addressing this mismatch, AI development and deployment will continue to prioritize efficiency and profit over worker well-being.
Even absent traditional unions, new models of worker voice can give employees a seat at the table. Promising models include California’s fast food council, Nordic sectoral bargaining, German works councils, and emerging innovations such as technology assessment forums. Some employers are launching labor-management partnerships, which can include designing the best mechanisms for a given industry or workplace. These are all signs that better approaches are possible—if we’re intentional.
Pope Francis has modeled a powerful example of how to engage new technology—not by halting its advance, but by insisting that it serve the common good. In that same spirit, workers must be at the center of envisioning and shaping the AI-enabled future. Whether through unions, works councils, professional associations, workplace committees, or new forms of collective voice, workers need a seat at the table—not just to safeguard against harm, but to help design how AI can enhance human dignity, expand opportunity and capabilities, and support meaningful work. Whatever the means, without deliberately amplifying worker voice—especially in vulnerable sectors—we risk entrenching power imbalances rather than harnessing AI for broadly shared prosperity.
Building on the ‘Rome Call’: Developing ethical standards for employers
As AI moves from research labs into workplaces, who will be responsible for ensuring that technology serves human dignity, not just corporate efficiency?
In 2020, the Vatican spearheaded the “Rome Call for AI Ethics”—an innovative, multistakeholder commitment to embed ethics into AI, focused on six guiding principles. The Rome Call was launched in partnership with major technology companies such as IBM and Microsoft, and has included leaders from across the faith spectrum, from Judaism to Islam.
Designing ethical technology is a critical first step. As generative AI moves from research labs to workplaces, the ethical burden expands to those who choose how the technology is used: employers. In the age of AI, the dignity of work depends as much on how algorithms are deployed as on how they are written. What is needed is a comparable moral framework, perhaps building on the Rome Call and its multistakeholder approach, to guide employers’ choices with a commitment to human dignity.
While no universally accepted standard yet exists, we can begin to define what it means to be a “high-road employer” in the age of generative AI. Such employers take a proactive approach to both the risks and opportunities AI presents, placing workers at the center of implementation decisions. They engage employees in shaping how AI is deployed, ensure benefits are shared (such as through productivity gains and redesigned work structures), and make meaningful investments in upskilling and career transitions. Importantly, they uphold commitments to early-career workers and those at risk of displacement, recognizing that human talent, not just efficiency, should drive technological progress.
Without a shared moral vision, workplace adoption of AI risks becoming another arena where short-term profit trumps long-term human flourishing. Building standards for ethical deployment is important for ensuring that technology strengthens, rather than erodes, the dignity of work.
Shaping policy for a human-centered AI era
Of course, we cannot rely solely on employers to voluntarily uphold high-road standards. Public policy has a crucial role to play in shaping the incentives of both employers and technology companies—encouraging the development and deployment of human-enhancing, rather than human-replacing, technologies. The lessons from past technological and industrial revolutions are clear: Policymakers must enact guardrails, standards, and supports to ensure AI’s gains are broadly shared and its harms mitigated.
There is already some movement in the U.S., most notably at the state level. A growing number of states have enacted targeted regulations to curb the most immediate workplace risks of AI, including algorithmic bias, excessive surveillance, privacy violations, and unreasonable productivity quotas. But these measures focus largely on job quality, not on job quantity. The more difficult frontier lies ahead: addressing the risk of large-scale displacement, which is a key concern for voters and workers alike.
So far, there is little in the way of scalable policy models to address systemic automation risk. Several structural factors contribute to this gap. Generative AI is developing faster than lawmakers can understand or regulate it. Its bottom-up, diffuse, and general-purpose nature makes it challenging to target. And distinguishing between productive augmentation and harmful substitution is often context-dependent and subjective.
Some promising concepts are emerging, such as protections against self-replacement training. But these modest proposals only begin to address AI’s potential economic disruption. Encouragingly for the U.S., workplace protection measures haven’t yet become entrenched in partisan divisions, creating space for thoughtful cross-spectrum policymaking.
As AI continues to evolve and reshape the workplace, the policy questions grow more urgent. How should the safety net adapt to a world of ongoing disruption? What systems of workforce development and lifelong learning are needed, and how must education evolve to prepare young people for an uncertain future? How should tax and redistribution policies shift to reflect new patterns of productivity and profit? And most fundamentally, how can we ensure that the economic value AI generates is broadly shared, so that technological progress serves society rather than deepening inequality?
Defining durable human skills and what jobs AI shouldn’t do
A core challenge in the age of generative AI is not just identifying what humans can still do better, but what tasks and roles should remain human by design. As AI systems have become increasingly capable, tools such as ChatGPT have upended long-standing assumptions about the boundaries of human skill—crafting creative writing, making persuasive arguments, even mimicking empathy in fields such as medicine, therapy, and education. The key question is not just what is left for humans, but what is essential to preserve for humans.
Here, Church teaching offers valuable insight. In the recent Vatican doctrinal note Antiqua et Nova, Church scholars reflected on AI’s limits and named three essential human capacities that AI cannot replicate. First, the wisdom of lived, embodied experience that comes from physical existence in the world. Second, relationality, which is grounded in authentic connection with others through shared vulnerability and presence. And finally, moral agency: the ability to make ethical choices and be held accountable for those decisions in ways that arise from human consciousness and free will.
These insights provide a guide to the kind of professional roles where humans should remain paramount. For example, these capacities are essential in roles that carry moral weight and demand accountability: judges who weigh justice with empathy, elected officials who have earned the public’s trust, or ethics officers responsible for confronting the harms of unchecked automation.
They are equally vital in roles defined by deep human connection: a social worker guiding a teen through foster care, a preschool teacher helping children navigate conflict, or a fundraiser building trust through faithful presence.
And they matter profoundly in domains where wisdom shaped by experience reveals hidden value: a union steward who’s lived through layoffs, a product designer raised in public housing who sees what others miss, or a filmmaker whose story expands who belongs in the frame.
The Church’s scholars also highlight fields such as health care and education, where AI can support but not replace. In medicine, AI has enormous potential to help diagnose and extend care, but it should not erode the human bond between patient and provider. In education, teachers cultivate more than intellect; they form character and meet students as whole people.
In a future shaped by artificial intelligence, it is important to be intentional about the roles where humans should serve. Jobs rooted in wisdom, conscience, and real human connection are the ones not just where humans excel—they are where humanity itself must remain at the center.
Conclusion
Amid the frenetic pace of AI’s development—propelled by profit, competition, and geopolitical rivalry—Pope Francis lifted our gaze in ways that will matter for the choices we make now and in the long term. He invited us to look beyond the race for dominance and instead ask: Who is this technology serving and what are its goals?
His enduring message transcends religious boundaries, offering not just critique, but moral clarity. By elevating worker voice, embedding guardrails, expanding opportunity, and preserving what must remain human, we can shape a future where AI serves us, not replaces us. In this pivotal moment, Pope Francis offered more than warning; he offered a vision that AI, rightly guided, can amplify our dignity instead of diminish it. The path ahead demands collective wisdom and moral courage so that in building AI, we do not lose what it means to be fully human.
Acknowledgements and disclosures
I am deeply grateful to Cardinal Peter Turkson, Chancellor Marcelo Suárez-Orozco, Sr. Helen Alford, and Hon. Roberto Andrés Gallardo for the invitation to participate in the Workshop on Artificial Intelligence, Justice, and Democracy at the Vatican in March 2025. I extend special thanks to Matthew Fenton and the staff of the Pontifical Academy of Social Sciences for their thoughtful coordination, to all of the workshop participants for the rich and stimulating discussions, and to Sofia Estupiñan Gomes for her loving care of my children during my time in Rome.
The research underpinning this essay was made possible through the generous support of the Omidyar Network, Ford Foundation, James Irvine Foundation, W.K. Kellogg Foundation, and Google. I am especially grateful to Mike Kubzansky, Anmol Chaddha, Anamitra Deb, and Michele Jawando, whose early support catalyzed our project on generative AI and work, and whose vision helped keep workers at the center of the conversation.
I also wish to thank Xavier de Souza Briggs and Matthew Fenton for their substantive feedback and invaluable suggestions to strengthen this draft. I am equally grateful to Leigh Balon, Michael Gaynor, Carie Muscatello, and Erin Raftery for their creativity, talent, and collaboration. Figures 1 and 2 are from the Brookings publication, ‘Generative AI, the American worker and the future of work’ by Molly Kinder, Mark Muro, Xavier de Souza Briggs and Sifan Liu.
Finally, I would like to acknowledge my grandparents, Mary and Coleman Connolly, and my parents, Mary Therese and Drew Kinder, for gifting me not only the opportunities that made this work possible, but also the values that shaped it.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).
Commentary
The unexpected visionary: Pope Francis on AI, humanity, and the future of work
April 29, 2025