... this should not just be something where Silicon Valley decides our future, but if we are moving into a world of [artificial general intelligence] in some distant future ... we need moral voices, we need ethical voices. And it’s not just the Catholic Church. It could be other faith leaders, other ethicists, social scientists. This is such a deep question.
Molly Kinder
Molly Kinder, fellow in Brookings Metro, reflects on the moral and ethical dimensions of artificial intelligence, work, and workers, as she lays out in her recent paper, “The unexpected visionary: Pope Francis on AI, humanity, and the future of work.” Kinder addresses the late Pope Francis’ leadership on this issue and looks ahead to how Pope Leo XIV will continue the Church’s attention to this fundamental challenge.
Transcript
[music]
DEWS: Hi, I’m Fred Dews and you’re listening to The Current, part of the Brookings Podcast Network found online at Brookings dot edu slash podcasts along with our other public policy shows including the recently launched Metro Blueprint where scholars and guests discuss ideas and actions to create more prosperous, just and resilient communities in America.
One of those scholars is Molly Kinder, a fellow in Brookings Metro who will be the host of an episode of Metro Blueprint in the coming weeks. But she joins me now in the studio to talk about her research on artificial intelligence and its impact on work and workers, and in particular about her recent paper, “The unexpected visionary: Pope Francis on AI, humanity, and the future of work,” a link to which you can find in the show notes of this episode. Molly will also explain why the new pope, Leo XIV, chose his pontifical name and what it has to do with workers and AI.
Molly, welcome to The Current.
KINDER: Thanks, Fred. Thanks for having me.
DEWS: Your work sits at the intersection of a really important public policy issue— technology—and also faith, which your article is about. Before we get to that, I want to tell listeners that in March, a little more than a month before Pope Francis died, you delivered remarks at a workshop at the Vatican on artificial intelligence, justice, democracy, and ethics. First, can you share what was it like to be at the Vatican from a personal level?
[1:30]
KINDER: Oh, Fred, that’s a great question. You know, I have to say being at the Vatican this spring was one of the most meaningful professional experiences I’ve had for a few reasons. While I was there, Pope Francis was in the hospital, and it was really moving. For three nights I stayed at the Vatican guest house, which is where Pope Francis chose to live. It’s right in the backyard of St. Peter’s Basilica, very sort of humble, lovely spot.
And, you know, as I stayed in a guest room on the floor above where Pope Francis lived, he was in the hospital. And in St. Peter’s Square, thousands and thousands of people were gathering to pray for him. And it was really moving to see how this leader of the Catholic Church means so much to billions of people around the world. So it was just a very moving time.
And then on a personal level, to be invited to lead the workshop session in the Vatican gardens, around this topic that I think about all the time through my job at Brookings, AI and work, was such an honor. I went to Catholic nursery school, grammar school, high school, college, and I did the Jesuit Volunteer Corps. So, you know, my upbringing was really all about Catholic social teaching. My grandparents were uneducated Irish immigrants who came to Chicago. And for them, their faith and family meant everything. And so as I started my remarks at the Vatican, I referenced my grandparents and how everything I learned about the dignity of work came from them. And it was just an honor to have that background and that sort of personal experience and family history and to feel I had this moment to share my research and my insights at that level.
DEWS: Listeners should know that in the paper, there is a bit of Molly’s family history, which I personally, as a genealogist and family historian, love. I, Molly, am also the product of a Jesuit education in high school—
KINDER: —did not know that—
DEWS: —and college.
KINDER: Wow, I did not know that, Fred. Well, this is going to be a great conversation then.
DEWS: So you’ll also be back at the Vatican in the fall as I understand it.
[3:34]
KINDER: Yes, I’ve been invited back to another workshop. What’s amazing about the Vatican is they have these pontifical academies, one around the sciences, and the one I’m involved in around the social sciences. And it’s this very extraordinary group of world experts. They don’t have to be religious in any way, shape, or form. They are there to provide the Vatican with the best state of knowledge in the world. And the Vatican, with the Pope’s leadership, chooses topics of interest. And the mechanism is, for two days they convene members of this pontifical academy and then chosen experts from around the world to meet in this beautiful palace in the Vatican Gardens behind St. Peter’s Basilica to deliberate on these important topics. And I’m going back in October for a similar discussion really centered on AI, humanity, and work.
DEWS: That’s amazing. So, Molly, to the paper, why do you call Pope Francis “an unexpected visionary”?
[4:28]
KINDER: You know, I said the word “unexpected” because when I would tell people that I was going to the Vatican to talk about AI and that Pope Francis is one of the greatest minds on AI, people were always surprised. And I think it was because the idea that an 88-year-old leader of a 2,000-year-old church could be so forward-looking on a technology that’s defining our future sounds counterintuitive. People consistently said to me, really, the Vatican? Really, the Pope?
So I wanted to catch the reader’s attention by saying that might seem counterintuitive. And yet, actually, AI is disrupting so much about what it means to be human. What does it mean to be in relationship? Who are we? What will the future of work be? The Catholic Church, and I would argue religion overall, offers a lot of time-tested wisdom on some of these deep, moral, philosophical, and life questions. And Pope Francis gripped this topic. I mean, the Vatican has been incredibly forward-leaning. Nicholas Thompson, the CEO of The Atlantic, puts out these wonderful “Most Interesting Thing in Tech” videos every day. Twice, he has made his most interesting thing in tech a brilliant Vatican essay on AI that came out earlier this year, “Antiqua et Nova.” And Nicholas Thompson is not Catholic. He chose it based purely on the brilliance, the wisdom, the insight. The Vatican and Pope Francis are really ahead of their time.
DEWS: The Pope, no matter who he may be, is the leader of the Roman Catholic Church, but he’s also in this instance speaking to the world.
[6:10]
KINDER: Yes, and you know, part of the reason why I wrote this for Brookings and not a Catholic magazine is that it doesn’t matter your faith tradition or even whether you have faith. I can think of no other world leader that has the stature of a Catholic pope when it comes to basic morality. We have political leaders, all sorts of political leaders. And there are a lot of different faith traditions, but they don’t have a figurehead as prominent as the Pope.
So I think in a time like this, people have a lot of big feelings about AI, as they should. They’re both excited and worried. This is something that feels threatening to a lot of aspects of our lives. We want to bring an ethical and moral lens, and the Pope has that stature kind of singularly. You know, there are other religions, but we probably can’t name who the heads of those institutions are. I think the Pope stands for a broader desire to see some kind of human-centered moral and ethical voice. And the Pope has really risen to it. So I think the message matters whether you’re Catholic or not. I think there’s wisdom.
DEWS: Well, so as we get into some of the actual impacts for workers and work of AI, let’s start with this question. Why is AI a moral issue?
[7:31]
KINDER: Oh, wow, that’s a great question. Well, let me take it from where I sit. So I’m very fortunate at Brookings that I lead a multi-year project exploring how AI, but specifically generative AI and agentic AI, the sort of newer forms of AI that have caught all of our attention, is impacting work and workers, and what do we do about that? So my expertise at Brookings, and I think about this, I’m not exaggerating, night and day, is how these tools are impacting work and workers.
You know, there is a moral dimension to this. And something in which I have found a lot of wisdom in the writings of the Church is that even from the opening pages of the Bible, the language is that Adam and Eve were put on earth to till the land. That was the opening premise of the Book of Genesis. And, you know, in the New Testament, Jesus is referred to as a carpenter. So even in an ancient text like the Bible, the notion of work is core. And there’s a religious sense that to work is to serve God, is to use your talents in service of your family, your community. There’s almost a spiritual dimension to that.
And I think what the Catholic Church has really done, like many other moral voices, is frame that there’s a moral way to construct work, to treat workers with dignity, to pay them fairly. There’s a morality in some of these choices. Of course, when you think of AI, it goes much broader than that. You know, what are our relationships? What are the ethics of the way we’re using AI? I have three children, and I think about AI companions. I mean, there are so many moral and ethical questions steeped in what is basically a digital species that we’re going to interact with almost like a human. So, huge moral questions. My angle has really been the one on work, and I think there’s a real religious and moral dimension there.
DEWS: Well, let’s now dive into AI’s impact on work and workers, very specifically. I’ll start with a quote from Dario Amodei. He’s the CEO of Anthropic, which is a world-leading company on artificial intelligence. He said recently that AI could eliminate half of entry-level white-collar jobs. And you and other Brookings scholars have noted not only that, but that AI will challenge other kinds of jobs and workers in the near future. So can you talk about AI’s impact on jobs today?
[9:53]
KINDER: Sure. My framing, and this is how I framed it in my remarks at the Vatican, is that when it comes to work and workers, AI brings both wonder and worry. And I think both are important things for us to contend with. On the wonder side, I am a power user of AI. I find it an extraordinary tool to extend our human capabilities and our productivity. The kinds of things I’m able to do by using AI are so much greater than if I didn’t. And I think there are, you know, very exciting aspects to imagine how AI might be a coach to enable us to do greater things, unlock greater creativity or productivity. So that is the wonder side.
And Steve Jobs has this great framing from the ‘80s with the computer where he talked about computers being bicycles of the mind. And what he meant by that was there was a Nature study that graphed out humans versus animals and how efficient they were at movement and locomotion, and humans were terrible compared to animals, except a human on a bicycle. If you put a human on a bicycle, they become very efficient at movement. I liken AI to that today. The real upside, the possibilities of AI at work, is it makes us so much better.
But of course, there’s a huge amount of worry, and people are feeling it. There’s a lot of anxiety right now, in part because we stand potentially at the precipice of something akin to a cognitive industrial revolution. These technologies are not there today, but they hold the potential in the future to be extraordinarily good at a lot of things humans have long excelled at, that frankly just a few years ago we didn’t even think of as things computers could do. Knowledge work, diagnosing diseases, creativity, being persuasive, even coding.
You know, the range of things that AI is now capable of is immense, and there are real risks inherent in that, both to displace workers and potentially to de-skill jobs. And the future, we do not know. There are a lot of decisions ahead. Nothing is inevitable. I think there are choices that will be made. We don’t know how good this technology will get, or the pace of it. There is both wonder and worry. And I think for work, the task at hand is to make sure workers can really benefit from the upsides of this technology and that we work together to avoid harms.
DEWS: You’ve also talked about the risk to a certain category of, say, entry-level jobs that new college graduates or new people getting certificates in trades might face. They may not be able to get those jobs, and then, being young, what experience are they going to get?
[12:33]
KINDER: Fred, I’m really glad you raised that because this is an issue that I’ve been thinking about and writing about for the past year and just in the last week, we’ve seen an explosion of interest in this topic. You know, about a year ago, Fred—I worked with you on this—I did a case study about Hollywood writers, and I wanted to start with the Hollywood writers because it was so counterintuitive to think that the face of resistance to this technology was not any of the sort of blue-collar job strikes that happened last year. It was NYU-educated paragons of creativity, not the folks you would normally think of at risk of automation and technology. And yet they were.
And that comports with analysis my colleagues Mark Muro and Xav Briggs and I have done at Brookings. We got this incredible treasure trove from OpenAI. It was a data set looking at every task, every job across the entire economy, and what was its exposure to ChatGPT-4 technology. And what we found in there was knowledge jobs, college-educated jobs, some back-office clerical and customer service, more high school-educated jobs.
And in talking with workers out in the real world, starting with the Hollywood writers, but then I’ve talked to so many other folks in different industries, I kept hearing this concern about the potential for AI to substitute for the kind of jobs you have when you start your white-collar career. The Hollywood writers were saying, we normally have a writers’ room in television with 10 to 12 writers, starting with your very first writing job up to your showrunner. Well, if ChatGPT can spit out a draft of a script, maybe you just need a seasoned writer or showrunner to polish that draft. Why do you need the young person just starting out? And I saw versions of that both in the data and in conversations with law and finance and graphic design. And, you know, think about somebody at the State Department: in their first job they’re not negotiating the hostage release, they’re probably taking notes in a meeting. And the kinds of things AI is good at are those more repetitive, less interpersonal, lower-stakes tasks.
And I started to see this and I wrote an essay in November for Bloomberg really calling attention to how this could destroy the career ladder. And just in the last few weeks, this has really taken off. I was quoted in an FT column and a Washington Post column, and then Dario Amodei, the CEO of Anthropic, dropped this bomb this week saying that upwards of maybe 50% of entry-level jobs in white-collar roles, like, you know, consulting and finance and law and tech, could dry up. And one of my favorite tech reporters, Kevin Roose, wrote about this in the New York Times a few days ago and quoted me.
I think my two cents on this is that right now unemployment rates for college graduates, recent college graduates, are elevated. My sense from the data is that tech unemployment for young people is driving that. That could be a warning sign because coding jobs and tech jobs are the bellwethers. They were the first movers to really embrace this technology. I think we should all watch with caution to see if the result is that AI tools complement a mid-level coder or a software engineer, and they don’t need to hire young people anymore. If that’s the case, that’s really worrying.
DEWS: The essay on the Hollywood writers and AI is beautiful. It’s illustrated with photos and audio and your terrific writing. You can find that on our website, Brookings dot edu. But also, listeners should know, Molly, that they can find a lot of your thinking about these issues on your LinkedIn page, so I encourage listeners to go there and subscribe to your work. That’s where I’ve learned a lot.
KINDER: Thanks, Fred. Thank you.
DEWS: Let’s talk about some ideas to address the AI challenge to work and workers. In your paper, you identify five priorities to help ensure that AI serves workers and not the other way around. Could you address just two of them? One, something simple that workers can do or think about. And two, what is the role of public policy? What should policymakers be doing?
[16:38]
KINDER: Great, that’s a great question, Fred. Yes, I have, I think, five very concrete ideas for targeting different sectors. I’m going to say two things. One is I really hope that we are going to rethink jobs with a genius in your pocket. That’s this framing I have. Dario Amodei, who’s the CEO of Anthropic, has this framing that it’s sort of a national security spin where he says, AI is going to enable a country of geniuses in a data center. And I said, you know, why don’t we flip that on its head? What does it look like to live in a country full of workers with a genius and a coach in their pocket?
And historically, lower-wage workers have been left out of the gains of technology. It’s really accrued to knowledge workers, college-educated workers. And in the past few decades, we’ve seen a real widening of inequality, hollowing out the middle and really boosting knowledge workers. And this is something where I gleaned some inspiration from Pope Francis’ writings saying that the real litmus test of success for AI should be whether everyone benefits, and especially the least among us. That’s a very Catholic framing.
And it really prompted me to think a little bit deeper about, well, what would that even look like? And it’s actually something I’m going to be doing some hands-on work on through Brookings, partnering with some worker groups to actually try to model this and experiment and demonstrate this. Could you take a home care worker, which is one of the lowest-paid jobs in the economy, growing extraordinarily fast, and has almost no cognitive components—if you break apart what a home care worker does, you know, they’re doing very manual things. Well, what about this genius in your pocket? What if AI enables somebody in a lower-paid role to do much more value-added work with that coaching and that genius in their pocket? So that’s something I just want to put out to listeners.
I’m meeting with the World Bank tomorrow, I think, and you think of developing countries, you have vast numbers of young people who need to find gainful employment. How are we thinking about that in exciting ways?
The second thing is just from a reader or a listener perspective. I’m speaking with the New York Times about potentially writing an essay on this question of, well, what should young people do if early career jobs might be negatively impacted? And I, frankly, I talk to folks at Brookings. I try to listen to people and understand from different perspectives, how are young people thinking? How are they feeling? There are a lot of mixed feelings among young people. And you’d expect that; the surveys show a lot of frustration. Here they are just starting out their working lives, and they worry that this technology is looming and could roll up opportunity for them. You know, they’re worried about climate, worried about its impact on the creative community.
I will say, I don’t really see a future where young people en masse are able to thrive in this world without embracing the technology. Figuring out what is your passion, what is your discipline, what is your craft, and figuring out how to apply these tools to do it even better. I do think that’s actually an important message for listeners: our world is going to change a lot, and figuring out how to deploy this technology responsibly and achieve what it is you want to achieve through it, I think, is really important.
[20:02]
Last, public policy is extraordinarily important. I was very gratified to see this week that Barack Obama took to social media to post that New York Times column that I just referred to, as well as some of the Axios coverage about job loss. On the right, Steve Bannon has suddenly been talking about these issues. You’re seeing left and right political leaders starting to really voice their concern and saying we really need to do something. There’s a huge amount of scope for public policy. I recently wrote a piece on LinkedIn saying that I’m more worried about America’s lack of preparation, our weak institutions, and our weak setup to help workers transition than I am about the technology itself. And I’m happy to get into some of those areas, but we have very low-hanging fruit to shore up our institutions and policies to help workers navigate this.
DEWS: One very specific policy issue that is before us now is that the reconciliation bill that’s under consideration in the U.S. Senate has a provision that would put a moratorium or prohibit state-level regulation of AI for 10 years. What would that mean in the context of this discussion?
[21:11]
KINDER: Fred, I’m really glad you mentioned that. I think that’s a devastating proposal. I think there’s a very unfortunate narrative right now in Washington that paints a zero-sum game where on one hand you have the goal of winning the AI race and beating China and these sort of geopolitical goals, which I don’t mean to suggest are not important, and on the other, anything that could possibly slow that down is treated as just a hindrance that’s going to stop America from winning this race. And I think that’s much too black and white.
I think America, more than any other country, is leading on AI. It is our invention. We are setting the pace. We must do this responsibly. We cannot let this frenetic race to achieve AGI and dominate the world’s AI stop us from finding meaningful, reasonable ways to put in some safeguards. And that includes for workers. Right now, it’s very unlikely we’ll see much regulation out of Washington. But states are taking a lot of meaningful steps, whether it is to protect children or protect workers.
The next 10 years is a critical window. We should learn the lessons from social media. I’ve got three children. My eldest is about to start middle school. I’ve seen the impact of social media on the generation ahead of his and the total lack of any safeguards. We cannot do that on AI.
DEWS: Molly, as we wrap up this conversation, let’s return to the Holy See. As the world saw, Pope Francis passed away in April. His successor, formerly Cardinal Robert Prevost of Chicago, becomes Pope Leo XIV. Two questions there. What’s the significance of the new pontiff’s name? And what, if anything, does that have to do with artificial intelligence?
[22:55]
KINDER: Well, it turns out it has everything to do with artificial intelligence. And the Pope clarified a few days after he was elected that he chose Leo XIV because of the last Leo. Leo XIII was the pope during the second industrial revolution.
DEWS: Around the end of the 19th century.
KINDER: Yes, correct. And what’s so famous about Pope Leo XIII is that he wrote an extremely influential papal encyclical called “Rerum Novarum.” And “Rerum Novarum” was really important because, you know, imagine all the disruption of the Industrial Revolution, widespread child labor and terrible working conditions. And there was, at the time, this kind of sense of capital versus workers. And in “Rerum Novarum,” Pope Leo XIII, kind of for the first time, squarely put the Church on the side of workers. And this encyclical, which talked about the moral ways in which workers should be treated, became the foundation for what we now think of as Catholic social teaching.
And lots of popes, on anniversaries of “Rerum Novarum’s” publication, have furthered some of this Church thinking around workers; Pope John Paul II, for example, put out a really wonderful encyclical a hundred years after “Rerum Novarum.” And so it’s very significant that this new pope likened that time period to today’s AI moment and said he chose that name because of the importance of labor and human dignity.
And I see that as a very hopeful sign, a very clear sign, that Pope Leo XIV and the Vatican are going to continue this legacy of Pope Francis on the sort of humanity and moral core of AI, but deepen the Vatican’s focus specifically on work and workers, which I’m really excited about.
And, you know, I’ve been thinking about it, and what I think is so striking about Pope Leo is he connects Chicago to the poorest parts of Peru. And I think about these deep questions of who is going to benefit from this technology? Who’s going to see those gains? How do we avoid harm? You know, the vision that we should be gunning for is a world where young people growing up in Peru can see this as a tool that’s going to give them a brighter future. A better education, better work opportunities, better health care. You know, and that in Chicago we see a broadening of opportunity, not a widening of inequality. So I was so excited to see that decision, and in fact, when the Vatican invited me shortly after to this October workshop, they cited the naming as a signal that this will be a really important topic.
DEWS: Well, let’s end this discussion, Molly. I wish we didn’t have to, but we do, so let’s end it with a quote from your paper on Pope Francis as an unexpected visionary. I quote, “Catholic teaching pushes us to ask a deeper question: What are the risks if AI deprives humans of the work that makes us human?” Can you unpack that?
[25:58]
KINDER: So that was something I wrote when I was reflecting on what I see as a stark tension between the express goals of the major AI labs in Silicon Valley, so think OpenAI and Anthropic. Their express goal is to create, quote, an artificial general intelligence, an AGI, where success is defined as when it’s better than humans at work. That’s the goal.
And there is a stark tension between that goal, an AGI that surpasses humans at work, that’s the definition of it, and what has been this longstanding Catholic tenet on the dignity of work, this idea that work is so much more than a paycheck. It provides meaning and purpose, daily discipline. It’s the way that we contribute to our community and provide for our families. For a lot of us, and I say this about myself, it’s my vocation. I mean, I’m deeply, deeply passionate about my work. I don’t even watch television or movies. I’m just thinking about and reading about these topics all the time.
And what worries me is that there’s this utopian narrative in Silicon Valley that once we get to this radical abundance that AGI will unlock, humans won’t need to work, we’ll have no material needs, we’ll have—now the new term is not just UBI but universal high basic income—we’ll be in this world of plenty, and that’s perfect.
But, you know, I think the Catholic Church, with all this teaching about the dignity of work, reveals what is lost in that vision that is so essential. And it’s not just the Catholic Church. I think this is common sense to a lot of people. Even if everyone gets a universal basic or high basic income check, where is our purpose going to come from? You know, I’m struck that Richard Reeves from Brookings has written so eloquently about this crisis of masculinity in this country. And part of that comes from the loss of meaningful work and the loss of the provider role in your family. And here we are talking about this radical abundance agenda where people lose a sense of purpose.
And, you know, that is the kind of existential moral question that I think the Church has so much wisdom to offer. And there was a line that I pulled from, it was Pope John Paul II’s encyclical from a few decades ago reflecting on the meaning of work and saying that we become more human through work. And so the provocative question I was raising is what would happen if AI deprives us not just of work, but the work that makes us human.
And if that is going to happen, which, you know, I think society should weigh in on, this should not just be something where Silicon Valley decides our future. But if we are moving into a world of AGI in some distant future, we need moral voices. We need ethical voices. And it’s not just the Catholic Church. It could be other faith leaders, other ethicists, social scientists. This is such a deep question. And so this is exactly the kind of question I hope Pope Leo XIV is able to put into our conversation, so that when we’re thinking about AI, it’s not just a relentless race to the finish line of winning, but really one that humanity benefits from.
[music]
DEWS: Well, the paper by Molly Kinder is on Pope Francis, the unexpected AI visionary. You can find it on our website. Molly, thank you so much for this fantastic conversation and for your very important research.
KINDER: Thanks, Fred, and thanks for your interest and your terrific questions.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).
The Current Podcast: “The moral dimension of AI for work and workers,” June 5, 2025