
Are students and workers ready for AI?

Fred Dews, Senior Multimedia Project Manager, Office of Communications
Molly Kinder, and Rebecca Winthrop

December 5, 2025


Introduction to GPT-5 is seen on OpenAI’s website on the screen of a smartphone. (Tada Images)

Artificial intelligence (AI) is heralding a profound shift in how we learn, work, and live. To gain insight into how AI is reshaping the American workforce and economy, two Brookings experts join this episode of The Current. First, Molly Kinder, senior fellow in Brookings Metro, examines how AI is impacting the American workforce today; then Senior Fellow Rebecca Winthrop, director of the Center for Universal Education at Brookings, looks at how we can prepare our students to thrive in the future workforce.

Learn more from Molly Kinder and Rebecca Winthrop on their LinkedIn channels.

Transcript

DEWS: What about people who are currently in the workforce? What can they do to adapt to ongoing developments in artificial intelligence?

KINDER: I don’t think this is just up to individuals. I do think policymakers and our institutions, they really need to be leading to make sure it’s not just me, individual worker in this workplace, you know, this is all up to me to kind of navigate this potentially transformative change.

[music]

DEWS: Hi, I’m Fred Dews, and this is The Current, part of the Brookings Podcast Network. AI, or artificial intelligence, is heralding a profound shift in how we work, learn, and live. To help understand some of the shifts that AI is causing in our workforce and economy, I’m having two conversations on this episode of The Current. First, I’ll be speaking with Senior Fellow Molly Kinder of Brookings Metro on how AI is impacting work and workers. And then I’ll talk with Senior Fellow Rebecca Winthrop, the director of the Center for Universal Education at Brookings, about how to better prepare students to thrive in the future workforce.

Molly, welcome back to The Current.

KINDER: Thanks for having me, Fred.

DEWS: So you’ve co-authored new research with Martha Gimbel, Joshua Kendall, and Maddie Lee, who are at the Budget Lab at Yale, on generative AI’s impact on the labor market. It’s titled “New data show no AI jobs apocalypse– for now,” and it was published in October. Can you give a top line of your findings?

[1:22]

KINDER: So our top line is, if you look at the period of time since ChatGPT was launched, which was actually three years ago this past month, and you zoom out and look at the labor market as a whole, are we really seeing disruption yet? I’m going to emphasize yet. Our answer is actually “no.” We are not yet seeing a discernible impact at a really macro scale.

Now, that doesn’t mean there aren’t individual jobs or individual people that have been impacted. We’re really looking at is the house on fire? And you might expect it to be given the headlines in the newspaper. And our answer, at least for now, is a reassuring one.

There were some exceptions. We did see greater disruption for the youngest workers entering the job market. It’s not yet clear if that’s from ChatGPT or whether it predates its launch. But that is an area that we’re keeping a very close eye on.

DEWS: Can you unpack that a little bit more? Why the youngest workers entering the job market, which dovetails with the conversation I’m having with Rebecca Winthrop about educating young people to enter the labor market. So they seem to be perhaps the most exposed to AI.

[2:31]

KINDER: Well, first I would say the data is noisy. We know that young people, you know, 25 and under, particularly those coming out of college, are facing very high unemployment compared to recent years. It’s a terrible job market to be someone coming out of college.

Lots of factors are probably contributing to that. Interest rates were raised a few years ago. There’s an uncertain macro environment. There was some overhiring in tech. AI, though, is likely playing some part, unclear yet exactly what.

And I think the reason why we think AI could be contributing is AI is getting pretty darn good at doing the kind of tasks you do at a computer when you first start in a lot of white collar jobs. So desk research, synthesizing, analysis, drafting. These are the kinds of things a lot of white collar employees start their careers doing. And increasingly they’re becoming more susceptible to AI.

I’m particularly more concerned about what this could look like in the coming years than today as AI agents get better and can do longer sequences of tasks. But it is something I think we have to start reckoning with, which is, is AI going to radically reshape what entry level work early in the professional career ladder looks like?

DEWS: Yeah. I wanted to follow up on that last point about the future, because I think it relates to the last piece of the title of your research, “for now.” Can you unpack that a little bit?

[3:58]

KINDER: Sure. We were very clear that what we were trying to do was a data-driven, rigorous temperature check of how the labor market has been impacted just in this period of time since ChatGPT was launched. And, interestingly, we compared it to those early years after the internet and the computer, similar multipurpose, general-purpose technologies. And the headline there is we’re really consistent with the pace of those earlier periods.

Now, it is important to note that just because our findings feel a little reassuring now, that does not mean that this technology won’t have potentially dramatic impacts on the labor market. Three years is not a lot of time. Even though the pace of change feels fast, every day there seems to be some new technological breakthrough, there’s often a fairly substantial lag in how long it takes for workplaces to really change as a result. So we are very clear that in the first just shy of three years that we looked at since ChatGPT’s release, we are not seeing evidence of a jobs apocalypse. We are not forecasting the future.

Some of the other research I’ve done at Brookings with colleagues Mark Muro and Xav Briggs suggests a lot of white collar jobs could potentially be disrupted. It just isn’t going to happen overnight.

DEWS: Well, coming up on this episode, I do talk with Rebecca Winthrop, as mentioned, about how parents and educators can prepare students for a world of AI. But what about people who are currently in the workforce, maybe those junior career people you talked about, or even people who are later on in their career, like myself, what can they do to adapt to ongoing developments in artificial intelligence?

[5:33]

KINDER: Well, I think the first thing that is important to note is it’s likely that most white collar workplaces are going to change substantially because of this technology. It’s not all doom and gloom. There are lots of ways that AI makes us more productive, allows us to be more creative and brainstorm and do work better and faster.

But some folks who have spent a long time improving their skillset and investing in expertise may find that suddenly AI is quite capable at some of the things they’re doing.

I think the most important thing people can do today is to get familiar with this technology, get good at it. There’s really very few jobs that are in front of a computer that can’t take advantage of this technology. So I think it’s less scary once you use it. I know I use it all the time, even though my job is also to worry about the macro effects. It is also a phenomenal tool.

DEWS: Well Molly, every time I talk to you about AI on this podcast, like we had a conversation in the spring about your visit to the Vatican looking at AI and the moral issues, it always feels very personal. How do you use AI in your own work?

[6:41]

KINDER: You know, I use AI all the time in my personal life as a mother and as someone who cooks in my house. I had a post the other day on LinkedIn about how I started a fairy club for my daughter. And it taught me how much teachers can really benefit from this.

And then in my work, I use AI all the time as a thought partner, a deep research partner, a sort of a force multiplier. I find this topic of how AI is impacting work and workers fascinating. I’m passionate about my job. There are so many questions I’m wrestling with.

And, importantly, my job is not just to study how it’s impacting workers, it’s to come up with really brilliant ideas for what we can do to make sure workers benefit and they avoid harm.

AI for me has been an incredible partner, both to help me accelerate my research. It’s almost like I have a bigger team because I’m able to ask deeper questions and go off and sort of have deep research, you know, noodle on a question that I’m really wrestling with.

And then I’ve actually found it to be a terrific brainstorming partner as I’m coming up with what I think are quite novel solutions that don’t yet exist. I like to talk out loud and sort of use Claude or ChatGPT as a thought partner.

It certainly doesn’t replace anyone at Brookings, and it doesn’t replace what I bring to the table. I find it very complementary. I find it helps me do more work, more thoughtful work. And I would say probably more creative work.

DEWS: I invite listeners as always to check out your research on the Brookings website, but also to visit you on LinkedIn, where I know that you spend a lot of time writing about and thinking about AI and its implications. So LinkedIn, Molly Kinder.

To get to that question of what policy can be implemented, are there specific steps that policymakers can take to either mitigate any of the negative impacts or to facilitate some of the beneficial impacts of AI on work and workers, to help Americans better navigate the intersection of AI and work?

[8:39]

KINDER: I mean, I think there are so many areas where policymakers can really lead to help workers in America navigate this change. If you compare America to Europe, we have far fewer institutional mechanisms that are going to help our workers navigate these changes. We spend a tiny fraction of what other countries spend on workforce training. We spend a lot as a country on higher education. But once you come out of school, the resources available to help you navigate a career change or study later are very, very small. That’s obvious low-hanging fruit where we need more resources and, frankly, more ingenuity to think about what types of resources and training workers will need.

We also have a very weak safety net. The American public is wary. About 50% of people polled by Pew felt more negative than positive about AI. It can feel sometimes like Russian roulette: is my occupation or livelihood going to be one that I’m going to wake up one day and some AI breakthrough is going to make vulnerable? And I think that’s magnified in this country because we don’t really have much of a safety net at all if you happen to find yourself in that situation.

And then I’m really excited to be exploring some novel ideas, both with governors and with folks in Washington, on how we think specifically about AI. Something I’ve been developing new thinking on is, how might we think about a new way of training young talent when AI can do more of the automatable work? I have a piece I’ve been working on with the New York Times for several months that’s going to flesh out a bold new idea for this.

So there’s lots of ways I think policymakers can be really meeting workers, recognizing there are both risks and opportunities. And importantly, making sure workers don’t feel they’re left alone.

And the last thing I would say is, right now, not a lot of workers in America feel they have agency in this. It feels like it’s happening to them. And that’s in part because, you know, we’re mostly just hearing headlines, and this is all happening in Silicon Valley, and these things are coming and we hear all these predictions. But there aren’t really good institutional mechanisms in America for individual employees or workers to feel that they have some voice or some say in this.

And, you know, again, when we look at Europe, there are countries like Germany where every workplace has something called a Works Council. These are mechanisms by which workers and management together try to come up with a positive way of deploying this technology.

I’d love to see the United States figure out ways to really put workers in the driver’s seat and give them more of a say in this future.

DEWS: Molly, it’s fascinating to talk to you about this topic all the time. Looking forward to the next time, and thanks for sharing your time and expertise with me today.

KINDER: Great, thanks Fred. I really appreciate it.

[11:34]

DEWS: And now Rebecca Winthrop, director of the Center for Universal Education at Brookings and co-author with Jenny Anderson of The Disengaged Teen: Helping Kids Learn Better, Feel Better, and Live Better.

Rebecca, welcome to The Current for the first time!

WINTHROP: Great to be here.

DEWS: So we just heard from Molly Kinder about the jobs that are being impacted by artificial intelligence. Is our education system, the pipeline, up to the task of preparing our young people for an AI future?

[12:00]

WINTHROP: Well, whatever people are talking about in the boardroom among companies around the talent pipeline and the workforce they need, they have to be paying attention to what’s happening in the classroom. Because whatever we’ve got going in the classroom and at home, where kids learn a lot, is what’s going to show up in the workforce.

And I would say that the answer currently is “no,” kids are not being prepared. And there are a couple of ways to think about this. One is that to use AI well and to be a really great, highly sought-after worker, you have to be able to think, you have to be able to manage AI really well, and you have to be in charge of it to carry out your objectives in the workplace.

Now, that takes a lot of skill, and we need kids to be able to read well, even in an age of AI, because reading and writing is actually a critical thinking process. It’s how kids develop critical thinking and analysis. And that is actually the skill we need young people to develop when they’re in the classroom, or outside of the classroom where they’re learning all the time.

And at the moment we have a pretty big literacy crisis. We have a pretty big disengagement crisis. And the vast majority of students are in what my coauthor and I call “passenger mode,” which is basically they’re coasting, doing the bare minimum.

Now, AI comes along and can do their homework for them, can do their math problem sets for them. I’m worried that if we don’t really shift up what we do in education, it could push a lot more kids into passenger mode. And when you’re in passenger mode, you’re not having the learning experiences you need that are going to make you a really good employee in the future.

DEWS: But it’s, it’s not just an instrumental approach, right? We’re not teaching kids how to deal with, handle, and engage with AI only so they can be better workers. We want to teach kids other skills that help them be better people, better citizens.

[13:58]

WINTHROP: Absolutely. So education, if you think about it, and we learned this in COVID, does a lot of things. Education helps kids master academic content and that’s what most people think about when they think of schools and education.

But it does so much else. It is the one institution in our country, virtually in every community, where young people have to get to know and work with people who are not like themselves, not in their immediate family or immediate neighborhood. That ability to learn that other people are different, to learn to work together, to collaborate, to try to communicate your ideas, to try to be understood. All of those skills that education builds, whether it’s on the playground or in discussion in the classroom, lead into the workforce as the competencies that are much sought after by employers. People that can figure things out, can work with other people, can manage conflict, can be creative. Schools are really training grounds for that.

DEWS: And you wrote, and I’ll quote, “we cannot wait until AI is part of students’ everyday lives to create norms that will lead to healthy and productive use of this technology.” What did you mean by that?

[15:04]

WINTHROP: So one of the things that we are worried about at Brookings, and through our Brookings Global Task Force on AI in Education work, is making the same mistakes that we’ve made with social media when it comes to kids’ learning and development. When social media rolled out, educators really weren’t at the table, parents weren’t at the table, nor were coaches and other people who work with children. And we knew at the time that social media was rolling out that social comparison for adolescents is a really bad thing and can and will harm their wellbeing. We already knew that.

So we know a lot about children’s learning and development. And so now that AI is being rolled out around the world, we need to be at the forefront. We need to be at the table and say, how can we make sure that AI is used for good? That it will extend not replace learning. That it will spur better interactions with people, not actually de-socialize young people. So that’s what I mean.

I think of it like this. Imagine you and I, Fred, are a hundred years ago, perhaps, and we’re in the, you know, horse and buggy era. And then we wake up and one day there’s an automobile. That’s where we’re at. It took a long time to make sure, first of all, that, you know, seven-year-olds aren’t driving the automobile. There’s speed limits, there’s airbags, there’s seat belts, there’s driving licenses, there’s age limits, et cetera.

So we’re in an era where, you know, AI is a technology that we don’t want to become embedded in kids’ lives, and I’m really focused on kids, and have the norms ossify. What we need is to put sufficient guardrails on it. We need the AI equivalent of the seat belts, airbags, driver’s licenses, and speed limits that cars got.

DEWS: If you were to offer some advice to, say, high school students or even college students and their parents about the intersection of learning and AI, what would you say?

[17:00]

WINTHROP: Well, one thing that I think is really important is to note that kids are using AI whether we like it or not. So when I talk to families and parents and school leaders, I note that, one, 90% of teenagers in the U.S. use AI in their personal life. Two, it’s almost impossible to get away from. Generative AI is software. You don’t have to download an app, you don’t have to buy a device. It’s embedded in everything.

I had a high school student tell me recently, yeah, my school banned ChatGPT. But no worries, we use DeepSeek, and actually I go on Snapchat and use my AI friend, because there are these AI companions. And what can my AI friend do? Yeah, it can talk to you, you can have a relationship with it, but it can also do your homework for you.

So kids are accessing it all over the place. And I would, you know, really tell families that they have to be wide awake to it and partner with their kids’ schools, because the risk is that AI replaces learning and de-socializes kids.

And kids’ brains develop the way they’re used. So if they are not practicing those social skills, they are not going to be able to be great teammates in the workforce in a couple years. And we already know that with AI companions, this idea of AI friends, a third of teens in the U.S. say that they prefer talking to AI companions as much as or more than other human beings.

DEWS: Well, now shifting to the policymaker side, what can policymakers do, if anything, to create the systems, rules, or frameworks that help students and their parents navigate today’s AI world and best prepare them to thrive in the AI present?

[18:42]

WINTHROP: So I think one of the main things that Congress can do is really set up safeguards around children’s AI use, particularly with AI companions or AI friends. There’s a number of bills on the Hill at the moment. And anything that restricts young kids from using AI companions would help; Common Sense Media, for example, says no kids under 18 should be using them right now. And I would agree with that. Because again, it’s the equivalent of a car showing up in the horse and buggy era. They don’t have seat belts, they don’t have airbags, there’s no driver’s license, there’s no speed limits.

You know, we shouldn’t just let kids use it until we know it’s safe. At the moment, the reverse seems to be true: everybody go and use it and let’s see if it’s safe. I think we need to flip that. So anything that safeguards kids’ use, particularly around AI companions, is I think a really smart thing.

DEWS: Okay. Well, it’s super important work, and as a parent myself I’m glad that you and Molly are working on this. Thank you for your time and expertise today.

WINTHROP: My pleasure.

DEWS: You can learn more about all of the AI related research that Brookings scholars are doing on our website, Brookings dot edu.

[music]
