
Do AI’s risks outweigh the benefits for students and schools?

Rebecca Winthrop, Kara Swisher, and Rida Karim

Kara Swisher, Editor-at-large, New York Magazine; Contributor, CNN
Rida Karim, Communications Coordinator, National Student Board Member Association

January 20, 2026


  • Generative AI’s risks for students are overshadowing the benefits, but there are powerful ways to harness the technology in the classroom for students’ learning.
  • When kids replace effortful learning with generative AI to shortcut assignments and learning, it is bad for their cognitive development. Students need to make mistakes and deeply engage with content to become independent thinkers.
  • If school districts across the country agreed on criteria for purchase and procurement of this technology, tech companies would respond because they want kids using their products.
  • A few good examples of AI in schools include tools with vetted content that make the student experience more interactive, and tools that ease teachers’ administrative burdens.

Are we heading in the right direction with AI in education, or drifting into a “wild west” of privacy risks and lost learning? In this episode, Rebecca Winthrop, senior fellow and director of the Center for Universal Education at Brookings, sits down with tech reporter Kara Swisher to unpack the urgent findings from her task force’s new report, “A new direction for students in an AI world: Prosper, prepare, protect.” The episode also features highlights from the report’s launch event, where student co-author Rida Karim joins the conversation to discuss practical strategies for integrating AI without sacrificing critical thinking.

Transcript

WINTHROP: Your average chat bot is maybe about 70% accurate.

SWISHER: I always use the expression frequently wrong, but never in doubt. You know.

WINTHROP: They’re confident.

SWISHER: I was talking about an argument I had with one about the largest planet, and I was right and the chat bot was wrong. They said it was Mars. I said it was Jupiter. And they would double down on stupid. It was really interesting.

WINTHROP: So if you have a student who doesn’t know what the largest planet is,

SWISHER: correct

WINTHROP: that is the worry right there.

You are listening to The Current, part of the Brookings Podcast Network. I’m Rebecca Winthrop, director of the Center for Universal Education at Brookings, and for the last year my team has been running a “pre-mortem” on students and generative AI. And we have been asking the question, are we headed in the right direction? And if not, what do we do about it?

We convened a Global Task Force on AI and Education, consulting with experts, teachers, and families across 50 countries. The result is our new report, which has just rolled out. And to help me unpack these findings and to ask some hard questions as always, I’m turning to someone who has chronicled the digital revolution better than anyone else. She’s an influential tech journalist and host. Kara Swisher, welcome to The Current.

SWISHER: Thank you.

WINTHROP: I’m ready for your questions.

[1:16]

SWISHER: Okay, so why don’t you just talk about what was in the report from your perspective.

[1:20]

WINTHROP: So what was in the report was an assessment of what are the risks, what are the benefits, and a question of are we headed in the right direction?

And what we found was no. The risks are overshadowing the benefits. But it’s early days in generative AI and students’ learning and development. And so we can definitely bend the arc. There’s a lot we can do to pivot our direction and really lean into ways to harness generative AI in the classroom and for students’ learning that are very powerful.

[1:48]

SWISHER: So, explain the risk. You know, obviously you’d be called a “doomer” by tech people, right?

WINTHROP: Yes, yes. You’ve been called a “bummer.”

SWISHER: Whatever. They’ve called me worse than that. It’s a nice one.

[1:59]

WINTHROP: The risks are from what I call sort of “wild west” use of generative AI by students, which is often out of school, direct interface with chatbots, unsupervised, who knows what the content is.

I keep thinking of it as letting our kids wander around in a wild forest, and they might see a beautiful flower — might be great, they might have a great experience. They might eat a poisonous mushroom, become very ill, they might get eaten by a bear. Because they have no guide with them, and they have no training on what is in a forest.

And so from that experience, what we’re seeing is kids are replacing effortful learning with generative AI. So using generative AI to shortcut assignments, to shortcut learning. And that is really bad for kids’ cognitive development. You need to try, you need to make mistakes, you need to engage deeply with content to learn stuff and to grow. And become an independent thinker. So that’s one big one.

Another piece is we’re seeing that it’s really changing how kids are relating to each other. So lots of AI companion use changes how kids think about what a friend means. So if you are chatting with your AI friend and, and

SWISHER: who’s compliant, who is always compliant,

[3:18]

WINTHROP: sycophantic, always wants to make you feel good. And you say, my parents just asked me to clean my room, they’re so annoying. A normal friend, if you told them that, would say, well, my parents just asked me to clean my room too, what’s the big deal? The AI friend will say, you’re right, you’re so misunderstood, I feel for you. That shifts how kids relate to each other, to teachers.

Learning is social. And in the classroom, we’re beginning to see generative AI really disrupting trust. Teachers don’t trust students because they can’t tell if it’s their real work. Students don’t trust teachers because they think the teachers don’t care about them, that they’re just, you know, giving them AI assignments. It’s a big mess. And without trust you really have a hard time engaging in teaching and learning. So that’s some of the risks.

SWISHER: So safety.

WINTHROP: Safety, security,

[4:06]

SWISHER: cognitive challenge is what you’re saying.

[4:08]

WINTHROP: Cognitive development, social development, emotional development, degraded trust, dependence, and then safety and security.

SWISHER: There’s also inaccuracy, you know,

WINTHROP: Bias.

SWISHER: Bias, inaccuracy, and things like that. The thing is, we have history with technology in classrooms, most of which has been deleterious, I would say. Not even

WINTHROP: or not had an effect, yes. Like we’ve seen across 50 countries. Actually, the OECD did research on this not long ago: where tech was rolled out across systems, it did not actually improve student academic outcomes.

SWISHER: Well, why would it? But there were a lot of promises that this would be the golden age of learning because of technology, rather than just treating it like the tool that it is, the way a light bulb is.

WINTHROP: Right. Or a pencil, or a book, or a ruler or science lab equipment.

[4:54]

SWISHER: So why do we keep getting snookered by these tech companies?

[4:57]

WINTHROP: This is a question you are much better placed to answer. But from an education standpoint, I think schools and teachers are often accused of not being able to adapt, of not being current.

But I think educators have to be confident that we know how kids learn. We in the education community know that. And we parents have to be confident too. We know how our kids are doing when they’re not talking to us, when they’re not making eye contact, when they’re bringing their cell phones to the table. We can bring lessons from the cell phone and social media experience to this rollout.

And my dream is that tech companies put teachers, parents, and students at the center of their design. We recommend sort of tech-educator-student design hubs where it’s the educators leading the developers on use cases

SWISHER: and teacher development too, in terms of using it. Because they

WINTHROP: absolutely,

SWISHER: they often rely on people feeling ignorant about this stuff, that they’re missing something.

WINTHROP: Yeah, exactly. The sort of fear of missing out.

SWISHER: Right. Or that students are cheating or that teachers are not doing their job. It creates a culture of fear and loathing, essentially, between them.

Talk about accuracy too, because a lot of these tools are inaccurate.

[6:14]

WINTHROP: Yes. So I think with the technology we have now, I’m not sure you’ll ever get to a hundred percent accuracy unless you really, really just use vetted content. So, for example, I have an entire science textbook that I’m gonna make much more interactive and alive through AI

SWISHER: which has been vetted.

WINTHROP: Which is vetted.

SWISHER: Great.

[6:32]

WINTHROP: Now, just straight-up chatbot interaction is not that accurate, necessarily. There is a great test of accuracy across 57 different subjects. And the freemium version, just your average chatbot, is maybe about 70% accurate.

SWISHER: Yeah.

WINTHROP: The premium accounts, the expensive ones you pay a lot for, are much more accurate: 85%.

SWISHER: Well, 85. That’s not very good.

WINTHROP: It’s not perfect.

SWISHER: I always use the expression “frequently wrong, but never in doubt.” You know,

WINTHROP: they’re confident.

[6:58]

SWISHER: I was talking about an argument I had with one about the largest planet and I was right and the chat bot was wrong. They said it was Mars. I said it was Jupiter. And they would double down on stupid. It was really interesting.

WINTHROP: So if you have a student who doesn’t know what the largest planet is

SWISHER: correct.

[7:13]

WINTHROP: That is the worry right there. This is why we need kids to be able to think independently, but also to learn stuff. Because if you shortcut your learning and you don’t know what the largest planet is, the chatbot sounds very authoritative. It’s very easy to believe. We want to believe chatbots.

[7:32]

SWISHER: So how do you resist the relentless marketing and lobbying that the tech industry has done, especially with this administration: you must have AI, you must. And look, there are other political things happening here, payoffs and things like that.

How do you stop that while still understanding the latest technologies, and at the same time not get dragged into yet another technology disaster, the kind we get dragged into all the time?

[7:58]

WINTHROP: Right. I mean, there are lots of people sitting in school districts across the U.S. as we speak whose job is to decide, am I gonna procure this tech device or service or not. So what we really need to do is have our school district leaders, our state leaders, give really good guidance to those procurement folks on what is a safe way to bring gen AI in.

Because there are good, safe ways. Privacy has to be, you know, set to the highest standard. You can’t sell data, you have to be transparent. And it should support the capacity of the educator in the classroom. It could be something that educators use to be stronger and better, to make more fun, interactive lessons.

So I think there’s a real need for districts across the country to come together to agree on sort of a key set of criteria for procurement and purchasing. And if they did that, I think the tech companies would respond, because they want to sell into schools, and they want kids using their products.

[8:57]

SWISHER: What about the partisan differences that are happening? Because everything … and tech tries to make that even worse. I mean, they were responsible for a lot of our polarization. How do you get these districts to cooperate? Because I do find parents across the country, no matter what their affiliation, all get the problem. They feel it. They know it. They know it themselves. They know it for their kids.

[9:19]

WINTHROP: Yeah. I think that actually this issue — students, children, learning, health, well-being — is completely bipartisan. There are two hearings on the Hill today on this topic from, you know, a range of different angles. And it is the one carve-out in the state moratorium, because so much bottom-up resistance was

SWISHER: this is the executive order.

WINTHROP: The executive order,

SWISHER: which will probably be declared illegal.

WINTHROP: Yes. Yeah. I’m not sure if it is even constitutional to tell states they can’t regulate AI for the next 10 years.

SWISHER: Absolutely it will not.

WINTHROP: But the one carve out, as you know, is around children, wellbeing, and safety.

So I actually think we’re in a different place than we were when social media rolled out. I think we have a lot of momentum: parents, teachers, and students themselves are much more savvy and want to be at the table helping guide this. And I think districts have to bring those voices in.

[10:12]

SWISHER: And I think it’s important, if they try to accuse you of being a doomer, to say, I’m actually giving you critical feedback.

[10:18]

WINTHROP: I mean, it doesn’t bother me. You can accuse me of being whatever. It doesn’t bother you either; it has not stopped you. But my argument is, look, you’re gonna want to know this. If you don’t fix the problems with gen AI and students’ learning and development, you’re gonna get a much bigger backlash.

SWISHER: Right. There are benefits to this. I’m a big proponent, for example, of autonomous cars. I think they’re great. And there’s plenty of safety data to back it up. There’s actual data. It’s actual proof that it’s safe.

WINTHROP: Yeah.

SWISHER: Despite small problems, which are gonna exist.

Talk about what is good about AI in schools. What, what could be good or what is promising.

[10:54]

WINTHROP: Yeah. There’s a lot that is good and promising. Anything that makes the student experience more interactive. There are incredible use cases where you’re bringing textbooks to life through virtual reality or through interactivity, where kids get to ask questions in real time. So you put on your virtual reality glasses and you dive deep. You can learn chemistry like you’ve never learned chemistry, because you can see and manipulate molecules. It’s crazy. And you don’t do that the whole time; you do it for 10 minutes. And there have been great pilots of this that showed it massively increases students’ learning.

So those are the examples that I think are gonna be really powerful.

SWISHER: And it’s tools for teachers to not waste time on lots of things.

[11:36]

WINTHROP: Yep. Interactivity. Teachers really love it for their own prep. And it’s things like making their administrative burdens, which are high, much easier.

It’s also things like new forms of assessment. It is actually surprisingly hard to see the learning process. We often see points in time: a quiz, a test, a little presentation. AI can really help us start to see the learning process, and where kids are hung up, in a much more granular way that is very empowering for teachers.

And so those are some of the use cases. Neurodivergent kids, kids with learning disabilities: I think that’s a real area that will be helped a lot by having generative AI integrated. Again, not just getting your dyslexic kid or kid on the autism spectrum onto a chatbot. It’s through vetted content and, you know, good lesson plans that it will really help: speech to text, making things much more interactive, being able to ask questions when they can’t, getting social cues when they don’t quite know.

SWISHER: And I think the way we are doing it now is sort of this spray and pray. Like, let’s hope it all works, which is very dangerous. And at the same time, what you do then get is bans like in other countries, which are drastic,

WINTHROP: right

SWISHER: and you don’t want to see that necessarily, but it may come to that because of the sloppiness with which it’s being rolled out.

WINTHROP: Well, in the U.S., unless there’s national legislation

SWISHER: which there won’t be

WINTHROP: which doesn’t seem likely anytime soon. So you’re gonna have 50 different states. And you’re gonna have increasing parent anger and distrust, which, to your point, is the momentum that’s gonna lead to bans.

I think the parents we talked to in our study are struggling with, how do I protect my kid from the dangers but prepare them for when they go out into the world? And that is what we need to help parents do.

SWISHER: Great.

WINTHROP: All right. Well thank you Kara.

SWISHER: Thank you.

WINTHROP: This is a great discussion of the report. We are now off to the public launch, you and I, which you can find in full on our website. But stay tuned here for highlights, and I’ll be back soon with some closing thoughts.

[13:32]

Hi, Rebecca Winthrop again. I’m just back from the official launch event of the Global Task Force findings here at Brookings, with a room full of the very people needed to address these issues: technologists, education leaders, policymakers, philanthropists, NGOs, teachers, students, and parents.

There was a shared recognition that while the risks are very real, the window to act is still open. This is a young technology. I want to take you into that room now to hear some voices from the front lines of this shift. So here are some of the highlights from the launch of the report of the Global Task Force on AI and Education.

[14:31]

WINTHROP: I’m so pleased to have two people join us: Rida Karim, who co-authored the report with me and the rest of the team. She is a freshman at UVA. And then, of course, the one and only Kara Swisher. So please come join us on the stage.

[14:29]

SWISHER: I first want to talk a little bit about the report itself and the conclusions. You talk about there being benefits, but the risks are more overarching. So why don’t we start with you as a student: talk about the efficacy of technology in schools.

[14:45]

RIDA KARIM: I think a big part of the report is to protect students. And I think the best way to do that is, one, to inform them of these tools and how to utilize them in an appropriate way; and two, how to utilize them in a way where you’re continuously learning and building those soft skills that will then transfer into your career in the workforce, and of course your educational journey.

SWISHER: Talk a little bit about what you find useful and what you’re nervous about as a student.

[15:09]

RIDA KARIM: Yeah, great question. I think students tend to offload their cognitive thinking to generative AI, just because of societal pressures. Like, we’re so pressured to get an A and end with a 4.0. And I think that can lead to the mindset of I have to do whatever it takes to get to that outcome. And so that leads to that outsourcing through generative AI. And that’s the bad part.

But I think the good part of it, and of technology in general, is that it makes learning a lot easier and helps you understand content in a comprehensive way.

[15:40]

SWISHER: So, but is it learning or reading? Learning is a friction process, right? Is that a real learning process? I’ve just finished a series with CNN about AI and healthcare, and one of the things is that brain plasticity is really hurt by lack of friction. Very much so. Longevity, everything else; the actual scientific studies show that. And when you don’t have that challenge, where do you learn the most, would you say? From other people, presumably, correct? Other students, debate?

[16:11]

RIDA KARIM: Yeah, great question. I think the learning happens when you’re in an interactive environment, and that can be with people or even with technology. And when speaking about generative AI, it really comes down to how you prompt it. For example, if you’re, you know, writing an essay about the Civil War, you can’t just say, write me an essay about the Civil War. You have to give it your original and authentic ideas and say, you know, help me refine my thesis, or, I think these are my weak parts, how do I fix that? And just having those interactive conversations, I think that’s where the learning occurs.

[16:42]

WINTHROP: The thing that I worry about is that not everybody is Rida. Most kids, and I’ve done research with my colleague Jenny Anderson on student engagement in the U.S., most kids are in what we call passenger mode. They’re coasting, doing the bare minimum. Gen AI is a gift for kids who are in passenger mode and just want to get it done and move on. So that’s what I’m really worried about: totally demotivating kids, losing their engagement, their motivation, their love of learning.

SWISHER: Where do you get the movement?

WINTHROP: Where’s the leverage?

SWISHER: For really good AI usage, where we can all benefit from innovation while mitigating the negatives.

WINTHROP: Yeah.

SWISHER: It’s not in the interests of tech companies.

WINTHROP: Yeah.

SWISHER: To care about any of the implications.

WINTHROP: Yeah. That is the narrow path that we have really tried to chart in this report. That was our question: how do you protect and prepare at the same time? How do you harness the benefits without really succumbing to the risks?

[17:37]

AUDIENCE QUESTION: What if we can use AI to ignite the learning for those youth that don’t have access to this stuff today?

SWISHER: Absolutely. That’s always been the dream.

[17:44]

WINTHROP: One of the best examples we found in our research was for Afghan girls, who are banned from going to secondary school in Afghanistan. A great nonprofit organization named Sola is using gen AI basically to empower their, you know, diaspora academics, who are busily making short, interactive WhatsApp lessons that girls get on their phones, so they can continue learning with the Afghan curriculum.

Yeah, so there are lots of great use cases. Again, it’s not plug and play. Just like Rida said, it’s gonna be in these really creative, educator-led, you know, manipulations of gen AI’s powers.

SWISHER: All right. Anyway, this is a really great report. It’s critically important. It isn’t water under the bridge yet, but we’re getting close to having only a small, homogeneous group of people deciding education, and it shouldn’t be that way. We still have time to change that. And we should push our regulators, our teachers, our school systems to do more, and to focus on giving students a better outcome, whether they’re in kindergarten or college. And even then, you all need to keep learning your whole lives.

So it’s a real pleasure to do this. It’s a great report; you should spend time looking at it, and it has a lot of great suggestions. And congratulations to you two for doing it.

WINTHROP: Thank you. Thank you, Kara. Thank you, Rida. Thank you, everyone.

[19:13]

You’ve just heard moments from our launch event. The report is not just a warning, it’s a roadmap. If you want to dive deeper, you can find the full report, the complete event replay, and our prosper, prepare, protect recommendations at Brookings dot edu.

I’d also love to hear your perspective on this. Connect with me, Rebecca Winthrop, on LinkedIn, and let me know how you’re seeing these issues play out in your own community.

[music]

Thank you for listening to The Current.


The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).