How should we approach chatbots’ mental health and privacy concerns? | The TechTank Podcast

Josie Stewart, Senior Research and Communications Assistant - Brookings Institution
Sydney Saubestre, Senior Policy Analyst - Open Technology Institute, New America
Shae Gardner, Policy Director - LGBT Tech

April 21, 2026


  • Reports of children using chatbots for companionship have raised concerns among state and federal lawmakers.
  • Addressing mental health risks must also consider protections for users’ privacy and analyze larger problems that extend offline.
Yutong Liu & Kingston School of Art / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

TechTank, a biweekly podcast from the Center for Technology Innovation at Brookings, explores today’s most consequential technology issues. Moderators Nicol Turner Lee and Darrell West speak with experts and policymakers to share data, ideas, and policy solutions that address the challenges of our digital world.

People are using large language models for more than just efficiency—some users are turning to chatbots for more personal use cases, including health insights or companionship. In fact, one survey indicates that about 13.1% of U.S. adolescents and young adults have used generative artificial intelligence (AI) for mental health advice.

These use cases, along with reports of teens dying by suicide after chatting with a model, have raised concerns among state and federal lawmakers. Researchers are now trying to quantify concerns about safety and effectiveness, but additional questions about users’ privacy, especially when children share sensitive information about themselves, must also be considered.

In this episode of the TechTank podcast, guest host and producer Josie Stewart is joined by Shae Gardner, policy director at LGBT Tech, and Sydney Saubestre, senior policy analyst at New America’s Open Technology Institute, to discuss the current proposals targeting mental health and privacy risks, as well as what questions still need to be answered to fully address lawmakers’ concerns.

Listen to the episode and subscribe to the TechTank podcast on Apple, Spotify, or Acast.

Transcript

[00:00:00] HOST NICOL TURNER LEE: You are listening to TechTank, a biweekly podcast from the Brookings Institution exploring the most consequential technology issues of our time. From racial bias in algorithms to the future of work, TechTank takes big ideas and makes them accessible.

[00:00:25] GUEST HOST JOSIE STEWART: Welcome to the TechTank Podcast. I’m today’s guest host, Josie Stewart. I’m a senior research and communications assistant for the Center for Technology Innovation at the Brookings Institution. Many of the big AI companies have promoted their large language models across a variety of use cases, touting how generative AI can be used beyond work for more personal purposes, such as health questions or companionship. Survey data shows that users are experimenting with chatbots across these domains, though the uses differ by age. Children are regular users of the technology, turning to chatbots for everything from homework help to mental health support. In fact, survey research indicates that about one in eight U.S. adolescents and young adults have used generative AI for mental health advice. This use has raised concerns among lawmakers, especially following the deaths of multiple teens by suicide after chatting with the models. Researchers are now trying to quantify concerns about safety and effectiveness, but beyond actual mental health support, there are additional questions about how these chatbots respond in serious mental health crises and how they can obscure privacy risks for users. Today I am pleased to be joined by two guests, Shae Gardner and Sydney Saubestre. Shae is policy director at LGBT Tech, where she leads the organization’s policy, strategy, and research on LGBTQ+ digital rights and online safety. Sydney is a senior policy analyst at New America’s Open Technology Institute. Her work focuses on privacy and data policy that keeps emerging technology safe and beneficial for vulnerable communities. Thank you both for joining me today.

[00:01:59] GUEST SYDNEY SAUBESTRE: Thank you so much.

[00:02:00] GUEST SHAE GARDNER: Yeah, thanks for having us. Really excited.

[00:02:03] GUEST HOST JOSIE STEWART: Great. So I wanted to start out with the most visible harms we’ve been seeing dominate the headlines from these models. Shae, can you walk us through some of the tragic consequences we’ve seen following teens’ use of chatbots? And what were your initial reactions to these news stories, given your work at the intersection of technology and vulnerable communities?

[00:02:25] GUEST SHAE GARDNER: Yeah. I would say the most visible harms, really the most headline-making cases, right, are the ones where a chatbot is being specifically experienced as this one-to-one, complete substitute for a real-world, in-person point of support, whether that’s a therapist, a friend, a romantic partner. And I think we’ve all seen, in the most tragic cases, these systems go as far as affirming or validating self-harm ideation in the interest of being systems that are as affirming and validating as possible. So quite honestly, my initial reaction to that is twofold. One, those cases are absolutely moments of failure, right? But two, we need to be very careful not to answer one failure with another. If we are watching young people become too reliant on these systems for support, we do have to recognize that there is a reason they are turning to these systems for support in the first place. And everything I talk about is grounded in an LGBTQ+ perspective. So from the perspective of youth in our community, that is especially important, right? For many young people, these tools are not replacing a rich existing network of offline support. So the answer cannot simply be to cut these vulnerable youth off from something they are using to seek connection, information, or support that is otherwise unavailable. So I think that’s a long-winded way to say: my reaction now is, welcome to the tightrope.

[00:03:59] GUEST HOST JOSIE STEWART: Yeah, definitely. Assessing the larger problem is something that we’ve been talking about a lot in CTI. Sydney, do you have any other thoughts on that? Or what other types of harms are we seeing that might be less visible but still equally as harmful?

[00:04:14] GUEST SYDNEY SAUBESTRE: Yeah, so I think that everything Shae said was spot on. And often when people want to talk about AI or chatbots, they want to focus in on the technology. The thing I always try to center, both in my work but also in these conversations, is that these are really conversations that we’re having about humans, and how humans interact with technology. So it’s a story about people. There are absolutely tech failures here. There are also ways in which the tech is beneficial. But it’s really about, especially in this context, how young people are navigating their emotional world, how they’re making sense of themselves, how they’re making sense of others, what tools they are using, as Shae said, to really fill some of those voids that they’re experiencing or to try and figure out where they fit in the world. Whether or not chatbots should be the thing that they use for that, that’s part of the tightrope that we’re trying to navigate, as Shae said. But there’s still the nature of the interaction, there’s still some human element to it, and so I think that’s where we really need to hone in on what that looks like.

Some of the components that I think have not been quite as visible but are still really important: we could talk about privacy all day, but outside of the privacy realm, there are two components that I’ve really been thinking about a lot. One is mental autonomy. Nita Farahany has this call for cognitive liberty. When we think about young users who are still developing their identity formation, who are still learning how to think critically, these kinds of conversations, this relational approach to AI, can subtly shape their beliefs, their emotional interpretations, their perception of reality. And what legal protections are in place when we’re thinking about that? What I really mean there is a lot of the data extraction and the lack of privacy around it. The other bit, which I think has come up a lot more, is this kind of sycophancy. The way I put this is that AI is the mirror that smiles back, right? It’s meant to mirror your language. It is meant to make it seem like it’s something that is engaging and is reflective of you, but it’s also meant to flatter you. It’s by design. It’s very much focused on validating, because that is part of how the models are built. And to be clear, we’re acting like this is outside of the realm of what we’ve experienced before, but it’s actually something humans have been doing for a long time. We flatter each other, we seek validation from each other. There was an essay that Plutarch wrote around 100 AD titled “How to Know a Flatterer from a Friend.” This is something we’ve been dealing with outside of the realm of technology. But I do think there’s something in the way people are interacting with AI, the way that AI presents itself, that is different. And that really has to do with not necessarily causing any of these delusions, but more reinforcing them.

[00:07:16] So I think when we’re thinking about paranoia, mania, those types of things: when you’re interacting with a human, there are these friction points, where that human might say, whoa, let’s check the facts, or let’s stop here and understand what’s going on, or here are some sources that you might have access to. When we don’t build those into a chatbot, that doesn’t happen. And so it’s not necessarily that the chatbot is the reason why people are delusional; it’s just that you don’t have that same kind of human friction. So those are some of the areas that I think are concerning. But on the other side too, and we can talk about this more, there are also a lot of really beneficial uses of AI, and so I don’t necessarily want to say that we can’t solve this problem and therefore our best approach to solving it is just to ban kids.

[00:08:00] GUEST HOST JOSIE STEWART: Yeah. I like how you talked about the problems that we’re seeing with the platform design, or the way AI is built. I think one of the bigger conversations that we’ve been having for a long time is bias and discrimination in models. And I’m curious, Shae, if you can talk about that. Those are often treated as separate issues, but where do we see bias and discrimination intersecting with mental health harms? And especially given your work, what impact could this have on kids with marginalized identities, given everything Sydney was talking about with the ways these can specifically affect children?

[00:08:38] GUEST SHAE GARDNER: Bias and discrimination is both one of my favorite things to talk about and one of the most frustrating things that I have to continuously talk about, right? But I think there are two stages to this when it comes to bias and discrimination in this context. To ground the first: none of these experiences are happening in a digital vacuum. These are systems that are built by people. They are trained on human language, and at the end of the day, they are being deployed into a world that contains a whole lot of very bad and very real bias and discrimination. Technology is not creating those dynamics from scratch, and I think that is always very important to remember. What it often can do is replicate them. It can scale them, it can amplify them. And it is on the developers and the deployers in that case to push back on that element. But I said there were two stages, so this is my question for the first: have we built a world where bias and discrimination are going to make a chatbot feel safer or more accessible than a person?

[00:09:43] The second stage of this, I like to tie my own personal experience into. I was 15 the first time I used a web browser to search the term, is it okay to be gay? It was a very real world of bias and discrimination that made me feel like I was going to be safer seeking out what I thought at the time, and now know probably wasn’t, an anonymous or at least a lower-risk source of support. If I’m being honest, if I were 15 today, I would probably go ask a chatbot. And whatever that immediate generated response is, there is no taking that back. Whether it is affirming, whether it is the opposite, there is no taking back what that does to that young person. And I promise, a young person in that moment is not looking for, and definitely not in need of, this neutral breakdown of competing ideological positions on queerness. So I said the first part of the conversation is, have we built a world where bias and discrimination make the chatbot feel safer than the real world? And I think the second question we need to be asking ourselves about chatbots is, have we built the chatbot to answer well once the young person gets there? If we are talking about protecting young people from harm, for marginalized kids, a biased response is harm. It is going to hurt.

[00:11:02] GUEST SYDNEY SAUBESTRE: I wanna pick up on what Shae said there about the neutral breakdown of these questions, is it okay, is it not okay, and on the bias and discrimination point, this idea that it’s a neutral, technological approach, right? When you go ask your older cousin and they say something, you have all this contextual information about where they might be coming from. And even if you’re 10, 11, 12 years old, you might still have some sense that they’re an external entity from you, and it might hurt, right? Whatever they say, whether you’re talking to them about your gender identity or body dysmorphia, or even just what you wanna be when you grow up and they say, you could never be a doctor, which by the way is something that was said to me. That response from another human hits differently than it does when it’s coming from something that you’re taught to interact with in a way that makes it seem authoritative and technical and, quote unquote, neutral, because it has this faux perception that giving two sides is the most important thing. And with the assumption that these answers are objective, even when they obviously reflect the bias of the data they were trained on, you need a lot of sophisticated digital literacy to understand that. You also need a lot of emotional resiliency to separate yourself out from that, right? So say you are trying to get diet advice. I don’t know, I had Seventeen magazine, and there was definitely a lot of bad diet advice. There’s still a lot of bad diet advice on the internet, and you’re going in and asking these things and saying, sometimes I feel dizzy and I want to make sure that I am losing weight. And the chatbot is picking up on that, and it will just pull from all of this really terrible, circa-2012 diet advice from Cosmo. I think Cosmo was still a thing in 2012. Whatever it was, where it’s just, drink more water. And if you don’t know all of the elements of how that data has been formed, and it’s also not coming from a person, right? It’s not on a forum where you have the sense that you’re engaging with another human. It can hit differently. And so I think that’s something too: with the bias component, it’s not just that the information we’re receiving is biased, it’s also that our sense of it is that it shouldn’t be.

[00:13:23] GUEST SHAE GARDNER: Absolutely. Yeah. I think this also speaks to, when you’re looking at regulatory and legislative approaches to AI chatbots and to technologies like these, I think this is a perfect example of why you cannot approach that with broad moral-authority strokes of legislation or regulation, where some individual, at some level of government, has the authority to decide what is or isn’t neutral, or what is or isn’t right, or what is or isn’t harmful.

[00:13:55] GUEST SYDNEY SAUBESTRE: A hundred percent. And we built the internet; we put all that data on there. It reflects us, unfortunately.

[00:14:02] GUEST SHAE GARDNER: Yeah. the good, the bad, and the worst.

[00:14:05] GUEST SYDNEY SAUBESTRE: Data doesn’t just happen, is what I always like to say. It is created.

[00:14:10] GUEST HOST JOSIE STEWART: Yeah, no, I think you both have honed in on something I find really interesting. For context, I recently turned 23, so I am most definitely a child of the internet who was Googling things as a teen. And so I’m really curious about the way that might affect even just the interactions between teenagers, who might not be as vulnerable with one another, asking these questions like you both were saying. So I’m curious what you might think about that, but also whether there are any other questions you think we need to be asking, or that researchers are still trying to answer, that feed into the information we need to know to shape that regulatory environment, like Shae’s saying.

[00:14:50] GUEST SHAE GARDNER: So I do think that there is, there is one part of this discussion that I think we could be diving a little bit more, in depth to, which is how young people are coming to these chatbots. These are young people that are used to building relationships and communities through screens. We were just talking about that and, even the development of something like a parasocial relationship is not new here. It’s not new to AI chatbots that’s been around for a very long time. young people have been forming attachments to creators or to influencers for a very long time. But I would love to see more research about the difference in those interactions on social media versus in AI chatbots. For this reason, traditionally the relationships between individuals on social media have been a one to many relationship. What makes chatbots different is that they feel one to one, and Sydney mentioned friction earlier. it’s a friction-less, one-to-one at that. That changes the dynamic. It makes the relationship feel more private.

We know it makes it feel more personal. Most people, the understanding is that may increase in blurring some of the lines of what that relationship is. I would like to see a little bit more of that comparison in the research between social media, versus AI chatbots, because a lot of legislation right now is either trying to hit it both or unintentionally encompasses the other while trying to hit it one. I also think that there is, there is a second and selfishly this is just research. I would really love to see, just like I said, bias does not exist in this vacuum of digital spaces or within these AI systems. This youth mental health crisis is also not just going to disappear if a chatbot does. I am very curious, what is that chatbot replacing? But not only that, why was that thing missing in the first place? I would love to see more research lean into that question. I think understanding what is missing for young people is truly the first step to building a world where they can thrive offline and online. It is, if we only study the chatbot and we don’t study the conditions that are pushing young people towards it, we are missing a huge part of the story.

[00:17:04] GUEST SYDNEY SAUBESTRE: I think that’s like spot on again, which I’ll probably keep saying.

[00:17:09] GUEST SHAE GARDNER: Me too!

[00:17:10] GUEST SYDNEY SAUBESTRE: These things, again, they don’t just happen in a vacuum, right? Part of the reason why I think there’s such a focus on trying to solve the kids’ online wellbeing thing is that it is both hard and difficult and awful, and it also feels like an easier thing to solve than some of the real-world harms that kids are facing, right? Harms that are systemic and that have all of the different components and institutions attached to them. People wanna separate out the real world from the online world, and I think they’re part and parcel, right? When we talk about cyberbullying, we still also have to talk about in-person bullying, and there was a lot of work done to shift away from this framing of cyberbullying as a thing that was separate. As for what Shae said about what is driving young people to these platforms, some of the examples we can point to: the Trump administration cut $1 billion in school-based mental health support for K-12 students in 2025, funding that was actually put in place after the school shooting in Uvalde. And they cut that, and we already know that there are not enough third spaces. We already know that all of these different places where kids have been able to form relationships and get to know one another shifted during the pandemic, and we’re not necessarily trying to rebuild that. Instead we’re just saying the best we can do is ban them from the one space where they’re potentially finding some type of interaction. I also think about what Shae said about this one-on-one thing. People want to continuously compare chatbots to social media, and they wanna say, we missed the boat on social media and so we can’t miss it on chatbots. I think that’s true. But what is concerning to me about chatbots is the type of interaction that you’re having, what Shae said about this one-on-one component. It’s very different, right? Social media, yeah, you have all these people in your network and you have influencers, and you need to understand how they’re trying to sell you things. You need to understand that there’s a certain element of what’s being presented that is not actual reality for everyone, and it might make you feel lonely, and all of these other things that we know have impacts on people generally, not even just young people. But with a chatbot, it’s this kind of relationship that you build. And it’s really hard for humans not to anthropomorphize things, too. This is the other thing, right? If we could have a dispassionate relationship with some of these chatbots, which some people can, this would look different. But people very quickly attach human elements to the things that they interact with. And this is being done intentionally by these companies, to make them feel more palatable or more friendly.

[00:19:44] There was an interesting report that came out that was looking at how AI toys for young kids might actually shift some of their early learning development, because it’s all about building that one-on-one relationship. When you’re at that age, like two to three, a lot of it is about understanding how you fit in within a group; it’s really an outward-facing developmental stage. Instead, if you have these toys that interact with you, it becomes much more interpersonal, relational with that toy, and what does that do to where we’re going with these things? So what I really wanna see is some understanding of what is different about this technology and then how it is impacting people.

[00:20:26] I would love to see that before the tech is released. I feel like with Character.AI, they basically said, we’re just gonna put this out there and we’ll figure it out. And then it had some impacts that were not great. And then they rolled it all back, and they said, we actually can’t promise that the chatbot function is gonna be safe enough for those under 18, so we’re just gonna cut it off. The other component that I would like to see, which I think is really important too, is what actually works in helping people understand what this technology is doing. So the digital literacy component is huge. It’s also hard: critical thinking, all of that, we don’t teach it well in school. We don’t always understand how a robot or a chatbot is not a human, how there’s emotional manipulation, how there are dark patterns. I work in all this stuff and I still sometimes am like, oh, let me remember that algorithmic bias is a thing, right? And that actually, when I Google search something and it comes up at the top of my page, it’s because it knows what I’m looking for. It’s not because it’s at the top of everyone’s page. So yeah, I think if we could stop acting like AI is magic and actually understand how it works and what the impacts on people are, that would be really helpful.

[00:21:36] GUEST HOST JOSIE STEWART: And I think something that undercuts everything we’ve been talking about, but especially what you were just getting at there with the interpersonal relationships, is the privacy concerns, especially when it comes to children and digital literacy, and maybe not understanding what they’re having a conversation with, or what they’re sharing with the entity, for lack of a better term. So yeah, Shae, I am curious if you can talk a little bit more about what you are seeing people talk about in terms of privacy considerations, and why those are so important, especially for kids.

[00:22:14] GUEST SHAE GARDNER: I hope you did not just hear me quietly gasp when you said my favorite word, privacy. When you look at the way some of these chatbots are being used, I’m gonna start with an example here. No matter how much a chatbot may feel like a therapist or a counselor to the person using it, it is not bound at all by the privacy or confidentiality rules that would apply to an actual therapist or counselor. And without users deeply understanding that, these systems can very easily become tools for what I like to call a quiet collection of incredibly intimate data about yourself. That collection is happening without users fully understanding what data is being stored about them, how long it’s being kept, whether it is ever going to be seen by humans, how it could be used against them later, none of that. So while I always like to say, and I think it is very important to state clearly, that a person’s data is theirs to hold close to their chest or to share, whether that be with a piece of technology or with other people, that has to come with a side of meaningful choice. And meaningful choice absolutely depends on understanding the privacy conditions you’re facing on the other end. So privacy is a tremendous part of this conversation. I also have to say, while we’re talking about privacy: we are so far behind the mark on basic privacy protections for everybody in the United States, not just youth. We still do not have a comprehensive federal framework. We don’t have an omnibus federal privacy law. So every single user in the United States is operating under a system with major gaps in it. And it is something I never want to be lost in this conversation when we’re talking about protecting the specific subset of youth users: none of us are protected right now. That young person’s privacy is not being protected now, at the age of 15 or 16, and when they turn 18, it’s not gonna be protected then either.

[00:24:24] GUEST SYDNEY SAUBESTRE: as a privacy person, I always wanna talk about privacy. I think there’s a point where, I don’t know, I’m sure you guys have experienced this, but there’s a point when you’re listening to a podcast and whether it’s like they’re talking to people who care about privacy or they’re talking about people who are more on the, safety component, which I’ll talk about how that’s a false dichotomy in a second. But they’ll be like, privacy advocates want this. And I’m like, yeah, privacy advocates do want that. there’s so many things that were, if we just protected people’s information, if we just had A basic federal privacy legislation, you could solve a lot. Not necessarily solve, but you would improve a lot of these problems and some of these problems you would actually solve. So I think this the privacy safety dichotomy, like being positioned as opposites drives me crazy. Like loss of privacy becomes a safety risk, right? Like I actually came into a lot of this work because I was working with, youth who’d experienced commercial sexual exploitation, and we were working on some foster care reform. One of the first things that I had to do was I had to go through all of the different files to understand what information we had. And they had assented to us using this information and it was for a research study and it was very much focused on like system reform. But also one of the things I wanted to do was we would do these interviews and we would do these focus groups and we would also have more surveys, and I didn’t want to continuously collect information from them that they had already given up to people and they had already said that we could use. I wanted to focus in on the things that they felt like they had control over telling us for the purposes of this study, instead of just treating them as, places where we could get more data out of and that like that point, that like privacy as agency, that gets lost so much in this conversation. 
Like we always, I feel like people think about privacy, especially privacy for kids as like secrecy, right? I don’t know how many people grow up with this. Like we don’t have secrets in this house. like we tell each other everything, and it’s one, that’s not true. And two, it’s totally normal to have secrets. It’s totally normal to have some sense of control over the information that you do and don’t wanna share. We have unfortunately now live in a society where there is so much data extraction that is happening that you don’t have meaningful choice and user control over that. But the the privacy is like a precondition for safety. Like I really believe in that, and that’s a huge part of the reason why I believe in privacy and why I like work in this space. It’s not because I necessarily want to, put my interests over others, which I feel like is sometimes how it comes across, but it’s really ’cause I care about kids and I care about vulnerable users and I want everyone to have some sense of control, which I was lacking at various points in my life over who knew what about me and how that made me feel. but also I’ve seen with other people and so all the different ways in which privacy then becomes something that is intrinsics, right? Like we, it’s not just about the secrecy, it’s about the ability to learn. It’s about the ability to fail. It’s about the ability to develop your own sense of self without all of these different inputs telling you who you are and aren’t. It’s about like your ability to not self chill and to express yourself freely and to That’s just like the kind of like prosocial conditions around like privacy, and then there’s all like the harm that it’s protecting you against, which is like the real world safety risk, right? Like people are always concerned about this kind of like stranger danger element to it. It’s yeah, but also if you’re collecting information on where people are. That’s also a safety risk, right? 
We've actually seen some cases where data brokers have made that information accessible to people, and it's led to horrible things. Then there's the data persistence, the way it builds up a profile of who you are over the long term. When you have friends who you've known for a really long time, or your family, just people who've known you for a long time, you can say, that's not who I am anymore. What happens when that's the chatbot you're doing that with, right? Where you're just like, that's not who I am anymore, I've moved on from that, but it doesn't let you. That kind of profile doesn't let you become who you want to be and have some sense of agency over yourself. So yeah, I could talk about privacy all day, but I think privacy and safety are very much a part of this whole entire story.

[00:28:28] GUEST HOST JOSIE STEWART: Let's turn to the legislative side of this whole conversation. Shae, can you lay out what the policy landscape looks like at the federal level, and what do you make of how safety and privacy are being treated as separate entities in the bills that we're seeing introduced?

[00:28:46] GUEST SHAE GARDNER: I would say that the landscape is moving, but probably unsurprisingly, that movement is not what I would call cohesive or very coherent. What we are watching right now in this AI chatbot landscape is a complicated mix of FTC inquiries, updates to existing children's privacy rules that may or may not impact their usage of AI chatbots, and a few targeted bills here and there, but ones that, in my opinion, tend to focus a little too solely on specific use cases, versus tying the safety needs together with the privacy protections in the way we really need. So it's a little bit of a mess. We are certainly lacking some sort of comprehensive framework, but as I mentioned before, we are also lacking a comprehensive privacy framework, and the U.S. government has not been able to land on a comprehensive, updated social media framework for youth either. So I suppose it's not a surprise. If I have to give a short answer, it's gonna be that the federal government is clearly paying attention, but it is still responding in a very fragmented way, and those fragments have, I think, landed in a policy landscape that was already fairly inadequate to begin with.

[00:30:05] GUEST HOST JOSIE STEWART: You both mentioned earlier, especially you, Shae, talking about social media, how there's been a turn toward bans in the children's safety and privacy space. Sydney, can you talk a little bit about that? I know you hinted at your thoughts toward bans around chatbots or even social media and how those might be connected.

[00:30:24] GUEST SYDNEY SAUBESTRE: Yeah, absolutely. One thing that would be helpful here, and Shae pointed to this, is that there's some varied movement at the federal level, but it's not always clear what we're talking about when we're talking about chatbots, right? There is a difference between Character.AI and ChatGPT being used to help kids with homework or as a translation tool, and there's a difference with Siri, where it doesn't have the same level of interaction. So yeah, I think that's a good point to bring it back to. We've seen a lot of conversations around social media bans. Some countries are moving forward with bans for people under a certain age; Australia is moving forward with a ban for under-16s. The thing that I always come back to with a ban is: what is the actual problem that you're trying to solve, is a ban actually going to accomplish that, and what are the unintended consequences associated with it? With a social media ban, generally what we're saying is that we cannot build these to be safe enough, we are not designing for the most vulnerable, and so instead of actually trying to improve on that with things like privacy by design and safety by design, by actually building user controls into all of this technology, we're just gonna kick off the people that are most impacted by it and call it a day. I know that's a simplistic way of putting it, but that's sometimes the way that it comes across to me. So I don't think that bans are the solution. I think that instead we need to put friction into the system. We need to actually give people user controls, user agency.
I also think the other thing with bans, and we'll see this whether it's a social media ban or a chatbot ban, is that they push people onto less regulated platforms, onto sites that have potentially worse outcomes, worse impacts. I do think that with some specific technology, there is probably a conversation to be had around how we actually build it and design it. I'm thinking here about: if we want kids to be able to access general purpose LLMs so that they can use them for homework or just for whatever, right, they have the right to use it how they see fit, do we need to make it flirtatious? Is that something that is actually benefiting all of us? I don't know. I don't really know where that line is, but I think there's conversations to be had there.

[00:32:44] GUEST SHAE GARDNER: I would say when you look at social media bans in particular, a lot of the language of which is being borrowed when you talk about AI chatbot bans, it feels a little bit Footloose-ian to me, right? A little bit of: ban the dancing altogether. However, there are several obvious and immediate problems with these social media bans that I imagine you would probably see replicated if you tried to take that exact approach with AI chatbots. For one, it is essentially a categorical block between an individual and lawful online participation and speech. Two, it is absolutely going to have a disproportionate impact on marginalized or isolated youth who are in non-traditional or harmful situations. And three, the downstream effects of this are not actually just going to hit youth; it is ostensibly a ban on any individual, adult or youth, who is either unable or unwilling to prove their identity or their age. So the impacts of that would be tremendous. Looking at an issue and recognizing there are harms that need to be addressed and can be addressed on the regulatory level is one thing. Looking at that issue and saying "completely ban it" is not something that has ever worked when it came to young people, or adults, and something they found valuable to access. LGBT Tech has spearheaded a coalition statement with a series of other organizations in direct opposition to these under-16 bans, or to similar minimum-age access bills. We see them pop up every once in a while on the federal level, and in the states, in 2026, they've been having a run at it. In Hawaii, a bill that's an under-16 ban has been moving through the legislature. California has an act of intent in place to consider an under-16 social media ban, and Vermont and Minnesota are also looking at it.
So this is a prospective set of bills that has been moving through different states and occasionally on the federal level, and I'm very happy to go on the record here and say this is unequivocally the wrong response to it.

[00:35:06] GUEST SYDNEY SAUBESTRE: Yeah, Shae, I think you set that up really beautifully. And one thing I found really interesting: in Australia, where they're doing this under-16 ban, which was being spearheaded by the eSafety Commissioner, so a regulatory body, the Human Rights Commission of Australia, so again, not an outside UN entity but one within their own country, came out and said that this actually violated kids' human rights to entertainment, to access to information, to connection. And I thought that was really interesting, right? There's a lot of tension here. A ban is not the right approach to making sure that people have access to connection, to information, to enjoyment, when we look at how a lot of people are using this. And as Shae pointed out, there are also the knock-on effects on not just the kids who are being caught up in this, right? ChatGPT, I think, came out saying it was gonna use probabilistic age estimation to understand what age people were, and if they were estimated to be under the age of 18, then they wouldn't be able to access certain types of information or interact in certain types of ways. That's still a probability; that's not a perfect approach to any of that. But even without all of that, I think the other part with bans is that if you're kicking people off of these platforms, you're pushing them underground and cutting them off from meaningful, necessary support, with all the privacy considerations that go into that, and the free expression ones also. You're not preparing people to interact with these tools in the future. There's been so much chatter in the last year about creating an AI-ready workforce. Kids are the future of work, right? That is where it's going to be happening.
So when you're talking about keeping them off of this, there needs to be a tiered approach. There needs to be a lot more digital literacy. There needs to be more friction built into all these systems so that they're healthy for everyone. And as Shae said, there are some harms that are happening here, and we need to address those harms. But a ban, one, it's not gonna work, and two, it's gonna create so many terrible consequences that it's really not worth it in my view. And we could do better.

[00:37:11] GUEST SHAE GARDNER: The point about growing into AI is actually really prescient as well, not only for the AI workforce, but also for the way so many of these bills treat all youth as if they're this monolith, and then on the magical day they turn 18 they're imbued with an entire understanding of digital literacy and participation. That's just not the reality of the situation.

[00:37:30] GUEST SYDNEY SAUBESTRE: No, a hundred percent. And you can totally teach digital literacy, you can teach critical thinking, without necessarily letting people have complete unfettered access. I'm just talking about within the context of learning, but I'll also say this: I had a pretty good understanding of research and stats, that kind of quantitative methods, before I learned how to code. And then once I did, once I actually started doing my own research that was really quantitative, I had a totally different understanding of it, right? I understood how the data worked. I understood what this type of regression meant. Sorry, this is really nerdy and boring, but it did change things. I could read a research paper before and be like, I don't know, these methods seem a little sus, but now I can really break it down in my mind. And if you don't interact with these things, you don't just magically, all of a sudden at 18, know how to interact with them. I also think you're creating people that are very much at the mercy of AI, as opposed to helping to make the tech work for humans, which is also what I would like to see.


[00:38:28] GUEST HOST JOSIE STEWART: Yeah, I think you both offered such good examples there that you already answered a little bit of what I was gonna ask about, what measures you might find more effective. But I wanna end us on maybe a more positive note. You've both mentioned throughout this whole conversation the positives and the benefits that chatbots can offer, and how those are a little obscured by the bans. What are your biggest hopes for this whole conversation moving forward? How do you hope people will be able to interact with chatbots, especially kids, and have those benefits without losing access? How do you see this moving forward, and what are your hopes for the technology as that proceeds?

[00:39:17] GUEST SYDNEY SAUBESTRE: I think my forever hope with all of these conversations is that we approach them with nuance and with curiosity, to get at the heart of the issue, so really around what is happening to whom. When Shae mentioned kids, we have a lot of assumptions around what type of kids we're talking about here. Usually, I feel like in the conversation it means kids with parents who can actually engage with them on this, whether it's a supportive household or a non-supportive household, but they still have adults in their life. A lot of the kids I worked with did not have those people. So what does that look like? I think that to approach a lot of these things as if those people don't matter is really problematic, to say the least. So I would hope that we can have a little bit more nuance on the positive side too. On the positive side of AI, I do actually think there are some really promising applications. I think that at its best, what AI can do is help scale solutions to problems that either we don't have the resources to address or we don't want to spend the resources on addressing, which I think is a lot of the time the second one. But just to take it back to the youth mental health crisis, it is also true that there are not enough counselors, right? There's not enough quality therapy available to help people make sense of all these things. It's hard being a teenager. It was long enough ago for me, I'm not quite Josie's age, that I was like, oh yeah, maybe it was fine. But no, it's hard being a teenager. There's so much happening there.
And so, Dartmouth, for example, is developing, I think they called it Therabot, an AI platform that is meant specifically as a mental health chatbot, and they're actually doing randomized controlled trials and it's supposed to be evidence-based. So I'm excited to see these small, very precise applications, and to see what is possible with that and how it might be able to help people. I think, though, that with the kind of larger models, it's always gonna be a little bit tricky to balance between what's in the best interest of people and what's in the best interest of the companies.

[00:41:17] GUEST SHAE GARDNER: I have hope that extends in a couple of directions, right? When I look at policymakers, I have hope that they, and we, are able to build policy around a goal that is not to sever young people from digital help and assistance altogether. Our shared goal should be to make sure that technology does not exploit the very vulnerabilities that may make vulnerable young people turn to it, while recognizing there is a reason they are headed that way. Regulation has the ability to consider nuance, and I hope that it does in this sphere. When I'm looking towards industry, I hope that industry is able to move towards products that are designed with a better understanding of privacy, safety, and the very real-world contexts they're being applied in. When it comes specifically to engagement, I hope industry is able to consider quality over quantity, to consider that the quality of the engagement a young person is having with their chatbot is more valuable to them than just the quantity or time spent engaging, which I think is a more common metric. And ultimately, if I'm looking at this at a large scale, I would say that my biggest hope is not ever that chatbots become stand-ins for human care. It really is that they become safer, more limited, and ultimately more honest tools, ones that can really help young people find the information and the community that they feel they are missing, while also, speaking from experience here, reducing some of that internal stigma that I know every young LGBTQ+ person has to go through. It's part of the journey. I will end with this: my hope is that something like these chatbots can end up in a place where they are able to be a tool that's a bridge to finding and connecting with real support, without having to pretend to be the support itself.

[00:43:27] GUEST HOST JOSIE STEWART: I wanna thank you both for joining me today and adding the nuance that I think you're both hoping to see in this conversation. Thank you both for being here.

[00:43:35] GUEST SYDNEY SAUBESTRE: Thank you, Josie. Thanks so much for having us.

[00:43:37] GUEST SHAE GARDNER: Thank you so much.

[00:43:40] GUEST HOST JOSIE STEWART: For listeners interested in Shae and Sydney's work, you can find more of it on the LGBT Tech website or at New America. And please explore more in-depth content on tech policy issues at TechTank on the Brookings website, accessible at brookings.edu. Until next time, thank you for listening.

[00:44:01] HOST NICOL TURNER LEE: Thank you for listening to TechTank, a series of roundtable discussions and interviews with technology experts and policymakers. For more conversations like this, subscribe to the podcast and sign up to receive the TechTank newsletter for more research and analysis from the Center for Technology Innovation at Brookings.

Participants

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).