TechTank, a biweekly podcast from the Center for Technology Innovation at Brookings, explores today’s most consequential technology issues. Moderators Nicol Turner Lee and Darrell West speak with experts and policymakers to share data, ideas, and policy solutions that address the challenges of our digital world.
Section 230, sometimes referred to as the “26 words that created the internet,” was passed as part of the Communications Decency Act in 1996. Since then, the online world has changed dramatically, but its role—granting platforms immunity from liability for the user-generated content posted on them—has not.
The law has since become a central topic of debate, yet throughout the discussions around moderation and its implications for speech, its consequences for marginalized groups, especially Black Americans, have gone largely unexplored. New research and reports from the Joint Center for Political and Economic Studies examine how Section 230 benefits Black Americans seeking free speech on the internet, while also enabling harms such as harassment and discrimination.
In this episode of the TechTank Podcast, co-host Nicol Turner Lee is joined by Danielle Davis, the director of tech policy at the Joint Center for Political and Economic Studies, to discuss these implications and her recent issue briefs on the topic.
Listen to the episode and subscribe to the TechTank Podcast on Apple, Spotify, or Acast.
Transcript
CO-HOST NICOL TURNER LEE [00:00:00] You’re listening to TechTank, a biweekly podcast from the Brookings Institution, exploring the most consequential technology issues of our time. From racial bias in algorithms to the future of work, TechTank takes big ideas and makes them accessible. Welcome to the TechTank podcast. I am co-host Nicol Turner Lee, the director of the Center for Technology Innovation at the Brookings Institution. Even if you’re outside of tech policy, you’ve likely heard the term Section 230. It’s often referred to as the few words that created the internet. And it’s long been a topic of debate with regard to the responsibility of social media platforms because, you know, it generally covers giants like those big companies we all go to to like stuff and connect with people. Well, Section 230 grants them immunity from liability for the user-generated content posted to their platforms. I can’t tell you how often and how much I have this conversation. But the issue in recent months and years has become increasingly polarized between those arguing for stronger content moderation regulations and others essentially pushing for more free speech. And there are implications of this law for groups of color, particularly Black Americans, who have largely been left out of this debate. So I could not wait to bring on my friend and scholar Danielle Davis, the director of tech policy at the Joint Center for Political and Economic Studies, where I actually started many, many years ago, because she has recently focused on the impacts of Section 230 on Black communities. And she’s just written this fantastic report that I want to talk about. And guess what? She also has her own podcast worth mentioning, called The Miseducation of Technology. Now Danielle, I try to tell myself I’m not that person, so please correct me when we have this conversation. But I wanna thank you for joining us today.
GUEST DANIELLE DAVIS [00:02:18] Thank you for having me.
CO-HOST NICOL TURNER LEE [00:02:20] Well, I cannot tell you how excited I was to see the report. And I immediately thought of you as we were trying to think about an episode. You know, I laid out a little bit, just a little, on Section 230, but with your expertise, why don’t you provide an explanation of what Section 230 is? Some of our listeners may know a lot about it, some not so much.
GUEST DANIELLE DAVIS [00:02:43] Absolutely, it’s one of my favorite topics, so…
CO-HOST NICOL TURNER LEE [00:02:46] I know, anybody who says that’s their favorite must come on a podcast. I know, right?
GUEST DANIELLE DAVIS [00:02:51] So Section 230 is a provision of the Communications Decency Act of 1996. It’s one of the most important and clearly one of the most controversial laws shaping the internet. Essentially, at its core, it says that online platforms and services aren’t treated like traditional publishers, such as newspapers, when it comes to user-generated content. That means these platforms generally can’t be held legally responsible for what their users post. This protection, I often describe it as the shield, and I’m specifically talking about Section 230(c)(1), which protects platforms from being sued just because a user posts something false, offensive, or harmful. It applies broadly, not just to social media sites, which are usually the platforms we think about when we talk about Section 230 because of the news, but also to search engines, comment sections, review sites like Yelp, and even marketplaces like Amazon or Etsy. So if the content comes from someone else, Section 230 usually shields the platform from liability. There’s also what I like to refer to as the sword, which is Section 230(c)(2), also known as the Good Samaritan provision. It allows platforms to moderate their sites by removing content they find or deem objectionable. That’s the standard. Objectionable content can include things like hate speech, misinformation, or harassment, and platforms can take that content down without being punished for it. In other words, platforms can take things down and still not be considered a publisher of that content, which platforms were not allowed to do before Section 230 became law. The core principle behind all of this is that platforms are considered hosts, not publishers. They give people a place to speak, but they’re not automatically responsible for everything that’s said. But even with all that, there are still exceptions. This isn’t full immunity for everything your users post. For instance, the statute does not shield platforms from liability in cases involving violations of federal criminal law, intellectual property law, the federal Electronic Communications Privacy Act of 1986, or federal and state sex trafficking laws. But essentially, the goal of Section 230 was to strike a balance between free expression and responsible moderation, to encourage companies to keep the internet open while also giving them tools to manage harmful content. But today, with the internet playing such a central role in our democracy and daily lives, there’s a growing debate about whether that balance is still working, especially for communities who are often targeted online and left without real recourse when platforms fail to act, such as the Black community.
CO-HOST NICOL TURNER LEE [00:06:08] Well, yeah, and that’s so interesting too. So before we delve into your report, I love the way you laid this out, and thank you. I mean, it’s probably one of the best and clearest explanations I’ve heard. You know, but social media didn’t really exist at the onset of Section 230, am I correct? I sometimes get all my tech policy confused.
GUEST DANIELLE DAVIS [00:06:29] Yes, correct.
CO-HOST NICOL TURNER LEE [00:06:31] To your point, like it has become sort of. This enthralled sense that Section 230 is necessary because now social media exists. Just talk to listeners a little bit about like how this actually predated social media and how it’s just become much more interesting as social media and other platform appliers have, other applications have kind of flooded the internet.
GUEST DANIELLE DAVIS [00:06:56] Absolutely. So the news will make it seem as though Section 230 came about when tech companies came about, like Facebook and TikTok, but Section 230 was first passed in 1996 when none of these tech platforms were even a thought. The internet was still in its early days. If you want to think dial-up, AOL chat rooms.
CO-HOST NICOL TURNER LEE [00:07:23] No don’t… No, don’t don’t say think I did have those things. Okay, so I also have
GUEST DANIELLE DAVIS [00:07:26] I also had them. I was very young, but I had an AOL chat room account. Also like static websites were still a thing at that time. So lawmakers, more specifically Senator Ron Wyden and former representative Chris Cox wanted to encourage innovation and allow online services to grow without being crushed by lawsuits. Every time someone posted something offensive or false or inaccurate. They had, at that time, there had just been a court case where a platform was essentially punished for trying to moderate harmful content, which I’m sure just, it just sent the wrong message. Like a platform is trying to do the right thing, taking down harmful content. But in doing that, they’re deemed the publisher and punished for it. So Section 230 was designed to flip that to say, essentially, if you are a platform and you’re making a good faith effort to moderate content. We’re not going to punish you for that. So in other words, it created both the shield from liability and the freedom to clean up harmful content. But now fast forward to 2025, the internet is a completely different place. It has changed dramatically. Now we have massive platforms with billions of users. We have powerful algorithms that shape what we see and we have real world harm from misinformation. Harassment, and discrimination that is often facilitated online. So the debate has shifted from how do we protect platforms so the internet can grow, to how do we ensure platforms are accountable, especially when their policies and their algorithms can impact things like public health, elections, and marginalized communities. And that’s really the heart of the current conversation, in my opinion. It’s pretty much how do we update or interpret section 230? In a way that reflects the realities of today’s internet and not just what we hoped for in 1996.
CO-HOST NICOL TURNER LEE [00:09:28] And what I loved about your paper, so I want you to dive into that now, is that you’re making this argument that, whereas Section 230 has been sort of seen as more as a lever for free speech, right? And so there is this movement out there that suggests that the internet should not be censored. You actually in some respects say, okay, I got that part, right. But I wanna talk a little bit more about why it’s even more important that we reevaluate Section 230 in the context of people who traditionally have not necessarily been thought of in the free speech conversation, if I can add, you know? So tell us more about the report and why you were compelled to sort of write the report.
GUEST DANIELLE DAVIS [00:10:12] Yes, I think that the impacts of Section 230 on the Black community is often lost in the conversation. And that’s what makes the implications of Section 230 on Black communities report and the supporting issue briefs that I wrote so critical, important, and timely. It’s the first time that Section 230 has been explored solely from a Black perspective. And so I just have to give a quick shout out and kudos to um, Spencer Overton and Katherine Powell for taking the lead on this foundational research, but to get to your question, because I digressed a little bit.
CO-HOST NICOL TURNER LEE [00:10:49] Oh, no, no. Please, please give credit where credit is due. You both know Spencer, so I’m happy he actually started this work.
GUEST DANIELLE DAVIS [00:10:57] Absolutely amazing lawyer, researcher, and mentor to me, but while Section 230 is far from perfect, it’s also been critical in protecting free expression, especially for marginalized communities, including the Black community. Because of Section 230, people have been able to build platforms, they’ve been able share their stories, they’ve be able to organize around issues that are important to them without needing like legal teams or access to media gatekeepers. If we think about the role that Black Twitter has played, what used to be called Black Twitter, now it’s X, but the role Black Twitter has played traditionally in shaping culture and calling out injustice or even sparking movements like Black Lives Matter. Black Lives matter would not have been a thing without Section 230, because the kind of speech that it invoked, the raw, the unfiltered, this community-driven speech. Thrives in part because platforms aren’t held liable for every single thing that their users post. So without Section 230, many of those conversations might not have ever made it to the public square. But outside of that, the law also protects smaller Black creators and entrepreneurs and advocates who run their own forums and newsletters and community apps and other things, websites and blogs. Without these protections, a single comment from a user could expose them to lawsuits. And that’s a burden that most small creators, small businesses just can’t afford. So even as we push forward for more accountability from Big Tech, we also have to be careful not to silence the very voices. Section 230 has helped to amplify. So for me, and also I’m sure that the Joint Center would agree. And other advocates is that the challenge is how do we protect that freedom and innovation while simultaneously ensuring that platforms aren’t turning a blind eye when that same speech environment is then weaponized to harass or suppress the Black community. So it’s not about, you know, it’s, it’s as simple as throwing section 230 out because if I can speak frankly, I strongly oppose. Completely repealing Section 230 due to the benefits that I just provided. And that’s something that we also go into in the report and issue brief as well. But rather it’s about building on the good aspects of Section 230 while keeping equity and safety in mind.
CO-HOST NICOL TURNER LEE [00:13:36] Well, and that’s what I love about the paper, because to your point, and as an internet enthusiast like yourself, the internet has been low-hanging fruit for more voices to get out there, more innovators to be seen and to create products and services that may not necessarily be in the marketplace. But at the same token, you have this mechanism that enables that type of engagement, but it also engages people. You know, on the quite opposite and contrarian side of this, right? Absolutely. You know, I remember years ago, I was just confused on why the public square was so discriminatory at times. But then I realized, you know a lot of that has to do with the technological advancement of algorithms and the fact that much of that content also gets seen. I mean, speak to us a little bit, and you talk about this in your report as well, like this other side of Section 230. How in some respects it also shields a speech that could be, you know, I mean, at face value at least, I mean you’re a person who knows more than me, at face-value could be perceived as something that is less ideal and more discriminatory at best when it gets on the internet. Absolutely.
GUEST DANIELLE DAVIS [00:14:50] Because if we look at section 230, it is like a, it’s a double edged sword. And it does provide all these benefits, but there’s also the drawbacks to section 230 because while section 230 protects platforms from content that their users post, it also protects platforms when that speech is used to target, harass or endanger marginalized communities, including the Black community. And that’s not me being hyperbolic. That’s not mean being, you know, just over the top. And it’s not hypothetical. Like it’s already happening. Um, so one example could be if we take anti-Black harassment, um, specifically Section 230 C one shields platforms from being held legally responsible when Black users are targeted with racist threats, slurs, or coordinated harassment campaigns for lack of a better phrase. Even when that speech causes real harm. In the report, we referenced that 54% of Black users have experienced online racial harassment compared to just 17% of white users. And the statistics aren’t much better for Black teenagers because they’re five more times likely to face race-related cyberbullying than their White peers. Even with that, platforms have little legal incentive to intervene because section 230 protects them whether they act or not. And that legal immunity often disincentivizes them from acting immediately on those issues. But another example could even be if we take that we talk about in the report in detail is the Buffalo mass shooting. The shooter was radicalized online by white supremacist content that circulated freely on sites like 4chan, thank you, Section 230. He live streamed the massacre on Twitter. And even though the stream was taken down quickly, copies spread across platforms such as Facebook and X and remained up for days after the original massacre took place. So because of Section 230, those platforms are essentially shielded from liability even when that content fuels. Real-world violence. We can also…
CO-HOST NICOL TURNER LEE [00:17:13] Yeah, go ahead. Go ahead. I mean, I want to stop you there for a minute, right? Because I want talk about that. I mean, should we I mean are you sort of suggesting? I’m just trying to think about how I want to say this because this has been my challenge with Section 230 that companies should be wanting to do the right thing like in the case of the Buffalo shooting and immediately take it down because they don’t want that horrible content on their site. But yet do you think it’s on the part of companies like they have attention because if they take it down, they may be in many respects. Acting more like a publisher on that side. You know what I mean? Like it’s kind of a weird dynamic. I wanna-
GUEST DANIELLE DAVIS [00:17:47] Well, I think a lot of this sometimes flies under the radar, but also part of it, if I could speak honestly, I do think that, you know, engagement is profit for these companies. So longer people are on, the goal is to keep them on the platform. So whether it’s, you, know, absolutely insane or educational, it doesn’t matter to them. How can they keep you on the platform? Um, but a lot of this is just, I think it’s falling. It just falls under the radar and it’s not looked at in the same way. And also I know that, um, Zuckerberg has been on record many times saying he didn’t want to be the arbiter of truth. He didn’t wanna be put in the position to tell people what is true and what is not, I mean, we can, you know, take that for face value or, you know, but in my opinion, it’s a mixture of all the things I just discussed.
CO-HOST NICOL TURNER LEE [00:18:44] Well, that’s interesting, too, because there are carve-outs, right, but the carve-out don’t appear to be those that sort of appeal to the moral compass of people who might be looking at that video thinking, oh, my goodness, why is this still here, right? Talk a little bit about the carve outs, though. I mean, because I think for the purposes of people who are getting exposed to this, this is not necessarily a Wild Wild West, right. You would agree, Danielle, right, and I think you’re right about that. What carve-outs do exist that potentially this could apply to, or at least some of the more democratically-enabled participation mechanisms like voting and other things would definitely be something of alert onto these companies.
GUEST DANIELLE DAVIS [00:19:25] Well, there’s not necessarily carve-outs right now. We have the ones for sex trafficking. We have one that just passed a few years ago. In addition to that, you know, what I talked about earlier, you also have the exception. So not everything can just flourish. Like copyright laws, you know, anything with intellectual property, criminal laws are also something that they can’t get immunity for. But there have been many proposals about. Regarding section 230 and how we would address it. So there’s stuff like, there’s proposals like the civil rights carve out. We also have the paid ad carve out, we have the neutrality, content neutrality carve out so we have many carve outs that have been proposed but as we talk about in the report, We’re not necessarily recommending any one of these. We’re just showing you what is out there, but we do acknowledge that it would probably take a combination of multiple carve-outs to get something that is, I would say, effective. It’s not gonna be just one thing that’s gonna make everything better. And there’s also drawbacks to each carve-out.
CO-HOST NICOL TURNER LEE [00:20:49] Well, and that’s part of the challenge, right? I think Section 230, the First Amendment clause is what just triggers all type of litigation and legal challenge. I mean, this is like the hardest problem ever to solve. I mean I have to be honest. It’s one that I prefer not to always talk about because I never have an answer. And I am not one that is short with words, okay?
GUEST DANIELLE DAVIS [00:21:11] And I agree, because even, like I said, within the report, we do not recommend any one of these carve-outs because this is a larger problem and a larger discussion. And if you don’t get this right, you can really, really change the internet in ways that you won’t realize until we have the law and the books. So, how I think about this personally is… Yeah, we do need some intentional reform. We need to really think about that and it needs to be very narrow and address certain issues. But we also need to think about as a society, what are we willing to give up with addressing this issue? Like, I love, like my favorite thing online is to look at the comment section. The comment section keeps me extremely entertained. But if we have a situation where a carve out is going to make it hard for the or make the platforms liable for the things that we say, we have to accept the fact that maybe that comment section may go away and are you willing to give that up? So there’s a lot of things that we have think about when we talk about reforming Section 230.
CO-HOST NICOL TURNER LEE [00:22:27] I do think though, you know, I do read the comments and you know there’s a, there’s research that suggests when you get past the ninth comment, they just get really awful, which is usually the case. So they become so racist and discriminatory, sexist and homophobic, it’s awful. But I think about though, where we are today, you now, as we sort of hone in and sort of close out on this issue, I think, about where we’re today, right? We’re more polarized than ever before. We are not speaking to each other as human beings. We have a political backdrop that is sort of encouraging that type of polarization. And we have a lot of distrust in our institutions and many people often do go to the internet to sort of vet what their concerns are. I mean, in many respects, it appears because we’re so polarized, we’re also becoming much more rude to each other where having no liability or high immunity for tech companies. Just means that they’re sort of playing into that speech as well, which can be harmful and hateful. So there’s one side of me, my Jekyll and Hyde, that says, well, maybe we need to reform Section 230 because there’s just some things that should not be said or better unsaid on the internet. But then there is the rollbacks on DEI. There are, you know, books that are being banned. There are speech that’s being limited in marginalized communities where you do need. Some protections for people to have some free expression, especially if they’re on a college campus and they’re essentially being arrested. Or, I mean, let’s just go a little further or their students and their social media is being surveilled for what they say. It appears to me in that instance, Section 230 is gonna do nothing to protect people who just want to live a life of existence. Looking into the future, what do you think becomes a Section 230? Are we still going to be sort of stuck talking about whether or not we need to reform it? Do you see this as a trigger for a larger conversation on speech in America? Is this something that will continue to have some immunity for the tech companies long enough until we see an administrative change? I’m just curious from you, like, what happens next, given that the internet has just become a place of a lot of, you know, it’s kind of like. Take your shoes off and dump your trash, jump your stuff at the door kind of thing if you don’t have a mud room at your house. So everything just piles up and then over time, everybody forgets that they have 10 shoes there. Like that’s where we’re headed, right? That’s not the best analogy, but that’s what we’re headed.
GUEST DANIELLE DAVIS [00:25:03] I completely agree. And honestly, just like you, I don’t have the answers to this question. It’s a very complex topic because of the dual edge nature of Section 230. Like right now, I would say that because of of the political environment that we are in and because of attacks on DEI and what it feels like attacks on free speech right now. I’m happy that the current administration is not addressing this issue at this very moment, because I don’t know what that would look like under a Trump administration. But I can say that when we do address it, and if and when we address this under this administration or any administration going forward, it has to include the perspectives of those who are most harmed by the law. To me, one clear step in making sure, one clear is making sure that any reform efforts center equity and not just these abstract principles of free speech or innovation, because the report makes it very, very clear that Black Americans are often on the front lines of harm from voter suppression to algorithmic exclusion online and. And online harassment, but we’re often not in the rooms where decisions about internet policy or Section 230 is being made. And in my opinion, that is what has to change. Policymakers and tech platforms really need to engage directly with Black experts, researchers and community leaders who are experts in this area, not as only an afterthought when something has gone wrong, but as a core contributor. Uh, we also need more data transparency from platforms so we can actually understand how these harms show up. Um, but because without that, it will make it significantly harder to hold companies accountable or push for certain targeted reforms. Um, and I also don’t want to. Like, you know, make this, this false, um, choice that we have right now. Like we have this false choice presented to us as free expression or safety. And I just don’t agree that that is that we have to accept that as true because like we just discussed earlier, while the report doesn’t explicitly recommend any of the proposals discussed in it, we do acknowledge that if anything is done, it may need to be a combination of proposals that’s enacted in order to be somewhat effective. But we have do so, and this is the key part of why I’m hesitant on actually amending or repealing the law. Is because we have to make sure that it doesn’t stifle the opportunities that section 230 provides to our community as well as other marginalized communities. So in short, I may have went the long way to answer your question, but I believe how we address this going forward, you know, I researched the impacts of Black Americans. So Black voices have to be a part of this policy conversation. Marginalized voices have to be apart of the conversation and not when the harm is already done. But when you are shaping the future of internet governance from the start.
CO-HOST NICOL TURNER LEE [00:28:27] You know, I agree with you and that’s why you made our unhidden figures list not too long ago. Thank you so much. For those of you who follow the Center for Technology Innovation, we have an AI equity lab. And most importantly, what we try to do there is bring together interdisciplinary folks from different sectors and different communities and different backgrounds to sort of think about purposeful and pragmatic AI solutions. And I’m curious, I want to keep talking about this with you, because I think to point. As technology has become much more sophisticated, it’s just really harder to disentangle when it’s actually providing a public good and when it potentially may be providing more harm. And I think that’s the tension that Section 230 continuously is wrestling with. How do you make sure people can say what they wanna say, but do it as my mama would say in a respectful manner so that people are not hurt. And so… Yeah, come on back. Let’s talk about it more because I think you’re right on target. Like we got to have more voices like yours at the tech policy table. Listen, thank you so much for joining me. And you have your own podcast, Miss Education of Technology. Where can people find that podcast?
GUEST DANIELLE DAVIS [00:29:39] The podcast is on Spotify podcast is as well as Apple podcast. So take a listen and leave a comment and let me know what you think.
CO-HOST NICOL TURNER LEE [00:29:48] Beautiful. It would not have been inspired by Lauren, Lauren Hills, the miseducation of her album. Okay, nevermind. I’m dating myself. Plus, no, you’re not. I mean, I’m not right. I didn’t go back to Carter G. Woodson. Okay. I went to, you know, but please, let’s listen to what Danielle has to say. As much as you listen to us, there’s There’s not enough voices in tech policy space. To go around. So thank you again for joining me. And thank you to all of you that took time to listen to this current episode. Please explore more in-depth content on tech policies at TechTank on the Brookings website, accessible at Brookings.edu. Your feedback also matters to us about the substance of this episode and others. So leave us a comment, like us, share us, do anything that we get more people to actually have these engaging conversations like we do. This concludes another insightful episode where we take bits and turn them into palatable bites until next time. I’m Nicol Turner Lee and thank you for listening. Thank you for listening to TechTank, a series of roundtable discussions and interviews with technology experts and policymakers. For more conversations like this, subscribe to the podcast and sign up to receive the TechTank newsletter for more research and analysis from the Center for Technology Innovation at Brookings.
The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).