And we have the ability to ensure that we build technologies that are ethical, responsible, inclusive, and fair, and that they do not embolden further systemic inequalities that pull us back as opposed to progress us in the type of world that we want to live in.
Nicol Turner Lee
Political campaigns in America have always featured misinformation about the issues, but today, AI and other new technologies represent an unprecedented challenge to the electorate and our political system. AI-generated deepfake images, voice recordings, and videos are increasingly widespread and sophisticated, and could alter the outcomes of many elections. In this episode, host Katie Dunn Tenpas sits down with Senior Fellow Darrell West and Nicol Turner Lee, senior fellow and director of the Center for Technology Innovation, to discuss the impact of new technology on elections and what we can do about it.
- Listen to Democracy in Question on Apple, Spotify, or wherever you like to get podcasts.
- Watch episodes on YouTube.
- Learn about other Brookings podcasts from the Brookings Podcast Network.
- Sign up for the podcasts newsletter for occasional updates on featured episodes and new shows.
- Send feedback email to [email protected].
Transcript
[music]
TENPAS: Hi, I’m Katie Dunn Tenpas, a visiting fellow in Governance Studies at the Brookings Institution and the director of the Katzmann Initiative on Improving Interbranch Relations and Government. And this is Democracy in Question, a podcast about contemporary American politics and the future of democracy.
In each episode, I’m asking my guests a different question about democracy so that we can better understand the broader contours of our democratic system. There’s a lot happening in U.S. politics at the moment, including a highly contested presidential race. But in this podcast, I’m trying to get at the deeper questions of how democracy works or is supposed to work.
On today’s episode, the question is how do artificial intelligence and disinformation impact elections?
Political campaigns in America have always featured misinformation about the issues, made-up stories about opponents, and even outright lies. But today, artificial intelligence and other new technologies represent an unprecedented challenge to the electorate and our political system. AI-generated deepfake videos, realistic but misleading images, and simulated voices are increasingly widespread and sophisticated, and could alter the outcomes of many elections. While some state governments are attempting to pass legislation to govern the use of AI in campaigns and elections, the technology is advancing too rapidly, and public education is not keeping pace.
So, how do AI and disinformation impact elections? To help explore and answer this question, I’ve invited two of my Governance Studies colleagues to the show. First, Darrell West, a senior fellow with the Center for Technology Innovation and the Douglas Dillon Chair in Governmental Studies. He’s also a coeditor-in-chief of TechTank blog and podcast and coauthor with Elaine Kamarck, a previous guest on this podcast, of the new book from Brookings Institution Press, Lies that Kill: A Citizen’s Guide to Disinformation.
And then I’ll talk with Senior Fellow Nicol Turner Lee, who is director of the Center for Technology Innovation, coeditor-in-chief of TechTank blog and podcast, and the author of another just-released book from Brookings Press titled Digitally Invisible: How the Internet is Creating the New Underclass.
Darrell, welcome to Democracy in Question.
WEST: Katie, it’s nice to be with you.
TENPAS: So, my first question is, are you real?
WEST: I am real. This is not a bot. This is not AI. This is me.
TENPAS: So, tell me: you’ve had a very successful career generating a great deal of scholarship. How did you land on this topic, and when did you decide it was something to explore and then write about?
[3:03]
WEST: Well, your topic of democracy in question basically highlights the importance of our information ecosystem. And it’s been clear for a number of years we have big problems in that area. Local news is being wiped out across the country, so people have lost that form of accountability. As you mentioned in your intro, there have been advances in digital technology, including most recently generative AI, that have democratized the technology in the sense that anybody can use it. Like, you don’t need to be a computer science major. The AI is prompt-driven and template-driven, so it’s accessible to anyone.
And so, the problem that we face now—and we see it virtually every day—is there has been an explosion of fake videos, fake images, false narratives. I mean, recently, Senator Ben Cardin, who’s the chairperson of the Senate Foreign Relations Committee, was the victim of a deepfake operation where someone impersonated the former foreign minister of Ukraine, called him up, got through, and they had a conversation. The senator thought this was actually the real guy. And then the guy tried to trap him into making what would have been incriminating statements along the lines of, yeah, the U.S. is selling weapons to Ukraine with the goal of launching long-range missiles into Russia.
And so, it just shows the technology has advanced to the point where even very sophisticated members of the Senate can be entrapped. And it took him a while to figure out that this deepfake person was not the person he was claiming to be.
TENPAS: And this is a fascinating story to me. How did he eventually figure it out? It strikes me that if the voice was similar to what he might have expected, they could have hung up and he would have just moved on his way, thinking it was a completely genuine call.
WEST: The voice sounded completely authentic. And so, the tip-off came when the guy just started asking very leading questions that clearly were designed to get him to make statements like, oh yeah, the U.S. supports Ukraine launching long-range missiles into Russia. And at that point he realized, wait a minute, there’s something weird going on here. And at that point, you know, he got out of the conversation. But the conversation went on for a while before it reached that point.
TENPAS: And I’m not sure of the validity of this question, but is it possible to find out who is pushing these efforts and spreading these false narratives and why they’re doing it? Any insight on that?
[5:22]
WEST: In this particular case, the U.S. intelligence community believes it was a Russian foreign influence operation. But in general, anybody can create a fake video. I mean, in this election campaign we’ve seen an explosion of this type of material. On my social media sites, I have personally seen examples of Kamala Harris in a swimsuit hugging convicted sex offender Jeffrey Epstein. That is a hug that never took place, but it looks completely authentic. With the two Trump assassination attempts, there’ve been all sorts of conspiracy theories: that it was an inside job on the part of the Secret Service, or that Trump engineered it in order to create sympathy for his campaign. There’s no evidence for either one of those interpretations.
But every day we’re just seeing examples of people making stuff up to try and embarrass the opposition or to promote their own political narratives. And I think from the standpoint of democracy, that’s what makes it dangerous. Like, if people start to believe this, especially in a very closely contested presidential election, we could end up in a situation of an election being decided based on false narratives.
TENPAS: Right. And this existed in 2016. The biggest difference is now we have generative AI, which makes it more accessible to more people.
[6:34]
WEST: In 2016, we had Russia hack the emails of top Hillary Clinton campaign advisors and then put them out in an effort to embarrass Senator Clinton. What is different today is the scalability: because the technology has grown so accessible, you can basically go from 0 to 60 miles per hour almost instantly, meaning that you could create a fake video in a matter of minutes and put it on a social media site. There could be bots that then promote and publicize it. You could reach an audience of millions in a very short period of time.
And so, as you point out in the introduction, this is not the first time we’ve seen lies, propaganda, or dirty stories put out about the opposition. But the scale and the speed at which these stories can circulate make them particularly dangerous. I mean, there are fact checkers today, you know, we still have national journalists who are monitoring these stories. But they can’t keep up with the false stories; the stories can be generated much more rapidly than anyone can fact check them.
TENPAS: Wow. Please tell me there are ways to mitigate disinformation. Do you have any suggestions? Does the responsibility … is it borne by organizations like X and other social media platforms?
[7:45]
WEST: The good news is, even though our book, Lies that Kill, describes a very challenging topic for democracies around the world, we actually wrote an optimistic book, in the sense that we make a number of policy recommendations, because there actually are lots of things that we could be doing to help people deal with this. Like, the subtitle of our book is “A Citizen’s Guide to Disinformation.” So, we really present the material in a way that tries to help people identify disinformation so that they can protect themselves, but also offers advice for societies and communities on what they can do.
So, for example, on the question of digital platforms, they should be doing a lot more than they are right now. In fact, in the 2020 election, most of the major social media sites had very active content moderation strategies. When they saw blatantly false material, they would take down that content. Today, many of those very same sites are not taking down content. Even in the case of the Trump assassination attempts, blatantly false stories circulated for hours and days. So, the tech companies definitely bear responsibility.
TENPAS: Can I stop you just for a second there? Why did they previously take down that kind of information relatively quickly, but now they don’t? What’s the policy change?
[9:00]
WEST: They felt like they had a social responsibility to help democracy function effectively. So, in 2020, they took on that responsibility: they hired a bunch of humans who would monitor social media posts, and then they would take down the most egregious things.
The problem is disinformation has become a contested space. People argue, oh, that conspiracy theory, like, you think it’s a lie, I actually don’t think it’s a lie. So, disinformation has kind of gotten engulfed in polarization and hyperpartisanship. And so, the tech companies basically decided they didn’t want to be the referee in this fight between liberals and conservatives, and Republicans and Democrats. So, they stepped back from the very responsibilities they exercised just four years ago and basically decided not to do that. Some of them actually fired or reduced the staffing of their trust and safety teams. So, there are just a lot fewer people policing the internet. It really has become a Wild West where virtually anything goes.
TENPAS: Wow. And are there examples of social media platforms that actually still adhere to this policy of taking down what they see as patently false? Or have they all just sort of thrown up their hands and it’s the Wild West?
[10:11]
WEST: Some of them actually still have terms of service that basically outlaw illegal activity, inciting violence, and other sorts of things. But they’re often not enforcing those terms. So, for example, Twitter/X still says it has a policy against spreading false narratives. But Elon Musk, the owner of X, himself is spreading lies in a lot of cases. So, you know, it’s one thing to have the policy, but if you’re not enforcing it, the policy doesn’t mean anything.
TENPAS: Right. And I guess in a situation like this, citizens would say, well, Congress, you need to intervene and regulate and make requirements that force them to behave like they did in 2020. Is there any motivation to do that?
[10:54]
WEST: There’s a lot of interest on Capitol Hill. I’d say in the last year I’ve probably done almost a dozen briefings with people across the political spectrum, both Republicans and Democrats. I mean, senators and House members are hearing lots of complaints from their constituents, you know, people worried about this election. So, there’s a lot of legislation that either has been introduced or is about to be introduced.
Part of it is going to be disclosure requirements. So, for example, if you use AI to generate content in campaign communications, a TV ad or otherwise, the legislation would require disclosure of that use of AI, so at least people would be aware. Some states have actually gone even further; Minnesota has actually been a leader in this area, where they are trying to regulate harm.
So, there are all sorts of laws on the books, like defamation laws and consumer fraud statutes. Obviously, there’s a range of laws outlawing various types of behaviors. Some states are basically trying to apply these to the digital space, saying if you do something false online that harms another individual, especially in the context of an election campaign, there can be fines and other remedies levied against you. So, there are some states that are legislating.
Congress has not yet passed any disclosure law or legislation dealing with the harm. But I think based on what we’re seeing this year, that is likely to be the case in the very near future, because both Republicans and Democrats are worried these tools are going to be used against them. So, there actually is some hope of bipartisanship on this issue.
TENPAS: Right. So, here’s a question. If it is the Wild West and social media platforms are basically just letting anything go up online, is it clear that the disinformation is benefiting one party over another? It seems like they both would be adversely affected by it.
[12:38]
WEST: In our book, we find examples of lying across the political spectrum. There are conservatives who are spreading lies, there are liberal organizations that are spreading lies. There are companies, for example, in the smoking area and in the climate change area, that over a period of years have spread lies, such as those undercutting the reality of climate change. We have a chapter on public health, and we all remember during COVID there were a number of false remedies that were advertised. The Food and Drug Administration undertook a number of enforcement actions to take down companies’ sites that were advertising false remedies, trying to protect the consumer on that basis. So, there is a lot that has been percolating in those areas.
TENPAS: And earlier in the conversation, we talked about who is spreading these false narratives, and it seems like they’re mostly coming from abroad—you mentioned Russia. Instead of requiring the social media platforms to rise to the occasion and regulate, is there a viable means of going after the perpetrators? Or, because they’re spread out and it’s so easy for them, is it just impossible to do?
[13:45]
WEST: We actually have seen the U.S. Department of Justice indict several Russians on grounds that they were spreading false information. So, there is a legal basis to go after foreign operatives. There also need to be international agreements not to try and take down another country’s society. Even during the Cold War we had negotiations with the then-Soviet Union to try and limit the spread of chemical weapons, nuclear weapons, biological agents, and so on.
I view these threats in the information area to be as serious as some of those threats. So, we’re going to need international agreements where countries basically agree they’re not going to take down the critical infrastructure, including the information infrastructure, of opposing countries. Even if that country is an adversary of yours, there should be some things that are off limits.
TENPAS: And would you be optimistic about those agreements actually being implemented and followed?
[14:35]
WEST: International agreements work only if both sides, both parties to the agreement, have an incentive to enforce them. And it’s kind of like nuclear war: everybody understands that’s going to be bad for everyone. People need to understand that information warfare can be very destructive for all the countries involved. So, there actually is a basis for countries to come together. Like, America doesn’t want Russia and China taking down our critical infrastructure. China should worry about other countries trying to do damage to them as well.
So, there should be some rational basis for adversaries to come together in the same way that we did during the Cold War. There were lots of international treaties between the United States and the Soviet Union, even though we were sworn enemies at that time.
TENPAS: Right. And are there any European countries or other countries that are sort of at the forefront of trying to create these treaties and to get countries to sign on?
[15:28]
WEST: The European Union has passed legislation on several different aspects of digital technology, but most notably AI. They have created a provision where companies can be fined up to 6% of their global revenue if they allow disinformation to spread on their sites. Now, 6% of the global revenue of any of these large companies, that is a very big number. So, that is a serious fine. The U.S. government has levied fines against some of the U.S. tech companies on various types of issues. They generally tend to be lower fines, but the fines are getting bigger. So, countries are getting tougher in a lot of these areas. And so, the tech companies need to understand that if they don’t act responsibly, there’s likely to be more government oversight of their activities.
TENPAS: And are the fines of a sizable nature that it’s not just a slap on the wrist, that they actually can affect their bottom line? Do you have any sense of that?
[16:20]
WEST: If it’s 6% of your global revenue, that’s a serious fine, like we’re talking about billions of dollars in that situation. So, that clearly is very significant.
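For a rough sense of the numbers West describes, take an illustrative revenue figure that is not from the episode: a platform with $100 billion in annual global revenue facing the maximum 6% penalty would owe

0.06 × $100 billion = $6 billion

which is why a fine pegged to global revenue, rather than a flat amount, can reach the billions he mentions.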
A lot of countries are now starting to undertake enforcement actions against the tech companies. The tech companies understand that things are a little out of control. I think in the United States, one thing that is going to encourage national legislation is that the states are acting. What the tech companies don’t want is 50 different sets of laws. Like, you don’t want to have to build a different algorithm for Illinois versus Idaho. That defeats the whole scalability of technology. You lose the advantages of technology if every state has a different set of rules and you have to devise a different kind of platform for each of those situations.
So, the fact that so many states are acting now to pass legislation dealing with various types of digital technology is going to create real problems for the tech companies. And I think eventually that’s going to encourage them to go to Congress and say, we need meaningful legislation in this area.
TENPAS: Yeah, and it’s just a matter of time before they start to feel the pressure.
[17:26]
WEST: Yes, the European Union already has acted. American states are acting. There are a bunch of legal cases going through the judicial system. The Supreme Court already has acted on a few. There are others that are going to be coming up. So, there’s actually a lot of different types of public oversight taking place. I think people understand the genie is kind of out of the bottle. You know, we still want to preserve American competitiveness in the innovation space, but we need to deal with the harmful problems that already are apparent to really anybody who follows this area.
TENPAS: And just in terms of magnitude, are you able to ballpark how many social media companies in the world are sizable enough to influence a democracy? Are we talking about, like, hundreds of companies or thousands of companies that need to be regulated in this way? How many?
[18:14]
WEST: Well, when you look globally, it’s actually a handful of companies that have the scale and magnitude that would be probably—
TENPAS: —so like less than ten?
WEST: Yes, definitely less than ten. Many of them are American companies, but not entirely. There are companies in other countries that have also reached a very high degree of scalability. So, there’s a finite number of companies. And the European Union actually just focuses on the large companies. Like, there are, you know, dozens if not hundreds of social media platforms. But if you’re a small platform, your ability to create havoc in a society is somewhat limited. And so, people really want to target the largest companies, just because that’s where the scale of the problem becomes really problematic.
TENPAS: And I mean, it’s obvious that spreading disinformation affects elections and therefore affects democracy. But what do you think is the most deleterious impact? Like, there’s lots of ways it’s adversely affecting our democracy right now. Which is the worst? Like, what do you think is the most egregious?
[19:14]
WEST: I think one of the biggest problems is disinformation sows discord within a society. It pits people against one another. And it creates a situation where nobody trusts anyone else. Every democracy requires some common set of interpretations about what’s going on. People need common facts. There need to be some minimal levels of trust for a political system to operate. Like, we know we face lots of problems. In order to bargain and compromise and negotiate, there needs to be some shared understanding and there needs to be trust in the other side. Those things are lacking right now, and it’s one of the reasons our democracy is not performing very well.
TENPAS: Right. And so, if you were to look at the 2016 or the 2020 elections in retrospect, are there ways, as a social scientist, that you can determine what the impact of disinformation was? How can you do that?
[20:09]
WEST: It’s a hard topic to investigate because we don’t really have the data or the measurement instruments to make definitive declarations in that regard. But if you look at public opinion surveys today, just in terms of the percentage of Americans who believe things that we actually know not to be true, it’s shocking how big some of the numbers are. There are false beliefs that are shared by 30, 40, or 50% of Americans. There are doubts about the reality of climate change that are pretty widespread in the United States. We saw during COVID that a significant part of the population didn’t want to get vaccinated even though there were known health advantages to vaccination. So, you can start to see, when millions of people believe false information, how big a problem this is and why we need to get a handle on the information infrastructure.
TENPAS: And when you think back to the Trump administration from 2017 to the early part of 2021, there were moments where he himself clearly tried to change the narrative and change information. For example, there was a hurricane forecast, and he was trying to show that it was going to hit states that NOAA said it wasn’t going to hit. And then there were examples during COVID where the administration said X, but actually Y was the reality.
Do you think it’s just a confluence of the rise of AI and the technological advancements converging with a president at the time who was willing to kind of perpetuate falsehoods that has made this a perfect storm and made it more influential than it otherwise might have been? I realize that’s a hard question to answer with certainty. But can you talk about that?
[21:48]
WEST: I mean, there certainly are individuals who are spreading lies. And, you know, that is a big problem for our country. But I think there’s a deeper and more fundamental underlying problem here, which is we live in an era of mega change where there are large scale transformations taking place in technology, in business models, the way that markets operate, the geopolitical situation around the world.
The problem with all that mega change is it’s making all of us nervous. It’s making us worried that people who don’t share our views are out to get us in some respects. We don’t even trust our neighbors. There have been surveys on social trust asking, do you trust your neighbor? And people say, no, I don’t trust my neighbors anymore.
And so, all these things have come together in a way that creates real risk. It’s not just the politicians who are spreading lies, but the fact that people are so anxious and, in some cases, angry that false narratives become completely believable to a large number of people. I think that is a bigger problem. It’s not just the individual spreading the lies, but the fact that some of us sometimes want to believe really bad things about the opposition.
TENPAS: Yeah. So, on a scale of 1 to 10, how nervous are you about the future of American democracy?
[23:07]
WEST: I’m worried about this election cycle just because many of the things that we recommend in our book are not going to get enacted in the few weeks that we have leading up to the November elections. But I’m actually optimistic on a longer-term basis, because I think the one thing that has happened this year is that it has been a tremendous teachable moment for all of us, teaching us about the power of AI, the risk of disinformation, and the way polarization and hyperpartisanship create problems for our political system. So, there is legislation pending. There’s going to be more public oversight. Eventually, we will get a handle on this. So, in the longer run, I’m actually quite optimistic about our ability to handle these issues.
TENPAS: So, then I take your two numbers of how nervous you are now, plus how un-nervous you are in the future and average them and that tells you where you are on the scale?
WEST: Pretty much. That would be a good estimate.
TENPAS: What’s a good number then? Maybe a 7?
WEST: It’s hard to quantify, but in terms of this year, because of the Electoral College, I’m worried that it would only take disinformation influencing a very small number of people in one, two, or three states to tilt the election one way or the other. So, that’s something I think is very risky. But on a longer-term basis, I think our country will get a handle on this. And other countries around the world are experiencing exactly the same thing. Like, this is not an American phenomenon, this is a global problem. And there are lots of smart people around the world working on these issues.
[music]
TENPAS: Yeah. Well, thank you so much for your time. This is a fascinating discussion. You taught me a lot. I appreciate it.
WEST: Thank you, Katie.
TENPAS: And now Nicol Turner Lee, who, in addition to her leadership of the Center for Technology Innovation, has extensive experience researching AI governance and in 2023 launched the AI Equity Lab, which focuses on advancing inclusive, ethical, nondiscriminatory, and democratized AI models throughout the world.
Nicol, welcome to Democracy in Question.
TURNER LEE: Thanks for having me, Katie. I appreciate it.
TENPAS: Yeah, and congratulations on your new book. That’s very exciting.
TURNER LEE: Oh, I appreciate that, too. It is a labor of love, but it’s been so exciting.
TENPAS: Yeah. That’s great. Well, why don’t we just start at the top with this broad question of how AI and disinformation influence elections, and you can take it from there.
[25:38]
TURNER LEE: You know, I think that’s a really interesting question, and I think it’s one that so many people are pursuing, because to a certain extent, we have an election before us and many elections happening across the globe. But we also have the availability of artificial intelligence tools that have the ability to create deepfakes or manipulate images, and in some ways refine even static images, like memes, that show up in voters’ inboxes or across various platforms.
Part of the reason is that these tools are so commercially and widely available. Right? Anybody can actually engage in any type of disinformation or misinformation effort at a much greater capacity than we’ve ever seen before. In other words, you don’t have to be a technical genius to be able to contribute to this economy.
With that being the case, it’s just really important, if we are to maintain an informed, democratic society, that we are able to distinguish between what is produced and basically convened by technology and what is actually more truthful.
TENPAS: And based on just your general knowledge of AI, do you think it’ll only get easier and more prolific, because as the technology advances, it becomes more accessible?
[26:55]
TURNER LEE: You know, I think so. I mean, I think low-cost options that are available on various platforms make it easy for people to do manipulation. I’ll give a personal story. My 17-year-old came in the room not too long ago and said, Mom, say “Hi.” I said “hi.” And the next thing I knew, I was giving her an excuse to be absent from school, in my voice. You know, she then came and said, please, I know you’re a policymaker, don’t tell anybody about this voice extraction tool, because I like to play with it. But there are, you know, bad actors that see the availability of those types of tools as ones with which they can actually create much more harm.
And I think the other thing we’re seeing is that the profit incentives embedded in the technological ecosystem also drive those costs down and reduce the barriers to access.
So, this is very different, in my perspective, from what we saw in 2016, where there were a lot more foreign operatives manipulating various images and creating much more emotional stimulation when it came to disinformation. I think today’s disinformation is also done, you know, here in the U.S., among actors and perhaps among companies where there’s a profit incentive to utilize many of these tools.
TENPAS: And what would you say is the most practical advice that you could give to somebody? I mean, obviously, if I’m, you know, scrolling on X or something like that, there will be pictures that I see that I know are obviously fake on their face. But it seems to me that what’s more insidious is the disinformation where it’s really hard for somebody to discern whether it is indeed a fake. So, what’s some practical advice?
[28:31]
TURNER LEE: It’s getting more difficult to actually discern that, and we’ve seen tech companies employ better practices to be able to figure out what is genuine content. We’ve also seen regulators attempt what’s called digital watermarking. So, there’s a lot more provenance when it comes to whether an image was AI-generated or not, and people can sort of un-embed the original image or voice or text, or whatever the digital artifact may be.
When it comes to everyday people understanding this, this is where it becomes quite difficult. Because, as I mentioned with the voice extraction tool that my daughter used, we saw in New Hampshire, for example, during the primary, Joe Biden’s voice sort of dissuading people from going to the polls. A very realistic voice, with many of his mannerisms and the things he commonly says. Right?
But oftentimes we’re now seeing disinformation in the form of your pastor calling you or your neighbor calling you. And it becomes quite hard to figure out, is this the person I think it is, sort of directing me or giving me advice on, you know, my voting behaviors?
It used to be that it was easy to distinguish an AI-generated image, because AI was not always good at how many fingers a person had or some of the granular details. If you look at some of those images, you can see that the hand starts by embracing the other person, but it never gets to the other side.
Nowadays, you know, we have this development of a variety of media, some of it using deepfakes of celebrity endorsements, where it becomes harder for people to determine what’s true and what’s not true. That’s why we need more regulation in this space, because there’s a lack of transparency, and disclosure and the idea of watermarking are just really good best practices for helping people in the U.S. better understand what’s real and what’s fake.
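To make the provenance idea Turner Lee describes concrete, here is a minimal sketch of a signed content manifest in Python. It is a conceptual toy, not an implementation of any real standard such as C2PA: the `sign_manifest` and `verify_manifest` helpers are hypothetical names, and a shared HMAC key stands in for the certificate-based signatures that real provenance schemes use.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the AI tool vendor or content creator.
# Real provenance schemes (e.g., C2PA) use public-key certificates instead.
SECRET_KEY = b"demo-key-not-for-production"

def sign_manifest(content: bytes, generator: str) -> dict:
    """Attach a provenance manifest declaring how the content was made."""
    digest = hashlib.sha256(content).hexdigest()
    manifest = {"sha256": digest, "generator": generator}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content matches its manifest and the signature is valid."""
    if hashlib.sha256(content).hexdigest() != manifest.get("sha256"):
        return False  # content was altered after the manifest was created
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest.get("signature", ""))

image = b"...raw image bytes..."
manifest = sign_manifest(image, generator="example-image-model")
print(verify_manifest(image, manifest))         # True: provenance intact
print(verify_manifest(image + b"x", manifest))  # False: content was altered
```

The point the sketch illustrates is the one made in the interview: if either the content or its declared origin is changed after signing, verification fails, which is what would let a platform or a voter check where a digital artifact came from.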
TENPAS: So, tell me your wish list in terms of regulations. What would be the most effective regulations if it was sort of up to you to implement them or to require social media platforms to adopt them? What do you think would be most effective?
[30:38]
TURNER LEE: You know, this is a hard one. I mean, I obviously am a fan of digital watermarking, as many legislators are, because it does provide some type of stamp or visible marking of AI-generated content, which I think is a step in the right direction. I’ve written a lot of research at Brookings on more transparency guidance, along the lines of what we’ve seen in the consumer marketplace. Think of the Energy Star rating: when we see the big yellow sticker on a dishwasher, we know that this is actually a trusted product. I think we need to do more of that, particularly when we’re looking at our election infrastructure, to ensure that we know when misinformation and disinformation are being shared.
I also think on the regulatory side that we need to find ways to fund states and, in the case of elections, local offices, so they can engage in better fact checking, better consumer outreach materials, and a wider and broader media literacy campaign. We often don’t think of policy as something as simple as funding these kinds of efforts. It’s not really hard to get on the same page about producing better materials and ensuring that they’re multilingual as well as accessible to everyday people when it comes to this issue.
And then I think the other thing is really trying to find ways, particularly when we look at elections as critical infrastructure, to manage the online space in many of the ways that we manage the offline space. You know, there’s a lot of disclosure that comes with election and political ads offline, but not much online. So, ways in which we can impress those values and those processes into the online space, particularly in an election scenario, I think would be helpful to consumers and reduce the levels of harm.
TENPAS: Yeah, yeah. That sounds like it makes a lot of sense, and there’s no reason why these rules shouldn’t exist. If they exist for television commercials and things like that, then this is just another avenue of communication.
[32:34]
TURNER LEE: Yeah, I think part of the challenge is many of the legislators are still grappling with how much of this content is produced. Right? And so, they’re not quite sure, you know, themselves, whether this deepfake is something that is manipulative for the sake of generating individual or community impact, or something that’s being done innocuously.
And, you know, there’s a fine line. I just wrote a piece where I’m thinking about some of these memes that are out there. And technically there have been many memes trolling the internet around conspiracy theories, right? But at the same time, those are not regulated by any of the existing legislation that we have at play, because satire and humor are carve-outs.
TENPAS: Ahhhh.
TURNER LEE: They’re not under any of the legislative drafts—
TENPAS: —or loopholes.
TURNER LEE: Or loopholes. Right, exactly. So, you know, for me that makes it even easier for people to press these polarizing political messages through what could be seen as more innocuous forms. And, you know, the more we allow this well to get deeper and more opaque, the harder it’s going to be for us to come back and fix it.
TENPAS: So, let’s shift gears and talk about the other side of the coin, which is, what about law enforcement and going after the perpetrators at the root of this? I mean, it seems to me you shouldn’t really ignore that side of the coin, but maybe it’s just too difficult to track them down.
[33:56]
TURNER LEE: At least in cyberspace, there have been many recent arrests of Chinese as well as Russian operatives that have been using disinformation tactics to upend our current election. It’s been known, and I just recently read this, so please correct me if I’m wrong if you’ve read otherwise, but in some instances, some of the messages are uplifting those countries and what their role should be, to sort of temper down some of the feelings. The article I was reading, for example, was suggesting that the Chinese government has been engaging in disinformation to actually spread more positive messages, to sort of tone down some of the temperament that they, quote unquote, perceive is happening in the current election. And we’ve seen on the other side, you know, Russian operatives come in and break into the campaigns of several of the candidates and sort of use that to spread misinformation and disharmony.
So, I think we’re doing a really good job so far, an effective job for that matter, at being able to patrol the international landscape, because I think we’ve done a much better job when it comes to international governance of technology more broadly. We still have a long way to go when it comes to AI regulation in particular, but on technology governance more broadly, we’ve been working on this for a while. So, I do think we’ve provided some foundational playing field for us to be able to go after those international actors.
Where it’s becoming more difficult is that we’re seeing, in some instances, international actors leverage domestic players as their front to spread disinformation. We saw a little bit of this in 2016, when there were cover groups or shadow groups created to dissuade Black voters from going to the polls. But we’re hearing now from some of these reports that they’re using, again, domestic actors to—
TENPAS: —so almost like spies?
TURNER LEE: Yeah. Yeah. I mean, there definitely is a domestic element to this that is probably more scary and harder to discern.
TENPAS: Right. And it’s much more prevalent than in 2016 or 2020.
[35:54]
TURNER LEE: That’s right. And I think it’s also worth mentioning, Katie, that misinformation and disinformation today are not just about one tidbit of information, like your polling place has been moved or the election date is actually this date. I mean, we’re talking about a web of disinformation that people are being exposed to, whether it’s health disinformation or misinformation, disinformation about the economy, disinformation about, you know, your child and social media. It’s this big conglomerate.
TENPAS: Right. They don’t just focus on one issue. They just … the whole idea is to cause disharmony and polarization writ large.
TURNER LEE: That’s it.
TENPAS: Not just to focus on an election. They would love to influence our election, but they also would love to just instill a sense of disharmony and anger and resentment.
[36:41]
TURNER LEE: And that’s the part I think we’re seeing many of the campaigns pay more attention to. This is an area in which I think journalists have begun to deploy some better fact checking tools to be able to monitor when this type of false information shows up. There are now people, researchers like ourselves, who are paying attention to the web of misinformation and disinformation. There’s a group that I’ve recently become attuned to called the Onyx Collective, which is looking at the web of misinformation and disinformation as delivered to Black communities, where it’s a conglomerate of all the factors we just discussed that lends itself to creating disharmony, but also dissuading people from actually voting.
TENPAS: And am I wrong … at first glance it makes sense to me, but maybe it seems exaggerated. It seems like this all causes a great deal of, like, psychological warfare and anxiety and makes people much more nervous. Is that also the goal?
[37:39]
TURNER LEE: Oh, I completely agree. I mean, the challenge today, and this is something I also want to bring into this conversation, is that where we consume our news has a lot to do with this contribution to anxiety. Right? I don’t know if you knew this, but, like, Spanish language speakers primarily just use social media. And they tend to actually share a lot of disinformation and misinformation, because that’s what they rely on.
TENPAS: So, no newspapers?
TURNER LEE: No newspapers. Free Press recently put out a really interesting study on how Spanish language speakers are consuming news.
I think at the end of the day, this goes back to this erosion of our information democracy. And that erosion has sort of created, to your point, amplified messages that often are not as democratic as we’d like them to be. They also include minimal ways to intervene on the blind spots in where you get your information. I mean, people often don’t realize that the rise of the internet as our major news source has come alongside the decline of local media.
TENPAS: Yes, big time. Right?
TURNER LEE: Right, right. And so—
TENPAS: I mean, it’s almost gone.
TURNER LEE: It’s gone. Right? Exactly. Our policies have not been designed for that. And that, combined with the commercial availability of these tools as retail products, has also contributed to anyone being able to create and concoct these messages.
You know, in all honesty, we’ve been dealing with this issue since, I think, 2016. We’re just going to see how much harder and more difficult it is to get to some policy solutions that make sense.
TENPAS: And where would you put the likelihood of some of these social media platforms coming together to try to impose their own restrictions so they don’t have to suffer at the hands of congressional, federal regulation? Like, is there any chance that they would ally together to come up with reasonable approaches to this, or is that just pie in the sky?
[39:29]
TURNER LEE: Well, you’re talking to the right person. I actually just put out a paper last week, I think it went live, around whether or not social media companies would acquiesce and come to some harmonization around rating systems when it comes to children’s content, suggesting that, you know, the music industry has done it, the movie industry has done it, the gaming industry has done it, but they haven’t. So, I encourage people to read that paper. We’ll have it on the Brookings website under my expert page.
With that being said, I think, you know, there are so many other concerns around social media that this is probably not top of mind. To their credit, what they have done is more purposely and intentionally reported what they’ve taken down when it comes to misinformation and disinformation. They have been much more willing to be transparent that this type of content is actually proliferating on their platforms. And we’re seeing a lot more news related to the number of takedowns that they do.
But that has been done on a company-by-company basis. And some would argue that there are some companies that are touting their own misinformation and disinformation. We’ll keep silent on who.
But, you know, at the end of the day, you are going to need an all-hands-on-deck strategy to get this right. I mean, AI has made it so much easier for the likeness of voice, image, ideas, values, and temperament to be replicated through hyperactive and super manipulative bots, chatbots, and tools. And don’t get me wrong, I mean, some of those things are going to be quite interesting going into the next century, because they’re going to help us solve some of the pressing challenges, or give voice to people who cannot talk, you know, or give emotion to people who have lost that side of their processing. But in the context of critical infrastructure like elections, we should just be better than this, and we should have some landing place to get this right.
TENPAS: And not to be critical, but, like, the takedown policy: you take it down, but by the time you’ve done that, you don’t know how many millions of people it’s already influenced.
TURNER LEE: Yeah, I keep telling my mother to stop sharing stuff on social media platforms without reading the article first, right?
TENPAS: Right.
TURNER LEE: I mean, that’s that’s—
TENPAS: Helping her cause.
TURNER LEE: And you know, there was an article several years ago that said seniors were, like, the largest spreaders of misinformation, because they tend to not read the whole article; they’re afraid to open it up because it could be something deceptive or clickbait. So, they just share it, right?
TENPAS: Right, it’s not helping.
TURNER LEE: Right. I’m so glad some of them just don’t even go on those sites anymore. They just have picked other sites to go on. Right? But the key thing is, you know, I forgot our question. I was just thinking about my mother, she does that all the time. Like, why do you keep sharing that stuff? Like Smokey Robinson is not dead. Go back to your original question.
TENPAS: Yeah. Okay. So, this disinformation and this horrible use of AI, it seems to me it strikes to the heart of our democracy, especially when it affects election outcomes. And I always like to ask my guests when they talk about their particular area of expertise, how does it affect your attitudes about the future of democracy? And on a scale of 1 to 10, how nervous are you knowing all of this information? How nervous are you about the future of democracy?
[42:43]
TURNER LEE: Well, I was in a conversation recently, and I think we use the word “democracy” pretty loosely. Right? And depending on who you’re talking to, the definition of democracy has a different meaning. My elders have a different take on what it means to have liberty and civil rights, whereas my children have a different sense of what it means to live and participate in a democratic society.
And technology in many respects has enabled many social movements to exist. When we saw and witnessed some of the early egregious actions in policing, for example, it was through a platform that the Black Lives Matter movement formed, and we found young people going to these platforms to find each other.
We also found technology being helpful when the country, alongside our global partners, suffered a pandemic. And young people, despite the mental health concerns we talk about today, found themselves in social media communities trying to cope with isolation and the panic of not having homecoming.
I write about that in my book, Digitally Invisible: we are always going to have this two-sided coin of technological use and innovation when it comes to our democracy. There will be times that we love it, there will be times that we don’t, and there will always be a tradeoff.
What does that mean, though? It means that the technology doesn’t run us, that we are still human beings with some cognitive ability to discern what is right and what is not right. And we have the ability to ensure that we build technologies that are ethical, responsible, inclusive, and fair, and that they do not embolden further systemic inequalities that pull us back as opposed to progress us in the type of world that we want to live in.
So, I share that not to just sound like a preacher of sorts, but to suggest, as a policymaker, that we’ve got to continue to look at and interrogate these technologies as they are introduced into contexts where real people live.
And I think at the end of the day the technology doesn’t exist in a lab. It exists in a context. And when that context actually treads alongside some of our most precious and inalienable rights, we’ve got to sit down, look seriously, and ask: is this something that we can make better so that it doesn’t create as much consequence and harm? Or is this something that we should try not to use?
TENPAS: Right. Okay, so, let me press you one more time. You gave a really nice, cautiously optimistic view about the future of American democracy. But if I had to ask you to pick a number on a scale of 1 to 10 about how nervous you are about the future, could you pick a number? And if so, what would it be?
[45:41]
TURNER LEE: That’s hard, because I like to stay within this space where I can recognize both sides of the coin, because with technology, there’s always promise and peril. But I would suggest, you know, I would give it more of a 7 to 8 in terms of how nervous I am. And the only reason I can say that is because I think we as humans still must have agency over these technologies, whether we are the creator or the person who is impacted by the technology.
In the end, democracy is about people. And when it’s about people, people still have to have some control over the technology, whatever it is. In this case, it’s a technology of innovation; back then, when my elders were growing up, it was a technology of industrialization and slavery. My point is, we should have agency.
TENPAS: Yes.
TURNER LEE: When it comes to what types of technologies are being used to advance democracy. You know, back then it was the cotton gin, and today it’s AI.
[46:38]
My point is, you know, I’m still nervous that we are developing technologies outside the context of labs that get placed into communities, and we’re not asserting the same level of responsibility, transparency, inclusiveness, and ethical constructs to ensure that we’re not perpetuating some of the systemic inequalities that we have today.
So, yeah. I think as a policymaker, there’s a lot that I would like to see in terms of introspection and interrogation of these technologies to just make them fit better in the society which we’ve created and not technologies that regress that progress.
TENPAS: Yeah. And I suppose picking the number 7, it’s always good to be a little bit nervous, especially when it’s new technology and you’re seeing what some of the negative results are.
TURNER LEE: I mean, look, I worry about people who say, like, no, there’s no, you know, need to be nervous and cautious. You know, if you’re a number 3 person, we’re probably not sitting at the same table.
TENPAS: Or reading the same things.
TURNER LEE: I don’t think so. You know, because I think with any technology—I’ve been doing this for 30 years—with any technology, it’s just so important to make sure, you know, people have some say into the design and the development and that we all are not just subjects of that technology.
TENPAS: So, it seems like you really strike kind of a cautiously optimistic tone.
[47:59]
TURNER LEE: Oh yeah. Oh yeah. I mean, I always think about, you know, there are promises and there are perils. There are opportunities and there are challenges. One of the things we’re trying to do with the AI Equity Lab is to workshop real high-risk areas, like criminal justice and AI use, and health care and AI use, and sort of determine along these various verticals where the harms really exist, so we can have more informed conversations.
But at the end of the day, we’re not even quick enough to keep up with the technologies that are out there. So, you know, I’m not one who wants to ban the technology, but I do want us to put on our hats of interrogation and introspection and inquiry, and see if we can make these technologies better coexist with the people who live within these democratic spaces.
TENPAS: Yeah, that’s a terrific message. And I am so grateful for your time today. I learned a great deal. And wow, it’s a really … it’s a fascinating topic and one that you’re going to be living with for a while it sounds like.
TURNER LEE: I will. And you have to promise me that we don’t publicize this so much that my daughter gets upset with me that I’m going to ban her voice extraction technology that she so loves to use in her free time.
[music]
TENPAS: If this podcast gets so popular that your teenage daughter knows about it, I will be shocked. But I’ll be happy. Anyway, thank you so much.
TURNER LEE: Thank you. Thank you so much.
TENPAS: Democracy in Question is a production of the Brookings Podcast Network. Thank you for listening. And thank you to my guests for sharing their time and expertise on this podcast.
Also, thanks to the team at Brookings who make this podcast possible, including Kuwilileni Hauwanga supervising producer; Fred Dews, producer; Colin Cruickshank, Steve Cameron, and Gastón Reboredo, audio engineers; the team in Governance Studies including Tracy Viselli, Catalina Navarro, and Adelle Patten; and the promotions teams in both Governance Studies and the Office of Communications at Brookings. Shavanthi Mendis designed the beautiful logo.
You can find episodes of Democracy in Question wherever you like to get your podcasts and learn more about the show on our website at Brookings dot edu slash Democracy in Question, all one word.
I’m Katie Dunn Tenpas. Thank you for listening.
October 10, 2024