
Unpacking President Biden’s executive order on artificial intelligence

On October 30, President Joe Biden issued an executive order on “safe, secure, and trustworthy artificial intelligence.” The E.O. sets new standards for AI safety and security, includes new privacy provisions, and much more. Nicol Turner Lee, a senior fellow in Governance Studies and director of the Center for Technology Innovation, joins The Current to talk about the scope and implications of the AI E.O.

Transcript

[music]

DEWS: You’re listening to The Current. I’m your host, Fred Dews. On October 30th, President Biden issued an executive order on, quote, “safe, secure, and trustworthy artificial intelligence.” You can read the order on whitehouse.gov, but basically it sets new standards for AI safety and security, has some new privacy provisions, and much more. To talk about this order and its implications, I’m joined by Nicol Turner Lee, a senior fellow in Governance Studies and director of the Center for Technology Innovation.

She’s also co-moderator of the Tech Tank podcast, which you can find on the Brookings website and wherever you listen to podcasts. Nicol, welcome to The Current.

TURNER LEE: Oh, thank you so much, Fred. It’s great to be here.

DEWS: So, what problem is this executive order trying to address?

TURNER LEE: So, I think we’ve had these issues when it comes to artificial intelligence and new emerging technologies like generative AI for quite some time. And the executive order, I think, provides a ramp, like a pathway for us to start to address what those larger concerns have been on the public interest side, as well as confirm some of the will that we’re seeing right now among companies who have come to President Biden and Vice President Harris on these voluntary commitments.

So, you know, when I think about problems, Fred, I think about a range of problems that potentially the order may hit. You know, things like socioeconomic determinations and eligibility tools when it comes to applying for credit or housing mortgages; the extent to which AI is used in criminal justice, particularly when it’s used in facial recognition technologies to identify potential suspects; the fact that we use AI now in an assortment of educational and healthcare-related tools. And now with generative AI, I think the bigger concerns are its ability to replicate both image and voice.

My main points are, there are a range of societal implications that this order may be looking at. And most importantly, what I have found interesting is that it’s doing it under the guise of safety and security while also advancing, you know, citizens’ privacy, equity and civil rights, technological innovation and better cadence.

And it’s doing something that, you know, it’s a long document, Fred, it’s a hundred pages, okay, but it really is doing something that’s sending a message to the global community that we’re finally taking a stance in this.

DEWS: I want to follow up on one of those points, and it has to do with the fact that they are voluntary guidelines. Because one of the parts that caught my attention and which might be of general interest to a lot of people is the directive to the Department of Commerce to quote, “develop guidance for content authentication and watermarking to clearly label AI generated content.” So does this mean all content or just content coming from the federal government or is it suggesting that companies that create generative AI also label their content?

TURNER LEE: That’s a great question, right? And we recently had a Congresswoman, Yvette Clarke, who represents the great, I like to call it the great state of Brooklyn, right? Brooklyn in New York, talk a little bit about some legislation she has that complements what the executive order is suggesting. You know, this idea of content authentication and watermarking are clearly important as we see these deepfake technologies penetrate our general domain.

And I think what we’re seeing in what you’re referencing, Fred, is not a lot of clarity as to who it relates to. I think it’s framed as guidance because, as many people should know, this is not necessarily legislation that’s been placed out there. This is more like guidance, or perhaps the beginning of values and norms that the United States can actually start with, with the assumption that Congress will put teeth into this with some more coherent legislation that backs up many of these suggestions and proposals in this order.

With that being said, the U.S. Department of Commerce is the federal agency on the hook to come up with great guidance on watermarking and what that looks like to label AI-generated content.

Now, what that looks like for you and me, I don’t know, right? When I’m using a Word document, I see that watermarking pretty clearly when it says “draft,” right, or tells me when this document was generated. I know companies like Google have started to come up with some technical cadence around that. I think the idea is if we perhaps deploy this at the federal government level with various agencies, we’ll see the private sector either take the next step or help us to innovate in that area. But there’s still some clarity that needs to be had on what that actually looks like and the extent to which it’s going to be something that will be recognizable by people when it comes to authenticating content.

DEWS: Sure, so the federal government will adopt these guidelines because the president’s ordered it. Companies could voluntarily adopt the guidelines. But what do you think the prospects are of Congress enacting legislation that would, as you say, back up these guidelines?

TURNER LEE: Listen, if it takes as long as it took for us to get our House in order, I’m not sure. I want to be confident that Congress is going to step in. This has been of concern to both sides of the aisle over the last year. You know, Republicans are just as concerned as Democrats and independents when it comes to the use of AI tools in areas that have critical sensitivities, like I’ve mentioned in terms of employment, education, et cetera. But we also want to make sure we have resiliency and some oversight of these technologies when we start talking about militarization.

With that being said, we have to pass a budget first, right? So I’m not sure that this will be top of mind for many legislators. Although I can tell you this, there has been activity on the Hill. Senator Chuck Schumer’s work on the AI summits is one example of that. There are a variety of legislators that are sort of dabbling into what could be AI legislation for them.

I think, Fred, what’s going to ultimately happen is that this executive order is going to nudge Congress to either think about the things that they want to put out there in this sphere of conversation or to act on some of those things. And I think the little wins will probably get through faster than the harder ones.

And again, if you read this document, for everybody listening, it’s not an easy document to read and it’s quite intriguing, but it really is a comprehensive way to look at what AI legislation could look like going forward. We’ve kind of gotten a little bit of the language that could go into any legislative directive.

DEWS: Let’s go back to the document. I’m gonna ask you some more questions, Nicol, about that. The order also calls on Congress to pass data privacy legislation. So how would that be different than previous attempts to safeguard individuals’ privacy?

TURNER LEE: Well, it’s interesting because people like our colleague, Cam Kerry, have been pushing for comprehensive privacy legislation for, I want to say, decades. I’ve known Cam for over a decade, and I know he’s been working on this for over a decade, starting when he was in the Obama administration. The key thing here that I think is imperative for people to pick up on is the fact that the order does mention it, right? So we are getting a couple of things in the order that the White House is suggesting.

One, nudging, again, Congress to really start thinking about ways in which to structure data privacy legislation. And two, prioritizing privacy-preserving techniques. There are some provisions within this document that call on the National Science Foundation, for example, to promote the adoption of, and to help in innovating, privacy-enhancing technologies so that we start with privacy by design versus privacy in the aftermath, or the consequence of not having privacy, which I think is pretty interesting. And it’s a pretty assertive recommendation on the part of the White House to ensure that we start from the beginning and get this right, particularly in these new technologies.

I think the White House is also acknowledging that there’s been a lot of conversation out there, and calling on Congress to create a bipartisan data privacy standard is a huge, huge ask. But we’re seeing bits and pieces of that, Fred, actually happening right now on the Hill, whether it’s the current conversations on children’s privacy or conversations on a comprehensive privacy standard; we are so close in this country to getting that. And this is one of those areas in which our allies, as well as our competitors, are one step ahead of us when it comes to defining the parameters of a data privacy standard.

So, I’m not sure, given the fact that it’s an executive order, how much teeth it will have to get people over the finish line. But I do think it sends a clear message that in order for us to have effective, resilient AI networks, we must have a data privacy standard in place that offers guidance, right, on what people can collect about us, whether that’s personally attributable information or biometric information. It’s really important. These systems work off of our personal data, Fred. I can’t tell people that enough. And without privacy legislation, it’s still a Wild West.

DEWS: So, Nicol, a few minutes ago, you mentioned the issues like credit scoring, housing, mortgages, criminal justice. And as you noted, there’s a whole piece in the executive order on advancing equity and civil rights. How does an executive order on managing the risks of artificial intelligence advance those goals?

TURNER LEE: Well, you know, it’s interesting because I’ve been one, as you know, with my work, that is really serious about figuring out ways in which we continue to stress explicit compliance with civil rights statutes when it comes to existing and emerging technologies. Just because companies do not collect federally sensitive information about certain groups doesn’t mean that they cannot or will not break the law.

And so I think, going forward, what I do appreciate about the order is that it is an extension of the previous conversation and the previous guidance that we got from the Blueprint for an AI Bill of Rights. If you all recall, that came out last year, and that is really around what rights people have in this AI space, and this digital space for that matter.

With that being the case, there are still some questions, right? Federal agencies in particular are really the subject of this order because, as taxpayers, we have agency as a country over what federal agencies do, for the most part.

The people that we often worry about the most though, are not the federal agencies. And so that is going to be a question as to the extent to which that type of discrimination is being picked up in certain algorithmic systems, particularly those, Fred, that are very opaque, the ones that we cannot see.

And what’s more interesting when it comes to enforcement, you know, we have, again, federal agencies that have some regulatory authority to do so. The order does suggest that there’s going to be an exercise among the Department of Justice to sort of delve into this. And the order also puts in, different from the Bill of Rights, a focus on fairness in the criminal justice system. But these are age-old, settled concerns, you know what I mean? And it’s really gonna be a question as to the extent that, one, we can clearly identify them in opaque systems. And two, that we have the enforcement capacity to call it out when we see either disparate treatment or impact on the part of vulnerable populations.

DEWS: Nicol, you mentioned the order is a hundred pages long. So briefly, if possible, what other provisions of the executive order would you want our listeners to understand now?

TURNER LEE: You know, like I said, this is the most comprehensive order we have seen on this issue. So much so that we had a hundred pages and a few days to read it before we started talking to people like you, Fred, right? And I’ve been busy doing that.

I’d like to point out some other things in the order that I think were also quite interesting. There is a lot of discussion in this order around workers, and in particular the use of AI in terms of worker surveillance and wage demands. It’s very interesting that this order goes into the use of the technology to sort of disrupt behavioral practices and rights that workers have within their domains. But it also does something which is to recognize that automation is going to come with some costs, and there will be labor market impacts.

So, I think that’s an area, if you haven’t delved into the executive order yet, that you will find to be interesting as part of this conversation. And there is some charge to the Department of Labor to sort of do something about this as well.

In the order is also, I think, this poignant point around global harmonization. Vice President Harris was recently at the UK AI Safety Summit, and if you have not watched her speech, it is pretty clear that we are putting the gavel down when it comes to asserting U.S. leadership in this space around civil protections and public interest protections in particular.

And what’s also interesting is that we’re stepping into a period where there’s a flurry of activity happening around this conversation that we really haven’t engaged in assertively, because we haven’t had something like this. Again, it’s without Congress’s authority, because President Biden has pretty much invoked the Defense Production Act to be able to do the things that he’s doing. But you and I both know, a new president, a new wave of other issues will diminish the power of this if Congress doesn’t act.

And so, you know, when I think about what we’ve done here and I think about the timing, I just think, Fred, that it’s a way overdue document. And it’s one that probably, if I get this right, is the beginning of future deliberation. It should be an organic and iterative conversation. And I think the president and the vice president have really put out something that’s going to raise the chatter a little bit so that we can get this right.

DEWS: Well, Nicol, I wanted to ask sort of a last question: is this executive order enough? But you said it’s the most comprehensive order in a generation. So, I mean, is it really just a foundation, a starting point for this AI safety and security conversation?

TURNER LEE: Well, listen, I’m going to put it like this. I’ve been doing this for 30 years in terms of technology, policy, and digital activism. When I say it’s the most comprehensive order on AI, it is because we didn’t have AI when I was growing up, right? And so it’s the most comprehensive document to address what we’re currently seeing in this ecosystem, and it builds upon a variety of legislative proposals that, unfortunately, have not been passed to date.

But what that also means is that we are still behind in terms of where technology is going. There are still a lot of questions as to the extent to which this order will actually address some of the concerns around generative AI. We still have militaristic applications of AI that may be addressed in this order, but that have really serious national security concerns.

And on the public interest, we’re not sure at this time that people even know that AI is responsible for certain decisions that are being made about their lives because we don’t disclose that.

And I’m not sure if the everyday person will look for a watermark versus some type of statement that says that this is an AI generated decision.

That’s why, Fred, I’m gonna put the plug out there. First, that’s why we have Brookings, right? And that’s why our scholars are really intricately involved in sort of staying on top of these conversations. And this week we are publishing an “Around the Halls” with a compilation of Brookings scholars reacting to this. So I’m excited about what we do at Brookings to be able to raise awareness.

And then secondly, I guess I’ll have to do it, Fred. That’s why I have a shameless plug for a book that I have coming out next year, which really talks about how these advances in technology have really forced our society to rethink the visibility of people in certain demographic groups and communities. Technology should not erase people nor should it harm people. In fact, it was always developed to help us solve social problems.

And that’s a point where this order is at least raising the bar on us understanding what is happening to us, so that we don’t continue to be the product and be commoditized by it.

DEWS: Well, we’ll leave it there, Nicol. I will put a link in our show notes to that piece you mentioned, “Will the White House AI executive order deliver on its promises?” And I encourage listeners to find your work on AI and other technology issues and check out the Tech Tank podcast. Nicol, thanks for your time and expertise today.

TURNER LEE: Oh, thank you so much, Fred. I always enjoy talking to my favorite colleague here. You hear that, my favorite colleague. Thank you.

DEWS: I’m honored, thank you.

TURNER LEE: Thank you.
