
Does the Anthropic–Pentagon feud mean the end of responsible AI? | The TechTank Podcast

March 23, 2026


  • In February, Anthropic faced an ultimatum from the Pentagon to allow the Department of Defense unrestricted use of its models.
  • The company declined, raising questions about the role AI plays in military operations and more broadly how it is used within the federal government.
ARLINGTON, VIRGINIA - MARCH 19: U.S. Secretary of War Pete Hegseth provides updates on military operations in Iran during a press briefing at the Pentagon on March 19, 2026 in Arlington, Virginia. The U.S. and Israel continue their joint attack on Iran that began on February 28. (Photo by Win McNamee/Getty Images)

TechTank, a biweekly podcast from the Center for Technology Innovation at Brookings, explores today’s most consequential technology issues. Moderators Nicol Turner Lee and Darrell West speak with experts and policymakers to share data, ideas, and policy solutions that address the challenges of our digital world.

On Feb. 24, Defense Secretary Pete Hegseth issued Anthropic CEO Dario Amodei an ultimatum: By the end of the week, Anthropic had to give the Department of Defense unrestricted use of its AI models, or the Pentagon would potentially sever ties with the company. Anthropic declined the request, which led Hegseth to designate the company as a “supply chain risk” and the Pentagon to enter into a contract with OpenAI.

The situation has elevated debates about government interference in the operations of AI systems, especially through demands that companies lower their safety standards. Further, the dispute has raised questions about the role the technology plays in military operations and, more broadly, how AI is used within the federal government amid waning interest in responsible AI.

In this episode of the TechTank podcast, producer and guest host Josie Stewart is joined by Brookings fellows Stephanie Pell and Valerie Wirtschafter to discuss these questions and the broader implications the feud may have for trust in AI and adoption across the federal government. Listen to the episode and subscribe to the TechTank Podcast on Apple, Spotify, or Acast.

Transcript

[00:00:00] CO-HOST NICOL TURNER LEE: You are listening to TechTank, a biweekly podcast from the Brookings Institution exploring the most consequential technology issues of our time. From racial bias in algorithms to the future of work, TechTank takes big ideas and makes them accessible. 

[00:00:26] GUEST HOST JOSIE STEWART: Welcome to the TechTank Podcast. I’m today’s guest host, Josie Stewart, a senior research and communications assistant in the Center for Technology Innovation at the Brookings Institution. On February 24th, Defense Secretary Pete Hegseth issued the Anthropic CEO an ultimatum: By the end of the week, Anthropic had to give the Department of Defense unrestricted use of its AI models, or the Pentagon would sever ties with the company. 

[00:00:50] Anthropic declined the request, which led to Hegseth designating the company as a supply chain risk. The situation didn’t end there. Soon after the talks with Anthropic fell apart, OpenAI entered its own contract with the DOD. Anthropic also returned to talks with the U.S. government, only for them to fizzle out again. 

[00:01:08] But more importantly, the development has brought debates about limitations on the use of AI to center stage, raising questions not only about the role the technology plays in national security and military operations, but also about who controls AI systems and, more broadly, how they are used within the federal government. 

[00:01:25] I’m joined today by two of my Brookings colleagues, Stephanie Pell and Valerie Wirtschafter. Stephanie is an expert in cyber and national security and a fellow at the Brookings Center for Technology Innovation. Her work encompasses topics like surveillance, cyber ethics, and cybersecurity law. Valerie is a fellow in the Foreign Policy program in the Artificial Intelligence and Emerging Technology Initiative at Brookings. 

[00:01:48] Her work focuses on democratic resilience, artificial intelligence, technology, and the information space. She’s also the author of a forthcoming Brookings report on AI use within the federal government. Valerie and Stephanie, thanks so much for being here today.  

[00:02:03] GUEST STEPHANIE PELL: Thank you so much for having us. 

[00:02:05] Thank you, Josie. 

[00:02:06] Good to be with you. 

[00:02:08] GUEST HOST JOSIE STEWART: Great. I wanna start us a little bit closer to the beginning of this whole saga. When Hegseth gave Anthropic the ultimatum, as reports were coming out about the negotiations, what were your initial reactions, given your areas of expertise? And Stephanie, we can start with you.  

[00:02:25] GUEST STEPHANIE PELL: I would say that broadly speaking, my first reaction was that this is a terrible way to make public policy. 

[00:02:33] You have two significant issues, the use of AI in domestic surveillance and in autonomous weapons systems, that are developing and potentially being decided, at least in the short run, through a fight between Dario Amodei, the CEO of Anthropic, and the Department of War, formerly known as the Department of Defense. 

[00:02:56] And just to lay out a little more background, the Secretary of War, Pete Hegseth, issued a memo in January instituting new contracting terms requiring, quote, all lawful use of relevant technology. So previous restrictions that had been negotiated by contract, in Anthropic’s case restrictions on the use of Claude in lethal autonomous warfare and mass domestic surveillance, would no longer be permitted, and new contracts with the, quote, 

[00:03:28] all lawful use language would be required. Again, the fact that these issues were playing out in a clash between two very powerful authorities struck me as not the best way to go about making public policy.  

[00:03:47] GUEST HOST JOSIE STEWART: And Valerie?  

[00:03:48] GUEST VALERIE WIRTSCHAFTER: Yeah, my immediate reaction was that this was very clearly about the politics of AI, or the perception within the White House that Anthropic was maybe a little more risk-minded, or Biden-administration coded, than its competitors. David Sacks, the White House AI and crypto czar, had for months been accusing Anthropic of being woke and of trying to implement regulatory capture based on fear mongering. Anthropic’s CEO didn’t attend the inauguration, and I believe other tech companies’ CEOs did, and that stood out. Anthropic had been quite vocal about its opposition to state preemption of AI regulations and had also been donating to PACs that opposed federal efforts to quash state AI regulations. So the politics were definitely front and center for me there. And then my other reaction was really that this could do some serious damage to Anthropic’s business, but also that, from a federal government adoption perspective, it would be, I think, pretty detrimental in terms of building confidence in how the government is using AI. The administration had itself in a bit of a bind with the public: back down and you look weak; push forward and suddenly you’re advocating for domestic surveillance and autonomous weapons usage. And I don’t think that really bodes well for an administration that’s put so much effort into embedding AI into the federal government. And it doesn’t really look good for voters, who are already, and I think this is really important, extremely skeptical about AI. That skepticism has gone up since 2021; since ChatGPT launched, something like 50% of the public is more concerned than excited about AI usage generally.  

[00:05:43] GUEST HOST JOSIE STEWART: As you both pointed out, problems with the contract worsened when Anthropic made clear that they wanted language that prevented the Pentagon from using its technology for autonomous weapons and mass surveillance of Americans. Let’s start with the first of these, which kind of gets into what you were just talking about, Valerie. What concerns did Anthropic and other external audiences express about the use of AI for autonomous weapons?  

[00:06:09] GUEST VALERIE WIRTSCHAFTER: Yeah, so Anthropic’s position, and that was one of its lines, was that it wouldn’t allow its models to be used for lethal autonomous weapons. That’s not to say that they didn’t want them used by the military at all. Claude had already been part of intelligence analysis, operations, planning, all these types of capabilities, reportedly for the raid in Venezuela that captured Nicolas Maduro in early January, and there are still reports that it’s being used, or has been used, in the current conflict in Iran. So it wasn’t necessarily about military usage; it was that Anthropic objected because Claude was not reliable enough to make decisions about who to target without human involvement.  

[00:06:56] GUEST HOST JOSIE STEWART: And then what about the mass surveillance part? Stephanie, can you explain what laws enable AI to be used for this and what concerns this raises? 

[00:07:05] GUEST STEPHANIE PELL: So again, the term that Dario Amodei uses is, quote, mass domestic surveillance, and that term can mean different things to different people. So, for some level setting, it would be fair to say that the intelligence community, which includes the NSA, a component of DOW, has broad authorities to engage in the collection of foreign intelligence information, and there are a variety of legal authorities, Executive Order 12333, the Foreign Intelligence Surveillance Act, and the Fourth Amendment, that govern the intelligence community’s and DOD’s ability to engage in the collection of foreign intelligence information. Now, generally speaking, these authorities place significant limitations on the intelligence community’s and DOD’s ability to collect information about U.S. persons. U.S. persons are defined under the law as a United States citizen or an alien admitted for permanent residence in the United States, and any corporation, partnership, or other organization organized under the laws of the United States. But these authorities currently don’t prohibit the government from purchasing commercially available information from data brokers, and that does contain information about U.S. persons. So, among other things, this commercially available information can include location data, and the use of AI to analyze this kind of information, in conjunction with other kinds of information that may be gathered, raises significant privacy concerns. And although commercially available information may be anonymized, it is possible to de-anonymize it and identify individuals, including U.S. persons, and in doing so expose very sensitive information and allow for the construction of patterns of life. So again, the term mass domestic surveillance doesn’t have a particular meaning in the law, and people may look at different forms of surveillance and place that label on them. As best as I can tell from piecing together various reports over the last couple of weeks, Anthropic’s CEO was extremely concerned about the use of AI to analyze commercially available information and other kinds of information in unclassified systems. Again, according to reporting in The New York Times, he did not seem to have the same problems with information collected, let’s say, pursuant to the Foreign Intelligence Surveillance Act that would be contained in classified systems. 

[00:10:15] GUEST HOST JOSIE STEWART: Yeah. And let’s turn to Anthropic being labeled as a supply chain risk, which is normally a designation reserved for foreign companies. What do you make of this reaction from Hegseth, and how might this impact Anthropic? Valerie, if you want to start. 

[00:10:32] GUEST VALERIE WIRTSCHAFTER: Yeah, so actually I think first Hegseth threatened both the Defense Production Act and a supply chain risk designation, which, and I think Anthropic pointed this out first, but others have pointed it out as well, is just a contradiction, right? Either this is a fundamentally important technology, such that the Defense Production Act would compel Anthropic to provide it, or it is a risk to the supply chain. So it’s ironic that they were both being thrown around at this point. But I think ultimately the Pentagon settled on the supply chain risk. At first, Hegseth said something, I think in a tweet, about how all contractors had to cease commercial relations, all commercial relations, with Anthropic. But that was, I think, way beyond the power that he had to be able to compel change from the private sector. Now, I think it has to do with military contracts and direct fulfillment for contractors who are working directly with the military: they can’t use Anthropic as part of what they provide to the military through those contracts. It’s less menacing than the original threat, which was beyond Secretary Hegseth’s power, but I think it could still be quite devastating to Anthropic’s business. Those contracts were worth quite a bit. And then there are a lot of other companies that Anthropic works with as well that are, I think, more hesitant now, especially in light of some of the uncertainty here. 

[00:12:02] GUEST HOST JOSIE STEWART: Yeah. Stephanie, what are you thinking about this? And I should also note that Anthropic is challenging this, so how do you expect this might play out in court?  

[00:12:11] GUEST STEPHANIE PELL: Sure. Like Valerie, there was some bit of surprise on my part here. Among other things, it was quite interesting that Secretary Hegseth was picking a fight as the U.S. was on the brink of going to war with Iran; one would presume that Hegseth knew what the U.S. intentions were in that regard when he was picking this fight with a leading AI company. And I also think it’s fair to say that by declaring Anthropic a supply chain risk, DOW is trying to sanction it, essentially punish it for, among other things, exercising its rights to engage in contract negotiations and exercising its First Amendment rights. And we should keep in mind here that Anthropic has, at least heretofore, been a partner with the Department of War. Anthropic has stated, in a pleading it filed, that Claude is reportedly the department’s most widely deployed and used frontier AI model, and the only one currently being used on classified systems. As to how this will play out in court: Josie, as you referenced, Anthropic filed a complaint against the Department of War, Secretary Hegseth, and a host of other federal government agencies and officials, seeking declaratory and injunctive relief. That was based on a notice Anthropic received on March 4th, which, as its pleading states, indicated the Department of War had determined that the use of Anthropic’s products in the department’s covered systems presents a supply chain risk, and that exercising the authority granted under 10 U.S.C. Section 3252 against Anthropic is, quote, necessary to protect national security. The secretarial letter pronounces that this determination covers all Anthropic products and services, including any that become available for procurement, and it asserts that less intrusive measures are not reasonably available to mitigate the risks that Anthropic’s products and services supposedly pose to national security. One of the primary statutes at issue here, again, is 10 U.S.C. Section 3252, which is an authority that allows the government to designate a vendor as a supply chain risk and, in doing so, exclude it from government contracts and restrict its participation in the supply chains of other contractors doing business with the government. Now, apparently this authority has never been used against a U.S. company before, and there is no case law interpreting the statute. I would recommend our listeners look over to Lawfare, where my colleagues Alan Rosenstein and Michael Andreas make a compelling case, in a piece entitled “The Pentagon’s Anthropic Designation Won’t Survive First Contact With the Legal System,” that Anthropic will prevail in its challenge for a number of reasons, including that the designation exceeds the statutory authority granted to the government and that the designation is nothing more than a pretext for punishment.  

[00:15:43] GUEST HOST JOSIE STEWART: Yeah. So even if, following that piece, the designation is successfully challenged, what are the broader implications for AI developers seeking to do business with the government? Valerie, you hinted at what kind of precedent this might set for companies who are trying to ensure safeguards are in place for their models, but who are also trying to do business with the government and have seen this fallout play out.  

[00:16:05] GUEST VALERIE WIRTSCHAFTER: Yeah. So I see three pathways, and none of them are great for AI development, or really, if you think more broadly, for the diffusion of AI developed in the U.S. around the world, right? That’s been a stated objective of this administration: to export the AI stack. And what does it mean if the federal government can invalidate the policies of companies? First, there’s a chilling effect on government adoption. Companies have invested a lot; in the complaint, Anthropic talks about all the adaptations that it made to its models to embed them into classified systems and to be useful in these contexts. Companies invest in that, especially when they’re bidding on government contracts. Is that worth it from a business perspective? That’s something I think companies are going to have to weigh, especially if their whole business could be threatened if they disagree on some terms of a contract. Second, I think international corporations and foreign governments who are maybe thinking about using U.S.-built AI might double down on some of their efforts to build their own stacks; maybe they can’t trust U.S. companies, especially to abide by their own laws. And third, companies who are seeking government contracts: are they going to capitulate? Are they going to put in place policies that are potentially weaker, maybe agreeing to things when the tools aren’t quite there yet? What happens if there is a failure? And so I think all of these things are pretty challenging from a government adoption perspective, but also from a diffusion perspective as well.  

[00:17:49] GUEST HOST JOSIE STEWART: Yeah. Stephanie, any additional thoughts on that front?  

[00:17:52] GUEST STEPHANIE PELL: I want to agree with Valerie that I think this all has the potential, broadly, for a real chilling effect. When you bully companies or threaten them for exercising their First Amendment rights and their rights to engage in contract negotiations, that’s going to make them make policy decisions that are perhaps not very good, both for national security and for the rule of law. So I think ultimately this kind of activity on the part of government serves to undermine the rule of law, which is never a good thing. 

[00:18:34] GUEST HOST JOSIE STEWART: Yeah. And even outside of the federal government, I know, Valerie, you referenced earlier how the Trump administration has really prioritized AI diffusion and development. What effect might this have on the general public, who, as you mentioned, is feeling maybe less good about AI as development continues? The story really dominated the news cycle and clearly had ripple effects; we saw downloads of Claude increase after Anthropic refused the DOD’s ultimatum. How might this impact people’s perceptions and trust of AI?  

[00:19:07] GUEST VALERIE WIRTSCHAFTER: Yeah, AI has such a big PR problem that stories like this do not, I think, help build confidence in federal adoption of AI systems; this doesn’t help at all. I do think, as Stephanie alluded to, we’re now at war in the Middle East, and a story like this, even though it did blow up quite a bit, I think could have dominated headlines even further had it not been for that conflict. So while this is really confidence diminishing, I think it could have been way worse. But we did see some movement from consumers, people who were following this story closely, and it was quite a story, and will still, I think, continue to be something that people follow. But OpenAI had negotiated a similar deal with the DOD; I guess they agreed to terms that were pretty similar to what Anthropic wanted. Ultimately it’s unclear what OpenAI’s terms were; some people have argued that they were quite a bit softer. After news of that announcement, downloads of Anthropic’s Claude shot up immensely, and OpenAI faced quite a bit of backlash. Sam Altman, the CEO of OpenAI, said that the timing of his announcement looked opportunistic and sloppy, I think were his words; maybe that wasn’t the intent, but that’s certainly, I think, how it played out publicly. So we are seeing that sort of public backlash as well here. 

[00:20:44] GUEST HOST JOSIE STEWART: Stephanie, what are your thoughts, especially given that mass surveillance is something that obviously would impact everyday people? What are you thinking about how AI might be implemented in the federal government, following what people are seeing play out here?  

[00:21:01] GUEST STEPHANIE PELL: Again, I come back to the fact that mass domestic surveillance is a term that people will apply maybe in different ways; it doesn’t have a fixed meaning in the law. For a whole lot of reasons, and I’m going to go back to the Edward Snowden disclosures, there is always, at least among some part of the public, an underlying concern with growing government surveillance capabilities and how they will be used. So when an issue like this is raised in a high-profile fight between the CEO of a leading AI company and the Secretary of War, and the term mass domestic surveillance is raised, I think it causes concern. And then the work to do is to try and parse, under current authorities of law, what that really means, and forge a path for Congress to come in and regulate. 

[00:22:08] GUEST HOST JOSIE STEWART: So I wanna follow up on exactly what you just said. How do we proceed from here? The U.S. does not have a national framework for AI. How, Stephanie, do you expect this might influence legislators to seek to ensure safeguards, both on that front and generally around its use within the government? 

[00:22:27] GUEST STEPHANIE PELL: So, however this particular case resolves itself, and we’re all just gonna have to stay tuned for that, it should nevertheless serve as a clarion call for Congress to address the use of AI in surveillance and in weapons systems as a matter of public policy. We don’t want these issues decided when two powerful entities get into a fight; for many reasons, that is a lose-lose situation. It circumvents a necessary deliberative process by a branch of government, Congress, that should be weighing in on these policy decisions, and it undermines national security by placing the Department of War at odds with a leading AI company that has an important role to play in our national defense, especially when, rightly or wrongly, the U.S. is currently engaged in an armed conflict. And again, as I noted before, it undermines the rule of law when companies are bullied and punished for exercising their contractual and First Amendment rights.  

[00:23:35] GUEST HOST JOSIE STEWART: Valerie, what are your thoughts on what legislators can do from here? I know you’ve talked a lot about the report that you have coming out on DOGE and AI use within the federal government, and we’re now seeing these really high-risk settings come to the forefront of the conversation. What might protections look like, or what might action look like, on this front?  

[00:23:58] GUEST VALERIE WIRTSCHAFTER: So just on the government adoption side, this type of blacklisting I think really hobbles the federal government from being able to use the best tools, which is, of course, I think the goal of Congress and the executive branch. The Trump administration had the AI Action Plan, and a key pillar in that was to leverage AI to deliver the highly responsive government the American people expect and deserve. And now civil servants don’t have access to the best tools. I’ve looked at some of these political bias questions, and Claude was actually better than some of its competitors, at least in its more recent models, at deflecting political questions. And so this idea of political bias in LLMs is a huge thing, and Anthropic’s models were actually making quite a bit of progress in declining to answer things that were overtly politicized. So I think turning off these systems is going to be hard, but I know it’s already happening; I’ve heard it’s already happening. And these tools really work quite well when they work in concert with each other, and now we’ve lost a tool for our federal government employees to be able to use with other models, or for coding, which Claude really excels at. So I think that is a huge challenge: we’re undermining the mission that this government is trying to advance and that Congress is trying to advance as well. On the governance side, this is effectively governance by the executive, which shouldn’t be the case, right? Using executive power to legislate how AI companies define risk, measure risk, and deploy. But I hope it does spur some action to actually have those conversations in the place where they should be happening. I’m not totally optimistic that they will; we’ve been waiting for quite some time at this point. Options are pretty limited, but I do see two welcome signs. There is widespread consensus, I think, across the political spectrum, from former national security professionals, tech policy leaders, business, and civil society, that the administration got this one wrong. There was an open letter signed by some retired generals, civil liberties organizations, tech policy leaders, every political persuasion, and I think that bipartisan united front is super important, and that kind of pressure is really valuable. The other side of the coin, of course, is the business imperative. Whether or not OpenAI intended for what happened to happen, there is that sort of murkiness around what happened there, with OpenAI’s users and their contract vis-à-vis the Pentagon and how it was or wasn’t similar to Anthropic’s, and I think consumers are a little bit speaking with their wallets. I think if companies can stand together on some of these issues as well, that is really critical, but I don’t know if we’ll see that. But certainly, on the consumer side of the coin, especially as AI is in this space where there are huge contracts and huge deals being made but profits are lagging, I think that’s another important lever to be pulled as well. 

[00:27:15] GUEST HOST JOSIE STEWART: While this plays out, I know we will be watching closely, as you both are, but I appreciate you both sharing your insights with us today and want to thank you for joining me.  

[00:27:26] GUEST VALERIE WIRTSCHAFTER: Thanks so much. Thank you, Josie.  

[00:27:30] GUEST HOST JOSIE STEWART: Thank you, Stephanie and Valerie, for joining me today; your insights have been very valuable as we continue to watch this play out. For our listeners, please explore more in-depth content on tech policy issues at TechTank on the Brookings website, accessible at brookings.edu. This concludes another insightful episode of the TechTank Podcast, where we make bits into palatable bites. Until next time, thank you for listening. 

[00:27:53] CO-HOST NICOL TURNER LEE: Thank you for listening to TechTank, a series of roundtable discussions and interviews with technology experts and policymakers. For more conversations like this, subscribe to the podcast and sign up to receive the TechTank newsletter for more research and analysis from the Center for Technology Innovation at Brookings.

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).