AI and financial stability: Mitigating risks, harnessing benefits

Panel of experts speaking at the FSOC/Brookings AI Conference (Brookings Institution)
Editor's note:

This post is a summary of a conference held on June 6-7, 2024. Watch the full videos here and read the preview here. Quotes were edited for clarity.

On June 6-7, the Financial Stability Oversight Council (FSOC) and the Brookings Institution convened leaders across government, industry, academia, nonprofits, and trade associations to discuss the financial stability implications of artificial intelligence (AI). Sandra Lee, the Treasury Department’s Deputy Assistant Secretary for FSOC, highlighted that the conference aimed “to promote thoughtful discussion on the risks associated with AI and how to mitigate these risks while still harnessing AI’s benefits.” Over two days at Treasury and the Brookings Institution, participants shared views on policy ideas and the past, present, and future of AI in financial services. A summary of key highlights follows, and videos of the full conference can be found here.

AI and systemic risk

Treasury Secretary Janet Yellen, in her keynote address, noted that Treasury had recently issued a public request for information on the uses, opportunities, and risks of AI in the financial services sector. Secretary Yellen stated that AI, “when used appropriately, can improve efficiency, accuracy, and access to financial products.” She went on to note that “specific vulnerabilities may arise from the complexity and opacity of AI models; inadequate risk management frameworks to account for AI risks; and interconnections that emerge as many market participants rely on the same data and models.” She offered a potential approach for exploring some of these issues, stating that “scenario analysis could help regulators and firms identify potential future vulnerabilities and inform what we can do to enhance resilience.”

Acting Comptroller of the Currency Michael Hsu’s keynote touched on this same theme, noting that “AI can be used as a tool or a weapon.” Acting Comptroller Hsu discussed the evolution of AI, “where it is used at first to produce inputs to human decisionmaking, then as a co-pilot to enhance human actions, and finally as an agent executing decisions on its own on behalf of humans.” Acting Comptroller Hsu stated that “the risks and negative consequences of weak controls increase steeply as one moves from AI as input to AI as co-pilot to AI as agent.” He emphasized FSOC’s centrality in this discussion, noting that “the FSOC is uniquely positioned to contribute to this, given its role and ability to coordinate among agencies, organize research, seek industry feedback, and make recommendations to Congress.”

Regulation of AI usage in finance

How to best regulate AI usage-related risks in finance dominated much of the discussion throughout the conference. Lisa Rice, CEO of the National Fair Housing Alliance, argued that the appropriate regulatory framework for AI must involve substantial testing of technology before it can be released in society. She proposed “having a team of trusted institutions to work in a collaborative fashion to explore the models and really vigorously test them, to see how they perform, red-team them, blue-team them, to see if you can identify any bias or harm, and also see if you can compel them or constrain them to be fairer from the outset.” Terah Lyons, JPMorgan Chase Managing Director & Global Head of AI Policy, saw a “need for commercial organizations to have clarified guidance from regulators, supervisors, and other authorities with respect to the way that AI should be responsibly implemented.” Some participants were interested in seeing as much harmonization as possible with a shift from state to federal regulation and to international standards (given the EU AI Act). American University Law Professor Hilary Allen made the point that “there may be places where stakes are so high, hallucination is not worth the risk, that we need rules to ‘just say no’.”

Regulators’ ability to build expertise on AI and appropriately regulate AI was another common theme. Erie Meyer, Consumer Financial Protection Bureau Chief Technologist and Senior Advisor to the Director, stated: “We have to have the right talent in the room to do the work … to better meet the moment to understand how these firms are working, where the risks are, what rocks we should be looking under, and what we should do about them.” Fabio Natalucci, International Monetary Fund Deputy Director of the Monetary and Capital Markets Department, suggested that regulators monitor developments; assess vulnerabilities, such as “whether this is an amplification of old mechanisms that we understand from before … that just operates faster in different contexts”; and determine whether the regulatory framework needs to be adjusted, including asking “whether the risk model that we use is appropriate or if we need new models.”

Compliance challenges and financial stability implications

Secretary Yellen noted that FSOC member agencies “have frameworks and tools that can help mitigate risks related to the use of AI, such as model risk management guidance and third-party risk management. That said, there are also new issues to confront, and this is a rapidly evolving field.” Acting Comptroller Hsu similarly echoed: “What starts off as responsible innovation can quickly snowball into a hyper-competitive race … In time, risks grow undetected or unaddressed until there is an eventual reckoning. We saw this with derivatives and financial engineering leading up to the 2008 financial crisis and with crypto leading up to 2022’s crypto winter.” EY Partner Anuj Mallick observed that governance structures around AI are starting to evolve, “where it’s not just how you deploy the technology and the governance that used to be around that, but it’s actually bringing in legal, compliance, risk functions into it, to be able to understand the actual outcome.”

Allen noted that, “if everybody is relying on the same kind of data, and everybody is using the same few algorithms, everyone is going to be acting in lockstep. We know from the run-up to 2008 that herd behavior is very dangerous when things go badly.” Brookings Senior Fellow Nicol Turner Lee also cited “the challenge of the same cloud computing companies, and the same third-party companies selling the same data to a variety of companies, which could end up with collusion or some type of price-fixing that we’re not aware of.”

Acting Comptroller Hsu hypothesized another scenario: “The nightmare paperclip/Skynet scenario for financial stability does not require big leaps of the imagination. Say an AI agent is programmed to maximize stock returns… The AI agent concludes that to maximize stock returns, it should take short positions in a set of banks and spread information to prompt runs and destabilize them.” Samara Cohen, BlackRock Chief Investment Officer of ETF and Index Investments, added that “the potential for confidence to be undermined by various forms of cybersecurity issues, by deepfakes, by the intentional misuse of data in a model is critically important and something we would look to the regulatory system to safeguard.” Discover Executive Vice President Keith Toney echoed: “The more risk would be if I was on a single cloud platform, and if that cloud provider gets compromised.”

AI in exacerbating bias or facilitating financial inclusion

Broad consensus emerged that AI has the potential both to exacerbate bias and to facilitate financial inclusion. Virginia Commissioner of Insurance Scott White noted a hypothetical use of AI and algorithms for micro-pricing as well as the emergence of “algorithmic models that can now process [huge datasets], so it can amplify potential biases.” Dominic Delmolino, Amazon Web Services Vice President of Worldwide Public Sector Technology & Innovation, relayed that thinking has evolved: “I used to believe that if I just had all the data, I could solve any problem. Now, it’s not that the data informs me but how the data I select, for what purpose and for what domain, and for what use, that becomes that much more important.” Rice noted that AI “can see race and gender quicker than the human eye can, so we’re finding all kinds of ways that these systems are perpetuating bias and have been discriminating against people, locking them out of the financial markets and the housing markets—and now we have to train regulators and also train the industry on ways we can use AI to innovate and protect consumers and expand the market responsibly.”

On the other hand, many participants noted the potential for AI to be part of the solution toward financial inclusion. As the conversation highlighted the discrimination throughout the financial services arena, Turner Lee commented that “when humans are in charge in financial services, maybe we’re actually defeating” the objective of reducing discrimination. Jo Ann Barefoot, CEO and co-founder of the Alliance for Innovative Regulation, noted that “there is potential that this problem of the consumer of financial services … not being attentive, or not being highly financially literate, or not being sophisticated, or being too busy, all of it—that the AI agent may be part of the solution.” A lively conversation between the panelists was moderated by CNBC’s Jon Fortt, who observed that many Americans would rather “chase GameStop or buy Bitcoin or get on DraftKings … Those are the things that are potentially going to distract that underbanked user and there’s much more of an incentive, than ‘Hey, the rate on your Discover Bank account is actually going to be a little bit better than the one that you’re getting.’”

Regulator usage of AI

Barefoot encouraged regulators to adopt AI: “What if we were using AI in our bank examinations for fair lending, to look for more data, more sophisticated analysis?” Brookings Senior Fellow Aaron Klein echoed this: “In 2007 … regulators had been telling us, banks had never been safer, as witnessed by the lack of any failures,” but “two years later, any human intelligence would have told you that the financial system was incredibly shaky in January 2007. Would an AI have done anything different? Had regulators been incorporating AI, would there have been a different response?”

Todd Conklin, Treasury Chief AI Officer and Deputy Assistant Secretary for Cybersecurity and Critical Infrastructure Protection, noted that federal agencies’ adoption of technology has already occurred but is gradual: “About 10 years ago, we started our first cloud modernization effort within our national security infrastructure, and a lot of that investment was to create the foundation for our AI analytics program. And we completed that modernization effort a few years ago and are now finally starting to see the fruit of that investment through our AI development.”

The conference concluded with several breakout sessions where participants engaged in free-flowing discussions on AI and markets, humans in the loop, and regulating AI. The conference was a step toward implementing FSOC’s recommendation in its 2023 annual report that “financial institutions, market participants, and regulatory and supervisory authorities further build expertise and capacity to monitor AI innovation and usage and identify emerging risks.”


Acknowledgements and disclosures

The authors would like to thank Riki Fujii-Rajani and Kyle Lee for their assistance.

The Brookings Institution is financed through the support of a diverse array of foundations, corporations, governments, individuals, as well as an endowment. A list of donors can be found in our annual reports published online here. The findings, interpretations, and conclusions in this report are solely those of its author(s) and are not influenced by any donation.