Commentary

The urgent need to stand up a cybersecurity review board

A pile of destroyed desktops and screens is seen in a performance by Spain's Bip-Bip Foundation, a non-profit working to bridge the digital divide, during the International Telecoms Fair (SIMO) in Madrid, November 6, 2007. REUTERS/Sergio Perez

Just as Bill Murray wakes up each morning in Groundhog Day to the tune of Sonny and Cher’s “I Got You Babe,” executives around the world today begin their days with a familiar piece of news: Their company has been breached. It takes Bill Murray’s weatherman character a few days to realize what’s happening to him and even longer to discover that he can change how he behaves. In cybersecurity, that realization hasn’t happened; instead, we are living the same day over and over again, hoping that the same behavior will lead to a different tomorrow—one free of massive breaches.

Changing this cycle requires first understanding the problem of widespread cyber vulnerabilities, and the federal government is beginning to take steps to do so—but not fast enough. In May, President Joe Biden signed an executive order tasking the secretary of homeland security with standing up a Cyber Safety Review Board (CSRB) that would investigate major incidents affecting government computing systems and disseminate the lessons learned from them. More than six months later, the board exists only on paper, and cyber Groundhog Day marches forward, doomed to repeat the mistakes of the past. Amid widespread computer vulnerabilities, getting this board up and running should be a top priority, one with the potential to materially improve the disastrous state of cybersecurity.

When planes crash or major aviation incidents occur, the independent National Transportation Safety Board (NTSB) investigates using a multistakeholder model and provides an explanation, with lessons learned, for pilots and the aviation industry. The cybersecurity industry has no comparable, respected government body helping us build a shared history of lessons learned from our major incidents. In a recent report from Harvard University’s Belfer Center for Science and International Affairs, co-authored with Rob Knake, we detail how best to design a cybersecurity review board capable of studying major breaches and disseminating lessons learned. As it stands, the cybersecurity industry lacks authoritative, independent investigations capable of establishing how breaches occur and how to drive systematic improvements. Until such a system exists, major breaches are likely to continue, with predictably disastrous consequences.

A slew of major breaches in recent years has inspired an immense body of cybersecurity regulation, with little discernible improvement in computer security. In 2021 alone, the White House issued a new executive order imposing new cybersecurity rules on American firms, the Transportation Security Administration promulgated fresh cybersecurity guidelines for pipeline operators, the Department of Homeland Security released a binding operational directive to fix hundreds of vulnerabilities in federal computer systems, new laws in California and Colorado provided fresh cybersecurity guidance, and the New York Department of Financial Services offered banks new rules about their cybersecurity responsibilities. And this is just a sampling of what the past year had to offer in terms of new cybersecurity rulemaking.

As we ask in the report, if we know what we need to do, why do we have so many different security standards? Unfortunately, evaluating standards is hard because we lack ground truth about how breaches actually happen. To preserve flexibility and innovation and to allow rules to apply broadly, U.S. regulations are written with a great deal of discretion. That discretion is valuable, but it means it is often unclear whether organizations are compliant. The lack of clarity has costs for the organization: Executives fight over security programs; boards can’t judge their status. And it has costs after a breach, when lawyers with 20/20 hindsight say “if only” and “why didn’t…” No one is tasked with drawing out lessons learned. No one is expected to judge the clarity or effectiveness of regulations. Independent analysis and feedback are key roles played by the NTSB.

The announcement of a CSRB arriving someday isn’t sufficient; we need it implemented now. The state of cybersecurity is not the only thing that recalls Groundhog Day. The metaphor of a cyber NTSB has a long history of its own, going back to the 1991 National Research Council report “Computers at Risk.” The ideas of an incident repository and independent analysis have come up again and again because we’ve all seen them working in other fields.

Instead of taking a unified approach to investigating major cybersecurity incidents, the federal government tends to respond in predictably disjointed ways. Following the 2017 Equifax breach, which exposed the personal data of some 147 million Americans, the Senate Committee on Banking, Housing, and Urban Affairs held a hearing, the House Committee on Oversight and Reform issued a report, Senator Elizabeth Warren conducted an investigation, and the Consumer Financial Protection Bureau collected documents. Yet no comprehensive playbook for investigating future cyber incidents emerged. The more recent SolarWinds attack has thus far triggered two hearings in the House, a Senate hearing, a report from New York State, and several private-sector reports.

A comprehensive body for reviewing cyber incidents might have used the Equifax breach to establish an investigative process and produce documents that help defenders prevent such major incidents in the future. Defenders should not have to delve into the myriad hearings and reports from federal agencies; a CSRB should be doing the hard work of translating the attack into useful information for defenders. But providing actionable information requires a holistic approach to investigations, rather than identifying a single point of failure. In the case of the Equifax breach, the single technical cause was a contractor not patching an Apache Struts vulnerability. A holistic approach to understanding why the breach occurred would examine the organizational missteps, systemic failures, and human errors that resulted in the failure to patch.

Most large organizations have staff working diligently to comply with regulations and protect themselves from problems. No company wants to be singled out and have its name associated with a famous failure. Market incentives mean companies are inclined to avoid going public and afraid of being investigated and singled out. Consider the major breaches in recent history, and then ask yourself: Why does that particular example spring to mind? Is there reason to think that company or agency did worse than others? Or was it a slow news day that caused that incident to stand out?

Our report discusses two important ways to overcome the current incentives to avoid publicizing attacks and to instead learn from breaches. First and foremost, we must avoid rushing to judgment when companies get breached. Companies must make decisions about where to invest their resources, and there will always be a project that could have been funded whose proponents will claim it would have made a difference. Second, policymakers need to make hard decisions about when to require cooperation with investigations. Mandating cooperation is neither unique nor unduly burdensome, and it will likely be necessary to produce meaningful reports. Indeed, companies are already compelled to notify regulators and the public of breaches, and publicly traded companies must report material incidents to investors. It turns out that despite concerns about the reputational costs of a breach, when companies discuss a breach, stock prices barely move, and there may even be a “no press is bad press” effect. It might be sensible to make reporting deadlines a little less stringent in exchange for a more thorough and transparent investigation.

For decades, we’ve been amazed at the ongoing power of internet systems to improve our lives. Society’s mechanisms for understanding and learning from issues and errors have not kept pace. Until we implement an independent body that can review incidents and provide a shared history, we are doomed to relive the same day and the same breaches, over and over and over again.

Adam Shostack is a leading expert on threat modeling and a professor, consultant, author, and game designer.
Tarah Wheeler is a contributing editor to TechStream; a Cyber Project Fellow at the Belfer Center for Science and International Affairs at Harvard University’s Kennedy School of Government; an International Security Fellow at New America, leading a new international cybersecurity capacity-building project with the Hewlett Foundation’s Cyber Initiative; and a US/UK Fulbright Scholar in Cyber Security for the 2020/2021 year.
Victoria Ontiveros is a second year Master in Public Policy candidate at the Harvard Kennedy School, where she focuses on foreign policy and security studies.