
America’s anti-hacking laws pose a risk to national security

When the Supreme Court handed down its decision in Van Buren v. United States, cybersecurity professionals nationwide breathed a sigh of relief. Asked to determine the scope of the United States’ main federal anti-hacking law, the court adopted a narrow interpretation of the Computer Fraud and Abuse Act (CFAA). Had the ruling come out differently, it could have created more risk for so-called “white hat” hackers who search for flaws in software as a public service.

But even after Van Buren, white hats continue to face some lingering legal uncertainty under the CFAA and other laws. Meanwhile, the United States faces nothing short of a cybersecurity crisis, and U.S. authorities have begun to acknowledge that “black hat” hackers (particularly those overseas) appear largely unmoved by the threat of prosecution. That is, the specter of liability may be discouraging white hats from doing innocuous or beneficial security research, without meaningfully deterring malicious hacking. This topsy-turvy state of the law—and those who wield it as a cudgel to threaten researchers—is a weakness in U.S. national security.

The CFAA, which prohibits accessing a computer “without authorization” or in a way that “exceeds authorized access,” was enacted in the 1980s after President Reagan got spooked by the 1983 Matthew Broderick movie “WarGames.” It has long been feared and criticized for its broad, vague language. When interpreted expansively by prosecutors, private plaintiffs, and courts, the statute has been stretched to encompass activities that bear little resemblance to what most of us would think of as “computer hacking,” from cyberbullying to rapidly downloading academic articles from a database. Until last month’s Supreme Court decision, “[y]ou could indict a ham sandwich with the CFAA,” as Jeff Moss, the founder of the Black Hat and DEF CON security conferences, once quipped. Van Buren marked the first time the court weighed in on the scope of the law since its enactment more than three decades ago.

While the ruling left some issues unresolved (one footnote in particular has perplexed legal scholars), it did definitively answer one key question. Tasked with deciding whether breaking a contractual agreement about permissible computer usage—such as an employer’s acceptable-use policy for a work computer or a website’s terms of service (TOS)—also violates the CFAA, the court sensibly said no. To conclude otherwise, it reasoned, “would attach criminal penalties to a breathtaking amount of commonplace computer activity,” such as checking sports scores on one’s work machine or “embellishing” one’s profile on a dating app. The court also recognized that a broad reading of the CFAA would ensnare research that, while harmless, involves violating websites’ TOS. It cited a recent legal challenge to the CFAA brought by social-science researchers who feared prosecution for researching job and housing discrimination online using fake profiles (which would contravene the applicable TOS). Those social scientists joined other longtime CFAA reform advocates in welcoming the court’s decision, which was hailed as “especially good news for security researchers.”

For years, the CFAA and another law, the Digital Millennium Copyright Act (DMCA), have cast a pall of legal uncertainty over white-hat hackers’ work. Section 1201 of the DMCA forbids the circumvention of technological access-control measures for copyrighted works. Researchers risk violating this provision when searching for security flaws in consumer technologies, as Princeton Professor Ed Felten (who has personally been threatened with legal action under the DMCA) explained in 2013. The DMCA does allow some limited security testing, and a convoluted rulemaking process held every three years has yielded temporary exemptions for “good-faith security research.” But these are imperfect protections: Qualifying for the DMCA’s security-testing exception can be complex and is contingent upon not violating the CFAA. Plus, the exemptions are not permanent, meaning researchers must ask for them to be re-upped (or expanded) every three years. Success isn’t guaranteed, which seriously interferes with long-term planning and investment in security research. These shortcomings of the DMCA, which were not at issue in Van Buren, remain in place after the ruling.

The DMCA’s convoluted process for granting exemptions, together with the CFAA’s historical overbreadth and vagueness, has produced a significant chilling effect on security research in the United States. One 2017 study found that companies are generally unwilling to grant security researchers permission to audit their products and that researchers are significantly concerned about the legal threats they face. A study the following year by the Center for Democracy and Technology (CDT) documented these concerns in greater detail. Even students who are just starting out in their careers may feel the chill: MIT runs a legal clinic with Boston University’s law school just to counsel student researchers on legal risk and to help them respond to actual or threatened litigation.

This cloud of FUD (fear, uncertainty, and doubt) may prevent important security research from taking place, drive researchers out of the field of cybersecurity, and dissuade others from entering it in the first place. But America can ill afford to lose good talent in this area. The U.S. already faces a cybersecurity labor shortage of hundreds of thousands of jobs, with estimates ranging from about 350,000 to almost half a million. Globally, the shortfall exceeds three million.

Meanwhile, the United States has been rocked by a mounting ransomware crisis and a devastating series of hacks, such as the attack on Microsoft Exchange and Russia’s far-reaching SolarWinds hack. Critical infrastructure is no longer being spared, with malicious hackers targeting municipal water treatment plants, hospitals, grocery stores, and the supply chains for gasoline and beef. No wonder the Biden administration made improving the nation’s cybersecurity a “top priority” in its first six months.

The dire state of both public- and private-sector cybersecurity is a national emergency. The unnecessary barriers that hamper professional entry and retention in the cybersecurity field deserve to be viewed as a matter of national security. One major barrier is the continuing chilling effect of legal risk on good-faith cybersecurity research. Van Buren lowered that barrier somewhat, but there is much more to be done. That’s why many commentators tempered their celebrations of the decision with cautionary notes. As CDT put it, “the Court’s decision does not remove all ambiguity surrounding the CFAA,” leaving open questions that browser maker Mozilla predicted “will likely have to be settled via litigation over the coming years.” Per the Cato Institute, “[i]t remains to be seen” whether Van Buren will put an end to “private misuse” of the CFAA.

Where might such private misuse come from post-Van Buren? Immediately after the ruling, companies eager to dissuade security researchers from examining their products offered creative interpretations of the ruling’s impact on the CFAA. Case in point: mobile voting app company Voatz, which is perhaps best known for the time it referred a college student to state authorities for doing research unflattering to the company. After Van Buren came out, Voatz scrambled to downplay the ruling, telling researchers that they can still be prosecuted “even if your purpose is noble” and that “the safest bet” is for researchers to get companies’ buy-in for their research. Put another way, Voatz believes companies should get to control the research process, which, conveniently, would let them hide security flaws from public scrutiny.

This “my way or the highway” view of the CFAA is disingenuous given Voatz’s track record and embellishes the Supreme Court’s interpretation of the law, which declined to hinge CFAA liability on the whims of private parties. Nevertheless, it shows how little has changed in the case’s aftermath, as Van Buren’s ambiguities leave wiggle room for unscrupulous organizations like Voatz to keep intimidating security researchers.

Amid a worsening cybersecurity crisis, a threat to good-faith researchers is a threat to national security. Washington needs to remove the impediments that prevent would-be defenders from stepping up to help. What does that look like? For one, making up the jobs shortfall. As the Department of Homeland Security has recognized, this will require a multi-pronged approach: diversifying the candidate pipeline, investing in workforce training initiatives, and reducing other barriers to entry, including by countering systemic racism and welcoming nontraditional career paths and autodidacts. DHS recently concluded a successful cybersecurity hiring spree, though there are still years of brain drain left to undo across the executive branch.

Another key priority must be updating federal statutes to conclusively dispel their chill on research. That requires legislative action by Congress. It is slow and inefficient to set national policy by running individual court cases up the flagpole and hoping the Supreme Court takes them. The same is true of the DMCA’s triennial rulemaking process, which is a terrible way to encourage serious, long-term investment in desperately needed security research. Congress should reform the CFAA and the DMCA, or even pass a new standalone law, to establish clear, strong, and lasting protections for good-faith security research activities.

Various proposals have been put forth over the years for how to do that. A 2018 proposal by Thyla van der Merwe and Daniel Etcovitch suggests codifying a statutory safe harbor under the CFAA and DMCA that borrows from the 2018 DMCA Section 1201 exemptions’ definition of “good-faith security research.” More recently, the cybersecurity firm Rapid7 suggested a legislative amendment that would create an affirmative defense for good-faith security researchers in CFAA civil lawsuits. These proposals aren’t perfect (nor is that 2018 DMCA exemption), but they’re a solid starting point.

Ideally, qualifying for statutory protections should be straightforward and should not require researchers to jump through hoops to prove what color their proverbial hat is. But that’s easier said than done. Commentators crafting legislative proposals have struggled to define what constitutes good-faith security research without being either over- or underinclusive. A big sticking point has been the concern that too lax a definition of white-hat activity would let black hats evade accountability. That is, those who want to reform the CFAA and DMCA recognize that the laws’ scope is too broad, but they fear that protecting beneficial activity would cut too far in the other direction.

It is a peculiarly American phenomenon that some of the major obstacles to fixing our cybersecurity emergency are an obsession with private property rights at the expense of the common good, an urge to punish those who point out one’s flaws, and a fear of criminalizing too few people. But those misplaced priorities are what got us here, and the current paradigm is not working. A different approach is urgently needed, as a matter of national security. Congress must clear the legal minefield that imperils good-faith security researchers.

Riana Pfefferkorn is a research scholar at the Stanford Internet Observatory.

Microsoft provides financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research.