In 2017, the notice-and-comment rulemaking process entered the popular consciousness in a way it never previously had. The reason: The Federal Communications Commission decided to rescind a rule relating to net neutrality that it had only issued a few years prior. The original rule garnered nearly 4 million comments, then the largest response ever. The rescission shattered that record, eliciting almost 22 million comments.
All of a sudden, a fairly obscure mechanism for agency policymaking was the talk of the town. Late night comic John Oliver urged his viewers to weigh in against the rescission, and thousands obliged. And, as an investigation by the New York Attorney General released earlier this year has now revealed, far more underhanded efforts to influence the process also took place. For instance, one college student alone generated 7.7 million comments (i.e., around a third of the total) by using a computer algorithm. And several broadband companies hired so-called lead generators who collectively submitted in excess of 8 million comments (i.e., another third of the total), often by misappropriating living or deceased individuals’ identities and submitting computer-generated comments on their behalf.
The entire ugly episode has undermined confidence in the notice-and-comment process and sparked an investigation by Congress. At the same time, it has laid bare some of the fundamental tensions that have always surrounded public participation in agency rulemaking. When enacting the Administrative Procedure Act, Congress never made entirely clear whether it intended agencies to weigh public opinion as reflected in comments or merely to sift the comments for relevant information. This tension has simmered for years, but it never posed a major problem because the vast majority of rules garnered virtually no public interest.
Even now, most rules still generate a very anemic response. Internet submission has vastly simplified the process of filing a comment, however, and a handful of rules generate “mass comment” responses of hundreds of thousands or even millions of submissions. In these cases, as the net neutrality incident showed, individual commenters and even private firms have begun to manipulate the process by using computer algorithms to generate comments and, in some instances, affix false identities. As a result, agencies can no longer ignore the problem.
Nevertheless, technological progress is not necessarily a net negative for agencies. It also presents extraordinary opportunities to refine the notice-and-comment process and generate more valuable feedback. Moreover, if properly channeled, technological improvements can actually provide the remedies to many of the new problems that agencies have encountered. And other, non-technological reforms can address most, if not all, of the other newly emerging challenges. Indeed, if agencies are open-minded and astute, they can both “democratize” the public participation process, creating new and better tools for ascertaining public opinion (to the extent it is relevant in any given rule), and “technocratize” it at the same time, expanding and perfecting avenues for obtaining expert feedback.
Challenges of new technology
As with many aspects of modern life, technological change that once was greeted with naive enthusiasm has now created enormous challenges. As a recent study for the Administrative Conference of the United States (ACUS), for which I served as a co-consultant, has found, agencies can deploy technological tools to address at least some of these problems. For instance, so-called “deduplication software” can identify and group comments that come from different sources but that contain large blocks of identical text and therefore were likely copied from a common source. Bundling these comments can greatly reduce processing time. Agencies can also undertake various steps to combat unwanted computer-generated or falsely attributed comments, including quarantining such comments and issuing commenting policies discouraging their submission. A recently adopted set of ACUS recommendations partly based on the report offers helpful guidance to agencies on this front.
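To make the idea concrete, here is a minimal sketch of how text-block deduplication might work. It groups comments whose overlapping word windows (“shingles”) are mostly shared, which catches form letters with small personal edits. The window size and similarity threshold are arbitrary illustrative choices, not drawn from any agency’s actual tooling:

```python
import re

def shingles(text, k=8):
    """Return the set of k-word overlapping windows ("shingles") in a comment."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def group_duplicates(comments, threshold=0.8, k=8):
    """Group comments whose shingle sets overlap heavily.

    Two comments join the same group when the Jaccard similarity of their
    shingle sets meets the threshold, i.e., they share most long word runs.
    Returns a list of groups, each a list of comment indices.
    """
    groups = []  # each entry: (representative shingle set, member indices)
    for idx, text in enumerate(comments):
        s = shingles(text, k)
        for rep, members in groups:
            union = len(rep | s)
            if union and len(rep & s) / union >= threshold:
                members.append(idx)
                break
        else:
            groups.append((s, [idx]))
    return [members for _, members in groups]
```

A form letter and a copy with a short addition land in one group, while a substantively distinct comment stays separate; an agency reviewer then need only read one representative per group.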
Unfortunately, as technology evolves, new challenges will emerge. As noted in the ACUS report, agencies are relatively unconcerned with duplicate comments since they possess the technological tools to process them. Yet artificial intelligence has evolved to the point that computer algorithms can produce comments that are both indistinguishable from human comments and at least facially appear to contain unique and relevant information. In one recent study, an algorithm generated and submitted 1,001 “deepfake” comments in connection with an agency rulemaking, and the officials screening the comments were unable to flag them as computer-generated.
Imagine if an algorithm were to comb the literature on the subject of a particular rule and submit millions of distinct comments containing seemingly relevant arguments and citations. At least at present agency staffing levels, this would crash the entire notice-and-comment process. Agencies are not, as a general matter, equipped to sift through such a large volume of distinct comments. Yet existing law, which requires the agency to consider the “relevant matter presented” in comments, arguably mandates that agencies actually consider and respond to such submissions, at pain of being sued if they fail to do so. The fact that a human being did not generate the comments is arguably irrelevant (especially since a human being produced the algorithm). Congress could address this issue by amending the statute to clarify that agencies need only consider comments composed by actual human beings, though artificial intelligence may eventually evolve to the point that an algorithm could generate a sophisticated and relevant comment that an agency should take into account.
Democratizing the rulemaking process
If technology is the problem, however, it also may provide at least part of the solution. On the one hand, the internet has made it far easier to comment on individual rules, and members of the public clearly expect that the agency will take their opinions into account. Though agencies have always been clear that notice and comment is not an “up-or-down vote,” they often treat it as having a democratic element. Indeed, agencies have occasionally cited the percentage of comments favoring a particular approach as evidence in favor of pursuing it, and the popular response clearly affects how members of Congress and other political actors perceive particular rules. Moreover, regardless of the official line on the purpose of notice and comment, it is abundantly clear that most members of the public view it as something like a plebiscite. Why else would John Oliver mount a mass comment campaign or public influence firms mine the internet for names to falsely affix to auto-generated comments?
The problem with treating the rulemaking process as a vote is that soliciting public comment is a uniquely terrible way of ascertaining public opinion. Anyone with even rudimentary training in survey design knows that allowing participants to opt in ensures that the result is wildly unrepresentative. People with extreme views are the ones who deem it worthwhile to participate. Prior to the rise of the internet, however, soliciting paper comments was the only cost-effective way of hearing from the public.
If agencies are willing to reconceive the commenting process, new technology offers at least a partial solution to this problem. Though the APA mandates that agencies undertake notice and comment for any legislative rule, it does not foreclose the possibility of supplementing it. In those instances in which an agency deems public opinion to be relevant to its decision, it can employ a variety of technology-enabled tools to ensure that it is getting a truer sense of popular sentiment than a mere comment solicitation could provide.
One possible approach offered by Connor Raso and Bruce Kraus would involve allowing the “upvoting” of comments. Rather than taking the time to file a comment herself, an interested party could do the equivalent of “liking” someone else’s comment. This reduces the barrier to entry and increases the likelihood that a more demographically and ideologically representative sample of individuals participates. At the same time, it is also susceptible to manipulation, as a clever programmer could easily design an algorithm to repeatedly “like” (or “dislike”) her own or someone else’s comment, and agencies would likely need to consider implementing a reCAPTCHA system or some other mechanism to ensure that the submissions are genuine and unique.
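A minimal sketch of the kind of server-side safeguard such an upvoting system would need: one vote per verified submitter per comment. All names here are illustrative, and a real deployment would pair this with an upstream identity check (a CAPTCHA, a verified email, or similar) that yields a stable submitter ID:

```python
class UpvoteRegistry:
    """Track upvotes, rejecting repeat votes by the same submitter.

    Assumes some upstream verification step has already produced a
    stable submitter_id; this class only enforces one-vote-per-comment.
    """

    def __init__(self):
        self._votes = {}  # comment_id -> set of submitter_ids

    def record_vote(self, comment_id, submitter_id):
        """Record a vote; return True if counted, False if a duplicate."""
        voters = self._votes.setdefault(comment_id, set())
        if submitter_id in voters:
            return False  # same submitter already voted on this comment
        voters.add(submitter_id)
        return True

    def tally(self, comment_id):
        """Return the number of distinct verified voters for a comment."""
        return len(self._votes.get(comment_id, set()))
```

The design choice worth noting is that duplicate rejection happens per verified identity, not per request, which is what blunts the “algorithm that repeatedly likes its own comment” attack described above.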
Another possible approach that I have explored in a different context would involve convening a citizen advisory panel. The agency could assemble a demographically representative group of citizens, provide them with briefing materials, and then ask them to offer their input on the issue the agency is considering. To ensure that the agency achieves demographic representativeness and avoids conflicts of interest, the agency could either task an independent team of officials with assembling the panel or contract out the function to a private firm. This approach ensures that the public perspective is not only representative but also well-informed. Of course, agencies have always been able to convene citizen advisory committees, but the cost of doing so was once prohibitive (around $200,000 per year, by my estimate). One of the few silver linings of the COVID-19 pandemic, however, is the fact that videoconferencing has become far more sophisticated and far cheaper. Agencies are already using videoconferencing to expand participation in other contexts, and convening an advisory committee by Zoom or another online platform would eliminate virtually all associated expenses other than staff time.
A third approach is a public opinion poll. In many ways, this is the most attractive and cost-effective approach, and it does not require any advances in technology. However, it also poses the most legal challenges. Under the Paperwork Reduction Act (PRA), agencies must undergo an elaborate, months-long approval process prior to circulating any survey instrument to ten or more individuals. To the extent agencies express interest in conducting such polls, Congress might consider amending the PRA to facilitate this process.
By taking any of these approaches, agencies could alleviate some of the pressure on the notice-and-comment process. No one could credibly attack an agency for ignoring public opinion expressed in comments in favor of public opinion ascertained by a much more reliable mechanism. Interest groups and even individual commenters would hopefully take notice and modify their behavior accordingly, abandoning efforts to flood the notice-and-comment process, which would become increasingly fruitless, and instead focusing on engaging with the more constructive methods of public participation.
While “technocratizing” it at the same time
Of course, notice-and-comment is not exclusively or even primarily a democratic process. Most regulatory scholars and practitioners view it as a technocratic process, in which the agency is gathering decentralized knowledge from experts dispersed throughout society. Indeed, most rules deal with esoteric subjects that are of little to no interest to the general public, which often lacks any particularly relevant information to contribute. Here, too, technology can actually improve the notice and comment process and render it more valuable to the agency and stakeholder communities alike.
First, much as virtual meeting platforms facilitate the convening of citizen advisory panels, the same technology could be used to assemble panels of technical experts. Rather than attempting to hold an in-person discussion among the key players, which can become prohibitively expensive when the leading experts must be flown in from throughout the country, agencies could simply arrange virtual meetings. In so doing, agencies must be careful not to trigger the Federal Advisory Committee Act, which they can avoid by having the experts offer individual input rather than collaborate on a group recommendation.
Second, AI tools can help agencies process the comments they do receive. Consulting firms such as Deloitte have developed programs that allow agencies to sort comments based on subject matter, sentiment, and other dimensions. These tools go well beyond deduplication software and actually allow agencies to process conceptually distinct comments efficiently.
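For illustration, here is a toy sketch of the kind of triage such tools perform, tagging each comment with topics and a stance. The keyword lists are invented for the example; commercial systems like those described above rely on trained classifiers rather than hand-built word lists:

```python
import re

# Illustrative keyword lists only; a production system would use trained models.
TOPIC_KEYWORDS = {
    "economic impact": {"cost", "price", "investment", "jobs"},
    "legal authority": {"statute", "authority", "jurisdiction"},
}
SUPPORT_WORDS = {"support", "approve", "favor"}
OPPOSE_WORDS = {"oppose", "against", "reject"}

def triage(comment):
    """Tag a comment with matched topics and an overall stance.

    Topics come from keyword overlap; stance is "support" or "oppose"
    only when the comment's wording points one way, else "unclear".
    """
    words = set(re.findall(r"\w+", comment.lower()))
    topics = sorted(t for t, kws in TOPIC_KEYWORDS.items() if words & kws)
    if words & SUPPORT_WORDS and not words & OPPOSE_WORDS:
        stance = "support"
    elif words & OPPOSE_WORDS and not words & SUPPORT_WORDS:
        stance = "oppose"
    else:
        stance = "unclear"
    return {"topics": topics, "stance": stance}
```

Even this crude version shows why such sorting goes beyond deduplication: two textually unrelated comments can be routed to the same reviewer because they raise the same substantive issue.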
As technology continues to evolve and computer-generated comments become more prevalent, agencies will likely need to abandon the practice of having human beings review every comment and rely primarily or even exclusively upon these sorts of tools to perform the initial screening function. A human will, of course, need to review the output and decide how to proceed, but some use of AI is likely to become a practical necessity. Law firms already make extensive use of AI in the eDiscovery context, and there is no reason why agencies cannot or should not follow suit.
Third, and relatedly, as agencies begin to deploy AI screening tools, computer-generated comments could become an asset rather than a liability. Bridget Dooling and Michael Livermore (two of the other consultants for the previously mentioned Administrative Conference project) have argued that computer algorithms could alert potential commenters to topics of interest, screen rule text to identify technical errors, or even comb the technical literature and prepare a credible comment to submit to an agency. To the extent that a computer generates a comment that consists of nothing more than credible-sounding gibberish (e.g., furnishing studies that do not stand for the propositions for which they are cited), the agency’s algorithm should be able to identify the comment’s logical flaws and discount it. If, on the other hand, an external algorithm formulates a legitimate argument, the agency’s algorithm should flag it and call it to the human decisionmaker’s attention. In this sense, the agency can actually harness the intelligence of algorithms, which surpasses human intelligence in certain key respects, and leverage the expertise of private sector players who write the algorithms.
If technology runs the risk of breaking the notice-and-comment process as we know it, it appears to be a case of creative destruction. On the one hand, agencies will almost certainly be overwhelmed with technological innovations that challenge their ability to continue with business as usual in the notice and comment space. On the other hand, technology offers a solution to at least some of those very problems and otherwise enhances the process in a way that makes it more useful and satisfactory to agencies, key institutional players, and the general public alike.
At present, traditional notice-and-comment is serving neither the democratic nor the technocratic function especially well, at least for high-profile rules. The circus surrounding mass commenting campaigns both frustrates everyday citizens, who feel that their views are being ignored, and drowns out information-rich comments with masses of submissions that contain nothing more than simple expressions of opinion. By thoughtfully deploying new technologies and carefully deciding which rules would benefit from a clearer sense of public opinion and which from a richer array of expert inputs (including those from non-human experts), agencies can re-tool an outmoded process for the 21st century while allowing it to more effectively accomplish the twin goals it was originally designed to serve.
ACUS disclaims responsibility for any private publication or statement of any ACUS employee. The article expresses the author’s views and does not necessarily reflect those of ACUS, the federal government, or the Brookings Institution. The author did not receive any financial support from any organization or person for this article or from any organization or person with a financial or political interest in this article. He is currently not an officer, director, or board member of any organization with a financial or political interest in this article.