A comprehensive and distributed approach to AI regulation

Proposing the Critical Algorithmic Systems Classification (CASC)

August 31, 2023


  • A defining challenge of AI regulation is creating a framework that is comprehensive but still yields rules tailored to the nuances of AI in different applications, such as educational access, hiring, mortgage pricing, rent setting, or healthcare provisioning.
  • This paper proposes a new regulatory approach—the Critical Algorithmic Systems Classification, or CASC—to allow federal regulators to flexibly govern algorithms used in critical socioeconomic determinations.
  • The CASC framework will help preserve crucial consumer and civil rights protections in the algorithmic age, with sensible restrictions and without major changes to the structure of the federal government.
Introduction

While algorithmic systems have become widely used for many impactful socioeconomic determinations, each of these systems is unique to its circumstances. This tension warrants an approach to governing algorithms that is comprehensive yet enables application-specific oversight. To address this challenge, this paper proposes granting two new authorities to key regulatory agencies: (1) administrative subpoena authority for algorithmic investigations, and (2) rulemaking authority for especially impactful algorithms within federal agencies’ existing regulatory purview. This approach requires the creation of a new regulatory instrument, introduced here as the Critical Algorithmic Systems Classification, or CASC. The CASC enables a comprehensive approach to developing application-specific rules for algorithmic systems and, in doing so, maintains longstanding consumer and civil rights protections without necessitating a parallel oversight regime for algorithmic systems.

The Need for Comprehensive and Distributed AI Regulation

Algorithmic decision-making systems (ADSs) are ubiquitous in many critical socioeconomic determinations, including educational access, job discovery and hiring, employee management, consumer financial services, property appraisal, rent setting, tenant screening, medical provisioning, medication approval, and more. A majority of decisions in these crucial applications are affected by, or made entirely by, ADSs. This proliferation of ADSs is a defining issue of modern economic and social policy, with considerable implications for income equality, social mobility, health outcomes, and even life expectancy. While the use of algorithms and data analytics does at times improve social outcomes, many individual and systemic harms have been documented, stemming from erroneous data, algorithmic failures, discriminatory impact, and overestimation of algorithmic capacity.

Some of these socioeconomic determinations are already partially subject to federal law. However, federal agencies are frequently ill-equipped to review and sufficiently regulate the ADSs that fall under their legal authority. Many agencies lack critical capacity regarding algorithmic oversight, including: the authority to require entities to retain data, code, models, and technical documentation; the authority to subpoena those same materials; the technical ability to audit ADSs; and the legal authority to set rules for their use. These limitations are a major barrier to the federal government’s goal of promoting trustworthy and responsible AI, as expressed in documents such as the White House Blueprint for an AI Bill of Rights, the Office of Management and Budget (OMB) Memorandum M-21-06, and the National Institute of Standards and Technology (NIST) AI Risk Management Framework.

While capacity challenges are shared across many federal agencies, the specifics of each ADS—the type of algorithms used, the data they manipulate, the sociotechnical processes they contribute to, and the risks they pose—vary greatly. The role of algorithms in key socioeconomic determinations is so manifold and diverse that it is not feasible or desirable to set all algorithmic standards or enforcement through a centralized process (although some properties, such as disclosure and non-discrimination, may be appropriate universal requirements). This is well demonstrated by the highly detailed and contextually specific nature of the federal rulemakings and guidance proposed so far, including those on hiring algorithms, automated valuation models, and health information technology systems. It is further evidenced by the significant challenges faced by the European Union (EU) in attempting to draft a single comprehensive Artificial Intelligence (AI) Act, which may lead to a legal framework that lacks sufficient tailoring to specific sectors and algorithmic applications. Instead of a centralized process or single set of rules, federal agencies should be granted sufficiently flexible authority to adapt to the bespoke considerations of impactful ADSs in their domains.

Introducing the Critical Algorithmic Systems Classification (CASC)

The proliferation of ADSs in critical socioeconomic determinations is widespread but manifests uniquely in many different contexts. This is a central challenge of AI governance and necessitates a regulatory approach that is comprehensive but also enables application-specific rulemaking and oversight by sectoral agencies. This paper proposes a novel legislative approach to this dual challenge, which would include two key interventions:

  1. Granting administrative subpoena authority for covered agencies to investigate and audit ADSs that affect processes related to each covered agency’s statutory authority.
  2. Creating a new regulatory instrument, the Critical Algorithmic Systems Classification (CASC), to empower covered agencies to issue and enforce regulations on ADSs in critical socioeconomic determinations within each agency’s statutory authority.

These two interventions (jointly called the “CASC approach”) would broadly enable federal agencies to proportionately tackle significant extant and future risks of ADSs that operate within the preexisting scope of U.S. governance. This paper first introduces the key concepts and governance structure of the CASC approach, then discusses its potential advantages and drawbacks.

Key Terms (see the illustrative sketch after this list):

  • Algorithmic Decision-making System (ADS) – any computational process (including those based on statistics, machine learning, artificial intelligence, or other data processing techniques and excluding passive computing infrastructure) whose results serve as a basis or component for a decision or judgment.
  • ADS Category – any number of ADSs, regardless of algorithmic approach or developing entity, that largely play the same role in a process, as determined by a covered agency. ADSs for resume analysis, mortgage pricing, or college admissions could each constitute an ADS category.
  • Critical Algorithmic Systems Classification (CASC) – a legal designation that can be applied to an ADS category through the federal rulemaking process, leading to legally binding and enforceable rules for that ADS category.
  • CASC System – an ADS category that has been designated as a CASC through the proposed federal rulemaking process.
  • Covered agencies – an enumerated list of federal agencies with significant sectoral regulatory roles over socioeconomic determinations, potentially including: the Consumer Financial Protection Bureau, the Department of Labor and the Occupational Safety and Health Administration, the Department of Education, the Equal Employment Opportunity Commission, the Environmental Protection Agency, the Federal Deposit Insurance Corporation, the Federal Housing Finance Agency, the Federal Communications Commission, the Federal Reserve Board, the Department of Health and Human Services, the Department of Housing and Urban Development, the Office of the Comptroller of the Currency, the Securities and Exchange Commission, the Department of the Treasury, and the Department of Veterans Affairs.
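
To make the relationships among these terms concrete, the following is a minimal data-model sketch in Python. It is purely illustrative: every class and field name is hypothetical, introduced here for exposition rather than drawn from the proposal itself.

    from dataclasses import dataclass, field

    @dataclass
    class ADS:
        # Any computational process whose results serve as a basis or
        # component for a decision or judgment.
        name: str
        developer: str  # the developing entity; category membership does not
                        # depend on the developer or the algorithmic approach

    @dataclass
    class ADSCategory:
        # ADSs that largely play the same role in a process, as determined
        # by a covered agency (e.g., resume analysis or mortgage pricing).
        role: str
        determining_agency: str
        members: list[ADS] = field(default_factory=list)
        casc_designated: bool = False  # set through the federal rulemaking process

    # A "CASC system" is then simply an ADSCategory whose casc_designated flag
    # has been set via rulemaking, making rules for that category legally
    # binding and enforceable.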

Enabling Algorithmic Review and Auditing through Administrative Subpoena Authority

Covered agencies would be granted authority to collect necessary data, documentation, and technical artifacts (including code and model objects), as well as to conduct interviews about the development and deployment of ADSs, through administrative subpoenas. Both developers and deployers of ADSs (including vendors and contractors as necessary) could be subject to these administrative subpoenas. Covered agencies would be empowered to use administrative subpoenas to perform algorithmic audits of individual ADSs, perform systemic reviews of the impact of an ADS category, inform the rulemaking process for CASC designation, and enforce rules for CASC systems.

A covered agency could issue administrative subpoenas only for ADSs that significantly affect processes falling within the congressionally delegated authority of that agency. This ensures that covered agencies are narrowly empowered to investigate and audit the ADSs whose function falls primarily within each agency’s statutory regulatory responsibilities, thereby preventing regulatory overreach and mitigating regulatory overlap between agencies. Agencies would also need to provide appropriate notice to the entities developing and/or deploying the ADS; ensure protection of any private data obtained through the subpoena; and avoid disclosure of any trade secrets or intellectual property through the subpoena process or related investigations.

Regulating ADSs through the Critical Algorithmic Systems Classification (CASC)

The proposed CASC would be a new legal designation empowering covered federal agencies to set and enforce rules for qualifying ADSs. Through the federal rulemaking process, a covered agency would have to demonstrate that a category of ADSs meets the legal criteria for the CASC, and in doing so could set and enforce standards for the commercial use of that type of CASC system. The CASC is not intended to widen the scope of federal regulation, but rather to provide sufficient legal authority and regulatory tools for covered agencies to oversee ADSs used within the field of their existing congressionally delegated authority.

Through the federal rulemaking process, an agency would need to demonstrate that a category of ADSs meets three criteria, concerning the risk of harm, the extent of impact, and the scope of existing agency authority, in order to apply the CASC to that ADS category (a sketch of how the criteria combine follows the list below).

  1. Risk of harms to health care access, economic opportunity, or access to essential services: A covered agency must demonstrate that this ADS category can pose risks to health care access, including through health care provisioning, approval, billing, and insurance; to equal opportunity, including equitable access to education, housing, credit, employment, promotion, and other opportunities; or to access to critical resources or services, such as financial services, safety services, emergency services, or social services.
  2. Extent of impact: A covered agency must demonstrate that this ADS category (aggregated across all providers) impacts a significant population based on scale or coverage.
    • Scale – all the deployed ADSs of one category collectively affect more than a significant, specified number of U.S. residents; or
    • Coverage – all the deployed ADSs of one category collectively affect more than 25% of a specific affected population of U.S. residents, such as a protected class or a specific occupation.
  3. Scope of authority: A covered agency must demonstrate that this ADS category is making determinations or affecting processes that are already regulated under the congressionally delegated authority of the covered agency.
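
How these criteria combine can be illustrated with a short sketch in Python. It is a minimal, hypothetical rendering: all names are invented for exposition, and the scale threshold is left as a parameter because the proposal deliberately leaves the "significant and specified number" of affected residents to the enabling legislation.

    from dataclasses import dataclass

    @dataclass
    class ADSCategoryProfile:
        # Facts a covered agency would assemble about an ADS category.
        poses_covered_risk: bool          # criterion 1: risk to health care access,
                                          # equal opportunity, or essential services
        residents_affected: int           # total affected, aggregated across providers
        affected_population_share: float  # share (0.0-1.0) of a specific population,
                                          # e.g., a protected class or occupation
        within_agency_authority: bool     # criterion 3: the process is already regulated
                                          # under the agency's delegated authority

    def meets_casc_criteria(profile: ADSCategoryProfile, scale_threshold: int) -> bool:
        # Criterion 2 is satisfied by either the scale test or the coverage test.
        extent_of_impact = (
            profile.residents_affected > scale_threshold
            or profile.affected_population_share > 0.25
        )
        return (
            profile.poses_covered_risk
            and extent_of_impact
            and profile.within_agency_authority
        )

On this reading, the three criteria are conjunctive (all must hold), while the scale and coverage tests within the extent-of-impact criterion are disjunctive (either suffices).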

By demonstrating that an ADS category meets the CASC criteria, a covered agency would become empowered to set and enforce rules for the commercial development and deployment of those CASC systems. Specifically, covered agencies could establish rules governing the function of CASC systems to mitigate the identified risks, pertaining to the following qualities:

  • Disclosure – informing affected persons about the use of a CASC system.
  • Transparency and explainability – informing affected persons about the computational process that resulted in a specific outcome of a CASC system at both individual and systemic levels.
  • Correction of inaccurate data – enabling affected persons to view and correct input data used as part of a CASC system.
  • Efficacy and robustness – requiring a CASC system to meet quantitative standards of performance as well as undergo relevant testing and evaluation both pre-deployment and through ongoing monitoring during deployment.
  • Non-discrimination – requiring a CASC system to meet standards such that it does not discriminate against, or lead to disparate impact on, any protected class.
  • Data privacy preservation – requiring that CASC systems do not reveal or expose sensitive covered data.
  • Human alternative – requiring the deployers of CASC systems to provide an alternative non-algorithmic process when reasonably justified by an affected person.
  • Storage of data, code, models, and technical documentation – requiring the developers and deployers of CASC systems to maintain data, code, models, and technical documentation relevant to the CASC system for a specified period.

The covered agency would not set rules regarding all the above qualities by default; instead, it would select those pertinent to mitigating the risks established in the CASC rulemaking process (a possible selection is sketched below). Covered agencies would be empowered to seek legal remedies or relief on behalf of affected persons, including injunctions, restitution, and civil penalties, for failure to meet CASC regulations. The authorities for a specific type of CASC system would be exclusive to that agency and could not be duplicated by a different agency. Agency rulemaking for CASC systems would be subject to the rulemaking requirements** under the Administrative Procedure Act, ensuring that the public is informed and that stakeholders are able to contribute to and prepare for new CASC regulations.
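
One way to picture this selection: an agency's rule package for a given CASC system can be thought of as a subset of the eight qualities above. The following Python sketch is hypothetical throughout, including the enum names and the example selection for a hiring-related CASC system.

    from enum import Enum, auto

    class RuleArea(Enum):
        # The qualities over which a covered agency could set rules.
        DISCLOSURE = auto()
        TRANSPARENCY_AND_EXPLAINABILITY = auto()
        CORRECTION_OF_INACCURATE_DATA = auto()
        EFFICACY_AND_ROBUSTNESS = auto()
        NON_DISCRIMINATION = auto()
        DATA_PRIVACY_PRESERVATION = auto()
        HUMAN_ALTERNATIVE = auto()
        RECORD_RETENTION = auto()

    # Hypothetical rule package for a hiring-related CASC system: only the
    # areas pertinent to the risks established in that rulemaking are selected.
    hiring_casc_rules = {
        RuleArea.DISCLOSURE,
        RuleArea.NON_DISCRIMINATION,
        RuleArea.RECORD_RETENTION,
    }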

The administrative subpoena authority and the CASC rulemaking authority would complement one another, enabling a process that generally consists of four stages:

  1. A covered agency discovers and documents a category of ADSs that is related to its statutory authority and may meet the CASC criteria.
  2. The agency employs its administrative subpoena authority to comprehensively review the development, deployment, and impact of this category of ADSs in the market.
  3. If the ADS category is determined to meet the CASC criteria, the agency proceeds through the rulemaking process to designate it as a CASC, using the systemic review to inform rules for the development and use of the ADS.
  4. The agency continues to employ its administrative subpoena authority to monitor the use of the now-CASC-designated ADS, evaluating the need for updates to pertinent rules and, if necessary, ensuring compliance through litigation.

Advantages of the CASC Approach

The CASC is a novel approach to governing algorithms that comprehensively addresses the proliferation of ADSs by enabling sectoral agencies to perform algorithmic audits (through administrative subpoenas) and then issue application-specific regulations (through the process outlined above). This meaningfully distinguishes the CASC from other proposed AI legislation, as it would enable federal agencies to continuously adapt to the growing role of ADSs in crucial socioeconomic determinations under their legal authority.

The CASC approach improves governance in several specific situations: it removes practical obstacles to sensible algorithmic regulation, clarifies uncertainty in legal authorities written before the modern proliferation of ADSs, and addresses the lack of preexisting oversight authority over some ADS categories that affect critical socioeconomic determinations at scale. Some regulatory agencies have a limited mandate to govern ADSs but face practical challenges arising from how ADSs have changed an industry. This is the case for the Equal Employment Opportunity Commission, which presently cannot directly enforce anti-discrimination law against the development and sale of a discriminatory ADS by a vendor. Similarly, vendors of algorithmic credit scores are technically excluded from the Equal Credit Opportunity Act, despite their enormous impact on access to credit. Several key regulatory agencies also lack sufficient administrative subpoena authority to systemically review or audit ADSs.

Further, there are several areas in which preexisting regulatory authority does not expressly and unambiguously apply to ADSs, even though they are inextricably linked to a regulated area. The CASC would make clear that federal agencies can regulate ADSs that impact, for example, federal employment discrimination laws, the Occupational Safety and Health Act, the Fair Housing Act, and other civil rights legislation. Lastly, the CASC approach could enable algorithmic oversight over some ADSs that are not currently supervised but do meet the CASC criteria, such as ADSs for higher education admissions and pricing, which could be governed by the Department of Education.

While plugging these significant gaps, the CASC is also intentionally limited in scope, narrowly addressing a key shortfall of the federal government’s ability to govern commercial ADSs that affect key socioeconomic determinations at a large scale. By relying on administrative subpoenas and the federal rulemaking process, the CASC enables new agency authorities that rest on well-established legal standards for regulation. Therefore, the CASC can be seen as a minimal but impactful intervention to systemically address harms from ADSs in critical socioeconomic determinations.

The fact that the CASC allows for the distributed governing of ADSs by sectoral regulators is a key benefit of this approach, as compared to giving similar authorities to a new agency or solely empowering the Federal Trade Commission. Creating a central algorithmic regulatory agency could potentially lead to two parallel regulatory mechanisms—one for human processes governed by sectoral regulators and one for ADSs governed by an algorithmic regulator. This parallel structure would be constantly challenged by overlapping and intertwined authorities between agencies, as the human and algorithmic components of socioeconomic determinations are inseparable. The central regulator would also lack the necessary domain knowledge of existing sectoral agencies. Further, as ADSs play a larger and larger role in critical socioeconomic decision-making, the workload of the central regulator would expand, while that of the sectoral regulators would shrink, creating a long-term imbalance. While a new regulatory agency warrants consideration for areas such as data privacy and online platform governance, the CASC approach is a better solution for governing ADSs used for critical socioeconomic determinations.

The CASC approach also has significant advantages for ensuring the continued economic and technological leadership of the United States. The extent-of-impact requirement of the CASC would effectively exempt innovative small businesses that are developing new ADSs, because a new category of ADS would not immediately reach the threshold number of affected persons. This would give start-ups room to develop new categories of ADSs while the necessary best practices and safeguards are identified. Further, the CASC criteria ensure that the majority of ADSs (such as those for imagining interior design, offering movie recommendations, or aiding wildlife identification) remain clearly out of scope. This is appropriate, as the societal impact of most ADSs is not sufficient to demand governmental intervention.

The CASC enables regulators to focus on a relatively small number of vendors of ADSs. As more companies shift to ADSs provided by vendors (for tasks such as hiring, worker management, health care allocation, educational access, and others), the regulation of these vendors becomes the point of least friction—enabling significant improvements in the function of ADSs with minimal interference in the market. Requiring accuracy, non-discrimination, and transparency in these ADSs also offers a guarantee of quality to the companies procuring from these vendors, leading to a more efficient market for ADSs.

Passing a comprehensive approach to algorithmic regulation would also send a strong signal to the rest of the world that the U.S. is taking algorithmic risks seriously, and that its technology companies would be responsibly governed. The CASC approach would help ensure that the U.S. becomes the undisputed leader not just in AI, but in trustworthy AI, a reputation that would attract significant global business and investment over the coming decades. This message would also be heard domestically, encouraging the further development of the domestic AI assurance industry, which promises to be an important market in its own right.

There are also meaningful international trade justifications for the CASC approach. The CASC allows for significant regulatory flexibility, enabling better international alignment. This is especially valuable in relation to the EU, which is currently passing a comprehensive regulatory framework for algorithmic systems, the EU AI Act. Strong alignment with the EU on ADSs ensures the continued function of this critical trade relationship while also strengthening regulatory oversight through shared market surveillance, sharing of best practices, and collaboration through international standards bodies.

Lastly, the CASC approach is a relatively “future proof” intervention, in that it enables continuous adaptation by federal agencies to the ongoing emergence of ADSs within covered agencies’ regulatory domains. This approach preempts the need for Congress to establish and routinely update a list of high-risk ADSs over which agencies have certain authorities. Further, this approach recognizes that federal agencies are best placed to prioritize which ADSs, due to their impact and risk of harms, necessitate going through the CASC process.

Shortcomings and Supplementary Interventions

Despite its advantages, the CASC approach has significant drawbacks: it is limited by the pace of the regulatory process, it works retroactively rather than proactively, and it does not address agency capacity issues. Most glaringly, while CASC rulemaking enables significant sectoral specificity in governing ADSs, this proposal is still a generic intervention that is far less tailored than a comprehensive updating of all U.S. civil rights and consumer protection law to address risks from ADSs. Comprehensively updating these laws is an unquestionably better approach to policymaking; however, it may also be politically infeasible, and the CASC approach may function as a workable alternative.

The length of the regulatory process, especially when litigated by affected entities, could be long enough to seriously undermine the efficacy of the CASC approach. Even if many agencies each govern only a relatively small number of CASC systems (e.g., two to five), the multi-year and resource-intensive process of creating new regulations, often interrupted by changes in presidential administrations or priorities, could delay CASC regulations to such a degree as to enable ongoing harms. For the CASC approach to be effective, this rulemaking process may need to be expedited. Further refining and clarifying the definitions in this proposal (especially “risk of harms” and other terms used in the CASC criteria) could also add legal clarity and ease rulemaking. Additionally, the new administrative subpoena authority could be exempted from the Paperwork Reduction Act, enabling easier information gathering for the key step of demonstrating that an ADS category meets the CASC criteria.

One partial solution to this challenge would be for Congress to mandate that federal agencies pass rulemakings for a pre-selected list of ADS categories or, alternatively, to create a test or criteria for evaluating when an ADS category would qualify and thereby require a CASC rulemaking. Proposing a list of existing ADS categories that warrant CASC rulemakings and specifying evaluative criteria that could trigger a CASC rulemaking are both valuable directions for future research.

Another meaningful criticism of the CASC approach is that its dependence on rulemaking means it would be inherently retroactive, lagging behind new categories of ADSs. To address this, the CASC approach could be paired with a rights-based approach that ensures all algorithms meet a few universal requirements, potentially including universal disclosure to affected persons, non-discrimination, and honesty in descriptions and advertising for ADSs. This could be backed by a private right of action, such that individuals are guaranteed, and can privately enforce, basic algorithmic rights.

Lastly, covered agencies would need the expertise and staff capacity to execute the ADS regulations enabled by the new administrative subpoena and CASC rulemaking authorities. Other interventions could be paired with the CASC approach to address this problem, such as expanding funding for federal agencies, supporting technology-expertise hiring programs, or developing centralized resources and expertise to aid federal agencies in regulating ADSs.

Conclusion

The CASC approach is a novel and potentially impactful approach to enabling the comprehensive governance of ADSs through sectoral regulatory agencies and application-specific rulemaking. It benefits from employing existing governance mechanisms, namely administrative subpoena authority and the federal rulemaking process, without necessitating a new agency. Further, this approach has sensible constraints on its scope, while providing a durable approach to governing ADSs in critical socioeconomic decision-making.

However, the CASC approach has meaningful shortcomings: its rulemakings are inherently retroactive, it does not broadly ensure algorithmic rights with respect to ADSs that do not qualify as CASC systems, and it does not resolve capacity issues at federal agencies. To address these limitations, the CASC could be paired with a more general rights-based approach to algorithmic systems as well as additional funding for federal regulatory agencies. Lastly, it is worth caveating that the CASC approach attempts to be a generic solution to algorithmic challenges that are highly diverse and contextualized within many domains, which would likely lead to some inefficiencies in implementation.

Despite these drawbacks, the CASC approach would be a meaningful policy intervention to significantly address the proliferation of ADSs used for critical socioeconomic determinations at scale—a central and unsolved challenge of governing algorithmic systems.

Notes:

**At the time of publication, this report stated: “Agency rulemaking for CASC systems would be subject to the formal rulemaking requirements under the Administrative Procedures Act.” The phrase “formal rulemaking” has a specific definition under the Administrative Procedure Act that the author did not intend; the word “formal” has been removed.