While algorithmic systems have become widely used for many impactful socioeconomic determinations, these algorithms are unique to their circumstances. This challenge warrants an approach to governing algorithms that comprehensively enables application-specific oversight. To address this challenge, this paper proposes granting two new authorities for key regulatory agencies: (1) administrative subpoena authority for algorithmic investigations, and (2) rulemaking authority for especially impactful algorithms within federal agencies’ existing regulatory purview. This approach requires the creation of a new regulatory instrument, introduced here as the Critical Algorithmic Systems Classification, or CASC. The CASC enables a comprehensive approach to developing application-specific rules for algorithmic systems and, in doing so, maintains longstanding consumer and civil rights protections without necessitating a parallel oversight regime for algorithmic systems.
Algorithmic decision-making systems (ADSs) are ubiquitous in many critical socioeconomic determinations—including educational access, job discovery and hiring, employee management, consumer financial services, property appraisal, rent setting, tenant screening, medical provisioning, medication approval, and more.1 The majority of all decisions in these crucial applications are affected by or made entirely by ADSs. This proliferation of ADSs is a defining issue of modern economic and social policy, with considerable implications for income equality, social mobility, health outcomes, and even life expectancy. While the use of algorithms and data analytics does at times improve social outcomes, many individual and systemic harms have been documented from erroneous data, algorithmic failures, discriminatory impact, and overestimation of algorithmic capacity.
Some of these socioeconomic determinations are already partially subject to federal law. However, federal agencies are frequently ill-equipped to review and sufficiently regulate the ADSs that fall under their legal authority. Many agencies lack critical capacity regarding algorithmic oversight, including: the authority to require entities to retain data, code, models, and technical documentation; the authority to subpoena those same materials; the technical ability to audit ADSs; and the legal authority to set rules for their use. These limitations are a major barrier to the federal government’s goal of promoting trustworthy and responsible AI, as expressed in documents such as the White House Blueprint for an AI Bill of Rights, the Office of Management and Budget (OMB) Memorandum M-21-06, and the National Institute of Standards and Technology (NIST) AI Risk Management Framework.2
While capacity challenges are shared across many federal agencies, the specifics of each ADS—the type of algorithms used, the data they process, the sociotechnical processes they contribute to, and the risks they pose—vary greatly. The role of algorithms in key socioeconomic determinations is so manifold and diverse that it is not feasible or desirable to set all algorithmic standards or enforcement through a centralized process (although some properties, such as disclosure and non-discrimination, may be appropriate universal requirements). This is well demonstrated by the highly detailed and contextually specific nature of the federal rulemakings and guidance that have been proposed so far, including those on hiring algorithms, automated valuation models, and health information technology systems.3 It is further evidenced by the significant challenges faced by the European Union (EU) in attempting to draft a single comprehensive Artificial Intelligence (AI) Act, which may lead to a legal framework that lacks sufficient tailoring to specific sectors and algorithmic applications. Instead of a centralized process or single set of rules, federal agencies should be granted sufficiently flexible authority to adapt to the bespoke considerations of impactful ADSs in their domain.
The proliferation of ADSs in critical socioeconomic determinations is widespread but manifests uniquely in many different contexts. This is a central challenge of AI governance and necessitates a regulatory approach that is comprehensive but also enables application-specific rulemaking and oversight by sectoral agencies.4 This paper proposes a novel legislative approach to this dual challenge, which would include two key interventions:
- Granting administrative subpoena authority for covered agencies to investigate and audit ADSs that affect processes related to each covered agency’s statutory authority.
- Creating a new regulatory instrument, the Critical Algorithmic System Classification (CASC), to empower covered agencies to issue and enforce regulations on ADSs in critical socioeconomic determinations within each agency’s statutory authority.
These two interventions (jointly called the “CASC approach”) would broadly enable federal agencies to proportionately tackle significant extant and future risks of ADSs that operate within the preexisting scope of U.S. governance. This paper first introduces the key concepts and governance structure of the CASC approach, then discusses its potential advantages and drawbacks.
- Algorithmic Decision-making System (ADS) – any computational process (including those based on statistics, machine learning, artificial intelligence, or other data processing techniques and excluding passive computing infrastructure5) whose results serve as a basis or component for a decision or judgment.6
- ADS Category – any number of ADSs, regardless of algorithmic approach or developing entity, that largely play the same role in a process, as determined by a covered agency. ADSs for resume analysis, mortgage pricing, or college admissions could each potentially be an ADS category.
- Critical Algorithmic System Classification (CASC) – a legal designation that can be applied to an ADS category through the federal rulemaking process, leading to legally binding and enforceable rules for that ADS category.
- CASC System – an ADS category that has been designated as CASC through the proposed federal rulemaking process.
- Covered agencies – an enumerated list of federal agencies with significant sectoral regulatory roles over socioeconomic determinations, potentially including: the Consumer Financial Protection Bureau, the Department of Labor and the Occupational Safety and Health Administration, the Department of Education, the Equal Employment Opportunity Commission, the Environmental Protection Agency, the Federal Deposit Insurance Corporation, the Federal Housing Finance Agency, the Federal Communications Commission, the Federal Reserve Board, the Department of Health and Human Services, the Department of Housing and Urban Development, the Office of the Comptroller of the Currency, the Securities and Exchange Commission, the Department of the Treasury, and the Department of Veterans Affairs.7
Enabling Algorithmic Review and Auditing through Administrative Subpoena Authority
Covered agencies would be granted authority to collect necessary data, documentation, and technical artifacts (including code and model objects), as well as conduct interviews about the development and deployment of ADSs, through administrative subpoenas.8 Both developers and deployers (including vendors and contractors as necessary) of ADSs could be subject to the administrative subpoenas. Covered agencies would be empowered to use these administrative subpoenas to perform algorithmic audits of individual ADSs, perform systemic reviews of the impact of an ADS category, inform the rulemaking process for CASC designation, and enforce rules for CASC systems.
A covered agency may only issue administrative subpoenas for ADSs that significantly affect processes falling within the congressionally delegated authority of that covered agency. This ensures that covered agencies are narrowly empowered to investigate and audit the ADSs whose function falls primarily within each agency’s statutory regulatory responsibilities, thereby preventing regulatory overreach and mitigating regulatory overlap between agencies. Agencies would also need to provide appropriate notice to the entities developing and/or deploying the ADS; ensure protection of any private data obtained through the subpoena; and avoid disclosure of any trade secrets or intellectual property through the subpoena process or related investigations.
Regulating ADSs through a Critical Algorithmic System Classification (CASC)
The proposed CASC would be a new legal designation that would empower covered federal agencies to set and enforce rules over qualifying ADSs. Through the federal rulemaking process, a covered agency would have to demonstrate that a category of ADS meets the legal criteria for the CASC, and in doing so could set and enforce standards for the commercial use of that type of CASC system.9 The CASC is not intended to widen the scope of federal regulation, but rather to provide sufficient legal authority and regulatory tools for covered agencies to oversee ADSs used within the scope of their existing congressionally delegated authority.
Through the federal rulemaking process, an agency would need to demonstrate that a category of ADSs meets three criteria related to the risk of harm, the extent of impact, and the scope of existing agency authority in order to apply the CASC to that ADS category.
- Risk of harms to health care access, economic opportunity, or access to essential services: A covered agency must demonstrate that this ADS category can pose risks to health care access,10 including through health care provisioning, approval, billing, and insurance; to equal opportunities, including equitable access to education, housing, credit, employment, promotion, and other opportunities; or to access to critical resources or services, such as financial services, safety services, emergency services, or social services.11
- Extent of impact: A covered agency must demonstrate that this ADS category (aggregated across all providers) impacts a significant population of people based on scale or coverage.
- Scale – all the deployed ADSs of one category collectively affect more than a significant, specified number of U.S. residents;12 or
- Coverage – all the deployed ADSs of one category collectively affect more than 25% of a specific affected population of U.S. residents, such as a protected class or a specific occupation.
- Scope of authority: A covered agency must demonstrate that this ADS category is making determinations or affecting processes that are already regulated under the congressionally delegated authority of the covered agency.
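As a purely illustrative sketch, the extent-of-impact test above can be expressed as a simple decision rule. The 500,000 scale threshold below is only the placeholder figure discussed in the endnotes, and every name in this snippet is hypothetical rather than drawn from any statute or proposal text:

```python
# Hypothetical sketch of the CASC extent-of-impact test.
# The scale threshold of 500,000 is a placeholder value (see the endnote
# on threshold numbers); the coverage threshold of 25% is from the text.

SCALE_THRESHOLD = 500_000    # total U.S. residents affected, all providers combined
COVERAGE_THRESHOLD = 0.25    # share of a specific affected population

def meets_extent_of_impact(total_affected: int,
                           affected_in_population: int,
                           population_size: int) -> bool:
    """Aggregate impact across all deployed ADSs of one category."""
    meets_scale = total_affected > SCALE_THRESHOLD
    meets_coverage = (population_size > 0 and
                      affected_in_population / population_size > COVERAGE_THRESHOLD)
    # Either prong is sufficient: the criteria are disjunctive.
    return meets_scale or meets_coverage
```

Note that the counts are aggregated across all providers of an ADS category, so no single vendor needs to cross a threshold on its own for the category to qualify.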
By demonstrating that an ADS category meets the CASC criteria, a covered agency would become empowered to set and enforce rules for the commercial development and deployment of those CASC systems.13 Covered agencies could establish rules for the function of CASC systems to mitigate the identified risks, specifically pertaining to the following qualities:
- Disclosure – informing affected persons about the use of a CASC system.
- Transparency and explainability – informing affected persons about the computational process that resulted in a specific outcome of a CASC system at both individual and systemic levels.
- Correction of inaccurate data – enabling affected persons to view and correct input data used as part of a CASC system.
- Efficacy and robustness – requiring a CASC system to meet quantitative standards of performance as well as undergo relevant testing and evaluation both pre-deployment and through ongoing monitoring during deployment.
- Non-discrimination – requiring a CASC system to meet standards such that it does not discriminate or lead to disparate impact based on any protected class.14
- Data privacy preservation – requiring that CASC systems ensure they do not reveal or expose sensitive covered data.15
- Human alternative – requiring the deployers of CASC systems to provide an alternative non-algorithmic process when reasonably justified by an affected person.
- Storage of data, code, models, and technical documentation – requiring the developers and deployers of CASC systems to maintain data, code, models, and technical documentation relevant to the CASC system for a specified period.
The covered agency would not set rules regarding all the above qualities by default, but instead would select those pertinent to mitigating the risks established in the CASC rulemaking process. Covered agencies would be empowered to seek legal remedies or relief on behalf of affected persons, including injunction, restitution, and civil penalties, for failure to meet CASC regulations. The authorities for a specific type of CASC system would be exclusive to that agency and could not be duplicated by a different agency.16 Agency rulemaking for CASC systems would be subject to the rulemaking requirements** under the Administrative Procedure Act, ensuring that the public is informed and that stakeholders are able to contribute to and prepare for new CASC regulations.
The administrative subpoena authority and the CASC rulemaking authority would complement one another, enabling a process that generally consists of four stages:
- A covered agency discovers and documents a category of ADSs that is related to its statutory authority and may meet the CASC criteria.
- The agency employs its administrative subpoena authority to comprehensively review the development, deployment, and impact of this category of ADSs in the market.
- If determined to meet the CASC criteria, the agency proceeds through the rulemaking process to designate the ADS category as a CASC, using the systemic review to inform rules for the development and use of the ADS.
- The agency continues to employ its administrative subpoena authority to monitor the use of the now-designated CASC system, evaluating the need for updates to pertinent rules and, if necessary, ensuring compliance through litigation.
The CASC is a novel approach to governing algorithms that comprehensively addresses the proliferation of ADSs by enabling sectoral agencies to perform algorithmic audits (through administrative subpoenas) and then issue application-specific regulations (through the process outlined above). This meaningfully distinguishes the CASC from other proposed AI legislation, as it will enable federal agencies to continuously adapt to the growing role of ADSs in crucial socioeconomic determinations under their legal authority.17
The CASC approach improves governance in several specific situations, including removing practical obstacles to sensible algorithmic regulation, clarifying uncertainty in legal authorities written before the modern proliferation of ADSs, and addressing the lack of preexisting oversight authorities over some ADS categories that affect critical socioeconomic determinations at scale. Some regulatory agencies have a limited mandate to govern ADSs but face practical challenges arising from how ADSs have changed an industry. This is the case for the Equal Employment Opportunity Commission, which presently cannot directly enforce anti-discrimination law against a vendor’s development and sale of a discriminatory ADS.18 Similarly, vendors of algorithmic credit scores are technically excluded from the Equal Credit Opportunity Act, despite their enormous impact on access to credit.19 Several key regulatory agencies also lack sufficient administrative subpoena authority to systemically review or audit ADSs.
Further, there are several areas in which preexisting regulatory authority does not expressly and unambiguously apply to ADSs, even though they are inextricably linked to a regulated area. The CASC would make clear that federal agencies can regulate ADSs that impact, for example, federal employment discrimination laws, the Occupational Safety and Health Act, the Fair Housing Act, and other civil rights legislation. Lastly, the CASC approach could enable algorithmic oversight over some ADSs that are not currently supervised but do meet the CASC criteria, such as ADSs for higher education admissions and pricing, which could be governed by the Department of Education.20
While plugging these significant gaps, the CASC is also intentionally limited in scope, narrowly addressing a key shortfall of the federal government’s ability to govern commercial ADSs that affect key socioeconomic determinations at a large scale. By relying on administrative subpoenas and the federal rulemaking process, the CASC enables new agency authorities that rest on well-established legal standards for regulation. Therefore, the CASC can be seen as a minimal but impactful intervention to systemically address harms from ADSs in critical socioeconomic determinations.
A key benefit of the CASC approach is that it distributes the governance of ADSs across sectoral regulators, as compared to granting similar authorities to a new agency or solely empowering the Federal Trade Commission. Creating a central algorithmic regulatory agency could lead to two parallel regulatory mechanisms—one for human processes governed by sectoral regulators and one for ADSs governed by an algorithmic regulator. This parallel structure would be constantly challenged by overlapping and intertwined authorities between agencies, as the human and algorithmic components of socioeconomic determinations are inseparable. The central regulator would also lack the necessary domain knowledge of existing sectoral agencies. Further, as ADSs play an ever-larger role in critical socioeconomic decision-making, the workload of the central regulator would expand while that of the sectoral regulators would shrink, creating a long-term imbalance. While a new regulatory agency warrants consideration for areas such as data privacy and online platform governance, the CASC approach is a better solution for governing ADSs used for critical socioeconomic determinations.
The CASC approach also has significant advantages for ensuring the continued economic and technological leadership of the United States. The extent-of-impact requirement of the CASC would act as an exemption for innovative small businesses that are developing new ADSs, since a new category of ADS would not immediately reach the threshold number of affected persons. This would enable start-ups to develop new categories of ADSs while the necessary best practices and safeguards are identified. Further, the CASC criteria ensure that the majority of ADSs—such as those for interior design visualization, movie recommendations, or wildlife identification—remain clearly out of scope. This is appropriate, as the societal impact of most ADSs is not sufficient to demand governmental intervention.
The CASC enables regulators to focus on a relatively small number of vendors of ADSs. As more companies shift to ADSs provided by vendors (for tasks such as hiring, worker management, health care allocation, educational access, and others), the regulation of these vendors becomes the point of least friction—enabling significant improvements in the function of ADSs with minimal interference in the market. Requiring accuracy, non-discrimination, and transparency in these ADSs also offers a guarantee of quality to the companies procuring from these vendors, leading to a more efficient market for ADSs.
Passing a comprehensive approach to algorithmic regulation would also send a strong signal to the rest of the world that the U.S. is taking algorithmic risks seriously, and that its technology companies would be responsibly governed. The CASC approach would help ensure that the U.S. becomes not just the undisputed leader in AI, but in trustworthy AI, a reputation that would attract significant global business and investment over the coming decades. This message would also be heard domestically, encouraging the further development of the domestic AI assurance industry, which promises to be its own important market.21
There are also meaningful international trade justifications for the CASC approach. The CASC allows for significant regulatory flexibility, enabling better international alignment. This is especially valuable in relation to the EU, which is currently passing a comprehensive regulatory framework for algorithmic systems, the EU AI Act. Strong alignment with the EU on ADSs ensures the continued function of this critical trade relationship while also strengthening regulatory oversight through shared market surveillance, sharing of best practices, and collaboration through international standards bodies.
Lastly, the CASC approach is a relatively “future proof” intervention, in that it enables continuous adaptation by federal agencies to the ongoing emergence of ADSs within covered agencies’ regulatory domains. This approach preempts the need for Congress to establish and routinely update a list of high-risk ADSs over which agencies have certain authorities. Further, this approach recognizes that federal agencies are best placed to prioritize which ADSs, due to their impact and risk of harms, necessitate going through the CASC process.
Despite its advantages, the CASC approach has significant drawbacks, including being limited by the pace of the regulatory process, working retroactively rather than proactively, and not addressing agency capacity issues. Most glaringly, while CASC rulemaking enables significant sectoral specificity in governing ADSs, this proposal is still a generic intervention that is far less tailored than a comprehensive updating of all U.S. civil rights and consumer protection law to address risks from ADSs. The comprehensive updating of these laws is an unquestionably better approach to policymaking; however, it may also be politically infeasible, and the CASC approach may function as a workable alternative.
The length of the regulatory process, especially when litigated by affected entities, could be long enough to seriously undermine the efficacy of the CASC approach. Even if many agencies only govern a relatively small number of CASC systems (e.g., two to five), the multi-year and resource-intensive process of creating new regulations, often interrupted by changes in presidential administrations or priorities, could delay CASC regulations to such a degree as to enable ongoing harms.22 For the CASC approach to be effective, this rulemaking process may need to be expedited. Further refining and clarifying the definitions in this proposal (especially “risk of harms” and other terms used in the CASC criteria) may also add legal clarity and ease rulemaking. Additionally, the new administrative subpoena authority could be exempted from the Paperwork Reduction Act, enabling easier information gathering for the key step of demonstrating that an ADS meets the CASC criteria.
One partial solution to this challenge would be for Congress to mandate that federal agencies pass rulemaking for a pre-selected list of ADS categories or, alternatively, create a test or criteria for evaluating when an ADS category would qualify and thereby require a CASC rulemaking. Proposing a list of existing ADS categories that warrant CASC rulemakings or specific evaluative criteria that could require a CASC rulemaking are both valuable directions for future research.
Another meaningful criticism of the CASC approach is that its dependence on rulemaking means it would be inherently retroactive, lagging behind new categories of ADSs. To address this, the CASC approach could be paired with a rights-based approach to ensure that all algorithms meet a few universal characteristics. This would potentially include universal disclosure to affected persons, non-discrimination, and honesty in descriptions and advertising for ADSs. This could be backed by a private right of action, such that individuals are ensured, and can privately enforce, basic algorithmic rights.
Lastly, covered agencies would need expertise and staff capacity to execute on the ADS regulations enabled by the new administrative subpoena authority and CASC rulemaking authority. Other interventions could be paired with the CASC approach to address this problem, such as expanding funding for federal agencies, supporting other technology expertise hiring programs, or developing centralized resources and expertise to aid federal agencies in regulating ADSs.
The CASC approach is a novel and potentially impactful approach to enabling the comprehensive governance of ADSs through sectoral regulatory agencies and application-specific rulemaking. It benefits from employing existing governance mechanisms, namely administrative subpoena authority and the federal rulemaking process, without necessitating a new agency. Further, this approach has sensible constraints on its scope, while providing a durable approach to governing ADSs in critical socioeconomic decision-making.
However, the CASC approach has meaningful shortcomings: its rulemakings are inherently retroactive, it does not broadly ensure algorithmic rights with respect to ADSs that do not qualify for CASC designation, and it does not resolve capacity issues at federal agencies. To address these limitations, the CASC could be paired with a more general rights-based approach to algorithmic systems as well as additional funding for federal regulatory agencies. Lastly, it is worth caveating that the CASC approach attempts to be a generic solution to algorithmic challenges that are highly diverse and contextualized within many domains, which would likely lead to inefficiencies in implementation.
Despite these drawbacks, the CASC approach would be a meaningful policy intervention to significantly address the proliferation of ADSs used for critical socioeconomic determinations at scale—a central and unsolved challenge of governing algorithmic systems.
**At the time of publication, this report stated: “Agency rulemaking for CASC systems would be subject to the formal rulemaking requirements under the Administrative Procedures Act.” The phrase “formal rulemaking” has a specific definition under the Administrative Procedure Act which the author did not intend. The word “formal” has been removed.
- For examples, see these resources on: educational access, job discovery and hiring, employee management, financial services (such as mortgages and property appraisal), rent setting, tenant screening, medical provisioning, and medication approval.
- The White House, Blueprint for an AI Bill of Rights (Washington, D.C., 2022) https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf; “Executive Order 13859 of February 11, 2019, Maintaining American Leadership in Artificial Intelligence,” Federal Register, 84 FR 3967 (February 14, 2019): 3967-3972, https://www.federalregister.gov/documents/2019/02/14/2019-02544/maintaining-american-leadership-in-artificial-intelligence; Office of Management and Budget. Guidance for Regulation of Artificial Intelligence Applications, by Russell T. Vought, (Washington, D.C. 2020) https://www.whitehouse.gov/wp-content/uploads/2020/11/M-21-06.pdf; National Institute of Standards and Technology, AI Risk Management Framework: Initial Draft (Washington D.C., 2022) https://www.nist.gov/system/files/documents/2022/03/17/AI-RMF-1stdraft.pdf; National Institute of Standards and Technology. AI Risk Management Framework. (Washington D.C., 2023) https://www.nist.gov/itl/ai-risk-management-framework
- For examples, see the Equal Employment Opportunity Commission’s guidance on algorithmic hiring and the Americans with Disabilities Act https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence; a six-agency collaboration to implement quality control standards for automated valuation models https://s3.amazonaws.com/files.consumerfinance.gov/f/documents/cfpb_automated-valuation-models_proposed-rule-request-for-comment_2023-06.pdf; and the Office of the National Coordinator for Health Information Technology’s proposed updates to certification of clinical decision support software https://www.healthit.gov/topic/laws-regulation-and-policy/health-data-technology-and-interoperability-certification-program
- Application-specific rulemaking for algorithmic systems performed by sectoral agencies has many advantages, including better framing an algorithmic system in its broader societal context, better accounting for related non-algorithmic processes performed by individuals or institutions, deeper expertise from the sectoral agencies to be applied to the policy problem, and stronger connections from relevant stakeholders to the agency. For a longer discussion, see: “The AI Bill of Rights makes uneven progress on algorithmic protections,” Alex Engler. November 21, 2022. https://www.brookings.edu/2022/11/21/the-ai-bill-of-rights-makes-uneven-progress-on-algorithmic-protections/
- Passive computing infrastructure would generally not be considered within scope. The term “passive computing infrastructure” means any intermediary technology that does not influence or determine the outcome of a decision, including web hosting, domain registration, networking, caching, or cybersecurity (as first proposed in the draft Algorithmic Accountability Act, see: “H.R.6580 – Algorithmic Accountability Act of 2022,” 117th Congress (2022). https://www.congress.gov/bill/117th-congress/house-bill/6580/text).
- This definition is intended to typically limit the scope of the proposed CASC to software, rather than physical products, except in rare cases. This should be seen as a feature of the proposal, as physical products—such as cars and medical devices—have distinct regulatory processes that should be handled separately.
- This list of covered agencies should be subject to debate. Some agencies, such as the CFPB, may not need the administrative subpoena authority. Other agencies, such as the VA, may not have regulatory purview over relevant private sector models. The CPSC, presently excluded from this list, may not have any authority over socioeconomic determinations, but rather only over consumer products. Still other agencies, such as the DOJ, may already have sufficient subpoena and regulatory authority. In any case, it is likely that agency-specific caveats and stipulations would be necessary in fleshing out CASC-style legislation.
- Administrative subpoenas are a legal tool by which agencies can compel testimony and the sharing of documents in order for that agency to complete its duties. As of 2012, federal agencies have been granted administrative subpoena authority in over 300 instances. For more information on the administrative subpoenas in the federal government, see: “Administrative Subpoenas in Criminal Investigations: A Brief Legal Analysis,” Congressional Research Service. March 17, 2005. https://www.everycrsreport.com/reports/RL33321.html
- The CASC is aimed at commercial ADSs and would not meaningfully impact the government use of ADSs at the federal, state, or local level unless procured from private companies. Different policy interventions, such as further implementing E.O. 13960 or forthcoming guidance on AI from the Office of Management and Budget, may be necessary to augment the CASC concerning public sector use of ADSs. See: “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” the Federal Register. December 8, 2020. https://www.federalregister.gov/documents/2020/12/08/2020-27065/promoting-the-use-of-trustworthy-artificial-intelligence-in-the-federal-government
- Risks to health care access should be considered distinct from risks to health care, as this proposal is not intended to regulate medical devices, which nearly always include algorithms but are already regulated by the Food and Drug Administration. See: “Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices,” the U.S. Food and Drug Administration. October 5, 2022. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
- This definition is adapted from the White House Blueprint for an AI Bill of Rights, see: The White House, Blueprint for an AI Bill of Rights (Washington, D.C., 2022) https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf
- There is no definitively correct threshold for applying the CASC; however, a number in the hundreds of thousands, perhaps 500,000, would cover many of the ADS categories that have raised national concerns. An alternative approach would be to use some form of cost-benefit analysis, enabling a variable scale of impact, though this may add to the burden of the regulatory process.
- ADSs that are developed and released as open-source software, or otherwise shared or used without compensation, such as for scientific research, should not be covered. The intention of this proposal is to cover solely the commercial use of ADSs.
- The list of protected classes varies across federal laws. To address this, CASC-style legislation could either state that the list of protected classes in the most pertinent civil rights legislation applies or, alternatively, include a specific list to apply to all CASC-designated ADSs.
- As defined by the proposed American Data Privacy and Protection Act, see: “H.R.8152 – American Data Privacy and Protection Act,” 117th Congress (2022). https://www.congress.gov/bill/117th-congress/house-bill/8152/text#toc-H2505DD6E75214E79A8CB1B2E0A7EDDCD
- In situations of regulatory overlap between covered agencies, agencies can either collaborate, as is the case with the quality control standards for automated valuation models (see endnote ii), or employ interagency memoranda of understanding to ensure exclusive oversight of ADSs; see examples in endnote 3 of “Reducing Regulatory Overlap in the 21st Century,” Business Roundtable. June 2019. https://s3.amazonaws.com/brt.org/BRT.Reducing-RegulatoryOverlapinthe21stCentury.2019.05.31.pdf
- The approach described in this paper is distinguished by being both comprehensive, in that it broadly enables the government to address the emerging use of algorithms across many important socioeconomic applications, and sectoral, in that it grounds rules in the context and domain of each category of ADS. Other legislative proposals, including the Algorithmic Accountability Act of 2022 (S.3572) and the EU AI Act, take a comprehensive approach to AI governance but do not sufficiently enable application-specific rules and implementation by sectoral agencies.
- For more information, see: “The EEOC wants to make AI hiring fairer for people with disabilities” by Alex Engler. https://www.brookings.edu/articles/the-eeoc-wants-to-make-ai-hiring-fairer-for-people-with-disabilities/
- For a more complete discussion, see: “Unfair Artificial Intelligence: How FTC Intervention Can Overcome the Limitations of Discrimination Law,” Andrew Selbst and Solon Barocas. August 9, 2022. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4185227
- For more context, see “Enrollment algorithms are contributing to the crises of higher education,” Alex Engler. September 14, 2021. https://www.brookings.edu/articles/enrollment-algorithms-are-contributing-to-the-crises-of-higher-education/
- Already, this growing industry includes companies such as Weights & Biases, Credo AI, Armilla AI, Fiddler AI, Mozilla.ai and others.
- A 2009 report from the Government Accountability Office examined sixteen case studies and found the average time to complete a rulemaking was four years, see: “Federal Rulemaking: Improvements Needed to Monitoring and Evaluation of Rules Development as Well as to the Transparency of OMB Regulatory Reviews,” the Government Accountability Office. April 2009. https://www.gao.gov/assets/gao-09-205.pdf