Had Democrats decisively won control of the White House and the Senate, there would now be a robust conversation about legislatively expanding the federal government’s authority for technology oversight. While this conversation would take a backseat to issues like fighting the COVID-19 pandemic and shoring up the economy, legislation on new data privacy and algorithmic consumer protections could have a chance in the first term of the Biden administration. Even a technology oversight agency would be possible, and perhaps still is, pending the results of the Georgia special elections. Yet even without a Democratic majority in the Senate, there are meaningful steps the Biden administration can take to further reasonable oversight of the technology sector, and specifically of the largely unregulated use of artificial intelligence (AI) and algorithmic decision-making.
The Biden administration can reverse Trump-era executive orders and agency regulations, instead requiring federal agencies to enforce existing anti-discrimination laws on algorithmic systems and expanding their regulatory capacity to do so. President-elect Biden should push Congress to enact new algorithmic consumer protections in any legislative compromise on privacy or antitrust, and should further support the revival of the Office of Technology Assessment.
These efforts are likely not sufficient in the long term. The digital economy accounts for over 9% of GDP (larger than the finance sector) and was growing at 6.8% per year before the pandemic. Since the rise of the modern regulatory state in the 1970s, perhaps no other segment of the economy has experienced such growth while remaining largely unregulated. Even that framing understates the importance of data systems and algorithms, which now affect nearly every part of our society. While the economic growth is undeniable, the mass proliferation of data systems and algorithms—especially in the form of permissionless innovation—has enabled extensive societal harms. A new regulatory agency, or expanded capacity at an existing agency such as the Federal Trade Commission, is necessary. For now, however, the Biden administration should take the available steps to curtail the direct harms, especially algorithmic discrimination, enabled by AI.
Executive Actions to Enforce Existing Laws
The use of algorithmic decision-making in many industries poses serious challenges for regulatory enforcement of existing laws. Health insurance companies implement risk prediction tools that likely prioritize care in racially biased ways, which is illegal for providers receiving federal funds from programs like CHIP and Medicaid or for insurers participating in the Affordable Care Act exchanges. AI systems used to perform automated video interviews are deeply flawed as well, and there is cause to investigate whether they discriminate against people with disabilities in violation of the Americans with Disabilities Act. The algorithms that manage Uber and Lyft drivers can make decisions that blur the line between employee and independent contractor for purposes of enforcing the Fair Labor Standards Act. However, the Office for Civil Rights within the Department of Health and Human Services, the Equal Employment Opportunity Commission, and the Department of Labor, respectively, may not be equipped to handle these questions.
The Biden administration should be more proactive than the Trump White House, which took minimal action to address the problems associated with the use of algorithms. Following a February 2019 executive order, the Office of Management and Budget (OMB) issued its final guidance on the regulation of artificial intelligence on Nov. 17, 2020. While there are serious and reasoned aspects of the document, it makes clear that AI innovation, not regulatory protection, is the foremost priority of the Trump administration. For instance, while it seems encouraging that the guidance mentions “fairness and non-discrimination” as well as “disclosure and transparency,” these appear in a long list of required considerations before implementing new regulations. A careful reading of the document—such as the section offering a series of non-regulatory interventions—suggests it is meant to deter, not encourage or enable, new regulatory safeguards for AI applications.
It is possible that this is an overly critical reading of the White House guidance—to its credit, the document does prompt agencies to think more about AI regulation. However, the status quo is untenable, as it functionally inoculates companies from anti-discrimination laws when they use algorithms. Since the guidance from OMB Director Russell Vought downplays the well-documented discriminatory harms of some AI systems, the Biden administration should amend it to ensure that current law is appropriately enforced on algorithmic systems. Specific agency-level guidance needs reversal as well, such as a rule from the Department of Housing and Urban Development that places an insurmountable burden of proof on plaintiffs alleging discrimination by algorithms used in mortgage lending or rental applications.
In order to execute on this guidance, federal agencies will need the capacity to investigate algorithmic systems. These ex-post audits should focus on highly impactful, large-scale AI systems, especially those already implicated by nonprofit and academic evaluations. While it may seem as though such audits would require the models themselves, access to the underlying datasets and model outputs matters far more. Regulatory agencies should therefore use available administrative subpoena powers to gain access to the relevant corporate datasets. There are already about 335 of these authorities across the federal government.
The auditing of these massive data systems may be a daunting task for many agencies. To support this new responsibility, the Biden administration should direct the United States Digital Service (USDS) to hire a dedicated team of data scientists and engineers to support the algorithmic regulatory capacity of federal agencies. USDS has been a successful initiative for bringing technical talent into the federal government to modernize services, and it is well-positioned to bring in the talent needed to build secure data environments and conduct algorithmic audits. The Biden administration should make this endeavor a policy priority, using hiring authority within the Office of Science and Technology Policy (OSTP) and Presidential Innovation Fellowships to bring in more expertise.
The chief data scientist, a role within OSTP created under the Obama administration and dormant during Trump’s tenure in office, warrants special consideration. DJ Patil, the first and, to date, only person to hold the job, acted as a national cheerleader for bringing data to bear on the nation’s toughest problems, especially precision medicine and data-driven justice. He has been—and remains today—a prominent voice for the ethical use of algorithms in the private sector. This role should be reinstated, with a renewed emphasis on disseminating industry best practices and high standards in the use of AI. Drawing lessons from the many disastrous AI deployments, from guidance by standards bodies such as NIST and ISO/IEC JTC 1, and from academic research, the next chief data scientist should push the technology sector toward specific safeguards around its use of AI. The industry should also be encouraged to take verifiable action, such as embracing independent audits of its AI systems. Although more limited than direct oversight, pushing for higher industry standards has been effective in reducing the recurrence of known problems, as demonstrated by agencies like the National Transportation Safety Board and the Chemical Safety and Hazard Investigation Board.
Opportunities for Congressional Action
Recently, there have been a number of serious proposals for a new tech regulatory agency, including from Public Knowledge, the Stigler Committee on Digital Platforms, and the Shorenstein Center. The U.S. is hardly alone: the United Kingdom released a proposal for a new regulator to address online harms last year, and the European Union continues to implement GDPR and work toward an artificial intelligence regulatory proposal. With Democrats unable to take decisive control of the Senate, legislation to create a new regulatory body seems unlikely. However, the Biden administration may still be able to curb some particularly harmful tech practices through legislation.
The new 117th Congress is poised to continue discussions around privacy legislation, and perhaps around the sweeping antitrust proposal put forth by Rep. David Cicilline (D-R.I.) as well. Privacy legislation likely has the best chance of passage, and there may be an opportunity to include new consumer protections. In fact, four Republican senators put forth the SAFE DATA Act, which would attempt to improve data security protections, mitigate algorithmic bias, and prevent content forgeries such as deepfakes. While civil society organizations have criticized the bill as insufficient to meaningfully change data privacy, it also signals a potential willingness to expand the regulatory role of the Federal Trade Commission. The Biden administration should work actively toward compromise privacy legislation and support additional consumer protections that can win bipartisan support. This should include, for instance, mandated disclosure of AI systems, so that they cannot masquerade as real people for commercial or political purposes.
Members of both parties have also expressed skepticism about the role of facial recognition in criminal justice, due to pervasive and biased false matches as well as its potential to violate privacy rights. There may therefore be consensus around implementing new safeguards, such as requiring a minimum quality of video footage and proof that a system has been robustly tested (tests that many current systems would likely fail). Various localities have recently banned the use of facial recognition by police, including San Francisco, Boston, and, as of this election, Portland, Oregon, and Portland, Maine.
Congress should certainly ban the use of facial affect analysis—a related use of algorithms to analyze facial expressions—in most decisions of import. This is especially necessary for employment decisions and employee supervision, as this practice has proliferated in automated job interviews. This type of software attempts to use facial movement and facial structure to predict the quality of an employee, an application that dramatically overestimates the ability of this technology and is nearly guaranteed to exacerbate bias.
Lastly, Biden should endorse the revival of the Office of Technology Assessment, which would provide Congress with additional capacity for more informed oversight of the technology sector. The fast pace of technological development and proliferation has left Congress behind, especially since the defunding of the original Office of Technology Assessment in 1995. Further, members of Congress have reduced staff funding in their Washington offices, leading to younger, less-experienced staff working on a larger number of policy issues. This is hardly an ideal arrangement for handling the novel and complex challenges posed by AI and technology governance.
A Balanced Approach to Artificial Intelligence
The Biden administration has the tools to take substantive steps to make citizens safer from the reckless use of AI systems. Even without congressional action, the Biden administration can ensure that discrimination and other unlawful conduct carried out through algorithms are no longer immune from the law. Working with Congress, it can begin to address the worst algorithmic harms within ongoing legislative conversations.
These actions can also help build societal trust in AI applications, which is declining as one part of the broader “techlash” against large technology companies. In 2019, 84% of Americans said that AI should be carefully managed, and a 2017 poll from Morning Consult found that over 70% of both Democrats and Republicans support regulations for AI. For the next generation of AI-driven services and products to be widely adopted, Americans will need to trust them. By adopting better safeguards that can prevent the worst algorithmic offenses, many of which also tend to make the news, the federal government will be aiding the private sector in the long run.
Of course, this is not to argue that the Biden administration should work only to rein in the use of AI. The administration should continue the extensive new funding streams from the National Science Foundation for National AI Institutes. It should also remain engaged with the Global Partnership on AI, working toward democratic norms for AI’s role in the world. Investing in AI research and working toward global consensus on AI are both critically important. However, it is AI oversight that needs a course correction. To keep citizens safe from the overuse and abuse of algorithmic systems, the Biden administration has much work to do—but it also has the tools to get started.