How to improve technical expertise for judges in AI-related litigation

Editor's note:

This report from The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative is part of “AI Governance,” a series that identifies key governance and norm issues related to AI and proposes policy remedies to address the complex challenges associated with emerging technologies.

Introduction

Artificial intelligence (AI) refers to the capacity of machines to perform tasks that are typically associated with human decision-making. AI computer systems use a variety of datasets and sensory inputs to make decisions in real time and to modify future decisions as they accumulate additional data and experience, with minimal human intervention or additional programming. In conjunction with machine learning, AI touches on nearly all aspects of modern life. As digital technologies take the place of certain human activities, legal disputes surrounding AI are inevitable.
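
As a rough illustration of that definition, the toy sketch below (written in Python and not drawn from this report or any real system; every variable is hypothetical) shows a decision rule that updates itself as new observations arrive, with no further human programming:

```python
# A minimal toy sketch: a decision rule that changes as new labeled
# observations arrive, with no further human programming.
import numpy as np

rng = np.random.default_rng(0)
weights = np.zeros(3)  # one weight per hypothetical sensory input

def predict(x):
    """Binary decision from the current weights."""
    return int(x @ weights > 0)

def update(x, outcome, lr=0.1):
    """Perceptron-style correction: nudge weights toward observed outcomes."""
    global weights
    weights = weights + lr * (outcome - predict(x)) * x

# A stream of (sensor reading, observed outcome) pairs stands in for "experience."
for _ in range(1000):
    x = rng.normal(size=3)
    outcome = int(x.sum() > 0)   # the unknown rule the system gradually learns
    update(x, outcome)           # each example can change future decisions

print(predict(np.array([0.5, 0.2, 0.1])))  # decisions now reflect accumulated data
```

Real systems are far more complex, but the core property is the same: behavior changes with data and experience rather than with new code.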

Already, such issues have reached both federal and state courts. Judges are tackling emerging AI issues and creating case law that will impact the future course of technological innovation. For example:

  • Tort lawsuits are addressing who is liable when a semi-autonomous or autonomous vehicle harms a pedestrian or a passenger. Accidents with legal implications include a Google Car that sideswiped a public bus in 2016, a Tesla driver’s death in an autopilot-involved crash, a motorcyclist who collided with an autonomous Chevy Bolt electric vehicle in 2017, and a pedestrian struck and killed by an autonomous Uber test vehicle in 2018. In some cases, it will be difficult to identify the exact cause of poor AI decision-making because, by design, decision algorithms use multiple datasets and sensory inputs that evolve over time. It may be even harder to identify the responsible party or parties from among the owner of the machine that caused the harm, the software developers, the machine manufacturer, and the various contributors that provided data fed into the decision-making system.
  • AI precision-medicine algorithms predict patient risks, assist with diagnosis and treatment selection, and prioritize patient care where resources are limited. Health-oriented AI technologies are likely to raise medical malpractice dilemmas, for example, regarding who is responsible when the AI that a radiologist uses to read images misses a cancer. Similarly, where AI monitors vital signs to predict heart attack risk, who is liable for the harm of a missed event or the cost of responding to a false alert? These applications also raise several intersecting legal and ethical issues, including how to protect patient data when AI predicts the medical issues a person is likely to face and how to guard against discrimination while still planning for preventive care.
  • The use of AI in criminal law contexts raises critical legal and ethical issues. For example, what are the constitutional implications of using citywide acoustic sensors to detect the location of gunshots? Does a signal from an AI-enabled sensor that reports the latitude, longitude, and street address of a purported shot fired form a valid basis for door-to-door or backyard-to-backyard police searches? In addition, much has been written on how training datasets for facial recognition systems can introduce bias in recognition in a manner that disproportionately misidentifies or falsely matches minorities as potential perpetrators. Finally, there are serious implications of using AI risk-assessment tools for pretrial release, sentencing, and recidivism prediction. Datasets and algorithms built on prior, discriminatory practices (for example, practices improperly based on race or socioeconomic factors) may amplify those biases, creating further harm and leading to inaccurate predictions about who is most likely to miss court appearances or commit repeat offenses; the brief sketch after this list illustrates how such disparate error rates can arise.
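
To make the concern about disparate error rates concrete, the following minimal sketch (written in Python with entirely hypothetical numbers, not drawn from any real risk-assessment tool) shows how risk scores inflated by skewed historical data can yield a much higher false-positive rate for one group even when true reoffense rates are identical:

```python
# Illustrative sketch only: hypothetical data, not any real risk-assessment tool.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)        # 0 / 1: two demographic groups
reoffends = rng.random(n) < 0.30          # same true base rate in both groups

# Hypothetical distortion: past over-policing of group 1 inflates its scores.
score = rng.random(n) + 0.25 * reoffends + 0.15 * group
flagged = score > 0.75                    # tool labels these cases "high risk"

for g in (0, 1):
    fp = np.mean(flagged[(group == g) & ~reoffends])  # flagged but did not reoffend
    print(f"group {g}: false-positive rate {fp:.0%}")
# Typical output: group 1's false-positive rate is markedly higher than group 0's,
# even though the underlying reoffense rates are the same.
```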

Beyond the subject matter of these disputes, judges must also understand the role that AI and machine learning play in the legal system itself. Lawyers, as well as judges and their staff, use machine learning to improve case law searches for relevant legal authority to cite in briefs and decisions. Document production and technology-assisted review use AI to identify the documents relevant to produce and to mine those documents for the information most important to a party’s claims, without attorneys having to review every document. Some scholars and practitioners are already using AI to predict case outcomes with algorithms trained on tens of thousands of prior cases. Recent research suggests that such outcome predictions may reach roughly 70% accuracy. AI is ushering in a new era of quantitative legal decision forecasting.
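
For readers who want a concrete picture of what such forecasting involves, the sketch below (synthetic data and made-up features, not the method of any cited study) frames outcome prediction as supervised classification: prior cases are encoded as feature vectors, a model is trained on their known outcomes, and accuracy is measured on held-out cases.

```python
# Hedged sketch with synthetic data; the features stand in for attributes a
# researcher might encode, such as court, issue area, or procedural posture.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n_cases = 20_000

X = rng.normal(size=(n_cases, 8))             # 8 made-up case features
y = (X[:, 0] + 0.5 * X[:, 1]                  # outcome driven by two of them, plus noise
     + rng.normal(scale=1.2, size=n_cases) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.0%}")
# Published models on real docket data report accuracy in the rough vicinity of
# 70%; this synthetic setup only illustrates the workflow, not that figure.
```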

“AI is ushering in a new era of quantitative legal decision forecasting.”

In short, courts face no small task: they must identify the legal and ethical issues at stake, including concerns about transparency, due process, and data privacy; understand the AI technology at issue in order to make sound legal rulings; and appreciate the potential implications of those rulings for future technological advances and individual rights.

With that in mind, it is vital to improve judges’ ability to understand the technical issues in AI-related litigation. There are several things court systems and professional organizations should do to enhance the technical capabilities of judges:

  • Educate the judiciary on the strategies currently available to inform judges about the key technological innovations at issue in a litigation.
  • Encourage research and pilot programs on additional innovations to provide judges with the technical expertise that can help ensure sound legal decision-making.
  • Work with state bars and other legal professional organizations to familiarize attorneys with AI-driven technology, the broader implications of AI-related legal cases, and the methods that they can propose to judges to provide neutral information on relevant AI to the court.
  • Bring together leading professionals with AI expertise to work in concert to educate members of the judiciary about AI.
  • Engage all relevant stakeholders to inform and recommend policy and legislation that addresses AI systematically–for example, on issues of privacy, constitutional rights, or whether certain types of AI constitute products or services for liability purposes–before judges and juries must make individual AI-related case determinations, especially those that may have long-term implications for technological innovation and individual rights.

Educate the court about available tools to better understand AI

A variety of strategies are available to educate judges on the AI-related technology at issue in a litigation. These include the use of science or technology tutorials, court-appointed technical advisers or special masters, and court-appointed experts.

I have written previously on the use of technology tutorials, in which a judge asks the parties, experts, or technical advisers to identify and explain the scientific and technological issues central to a litigation. The goal of a technology tutorial is to transform the courtroom into a classroom. Tutorials may include a demonstration of how a certain method, software, or product works, an overview of key technical terminology, or a presentation of how a certain innovation developed over time. Judges may request, or the parties may recommend, live presentations by one or more experts or by the parties, question-and-answer sessions, or videotaped tutorials or demonstrations. Tutorials give judges a forum to ask questions about AI technology outside the context of the parties advocating for a particular motion or position at trial.

“[S]afeguards should be added to ensure that there is limited opportunity for technical advisers to introduce bias into the judicial decision-making process.”

In addition, federal judges have the authority to appoint technical advisers, use specially trained clerks, or request special masters to provide them with the technical expertise they need. These advisers usually do not testify at trial; their role is to educate the judge on the technical issues involved in a case. To avoid undue influence and to give the parties’ views on science and technology proper weight, safeguards should be added to ensure that there is limited opportunity for technical advisers to introduce bias into the judicial decision-making process. These safeguards include limiting the scope of case-specific materials that a technical adviser reviews or analyzes and defining the content and nature of any scientific or technical help to be provided to the judge. Technical adviser appointments are mentioned in federal case law as early as 1950, and their use is becoming increasingly common.

Finally, in exceptional circumstances or where a litigation presents particularly complex AI issues, judges may appoint experts, either on a party’s motion or on the judge’s own initiative. A court-appointed expert must advise all parties of any findings, may be deposed by any party, may be called to testify at trial, and can be subjected to cross-examination by any party. While still fairly novel, court-appointed experts have been used in a number of litigations, and their use is becoming more common.

Regardless of the strategy used, it is crucial that the process provide technical information and expertise to judges in as neutral a fashion as possible. Sound judicial decision-making is best served by a neutral, educational delivery of information on AI through a tutorial, technical adviser, or court-appointed expert. While it may seem counter to the underpinnings of our adversarial system, the judge should require that the parties work together to provide the court with mutually agreed-upon recommendations on the format, topics, and presenters for a tutorial, or with a joint list of potential technical advisers or court-appointed experts (should one be needed).

These methods, however, should not be read to preclude the parties from putting forth their own, separate expert witnesses at trial. Moreover, whatever strategy is used should conform to the basic expectation underlying the adversary system that, with limited exceptions, judges should not receive advice and guidance without the parties to the dispute having an opportunity to contest it.

Encourage research and courtroom pilot programs on additional strategies to educate judges about AI-related technology

In addition to the practices outlined above, court systems and professional organizations should undertake research and pilot programs to explore other strategies for helping judges develop the technical understanding needed for AI-related lawsuits. One example would be a pilot program to conduct technology tutorials at the appellate level. To date, trial court judges have been the primary users of tutorials. Scholars and judges alike have questioned the reliability of technical information obtained by judges informally outside the existing factual record, including information that has appeared in appellate and Supreme Court decisions. Providing tutorials on key AI-related issues to appellate judges may reduce reliance on non-validated resources, such as amicus briefs that have not undergone third-party fact-checking or materials that were not subjected to cross-examination by the parties at trial. Tutorials may also limit the perceived need for judges and clerks to conduct independent research to supply additional facts in appellate decisions.

“[G]iven the need for technical expertise in AI-related litigations, specialized technical courts may provide a viable solution in some instances.”

Another proposal that has re-emerged recently is the creation of expert panels or “science courts” with special jurisdiction to hear complex scientific or technological disputes. Some judges, legislators, and legal scholars have argued that technically trained judges or panels, rather than generalist judges or layperson juries, should decide scientific issues to ensure that the scientific or technological components of litigation are decided “correctly” by those with the relevant expertise. Though this concept remains controversial, the idea of science or technology courts has gained renewed interest in our increasingly complex world. Further research is warranted to explore whether these panels meet our societal expectations of judicial fairness and process. It is also unclear whether complex scientific issues can be separated from the policy and legal issues that accompany them, and which court should decide the issues where they overlap. As AI makes its way into all aspects of our daily lives, it may be seen as unreasonable to take away general courts’ (and juries’) role in legal decisions surrounding AI. Nonetheless, given the need for technical expertise in AI-related litigations, specialized technical courts may provide a viable solution in some instances.

Work with state bar organizations to train legal practitioners and encourage judges to develop technical expertise

Initiatives to provide judges with the technical understanding needed for AI-related litigation are unlikely to succeed without support from the parties involved in a dispute and their counsel. It is important for state bars and other legal professional organizations to familiarize attorneys with the broader implications of AI-related legal cases and with the methods attorneys can propose to judges for providing the court with neutral information on AI.

Attorneys may be understandably hesitant to recommend to either their client or a judge the use of tutorials, technical advisers, or other strategies with which they are unfamiliar. Educating not just the judiciary but also other legal professionals can help secure the buy-in needed to implement existing strategies, to participate in pilot programs of new approaches, and to help judges gain the technological understanding needed for sound judicial decision-making.

Harness the expertise of AI professionals to educate the courts and the public

AI professional organizations and their members should work together to provide additional educational opportunities and resources to both federal and state courts. There are a number of models that professional organizations can follow to provide valuable assistance and technical expertise to judges generally. The Federal Judicial Center, the research and education agency of the federal judiciary, educates judges on areas of emerging science and technology through written pocket guides, online tutorials and modules, and in-person workshops. The legal community often relies on the center’s “Reference Manual on Scientific Evidence” to better understand and evaluate the relevance and reliability of scientific and technical evidence being proffered by experts in litigation. The center’s new “Technology and the Bench” initiative will help provide federal judges with critical information on areas where technology and legal issues overlap. In addition, the National Academies of Sciences, Engineering, and Medicine provide valuable opportunities for members of the judiciary (and the technology community) to discuss areas of emerging technology that are likely to appear in lawsuits.

“AI’s pervasive nature will require coordination among a variety of technology professionals to educate the public about AI and machine learning.”

It is essential that AI professional organizations and relevant stakeholders join in these efforts and work together to deliver additional neutral, educational content to the courts. Providing judges with silos of subject-matter-specific training is unlikely to be effective, as AI reaches into all aspects of life: data privacy, financial transactions, transportation, human health and safety, and the delivery of goods and services. AI’s pervasive nature will require coordination among a variety of technology professionals to educate the public about AI and machine learning.

AI professionals should also consider developing a universal glossary of key technical terms or a basic set of reference materials designed specifically to give the judiciary general background on AI, broadly applicable regardless of any particular case assignment. Reference manuals on other science and technology topics have proven highly effective in delivering useful content to judges.

In addition, AI professional organizations should consider forming a technical adviser referral system or panel available to assist judges or arbitrators in cases with unique AI issues. When requested, such a panel or referral system could provide judges with the names of potential independent AI experts, including computer scientists, engineers, data analysts, and software programmers. The American Association for the Advancement of Science assists the legal profession by providing an independent expert referral system for issues related to science; this type of system could be adapted to meet judges’ AI-related technical expertise needs. However, additional research is needed to assess whether judges are aware of this resource, how frequently they have used it, and what changes could improve its usefulness to them.

Finally, skilled individuals with technical expertise must be able to convey the key concepts in their respective fields effectively to judges and juries. This means that data scientists, programmers, and software developers will need training in communicating with lay audiences about the aspects of their technology likely to be at issue in AI-related lawsuits. This is especially true given recent findings of low public literacy on technology generally and AI concepts in particular.

Address AI systematically through policy, not through individual lawsuits

Others have written extensively on the costs and benefits of judicial policymaking, and that is not the focus of this policy brief. Nonetheless, it bears repeating that when legislators, policymakers, and regulators fail to anticipate and act on issues of emerging technology, judges are left in the unfortunate position of being the first branch of government to evaluate new issues related to AI.

“[W]hen legislators, policymakers, and regulators fail to anticipate and act on issues of emerging technology, judges are left in the unfortunate position of being the first branch of government to evaluate new issues related to AI.”

Legislation guided by sound policy considerations would give courts the guidance they need to avoid setting legal precedent that unintentionally stifles technological advancement on the one hand, or improperly interferes with individuals’ freedoms, safety, privacy, or right to fair compensation for harm on the other. Because both state and federal law affect AI-related litigation, actors at the state and federal levels must work together to ensure that federal and state policies and legislation are aligned. That alignment would help address AI issues systematically while also elevating the level of expertise available to decision-makers.


The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Microsoft provides support to The Brookings Institution’s Artificial Intelligence and Emerging Technology (AIET) Initiative, and Google provides general, unrestricted support to the Institution. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

Footnotes
    1. Sullivan HR and Schweikart SJ. Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?, AMA J Ethics 2019;21(2):E160-166. doi: 10.1001/amajethics.2019.160
    2. Katz DM, Bommarito MJ II, Blackman J (2017) A general approach for predicting the behavior of the Supreme Court of the United States. PLoS ONE 12(4): e0174698. https://doi.org/10.1371/journal.pone.0174698
    3. Federal Rules of Evidence, Rule 706 (Court-Appointed Expert Witnesses)
    4. Kantrowitz, A. The Science Court Experiment. Jurimetrics Journal, Vol. 17, No. 4 (Summer 1977), pp. 332-341.
    5. National Research Council, Reference Manual on Scientific Evidence, 3rd ed. (Washington, DC: National Academies Press, 2011), https://doi.org/10.17226/13163. The Federal Judicial Center introduced its Reference Manual on Scientific Evidence in 1994 in the wake of the Supreme Court’s Daubert decision, which imposed a duty on judges to assess the qualifications of parties’ expert scientific witnesses.
    6. National Research Council, Reference Manual on Scientific Evidence.