Setting the standard of liability for self-driving cars

August 8, 2025

To set policy around AI liability, it might be useful to try to resolve the question for a particular AI use case, such as self-driving cars, rather than to approach the problem comprehensively. Who should be responsible for injury or damage caused by self-driving cars, and what should be the standard of liability? The question is likely to become increasingly urgent as self-driving cars become more prevalent on the nation’s roadways and around the world.
First, an important clarification: Safety engineers and regulators classify cars by their level of driving automation, from Level 0 (no automation) to Level 5 (vehicles that can drive safely in all conditions with no human involvement). Tesla’s Autopilot and Full Self-Driving capability fall under Level 2, partial automation that can handle both steering and acceleration/braking. But Tesla warns its users that these capabilities “are intended for use with a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment” and that these features “do not make the vehicle autonomous.” Despite these warnings, which seemed to shield Tesla from liability for accidents involving Autopilot, a Florida jury recently held Tesla partially responsible for a fatal accident involving a Tesla operating in Autopilot mode and required it to pay $243 million in damages.
Yet the key liability issues arise once cars reach Levels 4 and 5, where driving is handled entirely by the car and there may be no driver in the vehicle at all. Waymo’s self-driving taxis are an example of Level 4 autonomous vehicles: within the specific conditions under which they are designed to operate safely, their “operational design domain” (ODD), they drive completely autonomously with no human driver in the car. Tesla’s new robotaxis, which the company launched in Austin, Texas, in late June 2025, similarly operate in autonomous mode. Absent a negligent, reckless, or malicious decision by a passenger to use the intervention button Tesla provides, the company would bear liability if an accident resulted from the car’s poor performance.
Policymakers seeking to address these liability issues might consider four answers that scholars have discussed. The first is the traditional product liability approach under a negligence standard, in which the plaintiff must show that a design or manufacturing flaw in the self-driving car led to the accident that caused the injury or property damage. Absent such a showing, victims would not be compensated. The second is a proposed strict product liability approach, under which self-driving car manufacturers would be liable for any damages their cars produced, regardless of whether the car was defective. Victims would be compensated without having to prove that a design or manufacturing defect caused the accident.
The third and fourth approaches tackle the liability issue outside the contours of product liability law. They rely on a new legal construct, the “computer driver,” and ask under what conditions the computer driver should be held liable for the accidents it causes. Under the “reasonable human driver” standard, the car manufacturer would be held liable for damages whenever its computer driver fails to avoid an accident that a reasonable (that is, competent, unimpaired, and attentive) human driver would have avoided. Victims would have to demonstrate that the computer driver’s behavior would have been unreasonable if engaged in by a human driver, but they would be compensated if they could make that showing.
Under the fourth approach, a “reasonable computer driver” standard, the driving performance of the self-driving car would be compared to an industry yardstick: an average level of performance, an industry-determined level of expected performance, or a state-of-the-art standard focusing on what level of performance is technically and economically feasible.
For the reasons outlined below, this discussion concludes that policymakers should maintain the traditional negligence product liability standard but supplement it with a negligent driving regime based on the reasonable human driver standard. That supplement introduces a liability standard that judges and juries have the domain expertise to administer. The strict product liability regime turns out to resemble a negligent driving approach and would be workable if combined with the reasonable human driver standard. The reasonable computer driver approach turns out to be the product liability negligence approach under a different name; that product liability approach would not work as a comprehensive response to the risks of self-driving cars, but it should remain available for plaintiffs to use in addition to litigation based on the reasonable human driver standard.
Law professor Bryant Walker Smith reached roughly the same conclusion. He argues that car manufacturers should be held liable whenever their cars perform unreasonably and then suggests that a self-driving car performs unreasonably in a particular situation if “either (a) a human driver or (b) a comparable automated driving system could have done better under the same circumstances.”
In a 2014 Brookings report, UCLA law professor John Villasenor summarized the case for allowing the courts to address the liability of self-driving cars under existing product liability standards. He applied to the self-driving car case the existing standards of a design or manufacturing defect, an information defect, and a failure to instruct humans on the safe and appropriate use of the product.
The nuances of product liability law are well-known to lawyers and might provide fertile ground for injured parties to seek compensation when car manufacturers have been careless or lacking in foresight. Given the high burden of proof in product liability cases, Villasenor is right to conclude that holding manufacturers responsible for their demonstrable failings should not be a significant barrier to deployment of reasonably safe self-driving vehicles. He is also right that preempting state laws in this area merely to make it easier for manufacturers to escape liability is not needed to spur innovation. Bryant Walker Smith has also made the case that existing product liability law is “probably compatible with the adoption of automated driving systems.”
However, Villasenor’s conclusion that existing product liability law is “well equipped to address and adapt to the autonomous vehicle liability questions that arise in the coming years” is not the end of the story. While traditional product liability law is one avenue for injured parties to seek redress for injuries or damages in self-driving car cases and should not be abandoned, it suffers from two defects as a comprehensive response to the risks created by self-driving cars.
The first is the enormous information asymmetry between the manufacturer and even the most knowledgeable and well-resourced plaintiff. The details of self-driving training, testing, mitigation measures, upgrades, and so on are confidential business information. Discovery in court proceedings could expose some of this information to plaintiffs if they knew what to ask for. But even then, demonstrating that the company failed to take reasonable precautions would require safety engineering expertise that is typically available only within the self-driving companies themselves. The chances of beating a car manufacturer determined to defend itself in court are slim.
Think what it would mean for plaintiffs to have to pass a “risk-utility” test in attempting to prove that the manufacturer was responsible for a self-driving car accident that caused injury or damage. This test, used in many cases to determine the presence of a design defect, requires plaintiffs to demonstrate that there was a reasonably available alternative to the system the car manufacturer used that would have avoided the accident. In effect, the plaintiff’s outside expert would face the daunting challenge of having to demonstrate that the car manufacturer missed an affordable upgrade that would have prevented the accident. To be sure, this is a difficulty in many product liability cases, but it is especially likely to arise when a product exists at a technological frontier as self-driving cars do.
But it is not just a matter of a difficult burden of proof. Maybe there was no reasonably available software alternative that would have avoided the self-driving car accident. Maybe that’s just as good as the systems get with current technology. The self-driving car accident occurred. It was caused by the misbehavior of the self-driving car. No fault can be traced to any person or legal entity. Still, some parties were injured or suffered property damage through no fault of their own. Will the legal system really leave them without recourse?
The second approach addresses precisely these cases, where something inexplicable went wrong with the car but the manufacturer cannot be held to account for it under the negligence standard of product liability. In such cases, strict liability would apply: plaintiffs would not have to show the manufacturer was at fault but would simply collect compensation from the manufacturer for injury or damage.
Law professor David Vladeck suggests four reasons for such a strict liability system for self-driving cars. First, it satisfies “basic notions of fairness, compensatory justice, and the apportionment of risk in society” to provide redress in these cases “for persons injured through no fault of their own.” Second, the self-driving car manufacturers “are in a position” to absorb the costs of these “inexplicable accidents,” and it is not unreasonable that they should bear them since they benefit from the self-driving cars they create.
Third, the strict liability system spares everyone the “enormous transaction costs” that can be expected if the only alternative is to litigate even in circumstances where fault cannot be established. Fourth, the predictable nature of the strict liability system is better for innovation than the uncertainties of endless product liability litigation.
Law professor Steven Shavell also embraces a strict liability regime, with the added twist that he thinks the payment should go to the state rather than the harmed individuals, since this would give purchasers of self-driving cars the incentive to demand greater safety from car manufacturers. But that, of course, leaves victims without compensation for injuries.
While attractive as a way to ensure fundamental fairness for injured parties and avoid pointless litigation, this strict liability approach has a fundamental defect. It can only function as a replacement for a product liability negligence regime for self-driving cars, not a supplement to it.
To see this, ask the question: When does the strict liability regime kick in, and when should the negligence product liability regime apply? It is easy to say that strict liability applies only when the self-driving car accident is truly “inexplicable” and untraceable to human fault. But no one can know this at the outset of a particular case; it can only be established through litigation. So all self-driving accidents would have to be litigated, which defeats one of the purposes of the strict liability regime: avoiding pointless litigation. To achieve its anti-litigation purpose, the regime cannot be layered on top of a negligence product liability regime; it must prevent lawsuits from starting and move parties harmed in a self-driving car accident directly into the no-fault system, where they simply claim compensation for injury or damage.
In effect, this means that all self-driving car cases where the car failed to avoid an accident will be litigated under a strict liability standard. Car manufacturers will have to pay damages even when they have not been negligent in designing the car’s self-driving system. This might be all to the good, since as Vladeck notes, “the complexity and sophistication of driver-less cars, and the complications that will come with the fact patterns that are likely to arise, are going to make proof of wrongdoing in any individual case extremely difficult.” It might be simpler, as Vladeck says, to infer the presence of a defect in the self-driving car on the theory that the accident itself is proof of a defect.
But this puts too much of a burden on the manufacturer. What if the manufacturer could prove that its self-driving car, even though it failed to avoid the accident that produced injury or damage, performed in a way a reasonable human driver would have? Maybe it did not stop in time and ran into another car. But a competent, attentive human driver would have taken the same actions in those circumstances. Without an opportunity to prove that a human driver would not have been held liable for damages in a particular case, because they behaved reasonably in the circumstances, car manufacturers would face prohibitively high liability costs.
The key to the third and fourth liability approaches is to adopt a negligent driving standard in assessing liability for accidents involving self-driving cars, rather than trying to run all claims for compensation through the product liability system. This is often the way accidents involving human drivers are assessed when plaintiffs allege injury or damage from an automobile accident. The court looks to whether the driver involved in the accident exhibited reasonable driving behavior and if not, then it holds the driver responsible for compensating the victims. The difference between the third and fourth approaches is how they define reasonable driving behavior.
Law professor William H. Widen and safety engineer Philip Koopman propose that policymakers create a new category of “computer driver” whose driving behavior can be evaluated as if it were a human driver. This applies the same familiar standard to self-driving cars that judges and juries already have domain expertise with and asks them to do nothing different in a self-driving case than in a case where a human driver is involved. The self-driving cars are responsible for an accident when their driving behavior would be deemed negligent if engaged in by a reasonable (that is, competent, attentive, unimpaired) human driver.
Current versions of self-driving cars are notorious for doing things that no reasonable human driver would do, such as driving on the wrong side of the road or driving into wet cement. In addition to providing recourse for injured parties, the reasonable human driver standard would provide an economic incentive for self-driving car manufacturers to provide cars at least capable enough to avoid these “stupid” mistakes. Indeed, it promotes innovation to create a self-driving car that performs at least as well as an unimpaired and competent human driver.
This approach does not get plaintiffs bogged down in endless product liability litigation where the chances of success are so limited. It also provides a car manufacturer with a way to defend itself in some circumstances involving accidents caused by one of its self-driving cars. In an accident involving a self-driving car, the computer driver of the car can be held liable in exactly the same way a human driver can. If the computer driver does not match or exceed “the driving safety performance outcomes we expect of an attentive and unimpaired” human driver, as Widen and Koopman put it, it is liable for any injuries or damages it produces. But if the computer driver behaved reasonably as measured by what a competent, attentive, unimpaired driver would have done, the car manufacturer would not be liable for damages. A new law could implement this idea, according to Widen and Koopman, by stating that computer drivers owe “a duty of care to automated vehicle occupants, road users, and other members of the public.”
This approach addresses the “inexplicable” self-driving car accidents that Vladeck seeks to deal with through his strict product liability approach. Even when a self-driving car accident cannot be traced to a design or manufacturing defect, plaintiffs can still recover damages if they can show that a reasonable human driver would not have caused the accident. There might be a design flaw that produced the accident, and if there is, the self-driving car manufacturer should seek to detect it and remedy it. But plaintiffs do not have to prove its existence and will not be denied a remedy if they cannot meet that burden or if a legally sufficient defect does not exist.
To ensure that a legal entity can be held responsible for compensation, the new law that establishes the category of computer driver and creates its duty of care would also need to stipulate that when a computer driver is found liable, the financially responsible party is the manufacturer of the self-driving car. This avoids getting bogged down in philosophically interesting but practically useless speculations about when computer drivers have achieved enough autonomy to become legal actors in their own right.
Many would think that the reasonable human standard is too lenient. David Vladeck, for instance, assumes that self-driving cars generally outperform human drivers and that they should perform up to “the standards achievable by the majority of other driver-less cars.” He thinks car manufacturers should be held liable for accidents when their self-driving cars do not live up to this standard. In effect, his strict product liability approach can be thought of as a version of the negligent driving approach, where the driving standard is the reasonable computer driver.
Law professor Kevin Webb explicitly adopted a version of this “reasonable computer driver” standard, calling it the “reasonable car standard.” He thinks that the car manufacturer can be held liable “only when the car does not act in a way that another reasonable AV would act.”
There is considerable force to this idea. Why not expect more from computer drivers? Why go to all the trouble of developing and deploying self-driving cars if the result is only the current level of traffic safety? Shouldn’t the right liability standard give self-driving car manufacturers an incentive to produce cars that exceed human driving capabilities?
Under this “reasonable computer driver” standard, courts would hold a car manufacturer liable when the computer driver’s safety performance fell below what a reasonable computer driver would have done and plaintiffs suffered injury or damage as a result. This reasonable computer driver standard could be defined in principle through industry standards, average or typical driving performance of self-driving cars, or an assessment of what the state of the art of the current technology allows.
Such a standard might be more protective of plaintiffs in circumstances, such as speed of reaction to unforeseeable events, where self-driving cars typically perform better than human drivers. If brand X’s self-driving car would have avoided a collision but brand Y’s self-driving car did not, why exonerate brand Y just because no human in that situation could have reacted fast enough to avoid the collision? This more stringent standard would force the industry to keep up with the latest developments or face liability consequences for failing to do so.
However, the reasonable computer standard would be less protective than the reasonable human driver standard in cases where the technology does not match human driving skills. As Widen and Koopman put it, it would allow “a potential outcome in which AVs much more dangerous than human drivers would be considered ‘reasonable’ if that is the best the industry can do.” It allows an industry defense even when a currently deployed self-driving car does something wildly stupid that causes injury. It seems unreasonable to allow a manufacturer to escape liability when it can show that no model of self-driving cars on the road today could have avoided making the same stupid mistake in those circumstances, even though a reasonable human would have avoided the problem easily.
The reasonable human standard seems to be the minimum standard policymakers and the public should demand from self-driving car manufacturers. It is also consistent with their promises and proclamations to regulators and the press. Substituting a standard of what the industry is capable of right now would undermine this minimum goal.
This ambiguity about whether a reasonable computer driver standard is more or less protective of plaintiffs illustrates the fundamental problem with adopting it: it is not at all clear what the reasonable computer driver standard would require in any particular case. Using it would inevitably involve courts and juries guessing what other self-driving cars might have done in similar circumstances, what an industry code would have required them to do, or whether the state of the art would have allowed car manufacturers to deploy cars that would have avoided the accident.
Indeed, the reasonable computer standard risks collapsing back into the product liability design defect standard by forcing plaintiffs to engage in an assessment of what self-driving capabilities are technically and economically feasible. If policymakers want a standard that avoids those litigation pitfalls, it would be better to stay with the reasonable human driver standard.
What about the consequence that the reasonable human standard would exonerate a self-driving car manufacturer in circumstances when the rest of the industry would have done better? The answer is that plaintiffs should still have the route of traditional product liability negligence litigation to handle such cases. It is true that such cases are hard to win, but no harder than cases that would be based on the proposed reasonable computer standard.
The best way forward, then, is to combine an approach that assesses the performance of the car under a reasonable human driver standard with the traditional negligence approach under product liability law. It is only fair to admit, however, that this combined liability system by itself does not create a very powerful incentive for car manufacturers to produce self-driving cars that exceed the current human safety record.
In seeking to move self-driving car manufacturers to a higher level of safety, policymakers should keep in mind that human drivers are pretty safe. Given the amount of driving on the nation’s roads (around 3.2 trillion miles in 2022) and the number of traffic fatalities (42,795 in 2022), human driving produces roughly one fatality for every 75 million miles driven, or about 1.3 fatalities per 100 million vehicle miles traveled.
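As a rough check on that figure, the short calculation below works out the fatality rate implied by the two statistics just cited; the inputs are the numbers given above, not an independent data source.

```python
# Back-of-the-envelope check of the human-driver fatality rate cited above.
vehicle_miles_traveled = 3.2e12  # roughly 3.2 trillion miles driven in 2022
traffic_fatalities = 42_795      # reported U.S. traffic fatalities in 2022

miles_per_fatality = vehicle_miles_traveled / traffic_fatalities
fatalities_per_100m_miles = traffic_fatalities / (vehicle_miles_traveled / 1e8)

print(f"One fatality per roughly {miles_per_fatality / 1e6:.0f} million miles driven")
print(f"About {fatalities_per_100m_miles:.2f} fatalities per 100 million miles")
# Prints: one fatality per roughly 75 million miles, about 1.34 per 100 million.
```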
That is the admirable safety record policymakers and the public should expect self-driving cars to match or exceed. A liability standard for individual cases that holds self-driving cars to this minimum level of performance is no small thing.
Policymakers rightly will demand more from self-driving cars. Self-driving car manufacturers promise increased safety, and they should be held to that higher standard. But it is unlikely that this higher safety goal will be achieved effectively through a liability standard applied in individual cases. As we have seen, industry is likely to be able to defeat any product liability litigation standard that relies on economic and technical feasibility analyses. As a result, litigation in individual cases should not be viewed as aiming to improve road safety. It is primarily a less ambitious attempt to compensate people who, through no fault of their own, suffer injury or property damage in a self-driving car accident. If it has any effect on the level of safety for the public, it would be to help prevent self-driving cars from degrading the current high level of safety provided by human drivers.
If policymakers want to move the self-driving car industry to a level of performance exceeding the current level of safety provided by human drivers, this might be done more effectively through a regulatory requirement, rather than through standards for liability in litigation of individual accident cases. For instance, if there is an expectation that computer drivers can and should react to a suddenly appearing pedestrian faster than a human driver would, regulators can design a test and make a specified faster-than-human response time a performance requirement for self-driving cars.
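To make the idea concrete, here is a minimal sketch of how such a requirement might be expressed in a regulator’s test harness. The 0.5-second threshold, the function name, and the trial data are hypothetical illustrations, not an actual regulatory standard.

```python
# Hypothetical sketch of a faster-than-human reaction-time requirement.
# The threshold and trial data below are illustrative assumptions only.

# Assumed requirement: the computer driver must begin braking for a suddenly
# appearing pedestrian within 0.5 seconds. (Commonly cited human perception-
# reaction times are on the order of 1.5 to 2.5 seconds.)
REQUIRED_REACTION_TIME_S = 0.5

def passes_pedestrian_test(measured_reaction_times_s: list[float]) -> bool:
    """Return True only if every closed-course trial meets the required time."""
    return all(t <= REQUIRED_REACTION_TIME_S for t in measured_reaction_times_s)

# Example: reaction times (in seconds) recorded across repeated test-track trials.
trial_results = [0.31, 0.42, 0.38, 0.47]
print("Pass" if passes_pedestrian_test(trial_results) else "Fail")
```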
Regulators will have to move beyond the current recall systems operated by the National Highway Traffic Safety Administration and by some state regulators such as California’s Department of Motor Vehicles. More needs to be said about establishing a forward-looking and protective regulatory framework for self-driving cars. But if policymakers want to move the industry to a higher level of safety, they will have to devise an upgraded regulatory system that supports this goal.