The stunning, some would say miraculous, scientific achievement that brought us vaccines against COVID-19 in record-shattering time hit a snag with news reports that some people who took the Johnson & Johnson injection experienced life-threatening thrombosis. On April 12 the FDA and CDC reported six cases of a “rare and severe type of blood clot” among the approximately seven million doses administered, and “paused” use of the J&J vaccine pending analysis of more data. The decision was quickly hailed as a legitimate precautionary step. By April 14 there were two more reported cases, including one reported death.
As the week progressed, with intensive review of the data underway, questions inevitably surfaced, including how the vaccine risk compares to hazards of normal life. For example, every year more than 38,000 people die in crashes on U.S. roadways, which translates to a rate of about 12 in 100,000, or more than 900 times greater than the odds of dying from the vaccine-related blood clots. Roughly one in six Americans fall ill each year from foodborne pathogens, and 3,000 die. We invest in programs to increase safety, but we wouldn’t “pause” all driving and eating until the risks fall to zero. (Recalls of selected cars to fix identified defects, and removing foods with known pathogens from grocery shelves, are more obviously justified because the risks are specific.) It is likely that such considerations influenced the FDA and CDC. In any case, when the hold was lifted 11 days later, the numbers had climbed to 15 reported cases and three deaths; the revised recommendation emphasized that the risk was still minuscule compared to the risk of not being vaccinated, but added a requirement that J&J attach a warning label.
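The back-of-the-envelope comparison above can be reproduced with a short calculation. The figures below are the approximate ones cited in the text; the dose and death counts are illustrative assumptions based on the reports available at the time, so the exact ratio will vary with the numbers used:

```python
# Rough relative-risk comparison, using approximate figures from the text.
US_POPULATION = 330_000_000     # approximate U.S. population
ROAD_DEATHS_PER_YEAR = 38_000   # annual U.S. roadway fatalities cited above

JJ_DOSES = 7_000_000            # approximate J&J doses administered (assumption)
CLOT_DEATHS = 1                 # reported deaths as of April 14 (assumption)

road_risk = ROAD_DEATHS_PER_YEAR / US_POPULATION   # per person, per year
clot_risk = CLOT_DEATHS / JJ_DOSES                 # per dose administered

print(f"Roadway deaths: ~{road_risk * 100_000:.0f} per 100,000 per year")
print(f"Clot deaths:    ~{clot_risk * 100_000:.4f} per 100,000 doses")
print(f"Ratio: roughly {road_risk / clot_risk:.0f} to 1")
```

The point of the exercise is not the precise ratio, which shifts as case counts are updated, but the order of magnitude: ordinary driving carries a risk hundreds of times larger than the clotting risk being weighed.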
Many people were already skeptical about COVID vaccines: according to polling data from the week before the pause, unvaccinated Americans were split evenly about their willingness to be vaccinated. Although popular acceptance had begun to rise, thanks to efforts of responsible commentators, clergy, community activists, and of course the persistent pleas from the White House and public health experts, the pause and its reversal likely added to the level of lingering doubt and may have hardened resistance. Concerns grew that abundant evidence of the risks of not being vaccinated might be inadequate to calm anxieties among a sizable population.
The episode provides a crisp reminder that risk assessment, though fundamentally mathematical, is more complicated than the numbers alone suggest. Human emotions, intuitions, and values constrain—or perhaps enrich—judgment that would otherwise be based only on cold hard statistics. Knowledge of the laws of probability is often not sufficient to motivate what might be viewed as “correct” decisions: even top government epidemiologists, acting “out of an abundance of caution,” chose an action that seemed at odds with the underlying data. But the impulse toward zero risk tolerance is not surprising. In the fall of 2002, the eminent economist and game theorist Thomas Schelling (who won the Nobel prize a few years later) recounted how a friend, coincidentally an expert in probability theory, would not leave home until the sniper in suburban Washington who had already killed ten innocent victims was caught. As Schelling wryly noted, once the sniper was stopped his friend went back to driving on his usual routes, thereby increasing his chances of violent death by orders of magnitude. Clearly, something other than the simple analytics of probability intervened in this person’s “rational” decision making; it most likely influenced the pause decision by FDA and CDC scientists too.
What can be done to counterbalance the tendency among many people, including scientific experts, to veer toward risk-minimizing choices that may produce unintended negative consequences? In the J&J case, those consequences included, perhaps most obviously, that people might postpone or forgo vaccination altogether, the effects of which could be catastrophic. But are we ready to impose a level of risk-aversion or risk-acceptance, based on mathematical computation, that may run counter to people’s comfort levels? Are we ready to install the equivalent of a mathematical “veto” over the advice of our elected and appointed officials? We expect scientists at the CDC and FDA to exercise judgment informed by data—but would not want them driven exclusively by data. Good leaders are not robots: we rely on their instincts and values as well as their expertise, and often must tolerate decisions and recommendations that transcend or, in some cases, may even seem to defy, empirical evidence. We value transparency and honesty, and should applaud leaders who, as in this case, clearly have the public interest at heart and were willing to amend their initial decision in the light of new evidence. The question is not whether they reached an elusive “optimal” solution, but whether their decision was based on sufficient and appropriate deliberation.
The challenge to policy makers was formidable, and with the labeling requirement they sought a delicate balance: between emphasizing the low level of risk and shifting ultimate responsibility for accepting it to individuals. Reminding the public that the danger is relatively low compared to many other routine activities hopefully will curb some of the understandable—albeit misguided—resistance to any vaccine that cannot be proven 100% safe. Though imperfect, the compromise followed familiar precedent, e.g., the requirement that pharmaceutical companies label their products with known side effects. The rationale there, too, is that prospective users of various drugs are free to choose, hopefully with their physicians’ guidance, whether to take the risk.
Still, uncertain behavioral responses to complex messages suggest that even trustworthy information can produce additional risks (!). This raises the question: are there situations in which the government should mandate what might be called the “correct” behavior? If Schelling’s friend had been coaxed (or forced) out of his home, for example, there would have been a loud public outcry. And for good reason: government coercion would be difficult to justify unless it could be shown that staying indoors causes other people harm or hazard.
But some externalities do justify coercive intervention: we prohibit smoking in elevators and levy fines for littering our roads and polluting our air and water. Resistance to mandatory wearing of masks provides a painful reminder of how difficult it is to enforce socially responsible behavior. On the other hand, whether the social good—for example, sending children back to school—justifies an intrusion into teachers’ personal risk-aversion thresholds is fraught with other legal and moral dilemmas. Under what conditions individual rights can be abridged for the sake of the social order is a familiar question, which the U.S. and many other countries are facing again because of the pandemic.
Finally, I return to the nagging counterfactual. How many people who heard about the J&J pause decided not to get vaccinated at all? The FDA and CDC should be commended for sharing the data and making public health and safety the number-one priority. Still, it might have been prudent for them to weigh the possibility that some number of people, maybe thousands or more, would now choose to decline or postpone their shots, subject themselves to a greater risk of infection, and cause increased spread and surges of illness, hospitalization, and death. This possibility has significant international implications: the pause might have heightened anxiety in other countries regarding, for example, the AstraZeneca vaccine (which also raised alarm about potential side effects), with dire consequences especially in places where it’s hard to administer the required two doses. (Advantages of the J&J vaccine, which also need to be integrated in the risk analysis, are its “one-shot” regimen and less expensive refrigeration requirements.) Simply put, vaccines will save millions of lives globally—even if that outcome is not 100% free of side effects. How to specify the implied benefit/cost model is not obvious, and in any case its estimates would not provide a scientific basis for establishing the acceptable threshold, which in democratic societies is ultimately a political decision.
At least two lessons from the J&J experience should inform a more comprehensive approach to risk assessment and policy, for COVID and generally: (1) Telling people the objective level of danger for any activity (flying, getting vaccinated, smoking) may not be sufficient to influence behavior, but imposing a risk threshold may violate norms of individual choice and ignore powerful emotional responses. And (2), although free choice can result in unacceptably high levels of social harm, the perceived benefits of stopping any activity need to be weighed against the costs of unintentionally inducing people to make even riskier choices.
Science brought us vaccines, thank God. And science needs to focus on the biology of blood clots and other side effects we haven’t yet experienced. But rational policy-making transcends medical science and demands attention to psychological, economic, legal, political, ethical, and other perspectives. The risks of ignoring those aspects of the problem are substantial.
Many thanks to John Andelin and Dorothy Robyn for comments on an earlier draft, and to John Hudak and Christine Stenglein for fine editing.