Amid the calamity of the COVID-19 pandemic, national leaders from Brazil to the United States are tweeting misleading medical advice. Social media influencers are peddling conspiracy theories about what causes the disease. And around the internet, fraudsters are hawking miracle cures. According to one preliminary study, recent months have seen as much misinformation as reliable material on social media. In some cases, misinformation about fake cures and treatments has proven life-threatening and even fatal.
Amid growing concern over what the WHO director-general called the “infodemic” accompanying the pandemic, social media platforms are proactively deleting conspiracy theories and promoting links to trusted agencies like the U.S. Centers for Disease Control and Prevention. This proactive attempt to curtail misinformation has happened more quickly than in previous cases of rapid spread of viral health misinformation, such as material casting doubt on the efficacy of vaccines. Now that companies have shown they can act quickly and decisively to curb certain content, it is worth considering whether the near-blanket liability protections granted to social media companies for content posted on their platforms should apply to questions of public health.
This may sound appealing, but the history of combating egregious medical advertising suggests that eliminating liability protections will be far from a panacea. When the United States cracked down on deceptive drug advertising in the 1930s, it did so with a canny piece of legislation that ought to provide some inspiration for regulators today. Chipping away at liability protections has emerged as Washington’s favorite tool for holding big platforms to account, but it is a blunt instrument that legislators should be wary of deploying.
Chipping away at Section 230
Under the current intermediary liability regime, platforms are mostly protected by Section 230 of the Communications Decency Act (1996) that contains what Jeff Kosseff calls “the 26 words that made the internet.” That Section states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This essentially means that platforms cannot be held liable for content posted on them.
But in 2018, Congress created what amounts to the first major chink in the platforms’ liability protections. The near-unanimous passage of the FOSTA-SESTA bill allows platforms and web hosting services to be held liable for content related to sex work. As the pandemic progresses, we may find ourselves asking whether health information should also be carved out from Section 230. After all, if platforms endanger lives by enabling the spread of fake cures, might it be simplest to hold them legally liable for that dangerous content?
Although this might sound appealing, it could prove hard to enforce and a disaster to implement. First, it is unclear that FOSTA-SESTA (full name: Allow States and Victims to Fight Online Sex Trafficking Act and Stop Enabling Sex Traffickers Act) worked as intended. The bill was meant to fight sex trafficking and limit advertisements for paid sex work, but there is a good argument to be made that it has actually done the opposite: it has endangered sex workers by driving their business even further underground, and it has not stopped the ads.
Second, the boundaries of public health information are blurry. Who will decide where wellness begins and health ends, for example? At a time when much of the worst disinformation around COVID-19 comes from top politicians, allowing the Trump administration to decide when health information is correct and when it is not might make for an additional public health disaster. A president who has publicly toyed with the idea of ingesting bleach to fight COVID-19 probably can’t be trusted to enforce a law around health misinformation.
Third, the worst offenders are sometimes older forms of media, like Fox News. Any new regulation would have to encompass them too—something that the current administration and Republicans in Congress would be loath to do.
Curbing fake drug ads
Instead of fighting over Section 230, we might instead turn to other, more effective tools at our disposal. One source of inspiration is the history of how the United States vastly reduced the dangerous scourge of false medical advertising a century ago.
In the 19th century, there were two types of medicines in the United States: standard drugs (or what we now call prescription drugs) and patent medicines, which did not list ingredients and trademarked their names, such as Lydia Pinkham’s Vegetable Compound.
Standard drugs were mostly not advertised because they were prescribed by doctors. But revenue from advertising for patent medicine supplied nearly 50 percent of all advertising income for newspapers at the turn of the 20th century. Newspapers bore no responsibility for the content of those advertisements, which often made false and outlandish claims.
Patent medicine ads purported that their products were better than “ethical drugs” and doctors because they supplied simpler, quicker, and cheaper solutions. Why bother getting a prescription from a doctor when Hamlin’s Wizard Oil could “break up a cold on the lungs in a night,” as one ad claimed in a West Virginia paper in 1904?
Unproven health claims abounded for decades. Edward Bernays, often known as the “father of public relations,” sought to convince women to smoke cigarettes in public in the 1920s, in part by claiming that they prevented coughs. That same decade, advertising transformed Listerine from a battlefield disinfectant and floor cleaner into a cure for “halitosis,” an “ominously clinical-sounding” condition that was “largely unheard of” before Listerine’s ad campaigns, as Tim Wu relates in The Attention Merchants.
By the late 1930s, though, a tragedy inspired federal regulation to stem the most egregious ads. After 100 people died taking a drug called elixir sulfanilamide, Congress passed the Food, Drug, and Cosmetic Act (FDCA) in 1938. Manufacturers now had to prove that a drug was safe, and drugs needed approval from the Food and Drug Administration before being marketed.
The 1938 act transformed medicine—and medical advertising. Prescription drugs rose from 32 percent of consumer spending on pharmaceuticals in 1929, to 83 percent by 1969. Advertising went where the dollars were: doctors with prescribing privileges, not patients. Before the FDCA, over 90 percent of pharmaceutical marketing was directed at patients; the opposite was true three decades later in the 1960s. This removed many of the false claims that had long confronted newspaper readers every day.
A larger cultural shift accompanied this regulatory shift. Around 1900, many Americans believed it was their right to choose what medicines they took. By the time World War II ended, most believed that doctors should decide for them. Regulation helped to precipitate a massive change in attitudes toward medical expertise.
This history suggests several specific policy approaches that largely avoid the heated debate around Section 230. First, it reminds us to reinforce existing policy mechanisms, such as the FDCA. But that will mean increasing investment in them. As COVID-19 started to spread rapidly, for example, the New York Attorney General sent a cease-and-desist letter to far-right radio show host Alex Jones to stop selling fake treatments online. With greater funding and personnel, law enforcement could reduce the price gouging and the sale of fake cures online. Much of this could happen at the state level, which is not the case for Section 230.
Second, federal regulation can intervene in medical content, but it may be best housed in an agency like the Food and Drug Administration. This would enable medical experts to supervise how and where drugs could be advertised. The Federal Trade Commission could be another arm for ensuring that companies enforce their own policies. Advertising seems the most obvious content to address.
Third, reputational self-preservation matters but is not enough. As advertising became a profession, firms grew reluctant to tarnish their reputations by advertising harmful products. Here too, we see a parallel with social media platforms, which have clamped down on COVID-19 disinformation far faster than on anti-vaxxer content. Since late April, YouTube has deleted any content offering COVID-19 advice that is “medically unsubstantiated,” meaning that it contradicts WHO guidelines. Societal pressure and attitudes can help. But as some popular influencers continue to push health misinformation on multiple platforms, it is clear that platforms have much more work to do, whether by downranking those influencers, by holding popular figures to a higher standard, or by better enforcing their own guidelines.
Firms cannot be trusted alone. Medical advertising helped fuel the opioid crisis, and in order to provide additional oversight of the industry, professional organizations such as the American Medical Association might be recruited to help oversee which types of drugs or treatments can be advertised.
The Pure Food and Drugs Act of 1906, the first federal drug legislation, issued guidelines for product labelling. The Sherley Amendment of 1912 outlawed labels with false therapeutic claims that were meant to defraud consumers. But these initial efforts were not enough. It was hard to prove that companies meant to pull the wool over consumers’ eyes. It also seemed that labelling alone did not sufficiently protect consumers—something we have seen in recent years with calorie labels doing little to reduce obesity. Only after 100 people died from elixir sulfanilamide did Congress create regulations with teeth that required drugs to be proven safe before they could be marketed.
It took U.S. regulators over three decades to make a real difference in medical content the first time. A century later, we cannot afford to wait that long. And we can use many less politicized tools than Section 230 to create lasting change.
Heidi Tworek is an associate professor at the University of British Columbia and a non-resident fellow at the German Marshall Fund of the United States.
Google, the parent company of YouTube, provides financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research.