In recent years, researchers have developed medical robots and chatbots to monitor vulnerable elders and assist with some basic tasks. Artificial intelligence-driven therapy apps aid some mentally ill individuals; drug ordering systems help doctors avoid dangerous interactions between different prescriptions; and assistive devices make surgery more precise and safer—at least when the technology works as intended. And these are just a few examples of technological change in medicine.
The gradual embrace of AI in medicine also raises a critical liability question for the medical profession: Who should be responsible when these devices fail? Getting this liability question right will be critically important not only for patient rights, but also to provide proper incentives for the political economy of innovation and the medical labor market.
Widespread misinformation around the US election
With no immediate winner to be declared in the U.S. presidential election, the coming days and weeks will not only present a major test for American democracy, but also a significant challenge to the election officials and internet platforms struggling to contend with misinformation—including false claims by President Trump and his surrogates about rampant voter fraud.
The good news is that tech sector and government officials appear better prepared for misinformation than they were four years ago. Twitter, Facebook, and YouTube all rolled out at least some measures to label or remove patently false information about the election, and the impact of foreign influence operations appears to have been minimal.
The bad news is that the White House is now the biggest vector for creating and amplifying electoral misinformation. With vote counting still under way on Wednesday, Trump took to Twitter to baselessly claim that the election was being stolen. Although Trump’s premature declaration of victory and claims of voter fraud aren’t a surprise—he signaled well ahead of Election Day that he planned to make them—his online supporters are likely to spread those claims widely online, setting up a major challenge for the platforms.
Since May 2020, Operation Warp Speed has been charged with producing and delivering 300 million doses of a COVID-19 vaccine by January 2021. While the scientific and technical challenges are daunting, the public-health challenge may be equally so. In the last two decades, anti-vaccination groups disseminating misinformation about vaccines have proliferated online, far outstripping the reach of pro-vaccine groups. Viruses that had become rare, such as measles, have now experienced outbreaks because of declining vaccination rates. The anti-vaccination movement was already poised to sabotage the COVID-19 vaccine uptake.
And now, as with all aspects of COVID-19, politics has crept into the vaccine conversation in ways that threaten to derail public confidence in a potential treatment key to halting the pandemic. Without trust in the efficacy of a vaccine, it is far less likely that society will reach herd immunity through vaccination.
So how can politicians convince large swathes of the American public to take a vaccine once it becomes available? The answer may be counterintuitive, but simple: Keep mum, and let the scientists and public-health experts share the facts with the American people.
Casey Newton, founder of the new “Platformer” newsletter covering Big Tech and democracy and the former Silicon Valley editor for The Verge, has followed developments in content moderation more closely than most. Here, he speaks with Lawfare’s Quinta Jurecic and Evelyn Douek about what’s changed over the last four years in how platforms, policymakers, and reporters approach content moderation.
A little more than a year ago, China had almost no diplomatic presence on Twitter. A handful of accounts, many representing far-flung diplomatic outposts, operated without apparent coordination or direction from Beijing. Today, the work of Chinese diplomats on Twitter looks very different: More than 170 of them bicker with Western powers, promote conspiracies about the coronavirus, and troll Americans on issues of race. The quadrupling in the past year and a half of China’s diplomatic presence on a site blocked within China suggests that turning to Western platforms to influence the information environment beyond China’s borders is no longer an afterthought but a priority.
In pursuing increasingly assertive tactics to shape how China is perceived online, Beijing has borrowed elements of Russia’s playbook. China’s “wolf warrior” diplomats—a phrase that comes from a jingoistic Chinese film franchise and refers to a new, more aggressive posture among the Chinese diplomatic corps in defending their home country online—propagate conflicting conspiracy theories about the origins of the novel coronavirus that are designed to sow chaos and deflect blame. Beijing is using these so-called warriors, together with its sprawling state media apparatus and, at times, covert trolling campaigns, to amplify false theories on social media and in the news. And it is doing all this by leaning on the propaganda outlets run by Moscow, Caracas, and, to a lesser extent, Tehran, and the network of contrarian agitators they leverage to promote anti-Western content.
But Beijing has also developed several of its own plays. Its diplomats engage with Twitter accounts that bear hallmarks of inauthenticity, underscoring the challenge of generating grassroots support for its campaigns on a platform that is banned at home. It has deployed hashtag campaigns and dedicated social media accounts to flood conversations about its human rights record with positive content.
We are witnessing a broad shift in Beijing’s information approach, driven by the COVID-19 pandemic but with implications that will outlast it. How this emerging strategy is implemented going forward will have consequences for the contest between democracies and autocracies and the use of information manipulation by authoritarian leaders to shore up their grip on power at home and weaken their democratic competitors.
Journalists face a tough dilemma when reporting on hacked documents. Authentic documents obtained by illicit means and leaked to the public can provide information that is very much in the public interest, but reporting on them can at the same time play into an information operation launched by whoever hacked and leaked the documents. Researchers are trying to understand how to balance those interests, and here Lawfare’s Quinta Jurecic and CEPA’s Alina Polyakova talk to Janine Zacharia, a lecturer in the Department of Communication at Stanford, and Andrew Grotto, a fellow at Stanford’s Cyber Policy Center, about their new playbook for reporters, which offers recommendations on how newsrooms can responsibly cover hacks and disinformation campaigns without propagating or participating in them.
Last weekend, Tori Saylor, Michigan Governor Gretchen Whitmer’s deputy digital director, watched as President Donald Trump used yet another rally to attack her boss. She knew what would come next: “I see everything that is said about and to her online. Every single time the President does this at a rally, the violent rhetoric towards her immediately escalates on social media. It has to stop. It just has to,” Saylor tweeted.
Saylor was describing a dynamic that has now become familiar to researchers of online speech: Offensive speech on the internet tends to arise in response to political events on the ground. After Trump has attacked his opponents at a rally or other event, his online followers have, in some cases, taken that as a cue to attack those same opponents. For the president, it provides a useful amplifying tool. For the opponents being targeted, it represents a nightmare of online harassment.
But what about Trump’s online speech? Just as he targets his opponents in rallies and speeches, he also takes to Twitter to dole out criticism and ad hominem attacks. Here, we examine three recent tweets from the president and whether his tweets have a similarly negative impact on the quality of other online speech. These three tweets offer a case study in how elite speech online can impact the incidence of harmful speech. The tweets in question are not obviously threatening in nature—they fall into a well-documented trend of Trump attacking politicians on Twitter while remaining in-bounds of platforms’ content moderation policies. But that does not mean that they do not impact the overall quality of online discourse. Our findings highlight the challenges platforms face as they define their content moderation guidelines and systems in the lead-up to, and aftermath of, the election.
The European Union is often considered a global frontrunner in setting rules for the digital sphere. When it came into force in 2018, the General Data Protection Regulation (GDPR) revised and harmonized outdated data protection rules that had been in place since 1995, established a regime based on data protection as a fundamental human right, and set a global standard for modern privacy protection. Since its establishment, the regulation has inspired other regions to follow suit.
The European Union is now on the verge of writing another potentially standard-setting law for the digital sphere—the Digital Services Act (DSA). The DSA may have an even greater impact than the GDPR on the way major internet firms do business. Whereas the GDPR harmonized and, in many countries, raised data protection standards, the DSA is not limited to one specific policy field, but aims to establish a comprehensive framework for how “digital services” operate in Europe. It will cover services ranging from Uber and Amazon to the App Store and Facebook, and its rules will span liability, competition, employment, and advertising. This goes way beyond the rules in the E-Commerce Directive (ECD), which the DSA is meant to replace and expand upon. The ECD contains liability exemptions and “notice and takedown” obligations for platforms regarding illegal content, similar to those in Section 230 of the Communications Decency Act in the United States. Such rules have been credited for enabling the internet as we know it. Any major changes to the liability regime, though unlikely, would fundamentally alter the way businesses and people use the web.
But aside from liability questions, the EU wants to adapt the ECD to a changed digital sphere. When the ECD came into effect in 2000, Google was two years old, Amazon four, and Facebook would not go live until four years later, in 2004. In the 20 years since the ECD came into force, digital business models have changed substantially. European lawmakers, the tech industry, and civil society now sense the opportunity to rewrite and introduce rules for these business models, with major implications for how the internet functions.
Maria Ressa is a Filipino-American journalist and co-founder of Rappler, a news site based in Manila that has distinguished itself for its investigative, adversarial coverage of President Rodrigo Duterte and his administration. The Philippine government has responded by persecuting Ressa in the courts, where she is currently fighting a conviction of “cyberlibel.” Here, she speaks with Lawfare’s Evelyn Douek about the Philippine internet and its dynamics, which represent an important warning for the global web.