Around the world, governments are rushing to develop so-called “vaccine passports.” The state of New York, the United Kingdom, and the European Union have all announced initiatives aimed at enabling people with proof of vaccination to engage in some activities that had been prohibited by COVID-19 restrictions. While each initiative has its idiosyncrasies, they are generally attempts to certify that a person has been vaccinated against COVID-19, through a range of objects broadly referred to as “passports.” This can be as simple as a paper record but typically refers to an electronic record that can be used to verify that a person received a vaccine.
But in the rush to roll out passports and allow their citizens to return to some semblance of pre-pandemic life, governments are failing to establish how to govern systems that verify vaccination status and how to resolve the disputes that will inevitably arise from them. Rather than focusing on technical methods for verifying vaccination status, governments should be working to establish clear guidance on how vaccine status information can and can’t be used in the first place; who can compel its disclosure; under what conditions it can be used to restrict a person’s rights; and how to resolve conflicts over its use.
Without well-defined policies on how public institutions will allow vaccination to affect access to services and resources, clear articulations of private discretion to use vaccination status to impact people’s fundamental rights, and a system for resolving disputes arising out of abuse of these systems, few people will have confidence or trust in the equity of the system. Without clarity on our rights, or how we can enforce them when they’re violated, it’s hard for the public to have anything other than concern about digital systems used to verify immunization.
On this week’s edition of Lawfare’s Arbiters of Truth miniseries about online information ecosystems, Evelyn Douek and Quinta Jurecic speak with Jameel Jaffer and Ramya Krishnan of the Knight First Amendment Institute about their lawsuit seeking an answer to the question of whether the president violates the First Amendment by blocking Twitter users.
Though they were once the darlings of society, major tech companies have decidedly fallen out of favor. A range of real and perceived abuses by the large tech companies have spurred widespread skepticism and distrust. The shift in public opinion has not gone unnoticed: Policymakers from D.C. to Brussels have not only brought major antitrust lawsuits against large firms like Facebook and Google, but are also contemplating major overhauls of their tech regulatory regimes. In the United States in particular, the desire for reform has crystalized in a contested debate over how best to overhaul Section 230 of the Communications Decency Act, which provides online platforms with immunity from liability for most content posted by their users.
Yet the ongoing debate around Section 230 feels like a shouting match between strangers with earmuffs, each yelling in a different language hoping that the louder they yell the more easily they will be understood. The chaotic nature of this debate stems in part from the difficulty in providing a unified theory of the harms associated with the law, and efforts to overhaul the law will never succeed without a clear understanding of online harms more broadly. Although Section 230 is not the proximate cause of any of those harms, the law is typically criticized for either enabling or failing to prevent them. Harms that are not related to Section 230’s focus on liability protections for content and content moderation, such as privacy violations, nonetheless provide much of the energy behind efforts to reform it. Reforming the law effectively thus first requires a good understanding of the individual and generalized harms posed by digital platforms, as well as key legal and policy challenges to implementing content-moderation regimes.
Last week, the CEOs of major tech companies appeared before Congress to testify for the first time since the Jan. 6 assault on the U.S. Capitol, which was fueled in part by online misinformation. On this week’s Arbiters of Truth, Evelyn Douek and Quinta Jurecic of Lawfare sit down with Issie Lapowsky, a senior reporter at Protocol, to discuss the testimony and platforms’ struggle to contain far-right extremism online.
When students at Istanbul’s Boğaziçi University, Turkey’s top academic institution, gathered in January to protest President Recep Tayyip Erdoğan’s eleventh-hour appointment by fiat of a government loyalist as their new rector, they turned to a new platform to take their message beyond the walls of their campus. Clubhouse was still in beta, only available on iPhones and by invitation, when Turkish student protesters discovered it. But it allowed up to 5,000 people to join chat rooms in which they could converse with strangers in a user-moderated audio discussion. The app became a hub of opposition politics, and students began to host discussions about Erdoğan’s abuses of power. Thousands across the country flocked into chat rooms to listen to protesters’ stories—often in hours-long discussions deep into the night. Activists and lawyers joined to offer advice, journalists to find sources, and many others just to stay informed. By the end of January, the app had grown so popular that even the country’s 62-year-old former prime minister-turned-opposition leader Ahmet Davutoğlu was scheduling talks on Clubhouse.
But this brief experiment in a free-wheeling corner of the web was not to last. On Feb. 3, a month after the Boğaziçi protests began and Clubhouse entered Turkey’s digital lexicon, the police took three students into custody. The students had spent the night before moderating a Clubhouse discussion about the protests, and although the police attributed their detentions to posts made on Twitter and Instagram, the students insisted that the Clubhouse room was the only connection they had to each other and that the arrests had to be linked to it. It all went downhill from there. As users questioned how safe Clubhouse really was from government surveillance, pro-Erdoğan journalists and pundits flooded the app. On Feb. 6, Erdoğan’s communications director joined. These loyalists began to lurk in chat rooms to intimidate speakers and to schedule their own sessions to counter criticisms of the government. In just a few days, Clubhouse had transformed from a seemingly safe space for Erdoğan’s critics into yet another digital battleground for his information wars.
Clubhouse’s untimely but inevitable devolution into an object of government monitoring illustrates the central conflict of the Turkish internet. For about a month, the app offered a space for large-scale political organizing and discussion away from Erdoğan-controlled media and his censors. But the app’s creators were unprepared for what would come next. While billed as a safe space for free speech and democratic dissent, Clubhouse proved no match for Erdoğan’s efforts to dominate it and gave way to surveillance, censorship, and co-optation. In this, Clubhouse is following the well-trod path of Facebook, YouTube, and Twitter, all of which entered the Turkish market as democratizing forces at critical political junctures, only to face years of government intervention and harassment. For social media companies, their future in Turkey depends on their ability both to sufficiently placate Erdoğan to remain active and to provide the country’s increasingly anti-Erdoğan digital communities with a safe space for public debate. That will only get harder as the president tightens his grip on Turkish digital media and escalates his online repression. Social media companies must avoid becoming tools in this crackdown.
YouTube is widely viewed as an engine of radicalization for users on the platform, but Brendan Nyhan, a professor of government at Dartmouth College, presents a slightly different view: Though YouTube doesn’t push all users toward extreme content, for those who are already viewing such material, the platform reliably recommends additional extremist videos to them. This week on Lawfare’s Arbiters of Truth, Nyhan sits down with Evelyn Douek and Quinta Jurecic to discuss his new report published with the Anti-Defamation League, “Exposure to Alternative and Extremist Content on YouTube,” and how to understand YouTube’s role in the radicalization of its users.
In February, the video game publisher Victura announced it would launch what it described as a realistic video-game portrayal of the Second Battle of Fallujah. Based on dozens of interviews with troops who fought in the 2004 battle, Six Days in Fallujah was billed as more of a documentary than an action experience. “We track several units through the process and you get to know what it was like from day to day,” Peter Tamte, Victura’s CEO, told The Wall Street Journal. He explained that the game would avoid the politics of the Iraq war and the perspectives of civilians who experienced brutality at the hands of U.S. forces, since that was a divisive subject. Instead, the game would “engender empathy” for the U.S. Marines who fought in the battle.
This promotional campaign encountered immediate opposition. Veterans of the battle argued that a documentary story about a controversial battle in a controversial war could hardly be stripped of its politics while remaining true to its subject. “War is inherently political,” the Fallujah veteran John Phipps explained to The Gamer. “So to say you’re going to make an apolitical video game about war is nonsense. Show me a war that wasn’t started because of politics. You can’t. War is politics. It’s just a different form of politics.”
The controversy about Six Days in Fallujah is really a larger story about video games, militarism in the media, and the expanding boundaries of politics. Video games are not only a contested cultural space in America, but also a contested political space in which governments and corporations, journalists and activists, and players of every stripe are competing to tell stories and shape perceptions about the world. This multibillion-dollar industry plays an increasingly important role in shaping the worldview of its participants and the politics of their societies. It is far past time that the policy community writ large treat this industry with a rigor equal to its influence.
Amid the deluge of misinformation surrounding last year’s presidential election in the United States, voters across the country encountered persistent false claims online that ballots had been inappropriately “thrown out.” Aimed at undermining confidence in the vote, the “discarded ballot” hoax spread widely across digital media, including in encrypted group chat applications used in diaspora communities.
The spread of disinformation on encrypted messaging applications poses a threat to diaspora communities, who have turned to WhatsApp and other messaging apps for the trust and intimacy they afford. Yet because these apps are encrypted and closed by design, conventional fact-checking and content moderation regimes are harder to implement on them. As a result, these platforms have become a promising new avenue for the spread of disinformation, particularly among diaspora communities. Last year in North Carolina, for instance, encrypted messaging applications were used to spread misleading information in a get-out-the-vote campaign targeting South Asian Americans.
The COVID-19 pandemic has accelerated digitalization around the world, but as life has shifted increasingly online, cybercriminals have exploited the opportunity to attack vital digital infrastructure. States across Africa, where digital capacity continues to lag behind the rest of the world, have emerged as a favorite target of cybercriminals, with costly consequences. In early October 2020, Uganda’s telecoms and banking sectors were plunged into crisis due to a major hack that compromised the country’s mobile money network, usage of which has significantly increased during the pandemic. At least $3.2 million is estimated to have been stolen in that incident, in which hackers used around 2,000 mobile SIM cards to gain access to the mobile money payment system. In June, the second-largest hospital operator in South Africa was hit by a cyberattack in the midst of the COVID-19 outbreak, paralyzing the 6,500-bed private healthcare provider and forcing it to switch to manual back-up systems.
In light of increased attacks, institutions such as the Central Bank of Nigeria and national cyber-response organizations in Tunisia, Ivory Coast, Morocco, and Kenya have sounded the alarm to businesses and citizens, urging them to improve security measures. But states across Africa still lack a dedicated public cybersecurity strategy. As a result, cybersecurity initiatives related to COVID-19 have been mostly led by the private sector, especially professional and sectoral federations. These initiatives are rarely enough, as it’s a long, hard grind for most companies just to cope with the business impact of the pandemic on their day-to-day activities.
Addressing these vulnerabilities in the context of heightened cyberattacks requires a coordinated and dedicated commitment to cybersecurity at a time when governments and organizations are already strained by the health and economic consequences of the COVID-19 pandemic. African states and regional bodies have taken initial steps toward implementing a continent-wide strategy for improving cyber-resiliency, but the vulnerabilities exposed by the COVID-19 pandemic require these efforts to be accelerated by building the institutional and coordinating mechanisms to better mitigate cybersecurity threats.
Across the African continent, the relentless spread of networks, sensors, artificial intelligence, and automation is driving a revolution to an unknown destination. Emerging technologies such as CCTV cameras with facial recognition systems, drones, robots, and “smart cities” are proliferating. Digitization is improving government revenue collection and curbing corruption. Cameras and facial recognition technologies are helping authorities respond to terrorist attacks. Drones are delivering life-saving medical supplies. Yet with each advance there is a cost: Sophisticated malware enables novel forms of criminality, surveillance technology powers new tactics of repression, and drones raise the prospect of an autonomous weapons arms race.
Emerging technology is having a powerful impact on the security and stability of African states. Yet the digital revolution’s ultimate legacy will be determined not by technology, but by how it is used. African countries that take advantage of the opportunities and limit the risks inherent in emerging technology may achieve greater peace and prosperity. Yet many countries could be left behind. As the continent recovers from the COVID-19 pandemic, its leaders face a choice between harnessing emerging technology to improve government effectiveness, increase transparency, and foster inclusion, and wielding it as a tool of repression, division, and conflict.