Tomorrow’s tech policy conversations today
Within two months of its launch last fall, the popular chatbot ChatGPT had reached an estimated 100 million monthly users—making it the fastest-growing consumer application in history. Now that its maker OpenAI has released a new version of the chatbot’s underlying language model, anyone with access to the app will soon be able not only to write poetry and practice journalism, but even to ace the LSAT and GRE. Like most leading-edge language applications today, ChatGPT relies on a machine learning architecture known as a transformer to generate probability distributions over words and then recognize, translate, predict, or generate text. The quality of that text is often deeply impressive—so much so that even relatively restrained tech publications have concluded ChatGPT may be “coming for your job.”
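The core idea of generating text from a probability distribution over words can be illustrated with a toy sketch. The snippet below is not how ChatGPT works internally; it simply shows, under heavy simplification, how a model's raw scores (logits) over a tiny vocabulary can be converted into probabilities and sampled to pick the next word. The vocabulary, scores, and the `temperature` parameter are illustrative assumptions, not details from the article.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution that sums to 1."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def sample_next_token(logits, temperature=1.0):
    """Sample one token from the distribution implied by the logits.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied output).
    """
    scaled = {tok: v / temperature for tok, v in logits.items()}
    probs = softmax(scaled)
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point rounding

# Toy scores a model might assign to continuations of "The cat sat on the"
logits = {"mat": 4.0, "sofa": 2.5, "moon": 0.5}
print(sample_next_token(logits))
```

Real models repeat this step token by token, feeding each sampled word back in as context, which is what lets them produce long passages of fluent text at negligible cost.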
Yet large language models may disrupt far more than just the economy. They also appear poised to challenge democracy. At issue is not just the risk of automated misinformation campaigns, but the threat to traditional forms of democratic engagement. Democracy depends in part on how responsive those who govern are to the preferences of the governed: from public comments on proposed agency rules (“submit your comments and let your voice be heard,” exhorts regulations.gov) to legislators’ refashioned electronic mailboxes, citizens have long offered feedback to policymakers via the written word. By making it trivial to produce large quantities of credible text, language models threaten to weaken the signal those words provide. As the New Yorker recently observed, ChatGPT “will strain a political system in peril.”
In 1927, the political scientist Harold Lasswell described political propaganda as “the management of collective attitudes by the manipulation of significant symbols.” Underlying Lasswell’s work were two insights. The first is that the mass public played a key role in political outcomes, such as success and failure in war. The second is that those public attitudes could be manipulated. Scaling to the mass level, however, required simplicity. This meant using memorable symbols and slogans that could frame “pictures”—cognitive shortcuts—that the public recalled when engaging elected officials to shape policy.
Nowhere has propaganda been more pervasive than in war, where acquiescence or resistance hinges on public sentiment and behavior. In World War II, Hollywood produced films that “created a communal viewing experience unlike any during World War I,” intended to maintain resolve for the war. These films capitalized on the public’s predisposition to understand social life in terms of in-groups and out-groups, a predisposition that shapes how people often interpret foreign policies, including the use of force.
In contemporary conflict, those symbols have increasingly taken the form of memes, defined as a “piece of media that is repurposed to deliver a cultural, social, or political expression, mainly through humor.” Online users have attempted to counter the Islamic State by creating memes satirizing the group’s barbarism, especially on designated “Troll ISIS Days.” Lebanese internet users have ruthlessly mocked Hezbollah’s leader, Hassan Nasrallah, with memes.
Memes have also been a staple of the war in Ukraine, providing a valuable window into key questions about how actors use memes for political purposes in war. Who is the audience, what is the message, and what events drive the production of these memes?
To explore these and related questions, we compiled an original dataset of memes posted by Ukrainians throughout the war. The memes were all taken from Reddit, a popular social media website that allows users to comment in discussion forums based on shared interests. Overall, our analysis of Ukrainians’ use of memes points to several findings that shed new light on how other countries may use memes during conflict. First, memes are not used in isolation from a particular military operation on the battlefield, such as an offensive or counter-offensive. Rather, they are concurrent and complementary to these military efforts, suggesting that they are meant to play a supporting role. Second, memes do not seem intended to directly influence diplomacy, but may further diplomatic efforts indirectly by bolstering popular support for the war. Third, memes target a diverse array of audiences, including Ukrainian citizens, expatriate audiences abroad, and Russians, especially soldiers’ families. This suggests that those creating and posting memes assume that success is a function of both domestic resolve as well as foreign material support.
In May of last year, around 8,700 of Russia’s leading hackers, developers, and cybersecurity firms converged on Moscow for one of the country’s largest hacker conferences: Positive Hack Days. Held annually since 2011, Positive Hack Days is in many ways reminiscent of American cybersecurity events such as DEF CON or Black Hat, from its vendor-driven talks to its background music and social activities for participants.
Importantly, Positive Hack Days is organized by the Russian cybersecurity company Positive Technologies—which the U.S. government sanctioned in April 2021 for supporting Russian government cyber operations. The company reportedly discovers vulnerabilities in technology products, develops exploits for those vulnerabilities, and provides them to Russia’s Federal Security Service (FSB). It plays a key role in Russia’s national cyber threat response program (GosSOPKA), too. But Positive Technologies’ assistance to the Russian intelligence community doesn’t end there. It also hosts events that serve as recruitment hotbeds for the FSB and Russia’s military intelligence agency (GRU), which reportedly scout company talks, capture-the-flag competitions, and other hacking challenges to identify talent. Positive Hack Days appears to be one such gathering.
Last May’s conference offers a unique window into Russia’s cybersecurity community. At a time when the Putin regime is waging an illegal war on Ukraine and Western governments have slammed the Russian economy with sanctions, Russia’s technology industry is more isolated than ever. Across the Russian technology sector, plenty of developers oppose the war or have left the country entirely, and the politically charged environment creates precarity for those who remain. Although the panels and discussions at Positive Hack Days focused on nationalism and the importance of Russia’s domestic technology sector, some participants voiced concerns about technological isolationism. Many others, however, expressed support for the Putin regime, particularly those who have seized on sanctions and tech isolation as an opportunity to expand their own cybersecurity products and services. Western governments cannot understand and prepare for the future of Russia’s cybersecurity sector, cyber talent base, and cyber capability development without analyzing the full range of perspectives and interests found at gatherings like these.
In recent years, as progress in artificial intelligence (AI) has accelerated, nearly every major power has pledged to develop advanced AI capabilities and effectively integrate AI into its armed forces. Yet none has pursued those efforts as purposefully as China. Not only has Beijing issued an ambitious plan to make China the world’s leading AI power by 2030, but the Chinese Communist Party (CCP) has unveiled an aggressive innovation-driven strategy for the Chinese military, the People’s Liberation Army (PLA). Likewise, Xi Jinping, the General Secretary of the CCP, has consistently emphasized China’s commitment to AI development and “intelligent warfare,” most recently in his landmark report this fall to the 20th Party Congress.
If China’s strategic ambitions for AI are clear, how it intends to integrate AI into the PLA remains opaque. The CCP’s goals for militarized AI are still shrouded in mystery, even as the PLA clearly views AI as a technology that will be vital for driving next-generation warfare.
However, the recently established PLA Strategic Support Force (SSF) offers at least some clues into how Beijing aims to infuse AI capabilities throughout the military. Although the precise purpose of the SSF is not yet well understood, the organization has been given a broad mandate to innovate and tasked with integrating numerous “strategic functions.” Given the breadth of its organizational structure and mandate, the SSF appears to be at the forefront of the PLA’s efforts to modernize around new technologies like AI.
To better understand the SSF, we recently investigated whether it will have a “game-changing” impact on future conflicts, in which mastering the information domain and integrating AI effectively are likely to determine the winner. Our research took a deep dive into open-source information, convened subject matter experts, and drew on scholarly analysis to form a more precise understanding of what role the SSF might play in the PLA’s AI innovation—and what role it is not playing.
Polarization is widely recognized as one of the most pressing issues now facing the United States. Stories about how the country has fractured along partisan lines, and how the internet and social media exacerbate those cleavages, are frequently in the news. Americans dislike their political adversaries more than they used to. Meanwhile, disinformation and hate speech, often produced by actors with strong incentives to inflame existing social and political divisions, proliferate in digital spaces. The real-world consequences are far from trivial—consider the violence at the Capitol on January 6 or even the more recent assault on Nancy Pelosi’s husband. Although the extent to which political polarization leads individuals to violate democratic norms is a matter of debate, it is hard to imagine an event like the Capitol riot occurring absent such a polarized political climate.
Of particular concern is affective polarization, which refers to the animus individuals feel toward those who disagree with them politically. If the free exchange of ideas between non-likeminded people is a basic tenet of democracy, then affective polarization threatens to undermine democracy itself. In the United States, affective polarization now underlies partisan standoffs over everything from COVID-19 policy to climate change.
For social networks and digital platforms, polarization is both a challenge and an opportunity. Social media companies are often blamed for driving greater polarization by virtue of the way they segment political audiences and personalize recommendations in line with their users’ existing beliefs and preferences. Given their scale and reach, however, they are also uniquely positioned to help reduce polarization. Based on our recent review of more than half a century’s worth of research into how best to bridge social divides, there are clear steps digital platforms can take to curb polarization.
In early December, President Biden hosted French President Emmanuel Macron for the first state visit of his presidency. Industrial policy was front and center on the diplomatic agenda, with Macron expressing concern over the signature achievements of the Biden administration’s “Buy America” effort: namely, the Inflation Reduction Act (IRA), which includes tax credits only for American-made electric vehicles, and the CHIPS Act, which provides investment and incentives for semiconductor manufacturing in the United States. The frustration did not come as a surprise. The protectionist overtones of the IRA and CHIPS Act have rankled allies in Europe ever since they were passed; prior to his trip, Macron even promoted a “Buy European Act” to counter the United States’ foray into industrial policy. Along with unprecedented restrictions on semiconductor components, the rhetoric and regulations coming out of the U.S. and Europe gesture toward a reversal of a decades-long trend of globalization and raise an important set of policy questions: Why have the U.S. and EU been looking to reshore industries? What are the potential pitfalls of such protectionist approaches? And what alternative policies might accomplish the same goals?
The surprise launch of Sputnik 65 years ago, along with the Apollo moon landings, the two space shuttle disasters, and perhaps the movie Armageddon, may encapsulate the space age in our collective memory. But these events obscure a less dramatic, yet far more frequent, activity: near-daily commercial space launches. The American commercial space industry has grown rapidly in recent years, and in turn prompted global interest in replicating its successes. But as the recent failure of a Blue Origin New Shepard rocket demonstrates, moving beyond the longstanding “slow and steady” governmental approach into the Silicon Valley-inspired ethos of “fail fast, fail forward” brings new challenges. The proliferation of commercial space activity demands better coordination and stronger oversight to minimize technical accidents and political tensions.
The growth of the private space industry is extraordinary. So far this year, SpaceX has launched 31 rockets, already matching its total for 2021—a pace of one launch every 6.4 days and ten times as many launches as any of its American competitors. The company is building a new launch tower in Florida, provides launch services for NASA and the Department of Defense, and operates 2,500 Starlink satellites offering internet access to a broad range of customers. Blue Origin’s New Shepard is operational, with four launches this year, three of them successful, though its range is limited to suborbital flights. The company’s next model, New Glenn, is under development. Virgin Galactic, owned by Richard Branson and also based in the U.S., advertises “space for the curious.”
The rapid expansion of commercial space activity, as well as its integration into key government programs and services, represents a leap into uncharted waters. The rise of entrepreneurial “New Space” companies will challenge the capacity of both individual states and the international community to regulate and coordinate private space activity effectively. As the cost of placing payloads in space declines, the political and strategic importance of commercial space flight will only grow. Ensuring space is governed responsibly will be essential.
In the wake of the U.S. Supreme Court’s decision to overturn Roe v. Wade last summer, journalists and privacy advocates alike quickly sounded the alarm about the potential for prosecutors to use commercially collected data in abortion-related cases.
Fortunately, that concern has already translated into political action. Legislators in California recently passed A.B. 1242, a law that gives California-based tech and communications companies a way to resist out-of-state requests for data on digital activities in abortion prosecutions. The law is thus the first in the nation to explicitly block out-of-state investigators from using digital information to probe abortion-related actions that are legal in-state. Meanwhile, President Biden has directed the chair of the Federal Trade Commission to “consider taking steps to protect consumers’ privacy when seeking information about and provision of reproductive health care services.”
Yet ensuring that private data is not misused in abortion-related cases is not the responsibility of policymakers alone. Technology firms also have a critical role to play. As our digital lives lead to evolving social norms about privacy and security, tech firms need to respond to activists, investors, consumers, and the broader public in order to maintain their license to operate. Taking action to stay in tune with social norms may require a combination of shifting data practices toward minimization, implementing end-to-end encryption for private communication, fostering adoption of third-party trustmarks for privacy and security, and producing better transparency reports.
In October 2019, the professional esports player known as “Blitzchung” was interviewed on a livestream to discuss a match he had just won in Taiwan as part of a tournament for the game Hearthstone. Wearing a gas mask and goggles and speaking on the official Taiwanese Hearthstone stream, Blitzchung repeated a popular slogan of protesters in Hong Kong, who had recently taken to the streets against China’s erosion of the city’s autonomy: “Liberate Hong Kong, revolution of our time.”
Blitzchung, whose real name is Ng Wai Chung and who hails from Hong Kong, quickly found himself in the crosshairs of Activision Blizzard, the company behind Hearthstone. Blizzard shut down the stream, suspended Blitzchung, and posted a formal apology in Chinese on Weibo (but never released one in English). The company declared Blitzchung to be in violation of player rules that forbid conduct that could be offensive or might harm the company’s image, banned him from competing for a calendar year, and demanded he forfeit thousands of dollars in prize money. Blizzard also fired the two streaming journalists who were interviewing Blitzchung and banned them from covering future Activision Blizzard events.
The “Blitzchung affair,” as it came to be known, highlights how video games pose unique challenges to free speech. Western companies complying with Chinese censorship demands—in this case, attempting to suppress advocacy for a free Hong Kong—as a cost of doing business isn’t new, but the role of video games as an important venue for speech and a central battleground for free speech remains underappreciated. Conflicts over free speech in video games go far beyond Hearthstone. Whether in the chat features of video games or in the narrative decisions made by video game designers, the censorship demands of countries around the world are increasingly shaping the digital entertainment consumed by the world’s more than three billion gamers. These demands create a difficult challenge for video game companies: balancing the need for business growth with a commitment to free speech.
As the Biden administration has worked in recent months to develop cryptocurrency regulations, the U.S. government finds itself caught between two extremes: unwilling to actively block cryptocurrency transactions for fear of stifling a growing and potentially lucrative industry, but also determined not to give up on policing illegal cryptocurrency payments and their role in the cybercrime ecosystem. In a recent executive order and subsequent strategy documents, President Biden has pledged both to support the development of cryptocurrencies and to restrict their illegal uses, two goals the United States has long struggled to reconcile when it comes to digital money. The administration made clear in its executive order just how much the U.S. government wants to have it both ways, touting the potential benefits of virtual currencies for “responsible financial innovation” as well as the risks they pose to consumers, investors, and “financial stability and financial system integrity.” The executive order extended to all digital assets—not just cryptocurrencies—including other property that exists only in digital form, such as non-fungible tokens. But of all forms of digital assets, cryptocurrencies present the biggest security risks, as well as the greatest potential economic benefits.
In the past year, the balance struck by the U.S. government between encouraging entrepreneurial cryptocurrency ventures and discouraging criminal uses of cryptocurrencies seems to have shifted somewhat, due both to the volatility of the virtual currencies themselves and to growing concerns about the types of crime they enable. In particular, the United States seems increasingly interested in developing domestic cryptocurrency policies with global impact on overseas criminal enterprises, including sanctioning cryptocurrency exchanges and individual cryptocurrency wallets, as well as recovering cryptocurrency payments made to criminals. While these are restrictions on the behavior of U.S. individuals and companies, they are ultimately aimed at making it more difficult for foreign actors to profit from international cybercrime. It is too soon to say whether these recent measures will prove effective or enforceable, or whether they can be scaled up to address the full extent of the challenges posed by cryptocurrencies. But they mark a significant step in the history of U.S. cryptocurrency regulation, both in how aggressively the government is willing to go after criminal virtual currency enterprises and in how willing it is to enter the virtual currency space itself with a potential central bank digital currency (CBDC).