Technology is fundamentally altering the security landscape. Rapid and profound advances in hardware and software, paired with the global shift to digitally networked communications and transactions, have transformed economies and security, along with the fabric and rhythm of daily life. They have introduced new risks to personal safety and national security, fueled a strategic competition between the United States and China, and increased collective vulnerability to malicious actors armed with cheaper, more effective, and harder-to-attribute tools.
These changes require the U.S. government to better incorporate emerging technology issues into its national-security decision making—particularly by overhauling the institutional structure and priorities of the National Security Council, which sets the stage for the government’s approach to national security. Drawing on more than 25 interviews with current and former NSC staffers, interagency personnel, national security professionals, policymakers, and academics, this analysis offers several policy options for restructuring the NSC to better respond to these developments. Interviewees represented a diverse array of perspectives, with strong and differing opinions on every issue. Our research sought to surface the best ideas and to probe key concerns, while recognizing that not all trade-offs can be satisfyingly balanced, nor all disagreements resolved. This analysis serves as both a snapshot of the current challenges facing our national security enterprise and a blueprint for thinking through how to solve them.
Two recent breakthroughs in quantum computing have generated significant excitement in the field. By using quantum computers to solve problems that classical computers could not, researchers in the United States and China have separately ushered in the era of “quantum advantage.” Yet as momentous as the demonstration of quantum advantage may be, it is the availability of more capable quantum machines that will ultimately have greater impact. Access to these machines will foster a cohort of “quantum natives” capable of solving real-world problems with quantum computers.
Both recent breakthroughs—random circuit sampling by Google in 2019 and boson sampling by the University of Science and Technology of China in 2020—are problems useful for demonstrating quantum advantage. But they do not have real-world utility and are akin to esoteric Plinko games. Neither demonstration brings us closer to identifying any near-term application for quantum computers that will drive technology development and demonstrate impact.
Although quantum computing is in its infancy, the field is already seeing significant commercial investment. The history of classical computing suggests that if this commercial activity is to continue, it is vital to identify real-world applications for near-term quantum machines, applications with real advantage over classical approaches. Doing so requires making quantum computing available much more widely. Fortunately, we are also witnessing the emergence of quantum machines capable enough to engage a broader cohort of the public—and it is this public availability that will maximize our ability to identify truly useful applications.
The rapid development of an effective COVID-19 vaccine provides hope that the pandemic might be brought to an end, but as societies roll out vaccines and begin to open up, policymakers face difficult questions about how best to verify individuals’ vaccine records. Building vaccine record verification (VRV) systems that are robust and ethical will be vital to reopening businesses, educational institutions, and travel. Historically, such systems have been the domain of governments and have relied on paper records, but, now, a variety of non-profit groups, corporations, and academic researchers are developing digital verification systems. These digital vaccine passports include the CommonPass app, developed by The Commons Project in partnership with the World Economic Forum to verify COVID-19 test results and vaccine status, as well as similar systems several major tech companies are actively exploring.
VRV systems present both opportunities and risks in tackling the COVID-19 pandemic. They offer hope of more accurate verification of vaccine status, but they also run the risk of both exacerbating existing health and economic inequalities and introducing significant security and privacy vulnerabilities. To mitigate those risks, we propose a series of principles that ought to guide the deployment of VRV systems by public health authorities, policymakers, health care providers, and software developers. In particular, we argue that VRV systems ought to align with vaccine prioritization decisions; uphold fairness and equity; and be built on trustworthy technology.
The Jan. 6 attack on the U.S. Capitol by supporters of President Donald Trump has brought simmering controversies over social media platforms to a boil. In the wake of Trump’s incitement of the attack, Twitter first suspended and then permanently disabled his account, albeit reluctantly. Reactions from other firms—including Amazon, Apple, Facebook, and Google—followed quickly. Some suspended Trump’s accounts. Others targeted disinformation, either directly, such as when YouTube began limiting false claims about the 2020 presidential election, or indirectly, as when Apple and Google removed Parler from their app stores.
Prior to the election, lawmakers on both sides of the aisle were already demanding that social media platforms be regulated more closely. As a candidate, President Joe Biden said he would like to see Section 230 of the Communications Decency Act repealed, though he has given no indication he will make doing so a priority. The role of online misinformation and conspiracy theories in fomenting an attack on the Capitol may generate additional urgency for reform, but there is no consensus on what such regulation ought to achieve. If increased political pressure translates into legislative activity, it will likely mean alterations to or the outright repeal of Section 230, which provides platforms with liability protection for most content posted by users and has emerged as the central battleground in debates over platform regulation. One Democratic proposal, floated by Reps. Anna Eshoo and Tom Malinowski in the aftermath of the attack, would limit Section 230 protections for content that encourages civil rights abuses or terrorism. A bipartisan proposal authored by Democratic Sen. Brian Schatz and Republican Sen. John Thune aims to encourage transparency and consistency in platform content moderation decisions. Democrats and Republicans are deeply divided in their reasons for reforming the law: Republicans want to strip Section 230 protections because social media companies moderate too much content; Democrats want the law reformed because platforms aren’t doing enough to quash harmful misinformation. Platforms thus face a Goldilocks problem: They can neither remove nor host too much content, but must get their curation exactly right to satisfy both camps.
These political divides haven’t stopped a wide range of observers from predicting Section 230’s demise as a result of the attack on the Capitol. Getting rid of Section 230 is a seemingly straightforward way to press platforms to intervene more frequently. But a repeal is likely to cause significant disruptions in the short to medium term. In the long run, changes will be far less dramatic than either proponents or critics envision—the information available online will probably remain relatively constant, but the entities that carry it may shift, especially if increased costs create disadvantages for start-up companies.
On Wednesday, Jan. 6, 2021, a mob of conspiracy theorists left their online world and appeared in person in Washington, D.C., to take part in a violent insurrection. Encouraged by then-President Donald Trump, who, they believed, was to lead their revolution, QAnon conspiracists, along with other far-right elements, were at the violent vanguard. Many of those who stormed the Capitol appear to have believed that they were bringing about “The Storm,” a day of reckoning for the members of an alleged Satanic cabal who “stole” the election from their messianic leader, according to Q mythology.
The role of QAnon supporters in the riot should have come as no surprise. QAnon messages have reached Trump, who has amplified them in turn, as documented by a December report by the Network Contagion Research Institute (NCRI) and American University. The movement’s violent nature and the susceptibility of individuals to the conspiracy theory have made QAnon a significant threat to democracy. Federal authorities are beginning to recognize that threat, and in 2019 the FBI named QAnon as a domestic terrorism threat. At her recent confirmation hearing, Avril Haines, President Joe Biden’s pick to lead the U.S. intelligence community, committed to producing a public assessment of the threat posed by QAnon.
The violence of Jan. 6 made clear that the health of online communities and the spread of disinformation represent a major threat to U.S. democracy. As the Biden administration takes office, it is time for policymakers to consider how to take a more active approach to countering disinformation and to form a public-private partnership aimed at identifying and countering disinformation that poses a risk to society.
After a long period of consultation, in mid-December the European Commission finally presented a pair of draft laws that aim to rewrite the rules of the internet. While it will be years before the Digital Markets Act (DMA) and the Digital Services Act (DSA) come into force, the two proposals represent a major step forward in updating regulations for online intermediaries, or companies that host third-party content or sell third-party products. Although sticking points and open questions remain, the drafts show how the Commission plans to make digital platforms compete with one another and to ameliorate their potential negative impacts on consumers and society. In many respects, the DMA and DSA offer a well-balanced approach that imposes additional rules where the potential for harm to competition and consumers is highest. Nonetheless, there remains room to strengthen the provisions to ensure they make platform structures more transparent and competitive.
In a 1982 speech, then-U.S. Secretary of Defense Caspar W. Weinberger warned that the United States had for the better part of a decade facilitated unfettered technological transfer and trade with the Soviet Union. This high-tech transfer, the secretary argued, was being done through “legal and illegal channels” and was effectively the technological “rope to hang us,” as it was bolstering Soviet military capability.
Replace the Soviet Union with China, and the themes of Weinberger’s speech would not seem out of place today. At the time, a strategic competitor appeared to be rapidly advancing on U.S. technological superiority through a concerted campaign of technology transfer, aided by U.S. scientific engagement policies, an open academic system, and intellectual property theft. In some cases, this had been abetted by U.S. high-tech trade with the Soviets over the previous decade: The sale of advanced ball bearing machines had helped improve the accuracy of Soviet missiles, the Pentagon chief declared. In other cases, allies were to blame, as when Japan sold civilian dockyards to the USSR that were diverted to service the Soviets’ new aircraft carriers. Replace ball bearings with lasers and dry docks with semiconductor manufacturing equipment, and Weinberger could be speaking of today’s trade tensions between the United States and China.
After decades of globalization, technological integration and open high-tech trade, justified anxieties have returned in Washington and its allied capitals over the depth and nature of their scientific and technological relationships with Beijing. In some ways, examples from the Cold War highlight the ordinariness of these concerns. Many have forgotten the depth of high-tech trade between the United States and the Soviet Union during periods of the Cold War, when Washington continued to largely engage with Moscow on basic science, including topics and technologies that were highly sensitive and dual-use in nature. The debate over the wisdom of doing so has resurfaced in the last several years in relation to China. But ultimately the Cold War shows how different this new competition will be. China’s rise as a scientific and potentially technological peer will require fundamentally new thinking from Washington in how it works with its allies. The moral hazard is also more complex, as Beijing has sought U.S. and allied companies, technologies, and research to help bolster its transformation into a techno-authoritarian state.
With advanced semiconductors key to powering a wide range of potentially transformative technologies, cutting-edge computer chips have become a heated area of geopolitical competition for the 21st century. Despite their importance, semiconductors represent a rare area in which the Chinese economy is dependent on the rest of the world—rather than the other way around. Every year, China imports more than $300 billion worth of semiconductors, and most, though not all, major American semiconductor companies pull in at least 25% of their sales from the Chinese market.
This mutual dependence has benefited the technology sectors in both countries. Every major Chinese technology company relies on U.S. chips: Tencent or Alibaba would not be the powerhouses they are today if they had relied on Chinese microprocessors during their formative years or had developed and manufactured their own. Many U.S. companies, meanwhile, have benefited from Chinese customers, markets, and innovations. The scale and cost reductions enabled by system and device manufacturing based in China and Asia more broadly have helped make information technology ubiquitous. Despite the harsh rhetoric on both sides of the Pacific, American semiconductor companies and their Chinese counterparts today are working together on hundreds, if not thousands, of product designs and joint technology development efforts.
Yet these collaborations have not prevented semiconductors from becoming a central fault line in tensions between the United States and China. In a post-COVID, post-Trump world, many in Washington would like to see the American economy less dependent on China and are exploring new restrictions on imports of Chinese hardware and exports of both cutting-edge semiconductors and the equipment required to manufacture them. Meanwhile in Beijing, Chinese officials are pursuing a clearly stated, though ambiguously defined, goal of “technology independence,” as articulated in the 14th five-year plan outlined last year.
But how to achieve that independence—and whether pursuing it makes sense in the first place—represents a question of profound uncertainty. As U.S. officials weigh their policy options in that regard, they first need to level-set on the state of the Chinese and global semiconductor industries, and how Beijing has approached its goal of building a domestic chip-making industry. Though it has made major advances, most segments of China’s semiconductor industry remain behind its foreign competitors, and its efforts to catch up face major economic obstacles. How the United States approaches its policy toward that industry will have major ramifications not only for the U.S. relationship with China, but also for the American semiconductor, systems, and internet services industries, which remain deeply intertwined with China.
In 2019, and again in 2020, Facebook removed covert social media influence operations that targeted Libya and were linked to the Russian businessman Yevgeny Prigozhin. The campaigns—the first exposed in October 2019, the second in December 2020—shared several tactics: Both created Facebook pages masquerading as independent media outlets and posted political cartoons. But by December 2020, the operatives linked to Prigozhin had updated their toolkit: This time, one media outlet involved in the operation had an on-the-ground presence, with branded merchandise and a daily podcast.
Between 2018 and 2020, Facebook and Twitter announced that they had taken down 147 influence operations in total, according to our examination of their public announcements of disinformation takedowns during that time period. Facebook describes such operations as “coordinated inauthentic behavior,” and Twitter dubs them “state-backed information operations.” Our investigation of these takedowns revealed that in 2020 disinformation actors stuck with some tried and true strategies, but also evolved in important ways, often in response to social media platform detection strategies. Political actors are increasingly outsourcing their disinformation work to third-party PR and marketing firms and using AI-generated profile pictures. Platforms have changed too, more frequently attributing takedowns to specific actors.
Here are our five takeaways on how online disinformation campaigns and platform responses changed in 2020, and how they didn’t.
A decade ago, the writer Eli Pariser popularized the term “filter bubbles,” which refers to the idea that search and social media algorithms wrap individuals in information bubbles that are tailored to their interests and behaviors rather than ones filtered by traditional gatekeepers like journalists. However, academic research has largely failed to support Pariser’s thesis, suggesting that filter bubbles may not be as pervasive as feared.
Yet in the debate over algorithms’ impacts on society, the focus on filter bubbles may be something of a red herring. While personalized filter bubbles might not be the most important problem, we show in our research that even general recommendation algorithms remain highly problematic. Even when they are not personalized, recommendation algorithms can still learn to promote radical and extremist content.
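The mechanism behind this finding can be illustrated with a toy model. The sketch below is not from the research described above; it is a minimal, hypothetical simulation in which a non-personalized recommender simply maximizes observed engagement (an epsilon-greedy bandit over three invented content categories with made-up click rates). Because the "extreme" category draws the most clicks in this toy setup, the algorithm learns to recommend it most often, without any personalization at all.

```python
import random

# Toy illustration (hypothetical categories and click rates, not real data):
# an engagement-maximizing recommender modeled as an epsilon-greedy bandit.
random.seed(0)

items = {"mainstream": 0.10, "partisan": 0.15, "extreme": 0.25}  # click prob.
shows = {k: 0 for k in items}
clicks = {k: 0 for k in items}

def recommend(epsilon=0.1):
    # Explore occasionally; otherwise exploit the best-observed item.
    if random.random() < epsilon or all(v == 0 for v in shows.values()):
        return random.choice(list(items))
    return max(items, key=lambda k: clicks[k] / shows[k] if shows[k] else 0.0)

for _ in range(10_000):
    item = recommend()
    shows[item] += 1
    if random.random() < items[item]:  # simulated user clicks at item's rate
        clicks[item] += 1

# Share of recommendations per category; "extreme" dominates because it
# produces the most engagement, even with no user-level personalization.
share = {k: shows[k] / sum(shows.values()) for k in items}
print(share)
```

The point of the sketch is that no filter bubble is needed: a single global feedback loop between engagement and ranking is enough to push the most provocative content to the top.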