Over the past decade, the Russian government has attempted to achieve a measure of sovereignty over digital technology. By building a domestic technology industry and controls over internet traffic, the Kremlin has tried to gain independence from the Western technology industry and influence over the information available to Russian citizens. In the aftermath of the Russian invasion of Ukraine and the crushing sanctions and domestic political unrest it precipitated, this project has never been more urgent.
Cut off from Western technology suppliers, Russia is moving to build an increasingly autarkic economy. Last month, Russian President Vladimir Putin created a new commission on internet and technological “sovereignty” in Russia and placed Dmitry Medvedev, former president and current deputy chairman of the Security Council, as its head. According to The Moscow Times, the goal of this commission is to find substitutes for the critical IT supplies the Russian economy desperately needs. In short, Moscow is leaning into techno-isolationism more than ever before.
On March 22, 2003, two days into the U.S.-led invasion of Iraq, American troops fired a Patriot interceptor missile at what they assumed was an Iraqi anti-radiation missile designed to destroy air-defense systems. Acting on the recommendation of their computer-powered weapon, the Americans fired in self-defense, believing they were shooting down a missile aimed at their outpost. What the Patriot missile system had identified as an incoming missile was in fact a UK Tornado fighter jet, and when the Patriot struck the aircraft, it instantly killed the two crew members on board. The deaths were the first losses suffered by the Royal Air Force in the war and the tragic result of friendly fire.
A subsequent RAF Board of Inquiry investigation concluded that the shoot-down was the result of a combination of factors: how the Patriot missile classified targets, rules for firing the missiles, autonomous operation of Patriot missile batteries, and several other technical and procedural factors, like the Tornado not broadcasting its “friend or foe” identifier at the time of the friendly fire. The destruction of Tornado ZG710, the report concluded, represented a tragic error enabled by the missile’s computer routines.
The shoot-down of the Tornado happened nearly 20 years ago, but it offers an insight into how AI-enabled systems and automated tools on the battlefield will affect the kinds of errors that happen in war. Today, human decisionmaking is shifting toward machines. With this shift comes the potential to reduce human error, but also to introduce novel types of mistakes. Where humans might once have misidentified a civilian as a combatant, computers are expected to step in and provide more accurate judgment. Across a range of military functions, from the movement of autonomous planes and cars to the identification of tanks on a battlefield, computers are expected to provide quick, accurate decisions. But the embrace of AI in military applications also comes with immense risk. New systems introduce the possibility of new types of error, and understanding how autonomous machines will fail is essential when crafting policy for buying and overseeing this new generation of autonomous weapons.
On April 27, two things happened with potentially significant ramifications for online privacy. First, a panel of the Ninth Circuit Court of Appeals issued an opinion that, if it stands, may grant private companies huge influence in determining how much digital privacy their users are entitled to expect from the government. Next, the world’s richest man said something on Twitter.
Of the two, it was of course Elon Musk who got more attention—never mind that what the Ninth Circuit says is binding law for nearly 62 million Americans. In what appeared to be an offhand comment amid his $44 billion quest to acquire Twitter, Musk declared that direct messages on the platform “should have end to end encryption like Signal, so no one can spy on or hack your messages.” Privacy activists have long advocated that Twitter roll out exactly such a feature, but the complexity of doing so has left the platform unable to deliver.
In the aftermath of the Ninth Circuit’s ruling that coincided with Musk’s tweet, providing an encrypted messaging solution for Twitter users has never looked more attractive. The ruling highlights the shortcomings of online privacy law in the United States and the influence that corporations exert in determining our rights online.
The United States and China are increasingly engaged in a competition over who will dominate the strategic technologies of tomorrow. No technology is as important in that competition as artificial intelligence: Both the United States and China view global leadership in AI as a vital national interest, with China pledging to be the world leader by 2030. As a result, both Beijing and Washington have encouraged massive investment in AI research and development.
Yet the competition over AI is not just about funding. In addition to investments in talent and computing power, high-performance AI also requires data—and lots of it. The competition for AI leadership cannot be won without procuring and compiling large-scale datasets. Although we have some insight into Chinese AI funding generally—see, for example, a recent report from the Center for Security and Emerging Technology on the People’s Liberation Army’s AI investments—we know far less about China’s strategy for data collection and acquisition. Given China’s interest in integrating cutting-edge AI into its intelligence and military enterprise, that oversight represents a profound vulnerability for U.S. national security. Policymakers in the White House and Congress should thus focus on restricting the largely unregulated data market not only to protect Americans’ privacy but also to deny China a strategic asset in developing its AI programs.
Over the past decade, the Indian government has assembled a sprawling biometric database designed to improve the delivery of social services to the country’s more than 1 billion citizens. The Aadhaar database is one of the world’s largest biometric identity programs and has been credited with making it easier for Indians to access subsidies and pension payments. Using fingerprints and iris scans, Aadhaar has made it possible for the government to verify the identity of the country’s residents with relative ease. Now, the Election Commission of India wants to link its voter registration database with Aadhaar, a move that would have profound consequences not only for the privacy of Indian citizens but for the future of biometric databases worldwide.
As it stands, the Election Commission of India (EC) stores its voter registration information in its own database and has its own verification tools. However, the EC believes Aadhaar can offer stronger protections against fraud and registration errors. In August, the Government of India, on behalf of the EC, approached the Unique Identification Authority of India (UIDAI), the body that administers Aadhaar, with a proposal to integrate the two databases. In December 2021, the Lok Sabha passed the Election Laws Amendment Bill, which creates a legal framework for integrating the two systems. Opposition groups argue that the bill will face serious legal hurdles.
The Aadhaar-EPIC controversy (EPIC being the Electors Photo Identity Card issued to registered voters) illustrates the serious problems that can arise when large biometric identity databases are expanded beyond their remit. Far from making India’s elections more secure, the marriage of the two systems could lead to disenfranchisement and increased voter microtargeting. With countries around the world launching or already administering biometric databases, India’s efforts to marry its biometric identification system to its voter registration database will set an important precedent for how governments deploy such systems. India’s experience should teach policymakers overseeing similar efforts the importance of investing in the security of the information ecosystem in which biometric and voting data are housed, of regulating and monitoring access to that data, and of scrutinizing how the technology is actually deployed in voter registration and identification.
A great reckoning has arrived for content moderation in podcasts. Just as Facebook, Twitter, YouTube, and other digital platforms have struggled for years with difficult questions about what content to allow on their platforms, podcast apps must now weigh them as well. What speech should be permitted? What speech should be shared? And what principles should inform those decisions?
Although there are insights to be gleaned from these ongoing discussions, addressing the spread of hate speech, misinformation, and related content via podcasts differs from doing so on other social-media platforms. Whereas digital platforms host user-generated content themselves, most podcasts are hosted on the open web. Podcasting apps typically work by plugging into an external RSS feed, downloading a given podcast, and then playing it. As a result, the main question facing podcasting apps is not what content to host and publish, but what content to play and amplify.
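The mechanics described above can be sketched in a few lines of code. The snippet below parses a hypothetical RSS feed (the feed XML and URLs are invented for illustration; a real app would fetch the feed over HTTP from a publisher-supplied URL) and pulls out the audio files a podcast app would queue for playback:

```python
# Minimal sketch of how a podcast app resolves playable episodes from an
# RSS feed. The feed content below is a hypothetical example.
import xml.etree.ElementTree as ET

FEED_XML = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Podcast</title>
    <item>
      <title>Episode 1</title>
      <enclosure url="https://example.com/ep1.mp3"
                 type="audio/mpeg" length="12345678"/>
    </item>
  </channel>
</rss>"""

def episode_audio_urls(feed_xml: str) -> list[str]:
    """Return the audio URLs advertised in each episode's <enclosure> tag."""
    root = ET.fromstring(feed_xml)
    return [
        enc.attrib["url"]
        for enc in root.iter("enclosure")
        if enc.attrib.get("type", "").startswith("audio/")
    ]

print(episode_audio_urls(FEED_XML))  # → ['https://example.com/ep1.mp3']
```

The key point the sketch makes concrete: the app never hosts the audio. It merely reads a publicly available feed and decides whether to fetch and play what the feed points to.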
Making those determinations is far from straightforward, of course, but the challenge is not an intractable one. From new policies and user interfaces to novel regulatory approaches, the podcast ecosystem can and should employ far more robust content-moderation measures.
Late last month, European policymakers unveiled a provisional agreement on the text of the Digital Markets Act, marking the beginning of a new era of digital regulation. The DMA aims to make it easier for small and mid-sized tech companies to enter markets currently dominated by Big Tech and proposes a set of rules that so-called “gatekeepers”—those with a market value of at least 75 billion euros or an annual turnover of 7.5 billion euros, and more than 45 million monthly active users—must abide by or risk staggering fines. Changes in the final rounds of negotiations turned the DMA from an innovative policy that was nonetheless firmly rooted in European case law into a far more novel regulatory approach. Even before coming into force, the DMA has paved the way for bolder digital regulation across the globe.
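The quantitative thresholds above can be expressed as a simple check. This is a deliberately simplified sketch (the function name and units are my own; the actual regulation also applies qualitative criteria and further conditions, such as counts of business users, that are omitted here):

```python
# Simplified sketch of the DMA's headline "gatekeeper" thresholds:
# a size test (market cap OR turnover) combined with a reach test (users).
def meets_gatekeeper_thresholds(
    market_cap_eur_bn: float,
    annual_turnover_eur_bn: float,
    monthly_active_users_mn: float,
) -> bool:
    """Check the headline size and reach thresholds for a 'gatekeeper'."""
    large_enough = market_cap_eur_bn >= 75 or annual_turnover_eur_bn >= 7.5
    wide_reach = monthly_active_users_mn >= 45
    return large_enough and wide_reach

print(meets_gatekeeper_thresholds(80, 5, 100))  # True: large market cap, wide reach
print(meets_gatekeeper_thresholds(10, 2, 100))  # False: below both size thresholds
```

Note that the size criteria are alternatives (either market value or turnover suffices), while the user-reach criterion must be met in addition.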
The DMA represents a landmark in the attempt by regulators globally to strengthen frameworks for ensuring competition in an ascendant technology industry. In doing so, the DMA adopts a number of untested approaches—most prominently, a proposal to require interoperability among messenger services. This entails significant risk but is in line with what appears to be a growing appetite among regulators to embrace more novel rulemaking to keep up with technological developments. Getting this approach right will require EU policymakers to adopt rigorous enforcement and oversight regimes to monitor how their regulatory structures are impacting technology ecosystems—and a willingness to quickly revise these regimes if they fail to deliver the desired effects.
The COVID-19 pandemic brought the consequences of offshoring semiconductors into sharp relief for American consumers and businesses. When the pandemic struck—snarling global supply chains and spiking demand for consumer electronics—American businesses and consumers were left without the inputs and supplies they had come to rely upon. This supply chain will remain at risk: Its core nodes remain in locations with high geopolitical uncertainty—none more important than Taiwan, whose semiconductor industry Beijing jealously eyes.
Such supply chain vulnerabilities alongside the recognition that semiconductors represent a strategic resource have inspired a push in Washington to rebuild American chip manufacturing. In June 2021, the U.S. Senate, in a rare act of bipartisan consensus, passed the U.S. Innovation and Competition Act (USICA), which would spend $52 billion to bolster the American semiconductor industry. In February of this year, the House of Representatives passed similar legislation—the America COMPETES Act—along mostly party lines. House and Senate negotiators now must reconcile these bills. President Joe Biden argued in his State of the Union address that passing some version of this legislation was essential “to compete for the jobs of the future” and to “level the playing field with China.”
But reshoring the semiconductor supply chain is unlikely to resolve the supply-chain shocks caused by the pandemic: construction of the most important nodes, namely fabrication of the chips themselves, would require not only tremendous up-front costs, but possibly a steady stream of government assistance in perpetuity. As lawmakers on Capitol Hill iron out how best to position the United States to maintain access to a key technology, it’s worth considering what a more holistic strategy to address semiconductor availability might look like.
Here, we propose a two-pronged approach. First, the United States should focus on deepening its high-tech collaboration with supply-chain partners such as South Korea, Taiwan, or even Europe. The U.S. should also amend immigration rules to permit more skilled workers to enter the country, augmenting the talent pool during a period of labor shortages and increasing the competitiveness of U.S.-based industry. We recommend this combination of policies rather than the costlier and riskier proposition of reshoring the industry from the ground up. The United States may not return to its 40% semiconductor manufacturing market share from the 1990s, but these policies would nonetheless help boost domestic production beyond its current 10-12% of the global market and increase supply-chain resilience while minimizing potential efficiency losses from over-reliance on local manufacturing.
Militaries around the world are preparing for the next generation of warfare—one in which human-machine teams are integral to operations. Faster decisionmaking, remote sensing, and coordinating across domains and battlespaces will likely be the keys to victory in future conflicts. To realize these advantages, militaries are investing in human-machine teaming (HMT), a class of technologies that aim to marry human judgment with the data-processing and response capabilities of modern computing.
HMT includes a range of technologies—from autonomous drone swarms conducting reconnaissance to pairing a soldier with an unmanned ground vehicle to clear a building—that make it difficult to define, posing a challenge for policymakers. Early HMT systems are already widely in use—in airline autopiloting systems, for example—but more sophisticated approaches are actively under development. Policymakers overseeing military modernization efforts will increasingly be asked difficult questions about when to deploy HMT and how to effectively monitor its actions.
The artificial intelligence technology at the heart of HMT is rapidly advancing, but two key technologies needed to deploy HMT responsibly—namely, methods to properly test and evaluate these systems, and to generate explanations for how AI “teammates” make decisions—are far less mature. The gap between the potential performance of HMT applications on the one hand, and the need for greater testing and explainability on the other, will be critical for policymakers to address as HMT systems are more widely developed and deployed.
If we are ever to address effectively the harms and risks of digital technology, we first need the right language to describe the systems that collect, analyze, share, and store huge amounts of data about us as consumers, patients, and citizens—often with deleterious effects. Misinformation, attention extraction, discriminatory algorithmic profiling, and cybercrime: These digital harms all emerge from the data ecosystem in which we live, but not in ways we can fully see or explain.
Concepts and phrases inspired by ecology—like “information environment” and “social media ecosystem”—are beginning to reframe data and digital harms as parts of a greater whole and are inspiring a fuller understanding of how digital harms function. From the lifecycle of plastics, people have learned to form a holistic picture of consequences on a collective scale, and the concept of the “data lifecycle” can energize new ways of thinking about digital harms. The “data lifecycle” offers a way to break the complicated life of data into its component parts and to think of digital harms like we do externalities, such as air pollution, biodiversity loss, and chemical runoff. With a fresh metaphor, we can better understand the social costs imposed by goods and services in the data economy.
The sum of all data activities on planet Earth might be called its “data metabolism,” which in 2020 created or replicated 64.2 zettabytes of data (one zettabyte is a trillion gigabytes). Though the volume of data produced and consumed around the world is awe-inspiring, numbers offer only a limited understanding of the system. A qualitative representation of the system’s interrelated parts is also needed.
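The unit arithmetic behind that figure is worth making explicit, since zettabytes are unfamiliar. A quick check, using the standard decimal definitions (1 ZB = 10^21 bytes, 1 GB = 10^9 bytes):

```python
# Converting the 2020 "data metabolism" figure into familiar units.
ZETTABYTE = 10**21  # bytes
GIGABYTE = 10**9    # bytes

data_2020_zb = 64.2
data_2020_gb = data_2020_zb * ZETTABYTE / GIGABYTE

print(f"{data_2020_gb:.3e} gigabytes")  # 6.420e+13, i.e., about 64 trillion GB
```

That is roughly 64 trillion gigabytes created or replicated in a single year, which underscores why a purely quantitative view of the system quickly loses explanatory power.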