Tomorrow’s tech policy conversations today


When Russian forces invaded Ukraine earlier this year, many observers believed that the conflict would be marked by overwhelming use of the Kremlin’s cyberweapons. Possessing a technically sophisticated cadre of hackers and toolkits to attack digital infrastructure, the Kremlin, according to this line of thinking, would deploy these weapons in an effort to cripple the Ukrainian government and deliver a decisive advantage on the battlefield. The actual experience of cyberwar in Ukraine has been far more mixed: While Russia has used its cyber capabilities, these digital forays have been far less successful or aggressive than many observers had predicted at the outset of the war.
So why has Russia failed to win on the digital battlefield? In recent weeks, Ukrainian and U.S. government officials and the Western tech companies that have rushed to support Ukraine’s digital defenses have argued that Russia’s failure is due in no small part to the sophistication of Kyiv’s defenses. But evaluating that claim is immensely difficult and illustrates a fundamental problem for the current state of cybersecurity research and policy. As it stands, there is no playbook for measuring the effectiveness of cyber defense efforts or conveying to the public when they are working. And this makes it difficult to draw conclusions from the war in Ukraine to inform our future defensive posture. Assessing the effectiveness of cyber defenses is a crucially important part of developing cybersecurity policy and making decisions about where and how to invest in computer networks and infrastructure. But in the absence of good defensive metrics, calibrating these investments remains difficult.


One week ago, powerful explosions ruptured a pair of underwater natural gas pipelines—Nord Stream 1 and 2—that run between Russia and Germany. The pipelines represent an important source of natural gas to Germany, and against the background of Russia’s invasion of Ukraine, Nord Stream 1 and 2 provide a key tool for the Kremlin to exert leverage over Europe. While exactly who is responsible for the attack, which European officials say was a deliberate act of sabotage, remains unclear, experts broadly agree that Russia is the key suspect.
As is typical following an event like this, conspiracy theories about who was responsible quickly proliferated online, with the Kremlin promoting a familiar trope: that the United States was responsible for a nefarious, clandestine plot. In official statements, state-backed media, and tweets, Kremlin messengers promoted the idea that the United States carried out the attack.
To track the spread of such narratives and understand the role of state propaganda, researchers typically examine posts on Twitter. But this provides only a partial view. In this case, as elsewhere, popular political podcasts served as an important, understudied means through which Kremlin narratives reach American audiences. Following the explosions, 12 popular political podcasts devoted 18 episodes to the theory. Less than one quarter of these episodes refuted the baseless theory, and nearly 40% fully blamed the United States.
Political podcasts in the United States are instrumental in shaping public opinion on a wide range of consequential subjects and frame the contours of contentious, often polarized debates. Until recently, research on that space has been limited. Using a new Brookings dashboard and database, we are able to more systematically study how popular political podcasts shape the information environment. By spreading the idea that the United States was in fact responsible for the explosions, several leading U.S. podcasters have advanced the Kremlin’s preferred narrative while staying under the radar of researchers—until now.


In late July, the Russian government appeared to have turned its data localization laws against an unlikely target: the Jewish Agency for Israel. Concerned that the decades-old nonprofit, which helps Jews emigrate to Israel from around the world, is accelerating the brain-drain of educated professionals from Russia in the aftermath of its disastrous invasion of Ukraine, Russian authorities accused the group of violating privacy laws in its storage of data pertaining to Russian citizens.
The move against the Jewish Agency for Israel is the latest example of the Kremlin using laws governing online life in Russia to cement power offline, and its deployment against a group with little to no meaningful technology operation illustrates how those laws are being weaponized against groups viewed as a threat to the governing regime. It is hardly a new phenomenon, but its growing frequency underscores that Moscow’s use of online laws to rein in civil society shows no sign of relenting, and, if anything, is only growing more creative.


Few things are as vital to democracy as the free flow of information. If an enlightened citizenry is essential for democracy, as Thomas Jefferson suggested, then citizens need a way to stay informed. For most of the modern era, that role has been played by the press—and especially the editors and producers who exercise control over what news to publish and air.
Yet as the flow of information has changed, the distribution and consumption of news has increasingly shifted away from traditional media and toward social media and digital platforms, with over a quarter of Americans now getting news from YouTube alone and more than half from social media. Whereas editors once decided which stories should receive the broadest reach, today recommender systems determine what content users encounter on online platforms—and what information enjoys mass distribution. As a result, the recommender systems underlying these platforms—and the recommendation algorithms and trained models they encompass—have acquired newfound importance. If accurate and reliable information is the lifeblood of democracy, recommender systems increasingly serve as its heart.
As recommender systems have grown to occupy a central role in society, a growing body of scholarship has documented potential links between these systems and a range of harms—from the spread of hate speech, to foreign propaganda, to political extremism. Nonetheless, the models themselves remain poorly understood, among both the public and the policy communities tasked with regulating and overseeing them. Given both their outsized importance and the need for informed oversight, this article aims to demystify recommender systems by walking through how they have evolved and how modern recommendation algorithms and models work. The goal is to offer researchers and policymakers a baseline from which they can ultimately make informed decisions about how to oversee and govern them.
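To make the mechanics concrete, the following is a minimal item-based collaborative filtering sketch—one of the simplest techniques in the family the article describes, and a deliberately simplified stand-in for the far larger learned models that production platforms actually deploy. The toy interaction matrix, the `recommend` helper, and all variable names are illustrative assumptions, not any platform's real system.

```python
import numpy as np

# Toy interaction matrix: rows are users, columns are items (e.g., videos).
# 1 = the user engaged with the item, 0 = no recorded interaction.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])

# Item-item cosine similarity: items engaged by the same users score high.
norms = np.linalg.norm(interactions, axis=0)
similarity = (interactions.T @ interactions) / np.outer(norms, norms)

def recommend(user_idx, k=2):
    """Rank unseen items by their similarity to the user's past items."""
    seen = interactions[user_idx]
    scores = similarity @ seen      # aggregate similarity to engaged items
    scores[seen == 1] = -np.inf     # never re-recommend items already seen
    return np.argsort(scores)[::-1][:k]

print(recommend(0))  # user 0 engaged with items 0 and 1; item 2 ranks first
```

Even this toy version exhibits the dynamic the scholarship worries about: recommendations are driven entirely by engagement patterns, so whatever correlates with engagement—including divisive or extreme content—gets amplified unless the system is explicitly designed otherwise.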

Amid the lingering effects of the COVID-19 pandemic, rising inflation, the war in Ukraine, geopolitical tensions in East Asia, and more frequent extreme weather events, manufacturing supply chains continue to struggle to deliver goods when and where they are needed. These disruptions have affected all aspects of end-to-end supply chains, producing demand shifts, reductions in supply and manufacturing capacity, and coordination failures. Prior to 2020, most supply chain designs lacked the resilience needed to cope with such disruptions; in response, companies have tried to diversify their sourcing and increase inventories and manufacturing capacity, all of which have increased costs.
Now more than ever, companies need a new paradigm for cost-competitive resilience if they are to redesign supply chains while maintaining their competitive advantages. Firms are increasingly turning toward better contingency planning, improved organizational readiness and worker flexibility, automation, and more collaborative relationships with suppliers to improve supply chain resilience. Other strategies include moving from vertically specialized to vertically integrated firm structures and trading lean supply chain designs for more decentralized network designs. By redesigning products and supply chains for greater agility, firms are creating greater opportunities for postponement and reducing the need for highly accurate demand forecasts.
In support of these strategies toward cost-competitive resilience, a potent first step is to improve end-to-end supply chain visibility, which provides companies with real-time data and a holistic understanding of their partners across the end-to-end supply chain, starting upstream at the procurement of materials or semifinished goods and ending downstream when products reach the end customer. By knowing the real-time location, production rates, and delivery schedules (among other variables) of raw materials, components, and final products across the global supply chain—whether in manufacturing plants, port terminals, warehouses, or in transit—it becomes easier and quicker to identify disruptions, mitigate their impact, and improve productivity. Improving resiliency requires the public and private sectors to establish visibility across the logistics ecosystem. To this end, the U.S. government should use its convening power and—in partnership with supply chain stakeholders—promote the development of freight data exchanges that enable interoperability, while fostering a competitive market for innovative software solutions.


In early February 2021, Sen. Ted Cruz and his co-host Michael Knowles were recording a live episode of the podcast Verdict with Ted Cruz when the Texas Republican coined a colorful metaphor to describe Beto O’Rourke’s base. In Cruz’s telling, the Texas Democrat’s core support is made up of “reporters” acting like “groupies at a Rolling Stones concert throwing their underwear” at him. “I mean if they wore underwear,” Cruz added. With a wry expression, he paused. “Too edgy?” he asked. Knowles laughed, dismissing the concern outright: “It’s a podcast—you can say whatever you want.”
Knowles’ assessment of the podcast ecosystem as a space where “you can say whatever you want” is—for the most part—accurate, both with respect to government regulation and platform guidelines. Even as tech companies raced to limit the spread of election-related misinformation across social media platforms in late 2020, prominent political podcasters played a central role in disseminating election fraud narratives in the lead up to January 6, as we have documented. Podcasts also offered a prime avenue for the spread of pandemic-related misinformation, particularly regarding unproven treatments and vaccines. Despite the real-world harms caused by this type of misinformation and the medium’s growing reach and influence, to date little research has explored the role of podcasting in shaping political conversations, due to a myriad of technical and other challenges.
To help policymakers, researchers, and the tech community better understand podcasting’s role in the information ecosystem, we have developed a dashboard that aggregates political podcast episode data into a single, easy-to-use format and provides an overarching look at the medium in near real time. This data set represents the first publicly available, centralized collection of podcast episode data describing the political podcasting industry in a ready-to-use, downloadable format. We focus on political podcasters, due to both their prominence in the broader media environment and their ability to rapidly shape public opinion and the contours of political debate. We hope that the release of this dashboard and data set will facilitate better monitoring of a medium that has until recently flown under the radar, despite its growing popularity and influence in political conversations.


Amid heightened geopolitical tensions and growing challenges posed by disruptive innovation, European policymakers are seeking ways to strengthen the continent’s strategic autonomy—particularly with respect to technology. A key part of this effort is the EU Chips Act, which provides billions in financial support to set up factories for advanced chip production (so-called “fabs”) and step up semiconductor research in the EU. Just as U.S. policymakers are attempting to strengthen the American semiconductor industry via the CHIPS and Science Act signed into law on Tuesday, lawmakers in Europe are attempting to build a more independent technology industry. First put forward in April by the European Commission, the EU Chips Act aims to address semiconductor supply shortages and years of decline in semiconductor investment in the EU, boosting Europe’s share of global chip production capacity to 20% from its current level of about 10%. The act is expected to be adopted in the first half of 2023 and has already had an impact on major semiconductor companies’ investment decisions.
The EU Chips Act represents a leading example of initiatives to improve Europe’s strategic autonomy on a range of technologies. The act joins up political, industrial, technological, and financial support in a key technological area; presents a clear plan for industrial and technological capability- and capacity-building; and takes a realistic approach to partnering with like-minded countries to enhance strategic control of the semiconductor industrial ecosystem.


At a recent meeting of the World Internet Conference, attendees were treated to a preview of China’s vision of the internet. In a trailer showcased at the meeting, people walk through a futuristic city with super-connected streets and underground spaces, robots and other artificial intelligence tools provide services, and everyone is connected via 5G networks.
This is the future of the web that China is trying to sell the world, and the World Internet Conference, which took place on July 12 in Beijing, is the latest forum in which it is marketing that future. Now, China plans to turn this gathering into what is being called “the World Internet Conference Organization,” which Beijing hopes will displace existing multistakeholder bodies for internet governance and advance its vision of authoritarian information controls in the process. While it is far from certain that Beijing can turn this new body into an effective vehicle for its internet governance agenda, the move should serve as a wake-up call for defenders of the open internet to modernize internet governance.


Around the world, policymakers are grappling with how to address the spread of harmful content and abuse online. From misinformation to child sexual abuse material (CSAM), harassment, and the promotion of self-harm, the range of issues on policymakers’ plates is diverse. All of these issues have real consequences in the lives of their constituents—and lack easy remedies.
Recent rulemaking and legislative initiatives, however, have seen a shift in how policymakers are holding social media companies accountable for the well-being of their users. From the United States to Europe, lawmakers are increasingly embracing the principles of “safety by design,” which aim to place accountability, user empowerment, and transparency at the heart of rules for online life.
Safety by design offers a more proactive approach for policymakers to address ever-evolving online safety issues, even as these principles raise a new set of challenges. By embracing safety by design, policymakers can provide users with greater choice and understanding of how their online experiences are structured, granting users greater autonomy in mitigating online harms. But safety by design approaches also require careful balancing to preserve civil liberties and to ensure that they provide protections for all online users, not just the children whose safety concerns have come to dominate debates about how to regulate online life. Similarly, such rules need to be crafted in a way that provides consistent guidance for industry while offering a framework that is broad enough to be applied to future online social spaces—from live chat and video applications to the metaverse and beyond.


When the science fiction writer Neal Stephenson first coined the term “metaverse” in 1992, the world of virtual reality-enabled computing that he imagined was still a long way off. But with virtual reality—and the computing infrastructure that enables it—making significant improvements in recent years, the interactive and embodied internet that Stephenson imagined is now closer to reality. Today, computer science researchers conceive of the metaverse as a “network of interconnected virtual worlds” using three-dimensional platforms where humans interact with digital content and with each other, forming an “ecosystem where digital and physical worlds collide.” By relying on a combination of augmented, mixed, and virtual reality to move from the 2D internet to a 3D shared space, the metaverse aims to create an internet that is interoperable and synchronous.
The metaverse promises to connect devices to humans and humans to each other in ways that threaten to transform economic and social relations. As a result, it is critical that policymakers and technology companies collaborate to write the rules of the road for the metaverse. The potentially disruptive qualities of the metaverse are illustrative of how the technologies of the Fourth Industrial Revolution (4IR) will likely transform how humans work, entertain, conduct business, and socialize. The scale of this disruption means that policymakers need to adopt a proactive approach in thinking about how these technologies are likely to change our society rather than attempting to address harms once they are widespread. Especially given the recent drawdowns in the technology industry, the impending buildout of the metaverse also offers a rare opportunity to design a system that is more equitable from the start—in contrast to past paradigms like Web 2.0.