With the coronavirus pandemic still spreading, technology has never been more essential. Collaboration apps and online shopping have kept the economy afloat, while video chats and messaging platforms have kept us connected to friends and loved ones. And with many lockdowns still in place, technical breakthroughs in both testing and treating COVID-19 will be key to reopening society safely.
Yet our dependence on technology is not without risks. As modern technology has grown in scale and reach in recent years, it hasn’t just upended industries. It’s disrupted governance too. Key policy questions—Who gets to speak? Who has access to money? How do we keep people safe?—have been pulled upstream from our political institutions to a sector that was never designed to manage them.
To address that shift, Brookings is launching a new publication, Brookings TechStream. By putting technologists and policymakers in conversation, the site aims to increase the policy fluency of the former and the technical knowledge of the latter. And by discussing the societal and political implications of tomorrow’s technologies today, it seeks to head off the risks of new products and protocols before they materialize.
Consider contact-tracing applications. At their best, such apps promise to quickly identify and locate individuals who have been exposed to the virus, and thereby to contain new outbreaks swiftly. If the technology worked well, it would be central to ending lockdowns. Yet as Ashkan Soltani, Ryan Calo, and Carl Bergstrom argue, contact-tracing applications pose significant risks. Rather than waiting until the technology is deployed at scale, we need to have a robust debate about its societal implications now—and the debate needs to include experts from all relevant sectors and policy domains.
To that end, today’s piece by Soltani, Calo, and Bergstrom—a technologist, lawyer, and epidemiologist, respectively—is also instructive. Brookings TechStream aims to be an open platform for researchers, technologists, and policy analysts. If you have a technical background and a penchant for thinking through the societal implications of technology, we want to hear from you.
Over time, Brookings TechStream will cover a wide range of issues at the intersection of technology and public policy. But with the pandemic continuing to spread and the election fast approaching, we are especially interested in the implications of new technologies for disinformation, surveillance and privacy, and public health.
For all its virtues, modern technology continues to pose profound challenges. Yet neither the tech sector nor the policy community can resolve them alone. We need the expertise, intuition, and experience of each. The goal of Brookings TechStream is to offer a platform where conversations between them can happen. Welcome.
Chris Meserole is the deputy director of the Artificial Intelligence and Emerging Technology Initiative and a fellow in the Foreign Policy program at The Brookings Institution.
Ben Nimmo, director of investigations at Graphika, lays out four key strategies to help tech platforms and individuals defend against online disinformation campaigns.
The Facebook page that called for Egyptians to take to the streets on January 25, 2011—a day that would prove pivotal to the Arab Spring uprisings—was almost relegated to history. Just a few short months prior, Facebook had taken the page down, citing its policy requiring people to use their real name on the platform.
The Arab Spring marked the high point of the dream that huge social media platforms would bring the world together and place the power of the media in the hands of ordinary people. The uprising seemed a confirmation of the lofty free-speech rhetoric with which Facebook, YouTube, and Twitter all launched within the span of a few years in the aughts. But it’s been clear for a long time that the actions of the large U.S.-based internet platforms do not match that rhetoric.
Amid the COVID-19 pandemic, the censorship that has been a feature of the major platforms since their inception has only increased, now in more automated form. And a funny thing has happened: The platforms are now being praised for finally being able and willing to carry out “content moderation”—a euphemism for what is actually private censorship—at scale. This shift shouldn’t be seen as new, but as merely the latest turn in how the companies respond to pressure from advertisers, governments, and civil society.
If public-health authorities’ worst predictions come true, COVID-19 may never disappear. That means the world will have to live with the virus and develop effective treatments and containment measures.
Mobile contact-tracing technology has emerged as one such measure to track population movements and alert individuals when they come into contact with an infected person. But such technology faces enormous obstacles. In order for such tools to be effective, some 60 percent of the population needs to opt in and use them. With the novel coronavirus continuing to spread in the United States and major American universities and technology companies actively developing digital contact-tracing tools, understanding whether the American people would be willing to use such technology to stem the outbreak has never been more important.
But Americans continue to be deeply skeptical of such technology. In a nationally representative study of 2,000 Americans conducted between April 30 and May 1, 2020, we found that just over 30 percent of Americans indicated they would download and use a mobile contact-tracing app, raising questions about whether such technology will be adopted widely enough to be effective. In a bit of good news for developers, support among Americans for digital contact tracing tends to increase with stronger privacy protections.
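The gap between 30 percent adoption and the 60 percent threshold matters more than it may appear. Under the common simplifying assumption that an exposure is logged only when both people in a contact have the app installed, the share of contacts the system can detect scales with the square of the adoption rate. A minimal sketch of that arithmetic (illustrative only, not from the survey):

```python
def detected_contact_share(adoption_rate: float) -> float:
    """Fraction of random pairwise contacts visible to the app,
    assuming both parties must have it installed."""
    return adoption_rate ** 2

# At 30% adoption, only about 9% of contacts are detected;
# even at the 60% target, the figure is roughly 36%.
for rate in (0.30, 0.60):
    print(f"{rate:.0%} adoption -> ~{detected_contact_share(rate):.0%} of contacts detected")
```

The quadratic falloff is why modest shortfalls in adoption translate into steep losses in effectiveness.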
Amid the calamity of the COVID-19 pandemic, national leaders from Brazil to the United States are tweeting misleading medical advice. Social media influencers are peddling conspiracy theories about what causes the disease. And around the internet, fraudsters are hawking miracle cures. According to one preliminary study, recent months have seen as much misinformation as reliable material on social media. In some cases, misinformation about fake cures and treatments has proven life-threatening and even fatal.
Amid growing concern over what the WHO director-general called the “infodemic” accompanying the pandemic, social media platforms are proactively deleting conspiracy theories and promoting links to trusted agencies like the Centers for Disease Control in the United States. This proactive attempt to curtail misinformation has happened more quickly than in previous cases of rapid spread of viral health misinformation, such as material casting doubt on the efficacy of vaccines. Now that companies have shown they can act quickly and decisively to curb certain content, it is worth considering whether the near-blanket liability protections granted to social media companies for content posted on their platforms should apply to questions of public health.
This may sound appealing, but the history of eliminating egregious medical advertising suggests that eliminating liability protections will be far from a panacea. When the United States cracked down in the 1930s on deceptive advertising for drugs, it did so with a canny piece of legislation that ought to provide some inspiration for regulators today. Chipping away at liability protections has emerged as the favorite tool of Washington to hold big platforms to account, but it is a blunt instrument that legislators should be wary of deploying.
With his unique blend of intelligence history and engagement with the contemporary cybersecurity community, Thomas Rid, a professor of strategic studies at Johns Hopkins University’s School of Advanced International Studies, has become one of the foremost chroniclers of online disinformation and how digital information operations are informed by the historical work of intelligence agencies.
In a pair of recent podcasts, Rid joined members of the Brookings community to discuss his new book, Active Measures: The Secret History of Disinformation and Political Warfare. First up, Rid chats with Harvard Law School Professor Jack L. Goldsmith about the early history of disinformation through the 1980s:
When American propagandists beamed broadcasts beyond the Iron Curtain during the height of the Cold War, the message was in part exactly what you’d expect. “Keep up your hope,” an announcer said in Czech in one such broadcast. “For the Communists will be driven from our homeland and freedom will yet prevail.”
But the program also included lighter material: Listeners were treated to music banned across much of the Soviet Union, such as jazz or local folk songs, followed by a news broadcast.
Funded by the U.S. government, Radio Free Europe and its sister station Radio Liberty used a tactic called pre-propaganda, which refers to propaganda not directly related to the political message of the propagandist. It lays the groundwork for more overt propaganda through audience-building and myth-making—in this case, using jazz as an on-ramp to sell the American way of life.
More recently, the tactic has been adopted by Russia in its efforts to meddle in American politics. By pushing stories from a diverse body of outlets and posting material on different platforms, Kremlin propagandists adapted the concept of pre-propaganda in their efforts to interfere in the 2016 election, according to a recent study by researchers at the Center for Social Media and Politics at New York University, which harvests social media data to study political attitudes and behavior online. The study’s findings show how states are adapting classic propaganda tactics to social media, and why policymakers must consider how information spreads across platforms to protect voters from these covert campaigns.
In the rush to contain COVID-19, the world has plunged head-first into contact-tracing apps. Hoping that sufficiently precise digital tools might not only stop the spread of the disease but also hasten a return to work, governments around the world are rolling out applications to enable digital contact tracing. But the decision to deploy a digital contact-tracing system is as much a political decision as it is a technological intervention, and the public health impact of these interventions will be deeply shaped by political considerations.
Contact tracing isn’t, traditionally, a technology-heavy process. It involves testing patients and, for those who test positive, interviewing them about their whereabouts and human contacts during the known infectious period. Effective contact tracing requires a clear understanding of how a virus transmits and for how long, as well as a full account of the people with whom an infected person has been in contact.
Unfortunately, the science of how COVID-19 transmits remains unsettled, as is often the case in emergent epidemics. As a result, contact tracers are left casting a wide net. Smartphone apps that track a person’s movements and the people with whom they cross paths can potentially provide a more complete record of the places a person has been while contagious. National governments, leading universities, and major technology companies are now rolling out ways to collect that kind of information and share it with public health authorities.
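The details vary by design, but decentralized systems such as the Apple–Google exposure notification framework follow a broadly similar pattern: phones broadcast frequently rotating random tokens over Bluetooth, record the tokens they hear, and later check those records against tokens published by users who test positive. A minimal sketch of that matching step (all class and method names here are illustrative, not any real API):

```python
import secrets

def new_token() -> bytes:
    """A rotating random identifier broadcast over Bluetooth."""
    return secrets.token_bytes(16)

class Phone:
    def __init__(self):
        self.my_tokens = []        # tokens this phone has broadcast
        self.heard_tokens = set()  # tokens heard from nearby phones

    def broadcast(self) -> bytes:
        token = new_token()
        self.my_tokens.append(token)
        return token

    def hear(self, token: bytes) -> None:
        self.heard_tokens.add(token)

    def check_exposure(self, infected_tokens) -> bool:
        """Match locally stored contacts against published tokens."""
        return bool(self.heard_tokens & set(infected_tokens))

# Two phones cross paths:
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast())

# Alice later tests positive and uploads her tokens; Bob's phone
# finds the match locally, without revealing his own movements.
print(bob.check_exposure(alice.my_tokens))  # True
```

Because the matching happens on each user's device, designs like this limit what central authorities can learn, though they still depend on widespread adoption and honest reporting to be useful.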
Amid proclamations that greater adoption of artificial intelligence is going to cause a “robot apocalypse” in the workforce, is it possible to cut through the noise and figure out exactly which workers and industries are most exposed to AI disruption? In a recent report on artificial intelligence’s impact on the future of work, we tried to do exactly that. Using a novel technique developed by Stanford University Ph.D. candidate Michael Webb, which draws on AI-related patents to determine what types of jobs and tasks AI could affect, we analyzed the overlap between AI patents and Labor Department job descriptions to generate “exposure scores” for jobs in 22 major occupational groups.
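The report's actual methodology is more involved, but the core idea, quantifying the overlap between the language of AI patents and the language of job descriptions, can be sketched with a simple bag-of-words cosine similarity. The text snippets and the "exposure score" labels below are hypothetical stand-ins, not data from the study:

```python
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    """Turn text into a bag-of-words count vector."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Overlap between two word-count vectors, from 0 to 1."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical snippets standing in for patent text and job descriptions.
ai_patent_text = "classify images detect patterns predict outcomes from data"
jobs = {
    "software developer": "write code analyze data predict system outcomes",
    "landscaper": "plant trees mow lawns maintain gardens",
}

patent_vec = vectorize(ai_patent_text)
for job, description in jobs.items():
    score = cosine_similarity(patent_vec, vectorize(description))
    print(f"{job}: exposure score {score:.2f}")
```

Jobs whose descriptions share vocabulary with AI patent claims ("predict," "data," "outcomes") score higher than those that share none, which is the intuition behind the white-collar tilt in the findings.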
The findings are surprising. Contrary to predictions that AI is going to devastate blue-collar workers, it is the professional class that is more likely to see AI change their work. Indeed, AI has a distinct white-collar bent to it: engineering, science, and computer technology occupations are among the most at risk, and business occupations rank fifth on the list.
In the interactive graphic below, mouse over jobs within occupational groups to see how exposed they are to innovations in AI:
As artificial intelligence is increasingly adopted at American workplaces, its impact won’t be equally spread across the United States. Instead, its geography will be uneven, with greater impacts on large metropolitan areas.
Using a novel technique developed by Stanford University Ph.D. candidate Michael Webb that pairs AI-related patents with Labor Department job descriptions, we sought to understand in a recent report what occupational areas the technology is poised to affect and where in the country those effects will be felt most dramatically.
As a young, complex technology, artificial intelligence seems ready to disrupt the large metro areas with a high concentration of workers in high-tech and white-collar industries. That includes places such as San Jose, Calif., Seattle, and Salt Lake City, as well as the Boston-Washington, D.C. corridor.
In the interactive graphic below, mouse over metropolitan areas to see how different U.S. cities are affected.
Chris Meserole is the deputy director of the Brookings Artificial Intelligence and Emerging Technology Initiative and a fellow in the Foreign Policy program. Here, he speaks with Brookings Cafeteria podcast host Fred Dews about TechStream, a new platform that puts technologists, policymakers, civil society, and academic researchers in conversation by looking at the downstream policy and societal implications of emerging tech.