Tomorrow’s tech policy conversations today

Former Trump strategist Steve Bannon speaks with press while leaving a federal courthouse in Washington, D.C., on Nov. 15, 2021. (Bryan Olin Dozier/NurPhoto)

On the morning of Jan. 6, 2021, Steve Bannon encouraged the audience of his podcast not to waver in their faith. “We’re coming in right over target,” President Donald Trump’s former chief strategist intoned. “This is the point of attack we always wanted…today is the day we can affirm the massive landslide on November 3.”

In the aftermath of the ensuing attack on the Capitol, Bannon’s podcast stands out for its prescient blend of violent rhetoric and blatant disinformation. In the run-up to Jan. 6, Bannon and his podcast guests extensively promoted the false belief that Trump had rightfully and overwhelmingly won the November election, only to have it stolen from him by fraud. In doing so, Bannon was one of several prominent podcast hosts to champion the misleading electoral narratives known collectively as the “Big Lie.” While digital platforms like Facebook and Twitter have received significant scrutiny for their role in permitting the spread of those narratives, far less attention has been paid to podcasting. By virtue of both its intimacy and its scale, podcasting can serve as a powerful vector for misinformation, yet there has been comparatively little analysis to date of the role the podcasting ecosystem played in the lead-up to the Jan. 6 attack.

To better understand that role, we compiled a dataset of the most popular political podcast series in the United States in November 2020. More specifically, we examined the “Top 100” list for that month from Apple Podcasts, the most widely used podcast app in the United States at the time, and then downloaded episodes for 20 of the 23 series in the Top 100 that we identified as primarily providing political commentary (a minimal sketch of this tabulation follows the list below). We found that:

  • Between Aug. 20, 2020, when the then-candidate Joe Biden accepted the Democratic nomination, and the storming of the Capitol on Jan. 6, 2021, over 25% of all episodes in our dataset (393 of 1,490) endorsed misleading electoral narratives
  • The rate at which popular podcasts endorsed misleading narratives rose dramatically after the election, with more than 50% of all episodes (344 of 666) between Nov. 3 and Jan. 6 endorsing unsubstantiated allegations of voter fraud or related claims
  • Popular podcasters on the right, who were largely responsible for the proliferation of electoral misinformation during this period, are more ideologically homogeneous in their partisan leanings than popular podcasters on the left
  • Episodes that endorsed false or misleading electoral narratives had broad cross-platform reach, with total audiences on Twitter and YouTube in the tens of millions
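To make those shares concrete, here is a minimal sketch of how such a tabulation could be computed from a per-episode table. The file and column names (episodes.csv, release_date, endorses_misleading) are hypothetical stand-ins, not the study’s actual pipeline; the comments note the figures reported above.

```python
import pandas as pd

# Load a per-episode table. File and column names are hypothetical
# placeholders for the study's dataset, not its actual code.
episodes = pd.read_csv("episodes.csv", parse_dates=["release_date"])

def share_endorsing(df, start, end):
    """Fraction (and count) of episodes in [start, end] flagged as
    endorsing misleading electoral narratives."""
    window = df[df["release_date"].between(start, end)]
    return window["endorses_misleading"].mean(), len(window)

# Full window: Biden's nomination acceptance through the Capitol attack.
rate, n = share_endorsing(episodes, "2020-08-20", "2021-01-06")
print(f"Aug. 20 to Jan. 6: {rate:.1%} of {n} episodes")  # reported: 393 of 1,490

# Post-election window: Election Day through Jan. 6.
rate, n = share_endorsing(episodes, "2020-11-03", "2021-01-06")
print(f"Nov. 3 to Jan. 6: {rate:.1%} of {n} episodes")   # reported: 344 of 666
```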
Read More

A U.S. Army soldier aims his rifle in front of a bullet-riddled map of Afghanistan painted on a wall of an abandoned Canadian-built school in the Zharay district of Kandahar province in southern Afghanistan on June 9, 2012. (REUTERS/Shamil Zhumatov)

As the Taliban moved in on the Afghan capital of Kabul with unexpected speed earlier this year, foreign aid workers, diplomats, and military personnel attempted to destroy reams of sensitive data that they’d collected over the decades-long U.S. occupation. The information included photos of smiling Afghans shaking hands with U.S. colleagues and vast stores of biometric data, which could be used to precisely identify individuals. Highly detailed biometric databases built with U.S. funding and assistance had been used to pay police and military and, in the hands of the Taliban, threatened to become a potent weapon. Afghans who’d worked with foreign governments rushed to scrub their digital identities and hide evidence of their online actions, afraid of the Taliban using cheerful social media posts against them.

In just a few short days, the Taliban’s advance transformed these vast stores of data collected in the name of development and security from valuable asset to deadly liability. It’s a tragic tale, and from the perspective of experts on data security and privacy, it’s even more tragic because it was almost entirely predictable. For years, specialists have been sounding the alarm about the dangers of collecting and failing to secure data on the world’s most vulnerable. Despite an ever-growing list of ostensibly benevolent data-collection efforts gone wrong, like the recent revelation that the United Nations’ refugee agency shared Rohingya refugees’ data, without their consent, with a government that has repeatedly tried to kill them, data advocates’ concerns are too often brushed off as paranoia or tiresome bureaucracy.

Now, thanks to events in Afghanistan, there is greater attention than ever on the drawbacks of data collection in humanitarian contexts. Much of the reporting on data dangers since the Taliban takeover has understandably focused on the potential misuses of the aforementioned stores of biometric data, which can identify people with very high reliability based on features like their fingerprints or their eyes. But amid the focus on biometric data, far too little attention has been paid to another data-driven danger to vulnerable people: location.
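As one small illustration of how easily location leaks from routine digital traces, the sketch below reads the GPS coordinates that many phones embed in photos by default. It is a generic example using modern versions of the Pillow imaging library, not drawn from the reporting above, and the file name is hypothetical.

```python
from PIL import Image  # Pillow imaging library

def photo_gps(path):
    """Return the (latitude, longitude) a camera embedded in a photo's
    EXIF metadata, or None. Many phones record this by default, so a
    casually shared image can reveal exactly where it was taken."""
    exif = Image.open(path).getexif()
    gps = exif.get_ifd(0x8825)  # 0x8825 = GPSInfo IFD
    if not gps:
        return None

    def to_degrees(dms, ref):
        # EXIF stores degrees/minutes/seconds plus an N/S or E/W reference.
        deg = dms[0] + dms[1] / 60 + dms[2] / 3600
        return -float(deg) if ref in ("S", "W") else float(deg)

    return (to_degrees(gps[2], gps[1]),  # GPSLatitude, GPSLatitudeRef
            to_degrees(gps[4], gps[3]))  # GPSLongitude, GPSLongitudeRef

# Hypothetical file name, for illustration:
print(photo_gps("shared_photo.jpg"))
```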

Read More

A pile of destroyed desktops and screens is seen as part of a performance during the International Telecoms Fair in Madrid on Nov. 6, 2007. (REUTERS/Sergio Perez)

Just as Bill Murray wakes up each morning in Groundhog Day to the tune of Sonny and Cher’s “I Got You Babe,” executives around the world today begin their days with a familiar piece of news: Their company has been breached. It takes Bill Murray’s weatherman character a few days to realize what’s happening to him and even longer to discover that he can change how he behaves. In cybersecurity, that realization hasn’t happened; instead, we are living the same day over and over again, hoping that the same behavior will lead to a different tomorrow—one free of massive breaches.

Changing this cycle requires first understanding the problem of widespread cyber vulnerabilities, and the federal government is beginning to take steps to do so—but not fast enough. In May, President Joe Biden signed an executive order that tasked the secretary of homeland security with standing up a Cyber Safety Review Board to investigate major incidents affecting government computing systems and disseminate the lessons learned from them. More than six months later, the board exists only on paper, and cyber Groundhog Day marches forward, doomed to repeat the mistakes of the past. Amid widespread computer vulnerabilities, getting this board up and running should be an urgent priority, one with the potential to seriously improve the disastrous state of cybersecurity.

Read More

Messaging and social-media applications are displayed on a smartphone on April 24, 2020. (Nicolas Economou/NurPhoto)

Social media platforms face an all but impossible challenge: generating standards for acceptable speech that transcend borders and apply universally. From nudity and sexual content to hate speech and violent material, digital platforms have tried to write rules and build content-moderation regimes that apply around the world. That these regimes have struggled to meet their goals, however, should come as no surprise: The global speech standards authored by online platforms are not the first attempt by tech innovators to write global rules for speech. Unfortunately, the history of such attempts does not bode well for contemporary efforts to build global content-moderation regimes. From telegraphic codes to the censorship of prurient material, the promise of globally consistent standards has long been plagued by important—and to some extent inevitable—linguistic and contextual differences.

Read More

A screen at a restaurant in Beijing, China, shows Chinese President Xi Jinping attending a virtual meeting with U.S. President Joe Biden via video link on Nov. 16, 2021. (REUTERS/Tingshu Wang/File Photo)

As it convened its much-anticipated Summit for Democracy this week, the Biden administration was set to launch a new Alliance for the Future of the Internet, a bid to rally the world’s democracies around a set of principles that support an open web. The launch of that alliance has now been delayed after civil society activists and even some officials in the U.S. government raised concerns that the new initiative would draw scarce resources away from existing fora dedicated to the advancement of internet freedom, deepen distrust between like-minded actors, and undermine the digital rights of those who live in repressive societies.

The decision to pause the alliance’s launch offers the Biden administration an opportunity to reconsider the proposal and to better ground it in existing human rights norms. If established thoughtfully, the alliance could play an important role in pushing back on autocratic efforts to reshape the internet into an instrument of state control and in promoting an affirmative agenda for internet governance in service of democratic values. By delaying the launch, the administration has given itself room to address well-founded concerns among civil society that the initiative risks undermining the very principles it seeks to promote. With that in mind, here are three ways that the administration can ensure that the Alliance for the Future of the Internet is a success.

Read More

German Chancellor Angela Merkel, French President Emmanuel Macron, and European Commission President Ursula von der Leyen speak with European Council President Charles Michel via videoconference on Oct. 20, 2021. (Olivier Matthys/Pool via REUTERS)

From London to the Organisation for Economic Co-operation and Development, calls to “reimagine” or “revive” multilateralism have been a dime a dozen this year. The global upheaval of COVID-19 and emerging megatrends—from the climate crisis to global population growth—have lent a new urgency to international cooperation and highlighted a growing sclerosis within multilateralism that even its greatest proponents admit.

While these calls—and the rethinking they are beginning to provoke—are crucial, a truly new and nuanced multilateralism will require room for other models too. As we described in a paper published last year at the Bennett Institute for Public Policy at the University of Cambridge, digital minilaterals are providing a new model for international cooperation. Made up of small, trust-based, innovation-oriented networks, digital minilaterals use digital culture, practices, processes, and technologies as tools to advance peer learning, support, and cooperation between governments. 

Though far removed from great power politics, digital minilaterals are beginning to help nation-states navigate an environment of rapid technological change and problems of complex systems, including by facilitating peer learning, sharing code, and deliberating on major ethical questions, such as the appropriate use of artificial intelligence in society. Digital minilateralism is providing a decentralized form of global cooperation and could help revive multilateralism. To be truly effective, digital minilaterals must place as much emphasis on common values as on pooled knowledge, but it remains to be seen whether these new diplomatic groupings will deliver on their promise.

Read More

A laptop in the office of a Ukrainian cybersecurity firm displays a component of the codebase for the Petya ransomware family on July 4, 2017. (REUTERS/Valentyn Ogirenko)

In June 2017, when the NotPetya malware first popped up on computers across the world, it didn’t take long for authorities in Ukraine, where the infections began, to blame Russia for the devastating cyberattack that would go on to do an estimated $10 billion of damage globally. NotPetya was a component of the ongoing conflict between Russia and Ukraine, but even though it was designed to infiltrate computer systems via a popular piece of Ukrainian accounting software, the malware spread far beyond the borders of Ukraine, causing damage of remarkable scale and variety.

One of the most consequential and as-yet-unresolved legacies of NotPetya centers on Mondelez International, the multinational food company headquartered in Chicago that makes Oreos and Triscuits, among other beloved snack foods. NotPetya infected Mondelez’s computer systems, disrupting the company’s email, file access, and logistics for weeks. After the dust settled on the attack, Mondelez filed an insurance claim for damages, which was promptly denied on the grounds that the policy did not cover damages caused by war. The ensuing dispute not only threatens to remake the insurance landscape but also has major implications for what companies increasingly at risk of being hacked can expect from their insurers.

Read More

A man walks past the logo of the Israeli firm NSO Group at one of its branches in the Arava Desert of Israel on July 22, 2021. (REUTERS/Amir Cohen)

Earlier this year, an international reporting project based on a list of 50,000 phone numbers suspected of being compromised by the Pegasus spyware program revealed just how widespread digital espionage has become. Pegasus, which is built and managed by the Israeli firm NSO Group, turns mobile phones into surveillance tools by granting an attacker full access to a device’s data. It is among the most advanced pieces of cyber espionage software ever invented, and its targets include journalists, activists, and politicians. Of the 14 numbers on the list belonging to world leaders, half were African. They included two sitting heads of state—South Africa’s Cyril Ramaphosa and Morocco’s King Mohammed VI—along with the current or former prime ministers of Egypt, Burundi, Uganda, Morocco, and Algeria.

That African leaders are both victims and users of malware systems such as Pegasus should come as little surprise. Governments on the continent have for some time relied on Pegasus and other spyware to track terrorists and criminals, snoop on political opponents, and spy on citizens. However, recent reporting on NSO Group’s surveillance tools—an investigation dubbed the “Pegasus Project”—makes clear that governments across Africa are also using spyware for purposes of international espionage. And these tools are being used in ways that risk worsening authoritarian tendencies and raise questions about whether security services are being properly held to account for their use.

Read More

Former Facebook employee and whistleblower Frances Haugen testifies during a Senate hearing in Washington, D.C., on Oct. 5, 2021. (Matt McClain/Pool via REUTERS)

In recent weeks, Facebook whistleblower Frances Haugen has delivered damning testimony to lawmakers in Washington, London, and Brussels, painting a portrait of a company aware of its harmful effects on society but unwilling to act out of concern for profits and growth. Her revelations have created a public-relations crisis for the social-media giant and spurred renewed calls for stiffer oversight of online platforms. But Haugen’s revelations have also resulted in a less expected outcome: Russian propagandists using her testimony for their own ends.

The Kremlin and the network of news outlets it supports have seized on Haugen’s disclosures and the debate they have prompted as an opportunity to seed narratives that deepen political divisions within the United States, diminish the appeal of a democratic internet, and drive traffic from major social-media platforms to darker corners of the web. In doing so, the Kremlin has painted the United States as hypocritical in its support for freedom of expression and provided a boost for China’s autocratic model of internet governance, normalizing Russia’s own repressive model.

Haugen’s revelations have energized efforts on both sides of the Atlantic to strengthen the regulation of social-media platforms, and the Kremlin’s exploitation of these developments suggests that it too has a stake in how the internet is governed. At play is a much broader conflict over the future of the open web.  

Read More

A monitor at an IT trade show in Tokyo displays an AI monitoring solution that detects individuals’ characteristics in real time. (Yasushi Wada / The Yomiuri Shimbun via Reuters Connect)

Imagine that you’re applying for a bank loan to finance the purchase of a new car, which you need badly. After you provide your information, the bank gives you a choice: Your application can be routed to an employee in the lending department for evaluation, or it can be processed by a computer algorithm that will determine your creditworthiness. It’s your decision. Do you pick Door Number One (human employee) or Door Number Two (software algorithm)?

The conventional wisdom is that you would have to be crazy to pick Door Number Two with the algorithm behind it. Most commentators view algorithms with a combination of fear and loathing. Number-crunching code is seen as delivering inaccurate judgments; addicting us and our children to social media sites; censoring our political views; and spreading misinformation about COVID-19 vaccines and treatments. Observers have responded with a wave of proposals to limit the role of algorithms in our lives, from transparency requirements to limits on content moderation to increased legal liability for information that platforms highlight using their recommendation engines. The underlying assumption is that there is a surge of popular demand to push algorithms offstage and to have old-fashioned humans take their place when decisions are to be made.

However, critics and reformers have largely failed to ask an important question: How do people actually feel about having algorithms make decisions that affect their daily lives? In a forthcoming paper in the Arizona State Law Journal, we did just that, surveying people about their preferences for having a human versus an algorithm decide issues ranging from the trivial (winning a small gift certificate from a coffee shop) to the substantial (deciding whether the respondent had violated traffic laws and should pay a hefty fine). The results are surprising. They demonstrate that greater nuance is sorely needed in debates over when and how to regulate algorithms. And they show that reflexive rejection of algorithmic decisionmaking is undesirable. No one wants biased algorithms, such as ones that increase racial disparities in health care. And there are some contexts, such as criminal trials and sentencing, where having humans decide serves important values such as fairness and due process. But human decisionmakers are also frequently biased, opaque, and unfair. Creating systematic barriers to using algorithms may well make people worse off.

The results show that people opt for algorithms far more often than one would expect from scholarly and media commentary. When asked about everyday scenarios, people are mostly quite rational—they pick between the human judge and the algorithmic one based on which costs less, makes fewer mistakes, and decides faster. The surveys show that consumers are still human: We prefer having a person in the loop when the stakes increase, and we tend to stick with whichever option we’re given initially. The data analysis also suggests some useful policy interventions if policymakers do opt to regulate code, including disclosing the most salient characteristics of algorithms, establishing realistic baselines for comparison, and setting defaults carefully.
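The flavor of that analysis can be illustrated with a toy model. The sketch below is purely hypothetical: it generates synthetic survey responses consistent with the pattern described above (advantages in cost, accuracy, and speed favor the algorithm; high stakes favor the human) and fits a logistic regression to them. None of the variable names or coefficients come from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic illustration only: not the authors' survey data or code.
rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    # Positive values mean the algorithm is cheaper, more accurate, or faster.
    "cost_advantage":  rng.normal(size=n),
    "error_advantage": rng.normal(size=n),
    "speed_advantage": rng.normal(size=n),
    "high_stakes":     rng.integers(0, 2, size=n),  # 1 = e.g., a hefty fine
})

# Generate choices consistent with the reported pattern: advantages pull
# respondents toward the algorithm; high stakes pull them back to a human.
utility = (0.8 * df["cost_advantage"] + 0.8 * df["error_advantage"]
           + 0.5 * df["speed_advantage"] - 1.0 * df["high_stakes"])
df["chose_algorithm"] = (rng.random(n) < 1 / (1 + np.exp(-utility))).astype(int)

# Estimate how each factor shifts the odds of picking the algorithm.
X = sm.add_constant(df[["cost_advantage", "error_advantage",
                        "speed_advantage", "high_stakes"]])
print(sm.Logit(df["chose_algorithm"], X).fit(disp=False).params)
```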

Read More