
Tomorrow’s tech policy conversations today

A man walks past the logo of the Israeli firm NSO Group at one of its branches in the Arava Desert of Israel on July 22, 2021. REUTERS/Amir Cohen

Earlier this year, an international reporting project based on a list of 50,000 phone numbers suspected of being targeted by the Pegasus spyware program revealed just how widespread digital espionage has become. Pegasus, which is built and managed by the Israeli firm NSO Group, turns mobile phones into surveillance tools by granting an attacker full access to a device’s data. It is among the most advanced pieces of cyber espionage software ever invented, and its targets include journalists, activists, and politicians. Of the 14 numbers on the list belonging to world leaders, half were African. They included two sitting heads of state—South Africa’s Cyril Ramaphosa and Morocco’s King Mohammed VI—along with the current or former prime ministers of Egypt, Burundi, Uganda, Morocco, and Algeria.

That African leaders are both victims and users of malware systems such as Pegasus should come as little surprise. Governments on the continent have for some time relied on Pegasus and other spyware to track terrorists and criminals, snoop on political opponents, and spy on citizens. However, recent reporting about NSO Group’s surveillance tools—dubbed the “Pegasus Project”— makes clear that governments across Africa are also using spyware for purposes of international espionage. And these tools are being used in ways that risk worsening authoritarian tendencies and raise questions about whether security services are being properly held to account for their use.

Read More

Former Facebook employee and whistleblower Frances Haugen testifies during a Senate hearing in Washington, D.C., on Oct. 5, 2021. Matt McClain/Pool via REUTERS

In recent weeks, Facebook whistleblower Frances Haugen has delivered damning testimony to lawmakers in Washington, London, and Brussels, painting a portrait of a company aware of its harmful effects on society but unwilling to act out of concern for profits and growth. Her revelations have created a public-relations crisis for the social-media giant and spurred renewed calls for stiffer oversight of online platforms. But her disclosures have also produced a less expected outcome: Russian propagandists using her testimony for their own ends.

The Kremlin and the network of news outlets it supports have seized on Haugen’s disclosures and the debate they have prompted as an opportunity to seed narratives that deepen political divisions within the United States, diminish the appeal of a democratic internet, and drive traffic from major social-media platforms to darker corners of the web. In doing so, they have painted the United States as hypocritical in its support for freedom of expression, provided a boost for China’s autocratic model of internet governance, and normalized Russia’s own repressive model.

Haugen’s revelations have energized efforts on both sides of the Atlantic to strengthen the regulation of social-media platforms, and the Kremlin’s exploitation of these developments suggests that it too has a stake in how the internet is governed. At play is a much broader conflict over the future of the open web.  

Read More

A monitor at an IT trade show in Tokyo displays an AI monitoring solution that detects individuals’ characteristics in real time. (Yasushi Wada / The Yomiuri Shimbun via Reuters Connect)

Imagine that you’re applying for a bank loan to finance the purchase of a new car, which you need badly. After you provide your information, the bank gives you a choice: Your application can be routed to an employee in the lending department for evaluation, or it can be processed by a computer algorithm that will determine your creditworthiness. It’s your decision. Do you pick Door Number One (human employee) or Door Number Two (software algorithm)?
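
To make Door Number Two concrete, the sketch below shows the kind of rule-based scoring an automated lender might run. It is a deliberately simplified, hypothetical illustration: the features, weights, and approval cutoff are invented for this example and do not describe the survey, any real bank’s model, or any particular product.

```python
# A minimal, hypothetical sketch of rule-based credit scoring. The feature
# names, weights, and threshold are invented for illustration only.

def credit_score(annual_income: float, debt: float, late_payments: int) -> float:
    """Return a toy score in [0, 1]; higher means more creditworthy."""
    debt_to_income = debt / annual_income if annual_income > 0 else 1.0
    score = 0.6 * max(0.0, 1.0 - debt_to_income)         # reward a low debt burden
    score += 0.4 * max(0.0, 1.0 - 0.2 * late_payments)   # penalize missed payments
    return min(score, 1.0)

def decide(annual_income: float, debt: float, late_payments: int) -> str:
    # A fixed cutoff stands in for the lender's underwriting policy.
    return "approve" if credit_score(annual_income, debt, late_payments) >= 0.5 else "deny"

if __name__ == "__main__":
    print(decide(annual_income=55_000, debt=12_000, late_payments=1))  # approve
    print(decide(annual_income=30_000, debt=28_000, late_payments=4))  # deny
```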

The conventional wisdom is that you would have to be crazy to pick Door Number Two with the algorithm behind it. Most commentators view algorithms with a combination of fear and loathing. Number-crunching code is seen as delivering inaccurate judgments; addicting us and our children to social media sites; censoring our political views; and spreading misinformation about COVID-19 vaccines and treatments. Observers have responded with a wave of proposals to limit the role of algorithms in our lives, from transparency requirements to limits on content moderation to increased legal liability for information that platforms highlight using their recommendation engines. The underlying assumption is that there is a surge of popular demand to push algorithms offstage and to have old-fashioned humans take their place when decisions are to be made.

However, critics and reformers have largely failed to ask an important question: How do people actually feel about having algorithms make decisions that affect their daily lives? In a forthcoming paper in the Arizona State Law Journal, we did just that, surveying people about their preferences for having a human versus an algorithm decide issues ranging from the trivial (winning a small gift certificate from a coffee shop) to the substantial (deciding whether the respondent had violated traffic laws and should pay a hefty fine). The results are surprising. They demonstrate that greater nuance is sorely needed in debates over when and how to regulate algorithms. And they show that reflexive rejection of algorithmic decisionmaking is undesirable. No one wants biased algorithms, such as ones that increase racial disparities in health care. And there are some contexts, such as criminal trials and sentencing, where having humans decide serves important values such as fairness and due process. But human decisionmakers are also frequently biased, opaque, and unfair. Creating systematic barriers to using algorithms may well make people worse off.

The results show that people opt for algorithms far more often than one would expect from scholarly and media commentary. When asked about everyday scenarios, people are mostly quite rational—they pick between the human judge and the algorithmic one based on which costs less, makes fewer mistakes, and decides faster. The surveys show that consumers are still human: We prefer having a person in the loop when the stakes increase, and we tend to stick with whichever option we’re given initially. The data analysis also suggests some useful policy interventions if policymakers do opt to regulate code, including disclosing the most salient characteristics of algorithms, establishing realistic baselines for comparison, and setting defaults carefully.

Read More

The Roblox app in the App Store is displayed on a smartphone screen and a Roblox logo in the background. (Thiago Prudencio / SOPA Images/Sipa USA)

Notifications incessantly ping our mobile and desktop screens. Algorithmic social media feeds consume vast quantities of our time. Simple online tasks require users to traverse minefields of unfavorable default options, all of which need to be laboriously unclicked. To address these daily annoyances of digital life, some might suggest updating smartphone notification settings, practicing better personal discipline, and doing less business online—in short, emphasizing personal responsibility and digital hygiene. But digital hygiene falls far short of systematically addressing the way in which technology is capturing an increasingly large share of our limited stock of attention.

Software does not get bored, tired, or overwhelmed, but we do—and when we do, software is often designed to prey on us. Without recognizing and potentially regulating for engagement maximization in technology, we may increasingly lose de facto ownership of our own attention through seemingly minute, but pervasive digital incursions. In a white paper recently published by UC Berkeley’s Center for Long-Term Cybersecurity, I propose a two-part solution to tech’s attention problem. First, we need to measure attention costs imposed by digital products so as to better understand just how much tech’s engagement maximization practices are costing us as we navigate ubiquitous digital infrastructures. Second, we need to develop measures to reduce attention costs when they are unacceptably high.
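
The white paper proposes measuring attention costs in general terms; how a device might actually tally them is left open. The sketch below is one rough, hypothetical way to do it from a notification log. The per-interruption recovery estimate, the log format, and the field names are all assumptions made for illustration, not figures or designs from the paper.

```python
# Hypothetical sketch: estimate a daily "attention cost" per app from a
# notification log. The 23-second recovery figure and the log format are
# assumptions for illustration, not numbers from the white paper.

from dataclasses import dataclass

RECOVERY_SECONDS = 23.0  # assumed average cost of refocusing after an interruption

@dataclass
class Notification:
    app: str
    seconds_engaged: float  # time spent in the app after tapping; 0 if dismissed

def attention_cost(log: list[Notification]) -> dict[str, float]:
    """Return estimated seconds of attention consumed per app."""
    costs: dict[str, float] = {}
    for n in log:
        # Each ping costs the interruption itself plus any time spent re-engaged.
        costs[n.app] = costs.get(n.app, 0.0) + RECOVERY_SECONDS + n.seconds_engaged
    return costs

if __name__ == "__main__":
    log = [
        Notification("social", 180), Notification("social", 0),
        Notification("email", 45), Notification("social", 600),
    ]
    for app, seconds in sorted(attention_cost(log).items(), key=lambda kv: -kv[1]):
        print(f"{app}: {seconds / 60:.1f} minutes")
```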

Read More

The Chinese actor and singer Xiao Zhan, whose online fan community prompted greater government attention to such groups, performs at a concert of his TV series “The Untamed” in Nanjing, China, on Nov. 2, 2019. (Oriental Image via Reuters Connect)

Echoing Mao Zedong before him, Xi Jinping regularly stresses the Party’s domination of all aspects of life. “East, west, south, north and center, Party, government, military, society and education—the Party rules all,” as he has said. The latest target of this drive to domination is “fandom culture,” or fanquan wenhua, which refers to online youth communities that coalesce around shared obsessions with celebrity idols. According to the Cyberspace Administration of China, “toxic idol worship” threatens to poison the minds of future generations. Last month, a newspaper published by the Chinese Communist Party’s Central Propaganda Department warned that internet addiction among teenagers “results in health risks that cannot be ignored.”  

This effort to control fandom culture comes against the backdrop of a crackdown on youth entertainment in China, including harsh restrictions on online gaming. But all this talk of rescuing Chinese youth from their own appetites is in fact a smokescreen for a far more serious purpose. Closer scrutiny of China’s recent internet crackdown suggests these moves are part of a broader effort to reassert the Party’s control over the internet as a key battleground for political and ideological security. The struggle, which touches on the future of the regime, is for the hearts and minds of China’s Generation Z. For policymakers considering how to respond to China’s crackdown on online freedoms, it is vital to understand the full scope of its efforts to consolidate power, which go far beyond just the tech industry to include online culture.  

In the eyes of the Party, the country’s hitherto vibrant internet and entertainment sector is a thing to be tamed, and the official backlash facing fandom culture in recent weeks is one of the clearest examples of how even apparently benign aspects of the internet can run afoul of a leadership obsessed with control. Just as the Xi regime has sought to bring the country’s technology companies to heel, it also seeks to control online culture more deeply, and this does not bode well for the long-term development and vibrancy of China’s internet sector. 

Read More

Election posters of Germany’s top candidates for chancellor—Annalena Baerbock, co-leader of Germany’s Green party, Olaf Scholz of the Social Democratic Party, and Armin Laschet, leader of the Christian Democratic Union—are pictured in Berlin on Sept. 16, 2021. (REUTERS/Fabrizio Bensch)

Gendered disinformation attacks online are a well-known tactic that illiberal actors around the world—including Russia, Hungary, and Brazil—have developed to undermine their opponents. By building on sexist narratives, these actors intimidate women in order to eliminate critics, consolidate power, and undermine democratic processes. Such disinformation tactics are being imported to the West and are increasingly being adopted by both foreign actors and the far right in Europe.

Recent elections in Germany provided ample evidence for how such attacks are deployed. Russian state-backed media amplified disinformation and provided more extensive negative coverage regarding Annalena Baerbock, the Green Party’s candidate for chancellor, compared to her male rivals, according to data from the Alliance for Securing Democracy, German Marshall Fund, and the Institute for Strategic Dialogue. Amid mounting concerns about disinformation and foreign interference, Germany has adopted the world’s toughest law against online hate speech and harassment—the Network Enforcement Act (NetzDG). But this wasn’t enough to overcome the disinformation and gender-based online violence facilitated by social media platforms.   

These developments in Germany provide important lessons for European policymakers at work crafting updated regulations. One major piece of that reform package is the Digital Services Act (DSA), which is intended to create a safer, more open digital space across the European Union, with greater platform accountability and more democratic oversight—especially through recently proposed amendments. These changes to the DSA would improve platform accountability for algorithms, force large platforms to assess their algorithms’ impact on fundamental rights, and mandate risk assessments of platforms’ impact on “the right to gender equality.” Because online abuse is facilitated by platforms’ design features, platforms need to be obligated to identify, prevent, and mitigate the risk of gender-based violence taking place on and being amplified by their products. If the DSA is ever going to address gendered disinformation, it is critical that these amendments be adopted, and whether that happens depends on the extent to which lawmakers in Brussels understand and care about the risks to future elections when women and gender equality are undermined online.

Against this backdrop, understanding what happened in Germany and how it illustrates the formula of gender disinformation could not be more relevant.  

Read More

View of an online hearing at Hangzhou Court of the Internet, the first internet court in the world, in China’s Zhejiang province, on Aug. 18, 2017. (Oriental Image via Reuters Connect)

China is exporting digital authoritarianism around the world, yet the debate over how to best counter its efforts to export surveillance tools has largely focused on telecommunication technologies, like those central to the human rights abuses against the Uyghur population in Xinjiang. In fact, investing in telecommunications infrastructure is only one aspect of the way in which the Chinese government is using digital technologies to centralize power.  

Over the last decade, China has rapidly digitized its justice system, such as by using blockchain to manage evidence and opening virtual courts. In doing so, its innovations have caught the attention of justice reformers around the world looking to modernize court systems. Yet this technology is being used to increase central control over the judiciary and collect data on citizens. Both are antithetical to the liberal ideals of human rights, the rule of law, and separation of powers. 
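
The article does not detail how these court systems implement blockchain-based evidence management, but the underlying technique, making records tamper-evident by chaining cryptographic hashes, can be illustrated generically. The sketch below is a minimal, hypothetical example of that idea, not a description of any actual court platform.

```python
# Generic illustration of hash-chaining evidence records so that tampering with
# any earlier entry invalidates every later one. This is a minimal sketch of the
# technique, not the design of any particular court system.

import hashlib
import json

def add_record(chain: list[dict], evidence: dict) -> None:
    """Append an evidence record linked to the hash of the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"evidence": evidence, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list[dict]) -> bool:
    """Recompute every hash; any modification breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {"evidence": entry["evidence"], "prev_hash": entry["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

if __name__ == "__main__":
    chain: list[dict] = []
    add_record(chain, {"case": "2021-001", "item": "screenshot.png"})
    add_record(chain, {"case": "2021-001", "item": "chat_export.txt"})
    print(verify(chain))                        # True
    chain[0]["evidence"]["item"] = "edited.png"
    print(verify(chain))                        # False: tampering is detected
```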

Beijing sees its digital surveillance technology as more than an economic driver: It is a foreign-policy tool for gaining leverage over the West, and it is critical that Washington now respond. China’s “techno-authoritarian toolkit” has already been exported to at least 18 countries, including Ecuador, Ethiopia, and Malaysia. As China sells its tools for domestic control, its digitized justice system may be its next offering to allies and trading partners. Without a compelling alternative, China can push its digital model into the backbone of democracy—the justice system. To avoid the digital erosion of the rule of law, the United States must invest in and support court technologies that provide an alternative to China’s. By funding research and development and the technical capacity of our justice system at home, the United States can produce desirable court and justice technologies that counteract China and advance liberal ideals around the world.

Read More

A lock icon, signifying an encrypted internet connection, is seen on an internet browser. (REUTERS/Mal Langsdon/File Photo)

Today, Oct. 21, is the first annual Global Encryption Day. Organized by the Global Encryption Coalition, the day highlights both the pressing need for greater data security and online privacy—and the importance of encryption in protecting those interests. Amid devastating hacks and massive data breaches, that need has never been more urgent.

Yet encryption is under constant threat from governments both at home and abroad. To justify their demands that providers of messaging apps, social media, and other online services weaken their encryption, regulators often cite safety concerns, especially children’s safety. They depict encryption, and end-to-end encryption (E2EE) in particular, as something that exists in opposition to public safety. That’s because encryption “completely hinders” platforms and law enforcement from detecting harmful content, impermissibly shielding those responsible from accountability—or so the reasoning goes.

There’s just one problem with this claim: It’s not true. Last month, I published a draft paper analyzing the results of a research survey I conducted this spring that polled online service providers about their trust and safety practices. I found not only that providers can detect abuse on their platforms even in end-to-end encrypted environments, but that they actually prefer detection techniques that don’t require access to the contents of users’ files and communications.
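
The survey itself describes providers’ practices in general terms; as one rough illustration of what content-oblivious detection can look like (my construction, not a technique taken from the paper), the sketch below flags likely spam accounts from message metadata alone, without ever reading message bodies. The event format and thresholds are invented for the example.

```python
# Rough illustration of content-oblivious abuse detection: flag accounts that
# blast messages to many non-contacts in a short window, using metadata only.
# The event format and thresholds are invented for this example.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class MessageEvent:
    sender: str
    recipient: str
    timestamp: float            # seconds since epoch
    recipient_is_contact: bool  # does the recipient have the sender in their contacts?

def flag_suspected_spammers(events: list[MessageEvent],
                            window: float = 3600.0,
                            min_recipients: int = 50,
                            max_contact_ratio: float = 0.1) -> set[str]:
    """Return senders who messaged many non-contacts within one recent window."""
    by_sender: dict[str, list[MessageEvent]] = defaultdict(list)
    for e in events:
        by_sender[e.sender].append(e)

    flagged: set[str] = set()
    for sender, msgs in by_sender.items():
        msgs.sort(key=lambda e: e.timestamp)
        recent = [e for e in msgs if e.timestamp >= msgs[-1].timestamp - window]
        if len({e.recipient for e in recent}) < min_recipients:
            continue
        contact_ratio = sum(e.recipient_is_contact for e in recent) / len(recent)
        if contact_ratio <= max_contact_ratio:
            flagged.add(sender)
    return flagged

if __name__ == "__main__":
    events = [MessageEvent("acct1", f"user{i}", 1000.0 + i, False) for i in range(60)]
    events.append(MessageEvent("acct2", "friend", 1000.0, True))
    print(flag_suspected_spammers(events))  # {'acct1'}
```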

Read More

Drone operators fly an MQ-9 Reaper from a ground control station at Holloman Air Force Base, New Mexico, in this U.S. Air Force handout photo taken Oct. 3, 2012. (REUTERS)

In the 20 years following the terrorist attacks of Sept. 11, 2001, successive American presidents have embraced the use of armed unmanned aerial vehicles (UAVs), or drones, to carry out strikes against terrorists with little public scrutiny. George W. Bush pioneered their use. Barack Obama institutionalized and normalized the weapon, while Donald Trump continued to rely on it. And with the withdrawal from Afghanistan, Joe Biden is all but certain to maintain the status quo and continue the use of drones to meet his commitment to prevent terrorist attacks against the United States. The most visible signal that Biden will continue to rely on drones relates to a tragic error. On Aug. 29, the Biden administration authorized a drone strike in response to an attack in Afghanistan by the Islamic State’s regional affiliate that killed 13 U.S. military personnel. Instead of killing the suspected attackers, the strike killed 10 civilians, including several children.

This tragedy has renewed the debate on drone warfare, but also illustrates the unique challenges facing the Biden administration in its continued reliance on drones, even after the U.S. withdrawal. While the strike raised a familiar set of moral, ethical, and legal questions associated with American drone warfare, it also reflects a new set of challenges in what has been dubbed an “over-the-horizon” strategy. This relies on what one analyst describes as “cooperation with local partners and selective interventions of air power, U.S. special operations forces, and intelligence, economic, and political support from regional bases outside of Afghanistan for the narrow purpose of counterterrorism.” This strategy assumes, however, that the U.S. has the requisite technical infrastructure and intelligence sharing agreements in place to enable the targeting of high-value terrorists in Afghanistan.

Read More

A woman interacts with a digital art installation exploring the relationship between humans and artificial intelligence at the Barbican Centre in London in 2019. (PA Images)

If you need to treat anxiety in the future, odds are the treatment won’t just be therapy, but also an algorithm. Across the mental-health industry, companies are rapidly building solutions for monitoring and treating mental-health issues that rely on just a phone or a wearable device. To do so, companies are relying on “affective computing” to detect and interpret human emotions. It’s a field that’s forecast to become a $37 billion industry by 2026, and as the COVID-19 pandemic has increasingly forced life online, affective computing has emerged as an attractive tool for governments and corporations to address an ongoing mental health crisis. 
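
Affective computing systems typically map low-level signals (voice, facial expression, typing cadence, or physiological data from a wearable) onto inferred emotional states. As an intentionally crude, hypothetical illustration of that mapping, with thresholds and labels invented for this sketch rather than drawn from any validated model, consider:

```python
# Deliberately crude, hypothetical illustration of affective inference from
# wearable signals. Thresholds and labels are invented for this sketch; they
# are not a validated clinical or commercial model.

from dataclasses import dataclass

@dataclass
class WearableSample:
    heart_rate: float               # beats per minute
    heart_rate_variability: float   # milliseconds; lower is often read as higher stress
    hours_slept: float

def infer_state(sample: WearableSample) -> str:
    """Map raw signals to a coarse emotional label (illustrative only)."""
    if sample.heart_rate > 100 and sample.heart_rate_variability < 30:
        return "possible acute anxiety"
    if sample.hours_slept < 5 and sample.heart_rate_variability < 40:
        return "elevated stress"
    return "baseline"

if __name__ == "__main__":
    print(infer_state(WearableSample(heart_rate=110, heart_rate_variability=22, hours_slept=6)))
    print(infer_state(WearableSample(heart_rate=72, heart_rate_variability=55, hours_slept=8)))
```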

Despite a rush to build applications using it, emotionally intelligent computing remains in its infancy, and it is being introduced in the realm of therapeutic services as a fix-all solution without scientific validation or public consent. Scientists still disagree over the nature of emotions and how they are felt and expressed among various populations, yet this uncertainty has been mostly disregarded by a wellness industry eager to profit from the digitalization of health care. If left unregulated, AI-based mental-health solutions risk creating new disparities in the provision of care, as those who cannot afford in-person therapy will be referred to bot-powered therapists of uncertain quality.

Read More