Tomorrow’s tech policy conversations today

Microsoft recently announced that it had detected efforts by Russia, China, and Iran to influence the upcoming U.S. election. The discovery should not come as any surprise. In his 2019 congressional testimony, former special counsel Robert Mueller cautioned that Russia’s election interference “wasn’t a single attempt. They’re doing it as we sit here.”

The reason the Russians attempted to influence the election outcome in 2016 is simple: They think that domestic politics matters for foreign policy. That calculus hasn’t changed, so it’s no surprise that Russia is again interested in influencing the U.S. electoral outcome. What’s different this time around is the chess board of the international system: the actors, their preferred outcomes, and their preferred mechanisms of influence.


A Chinese researcher works on an ultracold atom device at the CAS-Alibaba Quantum Computing Laboratory in Shanghai, China, 30 July 2015.

In 2019, a team of Chinese technicians, engineers, and scientists sent pairs of photons from a single satellite, Micius, to two ground stations in China 1,120 kilometers apart. The photon pairs were entangled, prepared so that the information they carried remained perfectly correlated despite the distance between them. In addition, the two receiving stations were able to verify that they could not be disrupted or deceived by any third party. The experiment demonstrated the ability to share secret cryptographic keys between the two locations, with no known means for a third party to covertly observe or copy them. Although the rate of the key exchange was too low for practical use, the achievement represented a step toward secret communications guaranteed by the laws of physics.
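The logic of that exchange can be sketched compactly. The Python toy below is our illustration, not the Micius team’s protocol: the quantum physics is reduced to shared random bits, and a measure-and-resend eavesdropper is crudely modeled as corrupting a fraction of them. It shows the essential pattern of entanglement-based key distribution: publicly compare a sacrificed sample of results, and abort if the correlations have been disturbed.

```python
import random

def entangled_pairs(n):
    """Toy stand-in for entangled photon pairs: each pair yields the same
    random bit when measured at either ground station."""
    bits = [random.randint(0, 1) for _ in range(n)]
    return bits[:], bits[:]            # station A's results, station B's results

def intercept(bits):
    """A measure-and-resend eavesdropper breaks the correlation on roughly
    a quarter of the pairs (a crude approximation of the real physics)."""
    return [random.randint(0, 1) if random.random() < 0.5 else b for b in bits]

def check_and_distill(a_bits, b_bits, sample_size=200):
    """Publicly compare a random sample of results; any mismatch signals an
    eavesdropper. The unsampled, still-secret bits become the shared key."""
    sample = set(random.sample(range(len(a_bits)), sample_size))
    errors = sum(a_bits[i] != b_bits[i] for i in sample)
    if errors:
        raise RuntimeError(f"{errors} mismatches in sample: channel compromised")
    return [a_bits[i] for i in range(len(a_bits)) if i not in sample]

a, b = entangled_pairs(1000)
key = check_and_distill(a, b)          # clean channel: both stations hold the same secret key
a, b = entangled_pairs(1000)
try:
    check_and_distill(a, intercept(b)) # tampered channel: detected and aborted
except RuntimeError as err:
    print(err)
```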

Several countries have spent decades investing in quantum communication technology in search of ways to move data that are both cost-effective and secure. The surge in China’s work in the field dates to 2013, when the release of classified information by Edward Snowden detailing U.S. intelligence capabilities caused deep concern in Beijing. “This incident has been so fundamental to Chinese motivations that Snowden has been characterized as one of two individuals with a primary role in the scientific ‘drama’ of China’s quantum advances, along with Pan Jianwei, the father of Chinese quantum information science,” the researchers Elsa Kania and John Costello concluded in a 2017 report.

The national-security implications of China’s interest in space-based quantum communications cut several ways. The development of impenetrably secure communications links in China would be a loss for American intelligence organizations. On the other hand, China’s intensive efforts in using space for secure quantum-based communications may lead that nation to see international agreements governing space activities as in its national interest. This strategic interest might be leveraged as part of a future U.S.-China agreement on managing competition in space. There are ample opportunities for collaboration in this field among the United States, Europe, Canada, Japan, Australia, and other democratic allies. China’s leading position in quantum data security suggests that U.S.-China collaboration—at least on basic science—would be a net benefit for the United States in understanding the state of the art.


When playing online, gamers should be extremely cautious in order to avoid downloading malware.

Ben Nimmo, director of investigations at Graphika, speaks with CEPA’s Alina Polyakova and Lawfare’s Quinta Jurecic about a recent information operation linked to the Internet Research Agency—the “troll farm” behind Russian efforts to interfere in the 2016 U.S. election on social media. They discuss how the IRA’s new campaign, which employed AI-generated images and real, apparently unwitting freelance writers to amplify content from a sham website posing as a left-wing news source, demonstrates a continuity of strategy but a refinement of tactics over time.


A-level students protest the use of algorithms to predict their exam grades, opposite Downing Street amid the outbreak of the coronavirus disease (COVID-19), in London, Britain, August 16, 2020. REUTERS/Henry Nicholls

Democratic governments and agencies around the world are increasingly relying on artificial intelligence. Police departments in the United States, the United Kingdom, and elsewhere have begun to use facial recognition technology to identify potential suspects. Judges and courts have started to rely on machine learning to guide sentencing decisions. In the U.K., one in three local authorities is said to be using algorithms or machine learning (ML) tools to make decisions about issues such as welfare benefit claims. These government uses of AI are widespread enough to raise the question: Is this the age of government by algorithm?

Many critics have expressed concerns about the rapidly expanding use of automated decision-making in sensitive areas of policy such as criminal justice and welfare. The most frequently voiced concern is bias: When machine learning systems are trained on biased data sets, they inevitably embed the data’s underlying social inequalities in their models. The data science and AI communities are now highly sensitive to data bias issues and, as a result, have started to focus far more intensely on the ethics of AI. Similarly, individual governments and international organizations have published statements of principle intended to govern AI use.
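To see how label bias propagates, consider a minimal synthetic sketch (our illustration, not drawn from the article; all numbers are made up): applicants in two groups are equally qualified by construction, but the historical approval decisions used as training labels penalize one group, and anything fit to those labels reproduces the gap.

```python
import random

random.seed(0)

# Synthetic 'historical' loan records: both groups are equally qualified by
# construction, but past decision-makers approved group B at a lower rate.
def historical_record():
    group = random.choice(["A", "B"])
    qualified = random.random() < 0.7           # identical rate in both groups
    penalty = 0.0 if group == "A" else 0.3      # label bias against group B
    approved = qualified and random.random() >= penalty
    return group, approved

data = [historical_record() for _ in range(100_000)]

# A 'model' that learns the historical approval rate per group, a stand-in
# for the group-level signal a real classifier would extract from such labels.
for g in ("A", "B"):
    outcomes = [approved for group, approved in data if group == g]
    print(g, round(sum(outcomes) / len(outcomes), 3))
# Prints roughly A 0.70 and B 0.49: the learned decision rule reproduces the
# historical disparity even though qualification rates are identical.
```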

A common principle of AI ethics is explainability. The risk of producing AI that reinforces societal biases has prompted calls for greater transparency about algorithmic or machine learning decision processes, and for ways to understand and audit how an AI agent arrives at its decisions or classifications. As the use of AI systems proliferates, being able to explain how a given model or system works will be vital, especially for those used by governments or public sector agencies.
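One widely used model-agnostic technique of this kind is permutation importance, offered here purely as an illustrative sketch with a toy model and synthetic data: shuffle one input feature at a time and measure how far accuracy falls, revealing which inputs the model’s decisions actually depend on.

```python
import random

random.seed(1)

# Toy data: the label depends strongly on x1, weakly on x2, not at all on x3.
def sample():
    x = [random.random() for _ in range(3)]
    return x, int(0.8 * x[0] + 0.2 * x[1] > 0.5)

data = [sample() for _ in range(5_000)]

def model(x):                  # stand-in for any trained black-box model
    return int(0.8 * x[0] + 0.2 * x[1] > 0.5)

def accuracy(rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

baseline = accuracy(data)
for j in range(3):             # shuffle one feature at a time
    column = [x[j] for x, _ in data]
    random.shuffle(column)
    permuted = [(x[:j] + [v] + x[j + 1:], y) for (x, y), v in zip(data, column)]
    print(f"x{j + 1}: accuracy drop = {baseline - accuracy(permuted):.3f}")
# x1 shows the largest drop and x3 none: the audit reveals which inputs the
# model's decisions actually depend on, without opening the model itself.
```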

Yet explainability alone will not be a panacea. Although transparency about decision-making processes is essential to democracy, it is a mistake to think this represents an easy solution to the dilemmas algorithmic decision-making will present to our societies.


Police officers in Mexico monitor feeds from surveillance cameras displayed on several large monitors.

We live in an age where big data forecasting is everywhere. Around the world, scientists are assembling huge data sets to understand everything from the spread of COVID-19 to consumers’ online shopping habits. However, as models to forecast future events proliferate, few people understand their inner workings or assumptions. Forecasting systems all have weaknesses, and when they are used for policymaking and planning, they can have drastic implications for people’s lives. For this reason alone, it is imperative that we begin to look at the science behind the algorithms.

By examining one such system, it is possible to understand how the seemingly innocuous use of theories, assumptions, or models is open to misapplication.


Computer network equipment is seen in a server room in Vienna, Austria, October 25, 2018. REUTERS/Heinz-Peter Bader

Alissa Starzak is the head of public policy at Cloudflare, a web-infrastructure and cybersecurity company that helps websites stay online by protecting them against excess or malicious internet traffic. Although Cloudflare is not itself a social media platform, it plays a role in online content moderation by deciding which websites it will support—or not. Alongside Lawfare’s Quinta Jurecic and Evelyn Douek, Starzak discusses exactly what Cloudflare does, the impact of its decisions to withdraw service from websites deemed to be hosting extremist or violent material, and how Cloudflare thinks about its role as an arbiter of online content.


Facebook CEO Mark Zuckerberg speaks via video conference during an Antitrust, Commercial and Administrative Law Subcommittee hearing on “Online platforms and market power: Examining the dominance of Amazon, Facebook, Google and Apple,” on Capitol Hill in Washington, July 29, 2020.

The CEOs of America’s most powerful technology companies went before Congress recently to answer questions about their growing role in the U.S. economy. Lawmakers grilled the CEOs on their business practices and whether it is time to curb their companies’ market power. But for antitrust to work in the digital era, it must go beyond its traditional focus on market power to consider questions of public interest.

For years, technology ethicists have considered how to square the interests of major companies with the interests of society as a whole, and recent approaches to ethics in the technology industry provide a cautionary tale for antitrust policy. Our data ethics research shows that Big Tech companies tend to approach challenges from the perspective of compliance: that as long as a company ticks the boxes on a checklist, it is in the clear. While checklists can highlight concerns, they do not necessarily lead to ethical actions. Current approaches to antitrust threaten to extend a compliance mindset, which will likely result in efforts that ostensibly tackle market power but fail to protect consumers. 


U.S. President Donald Trump uses a mobile phone during a roundtable discussion on the reopening of small businesses, in the State Dining Room at the White House in Washington, June 18, 2020. REUTERS/Leah Millis

On Sunday night, President Donald Trump retweeted a video of a violent incident on a New York City subway platform. The video shows a Black man pushing a white woman into a train car and is captioned “Black Lives Matter / Antifa.” The problem? The video is over a year old and has nothing to do with either Black Lives Matter or Antifa. In fact, it shows the actions of a mentally ill man with no known ties to either group.

Trump’s Sunday night retweet is a case study in how far-right online networks work across social media platforms to build their followings, promote their messages, and provide Trump with the viral content that filled his timeline on Sunday. The video was first posted online by a self-identified follower of a network of online white supremacists. It was then re-posted with the inaccurate caption by a recently created Spanish-language citizen news site, TDN_NOTICIAS, dedicated to spreading inflammatory, racist news items. From TDN_NOTICIAS, it was a short journey to Trump’s Twitter feed, where he retweeted it. By operating across a variety of platforms—Twitter, Dropbox, and Telegram among them—a group of hateful online provocateurs managed to spread a false news report and gain a coveted signal boost from Trump’s Twitter account.


People gather in front of Twitter Japan headquarters to pressure the company to be more active against hate speech and discrimination on the platform, June 6, 2020, in Tokyo, Japan. (Photo by Nicolas Datiche/AFLO)

Infrastructure that works well rarely stands out. The internet infrastructure provided by Cloudflare, which operates a content delivery network that safeguards millions of sites online, is a notable exception. Last year, Cloudflare came under intense pressure to stop providing its services to 8chan, the online message board popular among white supremacists, after the gunmen in three separate shootings posted manifestos on the site prior to their attacks. 8chan had relied on the company’s content delivery network to keep its message board online and accessible. After initially saying it had no legal obligation to cut 8chan off, the company eventually relented and denied the site the use of its services.

Cloudflare’s decision highlights a fundamental question about internet infrastructure companies: What is the political process behind their content moderation decisions?

The services that companies like Cloudflare provide are twofold. First, a content delivery network provides faster load times. Because of the sheer size of the globe, as well as the physical limits of wires and fiber-optic cables, content housed on a server farther away from a requesting user will usually take longer to load. Content delivery networks, or CDNs, solve this problem by storing cached copies of a site’s content in data centers around the world, as close to the requesting user as possible. Without this service, streaming music or video would slow down considerably. Yet CDNs don’t just offer faster load times—they also provide a unique form of security. One way to take down a website is to overload it with requests, to the point where it has to deny service altogether, in what is known as a distributed denial-of-service (DDoS) attack. However, DDoS attacks aren’t as effective against websites that rely on companies like Cloudflare, because requests are directed to the CDN rather than the website’s own server. As the biggest of many such infrastructural service providers, Cloudflare keeps clients’ websites afloat by making sure that they can always meet users’ demands for the content they provide.
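In code terms, an edge node is little more than a cache with an origin fallback. The Python sketch below is a deliberately simplified illustration, not Cloudflare’s implementation; it shows both properties at once, since repeated requests are absorbed at the edge and only a single request ever reaches the origin.

```python
import time

class EdgeCache:
    """Toy CDN edge node: serve cached copies locally, hit the origin only on a miss."""

    def __init__(self, fetch_from_origin, ttl_seconds=60):
        self.fetch_from_origin = fetch_from_origin
        self.ttl = ttl_seconds
        self.store = {}                        # url -> (content, expiry timestamp)

    def get(self, url):
        cached = self.store.get(url)
        if cached and cached[1] > time.time():
            return cached[0]                   # cache hit: the origin never sees this request
        content = self.fetch_from_origin(url)  # cache miss: one request reaches the origin...
        self.store[url] = (content, time.time() + self.ttl)
        return content                         # ...then copies are served locally until the TTL expires

origin_requests = 0
def origin(url):
    global origin_requests
    origin_requests += 1
    return f"<html>content of {url}</html>"

edge = EdgeCache(origin)
for _ in range(10_000):                        # a flood of identical requests...
    edge.get("https://example.com/index.html")
print(origin_requests)                         # ...reaches the origin exactly once
```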

Cloudflare and other CDN providers usually offer their services even when the content to be hosted and streamed on their clients’ websites is objectionable. Until recently, Cloudflare in particular maintained that content should never be regulated at the level of infrastructural delivery, clinging to a vision of infrastructure untainted by politics. Cloudflare argues that it should not make content moderation decisions. But the question of whether infrastructure companies should make decisions about content, often at the heart of the debates over hate speech and its continued online presence, is a distraction from the reality that they already do—just not in ways that most users of those infrastructures can see. Content moderation does not happen only at the moment of termination: It happens every day a website is kept up and available by the infrastructure below it.


People look at data on their mobiles as a background image of internet wire cables on a switch hub is projected in this picture illustration taken May 30, 2018. REUTERS/Kacper Pempel/Illustration

The modern world runs on “big data,” the massive data sets used by governments, firms, and academic researchers to conduct analyses, unearth patterns, and drive decision-making. When it comes to data analysis, bigger can be better: The more high-quality data is incorporated, the more robust the analysis will be. Large-scale data analysis is becoming increasingly powerful thanks to machine learning and has a wide range of benefits, such as informing public-health research, reducing traffic, and identifying systemic discrimination in loan applications.

But there’s a downside to big data, as it requires aggregating vast amounts of potentially sensitive personal information. Whether amassing medical records, scraping social media profiles, or tracking banking and credit card transactions, data scientists risk jeopardizing the privacy of the individuals whose records they collect. And once data is stored on a server, it may be stolen, shared, or compromised.

Computer scientists have spent years trying to make data more private, but even when they attempt to de-identify data—for example, by removing individuals’ names or other parts of a data set—it is often possible for others to “connect the dots” and piece together information from multiple sources to determine a supposedly anonymous individual’s identity (via a so-called re-identification or linkage attack).
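A linkage attack can be strikingly simple. The sketch below uses entirely synthetic, illustrative records: a “de-identified” medical table is joined to a public voter roll on three quasi-identifiers (ZIP code, birth date, and sex), and every record comes back with a name attached.

```python
# Toy linkage attack: a "de-identified" medical table (names removed) is joined
# to a public voter roll on quasi-identifiers, re-identifying each patient.
# All records are synthetic and illustrative.

medical = [  # names removed, but quasi-identifiers retained
    {"zip": "02138", "birth": "1945-07-12", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth": "1971-03-02", "sex": "M", "diagnosis": "asthma"},
]

voter_roll = [  # public record with names and the same quasi-identifiers
    {"name": "Jane Doe", "zip": "02138", "birth": "1945-07-12", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "birth": "1971-03-02", "sex": "M"},
]

QUASI = ("zip", "birth", "sex")

def link(anon_rows, public_rows):
    index = {tuple(r[k] for k in QUASI): r["name"] for r in public_rows}
    for row in anon_rows:
        name = index.get(tuple(row[k] for k in QUASI))
        if name:                          # the quasi-identifier combination is unique
            yield name, row["diagnosis"]  # -> the 'anonymous' record is re-identified

for name, diagnosis in link(medical, voter_roll):
    print(name, "->", diagnosis)
```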

Fortunately, in recent years, computer scientists have developed a promising new approach to privacy-preserving data analysis known as “differential privacy” that allows researchers to unearth the patterns within a data set—and derive observations about the population as a whole—while obscuring the information about each individual’s records.
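The canonical building block is the Laplace mechanism: noise calibrated to a query’s sensitivity and a privacy parameter epsilon is added to each answer, so aggregates stay accurate while any single record’s contribution is masked. The sketch below is a minimal illustration with made-up parameters, not a production implementation.

```python
import math
import random

def dp_count(records, predicate, epsilon):
    """Differentially private count using the Laplace mechanism.

    Adding or removing one record changes a count by at most 1 (sensitivity 1),
    so Laplace noise with scale 1/epsilon suffices for epsilon-DP."""
    true_count = sum(1 for r in records if predicate(r))
    u = random.random() - 0.5              # inverse-transform sample of Laplace(0, 1/epsilon)
    noise = -(1 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# 10,000 synthetic records, ~30% with a sensitive attribute.
records = [{"has_condition": random.random() < 0.3} for _ in range(10_000)]

# The aggregate answer stays useful (close to 3,000)...
print(dp_count(records, lambda r: r["has_condition"], epsilon=0.1))
# ...while noise of scale 1/epsilon = 10 masks whether any single
# individual's record is present in the data set.
```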
