
Tomorrow’s tech policy conversations today

View of an online hearing at Hangzhou Court of the Internet, the first internet court in the world, in China’s Zhejiang province, on Aug. 18, 2017. (Oriental Image via Reuters Connect)

China is exporting digital authoritarianism around the world, yet the debate over how best to counter its efforts to export surveillance tools has largely focused on telecommunication technologies, like those central to the human rights abuses against the Uyghur population in Xinjiang. In fact, investing in telecommunications infrastructure is only one aspect of the way the Chinese government is using digital technologies to centralize power.

Over the last decade, China has rapidly digitized its justice system, using blockchain to manage evidence and opening virtual courts. These innovations have caught the attention of justice reformers around the world looking to modernize court systems. Yet this technology is also being used to increase central control over the judiciary and to collect data on citizens. Both uses are antithetical to the liberal ideals of human rights, the rule of law, and the separation of powers.
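To make the underlying technique concrete, here is a minimal sketch of the core idea behind blockchain-based evidence management: each record commits to the hash of the record before it, so tampering with any stored item invalidates every later entry. All names are hypothetical, and this is a simplification, not a description of the Chinese courts’ actual systems.

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Deterministically hash a record's JSON serialization."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

class EvidenceChain:
    """Append-only chain: each entry commits to the previous entry's hash,
    so altering any stored item breaks verification of the whole chain."""

    def __init__(self):
        self.entries = []

    def add(self, case_id: str, evidence_digest: str) -> dict:
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "case_id": case_id,
            "evidence_digest": evidence_digest,  # hash of the file itself
            "timestamp": time.time(),
            "prev_hash": prev,
        }
        entry["hash"] = record_hash(entry)
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev or record_hash(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True
```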

Beijing sees its digital surveillance technology as more than an economic driver: it is a foreign-policy tool for gaining leverage against the West, and it is critical that Washington now respond. China’s “techno-authoritarian toolkit” has already been exported to at least 18 countries, including Ecuador, Ethiopia, and Malaysia. As China sells its tools for domestic control, its digitized justice system may be its next offering to allies and trading partners. Without a compelling alternative, China can push its digital domain into the backbone of democracy—the justice system. To avoid the digital erosion of the rule of law, the United States must invest in and support court technologies that provide an alternative to China’s. By funding research and development and the technical capacity of its justice system at home, the United States can produce desirable court and justice technologies that counteract China and advance liberal ideals around the world.

Read More

A lock icon, signifying an encrypted internet connection, is seen on an internet browser. (REUTERS/Mal Langsdon/File Photo)

Today, Oct. 21, is the first annual Global Encryption Day. Organized by the Global Encryption Coalition, the day highlights the pressing need for greater data security and online privacy, and the importance of encryption in protecting those interests. Amid devastating hacks and massive data breaches, there has never been a more urgent need for that protection.

Yet encryption is under constant threat from governments both at home and abroad. To justify their demands that providers of messaging apps, social media, and other online services weaken their encryption, regulators often cite safety concerns, especially children’s safety. They depict encryption, and end-to-end encryption (E2EE) in particular, as something that exists in opposition to public safety. That’s because encryption “completely hinders” platforms and law enforcement from detecting harmful content, impermissibly shielding those responsible from accountability—or so the reasoning goes.

There’s just one problem with this claim: It’s not true. Last month, I published a draft paper analyzing the results of a research survey I conducted this spring that polled online service providers about their trust and safety practices. I found that not only can providers detect abuse on their platforms even in end-to-end encrypted environments, but many even prefer detection techniques that don’t require access to the contents of users’ files and communications.
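That finding is easier to picture with a concrete example. The sketch below scores accounts for abuse using only metadata and user reports, never message contents, which is how detection can coexist with end-to-end encryption. The signals, weights, and thresholds are hypothetical illustrations, not drawn from the paper or any provider’s system.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    # Metadata visible to a provider even under end-to-end encryption:
    messages_sent_last_hour: int   # traffic volume, not contents
    distinct_recipients: int       # broad fan-out is a classic spam signal
    account_age_days: int
    user_reports: int              # recipients can report what they decrypted

def abuse_score(a: AccountActivity) -> float:
    """Combine content-oblivious signals into a rough abuse score in [0, 1].
    Weights here are illustrative, not tuned on real data."""
    score = 0.0
    if a.messages_sent_last_hour > 500:
        score += 0.4                           # bursty, automated-looking sending
    if a.distinct_recipients > 100:
        score += 0.3                           # untargeted fan-out
    if a.account_age_days < 2:
        score += 0.1                           # throwaway account
    score += min(a.user_reports * 0.05, 0.5)   # reports from recipients
    return min(score, 1.0)

suspect = AccountActivity(messages_sent_last_hour=800,
                          distinct_recipients=350,
                          account_age_days=1,
                          user_reports=4)
print(abuse_score(suspect))  # 1.0 -> flag the account for review
```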

Read More

Drone operators fly an MQ-9 Reaper from a ground control station at Holloman Air Force Base, New Mexico, in this U.S. Air Force handout photo taken Oct. 3, 2012. (REUTERS)

In the 20 years following the terrorist attacks of Sept. 11, 2001, successive American presidents have embraced the use of armed unmanned aerial vehicles (UAVs), or drones, to carry out strikes against terrorists with little public scrutiny. George W. Bush pioneered their use. Barack Obama institutionalized and normalized the weapon, while Donald Trump continued to rely on it. And with the withdrawal from Afghanistan, Joe Biden is all but certain to maintain the status quo and continue the use of drones to meet his commitment to prevent terrorist attacks against the United States. The most visible signal that Biden will continue to rely on drones relates to a tragic error. On Aug. 29, the Biden administration authorized a drone strike in response to an attack in Afghanistan by the Islamic State’s regional affiliate that killed 13 U.S. military personnel. Instead of killing the suspected attackers, the strike killed 10 civilians, including several children.

This tragedy has not only renewed the debate on drone warfare but also illustrates the unique challenges facing the Biden administration in its continued reliance on drones, even after the U.S. withdrawal. While the strike raised a familiar set of moral, ethical, and legal questions associated with American drone warfare, it also reflects a new set of challenges in what has been dubbed an “over-the-horizon” strategy. This strategy relies on what one analyst describes as “cooperation with local partners and selective interventions of air power, U.S. special operations forces, and intelligence, economic, and political support from regional bases outside of Afghanistan for the narrow purpose of counterterrorism.” It assumes, however, that the U.S. has the requisite technical infrastructure and intelligence-sharing agreements in place to enable the targeting of high-value terrorists in Afghanistan.

Read More

A woman interacts with a digital art installation exploring the relationship between humans and artificial intelligence at the Barbican Centre in London in 2019. (PA Images)

If you need to treat anxiety in the future, odds are the treatment won’t just be therapy, but also an algorithm. Across the mental-health industry, companies are rapidly building solutions for monitoring and treating mental-health issues that rely on just a phone or a wearable device. To do so, companies are relying on “affective computing” to detect and interpret human emotions. It’s a field that’s forecast to become a $37 billion industry by 2026, and as the COVID-19 pandemic has increasingly forced life online, affective computing has emerged as an attractive tool for governments and corporations to address an ongoing mental health crisis. 
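In this context, “affective computing” typically means inferring an emotional state from signals a phone or wearable already collects. The sketch below, a hypothetical illustration rather than any vendor’s method, trains a generic classifier to flag anxiety episodes from heart-rate features; note how exercise can mimic stress, one reason the underlying science remains contested.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical wearable features per one-minute window:
# [mean heart rate (bpm), heart-rate variability (ms), step count]
X = np.array([
    [62, 70, 40],    # calm: low heart rate, high variability
    [65, 65, 10],
    [98, 22, 5],     # anxious: elevated heart rate, low variability, sedentary
    [104, 18, 0],
    [71, 58, 90],    # exercise can resemble stress without context
    [110, 25, 150],
])
y = np.array([0, 0, 1, 1, 0, 0])  # 1 = self-reported anxiety episode

model = LogisticRegression().fit(X, y)

window = np.array([[101, 20, 3]])          # new reading: high heart rate, still
print(model.predict_proba(window)[0, 1])   # estimated probability of anxiety
```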

Despite a rush to build applications using it, emotionally intelligent computing remains in its infancy and is being introduced into therapeutic services as a fix-all solution without scientific validation or public consent. Scientists still disagree over the nature of emotions and how they are felt and expressed across various populations, yet this uncertainty has been mostly disregarded by a wellness industry eager to profit from the digitization of health care. If left unregulated, AI-based mental-health solutions risk creating new disparities in the provision of care, as those who cannot afford in-person therapy will be referred to bot-powered therapists of uncertain quality.

Read More

U.S. Secretary of State Antony Blinken is flanked by Commerce Secretary Gina Raimondo and Trade Representative Katherine Tai as they meet with European Commission Executive Vice Presidents Margrethe Vestager and Valdis Dombrovskis during U.S.-European Union trade and investment talks in Pittsburgh, Pennsylvania, on September 29, 2021. (REUTERS/John Altdorfer)

What should be the boundaries of government-sponsored cybertheft and surveillance beyond national borders? To what extent do apps such as TikTok pose a national-security threat? Can the United States and European Union reach an agreement on transatlantic data flows that balances economic, privacy, and national-security concerns? These seemingly disconnected questions lurked in the background of the recent inaugural meeting of the EU-U.S. Trade and Technology Council. They all point to the difficulty of defining the proper scope of state power to access and exploit data—one of the defining governance challenges of our time.  

The world’s major cyberpowers are plagued by internal contradictions in their approaches to online espionage, law enforcement, and data collection—when it comes to data access, governments want to have their cake and eat it too. The U.S. government struggles to keep its story straight on the distinction between “commercial” cyberespionage, which it decries, and the more traditional cyberspying that it accepts as a reality of international politics. Brussels is in the awkward position of asking other countries to do more to protect Europeans’ privacy rights than the EU can ask of its own member states. China’s expansive carveouts for national security and lack of checks on government surveillance call into question the meaningfulness of the PRC’s new privacy initiatives and feed global distrust.  

It is an open question whether self-serving approaches are sustainable in an era when data security and national security have become all but synonymous. Reckoning with the challenge of digital coexistence should begin with a candid acknowledgment of these inconsistencies, if only to clarify that in the long run, the only realistic way to transcend them is through forms of legal and political restraint.  

Read More

A NASA satellite image shows the United States, Mexico and Canada at night in this composite image. (Reuters)

Artificial intelligence has the potential to transform local economies. Hype and fear notwithstanding, many experts forecast that various forms of AI will become the source of substantial economic growth and whole new industries. Ultimately, AI’s emerging capabilities could diffuse significant productivity gains widely through the economy.

Yet even as American companies lead the way in pushing AI forward, the U.S. AI economy is far from evenly distributed. In fact, as we found in a recent report, AI development and adoption in the United States are clustered in a dangerously small number of metropolitan centers.

Our research suggests that while some AI activity is distributed fairly widely among U.S. regions, a combination of first-mover advantage, market-concentration effects, and the “winner-take-most” dynamics associated with innovation and digital industries may already be shaping AI activity into a highly concentrated “superstar” geography in which large shares of R&D and commercialization take place in only a few regions. This could lead to a new round of the tech-related interregional inequality that has produced stark economic divides, large gains for a few places, and further entrenchment of a “geography of discontent” in politics and culture.

For that reason, we would argue that the nation should actively counter today’s emerging interregional inequality. Where it can, the government should act now, while AI geography may still be fluid, to ensure that more of the nation’s talent, firms, and places participate in the initial build-out of the nation’s AI economy.

Read More

A Kratos XQ-58 Valkyrie unmanned aircraft demonstrates launching a drone from an internal bay. The Valkyrie is being developed as a possible autonomous “wingman” to U.S. fighter jets. (USAF/Cover-Images.com via Reuters Connect)

Mankind’s earliest weapons date back 400,000 years—simple wooden spears discovered in Schöningen, Germany. By 48,000 years ago, humans were making bows and arrows, then graduating to swords of bronze and iron. The age of gunpowder brought flintlock muskets, cannons, and Gatling guns. In modern times, humans built Panzer tanks, the F-16 Fighting Falcon, and nuclear weapons capable of vaporizing cities.

Today, humanity is entering a new era of weaponry, one of autonomous weapons and robotics.

The development of such technology is rapidly advancing and poses hard questions about how its use and proliferation should be governed. In early 2020, a drone may have been used to attack humans autonomously for the first time, a milestone underscoring that robots capable of killing may be widely fielded sooner rather than later. Existing arms-control regimes may offer a model for how to govern autonomous weapons, and it is essential that the international community promptly address a critical question: Should we be more afraid of killer robots run amok or the insecurity of giving them up?

Read More

Dr. Sam Pope, a pulmonary critical care physician and the director of the medical ICU at Hartford Hospital, holds a printed X-ray illustrating double lung pneumonia typical in COVID-19 patients on May 1, 2020. (Mark Mirko/Hartford Courant/TNS/ABACAPRESS.COM)

As researchers grew to understand COVID-19 during the early days of the pandemic, many built AI algorithms to analyze medical images and measure the extent of the disease in a given patient. Radiologists proposed multiple different scoring systems to categorize what they were seeing in lung scans and developed classification systems for the severity of the disease. These systems were developed and tested in clinical practice, published in academic journals, and modified or revised over time. But the pressure to quickly respond to a global pandemic threw into stark relief the lack of a coherent regulatory framework for certain cutting-edge technologies, which threatened to keep researchers from developing new diagnostic techniques as quickly as possible.

Radiology has long been one of the most promising branches of AI in medicine, as algorithms can improve traditional medical imaging methods like computed tomography (CT), magnetic resonance imaging (MRI), and X-ray. AI offers computational capabilities to process images with greater speed and accuracy and can automatically recognize complex patterns in assessing a patient’s health. But since AI algorithms can continually learn from the medical images they review, the traditional approach to reviewing and approving upgrades to software used for diagnosing, preventing, monitoring, or treating diseases such as COVID-19 may not be appropriate. As the public-health response to the pandemic begins to shift, radiology is continuing to advance our understanding of the disease (such as the mechanisms by which COVID patients suffer neurological issues like brain fog, loss of smell, and in some cases serious brain damage) and beginning to account for some of the costs of our response (such as the fact that the body’s response to vaccines may cause false positives in cancer diagnosis).
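The regulatory difficulty with continual learning is easiest to see in code: a “locked” algorithm stays frozen in the state regulators reviewed, while an adaptive one keeps updating on post-deployment data, so the approved snapshot gradually stops describing the deployed model. Below is a minimal sketch using scikit-learn’s online learner on synthetic stand-in features; it does not represent any actual medical device.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for image-derived features (e.g., lung-opacity scores).
X_approval = rng.normal(size=(200, 5))
y_approval = (X_approval[:, 0] + X_approval[:, 1] > 0).astype(int)

# "Locked" model: trained once, then frozen at the state regulators reviewed.
locked = SGDClassifier(loss="log_loss", random_state=0)
locked.partial_fit(X_approval, y_approval, classes=[0, 1])

# Adaptive model: starts identical but keeps learning after deployment.
adaptive = SGDClassifier(loss="log_loss", random_state=0)
adaptive.partial_fit(X_approval, y_approval, classes=[0, 1])

for month in range(12):
    X_new = rng.normal(size=(50, 5))     # post-deployment cases
    y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
    adaptive.partial_fit(X_new, y_new)   # weights drift with each batch

# After a year the two models can disagree on the same patient,
# even though only the locked one matches what was approved.
x = rng.normal(size=(1, 5))
print(locked.predict(x), adaptive.predict(x))
```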

In a recent policy brief for Stanford University’s Institute for Human-Centered Artificial Intelligence, we explore the path forward in building an appropriate testing framework for medical AI and show how medical societies should be doing more to build trust in these systems. In suggesting changes for diagnostic AI to reach its full potential, we draw on proposals from the Food and Drug Administration in the United States, the European Union, and the International Medical Device Regulators Forum to regulate the burgeoning “software as a medical device” (SaMD) market. We recommend that policymakers and medical societies adopt stronger regulatory guidance on testing and performance standards for these algorithms.

Read More

A drone helps give the Indianapolis Fire Department an aerial look as they respond to an industrial accident on Thursday, June 3, 2021. (USA TODAY NETWORK via Reuters Connect)

For drone pilots, danger is everywhere. An undertrained or unlucky pilot can screw up, press the wrong button, and cause her drone to plummet to the ground. A light drizzle might seep into a pilot’s drone and send it rocketing into a tree. Almost everyone who flies drones for a living has witnessed at least one such incident, in which a drone crashes, is lost, or otherwise malfunctions.

Despite the certainty that drones will crash—and perhaps endanger public safety—it’s all but impossible to determine how often such incidents occur in the United States, where there’s little publicly available data on drone crashes. This lack of data applies not only to civilian crashes, but also to the drones that are flown ever more regularly by government entities, like police and fire departments. While drone crashes don’t seem to be terribly common—and there are still no known cases of a small drone crash killing anyone—it’s still important that pilots, regulators, and the public have some sense of how often they happen and in what patterns they occur. With better data, regulators could more quickly identify unsafe practices, badly run drone programs, recurring malfunctions, and badly made equipment.

Read More

Hikvision surveillance cameras are seen in front of a Chinese flag at a shopping area in Shanghai on May 5, 2021. (REUTERS/Aly Song)

Across the Chinese government’s surveillance apparatus, its many arms are busy collecting huge volumes of data. Video surveillance footage, WeChat accounts, e-commerce data, medical history, and hotel records: It’s all fair game for the government’s surveillance regime. Yet, taken individually, each of these data streams doesn’t tell authorities very much. That’s why the Chinese government has embarked on a massive project of data fusion, which merges disparate datasets to produce data-driven analysis. This is how Chinese surveillance systems achieve what authorities call “visualization” (可视化) and “police informatization” (警务信息化).
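Mechanically, data fusion is a join: records from unrelated systems are linked on a shared identifier, so streams that reveal little on their own combine into a detailed profile. Here is a minimal sketch using pandas, with invented records and hypothetical field names.

```python
import pandas as pd

# Three unrelated data streams, each nearly harmless in isolation.
cameras = pd.DataFrame({
    "national_id": ["A1", "B2"],
    "last_seen_location": ["Metro Line 2", "Airport T3"],
})
ecommerce = pd.DataFrame({
    "national_id": ["A1", "B2"],
    "recent_purchase": ["prepaid SIM card", "groceries"],
})
hotels = pd.DataFrame({
    "national_id": ["A1"],
    "checkin_city": ["Urumqi"],
})

# Joining on the shared identifier yields a profile that none of the
# individual systems could produce alone.
profile = (cameras
           .merge(ecommerce, on="national_id")
           .merge(hotels, on="national_id", how="left"))
print(profile)
```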

While policymakers around the world have grown increasingly aware of China’s mass surveillance regime—from its most repressive practices in Xinjiang to its exports of surveillance platforms to more than 80 countries—relatively little attention has been paid to how Chinese authorities are making use of the data they collect. As countries and companies consider how to respond to China’s surveillance regime, policymakers need to understand data fusion’s crucial role in monitoring the country’s population in order to craft effective responses.

Read More