
Tomorrow’s tech policy conversations today

Drone operators fly an MQ-9 Reaper from a ground control station at Holloman Air Force Base, New Mexico, in this U.S. Air Force handout photo taken Oct. 3, 2012. (REUTERS)

In the 20 years following the terrorist attacks of Sept. 11, 2001, successive American presidents have embraced the use of armed unmanned aerial vehicles (UAVs), or drones, to carry out strikes against terrorists with little public scrutiny. George W. Bush pioneered their use. Barack Obama institutionalized and normalized the weapon, while Donald Trump continued to rely on it. And with the withdrawal from Afghanistan, Joe Biden is all but certain to maintain the status quo and continue the use of drones to meet his commitment to prevent terrorist attacks against the United States. The most visible signal that Biden will continue to rely on drones relates to a tragic error. On Aug. 29, the Biden administration authorized a drone strike in response to an attack in Afghanistan by the Islamic State’s regional affiliate that killed 13 U.S. military personnel. Instead of killing the suspected attackers, the strike killed 10 civilians, including several children.

This tragedy has renewed the debate over drone warfare, but it also illustrates the unique challenges facing the Biden administration in its continued reliance on drones, even after the U.S. withdrawal. While the strike raised a familiar set of moral, ethical, and legal questions associated with American drone warfare, it also reflects a new set of challenges in what has been dubbed an “over-the-horizon” strategy. That strategy relies on what one analyst describes as “cooperation with local partners and selective interventions of air power, U.S. special operations forces, and intelligence, economic, and political support from regional bases outside of Afghanistan for the narrow purpose of counterterrorism.” It assumes, however, that the U.S. has the requisite technical infrastructure and intelligence-sharing agreements in place to enable the targeting of high-value terrorists in Afghanistan.

Read More

A woman interacts with a digital art installation exploring the relationship between humans and artificial intelligence at the Barbican Centre in London in 2019. (PA Images)

If you need to treat anxiety in the future, odds are the treatment won’t just be therapy, but also an algorithm. Across the mental-health industry, companies are rapidly building solutions for monitoring and treating mental-health issues that rely on just a phone or a wearable device. To do so, companies are relying on “affective computing” to detect and interpret human emotions. It’s a field that’s forecast to become a $37 billion industry by 2026, and as the COVID-19 pandemic has increasingly forced life online, affective computing has emerged as an attractive tool for governments and corporations to address an ongoing mental health crisis. 
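
The mechanics are worth making concrete. Below is a toy sketch, in Python, of the kind of inference affective computing performs: mapping raw wearable signals to a labeled emotional state. Every signal name, baseline, and threshold here is invented for illustration and has no clinical validity.

```python
# Toy sketch of the affective-computing pattern described above: inferring an
# emotional state from raw wearable signals. All baselines and thresholds are
# invented for illustration and have no clinical validity.
def infer_affect(heart_rate_bpm: float, skin_conductance_us: float) -> str:
    """Crude rule-based classifier over two common wearable signals."""
    # Normalize each signal against an assumed resting baseline.
    arousal = (heart_rate_bpm - 60.0) / 40.0 + (skin_conductance_us - 2.0) / 6.0
    if arousal > 1.0:
        return "high arousal (flagged as possible anxiety)"
    if arousal > 0.3:
        return "moderate arousal"
    return "calm"

print(infer_affect(heart_rate_bpm=96, skin_conductance_us=7.5))  # high arousal
```

That a few hard-coded numbers can produce an “anxiety” flag is precisely the point of concern: commercial systems embed far more elaborate versions of this inference, but the validation question is the same.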

Despite a rush to build applications using it, emotionally intelligent computing remains in its infancy and is being introduced in the realm of therapeutic services as a fix-all solution without scientific validation or public consent. Scientists still disagree over the nature of emotions and how they are felt and expressed among various populations, yet this uncertainty has been mostly disregarded by a wellness industry eager to profit from the digitalization of health care. If left unregulated, AI-based mental-health solutions risk creating new disparities in the provision of care, as those who cannot afford in-person therapy will be referred to bot-powered therapists of uncertain quality.

Read More

U.S. Secretary of State Antony Blinken is flanked by Commerce Secretary Gina Raimondo and Trade Representative Katherine Tai as they meet with European Commission Executive Vice Presidents Margrethe Vestager and Valdis Dombrovskis during U.S. and European Union trade and investment talks in Pittsburgh, Pennsylvania, on September 29, 2021. (REUTERS/John Altdorfer)

What should be the boundaries of government-sponsored cybertheft and surveillance beyond national borders? To what extent do apps such as TikTok pose a national-security threat? Can the United States and European Union reach an agreement on transatlantic data flows that balances economic, privacy, and national-security concerns? These seemingly disconnected questions lurked in the background of the recent inaugural meeting of the EU-U.S. Trade and Technology Council. They all point to the difficulty of defining the proper scope of state power to access and exploit data—one of the defining governance challenges of our time.  

The world’s major cyberpowers are plagued by internal contradictions in their approaches to online espionage, law enforcement, and data collection—when it comes to data access, governments want to have their cake and eat it too. The U.S. government struggles to keep its story straight on the distinction between “commercial” cyberespionage, which it decries, and the more traditional cyberspying that it accepts as a reality of international politics. Brussels is in the awkward position of asking other countries to do more to protect Europeans’ privacy rights than the EU can ask of its own member states. China’s expansive carveouts for national security and lack of checks on government surveillance call into question the meaningfulness of the PRC’s new privacy initiatives and feed global distrust.  

It is an open question whether self-serving approaches are sustainable in an era when data security and national security have become all but synonymous. Reckoning with the challenge of digital coexistence should begin with a candid acknowledgment of these inconsistencies, if only to clarify that in the long run, the only realistic way to transcend them is through forms of legal and political restraint.  

Read More

A NASA satellite image shows the United States, Mexico and Canada at night in this composite image. (Reuters)

Artificial intelligence has the potential to transform local economies. Hype and fear notwithstanding, many experts forecast that various forms of AI will become a source of substantial economic growth and whole new industries. Ultimately, AI’s emerging capabilities could diffuse significant productivity gains widely through the economy, with sizable impacts.

Yet even as American companies lead the way in pushing AI forward, the U.S. AI economy is far from evenly distributed. In fact, as we found in a recent report, AI development and adoption in the United States are clustered in a dangerously small number of metropolitan centers.

Our research suggests that while some AI activity is distributed fairly widely among U.S. regions, a combination of first-mover advantage, market-concentration effects, and the “winner-take-most” dynamics associated with innovation and digital industries may already be shaping AI activity into a highly concentrated “superstar” geography in which large shares of R&D and commercialization take place in only a few regions. This could set off a new round of the tech-related interregional inequality that has already produced stark economic divides, large gains for a few places, and further entrenchment of a “geography of discontent” in politics and culture.
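
To see what “winner-take-most” concentration means in practice, consider a minimal sketch that computes each metro area’s share of AI activity and a Herfindahl-Hirschman-style concentration index over those shares. The metro names and counts are hypothetical stand-ins, not figures from our report.

```python
# Hypothetical metro-level counts of AI activity (e.g., AI job postings or
# patents). These numbers are illustrative stand-ins, not data from the report.
ai_activity = {
    "Metro A": 4000,
    "Metro B": 2500,
    "Metro C": 1500,
    "Metro D": 500,
    "All other metros": 1500,
}

total = sum(ai_activity.values())
shares = {metro: count / total for metro, count in ai_activity.items()}

# Herfindahl-Hirschman-style index: the sum of squared shares. It approaches
# 1/len(shares) under an even spread and 1.0 when one place has everything.
hhi = sum(share ** 2 for share in shares.values())

print(f"Top metro share: {max(shares.values()):.0%}")  # -> 40%
print(f"Concentration index: {hhi:.2f}")               # -> 0.27
```

Tracked over time, a rising index of this kind is how a fluid geography hardens into a “superstar” one.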

For that reason, we would argue that the nation should actively counter today’s emerging interregional inequality. Where it can, the government should act now, while AI geography may still be fluid, to ensure that more of the nation’s talent, firms, and places participate in the initial buildout of the nation’s AI economy.

Read More

A Kratos XQ-58 Valkyrie unmanned aircraft demonstrates launching a drone from an internal bay. The Valkyrie is being developed as a possible autonomous “wingman” to U.S. fighter jets. (USAF/Cover-Images.com via Reuters Connect)

Mankind’s earliest weapons date back 400,000 years—simple wooden spears discovered in Schöningen, Germany. By 48,000 years ago, humans were making bows and arrows, then graduating to swords of bronze and iron. The age of gunpowder brought flintlock muskets, cannons, and Gatling guns. In modern times, humans built Panzer tanks, the F-16 Fighting Falcon, and nuclear weapons capable of vaporizing cities.

Today, humanity is entering a new era of weaponry, one of autonomous weapons and robotics.

The development of such technology is rapidly advancing and poses hard questions about how its use and proliferation should be governed. In early 2020, a drone may have been used to attack humans autonomously for the first time, a milestone underscoring that robots capable of killing may be widely fielded sooner rather than later. Existing arms-control regimes may offer a model for how to govern autonomous weapons, and it is essential that the international community promptly address a critical question: Should we be more afraid of killer robots run amok or of the insecurity of giving them up?

Read More

Dr. Sam Pope, a pulmonary critical care physician and the director of the medical ICU at Hartford Hospital, holds a printed X-ray illustrating double lung pneumonia typical in COVID-19 patients on May 1, 2020. (Mark Mirko/Hartford Courant/TNS/ABACAPRESS.COM)

As researchers grew to understand COVID-19 during the early days of the pandemic, many built AI algorithms to analyze medical images and measure the extent of the disease in a given patient. Radiologists proposed multiple different scoring systems to categorize what they were seeing in lung scans and developed classification systems for the severity of the disease. These systems were developed and tested in clinical practice, published in academic journals, and modified or revised over time. But the pressure to respond quickly to a global pandemic threw into stark relief the lack of a coherent regulatory framework for certain cutting-edge technologies, a gap that threatened to keep researchers from developing new diagnostic techniques as quickly as possible.

Radiology has long been one of the most promising branches of AI in medicine, as algorithms can improve traditional medical imaging methods like computed tomography (CT), magnetic resonance imaging (MRI), and X-ray. AI offers computational capabilities to process images with greater speed and accuracy and can automatically recognize complex patterns in assessing a patient’s health. But since AI algorithms can continually learn from the medical images they review, the traditional approach to reviewing and approving upgrades to software used for diagnosing, preventing, monitoring, or treating diseases such as COVID-19 may not be appropriate. As the public-health response to the pandemic begins to shift, radiology is continuing to advance our understanding of the disease (such as the mechanisms by which COVID patients suffer neurological issues like brain fog, loss of smell, and in some cases serious brain damage) and beginning to account for some of the costs of our response (such as the fact that the body’s response to vaccines may cause false positives in cancer diagnoses).
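
The tension with version-locked approval is easiest to see in miniature. The sketch below, with a single weight standing in for a model and invented numbers throughout, shows how a continually learning algorithm drifts away from the exact parameters a regulator reviewed.

```python
# Deliberately simplified sketch of drift under continual learning. A single
# weight stands in for a model; real diagnostic AI is vastly more complex.
import random

random.seed(0)

approved_weight = 0.50   # the parameter state a regulator reviewed and cleared
weight = approved_weight

def learn_from_case(w: float, severity: float, label: float,
                    lr: float = 0.05) -> float:
    """One online-learning step: nudge the model toward the clinician's label."""
    prediction = w * severity
    return w + lr * (label - prediction) * severity

# Every newly labeled scan updates the deployed model in place.
for _ in range(1000):
    severity = random.random()                        # a new case arrives
    label = 0.8 * severity + random.gauss(0.0, 0.05)  # clinician's assessment
    weight = learn_from_case(weight, severity, label)

print(f"approved: {approved_weight:.2f}, deployed after 1,000 cases: {weight:.2f}")
```

After enough cases, the software rendering diagnoses is no longer the software that was approved, a scenario the traditional upgrade-review process was never designed for.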

In a recent policy brief for Stanford University’s Institute for Human-Centered Artificial Intelligence, we explore the path forward in building an appropriate testing framework for medical AI and show how medical societies should be doing more to build trust in these systems. In suggesting changes for diagnostic AI to reach its full potential, we draw on proposals from the Food and Drug Administration in the United States, the European Union, and the International Medical Device Regulators Forum to regulate the burgeoning “software as a medical device” (SaMD) market. We recommend that policymakers and medical societies adopt stronger regulatory guidance on testing and performance standards for these algorithms.

Read More

A drone helps give the Indianapolis Fire Department an aerial look as they respond to an industrial accident on Thursday, June 3, 2021. (USA TODAY NETWORK via Reuters Connect)

For drone pilots, danger is everywhere. An undertrained or unlucky pilot can screw up, press the wrong button, and cause her drone to plummet to the ground. A light drizzle might seep into a pilot’s drone and send it rocketing into a tree. Almost everyone who flies drones for a living has witnessed at least one such incident, in which a drone crashes, is lost, or otherwise malfunctions.

Despite the certainty that drones will crash—and perhaps endanger public safety—it’s all but impossible to determine how often such incidents occur in the United States, where there’s little publicly available data on drone crashes. This lack of data applies not only to civilian crashes, but also to the drones that are flown ever more regularly by government entities, like police and fire departments. While drone crashes don’t seem to be terribly common—and there are still no known cases of a small drone crash killing anyone—it’s still important that pilots, regulators, and the public have some sense of how often they happen and in what patterns they occur. With better data, regulators could more quickly identify unsafe practices, badly run drone programs, and malfunctioning or badly made equipment.

Read More

Hikvision surveillance cameras are seen in front of a Chinese flag at a shopping area in Shanghai on May 5, 2021. (REUTERS/Aly Song)

Across the Chinese government’s surveillance apparatus, its many arms are busy collecting huge volumes of data. Video surveillance footage, WeChat accounts, e-commerce data, medical history, and hotel records: It’s all fair game for the government’s surveillance regime. Yet, taken individually, these data streams don’t tell authorities very much. That’s why the Chinese government has embarked on a massive project of data fusion, which merges disparate datasets to produce data-driven analysis. This is how Chinese surveillance systems achieve what authorities call “visualization” (可视化) and “police informatization” (警务信息化).
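
As an illustration of what fusion buys, here is a minimal sketch that groups records from independent streams under a shared identifier. The record formats, field names, and values are all hypothetical.

```python
# Minimal sketch of data fusion: grouping records from independent streams
# under a shared identifier. All records and field names are hypothetical.
from collections import defaultdict

camera_sightings = [
    {"id": "110101X", "where": "Metro Station 4", "when": "2021-05-01 08:14"},
]
hotel_checkins = [
    {"id": "110101X", "hotel": "Riverside Inn", "when": "2021-04-30 21:02"},
]
ecommerce_orders = [
    {"id": "110101X", "item": "SIM card", "when": "2021-04-29 13:45"},
]

def fuse(*streams):
    """Merge every stream into per-person profiles keyed by the shared ID."""
    profiles = defaultdict(list)
    for stream in streams:
        for record in stream:
            profiles[record["id"]].append(record)
    return profiles

for person_id, records in fuse(camera_sightings, hotel_checkins,
                               ecommerce_orders).items():
    # Sorting by timestamp turns thin, separate signals into one timeline.
    print(person_id, sorted(records, key=lambda r: r["when"]))
```

Each stream in isolation says little; joined on one identifier and ordered in time, they become a record of where a person shopped, slept, and traveled: the kind of composite picture “visualization” refers to.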

While policymakers around the world have grown increasingly aware of China’s mass surveillance regime—from its most repressive practices in Xinjiang to its exports of surveillance platforms to more than 80 countries—relatively little attention has been paid to how Chinese authorities are making use of the data they collect. As countries and companies consider how to respond to China’s surveillance regime, policymakers need to understand data fusion’s crucial role in monitoring the country’s population in order to craft effective responses.

Read More

Senator Ted Cruz (R-TX) shows a board before a Senate Judiciary subcommittee hearing on online speech on April 10, 2019. (REUTERS/Jeenah Moon)

In recent years, policymakers have attempted to tackle the harms associated with online platforms via an ineffective bricolage of finger-pointing, performative hearings grilling various CEOs, and, ultimately, policy proposals. These proposals mostly aim to reform the intermediary-liability immunity provisions of Section 230 of the Communications Decency Act. But the debate over whether and how to reform this law, which protects platforms from most lawsuits stemming from content posted by users, has been mostly unproductive and riddled with outlandish proposals. Consensus on reform, whether specific or theoretical, has remained elusive.

However, just as the progressive antitrust movement has won allies in the Republican Party, the effort to reform Section 230 may ultimately provide conservatives and liberals another issue area where they might find common cause. With the federal government increasingly at odds with the tech industry, an unlikely coalition is forming between those who see regulation as a way to hurt the industry and those who see reform as a good in itself. If a significant number of Republicans are willing to back President Joe Biden’s progressive pick to lead the Federal Trade Commission, Lina Khan, it’s not unreasonable to think that Section 230 reform might inspire the formation of a similar bipartisan coalition.

Nonetheless, Section 230 reform faces a number of formidable obstacles. Unlike the resurgent antitrust movement, the Section 230 reform space lacks uniform goals. And the debate over reforming the law represents one of the most muddled in all of Washington.

What follows is a synthesis of the paradigms, trends, and ideas that animate Section 230 reform bills and proposals. Some have more potential for bipartisan support, while others remain party-line ideas that function as ideological or messaging tools. This piece attempts to clarify the landscape of major Section 230 reform proposals. We separate the proposals based on their approach to reforming the law: broadening exemptions from immunity, clarifying one or more parts of the content-governance process, or solely targeting illegal content.

Read More

Thermographic video footage shows what appears to be a plume of methane gas flowing from a vent stack at a storage facility in Minerbio, Italy. (CATF/James Turitto/Handout via Reuters)

All around the United Kingdom, local authorities desperately need to build additional primary and secondary schools. The UK’s school-age population is rapidly growing, and with nearly 400,000 additional pupils expected to enter the school system in the coming year, some 640 new schools are needed. At the same time, local authorities face twin pressures that make new construction a daunting challenge: dwindling budgets and a need to reduce emissions. To meet this challenge, researchers at the University of Cambridge are exploring the use of prefabricated engineered timber buildings that aim to reduce costs and hit sustainability targets for new school construction.

The role of technology innovation in climate crisis mitigation is by now well-established. But the Cambridge project stands out because it focuses on publicly procured school structures. Recent international climate policy encourages business and industry to green how they work through innovation. Yet governments often underappreciate their own procurement power as a vital environmental policy instrument far closer to home. Directing government procurement spending toward more sustainable projects represents a major opportunity not only to reduce emissions created by governments’ own operations, but also to encourage the development of technologies capable of mitigating and helping societies adapt to the climate crisis. As the economist William Janeway describes, when new technologies mature beyond R&D, the state can create a market “by serving as the first customer,” pulling innovations “down the learning curve” to cheaper, dependable production.

Read More