Across the Chinese government’s surveillance apparatus, its many arms are busy collecting huge volumes of data. Video surveillance footage, WeChat accounts, e-commerce data, medical history, and hotel records: It’s all fair game for the government’s surveillance regime. Yet, taken individually, each of these data streams doesn’t tell authorities very much. That’s why the Chinese government has embarked on a massive project of data fusion, which merges disparate datasets to produce data-driven analysis. This is how Chinese surveillance systems achieve what authorities call “visualization” (可视化) and “police informatization” (警务信息化).
While policymakers around the world have grown increasingly aware of China’s mass surveillance regime—from its most repressive practices in Xinjiang to its exports of surveillance platforms to more than 80 countries—relatively little attention has been paid to how Chinese authorities are making use of the data they collect. As countries and companies consider how to respond to China’s surveillance regime, policymakers need to understand data fusion’s crucial role in monitoring the country’s population in order to craft effective responses.
In recent years, policymakers have attempted to tackle the harms associated with online platforms via an ineffective bricolage of finger-pointing, performative hearings grilling various CEOs, and, ultimately, policy proposals. These proposals mostly aim to reform the intermediary liability immunity provisions of Section 230 of the Communications Decency Act. But the debate over whether and how to reform this law, which protects platforms from most lawsuits stemming from content posted by users, has been mostly unproductive and riddled with outlandish proposals. Consensus on specific or theoretical reform has remained elusive.
However, just as the progressive antitrust movement has won allies in the Republican Party, the effort to reform Section 230 may ultimately provide conservatives and liberals another issue area where they might find common cause. With the federal government increasingly at odds with the tech industry, an unlikely coalition is being formed by those who see regulation as a way to hurt industry and those who see reform as a good in itself. If a significant number of Republicans are willing to back President Joe Biden’s progressive pick to lead the Federal Trade Commission, Lina Khan, it’s not unreasonable to think that Section 230 reform might inspire the formation of a similar bipartisan coalition.
Nonetheless, Section 230 reform faces a number of formidable obstacles. Unlike the resurgent antitrust movement, the Section 230 reform space lacks uniform goals. And the debate over reforming the law represents one of the most muddled in all of Washington.
What follows is a synthesis of the paradigms, trends, and ideas that animate Section 230 reform bills and proposals. Some have more potential for bipartisan support, while others remain party-line ideas that function as ideological or messaging tools. This piece attempts to clarify the landscape of major Section 230 reform proposals. We separate the proposals based on their approach to reforming the law: broadening exemptions from immunity, clarifying one or more parts of the content governance process, or solely targeting illegal content.
All around the United Kingdom, local authorities desperately need to build additional primary and secondary schools. The UK’s school-age population is rapidly growing, and with nearly 400,000 additional pupils expected to enter the school system in the coming year, some 640 new schools are needed. At the same time, local authorities face twin pressures that make new construction a daunting challenge: dwindling budgets and a need to reduce emissions. To meet this challenge, researchers at the University of Cambridge are exploring the use of prefabricated engineered timber buildings that aim to reduce costs and hit sustainability targets for new school construction.
The role of technology innovation in climate crisis mitigation is by now well-established. But the Cambridge project stands out because it focuses on publicly procured school structures. Recent international climate policy encourages business and industry to green how they work through innovation. Yet governments often underappreciate their own procurement power as a vital environmental policy instrument far closer to home. Directing government procurement spending toward more sustainable projects represents a major opportunity not only to reduce emissions created by governments’ own operations, but also to encourage the development of technologies capable of mitigating and helping societies adapt to the climate crisis. As the economist William Janeway describes, when new technologies mature beyond R&D, the state can create a market “by serving as the first customer”, pulling innovations “down the learning curve” to cheaper, dependable production.
Across the country, the tools that power modern police surveillance contribute to cycles of violence and harassment. Predictive policing systems digitally redline certain neighborhoods as “hotspots” for crime, with some systems generating lists of people they think are likely to become perpetrators. These designations subject impacted communities to increased police presence and surveillance that follows people from their homes to schools to work. The typical targets are Black and brown youth, who may also be secretly added to gang databases or asked by school officials to sign “contracts” that prohibit them from engaging in behavior that “could be interpreted as gang-affiliated.” For communities whose lived experience includes being treated as inherently suspicious by police and teachers, increased surveillance can feel like a tool for social control rather than a means of public safety.
The foundation of these practices is what police departments and technology companies call data-driven policing, intelligence-led policing, data-informed community-focused policing, or precision policing. While data has always been used to solve crime, these tools go a step further, relying on a fraught premise: that mining information from the past can assist in predicting and preventing future crimes. As the scholar Andrew Guthrie Ferguson has said, “Big-data technology lets police become aggressively more proactive.” But this data can be biased, unreliable, or simply false. Unquestioned reliance on data can hypercharge discriminatory harms from over-policing and the school-to-prison pipeline. Our elected leaders must uncover and dismantle these practices and recognize them for what they are: an attack on our constitutional rights to due process and equal protection under the law.
When the Supreme Court handed down its decision in Van Buren v. United States, cybersecurity professionals nationwide breathed a sigh of relief. Asked to determine the scope of the United States’ main federal anti-hacking law, the court adopted a limited interpretation of the Computer Fraud and Abuse Act (CFAA). Had the ruling come out differently, it could have created more risk for so-called “white hat” hackers who search for flaws in software as a public service.
But even after Van Buren, white hats continue to face some lingering legal uncertainty under the CFAA and other laws. Meanwhile, the United States faces nothing short of a cybersecurity crisis, and U.S. authorities have begun to acknowledge that “black hat” hackers (particularly those overseas) appear largely unmoved by the threat of prosecution. That is, the specter of liability may be discouraging white hats from doing innocuous or beneficial security research, without meaningfully deterring malicious hacking. This topsy-turvy state of the law—and those who wield it as a cudgel to threaten researchers—is a weakness in U.S. national security.
Confronted by viral conspiracy theories, climate change denialism, extremist movements, and anti-democratic groups (among others) feeding off false information online, social media platforms have taken steps in recent years to curtail the spread of misinformation. But even as tech companies have come under pressure to crack down on misinformation, one key avenue of information distribution in the digital economy—podcasting—has escaped significant scrutiny, despite the massive scale of the podcast ecosystem.
Nearly 116 million Americans—or around 41%—listen to podcasts monthly, but only recently have podcasters begun to receive scrutiny for their role in spreading misleading or false content. When Joe Rogan—perhaps the world’s most popular podcaster—questioned in April the relative risks of COVID-19 vaccines for young people, he came under intense criticism. Rogan quickly backtracked, telling his more than 11 million listeners that he had been a “moron.” But that retraction may have come too late, as there remains a strong correlation between listening to the Joe Rogan Experience and vaccine hesitancy.
Unfortunately, the spread of misinformation in podcasts appears to be common. In a preliminary analysis of more than 8,000 episodes of popular political podcasts, approximately one in ten episodes included potentially false information. Due to the way podcasts are distributed, however, addressing the problem will require a different approach than in other sectors of the tech industry, one that combines broad infrastructure changes and a fundamental rethinking of the role of the listener in content moderation.
During President Joe Biden’s first six months in office, his administration has made a priority of revitalizing American alliances and intensifying scrutiny of the technology industry. In Europe, policymakers are also examining the influence of tech companies. These efforts on both sides of the Atlantic crystallized in June with the formation of the EU-US Trade and Technology Council (TTC), which Biden announced at a summit alongside his EU counterpart, European Commission President Ursula von der Leyen. This new body represents an opportunity for policymakers in the United States and Europe to strengthen efforts to improve the online information ecosystem. With politicians and antitrust investigators in both Washington and Brussels scrutinizing the market power of major tech companies, the TTC gives officials in the United States and Europe a venue to make sure that their respective efforts are aligned.
The TTC should be a catalyst for policy that works to govern the myriad information and content-related problems online and a venue for policymakers to answer pressing questions regarding how to regulate online ecosystems. Democracies will fail in their mandate to produce both safe and open communication spaces if they do not confront questions about what makes some types of content manipulation unacceptable while others—including influence operations carried out in democratic states—are embraced. The TTC ought to serve as a springboard for resolving these problems and broader concerns around informational manipulation, digital hate, and influence operations.
Loitering missiles operate from a simple premise: What if a missile could become more accurate by slowing down?
Awkward cousins to armed drones and cruise missiles, loitering munitions were first developed as a specialized weapon to target anti-aircraft systems in the 1980s and now exist as an alternative to everything from airstrikes to mortar rounds or grenade tosses. Loitering munitions can be as small as a model airplane or longer than a surfboard. Typically fixed-wing and powered by pusher propellers, they can resemble everything from matchsticks with wings to Klingon Birds of Prey. Categorically, loitering munitions are autonomous missiles that can stay airborne for some time, identify a target, and then attack. A munition’s loiter—or the amount of time between launch and detonation—is a function of the missile’s sensors and the kinds of targets these weapons are wielded against.
For decades, loitering missiles have been at the forefront of autonomous lethality. Historically, loitering munitions were used to target things like radars, but they are increasingly being used to attack humans. And as they make this transition in targeting capability, loitering munitions represent a bridge between today’s precision-guided weapons, which rely on greater levels of human control, and a future of autonomous weapons with increasingly little human intervention.
As COVID-19 swept through the world in early 2020, technology companies scrambled to repurpose their products to fight the pandemic. This repurposing was especially pronounced in the civilian drone industry, whose companies predicted that the pandemic would prove the value of their map-making, inspection, and delivery technology. As drones were adapted for everything from monitoring social-distance requirements to delivering medical supplies, companies hoped that a historically drone-skeptical public might be won over by the technology once and for all.
But how many of those lofty predictions about how drones would help humanity survive the pandemic actually came true? More than a year after the pandemic was declared, the impact of drones in combating it, despite the industry’s predictions, has been decidedly mixed. While they have shown potential in their ability to deliver medical supplies, drones have also been deployed in a variety of ways to control populations and carry out token public-health work. Of the lofty predictions about drones’ life-saving capabilities, none have come true, undermining efforts to demonstrate the genuinely helpful ways drones can be deployed and to build public support for the technology.
In recent memory, ransomware has gone from major nuisance to international crisis. Criminal gangs that target computers, encrypt their contents, and demand a ransom in order to provide a decryptor tool have struck critical infrastructure around the world. They have disrupted Ireland’s entire healthcare system, shut down hundreds of food retailers in Sweden, and interrupted fuel delivery to the U.S. Eastern Seaboard, among countless other examples.
At the heart of the ransomware phenomenon is a misalignment of economic and policy incentives that allows criminals to operate successfully and with impunity. But as ransomware has proliferated, addressing this problem most often falls on the shoulders of its victims—businesses facing difficult decisions about whether or not to pay ransoms to regain access to critical systems and data. And as victims have paid up in order to mitigate damage, there are now growing calls for businesses to be banned from paying ransoms.
But these calls to ban ransom payments outright fail to capture what is an enormously complicated policy issue. As it stands, the ransomware model favors the criminal, but would banning ransom payments outright reverse this imbalance of incentives? We hail from different countries and different cybersecurity backgrounds; one of us is mostly experienced in the private sector and the other in government. One begins from a presumption in favor of a ban, the other against one. Here, we examine the vexing question of whether to ban ransom payments, consider wider ideas about how to disrupt the flow of money to criminals, and use our differing perspectives to offer some solutions.