Every Thursday, the TechStream newsletter brings you the latest from Brookings’ TechStream, with news and analysis about the world of technology. To sign up and get this newsletter delivered to your inbox, click here.
The stand-off between the Indian government and Western technology companies over online speech sharply escalated this week. On Monday, Indian police raided the offices of Twitter after the company flagged a government spokesman’s tweets for sharing forged material. On Wednesday, WhatsApp sued the Indian government in an attempt to block new internet regulations that threaten to undermine the security and privacy of Indian internet users.
This week’s events come against the backdrop of an attempt by the Modi government to expand its ability to control online speech and block dissent as it struggles to contain a massive outbreak of COVID-19. New regulations announced earlier this year that came into force on Wednesday—and which WhatsApp is now challenging in court—require online platforms to promptly respond to government takedown requests of content deemed inappropriate, to appoint in-country representatives, and to be able to trace the so-called “first originator” of content on a platform.
This last provision is the focus of WhatsApp’s lawsuit, which argues that the provision will force the company to break the end-to-end encryption that ensures the company and law enforcement mostly can’t read the content of messages. According to WhatsApp’s argument, the Modi government’s new regulations are in violation of privacy protections in the Indian constitution. “Civil society and technical experts around the world have consistently argued that a requirement to ‘trace’ private messages would break end-to-end encryption and lead to real abuse. WhatsApp is committed to protecting the privacy of people’s personal messages and we will continue to do all we can within the laws of India to do so,” the company said in a statement.
WhatsApp’s lawsuit sets up what may be the most consequential legal battle over encrypted communications technology since the U.S. government attempted in 2016 to force Apple to break into an iPhone belonging to a man involved in a terrorist attack in San Bernardino, California. Since then, major technology companies have incorporated end-to-end encryption technology in communications platforms with billions of users, and governments around the world have bitterly complained that this technology has undermined criminal investigations. The continued mass availability of encrypted communications technology represents perhaps the key battleground for online privacy. Whether WhatsApp will be able to continue offering encrypted chat services in India, its largest market, will have major consequences for the technology’s availability globally.
The outcome of WhatsApp’s suit will have immense implications for online liberties in India and elsewhere. Only 45% of India’s 1.37 billion people have access to the internet, and as a larger portion of the population comes online, the shape of the Indian internet, given its huge user base, is likely to influence how other governments write their own online rules. If India’s restrictive internet regulations stand, they will likely contribute to the further splintering of the global web. And if companies like WhatsApp cannot offer end-to-end encryption in India, the consequences will be felt beyond the country’s borders: Millions in the Indian diaspora rely on WhatsApp to stay in touch with friends and family there, and if its encryption tools are not allowed to remain in place, those users will likely have to move to less secure, more privacy-invasive platforms.
The dispute in India between the government and online platforms also has ramifications for how backsliding democracies will address questions of online speech. The Modi government has moved to limit internet freedoms as it has implemented a range of highly controversial policies—agricultural reform and stripping Kashmir of its constitutional protections, among them—and India is likely to serve as a model for other governments in their attempts to impose order on free-wheeling online platforms. In Russia, for example, the government is moving to build a sovereign web, restricting the availability of Twitter, and increasing the pace at which it demands that platforms take down content. If the Modi government succeeds, it may embolden autocrats elsewhere to move more aggressively against online platforms.
In India, the government’s effort to control opposition voices online has crystallized in its attempts to bring Twitter to heel. After thousands of farmers descended on Delhi to protest agricultural reforms, the Modi government demanded that Twitter take down hundreds of posts related to the protest. In recent weeks, the government ordered hundreds of posts be removed that criticized the Indian government’s handling of the COVID-19 pandemic, even as social media has emerged as a critical tool for Indians to find medical care in a health-care system near collapse. This week, Indian police raided Twitter’s Delhi office after the platform applied a “manipulated media” label to a post by a government spokesman that shared an image of a forged document purporting to have been drafted by the opposition Congress party and describing how to capitalize politically on the government’s poor handling of the pandemic. The document was easily debunked as a fake.
Monday’s raid was mostly symbolic—the authorities delivered a notice challenging the label to an office that was empty because of COVID—but the message it sends is unmistakable: The Indian government wants greater control over what content appears on Twitter. Whether it succeeds in getting that power will reverberate around the world.
– Elias Groll (@EliasGroll)
Recently on TechStream
The rapid rollout of new artificial intelligence applications has regulators struggling to respond with their usual toolkit. While AI moves rapidly, the creation or modification of regulations—“hard law”—moves more slowly, and as a result governments struggle to swiftly address the issues raised by AI. As a result, “soft law”—programs that create substantive expectations that are not directly enforceable by the government—is growing in popularity as a governance mechanism. As soft law grows as a tool to govern AI systems, it is imperative that organizations gain a better understanding of current deployments and best practices—a goal Carlos Ignacio Gutierrez and Gary Marchant aim to facilitate with the launch of a new database documenting these tools.
What we’re following
Data privacy. Following the enactment of a data-privacy law, the Chinese government has warned more than 200 app developers not to collect more data on their users than needed. The warnings are the latest effort by the Chinese government to rein in its tech sector; so far, warnings to curb data collection have been delivered to major Chinese tech firms such as Baidu and Tencent, as well as to Microsoft.
AI. The Chinese government is reportedly testing an AI-enabled facial recognition system capable of gauging emotion on the country’s Uyghur minority. If confirmed, the technology would be the latest example of the Chinese state trialing new surveillance technology in its campaign against Chinese Uyghurs.
U.S.-China. The Senate voted on Thursday to advance a bill aimed at boosting U.S. competitiveness with China. The measure would funnel money toward U.S. semiconductor production and boost funding for the National Science Foundation, the Department of Energy, and NASA.
Disinfo. EU regulators announced new non-binding rules on disinformation that would require platforms to open up the hood on their algorithms and show how they prevent the spread of false information. While the rules are nonbinding for now, they may become legally binding as part of the forthcoming Digital Services Act.
Cyber Command. With his administration attempting to respond to a series of damaging cybersecurity breaches, President Joe Biden is expected to propose an increase in funding for U.S. Cyber Command. Biden’s budget would grow the size of the so-called Cyber Mission Force by 10%.
Cybersecurity guidelines. The Department of Homeland Security is expected to issue cybersecurity guidelines for the pipeline industry following a ransomware attack on a major pipeline company. The new regulations will require pipeline companies to report major incidents to the federal government, with further rules to come.
Antitrust. The Washington, D.C. attorney general announced that he would sue Amazon on antitrust grounds, alleging that the online retailer prevented sellers on the site from offering their products for lower prices on other platforms. The suit is the latest in a string of antitrust lawsuits against major tech companies that allege anticompetitive behavior.
Epic v. Apple. Arguments concluded this week in the closely watched legal battle between Apple and videogame developer Epic, which is suing on antitrust grounds to loosen Apple’s control of its App Store and the 30% commission it takes on purchases made through it. The case represents the gravest immediate antitrust challenge facing Apple. Its outcome appears uncertain and turns on how the judge decides to define the market in question.
Content moderation. Florida Governor Ron DeSantis signed into law a measure that would fine online platforms that bar politicians from their platform. The law is a response to recent decisions by platforms to suspend former President Donald Trump and reflects broader conservative concerns about the growing power of tech companies. While the law is unlikely to survive legal challenge, it is probably the first of many such efforts to restrict the ability of online platforms to moderate speech.
Predictive policing. Around the United States, police departments are increasingly relying on databases to predict crime. But these systems can have highly unpredictable consequences, as in the case of Robert McDaniel, who was placed on a “heat list” by the Chicago police department and was subsequently shot twice—likely in part because he was placed on the list by police in the first place.
Reports we’re reading
Influence operations. A Facebook report details influence operations detected on its platforms between 2017 and 2020.
China’s tech wish list. A report from the Center for Security and Emerging Technology examines the types of technologies China is interested in acquiring abroad.
A final point
“I received a partnership proposal, which consists of damaging the Pfizer vaccine on video.”
—French YouTuber Léo Grasset, also known as DirtyBiology, describes a pitch from a PR agency with possible links to Russia that asked him to spread vaccine misinformation.
Apple, Facebook, Google, and Microsoft provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research.