Commentary

New intermediary rules jeopardize the security of Indian internet users

A member of the All India Student Federation teaches farmers about social media and how to use such tools as part of ongoing protests against the government.

India’s information technology ministry recently finalized a set of rules that the government argues will make online service providers more accountable for their users’ bad behavior. Noncompliance may expose a provider to legal liability from which it is otherwise immune. Despite the rules’ apparently noble aim of incentivizing providers to better police their services, in reality, the changes pose a serious threat to Indians’ data security and reflect the Indian government’s increasingly authoritarian approach to internet governance.

The government of Prime Minister Narendra Modi has in recent years taken a distinctly illiberal approach to online speech. When India’s IT ministry released its original draft of the rules more than two years ago, civil society groups criticized the proposal as a grave threat to free speech and privacy rights. In the intervening years, threats to free speech have only grown. To quell dissent, Modi’s government has shut off the internet in multiple regions. Facing widespread protests led by the country’s farmers against his government, Modi has escalated his attacks on the press and pressured Twitter into taking down hundreds of accounts critical of the government’s protest response. The new rules represent the latest tightening of state control over online content, and as other backsliding democracies consider greater restrictions on online speech, the Modi government is providing a troubling model for how to do so. 

Beyond chilling digital rights, the new rules undermine computer security protections that Indian internet users rely on every day in order to grant the state greater power to police online content. They require messaging services to be able to determine the origin of content and demand that online platforms deploy automated tools to take down certain content deemed illegal. Taken together, these requirements threaten freedom of speech as well as the privacy and security of India’s internet users.

The relevant provisions apply to “significant” “social media intermediaries” (which I’ll call SSMIs for short). “Significant” means the provider has hit a yet-to-be-defined threshold of registered Indian users. “Social media intermediary” broadly encompasses many kinds of services driven by user-generated content. A government press release calls out WhatsApp, YouTube, Facebook, Instagram, and Twitter specifically, but services as diverse as LinkedIn, Twitch, Medium, TikTok, and Reddit also fall within the definition.

Two provisions are of particular concern. Section 4(2) of the new rules requires SSMIs that are “primarily” messaging providers to be able to identify the “first originator” of content on the platform. Section 4(4) requires any SSMI (not limited to messaging) to “endeavour to deploy technology-based measures, including automated tools or other mechanisms to proactively identify” two categories of content: child sex abuse material and content identical to anything that’s been taken down before. I’ll call these the “traceability” and “filtering” provisions.

These provisions endanger the security of Indian internet users because they are incompatible with end-to-end encryption. End-to-end encryption, or E2EE, is a data security measure that protects information by encoding it into an illegible scramble that no one but the sender and the intended recipient can decode. That way, the encrypted data remains confidential, and outsiders cannot tamper with it undetected en route to the recipient. These two properties, confidentiality and integrity, are core underpinnings of data security.
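To make that concrete, here is a minimal sketch of the idea using the PyNaCl library (a choice made purely for illustration; real messaging apps such as WhatsApp and Signal use the more elaborate Signal protocol, and the key names below are hypothetical). The point it demonstrates is that only the holder of the recipient’s private key can read the message, and any tampering is detected on decryption.

```python
from nacl.public import PrivateKey, Box
from nacl.exceptions import CryptoError

# Hypothetical sender and recipient each generate a key pair;
# the private keys never leave their devices.
sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# The sender encrypts using their private key and the recipient's public key.
sender_box = Box(sender_key, recipient_key.public_key)
ciphertext = sender_box.encrypt(b"meet at the market at noon")

# Only the recipient, holding the matching private key, can decrypt (confidentiality).
recipient_box = Box(recipient_key, sender_key.public_key)
assert recipient_box.decrypt(ciphertext) == b"meet at the market at noon"

# A third party -- including the provider relaying the message -- cannot decrypt it,
# and any modification of the ciphertext also fails on decryption (integrity).
outsider_key = PrivateKey.generate()
outsider_box = Box(outsider_key, sender_key.public_key)
try:
    outsider_box.decrypt(ciphertext)
except CryptoError:
    print("unreadable without the recipient's private key")
```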

Not even the provider of an E2EE service can decrypt the encrypted information. That’s why E2EE is incompatible with tracing and filtering content. Tracing the “originator” of a piece of content requires the ability to identify every instance in which a user sent it, which an intermediary can’t do if it can’t read what users send. The same problem defeats automatically filtering a service for certain content.
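An illustrative sketch of the incompatibility, again using PyNaCl and assuming a simple hash-based blocklist (one common way content filters are built, used here only as an example): because the provider sees only ciphertext, it cannot match messages against previously removed content, and it cannot even tell that two users sent the same message, which is the crux of identifying a “first originator.”

```python
import hashlib
from nacl.public import PrivateKey, Box

# Hypothetical blocklist of content the provider has been ordered to filter,
# keyed by a hash of the plaintext (an assumption for illustration).
blocked_hashes = {hashlib.sha256(b"previously removed content").hexdigest()}

sender = PrivateKey.generate()
recipient = PrivateKey.generate()
box = Box(sender, recipient.public_key)

# Two users send the very same banned message.
ct1 = box.encrypt(b"previously removed content")
ct2 = box.encrypt(b"previously removed content")

# The provider only ever relays ciphertext. Hashing what it sees never
# matches the blocklist, and the two ciphertexts don't even match each
# other, because each encryption uses a fresh random nonce.
assert hashlib.sha256(bytes(ct1)).hexdigest() not in blocked_hashes
assert bytes(ct1) != bytes(ct2)
```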

Put simply, SSMIs can’t provide end-to-end encryption and still comply with these two provisions. This is by design. Speaking anonymously to The Economic Times, one government official said the new rules will force large online platforms to “control” what the government deems to be unlawful content: Under the new rules, “platforms like WhatsApp can’t give end-to-end encryption as an excuse for not removing such content,” the official said.

The rules confront SSMIs with an untenable choice: either weaken their data security practices, or open themselves up to expensive litigation as the price of strong security. Intermediaries should not be penalized for choosing to protect users’ data. Indeed, the existing rules already require intermediaries to take “reasonable measures” to secure user data. If SSMIs weaken their encryption to comply with the new traceability and filtering provisions, will that violate the requirement to take reasonable measures to secure user data? This tension creates yet another quandary for intermediaries.

The new rules make a contradictory demand: Secure Indians’ data—but not too well. A nation of 1.3 billion people cannot afford half-measures. National, economic, and personal security have become indivisible from data security. Strong encryption is critical to protecting data, be it military communications, proprietary business information, medical information, or private conversations between loved ones. Good data security is even more vital since the COVID-19 pandemic shifted much of daily life online. Without adequate protective measures, sensitive information is ripe for privacy invasions, theft, espionage, and hacking.

Weakening intermediaries’ data security is a gift to those who seek to harm India and its people. Citing national security and privacy concerns, Indian authorities have moved to restrict the presence of Chinese apps in India, but these new rules risk leaving the country’s internet users exposed to those very threats. The rules affect all of an intermediary’s users, not just those using the platform for bad acts. Over 400 million Indians currently use WhatsApp, and Signal hopes to add 100 million to 200 million Indian users in the next two years. Most of those half-billion people are not criminals. If intermediaries drop E2EE to comply with the new rules, they will primarily jeopardize the privacy and security of law-abiding people, in return for making it easier for police to monitor the small criminal minority.

Such monitoring may prove less effective than the Indian government expects. If popular apps cease offering E2EE, many criminals will drop those apps and move to the dark web, where they’re harder to track down. Some might create their own encrypted apps, as Al-Qaeda did as far back as 2007. In short, India’s new rules may lead to a perverse outcome where outlaws have better security than the law-abiding people whom they target. 

Meanwhile, weakening encryption is not the only way for police to gather evidence. We live in a “golden age for surveillance” in which our activities, movements, and communications generate a wealth of digital information about us. Many sources of digital evidence, such as communications metadata, cloud backups, and email, are not typically end-to-end encrypted. That means they’re available from the service provider in readable form. If Indian police have difficulty acquiring such data (for example because the data and the company are located outside of India), it’s not due to encryption, and passing rules limiting encryption will do nothing to ameliorate the problem.

When intermediaries employ end-to-end encryption, that means stronger security for communities, businesses, government, the military, institutions, and individuals, all of which adds up to the security of the nation. But the new traceability and filtering requirements may put an end to end-to-end encryption in India, putting the whole country’s security at risk. Amid a global backslide in internet freedom, the rules also offer an example for other would-be authoritarians to follow.

Riana Pfefferkorn is a research scholar at the Stanford Internet Observatory.

Facebook, Google, and Microsoft provide financial support to the Brookings Institution, a nonprofit organization devoted to rigorous, independent, in-depth public policy research. 