What U.S. policymakers can learn from the European decision on personalized ads

The European Data Protection Board’s (EDPB) recent decision on Meta’s personalized ad practices might require social media companies and other online businesses to significantly revise their data-focused advertising business models if it is upheld by the European courts.

As I explained in a previous Brookings post, the EDPB’s decision is rooted in Article 6 of the European General Data Protection Regulation (GDPR), which requires companies to have a lawful basis for their data practices. GDPR’s three main criteria for lawfulness are service necessity, consent, and legitimate interests. The EDPB rejected Meta’s claim that targeted ads were necessary to provide social media services, ruling that the ads were useful for Meta but not strictly necessary to provide the service.

Meta could now claim, as an alternative legal basis, that its users consent to personalized ads. Under the EDPB’s guidelines, however, consent must be freely given, and this would be true only if users could receive social media services without being exposed to personalized ads.

Meta could claim instead that it has a legitimate business interest in serving personalized ads to its social media users, and this has some support in GDPR’s Recital 47, which says that direct marketing is a legitimate interest. However, under the absolute right to object to direct marketing in Article 21 of GDPR, Meta would then have to offer its users a personalized-ad-free social media service.

In the long run, without a court victory overturning the EDPB’s decision, Meta and other online companies relying on personalized ads will need to change their data practices.

As called for by President Biden in his recent Wall Street Journal opinion piece and again in his State of the Union address, the United States Congress is renewing its bipartisan push for national privacy legislation with a hearing on March 1, 2023 before the House Energy and Commerce Committee’s Subcommittee on Innovation, Data, and Commerce. The goal, announced by Subcommittee leaders Gus Bilirakis (R-FL) and Jan Schakowsky (D-IL), is to get “a strong national standard across the finish line.”

U.S. policymakers seeking to establish new privacy law should consider carefully what lessons can be learned from a privacy regime that potentially has such a powerful impact on an established business practice in the name of protecting privacy. In this follow-up post, I’m going to argue that U.S. legislators should consider a modified version of the European approach of requiring a lawful basis for data processing as part of a new national privacy law. The wording of the U.S. version need not be the same as that in GDPR, but the key idea that companies must establish that their data practices satisfy one of several alternative standards of lawfulness—service necessity, consent, or legitimate interests—should be incorporated into U.S. law.

A Legal Basis Privacy Regime

A legal basis privacy regime sets out normative standards to determine whether a data practice is lawful. For GDPR, data processing is lawful when it is strictly necessary to provide a service, when the user has freely consented to it, or when it is necessary for the legitimate interests of the data processor or a third party. The normative theory embedded in this approach is that data use is legitimate either because it preserves the autonomy of the user or because it serves the legitimate interests of the data processor or the public.

A legal basis regime, however, need not enumerate specific data practices as lawful. It does not have to say in statute, for instance, that it is lawful for companies to use personal data for information security purposes. Instead, it provides criteria for determining when a specific data use, such as processing for information security, is lawful. Companies and regulators can then apply that standard to a wide range of data practices.
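To make the structural contrast concrete, here is a minimal sketch in Python, offered purely as an illustration; every name and field in it is hypothetical and corresponds to no statutory text.

```python
from dataclasses import dataclass

# Hypothetical sketch only: these names do not come from GDPR or any
# statute. They model the three lawfulness criteria discussed above:
# service necessity, consent, and legitimate interests.

@dataclass
class DataUse:
    purpose: str
    strictly_necessary_for_service: bool
    freely_given_consent: bool
    serves_legitimate_interest: bool
    overridden_by_user_rights: bool  # result of a balancing assessment

def is_lawful(use: DataUse) -> bool:
    """Standards-based test: a use is lawful if it meets any one basis."""
    return (
        use.strictly_necessary_for_service
        or use.freely_given_consent
        or (use.serves_legitimate_interest
            and not use.overridden_by_user_rights)
    )

# An enumeration regime, by contrast, is a closed membership test:
PERMISSIBLE_PURPOSES = {"user authentication", "security", "fraud prevention"}

def is_lawful_enumerated(use: DataUse) -> bool:
    """List-based test: anything not on the list fails, however legitimate."""
    return use.purpose in PERMISSIBLE_PURPOSES
```

The contrast captures the open-endedness described above: a novel purpose can satisfy a standards-based test by meeting a criterion, but it can never satisfy the list-based test until the list itself is amended.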

An alternative approach simply lists approved or permissible uses without incorporating a normative standard. This approach is embodied in perhaps the oldest privacy law in the United States, the Fair Credit Reporting Act of 1970, which names specific data practices and labels them as lawful: it allows consumer reporting agencies to collect and process personal information only for one of several listed permissible purposes, including credit, insurance, and employment screening.

Last year’s American Data Privacy and Protection Act (ADPPA), which passed the House Energy and Commerce Committee with an impressive bipartisan vote of 52-3, also takes a list approach. It says companies “may not collect, process, or transfer” personal information unless “reasonably necessary and proportionate” to provide a requested service or fulfill one of a list of “permissible purposes.” These permissible purposes include authenticating users, providing security, preventing fraud or physical harm, scientific research, communicating with users or delivering user communications to others, and first-party or targeted advertising. It names permissible data uses but, apart from service necessity, provides no criteria by which companies can demonstrate that other data uses are lawful.

The limitation of the enumeration approach is that it sets out no general standard of lawfulness; at most, some enumeration statutes allow service necessity to function as such a standard. This means that if a data use is not on the list of specific named practices and is not needed to provide a service, it is not allowed. Inevitably, such a static list will be underinclusive as business practices and technology evolve. Privacy law should incorporate some open-ended standard that allows the law to respond to innovative developments.

Legitimate Interests

The GDPR standard of legitimate interests is an open-ended standard. It says that any data processing whatsoever can be rendered lawful when it is “necessary for the purposes of the legitimate interests” pursued by a company. GDPR even creates a balancing test where data processing in pursuit of a company’s legitimate interests still would not be lawful when these interests are “overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data.”

The language of legitimate interests contained in GDPR is not the only way to create a flexible standard of lawfulness, rather than a static list of permissible uses. But it might be a good place for Congressional drafters to begin as they move forward with a new national privacy law.

A legitimate interests standard, or something similar, should be incorporated into privacy proposals such as ADPPA that currently rely on an enumeration approach. Of course, it would also be prudent for privacy legislation to contain a list of data processing activities that Congress finds to be lawful. Companies should not have to prove, for instance, that data use for fraud prevention is legitimate. A list of approved uses would provide needed legal certainty that some obvious and agreed-upon data uses satisfy the requirement of being lawful. But there should still be an additional opportunity for businesses to demonstrate, subject to regulatory approval, that a particular data use not on the pre-approved list nevertheless satisfies a standard of lawfulness. The legal basis of legitimate interests would do this.

Consent

Some privacy advocates are wary of including consent as a sufficient legal basis for data processing. Several years ago, Cameron Kerry said in a Brookings report, “Maybe informed consent was practical two decades ago, but it is a fantasy today.” A recent report from the Annenberg School for Communication demonstrates, again, that choice as currently practiced in today’s online world fails utterly to protect privacy. This ineffectiveness of the current notice and choice regime has led many advocates to agree with privacy scholar Ari Waldman, who says, “Consent, opt in or opt out, should never be part of any privacy law.”

Clearly, businesses have abused the consent basis for data use and have bombarded users with uninformative and intrusive notices. Some constraints must be put on the ability of businesses to hound their users with repeated requests to consent to information processing that is not needed to provide the service.

It is important for a privacy statute to avoid overreliance on consent as the sole or most important way to make a data practice legitimate. But this should not mean abandoning consent entirely. A robust consent regime modeled after the GDPR’s is very different from the current U.S. notice and choice approach, which often relies on the weaker opt-out form of choice. Consent under GDPR means “any freely given, specific, informed and unambiguous indication of the data subject’s wishes” and requires a “clear affirmative action” indicating agreement to the data collection and use. Importantly, refusing consent must be “without detriment” to the user, meaning the same service must be made available under the same terms and conditions if the user refuses consent, or the user must be offered reasonable incentives to induce consent. If data is really needed to provide the service, then the appropriate legal basis is service necessity and no form of choice, opt-in or opt-out, is needed.
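Read as a checklist, the GDPR-style consent conditions just described operate conjunctively: a claimed consent fails if any one condition fails. The sketch below is purely illustrative; the field names are paraphrases, not statutory language.

```python
from dataclasses import dataclass

# Hypothetical model of GDPR-style consent conditions; field names are
# illustrative paraphrases of the requirements quoted above.

@dataclass
class ConsentRecord:
    freely_given: bool                 # not bundled with unrelated terms
    specific: bool                     # tied to a named purpose
    informed: bool                     # user told what data and what use
    unambiguous_affirmative_act: bool  # e.g., no pre-ticked boxes
    refusal_without_detriment: bool    # same service on same terms if refused

def consent_is_valid(c: ConsentRecord) -> bool:
    """Every condition must hold; a single failure defeats the consent."""
    return all([
        c.freely_given,
        c.specific,
        c.informed,
        c.unambiguous_affirmative_act,
        c.refusal_without_detriment,
    ])
```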

Consent can be a powerful way for consumers to block damaging data use. When Apple gave users a properly structured choice in connection with the ability of apps to track them for the purpose of advertising, users overwhelmingly responded that they did not want tracking.

ADPPA adopts this robust notion of consent and applies it at various points in the statute, for instance in requiring consent to transfer information pertaining to a child. But the absence of consent from ADPPA’s list of permissible data uses is significant and damaging. It means that a company may not justify its data processing, regardless of its purpose, on the grounds of genuine user consent. And it means that users would not be able to protect themselves from damaging data practices by refusing to consent to them.

The Privacy Regulator

Interpreting and enforcing such a legal basis privacy regime will require an alert, flexible, and well-funded privacy regulator staffed with knowledgeable technologists and industry experts. The statute should give the agency as much guidance as possible, but substantial discretion and rulemaking authority will be needed for the agency to meet the demands of the future. Without this institutional support for implementation and enforcement, a new privacy law would be merely performative, the theatrical impersonation of privacy protection rather than the real thing.

The work that European data protection authorities have done in interpreting their own statutory text relating to the key conditions of contractual necessity and consent provides some factors that could be incorporated into a new U.S. privacy statute. The statutory text would make it clear that service necessity is to be interpreted as strictly necessary for the provision of the service, not as merely useful for business purposes, and that affirmative consent applies in those circumstances where a business wants to collect and use personal data over and above what is needed to provide the service.

Unfortunately, the European interpretation of the legitimate interests standard is so narrow that it effectively removes legitimate interests as a practical way for businesses to establish a legal basis for their data use. But the United Kingdom has produced an especially helpful report on using the legitimate interests standard. If Congress wants to further constrain regulatory discretion and ensure some consistency of interpretation as agency officials change, it could incorporate directly into the statute some of the factors that have emerged in the U.K. legitimate interests guidelines.

For instance, the statute could require companies to conduct an impact study if they seek to use the legal basis of legitimate interests and to file that study with the privacy regulator within a fixed period of time. The statute could require the company to take into account the purpose and the necessity of the data processing and to conduct a balancing assessment weighing the purpose against the privacy rights of the data subjects. The statute could further require that the balancing assessment should take into account the nature of the personal data, the reasonable expectations of the data subjects, the likely impact of the processing on the data subjects, and whether any safeguards can be put in place to mitigate negative impacts.
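As a purely illustrative sketch of how those statutory factors might be operationalized in a filing (every name below is hypothetical), a balancing assessment could be represented as a structured record that the regulator reviews for completeness:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structure for a legitimate-interests balancing
# assessment of the kind described above. Factor names track the
# U.K.-style guidance discussed in the text; none are statutory.

@dataclass
class BalancingAssessment:
    purpose: str            # the interest the company asserts
    necessity: str          # why less intrusive means would not work
    nature_of_data: str     # e.g., sensitive versus routine data
    user_expectations: str  # would users reasonably expect this use?
    likely_impact: str      # foreseeable effects on data subjects
    safeguards: List[str] = field(default_factory=list)  # mitigations

def ready_to_file(a: BalancingAssessment) -> bool:
    """Minimal completeness check before filing with the regulator:
    every factor must be addressed, not left blank."""
    factors = [a.purpose, a.necessity, a.nature_of_data,
               a.user_expectations, a.likely_impact]
    return all(text.strip() for text in factors)
```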

The ADPPA requires the Federal Trade Commission (FTC) to act as the nation’s digital privacy regulator with full funding, including the establishment of a separate Bureau of Privacy to implement the new law. Former Federal Communications Commission Chairman Tom Wheeler and his colleagues at the Shorenstein Center would create a separate Digital Platform Agency with authority over both competition and privacy, as would Harold Feld at Public Knowledge in his proposed Digital Platform Act and Senator Michael Bennet in his proposed Digital Platform Commission. I make the case for a digital regulator responsible for privacy, content moderation, and competition policy in my forthcoming Brookings book on digital regulation.

Targeted Ads

The ADPPA treats targeted advertising as a permissible use of personal data, but it requires companies to offer users the ability to opt out, a policy similar to the GDPR approach of treating direct marketing as a legitimate interest while giving consumers an absolute right to object to it.

But how to provide for such an opt-out from targeted ads is by no means obvious. A key issue will be the extent to which companies can offer incentives for allowing personalized ads. The California privacy law deals with a related issue, an opt-out from data sharing, by banning financial incentives for data sharing that are “unjust, unreasonable, coercive, or usurious in nature.” ADPPA might need to be clarified to provide a similar standard in connection with its opt-out from targeted ads, but the details cannot all be spelled out in the statute itself. For businesses and consumers to understand which financial incentives are banned under such a statutory provision would require significant guidance from the enforcing privacy regulator.

Cameron Kerry and Mishaela Robison argue in a recent Brookings piece that U.S. privacy legislation should clearly provide the implementing privacy agency with rulemaking authority to address this tangled targeted-ad issue. This makes good sense.

In addition, there are similar issues of interpretation and implementation in connection with the legal bases of data processing. As the EDPB decision revealed, determining when data use is strictly necessary for providing a service requires privacy regulators to make informed and detailed decisions about business operations. Businesses and consumers need clarity on which data uses are strictly necessary for providing a service and which are merely useful. Ex ante rules might help provide this clarity. To deal with this and a host of similar issues, the new privacy statute should provide the agency with rulemaking authority to interpret and implement the legal standards of consent, contractual necessity, and legitimate interests.

Conclusion

As this week’s hearing on a national privacy standard indicates, Congress is again taking up the task of enacting a new national privacy law. It should build upon the solid foundation in the ADPPA and require companies to have a legal basis for data processing. ADPPA effectively contains the service necessity standard of lawfulness already. It is not too late to add language relating to consent and legitimate interests to the ADPPA as additional standards of lawfulness. Such measures would enshrine in law a workable framework for evaluating whether business practices violate user privacy rights and would also powerfully express a national commitment to the primacy of privacy.

Meta is a donor to the Brookings Institution. The findings, interpretations, and conclusions posted in this piece are solely those of the author and are not influenced by any donation.