A model of behavioral manipulation

Editor's note:

This is a Brookings Center on Regulation and Markets working paper.

Executive Summary

The default position among most economists and AI researchers is that the combination of large amounts of data and advances in AI will bring widespread benefits to society. For participants in online platforms, these may take the form of more informative advertising, better-targeted products, more personalized services, and perhaps better information and recommendations for decision-making across a variety of domains. One challenge to this optimistic scenario is that the information online platforms collect could be used for good or for ill: to mislead rather than help users. Is this likely? If it is, what form would such behavioral manipulation take? And how can it be countered?

We attempt to shed some light on these issues by building a conceptual framework in which platforms can use the information they collect either helpfully or manipulatively. This type of “behavioral manipulation” becomes much more likely when platforms collect large amounts of data (and can process them with new and more powerful AI tools) but consumers do not fully understand the way in which these data and tools can be used against them.

We capture these issues by developing a theoretical model in which platforms dynamically offer one of several products, and an associated price, to a user who is uncertain about the quality of the products but can slowly learn about the quality of the goods she consumes. Crucially, the informative signal the user receives also depends on extraneous factors, which may for a while generate higher signals (e.g., the appearance of the good, or various behavioral biases that temporarily lead consumers to overestimate the quality of some types of goods). Big data and AI enable platforms not only to better estimate the quality of a product but also to learn, from the experiences of other similar users, which goods will tend to be “glossy” in the sense of generating more favorable signals. It is this superior information that enables the platform both to be helpful to users and to engage in behavioral manipulation.
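To fix ideas, a minimal formalization of this signal structure, written in our own notation rather than the paper's, could look as follows:

$$ s_t = \theta_j + g_{j,t} + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, \sigma^2), $$

where $\theta_j$ is the true quality of product $j$, $g_{j,t} \geq 0$ is its current glossiness, and glossiness survives from one period to the next with probability $\rho$, so that a glossy spell lasts $1/(1-\rho)$ periods in expectation. The user must estimate $\theta_j$ from the signals $s_t$ alone; the platform, pooling data from many similar users, effectively observes both $\theta_j$ and $g_{j,t}$.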

We show that if glossiness of products is not very persistent (so that platforms cannot exploit this information for a long time) or equivalently if users find out and correct their misunderstanding quickly, there is not much room for behavioral manipulation, and platforms use any information they collect in a helpful way. As a result, the introduction of new and more powerful AI tools and more data collection will help consumers, as many in the tech industry claim.

However, when glossiness is highly persistent and users do not find out about it quickly, it becomes profitable for platforms to exploit this information, and they will do so in a way that reduces consumer welfare. Moreover, we show that under the same conditions, bigger platforms (with more product offerings) are even worse for consumer welfare, because their scale gives them more opportunities to engage in micro-targeted manipulative behavior.
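The role of persistence can be illustrated with a toy simulation. The sketch below is our own illustration, not the paper's formal model; the function name, the outside-option threshold, and all parameter values are hypothetical choices made for exposition.

```python
# Toy illustration (not the paper's model): a platform pushes a glossy but
# low-quality product; we track how long a user keeps buying before her
# running quality estimate reveals the truth, as persistence rho varies.
import random

def manipulation_revenue(rho, true_quality=0.2, gloss=0.8, noise=0.3,
                         outside_option=0.5, horizon=200, trials=2000):
    """Average platform revenue from recommending a glossy low-quality good.

    Each period the user buys, then observes signal = quality + gloss + noise;
    gloss survives to the next period with probability rho. She stops buying
    once the average of her signals falls below her outside option.
    All names and parameter values here are hypothetical.
    """
    total = 0.0
    for _ in range(trials):
        g, signals, revenue = gloss, [], 0.0
        for _ in range(horizon):
            revenue += 1.0                                    # one sale this period
            signals.append(true_quality + g + random.gauss(0.0, noise))
            if sum(signals) / len(signals) < outside_option:  # belief too low: user quits
                break
            if random.random() > rho:                         # glossiness fades for good
                g = 0.0
        total += revenue
    return total / trials

for rho in (0.0, 0.5, 0.9, 0.99):
    print(f"rho = {rho:4}: expected sales from manipulation ~ {manipulation_revenue(rho):5.1f}")
```

When rho is near zero, the gloss fades almost immediately and the user's running estimate collapses to the true quality within a few purchases, so manipulation yields little; when rho is near one, the misleading signals persist long enough for manipulation to pay, matching the threshold logic described above.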

The overarching policy implication of this analysis is that manipulative uses of platform information can be harmful to users and may need to be regulated. Nevertheless, it is not clear how such regulations can be designed without giving regulators access to the detailed information available to the platform. One possibility would be to limit the size of platforms in order to curb the most pernicious type of manipulation, which occurs when platforms have many products and many users for extensive data collection, but this may be too blunt a tool for effective regulation. Pro-competitive policies that push against market concentration may be useful, but it is unclear whether several competing platforms would necessarily engage in less behavioral manipulation; whether they do may depend on the extent to which consumers recognize the possibility of behavioral manipulation. Providing information to users about behavioral manipulation (or the possibility thereof) may be useful under some circumstances, but it is unclear whether such information can significantly modify user behavior. Yet another possibility would be to set limits on price discrimination on platforms. Further study of these questions would clearly be highly relevant for the design of certain aspects of AI policy.
