
The Possible and the Impossible in Multi-Agent Learning

H. Peyton Young
Professor in Economics - Johns Hopkins University

April 1, 2007

Abstract

Interactive learning is inherently more complex than single-agent learning, because the act of learning changes the thing to be learned. If agent A is trying to learn about agent B, A’s behavior will naturally depend on what she has learned so far, and also on what she hopes to learn next. But A’s behavior can be observed by B, and hence B’s behavior may change as a result of A’s attempts to learn it. The same holds for B’s attempts to learn about A.
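To make the feedback loop concrete, here is a minimal simulation (my own sketch, not an example from the paper) of two naive fictitious-play learners in matching pennies, the canonical game in which one player wants the actions to match and the other wants them to differ. Each player models the opponent as a fixed frequency distribution and best-responds to that model; because both do so at once, each model chases a moving target. The choice of game, the uniform-prior smoothing, and the tie-breaking are all my assumptions.

```python
import numpy as np

# Illustrative only: two naive fictitious-play learners in matching pennies.
# A wins when the actions match; B wins when they differ. Each player
# forecasts the opponent's next action from the empirical frequency of the
# opponent's past actions and then best-responds to that forecast.

T = 10_000
counts_A = np.ones(2)   # A's counts of B's past actions (uniform smoothing)
counts_B = np.ones(2)   # B's counts of A's past actions

hits_A = hits_B = 0     # how often each player's point forecast is correct
for t in range(T):
    pred_b = int(np.argmax(counts_A))   # A's forecast of B's next action
    pred_a = int(np.argmax(counts_B))   # B's forecast of A's next action

    a = pred_b          # A best-responds by matching the forecast action
    b = 1 - pred_a      # B best-responds by mismatching the forecast action

    hits_A += (pred_b == b)
    hits_B += (pred_a == a)

    counts_A[b] += 1    # each model updates on what it just observed
    counts_B[a] += 1

print(f"A's forecast accuracy: {hits_A / T:.3f}")   # both hover near 0.5,
print(f"B's forecast accuracy: {hits_B / T:.3f}")   # no better than guessing
```

Play settles into ever-longer alternating runs, and each player’s point forecast of the opponent’s next move is correct only about half the time: the very act of exploiting the model changes the behavior being modeled.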

This feedback loop is a central and inescapable feature of multi-agent learning situations. It suggests that methods that work for single-agent learning problems may fail in multi-agent settings. It even suggests that learning could fail in general: there may exist situations in which no rules allow players to learn one another’s behavior in a completely satisfactory sense. This turns out to be the case. In the next section I formulate an uncertainty principle for strategic interactions, which states that if there is enough ex ante uncertainty about the other players’ payoffs (and therefore about their potential behaviors), then rational players cannot learn to predict one another’s behavior, even over an infinite number of repetitions of the game (Foster and Young, 2001; for earlier results in the same spirit, see Binmore, 1987, and Jordan, 1991, 1993).
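For orientation, the flavor of the result can be stated as a joint impossibility. The gloss below is a loose paraphrase in my own notation, not Foster and Young’s formal theorem.

```latex
% A loose paraphrase in my own notation; not Foster and Young's formal
% statement. Let \sigma^i_t denote player i's actual (possibly mixed)
% behavior at time t, and \hat\sigma^{i \to j}_t player i's forecast of
% player j's behavior. If the stage-game payoffs are drawn from a
% sufficiently dispersed prior, then no learning rules make both of the
% following hold almost surely:
\[
\text{(i) rationality:}\quad
  \sigma^i_t \in \mathrm{BR}^i\big(\hat\sigma^{\,i\to j}_t\big)
  \quad\text{for each player and every } t,
\]
\[
\text{(ii) accuracy:}\quad
  \big\| \hat\sigma^{\,i\to j}_t - \sigma^j_t \big\| \;\longrightarrow\; 0
  \quad\text{as } t \to \infty \text{ for each player.}
\]
```

In words: players cannot simultaneously optimize against their forecasts and have those forecasts become asymptotically correct; enough payoff uncertainty forces a failure of one or the other.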