Ken Binmore is the Renaissance Man of game theory, combining a strong analytical presence and an excellent record of empirical research with a deep appreciation for the social role of game theory and its relationship to evolutionary biology, anthropology, and philosophy. "Does Game Theory Work?" is mainly a compilation of his bargaining experiments, but it includes a new introduction explaining the issues behind the title of the book and offering an answer to the question.
The main issue behind the question is the body of experimental results showing that individuals often behave in ways not predicted by classical game theory. This body of data includes the investigations of judgment and decision-making by Nobel laureate Daniel Kahneman and his coworkers, which bear on Bayesian rationality, and the more recent body of experiments on strategic interaction in social dilemmas.
I agree with Binmore's answer, which is that game theory does work, but I think he is wrong and/or misleading in many of the points he makes in the book's introductory chapter. For a broader treatment of these issues, see Herbert Gintis, "Behavioral Ethics Meets Natural Justice", Politics, Philosophy and Economics 5,1 (2006):5-32.
Since I am one of the "behavioral economists" who come in for criticism in this chapter, I shall begin by stating my own views. Classical game theory holds that rational actors are self-regarding in the sense that they care only about their own material payoff in games, and that they will play in ways that implement Nash equilibria. I think the evidence overwhelmingly supports this prediction in market-like interactions in which individuals cannot affect the behavior of others through strategic interaction. Indeed, Vernon Smith received the Nobel prize in economics largely for showing that this is the case. However, when individuals come into direct, personal, strategic interaction, the classical predictions fail. This is not in the first instance because there is any problem with game theory, but rather because the self-regarding model of human preferences is incorrect. In fact, people care about fairness, honesty, and trustworthiness, and are strong reciprocators in the sense that they prefer to return kindness for kindness and unkindness for unkindness, even when this is personally costly in material terms.
Binmore's position, by contrast, is that there is nothing wrong with the neoclassical model of the individual as a largely selfish maximizer of personal material gain. The apparent value of fairness, reciprocity, and ethical virtue exhibited in experimental settings arises, on this view, for one of three reasons: the monetary stakes are very low; the game is so complex that individuals deploy behaviors from everyday life, in which these values help a selfish individual establish and maintain a reputation that is selfishly maximizing in the long run; or individuals simply have not had enough time to learn how to behave selfishly. I think each of these arguments is incorrect.
First, does the fact that people behave more selfishly when the monetary stakes increase contradict the other-regarding preferences model? Not at all. A simple application of the economist's rational actor model shows that unless one cares infinitely about the non-monetary payoffs, when the monetary rewards to a selfish behavior increase and the non-monetary rewards for unselfish behavior are held constant, behavior will shift towards the selfish behavior. For instance, suppose a fraction f(p) of subjects are willing to sacrifice an amount of money p to behave honestly. Then, as p increases, we expect f to fall; i.e., the higher the cost of being honest, the lower the fraction of the subject pool who will act honestly.
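The point can be made concrete with a small simulation. This is only an illustrative sketch, not anything from the book: I assume each subject attaches a heterogeneous non-monetary value to honesty, drawn here (arbitrarily) from an exponential distribution, and acts honestly only when that value exceeds the monetary cost p.

```python
import random

random.seed(0)

# Each hypothetical agent attaches a non-monetary value v to behaving
# honestly; the exponential distribution is an arbitrary assumption.
agents = [random.expovariate(1.0) for _ in range(100_000)]

def honest_fraction(p):
    """Fraction of agents whose value of honesty exceeds its cost p.

    An agent acts honestly iff v >= p, so f(p) is the survival function
    of v and must fall as p rises.
    """
    return sum(v >= p for v in agents) / len(agents)

fractions = [honest_fraction(p) for p in (0.0, 0.5, 1.0, 2.0)]
print(fractions)  # monotonically decreasing in p
```

Nothing here requires that anyone value honesty infinitely; a finite, distributed taste for honesty is enough to generate exactly the stakes-sensitivity Binmore points to.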
Second, it is true, as Binmore stresses, that in many experimental games subjects begin by playing unselfishly but, when the game is repeated many times, end up behaving selfishly. Binmore interprets this as "learning to play the game," so the original behavior is not altruistic, but simply mistaken. For instance, in the public goods game, subjects begin by contributing more than half their income to the public pool, but after ten rounds they contribute almost nothing. Is this because they learned how to play? Not at all. It is because some players do not contribute, and contributors feel cheated and respond by not contributing themselves. We know that this is the correct explanation because if we restart the game with experienced subjects, the same people who contributed nothing at the end of the last series of rounds begin by contributing at their original level (Andreoni, Journal of Public Economics, 1988).
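The decay-and-restart pattern follows directly from conditional cooperation. Here is a minimal sketch under assumed parameters (group size, opening generosity, and the split between conditional cooperators and free riders are all hypothetical): reciprocators match the previous round's group average, free riders contribute nothing, so contributions decay within a series yet snap back on a restart.

```python
def play_series(rounds, n_conditional=7, n_free=3, endowment=10.0):
    """Average group contribution per round of a public goods game.

    Conditional cooperators open generously, then match the group
    average they observed last round; free riders contribute nothing.
    """
    n = n_conditional + n_free
    avg = 0.6 * endowment          # assumed generous opening level
    history = []
    for _ in range(rounds):
        contribs = [avg] * n_conditional + [0.0] * n_free
        avg_all = sum(contribs) / n
        history.append(avg_all)
        avg = avg_all               # reciprocators match what they saw
    return history

first = play_series(10)
restart = play_series(10)           # same rule, fresh start
print(first[0], first[-1], restart[0])
```

In this toy model contributions collapse toward zero by round ten, and a restart reproduces the original opening level, as Andreoni found; no "learning to be selfish" is needed to generate either fact.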
Binmore's final argument, that acts of altruism and kindness demonstrated in the laboratory are due to subjects' mistaking the one-shot anonymity of the laboratory for the repeated-game, reputation-formation environment of everyday life, is equally without foundation. The most important indication of this is that we in fact experience many one-shot anonymous encounters in everyday life, and people are quite capable of telling the difference between such events and the recurrent ones we share with family, friends, and coworkers. The idea that anonymous one-shots are rare and we are unaccustomed to dealing with them is not plausible.
Binmore believes that repeated game theory's Folk Theorem is sufficient to explain human cooperation, and that other-regarding preferences are just a small wrinkle in human behavior. This is bizarre coming from Binmore, who stresses throughout that people only learn to play simple games, whereas the Nash equilibria implemented by the Folk Theorem are horribly complex and depend on highly implausible constructs, such as individuals actually playing mixed strategies, signals being public, a mechanism existing to choose among the continuum of available equilibria, and some dynamical mechanism by which behavior is coordinated and stabilized. For groups of more than five or six agents, the Folk Theorem is a poor model of behavior indeed, and it has no empirical support.
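To see what the Folk Theorem does and does not deliver, consider its simplest textbook case, which I sketch here with assumed payoffs (these numbers are mine, not Binmore's): a two-player repeated prisoner's dilemma in which a grim-trigger strategy sustains cooperation only when players are patient enough. Even this best case requires perfect monitoring and unbounded punishment; the general theorem's constructions are far more demanding.

```python
# Assumed one-shot payoffs: mutual cooperation R, defecting against a
# cooperator T, mutual defection P, with T > R > P (prisoner's dilemma).
R, T, P = 3.0, 5.0, 1.0

def cooperation_sustainable(delta):
    """Grim trigger is an equilibrium iff cooperating forever beats
    grabbing the one-shot temptation T and being punished with P
    thereafter, discounted at rate delta."""
    return R / (1 - delta) >= T + delta * P / (1 - delta)

print(cooperation_sustainable(0.9))   # patient players: True
print(cooperation_sustainable(0.1))   # impatient players: False
```

With these payoffs the threshold is delta = (T - R)/(T - P) = 0.5, and the condition involves two players, one deviation, and perfect public observation. Scaling this logic to many agents, private signals, and a continuum of equilibria is precisely where the construction loses contact with observed behavior.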
Binmore stresses that social institutions choose efficient equilibria from among the myriad Nash equilibria envisioned by the Folk Theorem, but there is no analytical model that supports this assertion. Indeed, as Aumann (1987) has shown, under many plausible conditions the natural equilibrium concept for game theory is the correlated equilibrium, which is highly amenable to instantiation through social institutions. However, it is a long distance from this plausible notion to the idea that human cooperation is based in the main on selfishness, and that the other-regarding preferences and ethical proclivities of humans are just a little icing on the cake. My own view is that human society is predicated on our predisposition to behave ethically, and a society of selfish sociopaths, however patient and however enlightened as to their own self-interest, would lead lives that are overwhelmingly nasty, brutish, and short.