The Evolution of Cooperation

Introduction.

"The Evolution of Cooperation" generally refers to either the study of how cooperation can emerge and persist (also known as cooperation theory) as elucidated by application of game theory, a book by Robert Axelrod [Axelrod 1984] that popularized that study, or a paper by Axelrod and William Hamilton [Axelrod & Hamilton 1981] in the scientific literature. The book was summarized in Douglas Hofstadter's May 1983 "Metamagical Themas" column in Scientific American [Hofstadter 1983] (reprinted in his book [Hofstadter 1985]); see also Richard Dawkin's summary in the updated version of The Selfish Gene [Dawkins 1989, ch. 12]. This article is an introduction to the evolution of cooperation. 1

The idea that human behavior can be usefully analyzed mathematically gained great credibility following the application of operations research in World War II to improve military operations [Morse & Kimball 1951]. (See sidebar.)

This success - along with the publication in 1944 of von Neumann and Morgenstern's Theory of Games and Economic Behavior [Von Neumann & Morgenstern 1944] - spurred great interest after the war in using game theory to develop and analyze optimal strategies for military and other uses. (See The Compleat Strategyst [Williams 1954, 1966] for a popular exposition of game theory. See also Wikipedia re Operations research, Game theory, and Prisoner's dilemma.)

But game theory faced a small crisis: it could not find a satisfactory strategy for a simple game called the "Prisoner's Dilemma" (PD), in which two players have the option to cooperate for mutual gain, but each also takes the risk of being suckered. (See sidebar.)

The Prisoner's Dilemma.

The Prisoner's Dilemma game (invented around 1950 by Merrill Flood and Melvin Dresher [Axelrod 1984, p. 216 n. 2]) runs as follows: you and a criminal associate have been busted. Fortunately, most of the evidence was shredded, so you're facing only a year in prison. But the prosecutor wants to nail someone, so he's offering you a deal: if you squeal on your associate - which will result in his getting a five-year stretch - the prosecutor will see that six months is taken off your sentence. Which sounds good, until you learn your associate is being offered the same deal - which would get you five years.

So what do you do? The best that you and your associate can do together is to not squeal - that is, to cooperate (with each other, not the prosecutor!) in a mutual bond of silence, and do your year. But wait - if your associate cooperates (that sucker!), can you do better by squealing ("defecting") to get that six-month reduction? It's tempting, but then he's also tempted. And if you both squeal, oh no, it's four and a half years each. So perhaps you should cooperate - but wait, that's being a sucker yourself, as your associate will undoubtedly defect, and you won't even get the six months off. So what is the best strategy to minimize your incarceration? (Aside from going straight in the first place.)

Technically, the Prisoner's Dilemma is any two-person "game" where the payoffs are ranked in a certain way. If the payoff ("reward") for mutual cooperation is R, the payoff for mutual defection is P, the sucker gets only S, and the temptation payoff (provided the other player is suckered into cooperating) is T, then the payoffs must be ordered T > R > P > S and satisfy R > (T+S)/2. [pp. 8-10, 206-207] 2
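
To make these conditions concrete, here is a minimal sketch in Python (the function name and the translation of the story above into payoff numbers are ours, for illustration; the values 5, 3, 1, 0 are the ones Axelrod used in his tournaments):

```python
def is_prisoners_dilemma(T, R, P, S):
    """Check the two conditions that define a Prisoner's Dilemma.
    The second condition ensures that taking turns exploiting each
    other pays less than steady mutual cooperation."""
    return (T > R > P > S) and (R > (T + S) / 2)

# The story above, with payoffs measured in years of prison avoided
# (relative to the five-year maximum): T = 4.5, R = 4, P = 0.5, S = 0.
assert is_prisoners_dilemma(4.5, 4, 0.5, 0)

# The canonical values used in Axelrod's tournaments.
assert is_prisoners_dilemma(5, 3, 1, 0)
```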

The popularity of this issue is in part because it mirrors a larger issue where the realms of political philosophy, ethics, and biology cross: the ancient issue of individual interests versus group interests, and even the basis of morality [Wade 2008]. On one hand, the so-called "Social Darwinians" (roughly, those who would use the "survival of the fittest" of Darwinian evolution to justify the cutthroat competitiveness of laissez-faire capitalism [Bowler 1984, pp. 94-99, 269-70]; see sidebar) proclaim that the world is an inherently competitive "dog eat dog" jungle, where every individual has to look out for him- or herself, and right-wing philosopher Ayn Rand damned "altruism" and declared selfishness a virtue [Rand 1961]. On the other hand, other philosophers (such as Hobbes and Rousseau, see sidebar) have long observed that cooperation in the form of a "social contract" is necessary for human society, but saw no way of attaining it short of a coercive authority. (Also see Wikipedia re Altruism, Social contract, and Social Darwinism.)

Social Darwinism vs. Mutual Aid

Darwin's theory of evolution by natural selection is explicitly competitive ("survival of the fittest"), Malthusian ("struggle for existence"), even gladiatorial ("red in tooth and claw"), and permeated by the Victorian laissez faire ethos of Darwin and his disciples (such as T. H. Huxley and Herbert Spencer). What they read into the theory was then read out by "Social Darwinians" as scientific justification for their social and economic views (such as poverty being a natural condition and social reform an unnatural meddling) [Bowler 1984, pp. 94-99].

As early as 1902 the Russian naturalist (and later anarchist) Petr Kropotkin had complained: "They raised the 'pitiless' struggle for personal advantage to the height of a biological principle, which man must submit to as well, under the menace of otherwise succumbing in a world based on mutual extermination." [Kropotkin 1902, p. 4]

Kropotkin argued that "mutual aid" was as much a factor in evolution as the "struggle for existence", a position now reached by the theory of cooperation. (See Stephen Jay Gould's essay, "Kropotkin Was No Crackpot", [Gould 1997].) Why Kropotkin's view languished, why the interpretation of Darwinian theory developed the way it did, and the various political ramifications are interesting topics worth further study, but beyond the scope of this article.

The Social Contract

Thomas Hobbes, from Leviathan:

Where there is no common power, there is no law; where no law, no justice. [p. 108]
[T]here must be some coercive power to compel men equally to the performance of their covenants by the terror of some punishment greater than the benefit they expect by the breach of their covenant.... [p. 120]
[C]ovenants without the sword are but words.... [p. 139]

Jean-Jacques Rousseau, from The Social Contract:

[The social contract] can arise only where several persons come together: but, as the force and liberty of each man are the chief instruments of his self-preservation, how can he pledge them without harming his own interests, and neglecting the care he owes himself? [p. 13] In order then that the social compact may not be an empty formula, it tacitly includes the undertaking, which alone can give force to the rest, that whoever refuses to obey the general will shall be compelled to do so by the whole body. This means nothing less than that he will be forced to be free.... [p. 18]

Herman Melville, from Moby-Dick:

[After risking his life to save someone who had been jeering him, the cannibal harpooner Queequeg says:] "It's a mutual, joint-stock world, in all meridians. We cannibals must help these Christians." [p. 96]

Although the Social Darwinians claimed that altruism and even cooperation were unrealistic, by the 1960s biologists and zoologists were noting many instances in the real "jungle" where real animals - presumably unfettered by conscience and not corrupted by altruistic liberals - were cooperating [p. 90; Trivers 1971]. So is cooperation necessary? Is it possible? Is it evil? Is it as intractable as the Prisoner's Dilemma suggests?

Altruism and cooperation.

Altruism can be defined as "the willingness to sacrifice oneself for others" [Bowler 1984, p. 215], or at least to take a loss or risk. Although altruism - such as saving a life, or going to war - has been glorified since the dawn of human history (and probably before), what does the altruist get out of it? In evolutionary terms, the survival of a self-sacrificing "altruist gene" - not to be confused with some kind of gene to con others into self-sacrifice - would seem a contradictory concept, and one quite contrary to the "individualist" philosophies of the nineteenth century.

One approach - that of sociobiology - considers natural selection as acting on groups (individuals are expendable); it has not been fully persuasive [Trivers 1971, p. 48; Bowler 1984, p. 312; Dawkins 1989, pp. 7-10, 287, ch. 7 generally].

Another approach explains altruism as more apparent than real, due to indirect or hidden benefits. (E.g., benefitting near kin, who share many genes with the altruist, might serve to advance an "altruist gene".) This upsets the definition; it "take[s] the altruism out of altruism" [Trivers 1971]. So what are we examining: altruism? Or the appearance of altruism?

In a like manner, cooperation and other prosocial behavior, once deemed altruistic, can now be seen as solidly based on a well-considered self-regard. The Randian premise that puts self-interest paramount is largely unchallenged. What has changed is the recognition of a broader, more profound view of what constitutes self-interest.

A partial explanation for the persistence of cooperation and altruism in the wild was provided by the genetic kinship theory of William D. Hamilton [Hamilton 1963, 1964; Dawkins 1989]. The key idea is that the unit of survival is not the individual - no one lives forever - but the genome. Altruism - taking a loss for the benefit of another - can be evolutionarily advantageous because a loss to the altruist, even the loss of a few progeny, can advance copies of the same genes carried by relatives. However, this works only where the individuals involved are closely related; it fails to explain how unrelated individuals might benefit from cooperation.
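
Hamilton's result is often summarized as the rule rb > c: an altruistic trait can spread when the benefit b to the recipient, weighted by the genetic relatedness r between the two individuals, exceeds the cost c to the altruist. A minimal sketch (the function name and example numbers are ours):

```python
def kin_altruism_favored(r, b, c):
    """Hamilton's rule: altruism toward kin is favored by selection
    when relatedness * benefit to the recipient exceeds the cost."""
    return r * b > c

# Helping a full sibling (r = 0.5) pays only if the benefit is more
# than twice the cost; for a first cousin (r = 0.125), more than 8x.
print(kin_altruism_favored(r=0.5, b=3.0, c=1.0))    # True
print(kin_altruism_favored(r=0.125, b=3.0, c=1.0))  # False
```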

Alternatively, the reciprocity theory of Robert Trivers [Trivers 1971] suggested a basis for cooperation between unrelated individuals, but simple one-shot reciprocity seemed as intractable as the PD itself. However, it turns out that the iterated Prisoner's Dilemma (IPD) is amenable to analysis, and it shows that where interaction is repeated, cooperation arises naturally because the individuals involved can obtain a greater benefit. This was dramatically demonstrated by a pair of tournaments held by Robert Axelrod around 1980.

Axelrod's Tournaments.

Axelrod initially solicited strategies from other game theorists to compete in the first tournament. Each strategy was paired with each other strategy for 200 iterations of a Prisoner's Dilemma game, and scored on the total points accumulated through the tournament. The winner was a very simple strategy submitted by Anatol Rapoport called TIT FOR TAT (TFT): cooperate on the first move, and subsequently echo (reciprocate) what the other player did on the previous move. The results of the first tournament were analyzed and published, and a second tournament was held to see if anyone could find a better strategy. TIT FOR TAT won again. Axelrod analyzed the results and made some interesting discoveries about the nature of cooperation, which he describes in his book [Axelrod 1984].
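
A minimal sketch of the setup, under simple assumptions (strategies as functions of the two players' histories, the canonical payoffs from above; the helper names are ours):

```python
T, R, P, S = 5, 3, 1, 0
PAYOFF = {('C', 'C'): (R, R), ('C', 'D'): (S, T),
          ('D', 'C'): (T, S), ('D', 'D'): (P, P)}

def tit_for_tat(own_history, other_history):
    # Cooperate on the first move, then echo the other player's last move.
    return other_history[-1] if other_history else 'C'

def all_d(own_history, other_history):
    return 'D'  # always defect

def play_match(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

print(play_match(tit_for_tat, tit_for_tat))  # (600, 600): steady cooperation
print(play_match(tit_for_tat, all_d))        # (199, 204): suckered only once
```

Note that TIT FOR TAT never outscores the player it is paired with - a point Axelrod returns to below.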

In both the actual tournaments and various simulations the best-performing strategies were "nice" [p. 113]. That is, they always started by cooperating, and they were never the first to defect. Many of the competitors went to great lengths to gain an advantage over the "nice" (and usually simpler) strategies, but to no avail: tricky strategies fighting for a few points generally could not do as well as nice strategies working together. There is a profound lesson here: TFT (and other "nice" strategies generally) "won, not by doing better than the other player, but by eliciting cooperation [and] by promoting the mutual interest rather than by exploiting the other's weakness." [p. 130]

The lesson rapidly broadens: most of the games that game theory had heretofore investigated are "zero-sum" - that is, the total rewards are fixed, and a player does well only at the expense of other players - and the adage "survival of the fittest" implies that one player is a winner and the other a loser. But real life is not zero-sum. Although our culture glorifies heroic individualism, our best prospects are usually in cooperative efforts. In fact, TFT cannot score higher than its partner; but by consistently scoring a strong second place it won tournaments [p. 112]. Axelrod summarizes this as: don't be envious [pp. 110-113]. Or: don't strive for a payoff greater than the other player's [Axelrod 2000, p. 25].

While rightists could well take a lesson here about being "nice", leftists should take the lesson that being nice alone is not sufficient to gain a greater benefit. To avoid being suckered (exploited) it is just as necessary to be provocable, to both retaliation and forgiveness. That is, when the other player defects, a nice strategy must immediately be provoked into retaliatory defection [pp. 62, 211]. 3 The same goes for forgiveness: return to cooperation as soon as the other player does. Overdoing the punishment risks escalation, and can lead to an "unending echo of alternating defections" that depresses the scores of both players [p. 186].
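
The echo effect is easy to demonstrate with the sketch above: pair TIT FOR TAT against a hypothetical "suspicious" variant (our construction) that opens with a defection, and the two lock into alternating retaliations:

```python
def suspicious_tft(own_history, other_history):
    # Like TIT FOR TAT, but opens with a defection.
    return other_history[-1] if other_history else 'D'

# One unreciprocated defection echoes forever: each player now averages
# (T + S) / 2 = 2.5 points per round instead of R = 3.
print(play_match(tit_for_tat, suspicious_tft))  # (500, 500)
```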

Which leads to the fourth property of successful strategies: clarity. Or: don't be too clever [p. 120+]. In any IPD game there is a certain maximum score each player can get by always cooperating. But some strategies try to find ways of getting a little more with an occasional defection (exploitation). This can work against some strategies that are less provocable or more forgiving than TIT FOR TAT, but generally they do poorly. "A common problem with these rules is that they used complex methods of making inferences about the other player [strategy] - and these inferences were wrong." [p. 120] Against TFT (and "nice" strategies generally) one can do no better than to simply cooperate [pp. 47, 118].

Axelrod also ran an "ecological" tournament, where the prevalence of each type of strategy in each round was determined by that strategy's success in the previous round, so the competition in each round became stronger as weaker performers were reduced and eliminated. The results were amazing: a handful of strategies - all "nice" - came to dominate the field [pp. 48-53].
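
A sketch of the ecological step under simple assumptions: each generation, a strategy's share of the population grows in proportion to its average score against the current mix. The pairwise scores here are the 200-round match totals from play_match above; the function names are ours.

```python
def ecological_step(shares, scores):
    """shares: {name: population fraction}; scores[a][b]: a's match score vs b.
    Returns the next generation's shares, weighted by average fitness."""
    fitness = {a: sum(shares[b] * scores[a][b] for b in shares) for a in shares}
    total = sum(shares[a] * fitness[a] for a in shares)
    return {a: shares[a] * fitness[a] / total for a in shares}

scores = {'TFT':   {'TFT': 600, 'ALL D': 199},
          'ALL D': {'TFT': 204, 'ALL D': 200}}
shares = {'TFT': 0.5, 'ALL D': 0.5}
for _ in range(10):
    shares = ecological_step(shares, scores)
print(shares)  # TFT's share climbs toward 1
```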

Foundation of reciprocal cooperation.

The lessons described above apply in environments that support cooperation, but whether cooperation is supported at all depends on one crucial factor: the probability w that the players will meet again [p. 13]. (Also called the discount parameter, or shadow of the future.) When w is low - that is, the players have a negligible chance of meeting again - each interaction is effectively a single-shot Prisoner's Dilemma game, and one might as well defect in all cases (a strategy called "ALL D"), because even if one cooperates there is no way to keep the other player from exploiting that. But in the iterated PD the value of repeated cooperative interactions can become greater than the benefit/risk of a single exploitation (which is all that a strategy like TFT will tolerate).
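
Axelrod makes this precise: TIT FOR TAT is collectively stable - a population playing it cannot be invaded by any other strategy - precisely when w is at least max((T-R)/(R-S), (T-R)/(T-P)). A sketch of that check (the function name is ours):

```python
def tft_collectively_stable(w, T=5, R=3, P=1, S=0):
    """TIT FOR TAT cannot be invaded when the probability w of meeting
    again is at least max((T-R)/(R-S), (T-R)/(T-P))."""
    return w >= max((T - R) / (R - S), (T - R) / (T - P))

print(tft_collectively_stable(0.9))  # True: the future casts a long shadow
print(tft_collectively_stable(0.5))  # False: defection pays off
```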

Curiously, rationality and deliberate choice are not necessary, nor trust nor even consciousness [pp. 18, 174], as long as there is a pattern that benefits both players (e.g., increases fitness), and some probability of future interaction. Often the initial mutual cooperation is not even intentional, but having "discovered" a beneficial pattern both parties respond to it by continuing the conditions that maintain it.

This implies two requirements for the players, aside from whatever strategy they may adopt. First, they must be able to recognize other players, to avoid exploitation by cheaters. Second, they must be able to track their previous history with any given player, in order to be responsive to that player's strategy. [p. 174]

Even when the discount parameter w is high enough to permit reciprocal cooperation, there is still the question of whether and how cooperation might start. One of Axelrod's findings is that when the existing population never offers cooperation nor reciprocates it - the case of ALL D - then no nice strategy can get established by isolated individuals; cooperation is strictly a sucker bet. (The "futility of isolated revolt" [p. 150].) But another finding of great significance is that clusters of nice strategies can get established. Even a small cluster of individuals using nice strategies, interacting with each other only infrequently, can do well enough on those interactions to make up for the low level of exploitation by non-nice strategies [pp. 63-68, 99].
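
A sketch of the cluster arithmetic, under the usual simplifying assumptions (infinitely iterated game discounted by w, canonical payoffs; the function name is ours): a TFT newcomer who meets fellow cluster members a fraction p of the time can invade ALL D once its expected score beats the natives'. With w = 0.9, a p of barely 5% suffices.

```python
def cluster_invades(p, w=0.9, T=5, R=3, P=1, S=0):
    """Expected discounted scores in the infinitely iterated game:
    TFT with TFT cooperates throughout; TFT against ALL D is suckered
    once and then defects; ALL D with ALL D defects throughout."""
    v_tft_vs_tft = R / (1 - w)
    v_tft_vs_alld = S + w * P / (1 - w)
    v_alld_vs_alld = P / (1 - w)
    return p * v_tft_vs_tft + (1 - p) * v_tft_vs_alld > v_alld_vs_alld

print(cluster_invades(0.05))  # True: 5% in-cluster interaction is enough
print(cluster_invades(0.03))  # False: too isolated; revolt is futile
```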

Axelrod is not without critics. Binmore (1998b) says Axelrod's simulation data is "woefully inadequate" and dependent on particular conditions; he also decries the inadequate consideration of game theory and the popularization of TFT as a general model for human behavior.

Subsequent work.

In 1984 Axelrod estimated that there were "hundreds of articles on the Prisoner's Dilemma cited in Psychological Abstracts" [p. 28]. Since then he has estimated that citations to The Evolution of Cooperation alone "are now growing at the rate of over 300 per year" [Axelrod 2000, p. 3]; to fully review this literature is infeasible. What follows are therefore only a few selected highlights.

Axelrod has a subsequent book, The Complexity of Cooperation [Axelrod 1997], which he considers a sequel to The Evolution of Cooperation. Other work on the evolution of cooperation has expanded to cover prosocial behavior generally [Boyd 2006, Bowles 2006] and in religion [Norenzayan & Shariff 2008], the promotion of conformity [Bowles et al. 2003], other mechanisms for generating cooperation [Nowak 2006], the IPD under different conditions and assumptions [Axelrod & Dion 1988], and the use of other games, such as the Public Goods and Ultimatum games, to explore deep-seated notions of fairness and fair play [Nowak et al. 2000; Sigmund et al. 2002]. It has also been used to challenge the rational and self-regarding "Economic Man" model of economics [Camerer & Fehr 2006], and as a basis for replacing Darwinian sexual selection theory with a theory of social selection [Roughgarden et al. 2006].

Nice strategies are better able to invade if they have social structures or other means of increasing their interactions. Axelrod discusses this in chapter 8; in a later paper he, Rick Riolo, and Michael Cohen [Riolo et al. 2001] use computer simulations to show cooperation arising among agents who have a negligible chance of future encounters but can recognize similarity of an arbitrary characteristic (a "tag").
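
A minimal sketch of the core rule in that model, under our own simplifications (reproduction, mutation, and payoff details omitted; the names are ours): each agent carries an arbitrary tag and a tolerance, and donates to any stranger whose tag is close enough to its own.

```python
import random

def make_agent():
    # An arbitrary identifying "tag" and a tolerance for differing tags.
    return {'tag': random.random(), 'tolerance': random.uniform(0, 0.5)}

def donates(donor, recipient):
    # Cooperation without repeated encounters: help anyone who
    # looks sufficiently similar, even a total stranger.
    return abs(donor['tag'] - recipient['tag']) <= donor['tolerance']

population = [make_agent() for _ in range(100)]
a, b = random.sample(population, 2)
print(donates(a, b))
```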

When an IPD tournament introduces noise (errors or misunderstandings), TFT strategies can get trapped into a long string of retaliatory defections, thereby depressing their score. TFT also tolerates "ALL C" (always cooperate) strategies, which then give an opening to exploiters. 4 In 1992 Martin Nowak and Karl Sigmund demonstrated a strategy called Pavlov (or "win-stay, lose-shift") that does better in these circumstances [Nowak & Sigmund 1992; see also Milinski 1993]. Pavlov looks at its own prior move as well as the other player's move. If the payoff was R or P (see "Prisoner's Dilemma", above) it cooperates; if S or T it defects.
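
A sketch of Pavlov in the same style as the tournament code above. The payoff rule reduces to: cooperate whenever both players made the same move last round (payoffs R or P), otherwise defect (payoffs S or T):

```python
def pavlov(own_history, other_history):
    """Win-stay, lose-shift: repeat a move that earned T or R,
    switch after earning P or S."""
    if not own_history:
        return 'C'
    # The payoff was R or P exactly when the two last moves matched.
    return 'C' if own_history[-1] == other_history[-1] else 'D'
```

Two consequences follow: between two Pavlov players, a noisy defection (payoffs T and S) leads to just one round of mutual defection (P), after which both shift back to cooperation, so the echo dies out; and against ALL C, an accidental defection earns T - a "win" - so Pavlov keeps defecting, exploiting the unconditional cooperator instead of sheltering it.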

In a 2006 paper Nowak listed five mechanisms by which natural selection can lead to cooperation [Nowak 2006]. In addition to kin selection and direct reciprocity, he shows that cooperation can also arise through indirect reciprocity (where reputation lets players benefit from the experience of third parties), through network reciprocity (where spatial structure lets cooperators form clusters), and through group selection (where groups containing more cooperators out-compete other groups). 5
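
Nowak condenses each mechanism to a simple inequality; a sketch of those "five rules" (b is the benefit of the cooperative act, c its cost; the other symbols are noted inline):

```python
# Nowak's "five rules" [Nowak 2006]: each mechanism supports cooperation
# when its inequality holds (b = benefit of cooperating, c = its cost).
def kin_selection(r, b, c):        return r > c / b          # r: genetic relatedness
def direct_reciprocity(w, b, c):   return w > c / b          # w: chance of another round
def indirect_reciprocity(q, b, c): return q > c / b          # q: chance reputation is known
def network_reciprocity(k, b, c):  return b / c > k          # k: neighbors per player
def group_selection(n, m, b, c):   return b / c > 1 + n / m  # n: group size, m: groups
```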

The payoffs in the Prisoner's Dilemma game are fixed, but in real life defectors are often punished by cooperators. Where punishment is costly there is a second-order dilemma among cooperators, between those who pay the cost of enforcement and those who do not [Hauert et al. 2007]. Other work has shown that individuals given a choice between joining a group that punishes free-riders and one that does not initially prefer the sanction-free group, but after several rounds they migrate to the sanctioning group, having seen that sanctions secure a better payoff [Gürerk et al. 2006].
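
A minimal sketch of the underlying public-goods-with-punishment setup; the parameter values and names are ours (experiments such as [Gürerk et al. 2006] use similar structures):

```python
def public_goods_round(contributions, multiplier=1.6):
    # Contributions go into a common pool, which is multiplied and
    # shared equally - so free-riding pays, individually.
    share = sum(contributions) * multiplier / len(contributions)
    return [share - c for c in contributions]

def punish(payoffs, punisher, target, cost=1.0, fine=3.0):
    # Enforcement is itself costly: hence the second-order dilemma
    # between cooperators who punish and those who free-ride on punishment.
    payoffs[punisher] -= cost
    payoffs[target] -= fine
    return payoffs

payoffs = public_goods_round([10, 10, 10, 0])  # one free-rider
print(payoffs)                                # [2.0, 2.0, 2.0, 12.0]
print(punish(payoffs, punisher=0, target=3))  # [1.0, 2.0, 2.0, 9.0]
```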

And there is the very intriguing paper "The Coevolution of Parochial Altruism and War" by Jung-Kyoo Choi and Samuel Bowles [Choi & Bowles 2007]. From their summary:

Altruism—benefiting fellow group members at a cost to oneself—and parochialism—hostility towards individuals not of one's own ethnic, racial, or other group—are common human behaviors. The intersection of the two—which we term "parochial altruism"—is puzzling from an evolutionary perspective because altruistic or parochial behavior reduces one's payoffs by comparison to what one would gain from eschewing these behaviors. But parochial altruism could have evolved if parochialism promoted intergroup hostilities and the combination of altruism and parochialism contributed to success in these conflicts.... [Neither] would have been viable singly, but by promoting group conflict they could have evolved jointly.

They do not claim that humans have actually evolved in this way, but that computer simulations show how war could be promoted by the interaction of these behaviors.

Conclusion.

When Richard Dawkins set out to "examine the biology of selfishness and altruism" in The Selfish Gene [Dawkins 1976, 1989, p. 1], he reinterpreted the basis of evolution, and therefore of altruism. He was "not advocating a morality based on evolution" [Dawkins 1989, p. 2], and even felt that "we must teach our children altruism, for we cannot expect it to be part of their biological nature." [p. 139] But Trivers had shown that altruism (cooperation) could be based on reciprocity of behavior [Trivers 1971], John Maynard Smith was showing that behavior could be subject to evolution [Maynard Smith 1976, 1978, 1982], and Axelrod's dramatic results showed that in a very simple game the conditions for survival (be "nice", promote the mutual interest) seem to be the essence of morality. While this does not yet amount to a science of morality, the game theoretic approach has established that cooperation can be individually profitable and evolutionarily viable, and has clarified the requisite conditions. Extensions of this work to morality [Gauthier 1986] and the social contract [Kavka 1986; Binmore 1994, 1998, 2004] may yet resolve the old issue of individual interests versus group interests.

Recommended reading.


Notes.

  1. Most references are to the scientific literature. A few more accessible references in the popular literature are included.
  2. Partial references are generally to [Axelrod 1984].
  3. Bertolt Brecht's "Good Woman of Setzuan" could not survive without the occasional visit of the cousin. An intriguing question is why she cannot apply the necessary correctives herself.
  4. Axelrod [pp. 136-8] has some interesting comments on the need to suppress universal cooperators. See also a similar theme in Piers Anthony's novel Macroscope.
  5. Here group selection is not a form of evolution, which is problematical [see Dawkins 1989, ch. 7], but a mechanism for evolving cooperation.

References.