Game theory is a mathematical framework for studying cooperation and conflict in a general setting. Game theory is a theory of strategic interactions. In other words, it is a theory of how to win. While game theory can help design computer algorithms that can beat any human at Texas Hold ‘Em poker, it also has serious utility in business and politics. Under what conditions can selfish actors ensure or enforce mutual cooperation that requires sacrifice from everyone? This is a fundamental question studied by game theorists, and it also happens to be the main hurdle humanity must overcome to deal with climate change.
The most famous game studied by game theorists is called the Prisoner’s Dilemma (PD). PD has two players, each of whom can either cooperate or defect. The idea is that two robbers are being interrogated separately by the police. The robbers cooperate (with each other) if they both keep silent about their crime: each then spends a short sentence in prison. If both talk, then each spends a long sentence in prison. However, if only one robber implicates the other, the defector spends no time in prison, while the other gets the maximum sentence.
In PD, the best overall outcome occurs when both players cooperate. However, if one player defects while the other cooperates, the defector gets a higher payoff than they would by cooperating, and the cooperator is a sucker left with the worst payoff. In a single round of the game, the best strategy is always to defect: no matter what strategy the other player chooses, defecting yields a higher payoff than cooperating. Defection is thus a Nash equilibrium and an evolutionarily stable strategy for one-shot PD. Cooperation is unstable because both parties have a strong incentive to cheat the other. Such games make for fascinating television.
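The dominance argument above can be checked mechanically. Here is a minimal sketch of the one-shot game; the sentence lengths are illustrative assumptions (written as negative payoffs), and only their ordering matters, with the temptation payoff best and the sucker’s payoff worst:

```python
# One-shot Prisoner's Dilemma. Payoffs are negated prison years
# (illustrative numbers; only the ordering T > R > P > S matters).
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): -1,   # both silent: short sentence (R, reward)
    ("C", "D"): -10,  # I stay silent, they talk: maximum sentence (S, sucker)
    ("D", "C"): 0,    # I talk, they stay silent: I walk free (T, temptation)
    ("D", "D"): -7,   # both talk: long sentence (P, punishment)
}

def best_response(their_move):
    """Return my payoff-maximizing move given the opponent's move."""
    return max("CD", key=lambda my_move: PAYOFF[(my_move, their_move)])

# Defecting is the best response to either move, so it strictly dominates:
print(best_response("C"))  # -> D
print(best_response("D"))  # -> D
```

Because defection is the best response regardless of what the opponent plays, mutual defection is the unique Nash equilibrium, even though both players would prefer mutual cooperation.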
What conditions favor cooperation in PD? Robert Axelrod, a political scientist, and William D. Hamilton, an evolutionary biologist, answered this question in a classic theoretical paper called “The Evolution of Cooperation” (Fig. 1 from this paper is above). If the game is played repeatedly for an indeterminate number of rounds, and players remember what move their opponent last played, then cooperation can be sustained. The idea is to cooperate with the other player in the first encounter, and then to mimic their moves: if the other player cooperated in the last round, cooperate this time; if they defected last time, punish them by defecting this time. This strategy is called tit-for-tat, and in the iterated game it is evolutionarily stable if and only if the probability of continuing the game is sufficiently large. The same conditions hold in business or politics: if two parties know a relationship will end at a fixed date, the prospects for stable cooperation in the present are dim. Always Defect is always an evolutionarily stable strategy; cooperators can only invade if they preferentially cooperate with other cooperators rather than being suckered time and time again by defectors.
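The role of the continuation probability can be seen in a small simulation. This is a sketch under assumptions, not the paper’s own code: the payoff values T=5, R=3, P=1, S=0 are conventional choices (only the ordering T > R > P > S is essential), and each match continues to another round with probability w:

```python
import random

# Payoffs: temptation, reward, punishment, sucker (assumed values).
T, R, P, S = 5, 3, 1, 0
PAY = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def match(strat_a, strat_b, w, rng):
    """Player A's total payoff over a match that continues w.p. w each round."""
    hist_a, hist_b, total = [], [], 0
    while True:
        a, b = strat_a(hist_b), strat_b(hist_a)
        total += PAY[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
        if rng.random() > w:  # the game ends here
            return total

def avg(strat_a, strat_b, w, trials=20000, seed=0):
    rng = random.Random(seed)
    return sum(match(strat_a, strat_b, w, rng) for _ in range(trials)) / trials

for w in (0.2, 0.9):
    print(f"w={w}: TFT vs TFT = {avg(tit_for_tat, tit_for_tat, w):.2f}, "
          f"AllD vs TFT = {avg(always_defect, tit_for_tat, w):.2f}")
```

With a short expected horizon (w = 0.2), defecting against tit-for-tat pays; with a long one (w = 0.9), tit-for-tat players do far better against each other than a defector does against them, so defectors cannot profitably invade.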
In the parlance of game theory, altruism is a sacrifice of one’s own welfare for others’, while spite involves hurting oneself in order to hurt others even more. Many beliefs, such as that it is an honor to sacrifice oneself for God or country, or that one will go to heaven if one murders people with different beliefs, can seem pretty crazy. Nonetheless, such beliefs are ever-present, and conviction in one’s beliefs is admired across human cultures. Why are people willing to die for their beliefs? Where do ideologies come from? Writers such as Plato, Machiavelli, and Hobbes all considered laws and ethical systems—social norms that promote cooperation—to be mechanisms of social control. I believe that ideologies serve at least three purposes. First, beliefs can benefit one group or agent at the expense of others (e.g. patriarchy, forced religious conversions, starting your own religion or cult). Second, beliefs can enforce group cohesion (e.g. the commandment that “Thou shalt have no other gods before me”). Finally, beliefs can benefit a community as a whole (e.g. vegetarianism).
There are some interesting game-theoretic arguments for some varieties of religious belief. First, belief in an afterlife creates an incentive against villainous behavior. Pascal famously described a wager in which belief in Christianity has a small cost but a potentially infinite payout. Suppose God exists: then living a good life as a believer has an infinitely large reward (going to Heaven). If God does not exist, then moral restraint carries only some finite cost. So even if the probability that God exists is very small, the infinite expected reward of Heaven outweighs any finite price of moral restraint.
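The expected-value arithmetic behind the wager fits in a few lines. The probability and cost below are toy assumptions; the point is only that any positive probability times an infinite reward dominates any finite cost:

```python
import math

p = 1e-9          # assumed tiny probability that God exists
cost = 100.0      # assumed finite lifetime cost of moral restraint
reward = math.inf # infinite payoff of Heaven

# Expected value of believing vs. not believing:
believe = p * reward - (1 - p) * cost  # any p > 0 makes this infinite
disbelieve = 0.0

print(believe > disbelieve)  # -> True: the infinite term dominates
```

Note that the conclusion is completely insensitive to p and cost, which is exactly what makes the wager both striking and, to many critics, suspicious.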
Life without the possibility of an afterlife is a bit like one-shot Prisoner’s Dilemma. Lack of moral restraint, especially in business and finance, can have a very large payoff indeed.
Now, let’s suppose that reincarnation is true, and people are reborn over and over again in an unending cycle. This seems a bit like iterated Prisoner’s Dilemma! Does reincarnation work like tit-for-tat in justifying cooperation and empathy toward all sentient beings? Not quite: tit-for-tat requires remembering who defected and who cooperated, and reincarnation, at least as usually conceived, erases the memory of past lives. And of course, Always Defect is still a stable strategy even if reincarnation were true.
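The memory requirement can be made concrete with a toy sketch. Here a reciprocator plays repeated rounds (“rebirths”) against a player who always defects; the payoffs are assumed values with the standard ordering T > R > P > S. When memory is wiped at each rebirth, the reciprocator reverts to its opening cooperative move every round and is exploited indefinitely:

```python
# Assumed payoffs: temptation, reward, punishment, sucker.
T, R, P, S = 5, 3, 1, 0
PAY = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

def play(rounds, remember):
    """Total payoffs (reciprocator, defector) over `rounds` rebirths."""
    recip_total = defect_total = 0
    last_opponent_move = None
    for _ in range(rounds):
        if not remember:
            last_opponent_move = None  # memory wiped at each rebirth
        # Tit-for-tat: cooperate on a blank slate, else copy the last move.
        me = "C" if last_opponent_move in (None, "C") else "D"
        them = "D"                     # Always Defect
        recip_total += PAY[(me, them)]
        defect_total += PAY[(them, me)]
        last_opponent_move = them
    return recip_total, defect_total

print(play(10, remember=True))   # -> (9, 14): one sucker round, then mutual punishment
print(play(10, remember=False))  # -> (0, 50): exploited every single round
```

With memory intact, the reciprocator is suckered once and then holds the defector to the punishment payoff; without memory, the defector collects the temptation payoff forever. Remembering who defects is what makes reciprocity enforceable.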
Jain, Buddhist, and Hindu philosophical traditions use the Sanskrit word “samskara” to describe memory effects from past lives that condition one’s existence. In one sense, samskaras are certainly real: historical, social, and emotional information from previous generations affects how people interact with others (e.g. whether a black person should trust a police officer). In tit-for-tat, players must remember who is likely to defect and who is likely to cooperate in order to properly structure cooperative interactions.
The point is that individuals and communities foster ideologies that often—perhaps usually—benefit some individuals at the expense of the community at large, or at the expense of a different community. Why do people disbelieve climate change? The answer usually lies in understanding what those individuals stand to gain or lose by changing their beliefs. Who benefits from a certain belief or set of values? Those people are usually the ones promoting those particular values. Unless one wants to be a sucker, it is well worth critically considering the origins and payoffs of our beliefs, both to ourselves and to our communities.