A Game Theoretic View of the Atonement


John Forbes Nash, after whom Nash equilibrium is named.

The Prisoner’s Dilemma came up in the comments to a post of mine from about a month ago. I outlined my thoughts very briefly there (see comment #12), but I’d like to return to them in more depth today.

The Prisoner’s Dilemma is perhaps the most important scenario studied in game theory, and “it shows why two individuals might not cooperate, even if it appears that it is in their best interests to do so.” To understand the analysis, however, I’ll need to back up and give a very brief game theory primer.


In game theory, a game is a situation where two or more players each face two or more options in pursuit of goals which are at least partially in conflict, and where the outcome of the situation depends on the choices that each player makes. This interdependence is what distinguishes game theory from more general decision theory. In traditional parlance, games are won and lost, but in game theory games are solved when you understand exactly what decisions the players will take when each takes into account the actions of every other player. This arrangement of complementary player actions is called an equilibrium.

There are many kinds of equilibria, but the most important is the Nash equilibrium. A Nash equilibrium is a set of player actions such that no player has an incentive to change his or her action in response to the actions chosen by the other players.

Now we’re ready to see how the concept of Nash equilibrium applies to an example of the Prisoner’s Dilemma. According to the initial setup, two prisoners (Prisoner A and Prisoner B) have been arrested. They are currently being held in separate interrogation rooms, and each faces a simple choice: rat out their fellow prisoner (“defect”) or keep mum (“cooperate”). The results of these decisions are illustrated in the table below. (This format is called a payoff matrix, if you’re curious.)

                         Prisoner B cooperates      Prisoner B defects
Prisoner A cooperates    A: 1 year,  B: 1 year      A: 3 years, B: 0 years
Prisoner A defects       A: 0 years, B: 3 years     A: 2 years, B: 2 years

Intuitively, it seems like the best solution is for the two prisoners to cooperate. In that case, as the payoff matrix describes, each serves a short, 1-year prison sentence. Unfortunately, this arrangement is not a Nash equilibrium. To see why, just ask whether there’s anything Prisoner A would prefer, given that Prisoner B is choosing to cooperate. There is.

[Figure: the payoff matrix with Prisoner A’s unilateral defection highlighted.]

If Prisoner A knew that Prisoner B was going to cooperate and stay mum, then Prisoner A would prefer to defect by ratting out Prisoner B. In that case, Prisoner B gets a harsh 3-year sentence, but Prisoner A gets off with no jail time at all. So the “cooperate/cooperate” response is not a solution to the game. Neither, of course, is “defect/cooperate”. In that case, if Prisoner B knew that Prisoner A was going to defect, Prisoner B would also want to defect to reduce his or her jail time from 3 years to 2 years.

[Figure: the payoff matrix with mutual defection highlighted.]

So we’ve ruled out “cooperate/cooperate” along with “defect/cooperate”, and by symmetry “cooperate/defect” is no better. What about “defect/defect”? Well, in that case, nothing that either prisoner can do unilaterally will improve their payoff. If either player shifts, alone, from “defect” to “cooperate”, that player will end up serving 3 years instead of 2. Well, why don’t they just both switch to cooperate?

The answer is that if you believe that they are rational and that their payoffs are fully reflected in the table, they can’t. One of the biggest stumbling blocks to understanding game theory is trusting the payoff matrix. Intuitively, we understand that if we ratted out our compatriot we would feel guilty and if we cooperated together we would feel a sense of vindicated trust. When we imagine outcomes like this, we’re basically stating that the payoffs in the payoff matrix aren’t correct. That’s fine as far as it goes. You can draw your own payoffs for the story about two prisoners who have been captured, factor in things like guilt and friendship, and easily jury-rig a game where “cooperate/cooperate” is a Nash equilibrium and the story has a happy ending (unless you’re the cops, I guess).

So far so good, but you haven’t solved this game; you’ve solved some other game. You’re no longer talking about the Prisoner’s Dilemma (capital P, capital D). In that game, the total payoffs–including all factors like guilt, friendship, honor, shame, and so forth–have been incorporated into the table above. Your prisoners might feel so bad about turning in their fellow conspirator that it outweighs the benefits of turning state’s evidence, but these prisoners do not. In our case, Prisoner B knows that if he chose to cooperate, Prisoner A would rather defect, and vice versa. Therefore they must each choose “defect” as a matter of self-preservation.
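If you want to check the claim mechanically, here is a minimal sketch (my own illustration, not part of the original argument) that encodes the payoff matrix above as years of jail time and tests each cell for the Nash property: can either prisoner shorten his or her own sentence by switching unilaterally?

```python
# Payoff matrix as years in jail (lower is better).
# Key: (A's action, B's action) -> (A's sentence, B's sentence).
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}
ACTIONS = ("cooperate", "defect")

def is_nash_equilibrium(a, b):
    """True if neither prisoner can shorten his or her own
    sentence by unilaterally changing actions."""
    a_years, b_years = PAYOFFS[(a, b)]
    if any(PAYOFFS[(alt, b)][0] < a_years for alt in ACTIONS):
        return False  # A would rather switch
    if any(PAYOFFS[(a, alt)][1] < b_years for alt in ACTIONS):
        return False  # B would rather switch
    return True

for a in ACTIONS:
    for b in ACTIONS:
        label = "Nash equilibrium" if is_nash_equilibrium(a, b) else "not an equilibrium"
        print(f"{a}/{b}: {label}")
# Only defect/defect survives the check.
```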


Of course the particular story about prisoners being interrogated and choosing whether to cooperate or defect is pretty specific, but what makes the Prisoner’s Dilemma so important is that the overall structure can be used to represent a wide variety of important, real-world problems. That very partial list includes things like doping in professional sports, how much a company should spend on advertising, climate change, and international arms races. The basic question is simply this: how do you get people to cooperate when there’s a benefit from exploiting one another’s attempts to cooperate? In that sense, I believe it’s the fundamental practical ethical question humanity faces.

In that context, the non-existence of a Nash equilibrium to support “cooperate/cooperate” is disheartening, to say the least.

But all is not necessarily lost. If the Prisoner’s Dilemma is a model for the core moral consideration in human interaction, then it’s obviously not played in a vacuum. Instead, many rounds are played, sometimes with new players and sometimes with the same players many times in a row. How many times do you encounter a situation that could be modeled as the Prisoner’s Dilemma with your coworkers? Friends? Family? Spouse? Many, many times every single day. The quest is not to find the optimal strategy for a single, isolated instance of the Prisoner’s Dilemma (we already know the solution is “defect/defect”), but rather to find a strategy for how to play the Prisoner’s Dilemma an indefinite number of times with an indefinite number of other players.

We’re now in the context of the iterated Prisoner’s Dilemma. The bad news is that the techniques for determining equilibria in iterated games are substantially more complex than in stand-alone games. The good news is that in iterated games, “cooperate/cooperate” can be supported over time as a Nash equilibrium (or one of the more sophisticated equilibria, like subgame perfect Nash equilibria or Bayesian Nash equilibria, that are refinements of the basic Nash equilibrium concept). The unfortunate thing about the solutions to the iterated Prisoner’s Dilemma is that they rely on threats. In fact, much of what makes analysis of iterated games complicated is trying to determine which threats are credible. As a general rule, the harsher the available threat, the easier it is to achieve cooperation.
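To make the role of threats concrete, here is a sketch of the harshest strategy of this kind, commonly called “grim trigger”: cooperate until the opponent defects once, then defect forever. (The code is my own minimal illustration; histories are lists of past moves, “C” for cooperate and “D” for defect.)

```python
def grim_trigger(my_history, their_history):
    """Cooperate until the opponent has defected even once;
    after that, defect in every remaining round."""
    return "D" if "D" in their_history else "C"
```

Two grim-trigger players cooperate in every round, and any one-time gain from defecting against one is swamped by the permanent punishment that follows. That is the sense in which a harsher credible threat makes cooperation easier to sustain.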

The biggest breakthrough in the iterated Prisoner’s Dilemma came in the early 1980s, when political scientist Robert Axelrod sponsored a series of Prisoner’s Dilemma tournaments. The setup was very simple: a bunch of computer programs were matched against each other in a series of repeated Prisoner’s Dilemma games, and the winner would be the program that garnered the most total points over the span of all the games. A wide variety of incredibly sophisticated and complex strategies were submitted, many of which relied on attempting to learn the strategy of the other computer programs in order to subsequently exploit it. What stunned the researchers, however, was that the very simplest program (comprising just 4 lines of code) was also the most successful. The program was called simply tit-for-tat. All it did was this: start out by cooperating and then, on every subsequent game, simply repeat whatever the opponent had played in the previous game. The explanation for exactly why this simple strategy was so successful is complex, and it basically launched the study of evolutionary cooperation, but it boils down to this: be nice but provocable.
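The strategy really is that short. Here is a sketch of tit-for-tat and a simple repeated match (my own Python reconstruction for illustration, not the original tournament entry, which was submitted by Anatol Rapoport). The point values follow Axelrod’s convention: 3 each for mutual cooperation, 5 for a lone defector against 0 for the exploited cooperator, and 1 each for mutual defection.

```python
# Axelrod's point values: mutual cooperation 3/3, lone defector 5
# against 0 for the exploited cooperator, mutual defection 1/1.
SCORES = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first; afterwards, copy the opponent's last move."""
    return their_history[-1] if their_history else "C"

def play_match(strategy_a, strategy_b, rounds=200):
    """Score one repeated match between two strategies."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        points_a, points_b = SCORES[(move_a, move_b)]
        score_a, score_b = score_a + points_a, score_b + points_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

always_defect = lambda mine, theirs: "D"
print(play_match(tit_for_tat, tit_for_tat))    # (600, 600): steady cooperation
print(play_match(tit_for_tat, always_defect))  # (199, 204): exploited only once
```

Notice that tit-for-tat never beats its opponent in any single match; it wins tournaments by racking up high mutual-cooperation scores against other nice strategies.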


Tit-for-Tat coming out on top in a recreation of Axelrod’s original tournament.

My own belief is that, in moral terms, tit-for-tat looks a whole lot like “eye-for-an-eye”. It is, in essence, an implementation of justice and retribution. It’s highly successful relative to other strategies, but it requires infliction of pain to work. Is there something better?

I believe so, but before I get to my theories and close the post, a couple of qualifiers. First: the tit-for-tat strategy is considered the most robust strategy overall, but in any given scenario the actual best strategy depends on the composition of the other players. If a single tit-for-tat player gets dropped into a pool of players who defect all the time, then the tit-for-tat player will lose. Furthermore, the best strategy also depends on the specific rules of the contest, especially the number of rounds and the method for matching the players for each round. Trying to decide which set of rules offers a good model for real life is a complex issue in and of itself.

Second: you might be wondering why anyone bothers trying to explain human interaction using sophisticated mathematical models at all. To that I can only offer this explanation: although mathematical models such as these obviously cannot hope to capture the full complexity of human interactions, game theoretic analysis has led to crucial insights into real-world problems in the past. As an example, I would suggest Thomas Schelling and his work in books like The Strategy of Conflict. I don’t claim game theory is the only way to approach these issues–not remotely!–but I do believe it offers unique and important insights.

So, what can we learn from applying game theory to humanity?

First, I think we can learn to recognize how difficult the problem of building Zion truly is. Placing ourselves at risk goes against our rational self-interest, at least in the short run. In addition–and unlike the computer programs in Axelrod’s tournament–humans make mistakes. This means sometimes we defect when we meant to cooperate, or cooperate when we meant to defect. These mixed signals dramatically complicate our attempts to grapple with the practical and ethical problems of learning to cooperate. (The connection to Zion is simply this: I imagine a society where everyone chooses “cooperate” all the time is a partial glimpse of Zion.)

Second, I think we should be careful about how we apply “be nice, but provocable” to human nature. In the context of the original Prisoner’s Dilemma and Axelrod’s tournament, the negative effects of defection had to come from the other player. But if sin brings with it a natural consequence, then the provocation may not be required from the nice agents. It’s built into the world. On the other hand, if the negative effects are delayed or disguised by noise, additional chastisement might be required to alert the players to the true payoffs. This is a technically crucial point, because if I’m going to cite game theory to get me this far, I have to rely on it to solve the problems I’ve raised. And the only way out of the Prisoner’s Dilemma, even the iterated version, is threats. A Zion where cooperation is maintained by threat hardly seems like a Zion at all. If the punishment is external, however, and doesn’t rely on retaliation, then we’re changing the payoffs and thus the nature of the game. And I believe that part of what the Gospel does is indeed change our perceptions of the payoffs.

Third, there’s a lot to learn from the way that the programs interacted in Axelrod’s tournament. Specifically: the “nice” programs tended to significantly outperform the exploitative programs when each dealt with similar programs. (Neighborhoods of nice AIs do much better than neighborhoods of exploitative AIs or mixed neighborhoods.) In addition, there’s a slight variant of tit-for-tat, called tit-for-tat-with-forgiveness, that can be used to get a marginal increase in performance. This strategy is the same as tit-for-tat, except that it sometimes (1-5% of the time) replies to a defect by cooperating instead of defecting. Of course, the term “forgiveness” is loaded, and I don’t think it should be read in the usual, religious sense, because the whole concept of animosity or resentment is outside the scope of this model. So, instead of forgiveness, I think that sometimes departing from tit-for-tat to proffer cooperation despite an opponent’s betrayal in a previous game can be seen as a kind of investment that serves two purposes. First, it helps to identify “nice” players to each other, which enables them to collect in neighborhoods. Second, it allows nice players to overcome the problems represented by their imperfections.
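In code, the change from tit-for-tat is a couple of lines. This is again my own sketch, and the 5% rate below is just an arbitrary value from the 1-5% range mentioned above, not a canonical parameter. The reason the tweak matters: if two plain tit-for-tat players face each other and one defects by mistake, they lock into an endless echo of alternating retaliation; the occasional unprompted cooperation breaks the echo.

```python
import random

def tit_for_tat_with_forgiveness(my_history, their_history, forgive=0.05):
    """Tit-for-tat, except an opponent's defection is occasionally
    answered with cooperation anyway."""
    if not their_history:
        return "C"                        # start nice
    if their_history[-1] == "D" and random.random() < forgive:
        return "C"                        # the occasional olive branch
    return their_history[-1]              # otherwise mirror the last move
```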

This sacrificial investment introduces the notion of signalling (an economics term for reliable communication, as opposed to cheap talk). In this sense, deliberately sacrificing well-being is a kind of call to others. Its capacity for use as a communications medium was demonstrated in a 20th anniversary replay of the Axelrod tournament. In that case, the winners submitted multiple versions of the same program. These programs were designed to perform a specific pattern of 10 moves to identify each other in the game. If two of the same AIs found each other, they would then choose “cooperate/defect” to maximize the value to one of the players, but if they found other opponents, they would always choose “defect” to minimize the benefit to the other player. At the end of the competition, these AIs filled most of the very top and very bottom spots in the ranking. This illustrates the power of being able to communicate with like-minded agents to circumvent the traditional assumptions.
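As a rough sketch of how such a handshake can work (a simplified illustration only; this is not the actual tournament code, and the 10-move pattern below is made up): each colluding program opens with a fixed sequence, treats any opponent who played the same opening as a teammate, and stonewalls everyone else.

```python
# Hypothetical 10-move identification sequence; the pattern actually
# used in the 2004 competition is not reproduced here.
HANDSHAKE = list("CDCDDCCDDC")

def colluder(my_history, their_history):
    """Open with a fixed pattern; opponents who played the same
    opening are treated as teammates, everyone else gets defection."""
    turn = len(my_history)
    if turn < len(HANDSHAKE):
        return HANDSHAKE[turn]
    if their_history[:len(HANDSHAKE)] == HANDSHAKE:
        # Teammate detected. (In the real scheme, one program would
        # defect while its partner fed it points; plain cooperation
        # here keeps the sketch simple.)
        return "C"
    return "D"  # stranger: hold their score down
```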

But I think that signalling can do more than just attract those who are already “nice.” Returning to the idea that beating the game involves changing the payoffs, I think that communities of “nice” agents can serve as an example that cooperation has additional benefits when you consider the aggregate payoff instead of only the individual payoffs. This is especially true if, over time, the benefits from communities of nice agents can be reinvested to create even better payoffs in the future. This creates a kind of “city on a hill” effect, but there’s another possible role for the idea of sacrifice.

If the basic premise is correct, and deliberately choosing to cooperate as a deviation from tit-for-tat (from justice) works as a signal because it is costly (because it renders the person who chooses this path vulnerable), then the strength of the signal corresponds to the depth of the sacrifice required to send it. An infinite cost begets infinite credibility, and becomes an eternal beacon with a simple message: cooperate. That’s my game-theoretic perspective on the Atonement.

I don’t believe that this perspective is in any sense the exclusively “right” way to look at the Atonement. What I believe is that the Prisoner’s Dilemma represents a deeply important problem of practical ethics. What I’ve outlined here is an informal sketch of a re-conceptualization of the iterated Prisoner’s Dilemma in the context of a complex system. The most important change, both philosophically and technically, from the canonical example is that I’ve introduced the idea of learning. Specifically, in the model I have in mind, we can learn that cooperation leads to increasing returns as networks of cooperating players experience benefits from repeated cooperative interaction. If we can learn, then there is the possibility of changing our perceptions of the payoffs and therefore changing the game. And that, I believe, is part of the impact Christ had: His sacrifice showed us that the game can be changed.


13 comments for “A Game Theoretic View of the Atonement”

  1. This seems to align fairly well with the moral influence theory of atonement. Ironic that cold, calculating math supports the most emotional theory of atonement while the penal substitution theory (or the accountants’ theory of atonement, as I like to call it) is completely inexplicable.

  2. That’s a lot of description. At a simple level, if we are talking atonement, aren’t we playing with Jesus with a payoff matrix that is set up by our notion of consequences for sin?

    I don’t see how the atonement relates to other prisoners (non-Jesus).

    If life is a prisoner’s dilemma, then why don’t we have a more explicit “group punishment” model of morality? There is some of that in the duties of parents to children, and leaders to sheep, but even there it seems to be more of an individual process.

    Cool thought experiment but I just don’t see cooperation as that central to ethics. People cooperate for evil ends as often as for good ends, whether in sex, violence or idol worship.

  3. Nathaniel, this is a nice analysis, but I think you’ve neglected one important point. In real life, we’re not *capable* of coming up with an accurate payoff matrix because we’re not able to always distinguish what is good and what is bad. Example: If someone gets angry and hits me, it hurts my body but it also hurts his/her spirit. However, our calculations rarely consider the hurt absorbed by the abuser because it is hidden, like Dorian Gray’s portrait. Thus, cooperation may *not* put us at a relative disadvantage even in the short term. I am reminded of a post from some time back (I can’t remember the specifics, however) that shared the story of two prisoners in a concentration camp witnessing a beating of a friend by a guard. One asked, “What can we do?” and the other responded, “Show them that love is stronger,” implying that the harm the guard(s) suffered was greater than that suffered by the prisoner being beaten. I don’t mean to discount our calculations (being hit definitely hurts), but an eternal perspective requires us to remember that what we see is only a shadow of what there is.

  4. My point with the above being that it is very possible that cooperation is *always* the best strategy and one doesn’t need a game theoretic model in life to choose it if one sees things as they really are. Of course, I love math and philosophy so I’m happy to discuss both academically anyway.

  5. Jesus’ atonement for our sins is good and valid, whether we cooperate, or not.

    Fact is, we don’t cooperate. We know what to do. We just flat out refuse to do it. And if we sometimes do something, then our motives are shot to hell and we’ve ruined it right there (even though our neighbor may benefit, anyway).

    God saves us in spite of our works. Not because of them.

  6. Nathaniel, this is a nice analysis, but I think you’ve neglected one important point. In real life, we’re not *capable* of coming up with an accurate payoff matrix because we’re not able to always distinguish what is good and what is bad.

    No, I quite agree with this, and it’s covered in two different ways in the analysis. First, I point out that (unlike the programs in Axelrod’s tournament) humans are imperfect in both our perception and execution of actions.

    More importantly, however, the idea that the payoffs can change as people learn implies that humans are coming to understand that the idea of cooperation as a short-term cost is false.

    There are, after all, only two ways out of the Prisoner’s Dilemma: threat of retaliation or changing the game (i.e. changing the payoffs). The most reasonable interpretation for changing the payoffs is that the payoff matrix that matters is the *perceived* rather than the *actual* payoff matrix. In that case, your comments are actually perfectly aligned with my rough sketch.

  7. Mtnmarty-

    Cool thought experiment but I just don’t see cooperation as that central to ethics. People cooperate for evil ends as often as for good ends, whether in sex, violence or idol worship.

    I would argue that examples like that are not cooperative, but rather mutually exploitative.

  8. Steve-

    Jesus’ atonement for our sins is good and valid, whether we cooperate, or not.

    Perhaps you believe in a Christ who, after saving us wretched creatures, was content to leave us in our wretched state rather than instruct us on a better way to live. I do not.

    So even if your proposition were true and Christ saved us from sin irresistibly, that would in no way obviate the possibility that the Atonement was also instructional in nature. It shows us how to be better.

    That you would be so myopically focused on grinding a theological axe that you’re incapable of recognizing a perfectly compatible proposition is one kind of sad. That you would conflate imperfection with total non-improvement is another.

  9. NG,

    I was trying to get with the spirit of the analogy but I was getting stuck on whether Jesus was the fellow prisoner playing the game optimally or playing some other role.

    The fact that you were tempted to switch to a non-standard definition of “cooperate” which distinguishes the type of ends the joint action is directed toward illustrates my point that ethics is mainly about ends, and cooperation (standard definition) is mainly a means. There are people who would say that anything people cooperate on is good, but I don’t think you are one of them.

    Again, I’m interested in your analogy, but I’m not sure if you are using Christ’s role in the analogy as an example to us in how we deal with others or as an always-cooperate way of playing the game. I was thinking, isn’t the way we are taught to play the game to forgive 7 times 70? Not critiquing, I’m just a bit confused :)

  10. Nathaniel, thank you kindly for the reply and clarification. I understood your article to be claiming that cooperation does result in a short-term disadvantage in favor of long-term gain. If, as I think we both agree (please correct me if I appear to be misunderstanding you), cooperation is in actuality never a disadvantage given the full ontological payoff matrix, even when it appears to be in our perceived payoff matrix, I must ask how learning is occurring. Presumably, I don’t see the full damage sin does to my spirit until I’m dead, prohibiting me from contributing anymore to the large-scale prisoner’s dilemma among the living. Thus, how does humanity as a whole move toward understanding the ontological payoff matrix, and thereby choosing cooperation more often, when some of it is invariably inaccessible to us via the separation of death? Would you claim that this is the role revelation plays, by giving us a more accurate picture of the payoff matrix? The large number of people choosing cooperation without exposure to clear revelation (and indeed even prior to modern-day revelation) would lead me to believe that revelation cannot fully explain the overall learning you imply is occurring. Can you please expound on this idea of learning?

  11. The prisoner’s dilemma is derived from two different historical situations. One involves criminal conspirators, the other prisoners of war. In the former case, the actors try to alter the matrix of outcomes by adding their own penalties, especially revenge killing (the reason for witness protection programs). In the latter case, some nations specifically try to prepare their soldiers to maintain faith with their fellow POWs and with their military unit and nation, so that the individual prisoner’s perception of outcomes is changed.

    It seems to me that the system of covenants that Latter-day Saints are encouraged to make, including at baptism, ordination, and in the temples, invites us to reevaluate the possible outcomes of our choices when we are placed in dilemmas that appear to threaten harm to us if we keep faith. We are encouraged to choose the rewards of martyrdom in a future Celestial existence over the rewards of this Telestial world.

  12. Brian-

    Sorry for the delay in responding. I appreciate your questions. Here’s what I’m thinking. From the perspective of “the natural man”, cooperation (which, broadly speaking, to me means a willingness to weigh the interests of others at least as much as our own) involves a sacrifice. The payoff matrix that we see from a temporal perspective is the traditional PD payoff matrix, and in that case the only solution for supporting cooperative behavior is the threat of retribution: justice. Justice works to motivate good behavior in a context of superficial evaluation of outcomes.

    In order to solve the PD without threats, there are two possibilities, and I think they are both interesting. The first is that in an iterated game you start to get increasing returns from repeated cooperation. So the PD is accurate on a one-off basis, but projecting forward to the future, repeated cooperation leads to substantially greater outcomes. I think that this, in combination with the inherent uncertainty, would strengthen the dominance of tit-for-tat-with-forgiveness over tit-for-tat (justice).

    Alternatively, there’s the possibility that we just change the payoffs for every single matrix by looking at it from a spiritual perspective and realizing that the apparent costs are unreal (e.g. costs to our ego) and that there are benefits beyond what are apparent to the natural man (e.g. freedom of conscience). So we’re sort of acting out the contradiction Christ taught: whoever saves his life loses it, and whoever loses it for Christ’s sake saves it. We’re changing the payoffs.

    I’m a bit muddled because I like both of these approaches. I think there’s something to each one, but that they rely on slightly different analogies and probably need to be separated, clarified, and studied individually.

Comments are closed.