Political: The Prisoner's Dilemma

In this video, Professor Geoffrey Sayre-McCord (UNC-Chapel Hill) explains the prisoner's dilemma. The prisoner's dilemma is a scenario in which each party, choosing rationally, guarantees an outcome that is worse for everyone than the outcome they would have reached if each had chosen an individually less-preferred option.

Speaker: Dr. Geoffrey Sayre-McCord, Professor of Philosophy, University of North Carolina at Chapel Hill.

Want to join the conversation?

  • dvdsknz
    Is the Prisoner's Dilemma a reason why someone might falsely confess to something even if he or she is innocent, thinking a false confession will get a better sentence?
    And if so, how can we remedy that?
    (3 votes)
    • Tejas
      Most agree that an innocent person would have a different incentive structure than a guilty person. For instance, if one denies the crime and the other confesses, then the case will go to trial, where the person will probably be acquitted because he or she is innocent. In that case, the 15 years becomes 3, for the parole violation. Additionally, the one who confesses may be charged with perjury, so the 0 years becomes 5 years.

      Of course, all that is just changing the incentive structure, and that is the only way to change how rational actors will act. And if the incentives were ever to become too extreme, then innocent people might not want to take that chance. (A short sketch after this conversation works through those modified numbers.)
      (4 votes)
  • Halopenguin
    I wonder why the police even use the Prisoner's Dilemma at all. Couldn't they just question the prisoners with a lie detector? If one of the prisoners goes free, that "prisoner" might do another bad thing or commit another crime. Also, if one of the prisoners actually didn't do the crime and the other one did, the guilty one could lie and rat out the innocent one, going free to commit more crimes and never getting caught as long as he keeps blaming innocent people. So, knowing this, shouldn't the police just use the lie detector so that there's no risk that the criminal goes free to commit more crimes? That is my question.
    (2 votes)
    • IanGunkler
      Polygraphs are problematic at best. They do not detect 'lies,' but rather specific psycho-physiological changes from which deception is inferred. The results of a polygraph are largely inadmissible in court, and the reliability of information obtained is highly questionable. They do not begin to approach the requirements of objectivity. Further, the dilemma is one that is faced whenever issues of cooperation are placed in conflict with individual motivations. If we were to invent a perfect lie detector, the crime were an isolated event, and the investigators' only concern were that one crime, then we could possibly eliminate the prisoner's dilemma from the prisoner situation, because those individual and cooperative motivations would no longer apply. The dilemma would still exist pretty much everywhere else.
      (4 votes)
  • Sam
    Are there actual game theory videos on the prisoner's dilemma on Khan Academy?
    (2 votes)
  • Tomasz Stachowiak
    A crucial element in the classical dilemma is that the two prisoners choose once, without communication. With, say, pollution, the parties can communicate, check, and agree on what's best with full knowledge of the game table, because the process takes time, so things look a bit more optimistic.

    Paul Bloom mentions some psychological experiments in his online lectures regarding the Hobbes argument. It seems indeed that when people are kept in check by possible punishment the "commune" does better than when they just rely on "trust".
    (2 votes)
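Following up on the first exchange above, here is a small illustrative sketch in Python, using the numbers suggested in that answer (acquittal at trial turns 15 years into 3; a perjury charge turns 0 years into 5), plus an assumed ten-year outcome when both confess, which the answer does not specify. It checks whether confessing is still the dominant choice once an innocent person's incentive structure replaces the guilty person's.

```python
# Your years in prison as a function of (your_choice, other_choice); lower is better.
# Guilty prisoner: the standard payoffs from the video.
GUILTY = {
    ("deny", "deny"): 3, ("deny", "confess"): 15,
    ("confess", "deny"): 0, ("confess", "confess"): 10,
}
# Innocent prisoner: the modified payoffs suggested in the answer above
# (acquitted at trial, so 15 -> 3; perjury charge for a false confession, so 0 -> 5).
# The 10-year mutual-confession figure is kept only as an assumption for illustration.
INNOCENT = {
    ("deny", "deny"): 3, ("deny", "confess"): 3,
    ("confess", "deny"): 5, ("confess", "confess"): 10,
}

def confessing_dominates(payoffs):
    """True if confessing yields fewer years than denying, whatever the other does."""
    return all(payoffs[("confess", other)] < payoffs[("deny", other)]
               for other in ("deny", "confess"))

print("Guilty:   confessing dominates?", confessing_dominates(GUILTY))    # True
print("Innocent: confessing dominates?", confessing_dominates(INNOCENT))  # False
```

On these assumed numbers, denying becomes the better choice for the innocent prisoner no matter what the other does, which is the point of the answer: the dilemma only arises under a particular arrangement of costs and benefits.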

Video transcript

(intro music) Hello! I'm Geoff Sayre-McCord. I teach philosophy at the University of North Carolina at Chapel Hill. I'm gonna speak to you today about the Prisoner's Dilemma.

Consider the following situation. You and Isabella commit a diamond heist. Days later, you're both arrested, and the police have enough to charge and convict you of parole violation, for which you will each get three years in prison. Though they have their suspicions, and the police recovered the diamonds, they have no hard evidence that you two are the ones who robbed the jewelry store. Yet the detective is no slouch. She decides to make each of you an offer. If one of you, and not the other, confesses and rats on the other, agreeing to turn state's evidence, then that person will go free, while the other will serve fifteen years for robbery. If you both confess and rat on each other, then while neither will serve fifteen years, you will both serve ten years in prison.

Assuming, for a minute, that you each only care about minimizing your time in prison, you would prefer that Isabella remain silent while you rat her out, and thus go free. You were just about to tell the detective you will testify against Isabella when you realize that she's in the very same situation. If she reasons as you have, and so turns state's evidence against you, you and she will end up serving ten years each, not getting off scot-free. So, you realize, you're better off and she's better off if you both just remain silent, working together to foil the detective's efforts to get a confession. Since Isabella is in the same situation, you figure she must realize this too, and so your plan to remain silent is set, until, that is, it occurs to you that if Isabella is going to remain silent, then you can get off with no jail time simply by ratting her out. Moreover, if this last thought occurs to Isabella and she decides to rat you out, you will still do better ratting her out, since instead of doing fifteen years in prison, you'd have to do only ten. So no matter what Isabella does, you do better turning state's evidence against her. And the same is true of her. As a result, you both conclude you need to confess and rat on the other, with the predictable and sad result of you both serving ten years, instead of the three years you would have had, if only you had together remained silent.

But then some good luck strikes, and you and Isabella find yourselves alone in a room together. Taking the opportunity, you talk and agree to stay silent, so as to serve only three years, rather than ten. When you are then separated, you rest easy, thinking you have together been able to at least minimize your jail time. Until, that is, you realize that if you turn Isabella in, you won't have to serve any time, unless of course she turns you in too. But if she's going to break your agreement and turn you in, you'd better turn her in to avoid fifteen years in prison. You are stuck again, predictably, doing worse than you might have done. You both are.

If only. If only what? Well, if you could count on her to keep her word, you could then keep yours and end up with a sentence of only three years. Or you could turn her in and go scot-free. But you can't count on her to keep her agreement, at least if she realizes that you might well not keep yours. So she needs to be confident that you won't break yours. But then she will have a strong incentive to break hers. And if you know that of her, you too will have a strong incentive to break yours. That is the prisoner's dilemma.
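To make the reasoning in the story concrete, here is a minimal sketch in Python (not part of the original lecture) that enumerates the four outcomes using the sentences from the story: three years each if both stay silent, ten years each if both confess, and zero versus fifteen years if exactly one confesses. It checks that confessing is the better choice for each prisoner no matter what the other does, even though mutual confession leaves both worse off than mutual silence.

```python
# Sentences (in years) from the story, indexed by (your_choice, isabellas_choice).
# Lower is better, since each prisoner only wants to minimize time in prison.
SENTENCES = {
    ("silent", "silent"):   (3, 3),    # parole violation only
    ("silent", "confess"):  (15, 0),   # you serve 15, Isabella goes free
    ("confess", "silent"):  (0, 15),   # you go free, Isabella serves 15
    ("confess", "confess"): (10, 10),  # both rat, both serve 10
}

def my_years(my_choice, her_choice):
    """Years you serve, given both choices."""
    return SENTENCES[(my_choice, her_choice)][0]

# No matter what Isabella does, confessing gives you fewer years than staying silent.
for her_choice in ("silent", "confess"):
    silent_years = my_years("silent", her_choice)
    confess_years = my_years("confess", her_choice)
    print(f"If Isabella chooses '{her_choice}': staying silent -> {silent_years} years, "
          f"confessing -> {confess_years} years")
    assert confess_years < silent_years  # confessing strictly dominates

# Yet the outcome of both following that dominant reasoning is worse for both.
both_confess = SENTENCES[("confess", "confess")]
both_silent = SENTENCES[("silent", "silent")]
print(f"Both confess: {both_confess} years; both stay silent: {both_silent} years")
assert both_confess[0] > both_silent[0] and both_confess[1] > both_silent[1]
```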
Understanding the underlying structure of this dilemma, it turns out, can shed light on a broad range of phenomena. Consider, for instance, the so-called "tragedy of the commons." The classic version of such a tragedy is found in thinking about a public grazing area, a village commons, where all members of a town are free to graze their sheep. If all restrain their use of the commons, enough will be available for all, and herds can thrive. But as long as others are restraining their use of the commons, each person in the village can do even better, for themselves or for their family or for the charity to which they will donate their earnings, by allowing their sheep to graze a bit longer. But if everyone in the village does that, then the herds will eventually fail. And if they're going to fail, then it's better to graze longer now, so as to allow your sheep to live longer. The incentives are such that, unless people can count on others not to graze the sheep too long, each has a strong incentive to allow their own sheep to graze longer. But if people can count on others not to graze too long, then each still has a strong incentive to graze longer. The predictable result will be the destruction of the commons.

We can replace the village commons in the story with, say, our oceans, and replace the grazing sheep with fishing, and we will have a model of why, so often, communities are at risk of overfishing and entirely wiping out their livelihood. While all do better if they all restrain their fishing, each does better if she fishes more, whatever other people are doing. Or switch from oceans and fishing to our atmosphere and activities that generate pollution. If we're all better off engaging in some activities that generate some pollution, and each is better off if he or she can do what will generate a bit more pollution, assuming others do not, we will have a model, again, of why pollution becomes such a problem. In each case, costs and benefits are arranged in such a way that people have reason to cooperate with each other, refusing to rat each other out, restricting how long they let their sheep graze, limiting the size of their catch, controlling how much they pollute, but nonetheless seem to have stronger reason to act otherwise, no matter how others act.

Long ago, in Plato's Republic, Glaucon relied on such situations to explain the emergence of the principles of justice. According to him, in a world without justice, we regularly find ourselves facing choices where we stand to gain by exercising our power over others. But we also suffer from others exercising their power over us. Recognizing that we all would benefit if only there were rules in place that in effect defined a protected zone, not to be infringed on by others, the rules of justice emerge. Of course, simply having the rules is not enough, since while we may suppose each person benefits from the restraint of others, each also stands to gain from sometimes breaking the rules in their own case. Since this may be true of virtually everyone, the rules are liable not to provide the protection hoped for, unless there's some way to enforce them. Hobbes thought that the risk of people violating the rules was so strong and so disastrous that we have overwhelming reason to set up an absolute authority, who would have the power to enforce the rules and punish those who try to violate them.
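The same payoff logic behind the commons can be shown with a toy model. The sketch below (Python, with entirely made-up numbers chosen only so that the ordering of outcomes matches the story, not taken from the lecture) gives each of ten herders a choice between restrained and extra grazing; each hour of grazing benefits its owner privately, while the damage to the commons is shared by everyone. Grazing longer pays off for any one herder no matter what the others do, yet if all graze longer, every herder ends up worse off than if all had restrained.

```python
# Toy tragedy-of-the-commons model: all numbers are illustrative assumptions.
N = 10           # herders sharing the commons
RESTRAINED = 4   # hours of grazing if a herder shows restraint
EXTRA = 6        # hours if a herder grazes "a bit longer"
BENEFIT = 1.0    # private benefit per hour of your own grazing
DAMAGE = 5.0     # damage to the commons per total hour, split across all herders
BASELINE = 30.0  # baseline value of each herder's flock

def payoff(my_hours, all_hours):
    """One herder's payoff given their own hours and everyone's hours combined."""
    return BASELINE + BENEFIT * my_hours - DAMAGE * sum(all_hours) / N

# No matter how many of the other nine herders restrain themselves,
# grazing longer is better for you.
for others_restraining in range(N):
    others = [RESTRAINED] * others_restraining + [EXTRA] * (N - 1 - others_restraining)
    if_restrain = payoff(RESTRAINED, others + [RESTRAINED])
    if_graze_more = payoff(EXTRA, others + [EXTRA])
    assert if_graze_more > if_restrain

# Yet universal restraint beats universal over-grazing for every single herder.
all_restrain = payoff(RESTRAINED, [RESTRAINED] * N)
all_graze_more = payoff(EXTRA, [EXTRA] * N)
print(f"Everyone restrains: {all_restrain:.1f} each; "
      f"everyone grazes longer: {all_graze_more:.1f} each")
assert all_restrain > all_graze_more
```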
Others, for instance Hume, have thought less draconian enforcement mechanisms would do the trick, say, the refusal of others to cooperate with those who violate the rules. The mere fact that you might want to work with Isabella again, or that others would refuse to work with you on discovering that you ratted her out, might well provide effective reason for you to stick to your agreement. But of course, if these are effective reasons, they're effective reasons because they shift the costs and benefits away from just years in jail, in ways that mean you are no longer in a prisoner's dilemma.

Some have suggested that the fundamental problem has to do with people being selfish. But this is a serious misunderstanding. The problem highlighted by the prisoner's dilemma remains, however we think of the costs and benefits at stake. The people facing choices may be as generous as you like, and as long as there are situations which fit the following structure, people will be facing a prisoner's dilemma. Suppose that A is better, by whatever measure, than B, B is better by that measure than C, and finally C is better than D. If two people, or groups of people, or corporations, or countries, face a choice with the following possible outcomes, they will be facing a prisoner's dilemma. If both cooperate, then B is the result for both. If neither does the cooperative thing, then C is the result for both. If one does the cooperative thing and the other does not, then D is the result for the former and A is the result for the latter. In such situations, each agent does better, again by whatever measure, no matter what the other does, by failing to cooperate, even though the predictable result is that they each do worse than they would have done if only they had managed to cooperate. That's the prisoner's dilemma, and we face it all the time.
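The abstract structure at the end of the talk is easy to state as a check. The sketch below (Python, with example values that are my own, not the speaker's) takes four outcome values A > B > C > D, where higher means better by whatever measure, builds the two-player game the speaker describes, and confirms both halves of the dilemma: not cooperating is better for each player no matter what the other does, yet mutual cooperation is better for both than mutual non-cooperation.

```python
def is_prisoners_dilemma(A, B, C, D):
    """Check the structure from the talk: A > B > C > D, where higher is better.

    Mutual cooperation gives each player B; mutual defection gives each C;
    if exactly one cooperates, the cooperator gets D and the defector gets A.
    """
    if not (A > B > C > D):
        return False
    # My payoff as a function of (my_move, other_move).
    payoff = {
        ("cooperate", "cooperate"): B, ("cooperate", "defect"): D,
        ("defect", "cooperate"): A,    ("defect", "defect"): C,
    }
    # Defecting must beat cooperating whatever the other player does...
    defect_dominates = all(payoff[("defect", other)] > payoff[("cooperate", other)]
                           for other in ("cooperate", "defect"))
    # ...even though both cooperating beats both defecting.
    cooperation_better_for_both = payoff[("cooperate", "cooperate")] > payoff[("defect", "defect")]
    return defect_dominates and cooperation_better_for_both

# Example values of my own choosing; any ordering A > B > C > D will do.
print(is_prisoners_dilemma(A=5, B=3, C=1, D=0))        # True
# The jail-time story, converted to "higher is better" by negating years served.
print(is_prisoners_dilemma(A=0, B=-3, C=-10, D=-15))   # True
```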