The Uber Dilemma
By Ben Thompson
Aug 14 2017

By far the most well-known “game” in game theory is the Prisoners’ Dilemma. Albert Tucker, who formalized the game and gave it its name in 1950, described it as such:

Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge. They hope to get both sentenced to a year in prison on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain. Each prisoner is given the opportunity either to: betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent. The offer is:

• If A and B each betray the other, each of them serves 2 years in prison
• If A betrays B but B remains silent, A will be set free and B will serve 3 years in prison (and vice versa)
• If A and B both remain silent, both of them will only serve 1 year in prison (on the lesser charge)

The dilemma is normally presented in a payoff matrix like the following (each cell shows years in prison as A's sentence, B's sentence):

                     B stays silent    B betrays
    A stays silent       1, 1             3, 0
    A betrays            0, 3             2, 2

What makes the Prisoners’ Dilemma so fascinating is that the result of both prisoners behaving rationally — that is, betraying the other, which always leads to a better outcome for the individual — is a worse outcome overall: two years in prison instead of only one (had both prisoners behaved irrationally and stayed silent). To put it in more technical terms, mutual betrayal is the only Nash equilibrium: once both prisoners realize that betrayal is the optimal individual strategy, there is no gain to unilaterally changing it.
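The equilibrium claim is easy to verify by brute force: check every pair of strategies and keep only those where neither prisoner can shorten their own sentence by unilaterally switching. A minimal sketch in Python, using the sentences from the payoffs above ("S" for stay silent, "B" for betray — the labels are just illustrative):

```python
# Years in prison for (A, B); "S" = stay silent, "B" = betray. Lower is better.
PAYOFF = {
    ("S", "S"): (1, 1),
    ("S", "B"): (3, 0),
    ("B", "S"): (0, 3),
    ("B", "B"): (2, 2),
}

def is_nash(a, b):
    """A profile is a Nash equilibrium if neither player can cut
    their own sentence by unilaterally switching strategies."""
    for alt in "SB":
        if PAYOFF[(alt, b)][0] < PAYOFF[(a, b)][0]:
            return False  # A would rather deviate
        if PAYOFF[(a, alt)][1] < PAYOFF[(a, b)][1]:
            return False  # B would rather deviate
    return True

equilibria = [profile for profile in PAYOFF if is_nash(*profile)]
print(equilibria)  # [('B', 'B')] -- mutual betrayal is the only equilibrium
```

Note that mutual silence fails the test precisely because either prisoner can improve from one year to zero by defecting — which is the whole dilemma.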


What, though, if you played the game multiple times in a row, with full memory of what had occurred previously (this is known as an iterated game)? To test what would happen, Robert Axelrod set up a tournament and invited fourteen game theorists to submit computer programs with the algorithm of their choice; Axelrod described the winner in The Evolution of Cooperation:

TIT FOR TAT, submitted by Professor Anatol Rapoport of the University of Toronto, won the tournament. This was the simplest of all submitted programs and it turned out to be the best! TIT FOR TAT, of course, starts with a cooperative choice, and thereafter does what the other player did on the previous move…

Analysis of the results showed that neither the discipline of the author, the brevity of the program—nor its length—accounts for a rule’s relative success…Surprisingly, there is a single property which distinguishes the relatively high-scoring entries from the relatively low-scoring entries. This is the property of being nice, which is to say never being the first to defect.

This is the exact opposite outcome of a single-shot Prisoners’ Dilemma, where the rational strategy is to be mean; when you’re playing for the long run it is better to be nice — you’ll make up any short-term losses with long-term gains.
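Axelrod's result can be reproduced with a few lines of simulation. The sketch below (my own illustration, not Axelrod's tournament code) uses his standard per-round scores — 3 each for mutual cooperation, 1 each for mutual defection, 5 for a lone defector, 0 for a lone cooperator — and pits TIT FOR TAT against itself and against an always-defect strategy:

```python
# Scores per round (Axelrod's tournament values): both cooperate -> 3 each,
# both defect -> 1 each, lone defector -> 5, lone cooperator -> 0.
SCORE = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
         ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Start nice, then mirror the opponent's previous move.
    return "C" if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return "D"

def play(p1, p2, rounds=200):
    """Run an iterated game and return total scores for both players."""
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = p1(h1, h2), p2(h2, h1)
        r1, r2 = SCORE[(m1, m2)]
        h1.append(m1); h2.append(m2)
        s1, s2 = s1 + r1, s2 + r2
    return s1, s2

print(play(tit_for_tat, tit_for_tat))    # (600, 600)
print(play(tit_for_tat, always_defect))  # (199, 204)
```

TIT FOR TAT never beats its opponent head-to-head (it loses to always-defect by the five points it gives up in round one), but mutual niceness between two TIT FOR TAT players earns 600 points each versus the roughly 200 that defection-heavy matchups produce — which is why "nice" strategies dominated the tournament totals.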


