
### Course: AP®︎/College Microeconomics > Unit 4

Lesson 5: Oligopoly and game theory
- Oligopolies, duopolies, collusion, and cartels
- Prisoners' dilemma and Nash equilibrium
- More on Nash equilibrium
- Why parties to cartels cheat
- Game theory of cheating firms
- Game theory worked example from AP Microeconomics
- Oligopoly and game theory: foundational concepts
- Game Theory

© 2024 Khan Academy

# Game theory of cheating firms

We deepen our understanding of Nash equilibrium by exploring Pareto optimality. Created by Sal Khan.

## Want to join the conversation?

- In the table around 3:00 in, isn't every item in the right-hand column Pareto optimal? Pareto efficiency is defined, as you said, as a state of affairs where no party can be made better off without taking away from another party. Every data point in the right-hand column effectively represents a PPF point, and PPF points are all by definition Pareto efficient. (15 votes)
- No, only (250, 250) is Pareto efficient. To get to (280, 200) or (200, 280), one player was made better off, but only by making the other one worse off. If there had been a point (280, 250), then you would have made one player better off without making the other worse off. (16 votes)

- So does Nash disprove Pareto, or are they not rival in nature? (7 votes)
- I wouldn't say Nash disproves Pareto. Nash approaches the question from the perspective of an individual player in a game, while Pareto approaches it from the perspective of a social planner.

Ryan describes the process of moving to a Nash equilibrium well, but I disagree with Ryan's answer with regard to Pareto efficiency, because the Pareto optimal state doesn't have anything to do with "the best returns"; all it means is that the only way someone can be made better off is by making someone else worse off. Every single point in this table where one of the firms is earning more than 250 (in the right-hand column and on the bottom row) is actually Pareto efficient. (21 votes)

- In this case, which place would be most Pareto efficient when factoring in consumer surplus? Don't the consumers' gains increase as the price goes down? Also, assuming we have all the necessary data, is Pareto efficiency always well defined, or could there be multiple Pareto-efficient points? (4 votes)
- There are almost always multiple Pareto efficient equilibria. This happens because all you need for Pareto efficiency is that no one can be made better off without someone else being made worse off. Even situations that we would probably call unfair can be Pareto efficient, for example: if you have all the money in the world and everyone else has nothing that is still Pareto efficient because you can't give other people some money without giving some up and making yourself worse off.(6 votes)
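The "no one better off without someone worse off" test is easy to check mechanically. Here is a minimal Python sketch; the first three payoff pairs come from the video's table, and the `(180, 180)` both-cheat outcome is an assumed illustrative value, not a number from the video:

```python
# Pareto filter for a two-player payoff table.
# An outcome is Pareto-dominated if some other outcome gives both players
# at least as much (and, being a different pair, at least one strictly more).
outcomes = [(250, 250), (280, 200), (200, 280), (180, 180)]

def pareto_efficient(outcomes):
    def dominated(o):
        return any(p != o and p[0] >= o[0] and p[1] >= o[1] for p in outcomes)
    return [o for o in outcomes if not dominated(o)]

print(pareto_efficient(outcomes))  # [(250, 250), (280, 200), (200, 280)]
```

Note that (280, 200) and (200, 280) survive the filter, which matches the point made in an earlier reply: the lopsided cells can be Pareto efficient too, since helping the losing firm would require hurting the winning one.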

- But, that's just a theory...a game theory!(4 votes)
- It actually can happen in real life. O_O(1 vote)

- New guy here... so this might sound stupid.

At about 2:00 in, Sal says that 250 is the Pareto optimum, but what about 70, or 150, or 30, or 110, or 230, or 190? (2 votes)
- Well, as Sal says at 2:15, it means that there is no other equilibrium the balance could go to without making one of the two parties worse off.

So if you look at the table: right above 250-250 is 280-200. If you add those up, 480 < 500. Now why is that? It's because even though one increased its outcome by 30, the other lost 50! (3 votes)

- Unfortunately this video paints a somewhat misleading and simplified picture. The problem is that real-life duopolies/cartels aren't usually based on single market transactions, but on iterated transactions. Iterating a game essentially means that a participant doesn't have just one chance to make a strategic decision, as in a single-transaction situation; instead there are multiple subsequent transactions, and the participant can re-evaluate their strategy during every single one. Essentially the participants replay the game constantly and remember all of their own previous strategies in individual iterations, as well as those of the other parties. This leads to some major differences from what is portrayed in the video above, which I'll attempt to explain below.

All of the following is assuming that MPC, market demand structure, average unit cost etc. do not vary with time and the participants have perfect information.

In a single, discrete case of the prisoner's dilemma, the Nash equilibrium is always non-cooperation. However, if you iterate the prisoner's dilemma indefinitely and the parties know that the game will be replayed, the Nash equilibrium moves from not cooperating to cooperation. This has to do with the fact that they know that the next iteration of the game will also give them a profit if they cooperate and they can assign a value for it, provided that there's enough reliable data without excess noise (I won't go into games with imperfect/asymmetric information, market variables and probability). The notion is generally called discounted cash flow, or DCF for short.

In this scenario, the participants will discount their anticipated profits from all future iterations and take them into account while making strategic decisions in every single iteration, which radically alters the payoff matrix. An important factor in the discounting process is that duopolies and cartels usually have a somewhat unforgiving trigger strategy in regards to cooperation, in which if the other one decides not to cooperate (undercut cartel prices or increase production), it will lead to the other party "losing trust" and not cooperating for the rest of the iterations of the game, thus returning to the classic Nash equilibrium of the Prisoner's dilemma. Simply put, an attempt by one party at hogging the cartel profits will lead to the collapse of the cartel and both participants returning to perfect competition, exactly in the manner as shown in this video. This leads to the following point: if the game is iterated indefinitely, the profits from cooperation are infinitely higher than not cooperating, the latter of which would result in the loss of all future cartel profits due to the grim trigger. If the discounted cash flow from cooperation now and in future iterations will always be higher than non-cooperation, neither party can improve its strategy by undercutting prices or increasing production. The Nash equilibrium hops over to the Pareto Efficient solution (in the reference frame of the participants of the game, not society as a whole).

All this complex jargon can be summed up by the old idiom about the stupidity of killing the goose that lays the golden eggs in hopes of a slightly higher one-time profit. I appreciate the attempt at just explaining the basics, but unfortunately the basics lead to oversimplification and demonstrably false models. To truly understand cartel behavior in game theory, you a) have to differentiate between single games and iterated games, and b) understand basic trigger strategies and which of them applies in any given model. (3 votes)

- Since we know the behavior of firms in a duopolistic market where there is a lack of coordination, ceteris paribus, could we use that understanding to find markets where coordination is present? (2 votes)
- Well, yes, but it's usually not a secret. OPEC doesn't hide the fact that it's an oil cartel. DeBeers doesn't hide the fact that it's a diamond cartel. There must be a large barrier to entry for a cartel to work - typically this involves land rights (mining) or intellectual property (IP).

An IP example would be licensed patents. A patent-holder for example might license his patent to multiple parties, but have them sign agreements to each only sell in certain markets or countries.

It would be hard to keep a commodity cartel secret, though. As you imply, if there were only two producers of screws and they both suddenly doubled their price (despite no increase in demand or dearth of factors of production like raw iron), people would immediately suspect foul play. (2 votes)

- How could we apply this theory in business? Will it be practical? Or is it just a way to analyze a situation and support decision making? (2 votes)
- The theory explains the existence of cartels in markets that could, in theory, be free markets. Recent EU decisions to crack down on cartels have revealed cartels in markets for construction work, raw sugar, and beer production. These markets have enough suppliers to provide full competition, and still they didn't compete. The theory also shows that due to the competitive nature of these markets there is a strong incentive for a single supplier to break the pact, which could lead to the dissolution of the cartel. In business it could be beneficial to collude, but you have to trust your partners. Knowing the economic impulse to cheat is useful and supports decision making in companies. (2 votes)

- I don't understand why 250|250 cannot be considered a Nash equilibrium if both parties have immediate knowledge of each other's decisions and the immediate ability to react. I'll be A and you be B. Right now we are both producing 25 for 250|250 profit:

```
if (I decrease production) {
    I immediately lose profit; // duh
} else if (I increase production) {
    while (my profit >= 0 && your profit >= 0) {
        you will increase production; // to recover partial profit, but making me lose more
        I will increase production; // to recover partial profit, but making you lose more
    }
    I have lost all my profit; // along with you
}
```

Therefore, no matter what action I take from 250|250, I will lose profit. Wouldn't that mean 250|250 is a Nash equilibrium? (2 votes)
- A Nash equilibrium is a state where neither player has an incentive to cheat, holding the other player's strategy fixed. In the 250|250 scenario, a firm could cheat and get 280 instead of 250, even though it hurts the market as a whole. (1 vote)
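The objection above can be tested with the one-step unilateral-deviation check that the Nash definition actually uses: freeze A's strategy and ask only whether B's immediate payoff improves. A minimal sketch, with payoffs taken from the video's table:

```python
# Nash test at (cooperate, cooperate): hold A fixed and compare B's
# immediate payoffs. Any later retaliation by A is a *change* in A's
# strategy, which the Nash definition explicitly holds constant.
b_payoffs_if_a_cooperates = {"cooperate": 250, "cheat": 280}

best_reply = max(b_payoffs_if_a_cooperates, key=b_payoffs_if_a_cooperates.get)
print(best_reply)  # "cheat" — B gains by deviating, so 250|250 is not a Nash equilibrium
```

The commenter's loop describes what happens over several rounds of mutual retaliation, which is really an iterated game; the single-shot Nash test only looks at the first deviation.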

- Can an algorithm be written in MATLAB to find the Nash equilibrium state? Do you have lessons for that? (2 votes)
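For small games you don't need anything fancy: a brute-force best-response scan over the payoff matrix finds every pure-strategy Nash equilibrium, and the same few lines port directly to MATLAB. A Python sketch, where the 250/280/200 payoffs come from the video and the 210 both-cheat payoff is an assumed illustrative value:

```python
import numpy as np

# Pure-strategy Nash equilibria by brute force: cell (i, j) is an
# equilibrium if row i is A's best reply to column j AND column j is
# B's best reply to row i.
A = np.array([[250, 200],    # A's profit when A cooperates
              [280, 210]])   # A's profit when A cheats (210 is assumed)
B = A.T                      # symmetric game: B's payoffs mirror A's

def pure_nash(A, B):
    equilibria = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] == A[:, j].max() and B[i, j] == B[i, :].max():
                equilibria.append((i, j))
    return equilibria

print(pure_nash(A, B))  # [(1, 1)] — the only pure equilibrium is (cheat, cheat)
```

This enumeration is O(n·m) best-response checks, fine for classroom-sized games; mixed-strategy equilibria need more machinery (e.g. support enumeration).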

## Video transcript

Male Voice: In the last video we saw how there could be an industry
that has two firms, a duopoly, and if those
two firms coordinate they could behave as a
monopolist and they could optimize their collective economic profit. In the last video we saw that would happen when they produced 50 units per period, and they could split
it, assuming these were two identical firms, by
each producing half of it. In the case of the last video, that came to $250 of economic profit per firm. Then we saw that
there was an incentive to cheat; that by producing extra units, from a market's point
of view, the marginal economic, or the economic profit on those incremental units would
be negative, so the whole economic profit would shrink a little bit as you produced units beyond that, but the cheater would get a bigger chunk of those units, or the
bigger chunk of that economic profit. The
cheater could actually gain, go from $250 per time period to $280, and it would be all at the
expense of the non-cheater, and then some, who would lose even more than what the cheater gained. Obviously who was
initially the non-cheater now has an incentive to cheat, and they'll both keep increasing production if they want to keep up this one-upmanship. They both have the incentive to keep
going assuming that they don't hold to their cartel agreement until you get to a
quantity where there's no economic profit left.
Right over here, the way I've drawn it, the demand curve intersects the average total cost
curve right over here, and there's no economic profit left. We're producing a good quantity. It looks like it's
about 75 units combined; 75 units for the whole market. But at this point, the market price is equal to the average total cost, and so there's no economic profit per unit on average. What I want
to do is think about this in kind of a game-theoretic way. Let's look at a bunch of states. This is the optimal state
that we are starting off in. You can actually call it
the Pareto optimal state, named after Vilfredo Pareto. All it means is that's
the state where there's no other state where you can make someone better off without making
the other person worse off. Any of the states here, there are states, for example, where blue is better off. For example, in this state right over here blue is better off,
but green is worse off. So that's why it's
called Pareto optimality. Now, what I want to think
about is how these characters will change their state
due to their incentives. Then we'll talk a little bit
about Nash Equilibrium as well. On this axis, up here,
let's say this is one of the competitors. This
is where they produce 25, and let's say on the ultimate cheating quantity of 75, and this
is somewhat close to the market, or that is
the equilibrium quantity if this was perfect competition, they produce half of that, so this is them producing 37.5 units. As we
go from 25 to 37.5 units, they are cheating more.
This is more cheating and over here, this was no cheating. We can do the same thing
for the blue player. I'll write them as B.
This is them producing 25. This is them producing
37.5. As we go up and up and up, they are cheating more. This is a lot of cheating,
or more cheating. To think of it in a game-theoretic way, this is the Pareto optimal
state right over here. It's optimal in many ways. This is they've maximized
the total economic profit here. There's
no other state that one person would benefit without
making the other worse. Now, let's think about whether this is a Nash equilibrium. Let's remind oursleves what Nash equilibrium was. This was a state where
holding all the other players constant, so in
this case there's only one other player, a player can't
gain by changing strategy. In this case, changing
strategy is changing your output. Let's see if
that is true of this state right over here. Well,
let's hold A constant. If A is constant, we're
in this column right over here. Is there something B can do, some change of strategy B can make, that would allow B to gain? Sure. B can increase production. That's what we saw in the last video. We would go from this bottom right state to one right above it.
Now B's economic profit is 280, A's is 200. The pie has shrunk, but B has got a larger chunk of it. That was not a Nash equilibrium. There is, holding all others constant, there is a player that can gain
by changing their strategy. The Nash equilibrium
definition, just to make sure, they say it's a state where holding others constant no player can
gain by changing strategy. We just showed that at
least one player can gain by changing strategy
holding others constant. The same would be true if we
went the other way around. If we held B constant at 25, A could gain by changing his strategy,
could go right over there. This is not a Nash equilibrium. Then regardless of what state we go to, if we go to this state, it's still not a Nash Equilibrium. If we hold A constant, B could improve by
increasing his production; or if we hold B constant, then A can still improve by cheating even more. None of these are Nash equilibriums. From any one of these states,
if you hold A constant, B could produce more; or
if you hold B constant, A could produce more and get some gain. Over here, A's going from
130 to 160 and getting some gain. You can imagine
this keeps happening incrementally. They keep
producing more and more and more. We kind of go
there, then we go there, then maybe we go there, then we go there. Then maybe A cheats some more, then B cheats some more, then A
cheats a little bit more, B cheats a little bit more,
maybe a little bit more past that, then A cheats
a little bit more. The whole time the whole
economic profit pie, which is the sum of A
and B, is getting smaller and smaller until finally they're at zero economic profit. Now let's think about whether this is a Nash equilibrium.
Clearly, they won't want to move backwards. If you hold A constant, B would not want to
move down. Then he would lose economic profit. That doesn't work. He doesn't gain by doing
that. If you hold B constant, A wouldn't want
to move to the right. A would also lose economic profit. Now you might say what if
they produced beyond 37.5? Why can't they keep producing
and go beyond there? Holding A constant, if B were
to produce more than 37.5 from this state right
over here, then the total pie will get negative
and it doesn't matter if B's getting a larger or
smaller chunk of that pie. B's chunk is going to be negative. He's going to drive down
the price even more. You can see it over here. If they increase quantity beyond this market quantity of 75, 37.5 each, if we go beyond that,
the price that they would be selling at, at that
quantity over there, is lower than the average total cost. You're going to be, the total economic, the average economic profit per unit is going to be negative.
There will be a total of negative economic profit. Neither of them will want to produce more from this state either. All of a sudden in this top-left state,
holding others constant; if you hold A constant, B can't gain by changing his strategy,
and if you hold B constant A can't gain by changing his strategy, so we are, up here, in a Nash equilibrium. This is a Nash equilibrium. Like the prisoner's dilemma, it was not the optimal state. The
optimal state was here, but because they both wanted to cheat, they both wanted to do this one-upmanship, they both broke their contracts, they could end up in this state over here. But this state is stable: holding the other party equal, there's nothing that they could do to change, to optimize. What they could do, and
this is not what Nash applies to, they could say okay, we've been really ruining
each other's business. Let's coordinate again
and I'm going to decrease production if you decrease production. That is not, and they
could maybe try to go back to this state, and that does
not mean that this is not a Nash equilibrium because
by coordinating again we're not holding the others constant. We're saying I'm changing
my strategy while you're changing your strategy. Maybe only through another agreement
they could go over here. That still doesn't mean that this is not a Nash equilibrium. This
is a Nash equilibrium. If there's no coordination,
if you hold one player constant, the other player
cannot change their strategy, or change their production, for a gain.