The difference between Public Goods Problems and Coordination Problems… and whether it matters.

By Michael Zargham, Scott Moore, and Matt Stephenson (author order randomized)

Public Goods Problems and Coordination Problems describe some of the most fundamental and important challenges we face both in Web3 and the wider world. While they are related concepts, they are theoretically distinct. Is this a distinction without a difference? In this note we explore why game theorists and even behavioral economists (like one of your co-authors) make such a distinction. We will describe what the difference is and why it might matter.

Let’s begin with a simple example:

Suppose you’re at a concert and everyone is comfortably sitting down and enjoying the performance. Suddenly people several rows in front of you start standing up to get a better view, which makes everyone behind them need to stand. Now, while you’re still comfortable in your seat, you can’t see anything. So you give up and stand too.

Now let’s analyze the situation: nobody’s view has actually improved by standing instead of sitting--the backs of people’s heads are just as in-the-way as they were before. But now everyone is less comfortable. We have thus moved to a worse outcome. And worse still, the situation is more sticky, more “equilibrium-ey”[1] if you will. Initially we all stood because just a single person standing could affect everyone behind them, totally changing things. But now if someone sits it doesn’t really matter--we all just have to stand to see over everyone else standing in front of us. To solve the problem, perhaps we could get everyone to sit at the same time.[2]

This sounds like a coordination problem, right? Not so fast!

It’s only coordination if “everyone sitting” is a stable equilibrium. And as we’ve defined it, getting everyone to sit again isn’t actually a solution. This is the essential difference between Public Goods Problems and Coordination Problems: in a Coordination Problem the optimal outcome is stable, whereas in a Public Goods Problem it is not. More technically, coordination games exhibit multiple Pareto-ranked equilibria that include the social optimum (Dutta, 2012), whereas public goods games have a non-equilibrium social optimum (Hichri & Kirman, 2007).

If you find that confusing, don’t worry--there was literally an Oscar-winning movie about John Nash that got this wrong. In A Beautiful Mind, when it came time to describe the Nash Equilibrium (Nash’s defining contribution), the movie instead depicted an unstable non-Nash outcome, just like the “sitting down at a concert” example. You can watch it here.

The key distinction is that, unlike a coordination game, our concert example and Russell Crowe’s bar-room musings in the movie both feature an “incentive to defect” (aka the “free rider problem”). In the concert example, when everyone else sits down and you stand, you actually do get a better view.[2] Or, using the movie’s dated example, when your friends go for the brunettes and don’t “block” you, you are actually clear to “go for the blonde.”

You can see in the illustration below that the difference between the two games depends precisely on the relative magnitude of the “defect” payoff. The Public Goods game on the left can be transformed into the coordination game on the right with a single change: reducing the payoff to the defector (circled in red).[3]
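
To make that single change concrete, here’s a small sketch. The payoff numbers are illustrative assumptions on our part (borrowing the (15, 15) cooperative payoff from footnote [3]), not the exact figures in the illustration:

```python
from itertools import product

def pure_nash(payoffs):
    """Return the pure-strategy Nash equilibria of a 2-player game.
    payoffs[(row, col)] = (row player's payoff, column player's payoff)."""
    strategies = {s for pair in payoffs for s in pair}
    equilibria = []
    for r, c in product(strategies, repeat=2):
        pr, pc = payoffs[(r, c)]
        # An outcome is an equilibrium if neither player gains by deviating alone.
        row_best = all(pr >= payoffs[(r2, c)][0] for r2 in strategies)
        col_best = all(pc >= payoffs[(r, c2)][1] for c2 in strategies)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

# Public-goods-style game: defecting against a cooperator pays 20.
pg = {("C", "C"): (15, 15), ("C", "D"): (5, 20),
      ("D", "C"): (20, 5),  ("D", "D"): (10, 10)}
print(pure_nash(pg))            # [('D', 'D')] -- the optimum (C, C) is unstable

# One change: cut the defector's payoff from 20 down to 12.
coord = {("C", "C"): (15, 15), ("C", "D"): (5, 12),
         ("D", "C"): (12, 5),  ("D", "D"): (10, 10)}
print(sorted(pure_nash(coord)))  # [('C', 'C'), ('D', 'D')] -- now a coordination game
```

With the lower defect payoff, the social optimum (C, C) becomes one of two Pareto-ranked equilibria--exactly the structure of a coordination game.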

But this means that, having defined the concert problem as a Public Goods game, it doesn’t matter if you get your bullhorn out and coordinate everyone back into their seats. Someone will always stand back up. And soon we will all be standing again. That’s what it means to have a “non-equilibrium social optimum”--you can see it, you just can’t live in it.

But that’s just the problem as we’ve defined it. Is that actually the problem?

What game is it?

Let’s pause and reflect for a second.

We are not trying to convince you that you can’t make people sit down at concerts. Nor do we care about the semantic differences between “coordination” vs. “public goods” problems. This difference matters, we contend, because the public goods framework is good for stating problems and the coordination framework is good for conceptualizing solutions.

The likely reason that A Beautiful Mind got the Nash Equilibrium wrong is that Public Goods problems almost always present to us humans as, well... problems. That is, when we see them we are often drawn to solving them. And here’s where you’re going to start to feel the edges of not just game theory but any sort of human-regarding theory.

The mind balks at Public Goods Problems. “Don’t you see” you want to ask, incredulous, “that if you stand up, soon everyone will stand up and your view will be the same as before but now you’re less comfortable?” That’s what a public goods problem feels like. Let’s live in them for a little bit and feel their power before we move on and discuss how to solve them.

The prototypical Public Goods game has been run in a laboratory setting countless times. Run it with more than a few players and “players do not, in general, manage to coordinate on cooperative behaviour” (ibid.) Worse still is that, unlike some two-person games, repeating these games doesn’t help. In fact, cooperation typically gets worse over time as you can see in the figure below:

The way a laboratory public goods game works is: everyone gets some money--say 10 people get $20.00 each--which they can contribute to the public pool or keep. The experimenters then multiply the total contributions by some factor, e.g. increasing the pool by another 20%, and split the result evenly among everyone in the game. Crucially, this split goes to everyone, whether they contributed or not.[4]
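
A quick sketch of that arithmetic, using the parameters above (10 players, $20 endowments, a 20% markup on the pool) as assumed values:

```python
def payoffs(contributions, endowment=20.0, multiplier=1.2):
    """Each player keeps (endowment - contribution) and receives an
    equal share of the multiplied pool -- whether they contributed or not."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)        # the non-excludable split
    return [endowment - c + share for c in contributions]

# Nine full contributors and one free rider, out of ten players:
group = [20.0] * 9 + [0.0]
result = payoffs(group)
print(result[0])    # 21.6  -- a contributor's payoff
print(result[-1])   # 41.6  -- the free rider keeps $20 AND takes the split

# If everyone contributes, each player ends up with $24 -- the social optimum:
print(payoffs([20.0] * 10)[0])   # 24.0
```

Every dollar contributed grows to $1.20 for the group, so full contribution is socially optimal--but only $0.12 of that dollar comes back to the contributor, so keeping it is always individually better.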

On the graph above you can see the average “contribution” on the y-axis, with rounds on the x-axis.[5] And have a look at period 1 up there on the left, where most everyone is hopeful and optimistic and so the average contribution is quite high. Everyone does pretty well. What goes wrong to make everything unravel?

Having personally run, and participated in, these public goods experiments many times, one of your co-authors can explain what’s largely going on here. To get a feel for it, here’s what this game looks like with four players:

Look familiar? It’s that annoying “incentive to defect” again (circled in red). What’s happening is that the free rider is not excluded from receiving the “common pool” split. And since they also get to keep the $20 they didn’t contribute, this means they always make the most money. And then in the next rounds this happens again and again, the gap growing ever wider.

After a few rounds your fellow contributors are starting to get a little pissed off. They believe in the common good, sure. But you can’t just sit back and allow these free riders to make everyone a sucker like this, can you? The unraveling of cooperation that you see in the experimental chart is often the “we’re not going to be suckers” feeling spreading through a population.

That may look a bit depressing, but there’s some good news too. For one thing, even if you repeat this for a very long time, roughly 10% of the population typically never gives up the faith and just continues to contribute no matter what. These are the saints. For another, if you just shuffle the players and start the whole game over, you’ll notice something amazing: most everyone regains their optimism. It looks about like this:

That is, in each period 1 you get this “restarting” effect as everyone tries to jump start cooperation again. Hope springs eternal! Of course, because they are playing a true Public Goods game (unfortunately) cooperation devolves again. But this behavior is a crucial building block of a real-world solution. This hope we observe is why we get to try and fail and try again.

How do we solve these things?

We saw that a public goods problem is characterized by an unstable social optimum, vulnerable to individual incentives to defect. But we also saw the experimental evidence that some people never give up, and that in a new game most everyone will try again to reach the social optimum, hoping that maybe this time the unstable outcome will be stable. They are hoping, we might say, that this time the Public Goods game will turn out to be a Coordination game.

With that in mind, we can propose an ad hoc hierarchy for solving these problems. Speaking roughly, a public goods problem is solved by transforming/reimagining it to be a coordination problem:

Starting from the top right you have the best possible solution concept, called a “Prisoner’s Delight.” It means that the socially optimal outcome is the only stable equilibrium! This is not even a coordination game--it’s a solved problem. Whereas we saw that Public Goods Problems almost always strike us humans as problems to be solved, the Prisoner’s Delight “problem” really doesn’t seem like a problem at all.

As a result, our example is going to feel a little silly. A good example might be “setting your house on fire”. Most people don’t want to set their house on fire and so “everyone’s house not on fire” is stable with no incentive to defect. This is the social optimum. And what about the situation where everyone else’s house is on fire, does that make you want to set your house on fire too? Of course not! Which means there’s not even an equilibrium at “everyone sets their house on fire” because now the temptation is to, well, “not set your house on fire”. Absent insurance shenanigans, “don’t set your house on fire” is the strictly dominant strategy and any deviations from it do not destabilize the system.[6]

So let’s move down the chart to proper coordination problems (with multiple equilibria). In this world, the equilibria are at least stable once people are in them and so we “just” need to get them there. Making people choose the “good” equilibrium in a coordination problem can be described as making it “focal.”

Here is a (non-exhaustive & ad hoc) hierarchy of coordination games ranked by how “focal” the equilibrium tends to be, ranked from best to worst:

  1. Payoff Dominance. A preferred equilibrium which provides the highest payoff for all players in the game is extremely focal.[7] If you play a game like this in the lab it will feel silly and hardly like a game at all because everyone does the preferred thing.
  2. Risk Dominance. A preferred equilibrium which is risk dominant is more fragile but can be naturally coordinated on. Picking an action that is risk dominant roughly means that if other players don’t do what you expect you’re still relatively better off than you would have been otherwise (mutatis mutandis).
  3. Inertia. If the preferred equilibrium isn’t payoff dominant (as in #1) or risk dominant (as in #2), you can still make use of the fact that people will just stay with what they’ve done in the past. Thus if you can start there or get people there they may well stay.
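
To make the first two rungs concrete, here’s a small sketch using the classic Stag Hunt payoffs (illustrative numbers we’re assuming, not drawn from the text above). For symmetric 2x2 games, the Harsanyi-Selten criterion says one equilibrium risk-dominates the other when deviating away from it costs relatively more:

```python
def analyze(a, b, c, d):
    """Symmetric 2x2 game with u(A,A)=a, u(A,B)=b, u(B,A)=c, u(B,B)=d.
    Assumes both (A,A) and (B,B) are equilibria (a >= c and d >= b).
    Returns (payoff-dominant strategy, risk-dominant strategy)."""
    payoff_dominant = "A" if a > d else "B"
    # Harsanyi-Selten (symmetric case): (A,A) risk-dominates (B,B)
    # iff its deviation loss (a - c) exceeds the other's (d - b).
    risk_dominant = "A" if (a - c) > (d - b) else "B"
    return payoff_dominant, risk_dominant

# Stag Hunt: A = hunt stag, B = hunt hare.
# (Stag,Stag)=4, (Stag,Hare)=0, (Hare,Stag)=3, (Hare,Hare)=2
print(analyze(4, 0, 3, 2))   # ('A', 'B'): stag pays best, but hare is safer
```

When the two dominance criteria disagree, as here, coordinating on the “good” equilibrium is fragile--which is exactly why rung #2 is weaker than rung #1 and why rung #3 (inertia) sometimes has to carry the load.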

If we keep only to monetary utilities, leaving the behavioral mechanism design out of it, these solution concepts are basically: “Everybody gets rich” (1), “Everybody avoids losing money” (2), and “Who cares about getting rich, let’s just do things the old-fashioned way” (3). And, speaking somewhat loosely, you can actually use the commitment aspect of smart contracts to change the game along these dimensions.[8]

Expanding to the behavioral sphere our options really open up. Now we can grant, say, social status to coordinators. We can signal disapproval to those who, say, stand up in front of us at a concert. We could even make “doing things the old-fashioned way” have a sort of identity bonus, an additional utility stemming from the embracing of tradition.

We might be revealing our bias, but real solutions will often live in the behavioral realm at least to some degree. When you think about solving the “concert problem” from earlier you might be inclined to say to the stander, “Excuse me but would you mind sitting? The rest of us would prefer it.” And when they then sit and everyone is better off, whatever kindness, reciprocity, or even shame aversion that person felt that made them sit is what drove the coordinating equilibrium.

Which is all to say that “transforming” a game along these lines is a fundamentally creative act. And Elinor Ostrom’s work reminds us that creative solutions abound, and that communities can solve what look to outsiders like Public Goods problems (thereby demonstrating that they are actually coordination problems). Ostrom was the essential inspiration for our solution concepts above, reimagining Public Goods games as Coordination games.[9]

Do Public Goods Problems Actually Exist?

The Public Goods framing remains invaluable, at least insofar as it captures what a social problem feels like before it is solved. Each of us knows there are situations in which we can all see the social optimum but can’t quite figure out how to get there yet: the sink full of dishes that all the roommates would like to have washed, but would each rather someone else wash.

And we also know there are situations where we think we’ve reached a solution, only to find it’s unstable--we can feel the frustration after we whip all the roommates into a Sunday morning cleaning frenzy to finally get all the dishes washed... only to watch over time as the sink fills back up with the roommates nowhere to be found. This is the frustration of the would-be coordinator discovering that they’re still playing a Public Goods game.

On the other hand we can’t help but feel -- watching people unhappily standing up at a concert, writing valuable code in a private repository, or failing to develop scientific ideas -- that there must be solutions. And it’s comforting to know that earnest attempts to solve these problems may be met by the hopefulness we saw in the experiments, the inextinguishable belief that maybe this time we can all get to the social optimum and stay there.

In the end, “what game are we playing” is not really an answerable question. To the pure game theorist, the idea that you can “reimagine” a public goods game as something else is nonsensical--it’s either a public goods problem or it isn’t. In that light, let’s choose to imagine that all Public Goods problems are just Coordination Problems in disguise. And so let’s solve them.

[1] Zargham would like his discomfort with this phrasing noted in the record. The concert example is adapted from Stephan Meier.
[2] And, to be slightly technical, the utility you get from the better view is greater than the disutility from not sitting.
[3] This is a solution concept that we will return to later. And note that you could also, say, transform the game by raising the cooperative payoff above (15, 15). The stylized facts about the formal relation between coordination-type games and public goods/PD type games descend from Luce and Raiffa 1957.
[4] This feature, called non-excludability, is fundamental to what a public good is.
[5] The different colored lines represent outcomes across different national populations (almost always students). We’d suggest not reading too much into those differences unless it’s to rib our Aussie friends. Chart from
[6] Problems like “who pays for the fire department” bring us back into public goods land of course, but that’s not what we’re talking about here.
[7] Assuming this payoff is not risk dominated, in which case see #2.
[8] This article was an elaboration on this piece and emerged from discussions with Cathy Barrera and the community, later elaborated on well by Virgil Griffith.
[9] And there is thoughtful and interesting work being done on “Ostrom Compliant” organizations. See also this excellent twitter thread on ways to think about what these games are and how they can be transformed (and thank you to Oliver Beige for pointing us to this).
