  • Inspired by Goffman – pedestrian traffic

    Our starting point is Goffman’s Relations in Public Chapter 1.II on “Vehicular Units”. Goffman is here interested in the norms that regulate traffic, especially but not only pedestrian traffic. He first quotes Edward Alsworth Ross, Social Control, New York: The Macmillan Company (1908), page 1: “A condition of order at the junction of crowded city thoroughfares implies primarily an absence of collisions between men or vehicles that interfere one with another.”

    Goffman on page 6 then states the following: “Take, for example, techniques that pedestrians employ in order to avoid bumping into one another. These seem of little significance. However, there are an appreciable number of such devices; they are constantly in use and they cast a pattern on street behavior. Street traffic would be a shambles without them.”

    In this post I want to take up this claim and provide a model that allows us to discuss how people avoid bumping into each other. I will use Goffman’s work to help me to identify the appropriate model for this issue.

    Let me first identify the players. It seems that, while there are many people involved in street traffic, typically we encounter these people one by one. So I think for a first attempt it might be sufficient to study the situation of two people who are currently on course to bump into each other and who are trying to get past each other in order to avoid a collision. So we have two players in often fairly symmetric positions.

    Now here is one statement by Goffman (on page 8) about actions: “Pedestrians can twist, duck, bend, and turn sharply, and therefore, unlike motorists, can safely count on being able to extricate themselves in the last few milliseconds before impending impact.” Despite the fact that Goffman mentions so many possible actions, I will for a first attempt consider only two: try to pass on the left or try to pass on the right. But if we feel it may be useful we can go back and think more about the possible moves pedestrians can make.

    Now what about payoffs? Talking about cars or road traffic, Goffman, on page 8, states that “On the road, the overriding purpose is to get from one point to another.” For pedestrian or street traffic he states “On walks and in semi-public places such as stadiums and stores, getting from one point to another is not the only purpose and often not the main one”. He has more to say about payoffs on page 8: “Should pedestrians actually collide, damage is not likely to be significant, whereas between motorists collision is unlikely (given current costs of repair) to be insignificant.” All this strikes me as important for understanding pedestrian traffic. Let me see why. Suppose we ignore these last few statements, especially the one about pedestrians often having more than one purpose. We might then be tempted to say, and perhaps this is a good model of car traffic, that the game is simple. 
We have two players (the drivers facing each other), each has two possible choices (pass on the left L or pass on the right R) and if they pass each other that’s great (they both get a payoff of say one) and if they bump into each other that’s awful (they both get a payoff of say zero). In other words the game can be written in matrix form as follows:

     \begin{tabular}{c|cc} & L & R \\ \hline L & 1,1 & 0,0 \\ R & 0,0 & 1,1 \\ \end{tabular}

    What are the evolutionarily stable norms of behavior in this game? They must be a Nash equilibrium, which means no player should have an incentive to deviate from the norm. Could the norm be that everyone passes on the right? Yes! If everyone passes on the right, you would be foolish to pass on the left, because that would mean you bump into everyone and get a payoff of zero! If you instead also pass everyone on the right, you do indeed get past everyone and you enjoy your payoff of one. Completely analogously the norm could be that everyone passes everyone on the left. And indeed both of these norms exist for car traffic. In Japan people drive on the left, in Chile they drive on the right (most of the time). Recall that Goffman was well aware that different norms are possible (in different societies or places) – see the previous post.
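    For the computationally minded, here is a minimal sketch in Python (my own illustration, not part of Goffman’s text) that checks by brute force which of the four pure strategy profiles of the matrix above are Nash equilibria:

```python
# Payoffs of the symmetric coordination game above: both players get 1 if
# they choose the same side and 0 if they collide. PAYOFFS[i][j] holds the
# (row, column) payoffs when row plays i and column plays j (0 = L, 1 = R).
PAYOFFS = [[(1, 1), (0, 0)],
           [(0, 0), (1, 1)]]

def is_nash(i, j):
    # Neither player may gain by unilaterally switching to the other action.
    row_ok = PAYOFFS[i][j][0] >= PAYOFFS[1 - i][j][0]
    col_ok = PAYOFFS[i][j][1] >= PAYOFFS[i][1 - j][1]
    return row_ok and col_ok

equilibria = [(i, j) for i in (0, 1) for j in (0, 1) if is_nash(i, j)]
print(equilibria)  # [(0, 0), (1, 1)]: the all-left and all-right conventions
```

    The mixed equilibrium does not, of course, show up in a pure-strategy enumeration.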

    A quick aside: game theory experts will have noted that the game has a third Nash equilibrium, an equilibrium in so-called mixed strategies. Under a Harsanyi purification (Harsanyi, 1973) interpretation of this mixed equilibrium we could describe it like this. Half of all people pass on the left and the other half of all people pass on the right. This is an equilibrium, because if that’s indeed what the others are doing you are equally well off passing on the left and passing on the right: either way, half of the time you avoid an accident and the other half of the time you have one. This is an equilibrium, but not an evolutionarily stable one. Why not? Suppose slightly more than half of all people pass on the left. After a while you might notice this and then you find it slightly better to also start passing people on the left. But then the more people pass on the left the better this strategy becomes and gradually we move towards the norm of everyone passing on the left. This is probably more or less how the whole thing evolved in the early days of cart traffic. You may want to read Peyton Young’s “Individual Strategy and Social Structure”, Princeton University Press (1998). 
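    The instability argument can be illustrated with a small simulation (step size, horizon, and starting shares below are arbitrary choices of mine). The population share of left-passers drifts a small step towards the current best response each period:

```python
# alpha is the share of the population passing on the left. The best
# response is to pass left when alpha > 1/2 and right when alpha < 1/2,
# so the share drifts a small step towards the current best response.
def evolve(alpha, step=0.1, periods=200):
    for _ in range(periods):
        best_response = 1.0 if alpha > 0.5 else 0.0
        alpha += step * (best_response - alpha)
    return alpha

print(round(evolve(0.51), 3))  # 1.0: a small push above 1/2 tips everyone left
print(round(evolve(0.49), 3))  # 0.0: a small push below tips everyone right
```

    Starting exactly at one half nothing moves, but any small perturbation sends the population to one of the two conventions.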

    This all seems fine for cars, but what about pedestrians who supposedly, according to Goffman, use many “devices” and “techniques […] in order to avoid bumping into one another”? There seems to be absolutely no need for this here. So I think something is missing from this game. We should recall that pedestrians, according to Goffman, have side interests in addition to getting from A to B as fast as possible. It does not seem to be the pedestrian’s only goal to get past the oncoming person; the pedestrian might have a slight preference for which side would be better for her. Think of a person you face in a corridor who, after passing you, would like to turn left, to the bathroom for instance. This person probably has a slight preference, when possible, to pass you on her left. The problem with this now is that you don’t necessarily know that she wants to go to the bathroom, and thus, you don’t know that she prefers passing you on her left. That’s why she might want to use a “device” – a signal of some sort – that tells you that she wants to pass on the left.

    Ok, so hold on a moment. We need to proceed slowly. I first need to discuss this game without “devices” so that we can see why “devices” might be useful. So how do I take into account these possible side preferences that pedestrians might have? Well, I need to modify the payoffs people get, allow these payoffs to differ across people, and make the information about these preferences private. What I mean is that I will assume that everyone knows their own preferences (or payoffs) – they know whether or not they want to go to the bathroom on the left after they pass you – but you, their “opponent”, do not know. So how does this work? I will simply change the game as follows:

     \begin{tabular}{c|cc} & L & R \\ \hline L & 1-u,1-v & 0,0 \\ R & 0,0 & u,v \\ \end{tabular}

    What are these u’s and v’s? You should think of each u and v as representing a possible person with a particular preference for passing left and right. A person with a u (or v) close to a half is a person who cares only about getting past their “opponent” and does not care in any way about which side this happens on. A person with a u (or v) less than but close to one is a person who would much prefer to pass their “opponent” on the right. Say this person really urgently needs the bathroom just behind you on her right. A person with a u (or v) greater than but close to zero is similar but has a strong preference to pass on the left.

    How do we do this? Well, this is one of Harsanyi’s great contributions to the body of game theory. We assume that both u and v are drawn from some distribution F on a subset of the real line that includes the interval from zero to one. Then every person learns their own u (or v) but learns nothing (as yet) about their opponent’s v (or u). Every person only knows that her opponent’s v (or u) is random and that the randomness is described by the cumulative distribution function F. In fact we here make a radical assumption and one that we should probably challenge later. We assume that not only does every person believe her opponent’s v (or u) is distributed according to F, but we also assume that everyone knows this fact, and that everyone knows that everyone knows this, and so on ad infinitum (as game theorists like to say). In short we assume that this distribution F that governs the likelihood of the various preference types you might encounter is common knowledge among the two players. Modern game theory also has ways of dealing with deviating from this assumption. But for the moment we shall assume it. Under an evolutionary interpretation this assumption is less worrisome than one might initially think, but we should probably come back to it.

    So how do we “solve” this game? There are two ways one can look at a game with incomplete information. One can either consider each possible person (with a specific u or v) separately – the so-called interim view – or one can consider the problem from the ex-ante point of view, where each person has a strategy for all possible u’s that this person could end up with. These two approaches are equivalent but sometimes one is easier than the other for the analyst. Here the second, the ex-ante, approach is easier.

    So consider a person who many times throughout her life has to navigate pedestrian traffic. In each situation she might have a different u. Sometimes she just wants to get past her opponent, so her u is a half; sometimes she wants to turn left just after passing her opponent, so her u is close to zero; sometimes she wants to turn right just after passing her opponent, so her u is close to one. She develops a strategy as a function of her u. Now what would be a good strategy? Suppose there is some norm of behavior that people follow, a function from their u’s to passing left or right. For some such norm, what would be the best individual response to this norm? As you, with your u, do not know your opponent’s type v, knowing the norm that is in place only tells you with what probability (or frequency) your opponents will choose left or right. Suppose you know this probability of opponents going left (from your knowledge of the norm and the distribution function F) and call this probability  \alpha , then what is your implicit tradeoff between going left and going right? Recall that we are at the moment studying a situation where people do not communicate with each other (they do not use any “devices”). Well, if you go left you avoid bumping into each other with probability  \alpha and you do bump into each other with the remaining probability  1-\alpha . Your average (or expected) payoff from going left yourself is, thus,  \alpha (1-u) . 
Similarly, your average (or expected) payoff from going right is  (1-\alpha)u . When is left (strictly) better than right for you? Well, if and only if  (1-u)\alpha > (1-\alpha)u . Expanding both sides gives  \alpha - \alpha u > u - \alpha u , which simplifies to  u < \alpha .

    This means that, whatever the norm is, your best response to this norm is to use a simple cut-off strategy. Basically what you do is this. You observe the frequency of people going left and right (induced, as we said, by the combination of the prevailing norm and the distribution of preference types F) and you choose left yourself if your u for this interaction is less than the observed frequency of left and choose right otherwise.

    But if this is your best response to this norm, then it is everybody’s best response to this norm and it will become the norm itself! So everyone will be using the same cut-off strategy! But what will the cut-off be? Well if everyone uses a cut-off of say x, some real number between zero and one, then the probability that people use action left is the probability that their u is less than x, which is given by  F(x) . So if the cut-off people use is x, the probability of people going left is F(x) and this is the best response cut-off they will use. So we must have that x=F(x).

    So any stable norm must at least satisfy that we are in equilibrium, meaning that x=F(x). But is this enough for a stable norm of behavior? Not quite. To discuss this it is best to consider two examples of possible distributions F that could be present in different places of human pedestrian traffic.

    Suppose the preference u is, like so many things in life, normally distributed. Let’s say it is normally distributed with a mean of a half and a relatively low variance so that not too many people have a u less than zero or more than one. Please excuse the low tech (but I think sufficient) rendering of this example:

    [Figure: hand-drawn graph of the cumulative distribution function F for this normal example]

    What are the Nash equilibria of this game with such a distribution F? There are three. First we have F(x)=x for a value of x that is positive but pretty close to zero. What does this mean? It means that the norm is such that almost everyone attempts to pass others on the right except for very few people who have a very strong interest to pass on the left. This is an equilibrium that is pretty close to the equilibrium of the car-driving game of always driving on the right. There is a similar equilibrium with x=F(x) where x is just less than but very close to one. Here almost everyone attempts to pass others on the left except for very few people who have a strong interest to pass on the right. There is another equilibrium, however, at x equal to one half, where we also have x=F(x). Here we have that everyone who has the slightest inclination for passing on the left attempts to pass on the left and everyone who has the slightest inclination for passing on the right attempts to pass on the right. This is a mayhem equilibrium. But is it stable? No. Why not? Suppose that people use a slightly larger cut-off than one half, call it y. Then we find that, as F is quite steep at one half, F(y) > y. This means that now people’s best response cut-off F(y) is higher than the prevailing cut-off of y. So we expect people to adjust their cut-off upwards. This will go on until we reach the other equilibrium with a cut-off close to one. Similarly, a cut-off of just less than one half will lead to lower and lower cut-offs and eventually to the equilibrium cut-off close to zero.
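    Here is a small numerical sketch of this example in Python. The post only requires a mean of one half and a relatively low variance, so the standard deviation of 0.1 below is an assumed illustrative value:

```python
import math

# CDF of a normal with mean 1/2; sd = 0.1 is an assumed illustrative choice.
def F(x, mean=0.5, sd=0.1):
    return 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))

# Fixed points x = F(x): scan a fine grid for sign changes of F(x) - x.
xs = [i / 10000 for i in range(10001)]
fixed_points = sorted({round((a + b) / 2, 2)
                       for a, b in zip(xs, xs[1:])
                       if (F(a) - a) * (F(b) - b) <= 0})
print(fixed_points)  # three equilibria: near 0, at 1/2, near 1

# A cut-off equilibrium is stable where F crosses the 45-degree line from
# above, i.e. where the slope of F at the fixed point is below one.
def stable(x, eps=1e-4):
    return (F(x + eps) - F(x - eps)) / (2 * eps) < 1

print([stable(x) for x in fixed_points])  # [True, False, True]
```

    Only the two convention-like equilibria survive the stability check; the mayhem cut-off at one half is unstable because F is steep there.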

    So what have we achieved? Not so much. The whole situation is very similar to the much simpler game without the u’s and v’s and all that. So, again, it seems that we would not need any “devices” and “techniques” of “scanning” and “intention display” (Relations in Public, pages 11 and 12) in this situation. Even without this we obtain a stable norm of behavior in which there are (almost) no collisions. I will come back to this after another example.

    Suppose now that the place of pedestrian traffic that we are interested in has a very different F. Suppose that most people have a relatively strong preference for either left or right. For instance you can imagine a doorway that people need to get through before they then want to turn left or right pretty quickly after that. For these people, encountering each other in the doorway, the density f behind the cumulative distribution F is probably best described as being relatively high around low and high values of u and relatively low for medium values of u close to one half. Let us assume that F is symmetric around one half. Let us also assume that still there is almost no weight (in f) on values of u less than zero and larger than one. A picture of this situation:

    [Figure: hand-drawn graph of the distribution F for this doorway example]

    Now what equilibria do we get here? Actually we get only one equilibrium and it is a mayhem equilibrium. It is a cut-off equilibrium with cut-off x equal to one half, much like the mayhem equilibrium in the normal distribution case. But now the mayhem equilibrium is stable. Why? Because F is rather flat around the value of one half: if we consider a cut-off y that is slightly larger than one half we have that F(y) < y, the best response cut-off is thus smaller than the cut-off y, and we expect the cut-off to evolve back to a value of one half.
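    A quick numerical check, with an assumed bimodal stand-in for the doorway distribution described above (an equal mixture of two normals with means 0.15 and 0.85 and standard deviation 0.1 – all illustrative values):

```python
import math

def Phi(z):  # standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Bimodal F: most mass near 0 and near 1, symmetric around one half.
def F(x):
    return 0.5 * Phi((x - 0.15) / 0.1) + 0.5 * Phi((x - 0.85) / 0.1)

# Scan for fixed points x = F(x) via sign changes of F(x) - x.
xs = [i / 10000 for i in range(10001)]
fixed_points = sorted({round((a + b) / 2, 2)
                       for a, b in zip(xs, xs[1:])
                       if (F(a) - a) * (F(b) - b) <= 0})
print(fixed_points)  # only the mayhem cut-off at one half

slope = (F(0.5 + 1e-4) - F(0.5 - 1e-4)) / 2e-4
print(slope < 1)  # True: F is flat at 1/2, so the equilibrium is stable
```

    Because F is flat at one half, the best-response cut-off always moves back towards one half, exactly as argued above.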

    By the way, what I have described here is essentially the paper “Evolution in Bayesian Games II: Stability of Purified Equilibrium” by Bill Sandholm, Journal of Economic Theory, 136 (2007), 641-667.

    Now you might say that we do not often observe such a stable mayhem equilibrium and you are probably right. In fact this is where we should finally introduce Goffman’s “devices” and “techniques” of “scanning” and “intention display” (Relations in Public, pages 11 and 12). The way I would model this (and this is ongoing research I am currently undertaking with Yuval Heller at Bar Ilan University) is as follows. I would allow the players, after they know their own u, to send one message from a set of possible messages to their opponent, to be understood as their “intention display”. I would assume both players are “scanning” for messages of their opponent and that the players can then condition the side on which they try to pass their opponent on the two observed messages. You may want to think about this as players making a slight movement towards the left or right (this can be done a long time before the two actually meet) with the idea of signaling their intention as to where they would prefer to pass their opponent. What Yuval and I find so far is that for many distributions F (including the two I mentioned before) there is a universal and simple strategy (or norm) that is evolutionarily stable. If you have a u less than one half you send the message to be read as “I intend to pass on the left” and if you have a u greater than one half you send the message to be read as “I intend to pass on the right”. If both send the same message they follow through with their displayed intentions. If they send different messages – that is, a slight conflict of interest is revealed – they fall back on a background norm of always passing on the left (or always passing on the right). We are not quite done yet with this project, but I hope you will be able to read about it very soon.
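    The coordination logic of this candidate norm can be sketched in a few lines of simulation. Everything here is illustrative (I use a uniform type distribution and a left fallback), and the point is only that under this norm the two players never miscoordinate:

```python
import random

# Hypothetical sketch of the intention-display norm described above: signal
# L if your type u is below one half and R otherwise; if the two messages
# agree, both follow them; if they conflict, both fall back on passing left.
def side(own_msg, other_msg, fallback="L"):
    return own_msg if own_msg == other_msg else fallback

random.seed(1)
collisions = 0
for _ in range(10_000):
    u, v = random.random(), random.random()  # illustrative uniform types
    m1 = "L" if u < 0.5 else "R"
    m2 = "L" if v < 0.5 else "R"
    # A collision occurs when the two choose different actions in the matrix.
    if side(m1, m2) != side(m2, m1):
        collisions += 1
print(collisions)  # 0: the norm always coordinates the two players
```

    Whether such a norm is evolutionarily stable, and for which distributions F, is of course the substance of the research project, not of this toy simulation.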

    So how did game theory add to Goffman’s study? In many ways I think. First, we had to be very explicit about the various strategies (potential norms) people could be following in our model. Second, we can then explain why a specific norm among all the potential norms is expected (a stable equilibrium). Third, the formal analysis allows us to identify conditions under which different potential norms are stable or unstable. Fourth, we can now ask new questions. For instance, is a stable norm of behavior in pedestrian traffic efficient (maximizes the sum of utilities)? The answer, by the way, is typically no. And finally, the theory is so explicit in its predictions that it can be tested.

  • Inspired by the work of Erving Goffman – Introduction

    In April 2018 I spent a week at the Research Center for Social Complexity (CICS in Spanish) at the Universidad del Desarrollo (UDD) teaching a PhD research course on game theoretic modelling. The idea of this course, developed together with Carlos Rodriguez-Sickert, was to make it an experiential course of model building from question to model. We would start by reading parts of chapters of two books by Erving Goffman that deal with how people interact in public places and then attempt to provide game theoretic models of what we read.

    The books we used were “Behavior in Public Places – Notes on the Social Organization of Gatherings” published by The Free Press (1966), from which students decided to read Chapter 6 “Face Engagements” and Chapter 9 “Communication Boundaries”, as well as “Relations in Public – Microstudies of the Public Order” published by Basic Books (1971), from which we discussed the preface as well as parts of Chapters 1 “The Individual as a Unit” and 2 “Territories of the Self”.

    In this first of at least seven posts that I have planned on this subject, I explain why Goffman’s work is very amenable to game theoretic analyses and what game theory could possibly add to Goffman’s work.

    Goffman’s is, in his own words, a “naturalistic” study of the “public order”. He identifies the often subtle norms of behavior that underlie everyday human interaction and provides insights into why these norms are as they are.

    So why is this suitable for a game theoretic analysis? The everyday human interactions that Goffman describes and discusses are often between a small and well-defined group of individuals. These are the players in the game. In fact often it is about two individuals only. It is often relatively straightforward to see what possible actions people can take and Goffman describes the possibilities very well. In fact he employs, among other things, an extremely clever method by comparing everyday human behavior in the “normal” sphere with human behavior in a mental hospital. This allows Goffman to see what possible actions people could have chosen, but typically do not choose in the “normal” world. Finally, Goffman identifies the goals that people have in these interactions. This is what a game theorist calls the individual’s payoffs or utility. These are here rarely in monetary terms. But this is all we need for a game theoretic analysis: players, actions, and payoffs.

    Well, one more thing should be discussed: information. Who knows what? In fact in most of the human everyday interaction that Goffman discusses there are bits of information that not everyone who participates in the interaction has. One of Goffman’s other books has the title “The Presentation of Self in Everyday Life”. We would not need to present ourselves in some way or another if our co-players in the interaction knew everything about us from the beginning. In fact information, and who knows what, will be important in most of the examples that I will discuss in this series of posts. By the way, Goffman was well aware of the game theory of his time, such as von Neumann and Morgenstern’s 1944 book “Theory of Games and Economic Behavior” including zero-sum games as well as Schelling’s work including that on coordination games, focal points, and conflict. He could hardly have been aware of Harsanyi’s important work on incomplete information game theory as that came in the very late 1960s and early 1970s and most of Goffman’s work predates this. But this theory of incomplete information will be very useful to us in our game theoretic modelling of selections from Goffman’s work.

    So I have argued that game theory is highly suitable to study human everyday interaction as Goffman describes it. But game theory is actually not one theory; it is a collection of many theories. In fact it is probably better termed a collection of models and solution concepts. A solution concept, as much a misnomer as the term “game theory” itself, is simply what we expect the outcome of the game to be. Game theory, however, is awash, if this is the term I want, with solution concepts, from the many concepts of dominated strategies and rationalizability (in simultaneous and sequential interaction) to the many possible refinements of Nash equilibrium. And, by the way, I have already implicitly restricted attention to non-cooperative game theory. There is a whole world of additional solution concepts for models of cooperative game theory. I think, however, that for the most part Goffman’s work is best understood using non-cooperative games (as described above) with the solution concept of evolutionary stability, typically a particular case of Nash equilibrium.

    This is so because human everyday interaction satisfies all the assumptions of evolutionary game theory. The interaction is relatively small-scale, short-lived, and simple (much simpler than chess, for instance), and it is “recurrent”, meaning we face the same kind of interaction many times in our life, often with changing “opponents” (not like the interaction we have with our family members or co-workers – which, however, could also be studied, albeit with somewhat different tools and solution concepts – see e.g. my blog posts on lying II and III). This is the setting in which theory finds that we can, in many cases, expect Nash equilibrium play. In fact we can even expect special Nash equilibrium play, play that is also evolutionarily stable, that is, stable with respect to small changes in behavior. For an overview of the findings of evolutionary game theory see for instance the books “Evolutionary Game Theory” by Jörgen Weibull, MIT Press (1995), “Evolutionary Games and Population Dynamics” by Josef Hofbauer and Karl Sigmund, Cambridge University Press (1998), and “Population Games and Evolutionary Dynamics” by Bill Sandholm, MIT Press (2010).

    Now, finally, why is Goffman’s work especially amenable to game theoretic analyses? This is because Goffman’s view of these everyday human interactions and the norms that guide them is already very close to those of an (evolutionary) game theorist. For instance, on p.xx of the preface to “Relations in Public” he states that “the rules of an order are necessarily such as to preclude the kind of activity that would have disrupted the mutual dealings, making it impractical to continue with them.” Translated into the language of game theory this means that the rules are such that individuals cannot benefit from deviating from them. In other words these rules constitute a Nash equilibrium. On p.xx he states further that “However, it is also the case that the mutual dealings associated with any set of ground rules could probably be sustained with fewer rules or different ones,…”. In other words Goffman recognizes that many games have multiple equilibria. On p.xx he continues the last sentence as follows: “…, that some of the rules which do apply produce more inconvenience than they are worth.” In other words he realizes that Nash equilibria are not necessarily efficient.

    Another quote from “Relations in Public” on p. 59 perfectly demonstrates Goffman’s game theoretic view: “Second, the traditional way of thinking about threats to rules focuses on a claimant and a potential offender, and although this certainly has its value, especially when we examine closely all the means available for introducing remedies and corrections, still the role of the situation is usually thereby neglected. A better paradigm in many ways would be to assume a few participants all attempting to avoid outright violation of the rules and all forced to deal with the contingencies introduced by various features of various settings. Here the various aims and desires of the participants are taken as given – as standard and routine – and the active, variable element is seen to be the peculiarities of the current situation.” The participants are the players, their various aims and desires are their goals or payoffs, and the situation is the collection of the sets of available actions (based possibly on whatever information players have). Goffman, thus, suggests we can keep players and their goals fixed and consider how the structure of the game, the situation these people are in, induces human behavior. This is very much the view of game theory as well.

    To show you that a formal game theoretic analysis can provide additional insights over those gained by Goffman himself, I will, in at least the next six blog posts, actually build game theoretic models based on Goffman’s work (and based on the class discussion the students, Carlos, and I had at CICS). You can then check for yourself whether or not you see added value in these formal models. The “proof of the pudding is in the eating” as they say.

  • On Lying, III

    In my previous post I argued that a person can be kept truthful (in a repeated setting) by the threat of never believing this person again once this person has been caught lying even once. This is a strategy that many proverbs suggest.

    In this post I want to ask the question whether this threat is a credible one. I will have two answers to this question. Yes and no.  

    Haha. Well, it depends on what you call a “credible” threat. The most commonly known notion of a non-credible threat is due to Reinhard Selten. Paraphrasing his work somewhat, a threat is not credible if, once asked to actually go through with it, people do not find it in their own interest to do so. Reinhard Selten then defined a Nash equilibrium to be free of non-credible threats if it is a subgame perfect equilibrium, that is, a Nash equilibrium that is also a Nash equilibrium in every subgame. What does that mean? It means that, no matter what has happened in the game so far, no player would want to deviate from the strategy profile from that point onwards, provided they believe the other player will also follow their part of the strategy profile.

    Let us come back to the nappy-changing game as described in my first post in this series. Here is a brief summary of this game. You are a parent and ask your toddler son Ernest if his nappy is full (after some initial but uncertain evidence pointing slightly in this direction). Ernest can make his answer depend on the true state of his nappy (full or clean) and this answer can either be “yes” or “no”. You then listen to his answer and make your decision whether to check the state of his nappy or not as a function of what answer he gave. Let me reproduce the normal form depiction of this game again here:

     \begin{array}{c|cccc} & \mbox{always c} & \mbox{trust} & \mbox{opposite} & \mbox{never c} \\ \hline \mbox{always yes} & 0,\alpha & 0,\alpha & 1,1-\alpha & 1,1-\alpha \\ \mbox{truthful} & 0,\alpha & 1-\alpha,1 & \alpha,0 & 1,1-\alpha \\ \mbox{opposite} & 0,\alpha & \alpha,0 & 1-\alpha,1 & 1,1-\alpha \\ \mbox{always no} & 0,\alpha & 1,1-\alpha & 0,\alpha & 1,1-\alpha \\ \end{array}
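    As a sanity check, one can enumerate the pure equilibria of this one-shot game by brute force. The value  \alpha = 7/10  below is an assumed illustration:

```python
from fractions import Fraction

# Illustrative alpha; exact arithmetic avoids any floating-point ties.
a = Fraction(7, 10)
rows = ["always yes", "truthful", "opposite", "always no"]
cols = ["always check", "trust", "opposite", "never check"]
# payoffs[r][c] = (Ernest's payoff, parent's payoff), copied from the matrix.
payoffs = [
    [(0, a), (0, a), (1, 1 - a), (1, 1 - a)],
    [(0, a), (1 - a, 1), (a, 0), (1, 1 - a)],
    [(0, a), (a, 0), (1 - a, 1), (1, 1 - a)],
    [(0, a), (1, 1 - a), (0, a), (1, 1 - a)],
]

equilibria = []
for r in range(4):
    for c in range(4):
        # (r, c) is a pure Nash equilibrium if neither player can gain
        # by a unilateral switch to any other strategy.
        row_ok = all(payoffs[r][c][0] >= payoffs[r2][c][0] for r2 in range(4))
        col_ok = all(payoffs[r][c][1] >= payoffs[r][c2][1] for c2 in range(4))
        if row_ok and col_ok:
            equilibria.append((rows[r], cols[c]))
print(equilibria)
```

    In every pure equilibrium found, the parent always checks and Ernest’s answer carries no information, which is exactly the point made above.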

    The one-shot game only has (Bayes Nash) equilibria in which you do not trust Ernest’s statement and always check his nappy. This is bad for both of you. It would be better for both of you if Ernest were truthful and you believed him. This, I then argued in the previous post in this series, can be made an equilibrium outcome if the two players play the grim trigger strategy as suggested by all these proverbs. Note that I keep assuming that you (as a player in this game) can always find out about the true state of the nappy sooner or later. [One could here talk about the more recent literature on repeated games with imperfect monitoring, but I will refrain from doing so at this point – the reader may want to consult the 2006 book by Mailath and Samuelson on Repeated Games and Reputations.]

    The grim trigger strategy is as follows. You believe Ernest as long as he was always truthful in the past (and as long as you were always trusting in the past). Once you catch Ernest lying (or once you have not trusted Ernest) you never again believe him and always check his nappy from then on. Ernest is truthful as long as you have always trusted him (and as long as he has always been truthful). Once he catches you not trusting him (or once Ernest himself was untruthful) Ernest will be untruthful from that point on. The statements in brackets probably seem strange to someone not used to game theory, but they are needed for the statements below to be fully correct.
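    Since the rule only depends on whether a deviation has ever occurred, the grim trigger strategy can be written as a tiny state machine. This is merely an illustrative sketch:

```python
class GrimTrigger:
    """Play the cooperative action until a deviation (by either player)
    has ever occurred, then play the punishment action forever."""
    def __init__(self, cooperate, punish):
        self.cooperate, self.punish = cooperate, punish
        self.triggered = False

    def act(self):
        return self.punish if self.triggered else self.cooperate

    def observe(self, deviation_occurred):
        if deviation_occurred:
            self.triggered = True  # the trigger never resets

parent = GrimTrigger(cooperate="trust", punish="check")
print(parent.act())   # "trust" while the history is clean
parent.observe(True)  # Ernest is caught lying once...
print(parent.act())   # ..."check" forever after
parent.observe(False)
print(parent.act())   # still "check": the punishment is permanent
```

    Ernest’s side of the strategy has exactly the same shape, with truthful and untruthful play in the two roles.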

    This grim trigger strategy, then, is a subgame perfect equilibrium, provided Ernest’s discount factor  \delta > \alpha . Why? I have already argued that, in this case, Ernest would not want to deviate from the equilibrium path of being truthful, because lying would lead to you never trusting him again and this is sufficiently bad for him if his discount factor is sufficiently high. You certainly have no incentive to deviate from this equilibrium path, because you get the best payoff you can possibly get in this game. But subgame perfection also requires that the threat, when we are asked to carry it out, is in itself also equilibrium play. So suppose Ernest did lie at some point and you (and Ernest) are now supposed to carry out the threat. What do you do? Well, you now play the equilibrium of the one-shot game forever and ever. But this is of course an equilibrium of the repeated game, so the grim trigger strategy described here does indeed constitute a subgame perfect equilibrium.

    So according to Reinhard Selten’s definition the punishment of never ever believing Ernest again is a credible threat.

    But others have argued that Reinhard Selten’s notion of a credible threat is only a minimal requirement and further requirements may be needed in some cases. I do not know what Reinhard Selten thought of this, but I guess he would have agreed. So what is the issue?

    You and Ernest, when you look at this game, should realize that you would both like to play the truthful and trusting equilibrium path. To incentivize the players, especially Ernest in this case, to stay on this path, you need to use the threat of never again believing him if you catch Ernest lying. But suppose we are in a situation in which you have to carry out this threat. Then you would both agree that you are in a bad equilibrium and that you would want to get away from it again. In other words, you both would want to renegotiate. But if Ernest foresees this, that you would always be willing to renegotiate back to the truthful and trusting outcome of the game, then his incentives to be truthful are greatly diminished.

    With something like this in mind Farrell and Maskin (1989) and others have put forward different versions of equilibria of repeated games that are renegotiation-proof, that is, immune to renegotiation.

    They call a strategy profile of a repeated game a weakly renegotiation proof equilibrium if all prescribed strategy profiles (after any potential history) are Nash equilibria in their subgames and cannot be Pareto-ranked. This means that any two prescribed strategy profiles (or continuation equilibria, as they call them) must be such that one person prefers one of the two while the other person prefers the other. Note that this is not the case in the grim trigger equilibrium of the repeated nappy-changing game. In this subgame perfect equilibrium both you and Ernest prefer the original equilibrium path of the game over the one carried out after Ernest is caught lying.

    So what is weakly renegotiation proof in the repeated nappy-changing game? Well, I have made some calculations and the best weakly renegotiation proof equilibrium for you (in terms of your payoffs) that I could find is this: On the equilibrium path Ernest is truthful and you randomize between trusting Ernest with a probability of  \frac{1}{2-\alpha} and with a probability of  \frac{1-\alpha}{2-\alpha} you play “do not check (regardless of what Ernest says)”. For this to work you have to randomize using dice or something like this in such a way that Ernest can verify that you correctly randomize. If Ernest ever deviated you then verifiably (to Ernest) randomize by playing “check nappy regardless of Ernest’s answer” with a probability of  \frac{\alpha}{2-\alpha} and trusting Ernest with a probability of  \frac{2(1-\alpha)}{2-\alpha} . Ernest then continues to be truthful. Ernest incentivizes you to behave in this way (after all you are letting Ernest do what he wants sometimes, which is not what you would want) by punishing you, if you ever deviate, with the following strategy. He would continue to be truthful but ask you to play “do not check (regardless of what Ernest says)”. This complicated “construction” works only if the punishment is not done forever, but only for a suitably long time after which you go back to the start of the game with the equilibrium path behavior. Whenever one of you deviates from the prescribed behavior in any stage you simply restart the punishment phase for this player.

    This probably all sounds like gobbledygook and I admit it is a bit complicated. Before I provide my final verdict in this post let me just clarify this supposedly best-for-you renegotiation proof equilibrium. Suppose  \alpha is essentially one half. Then in the prescribed strategy profile, on the equilibrium path, you are randomizing between trusting Ernest (which you find best given that he is truthful) with a probability of 2/3, that means two thirds of the time. But one third of the time you leave Ernest in peace even when he tells you his nappy is full. As this happens one half of the time, you actually leave him in peace despite a full nappy in one sixth of all cases. You have to do this in order for your punishment of him to be actually (weakly) preferred by you over the equilibrium path play. In the punishment you mostly shift probability from leaving Ernest in peace to always checking him, which is something that you do not find so bad, but that Ernest really dislikes.
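    To see that these two continuation equilibria really cannot be Pareto-ranked, here is a quick numeric check (my own, not from Farrell and Maskin): compute both players’ expected stage payoffs against a truthful Ernest, once under the equilibrium-path mix and once under the punishment mix, with  \alpha = 1/2 .

```python
def stage_payoffs(alpha, your_mix):
    """Expected (Ernest, you) stage payoffs when Ernest plays 'truthful'
    and you randomize over your pure strategies with the given weights."""
    # payoff pairs (Ernest, you) against a truthful Ernest, from the matrix
    table = {
        "always_check": (0.0, alpha),
        "trust": (1 - alpha, 1.0),
        "opposite": (alpha, 0.0),
        "never_check": (1.0, 1 - alpha),
    }
    ernest = sum(w * table[s][0] for s, w in your_mix.items())
    you = sum(w * table[s][1] for s, w in your_mix.items())
    return ernest, you

a = 0.5
path = {"trust": 1 / (2 - a), "never_check": (1 - a) / (2 - a)}
punishment = {"always_check": a / (2 - a), "trust": 2 * (1 - a) / (2 - a)}
print(stage_payoffs(a, path))        # Ernest gets 2/3, you get 5/6
print(stage_payoffs(a, punishment))  # Ernest gets 1/3, you still get 5/6
```

    Your expected payoff is 5/6 in both phases, while Ernest strictly prefers the equilibrium path, so neither continuation Pareto-dominates the other.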

    Well, I do not know if you find this very convincing, but I do think that the grim trigger strategy is not really feasible when it comes to teaching kids not to lie. What I would actually use in real life is a simple trick. I would not punish behavior only within the nappy-changing game itself. I would use television watching rights, something outside the game I just described. The great thing about this is that while, I assume, Ernest does not like it when his television time is reduced, you are, I assume, actually quite happy when he watches less television, and so this works as a renegotiation-proof punishment. But the fact that you do this, if you do, can be explained by the failure of the grim trigger strategy to be renegotiation proof.

  • A side remark on lying: The boy who cried wolf

    A side remark on lying: The boy who cried wolf

    You probably know the story of the boy who cried wolf. A boy is charged by his elders to watch their flock of sheep and to call them as soon as he sees a wolf approaching. The wolf supposedly would want to kill one of the sheep, and the boy’s cry of “wolf” would bring the elders running to fend off the wolf and protect their sheep. In the story the boy on two occasions cries wolf when there is no wolf, with the effect that the elders come running both times and are very upset at his “lying” (while the boy is pleased). But when he cries wolf a third time, this time when there actually is a wolf, the elders do not believe him and stay away. This, of course, has the disastrous (?) effect that the wolf kills one of the sheep.

    The nappy-changing game as I have written it down in my post on lying (which you may need to read before you can read this post) can also be seen as the game between the boy and his elders. There are two states of nature. Either there is a wolf or there is not. The boy, who is watching the sheep, knows which state it is and the elders, who are somewhere else, do not. The boy has four (pure) strategies: never say anything, be honest (cry wolf when there is one, be quiet when there is none), use “opposite speak”, and always cry wolf. The elders who listen to the boy’s cry also have four (pure) strategies: always come running, trust the boy, understand the boy as if he was using opposite speak, and never come running. Supposedly, the elders’ preferences are just as the parent’s are in the nappy-changing game. They would like to come running if there is a wolf, and they would like to keep doing whatever it is they are doing when there is no wolf. The boy’s preferences seem to be the same as Ernest’s in the nappy-changing game. If there is a wolf the boy would like his elders to come running to help, but the boy would like the elders to come running even when there is no wolf (he gets bored, I suppose). The one slight difference between the two games seems to be that the assumed commonly known probability of a wolf appearing,  \alpha , is now less than a half (if we assume that the payoffs are still just ones and zeros). Well, what matters is that the ex-ante expected payoff of coming running is lower than the ex-ante expected payoff of staying put. We infer this from the elders’ supposed actions of staying where they are when they do not believe that there is a wolf. If the elders had found a wolf attack really disastrous and at the same time sufficiently likely, then after finding the boy not trustworthy, they would have decided to always come, that is, to watch out for wolves themselves.
The fact that they let the boy do the watching (and to then ignore his warnings – because they do not believe him) tells us that without further information about the likelihood of the presence of a wolf, they prefer to stay where they are (probably doing something important) and risk losing one sheep to a wolf over keeping constant watch for wolves.

    In any case the same model as the nappy-changing game, but with  \alpha < \frac12 , takes account of the supposed (long-run) behavior in this story. The game still has essentially two pure equilibrium outcomes, in which the boy either cries wolf in both states or stays quiet in both states, but now with the effect that the elders never come.

  • On Lying, II

    On Lying, II

    There is a German saying about lying: “Wer einmal lügt, dem glaubt man nicht, und wenn er auch die Wahrheit spricht.” The closest corresponding idiom in English is probably this: “A liar is not believed even when he speaks the truth.” This is good enough for the moment but there is a little bit more information in the German saying than in the English one and this little bit more will become interesting in my discussion further below.

    Both statements are sufficient for a first quick side discussion I want to provide here as they both contain “even when he speaks the truth.” As a child, I was made aware of this idiom on a few occasions. While I recall that I always understood it to mean that I should not lie, I also recall that the statement in itself puzzled me. I thought that if this liar speaks the truth then of course I will believe him. It took me some time to realize that there is a specific information structure assumed in this statement that is not made explicit. It should really say that “a liar is not believed even when he speaks the truth, and the truth is not known by the listener”. This addition was probably omitted for two reasons: one, it makes the statement shorter, and two, it should be obvious that this is what is meant. In other words, any statement made by someone generally known to be a liar will not be taken at face value. It will be ignored. This means that after a liar makes a statement we know as much as before, no more and no less. Note that this is true in the nappy-changing game between you and your child, Ernest, that I described in my previous post. Here is a brief summary of this game. You ask Ernest if his nappy is full (after some initial but uncertain evidence pointing slightly in this direction). Ernest can make his answer depend on the true state of his nappy (full or clean) and this answer can either be “yes” or “no”. You then listen to his answer and make your decision whether to check the state of his nappy or not as a function of what answer he gave. Let me reproduce the normal form depiction of this game again here (with  1 > \alpha > \frac12 ).

     \begin{array}{c|cccc} & \mbox{always c} & \mbox{trust} & \mbox{opposite} & \mbox{never c} \\ \hline \mbox{always yes} & 0,\alpha & 0,\alpha & 1,1-\alpha & 1,1-\alpha \\ \mbox{truthful} & 0,\alpha & 1-\alpha,1 & \alpha,0 & 1,1-\alpha \\ \mbox{opposite} & 0,\alpha & \alpha,0 & 1-\alpha,1 & 1,1-\alpha \\ \mbox{always no} & 0,\alpha & 1,1-\alpha & 0,\alpha & 1,1-\alpha \\ \end{array}

    We found that the only equilibrium of this game is that Ernest lies (in that he either always says yes or always says no – regardless of the state of his nappy) and that you do not believe him and always check his nappy. Now note that this equilibrium is bad for both Ernest and you. Ernest is faced with the reality that you ignore his answer and check him no matter what he says, which is very annoying to him. You are faced with the reality that you cannot trust Ernest and have to check his nappy even in those cases when it is clean. Thus we have that this little liar (a bit too strong a term really for your little son) is not believed even when he speaks the truth, that is, even when his nappy is not full.
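    This claim about the equilibria can be verified by brute force. Here is a short Python sketch (my own) that transcribes the matrix above, cell by cell, and enumerates all pure-strategy Nash equilibria; the value  \alpha = 0.7 is an arbitrary illustrative choice above one half.

```python
from itertools import product

def pure_nash(alpha):
    """Enumerate the pure-strategy Nash equilibria of the matrix above."""
    rows = ["always_yes", "truthful", "opposite", "always_no"]   # Ernest
    cols = ["always_check", "trust", "opposite", "never_check"]  # you
    # payoff pairs (Ernest, you), transcribed from the normal form
    P = [
        [(0, alpha), (0, alpha),     (1, 1 - alpha), (1, 1 - alpha)],
        [(0, alpha), (1 - alpha, 1), (alpha, 0),     (1, 1 - alpha)],
        [(0, alpha), (alpha, 0),     (1 - alpha, 1), (1, 1 - alpha)],
        [(0, alpha), (1, 1 - alpha), (0, alpha),     (1, 1 - alpha)],
    ]
    equilibria = []
    for i, j in product(range(4), range(4)):
        if (P[i][j][0] >= max(P[k][j][0] for k in range(4)) and
                P[i][j][1] >= max(P[i][k][1] for k in range(4))):
            equilibria.append((rows[i], cols[j]))
    return equilibria

print(pure_nash(0.7))
# [('always_yes', 'always_check'), ('always_no', 'always_check')]
```

    Both pure equilibria indeed have you always checking, with Ernest’s answer carrying no information.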

    Looking at the matrix we can see that we here have a situation that is somewhat reminiscent of the prisoners’ dilemma. There is a potential outcome in this game that is a Pareto improvement over the equilibrium outcome, that means it is better for both you and Ernest. If Ernest was truthful and you could trust him you would both be better off. You would not have to check his nappy when it is clean and Ernest would now only be bothered when the nappy is actually full. In the matrix this can be seen as the payoffs in this case are  1-\alpha,1 instead of  0,\alpha .

    Isn’t there some way of getting these payoffs and making Ernest honest and you trusting? Well, there is hope. The nappy changing game is one that you and Ernest play many times. It is really what the literature calls a repeated game. True, the  \alpha is not always the same – sometimes you probably have stronger suspicions that the nappy is full than at other times – but this is not so important for the discussion. The big question in this repeated game is the question of how forward looking the two players are. Well, as a grown-up you are presumably very forward looking. This means your discount factor, with which you discount the future relative to the present, is very close to one. You value payoffs in the future almost as much as in the present.  For Ernest this is unclear. In fact I believe that the older children get the higher their discount factor becomes. Very young children don’t seem to care one bit about what happens even in one hour. The now is everything. When they are older they can be more easily incentivized to do something now with a promise or a threat about tomorrow or next week or even xmas when it is quite far away.

    You will see that the discount factor plays an important role in the possibility of achieving higher payoffs in the nappy changing game. Let us see what we can do in the repeated game. Note first that in this game you will always learn the true state of the nappy eventually. So you can always check later at some point whether Ernest was truthful or not. This is very important of course. Lying is much easier when there is no chance of being detected. This would be an interesting topic for another blog post.

    Recall that I said that there was more information in the German saying than in the English one. But clearly both statements are to be understood as a threat. If you lie you will be called a liar and liars won’t be believed. This is supposedly a bad thing also for the liar, as it is in my nappy-changing game. The German saying is more explicit about what induces people to call you a liar. In fact, according to the German saying, you only have to lie once to be called a liar. Literally translated it says “He who has lied once will not be believed even when he speaks the truth.” The German saying prescribes a strategy in the repeated game that the literature calls the “grim trigger” strategy. It is essentially as follows. You trust Ernest as long as he was always truthful in the past. If he was not truthful even once (and no matter how long ago this was) you will never believe him anymore and you will always check his nappies from then on. Ernest’s strategy is to be truthful at all times unless you have, at one point, not been trusting.

    Under what circumstances is this strategy a Nash equilibrium in the repeated game? If Ernest is always truthful then you are always trusting and Ernest gets a payoff of  1-\alpha every time. If he lies at one point by saying no even though the nappy is full he gets a payoff of one once and then zero ever after. With (the usual) exponential discounting and with  \delta < 1 denoting the discount factor, this means that Ernest prefers to be truthful if  1-\alpha > 1- \delta or, equivalently, if  \delta > \alpha . Recall that  \alpha > \frac12 . So if Ernest is sufficiently forward looking, the grim trigger strategy described in the German saying would indeed incentivize Ernest to be truthful at all times.
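    The threshold  \delta > \alpha is easy to check numerically. Here is a sketch (my own; the value  \alpha = 0.6 is an assumed illustration above one half) comparing the discounted stream from being truthful forever against grabbing a payoff of one once and zero ever after.

```python
def truthful_value(alpha, delta):
    """Discounted value of receiving (1 - alpha) in every period."""
    return (1 - alpha) / (1 - delta)

def one_lie_value():
    """Payoff 1 in the period of the lie, then 0 in every later period."""
    return 1.0

alpha = 0.6
for delta in (0.5, 0.6, 0.7):
    print(delta, truthful_value(alpha, delta) > one_lie_value())
```

    Truth-telling wins exactly when the discount factor exceeds  \alpha , in line with the inequality  1-\alpha > 1-\delta above.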

    I think that there is one lesson we can take from this discussion. If we want to teach our kids to be truthful we may have to wait until they are old enough to be sufficiently forward looking. But on the issue whether the grim trigger strategy really works, and whether this is really a feasible way to teach honesty, I have more to say in my next blog post.

  • On Lying, I

    On Lying, I

    There are many forms of lying, from so called white lies that are really just a form of politeness to deliberate attempts to misrepresent the truth to fashion policy (of some institution) in your own interest. I am here interested in something somewhere in the middle of the lying spectrum, children lying about something to avoid a slightly unpleasant duty. We all know that a child’s answer to “Have you brushed your teeth?” is not always necessarily completely truthful.

    In this and the next two blog posts, using the language of game theory, I want to discuss the incentives to lie and how one could perhaps teach children not to lie.

  • Welfare optimal beach pricing

    Welfare optimal beach pricing

    This post builds on the previous two, tragedy of the common beaches and common beaches – a first model. I have started this, so I need to finish this now. In this post I will finally try to build a small model in which it is true that “charging a perhaps even substantial price for beach access would be welfare improving for all potential beach goers”.

    Here is a first attempt. Let me assume that all potential beach goers have the same utility of going to the beach (over their second best activity) as a function of the number of other people on the beach,  \displaystyle u(k) . We now need to introduce prices. It seems a safe assumption that, ceteris paribus as the economists like to say (meaning all else the same), people prefer to pay less over more. In principle we could work with any final utility function  \displaystyle v(k,p) that depends on both the number of other people  \displaystyle k and the price  \displaystyle p as long as it is decreasing in both arguments. We do not lose much, and it is (a little bit) easier to understand, if we use  \displaystyle v(k,p) = u(k) - p .

    Now fix any positive price  \displaystyle p . What would the new equilibrium number of beach goers be? By the same argument as in the previous post, and with the same caveats, we now expect a number of beach goers, call it  \displaystyle \hat{k}(p) , that makes everyone indifferent between going to the beach and doing their second favorite thing. In other words,  \displaystyle \hat{k}(p) must be such that  \displaystyle u(\hat{k}(p))=p . And as  \displaystyle u is decreasing in  \displaystyle k we have that the higher the price  \displaystyle p the lower is  \displaystyle \hat{k}(p) – the fewer people are on the beach.
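    A tiny numeric illustration may help (the utility function below is my own assumption, not from the post): with  u(k) = 10 - k/100 , the indifference condition  u(\hat{k}(p)) = p can be inverted directly, and  \hat{k}(p) indeed falls as the price rises.

```python
def u(k):
    return 10 - k / 100                # assumed decreasing utility of beach-going

def k_hat(p):
    """Crowd size making everyone indifferent: solves u(k_hat) = p."""
    return 100 * (10 - p)              # inverse of the assumed u at price p

for p in (0, 2, 5):
    print(p, k_hat(p), u(k_hat(p)))    # the last column reproduces p
```
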

    Are people happier now, having to pay for beach access? No. However, they are also not less happy. Why? This is so because the higher price does two things. First, the people who go to the beach now have to pay this price, which they do not like. But second, there are now fewer people on the beach, a fact that they do like. On balance, the two effects exactly wash out. So we are back to square one.

    But I have not played all my cards yet! In fact I have at least two routes to go. Let me take the less obvious one first, and I will come back to the more obvious one later (what is it?). I have so far assumed that all potential beach goers have the same utility function, an assumption that we agreed, I assume, is not very plausible. Let me now introduce heterogeneity among our potential beach goers. There are at least two ways of doing this. I will assume that people still do not differ in their  \displaystyle u function, but in their willingness to pay in order to get some  u through accessing the beach. And now, while I could stay with the model with a finite number  \displaystyle n of potential beach goers, it strikes me as more elegant and easier to turn to a model with a continuum of beach goers. I think you will see why. Let me take an arbitrary potential beach goer. Her or his utility shall now be given by the function  \displaystyle v(\beta,p) = u(\beta) - \alpha p , where  \displaystyle \beta is now the proportion of all potential beach goers that actually end up going to the beach,  \displaystyle p is still the price, and  \displaystyle \alpha is a parameter that describes this person’s willingness to spend money. A person with a low  \displaystyle \alpha does not suffer that much from paying a high price (I guess this is probably a wealthy person, but it could also be just a beach fanatic), while a person with a high  \displaystyle \alpha is very reluctant to spend any money in order to get beach access. I can easily introduce heterogeneity now by assuming that there are different people with different  \displaystyle \alpha . In fact, and this is why a model with a continuum of potential beach goers is now more convenient, it is easiest to assume that a person’s  \displaystyle \alpha is distributed according to some continuous distribution.
One could of course also work with only a finite number of possible values for  \displaystyle \alpha , but this is clumsier in the analysis.

    In order to make some calculations I will assume a more specific setting. I will assume that  \displaystyle u(\beta) = 1-\beta . It is strictly decreasing in  \displaystyle \beta and is zero only at  \displaystyle \beta = 1 (that is, when all potential beach goers actually go to the beach). I will also assume that people’s willingness to pay parameter  \displaystyle \alpha follows a uniform distribution on the interval  \displaystyle [0,1] .

    Having made all these assumptions (and making the assumption that we shall have an equilibrium in this game with a continuum of players – see my previous post on why I believe equilibrium makes sense in this context), I can now let math take over to work through the consequences of this model.

    Now every person is different and different persons make different choices. For a given price  \displaystyle p , a person with a given  \displaystyle \alpha goes to the beach if and only if  \displaystyle 1 - \beta - \alpha p > 0 , or equivalently, if and only if  \displaystyle \alpha < \frac{1-\beta}{p} . Note that because of the continuum assumption I can ignore people who are exactly indifferent between going to the beach and their second favorite activity. They have zero mass in such a model.

    This now means that for a given price  \displaystyle p and a given proportion of actual beach goers  \displaystyle \beta the actual beach goers are exactly those people with an  \displaystyle \alpha < \frac{1-\beta}{p} . How many of these do we have? Or, more accurately, what is their proportion? Well, this is given by the probability that  \displaystyle \alpha < \frac{1-\beta}{p} . And, as we assumed that  \displaystyle \alpha is uniformly distributed on  \displaystyle [0,1], this probability is given by  \displaystyle \frac{1-\beta}{p} . But this now means that when everyone expects a proportion  \displaystyle \beta of actual beach goers, the proportion of people who actually choose to go is  \displaystyle \frac{1-\beta}{p} . So the two (and this is the equilibrium condition) must be the same:  \displaystyle \beta = \frac{1-\beta}{p} and we obtain an equilibrium proportion of actual beach goers of  \displaystyle \beta^*(p) = \frac{1}{1+p} . In this model, if people are charged a price  \displaystyle p to go to the beach, a fraction  \displaystyle \beta^*(p) = \frac{1}{1+p} actually pay this amount and show up at the beach.
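    The equilibrium condition can also be solved numerically, which is a useful sanity check on the closed form (a small sketch of my own; the prices are arbitrary examples):

```python
def equilibrium_share(p, tol=1e-12):
    """Find beta in [0, 1] with beta = (1 - beta) / p by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        beta = (lo + hi) / 2
        if beta < (1 - beta) / p:   # share too small: more people would go
            lo = beta
        else:
            hi = beta
    return (lo + hi) / 2

for p in (0.5, 1.0, 2.0):
    print(p, equilibrium_share(p), 1 / (1 + p))   # last two columns agree
```
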

    We can now do all kinds of interesting things with this model. First, we can verify that if the price is zero, the model leads to the same conclusion as the previous one. Here, everyone goes to the beach, but everyone is in the end indifferent between going to the beach and the second favorite activity and nobody derives an actual positive benefit from being on the beach. Second, we can finally compute the welfare optimal prices as I have promised. This opens a new can of worms, of course. What is welfare? Let me just use, without discussion, what is typically called utilitarian welfare, but you can use your own measure if you like. Utilitarian welfare is simply the equally weighted sum of all people’s utility. In our case the sum will have to be an integral, as we have a continuum of individuals. Taking our equilibrium condition as given utilitarian welfare is given by   \displaystyle W(p) = \int_{0}^{\frac{1}{1+p}} \left(1- \frac{1}{1+p}-\alpha p\right) d \alpha , which one can compute to be   \displaystyle W(p) = \frac{p}{2} \frac{1}{(1+p)^2} . The reader can verify that the welfare maximizing price, in this model, is equal to one (a trick: maximize the natural log of welfare).
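    Readers who do not want to do the calculus can cross-check the welfare formula and the maximizing price numerically. A sketch (my own; the integration step count and price grid are arbitrary choices):

```python
def closed_form(p):
    """The welfare formula W(p) = p / (2 (1 + p)^2) derived above."""
    return p / (2 * (1 + p) ** 2)

def welfare(p, n=100_000):
    """Numerically integrate the equilibrium utilities over alpha."""
    beta = 1 / (1 + p)                   # equilibrium share of beach goers
    step = beta / n
    total = 0.0
    for i in range(n):
        a = (i + 0.5) * step             # midpoint rule on [0, beta]
        total += (1 - beta - a * p) * step
    return total

print(welfare(1.0), closed_form(1.0))    # both are 1/8
prices = [i / 100 for i in range(1, 301)]
print(max(prices, key=closed_form))      # the grid maximum sits at p = 1
```
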

    So what do we have? At a price of one (in whatever currency we are working with here), half of all people go to the beach. These are the people with a high willingness to pay, that is, with an  \displaystyle \alpha < \frac12 . These people derive a strictly positive benefit from being on the beach and are therefore better off in this case than under zero prices. The other half of the people do not go to the beach and pursue their second favorite activity. They derive zero extra benefit. So they are not better but also not worse off than under zero prices. Going from zero to positive prices we therefore have what is called a Pareto improvement: we make some people better off without making anyone worse off.

    Could we make all people better off? Yes (and this by the way is the route two I mentioned earlier). I have so far not said where the money that all these people are paying actually goes. Supposing that the group of potential beach goers is an easily identifiable group (the people who live in the area as well as all registered tourists), then the income generated from the beach goers could be paid out equally to all potential beach goers, those who go and those who don’t. In our model all potential beach goers would therefore receive a money amount of  \displaystyle \frac{p}{2} regardless of whether they go to the beach or not. Note that this does not change their incentives to go to the beach, unless their utility function changes when given a small amount of additional wealth. But then now everyone is better off under positive beach prices compared to zero prices. The world would be a better place.
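    The arithmetic of this redistribution scheme, at the price  p = 1 , can be checked in a few lines (my own sketch; the two  \alpha values are arbitrary examples of a beach goer and a stay-at-home):

```python
p = 1.0
beta_star = 1 / (1 + p)                 # half of all people go to the beach
rebate = p * beta_star                  # revenue, spread equally over everyone

def utility(alpha):
    """Equilibrium utility of a type-alpha person, including the rebate."""
    goes = alpha < (1 - beta_star) / p  # the equilibrium cutoff rule
    base = (1 - beta_star) - alpha * p if goes else 0.0
    return base + rebate

print(utility(0.25), utility(0.75))     # both strictly above zero
```

    Everyone ends up strictly better off than under zero prices, where all utilities are zero.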

    The reader may now want to come back to the assumption that all people have the same  \displaystyle u(\beta) function. Sufficient heterogeneity here will change some of the insights somewhat and clear Pareto improvements will typically not be possible. But the utilitarian welfare will typically not be maximized at zero prices in such a model either.

    Another thing one could do now is to ask how the price is chosen. Perhaps it is chosen by majority voting among all potential beach goers. What price would they vote for? Would it be the welfare maximizing price?

  • Common beaches – a first model

    Common beaches – a first model

    I will try and build a small model in which it is true that “charging a perhaps even substantial price for beach access would be welfare improving for all potential beach goers”, a claim I made in my last post. In this post I will take a first few steps in this direction, first only demonstrating my claim that beaches potentially suffer from the “tragedy of the commons” before I will tackle the main question in the next post.

    By the way, the interested reader may want to look at the literature on the economics of clubs for more on this topic. A good starting point may be “Clubs” by  Suzanne Scotchmer, 2008, in the New Palgrave Dictionary of Economics also available here.

    So what do we need in this model? We need potential beach goers and we need to think about the benefit that these beach goers derive from going to the beach. We already have a lot of options here. We could have a finite number of potential beach goers or we could think of them as a continuum of beach goers. The first assumption is obviously empirically correct but the latter may be more practical when we are thinking of a lot of beach goers. Let me here start with having a finite number of beach goers, but I might change this later. A beach goer is now assumed to derive a “utility” from going to the beach (versus pursuing his or her second best alternative) that is a function of how many other people there are on the beach. Let us call this function  \displaystyle u(k) , where  \displaystyle k is the number of other people on the beach (excluding the person whose utility we are here looking at). Of course, in reality different potential beach goers have different such utility functions, and of course, people do not really have such a clear function in their mind at all. But people are probably more or less happy being on the beach with more or fewer other people on the beach and people probably at least to some degree make their choices to which beach they go dependent on their expectation of the number of other people on the beach. More worrying than our assuming the existence of such a utility function is our assumption that all people have the same utility function. This is almost surely wrong, although it is actually not so easy to assess this empirically. I will assume that all people have the same utility function for the moment, but we should keep in mind that this is most likely wrong. We may want to come back to this question at the end.

    Now to the shape of this utility function. I would assume that it is ultimately decreasing in  \displaystyle k and eventually negative for sufficiently large  \displaystyle k . As I am not so interested in beaches that do not suffer from an overuse problem I will simply assume that the utility function is decreasing for all  \displaystyle k and of course positive for  \displaystyle k = 0 .

    With the model as it is so far I can now replicate (or demonstrate) my argument of the previous post that beaches can be inefficiently overcrowded. Suppose that the number of potential beach goers, call it  \displaystyle n , is such that  \displaystyle u(n) is negative. Given this, how many people will go to the beach?

    In an equilibrium (call it Nash equilibrium if you like – as what I described here is really an n-player game), we expect essentially so many people, call it  \displaystyle k^* , to go to this beach such that  \displaystyle u(k^*)=0 . Why? Well, if fewer people than  \displaystyle k^* go to the beach then a potential beach goer who is not on the beach would derive a positive net utility (over the second best alternative) from going to the beach. So she should go. And more people will come until we have  \displaystyle k^* people on the beach. If more people than  \displaystyle k^* are on the beach, people on the beach will suffer a negative utility and will start leaving until the remaining number is again  \displaystyle k^* .
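    This entry-and-exit logic can be sketched in a few lines of Python (my own illustration; the utility function below is an assumption, and I treat  u as a function of the total crowd rather than of the number of others, which only shifts things by one person):

```python
def u(k):
    return (500 - k) / 100     # assumed: positive at k = 0, decreasing, zero at 500

def settle(k, n):
    """Adjust the crowd size (out of n people) until nobody wants to move."""
    while k < n and u(k + 1) > 0:
        k += 1                  # an outsider would still gain by coming
    while k > 0 and u(k) < 0:
        k -= 1                  # someone on the beach is worse off and leaves
    return k

print(settle(0, 1000), settle(1000, 1000))   # 499 500 – both land at u ~ 0
```

    Whether the beach starts empty or overfull, the crowd settles right at the zero of  u , where everyone is indifferent.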

    Of course in reality we do not exactly expect  \displaystyle k^* people to go to the beach for various reasons. One possible reason is that our model is simply not a 100% accurate description of the real world. But even if our assumptions about people’s utility functions were completely correct we still have made an implicit and quite radical assumption about what people know about the number of people on the beach when they make the decision whether or not to go to the beach. In reality when you make this decision you do not know how many people are on the beach already and how many people will still come later. Also, once you have driven perhaps a fairly long way to the beach and then you see that it is rather crowded you may decide to stay even if, had you known about this before you left, you would not have driven to the beach in the first place. However, as I argued in my previous post, the approximate number of beach goers at various beaches is often roughly commonly known. People who live in the area have a pretty good idea about these numbers and tourists can also inform themselves fairly well from their respective hosts. This information is to some extent typically also available on the internet. On any given day you might find that you were lucky and that the actual number is somewhat lower than expected or unlucky and the actual number is somewhat higher, but on average the numbers are not so far off from what people expected.

    Now back to the equilibrium number of beach goers on our beach,  \displaystyle k^* : what have we learnt from this simple analysis so far? Well, we have the tragedy of the commons in a nutshell. Despite the fact that all potential beach goers could derive a potentially high extra benefit (over their second-favorite activity) from going to the beach – provided the beach is not overcrowded – the equilibrium number of beach goers is such that everyone is just indifferent between going to the beach and doing their second-favorite thing (staying at home, for instance, or going to another beach).

    If somehow we could cap the number of beach goers at some lower level, say  \displaystyle k^{**} < k^* , for which by assumption  \displaystyle u(k^{**}) > 0 , we could improve the utility of all the people who are allowed to go to the beach without hurting those who are not allowed. This is because without the cap the latter group would have been indifferent between going to the beach and their second favorite thing anyway. One should probably now re-examine the assumption that all people have the same utility function. I will leave it to the reader at this point, but will tackle this in my next post when, I hope, I will finally demonstrate how it is possible to impose a cap through a price for beach access and how this can be welfare-optimal.
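    The welfare effect of a cap can be illustrated with the same kind of toy numbers. Again, the utility function u(k) = 10 − k is purely my hypothetical assumption, not from the post; the point is only that aggregate welfare is higher under a cap below the equilibrium size.

```python
# A sketch of the welfare gain from capping beach access, with the
# hypothetical utility u(k) = 10 - k (my assumption). With k people on
# the beach, each faces k - 1 others, so aggregate welfare is k * u(k-1).

def u(k):
    return 10 - k

def total_welfare(k):
    """Aggregate net utility when exactly k people are on the beach."""
    return k * u(k - 1)

k_star = 10                   # uncapped equilibrium: u(k*) = 0
print(total_welfare(k_star))  # 10: near zero, the "tragedy" outcome

# Any cap k** < k* raises the utility of everyone admitted, while the
# excluded were roughly indifferent anyway. Search for the best cap:
best_cap = max(range(1, k_star + 1), key=total_welfare)
print(best_cap, total_welfare(best_cap))  # 5 30
```

    In this toy example the welfare-maximizing cap admits only half the equilibrium crowd, and aggregate welfare under the cap is several times the (near-zero) equilibrium level. With integer numbers of people the equilibrium welfare is slightly positive rather than exactly zero, but the comparison is the same.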

  • The tragedy of the common beaches

    The tragedy of the common beaches

    There are things you want to do only with lots of other people. Go see a football game, for instance, or a pop concert. You would feel rather silly being the only one clapping and cheering. You might also prefer not to be the only couple in a restaurant. Aside from the growing feeling that you are probably in the wrong place, you would miss the background chatter, the gentle clatter of dishes, the constant moving around of busy waiters. You would miss the atmosphere.

    But the beach, for me at least, could do with fewer people. In fact, I would almost go as far as saying that I would prefer to be the only person on a beach, except for family and friends of course. To be fair, the Cornish beaches I have been to in the last couple of weeks are typically not too crowded, and if there weren’t a significant number of beach goers, the beach wouldn’t have a café and bathrooms. And I do appreciate being able to buy an ice-cream cone now and then.

    But there is also a beach not far from the others that we essentially never go to. In the almost ten summers I have spent partly in Cornwall, I have been to this beach only once. And it is actually probably the nicest beach of all. It has islands and caves and multiple bays. The tide does interesting things to the beach. It has lovely walks around the cliffs. It is simply amazing. But it is just always full of people. Having been there once, I realized that I much prefer a less attractive beach with fewer people to the more attractive beach with lots of people.

    I expect that I am not the only person on the beach wishing others away and that beaches generally suffer from what is known as “the tragedy of the commons”. Beaches in Cornwall are “commons” – see also my post on a conflict over access to stream water on a Cornish beach. They are not owned by any individual or group but are owned by all. By the way, I am very happy about this. I remember once being at a conference at Stony Brook, New York, where I spent a free late afternoon driving around to find some access to the coast. As a naïve European I thought I could just drive around and find a sign pointing me to a beach somewhere. In fact I drove around for hours without ever getting close to the coast at all. I always ran into private property, or at least signs indicating as much. And I suppose that if I had found a publicly accessible beach it would have been packed. So I do appreciate the fact (I believe) that in most of Europe (including the UK) most beaches are “commons”.

    But the problem with commons, or at least a possible problem, is that they are overused, just as is supposedly the case with the overfishing of international waters and, historically, with the overgrazing of the commonly owned village meadow.

    The problem is this. When a new person arrives on the beach, while this person is apparently – by a revealed preference argument – deriving some positive benefit from being on the beach, the well-being of the people already on the beach often deteriorates. The typical Cornish beach is wind-swept and prone to the occasional shower. People, therefore, bring wind-breakers and even tents to the beach. Imagine your joy when a family with four tents puts up camp a few yards from your feet, right between you and your view of the sea. Or when you go bodyboarding in the surf, trying to stay within the very narrow bounds allowed you by the lifeguards, you continuously bump into all those other surfers or, worse, they into you.

    If this problem is severe, one could imagine welfare-improving prices that people have to pay for going to the beach. Note that, through parking fees, such prices are sometimes already in place. It is indeed possible in some cases – even if it sounds a bit paradoxical – that charging a perhaps even substantial price for beach access would be welfare-improving for all potential beach goers. The reader may want to try to construct a model (a set of assumptions and a logical argument) in which this statement is true. I will do this in my next blog post.

    I would like to finish this post by pointing out that this “tragedy of the commons” is probably present in many other situations. The last time I went to the Natural History Museum in London with my kids, it was so packed with people that we did not really enjoy ourselves much and left again pretty quickly. When I was in Kyoto (on a two-month sabbatical) we mostly avoided visiting the “top temples” because they were so busy that we felt the whole point of visiting a calm and serene temple garden was lost. Yes, all these places do charge entry prices, but I am not convinced that these prices are welfare-optimizing for the “consumers” of these places. Generally, any place you visit with potentially lots of other people with whom you compete over access rights in some form or another – a children’s playground, an amusement park, a museum, etc. – potentially suffers from a bit of “the tragedy of the commons”.

  • Allocating stream water

    Allocating stream water

    It could easily have ended in a fist fight. Or more likely a plastic-shovel fight. This is how it began. My kids, my wife, and I went down to a Cornish beach. A beach with an interesting feature: a small stream runs high up the beach, parallel to the sea, for more or less the whole length of the beach. Kids find this stream almost more fun to play in than the often pretty rough sea, which is mostly inhabited by bodyboarders smashing into each other. As the stream runs essentially over and through sand, with lots of stones around as well, it is very malleable. Kids (and, invariably, their fathers – mothers do not seem so keen) love to build little dams, dig new channels, and create little pools to play in. On the day in question, easily 20 to 30 kids (and some of their fathers) were happily engaged this way somewhere along the length of the stream, when suddenly the water was reduced to a tiny trickle and stopped flowing altogether further down the stream.

    An investigation was launched that quickly revealed the source of the sudden disappearance of water. One of the more ambitious projects, not only involving a father but a set of uncles as well, had been successfully carried out further upstream. The stream had been dammed so well, that the water had now found a new course much more directly into the sea, avoiding the long meandering now empty river bed. It was a project that was every father’s dream, of course, and I could see and empathize with the sense of satisfaction on all the faces. Careful planning and tireless construction work with attention to detail had led to a substantial change in our surroundings. Men (and their children) have changed nature to suit their own needs.

    Yet, glorious as it undoubtedly was, the successful engineering feat had left the 20 or 30 kids downstream with nothing to play with anymore. The dam builders were approached and asked to open one of their hatches at least a little to let some water also flow down the usual path. Reluctantly they did so. Not much later, however, the water supply downstream dwindled again. The opened hatch had been closed. This led to subversive acts of sabotage, with people opening hatches without asking for permission or explaining why they did so. This in turn left the original engineers with frequent repair work. One had a sense that the engineers knew they were on morally shaky ground, but they tried to hang on regardless to see their project through. People were trying to avoid an open confrontation, and after a couple of hours of oscillating water levels on both sides the problem was solved by the engineers finally deciding to join the bodyboarders in the sea. As by now some kids had started playing along the new (but much shorter) stream, eventually some water was allowed to flow both ways.

    The allocation of water to the different parts of the stream was certainly inefficient for some time. During that time most of the water flowed down the short new river bed, where only a few kids could play in it, whereas the stream flowing down the old river bed had created happiness for many more children. The mostly silent dispute over the allocation of water was created by the absence of a clear property rights structure. The beach is a “commons”. It is not owned by any one person but by all. Nobody has a right to exclusive use of the stream or to decide alone on how the stream should be used. If someone owned the beach or at least the stream (this could be a government agency or a private person), beach goers could pay for any amount of water (subject to availability) flowing down their desired branch, with some reasonable hope that prices would solve the allocation problem. I would not really advocate this here for two reasons. One, as the stakes are not very high, people in most cases find reasonable agreements through communication, with the implicit threat of public shaming if people act too selfishly. And then they are better off compared to a situation with the same final allocation but with everyone having to pay. Two, it wouldn’t be nearly as much fun to watch.