-
Dirk Gently’s Holistic Decision Theory – Model Uncertainty

Dirk Gently talks a lot about the impossibility of certain events that have actually happened. In "Dirk Gently's Holistic Detective Agency," Dirk is much taken by a conjuring trick a professor performs at a high table dinner at one of the Cambridge colleges. Only he seems to realize that the particular trick is actually totally impossible, at least given what we know about the world. The professor makes a "salt cellar" disappear and then reappear in a 200-year-old pot that needed cracking open to get to the salt cellar (on pages 34-37 of my edition of the book). Dirk calls this event (p. 189) "completely and utterly impossible." After asserting that, luckily, "there is no such word as 'impossible' in my dictionary" (in fact, a lot of pages are missing from it), he rephrases this as "completely and utterly – well let us say, inexplicable … [I]t cannot be explained by anything we know." What he means is that the model of the world that we have cannot explain this phenomenon. Because of this fact we should entertain a different model of the world. Indeed, eventually it becomes clear that the professor was able to travel back in time 200 years, something our model of the world does not allow, and to have the salt cellar put into the pot when it was made. Dirk's ability to think beyond the current model of the world made it possible for him to understand the seemingly impossible.
In "The long dark tea-time of the soul" Dirk goes further, claiming that he often prefers an impossible explanation over a possible but highly improbable one. Among a number of interesting patients in the "Woodshead hospital," there is a ten-year-old girl sitting in a wheelchair and murmuring constantly and "soundlessly to herself." She is murmuring stock market prices, but yesterday's (with a precise 24-hour delay). She has no apparent access to outside news of any kind. Yet, the psychologist explains that "[w]ell, as a scientist, I have to take the view that since the information is freely available, she is acquiring it through normal channels." This happens on pages 121-123 of my edition of the book.
When told about this, Dirk is confronted with Sherlock Holmes' (I guess really Sir Arthur Conan Doyle's) statement that "[o]nce you have discounted the impossible, then whatever remains, however improbable, must be the truth." Dirk responds with "I reject that entirely. The impossible often has a kind of integrity to it which the merely improbable lacks." Applied to the present case of the girl in the wheelchair he states that "[t]he idea that she is somehow receiving yesterday's stock market prices out of thin air is merely impossible, and therefore must be the case, because the idea that she is maintaining an immensely complex and laborious hoax of no benefit to herself is hopelessly improbable. The first idea merely supposes that there is something we don't know about, and God knows there are enough of those. The second, however, runs contrary to something fundamental and human which we do know about. We should therefore be very suspicious of it and all its specious rationality."
I like Dirk's statement a lot, for two reasons. The first is that Dirk points out that we should be aware of model uncertainty. We should always entertain the possibility that the model that we have of the world is not completely correct, and sometimes new evidence should lead us to reconsider the model. This is somewhat comical in Dirk's context, where we are really talking about the physics of the world (although of course physics models can also be wrong). The idea is much more important, however, in game-theoretic models (as used most heavily in economics), as these models are typically radical simplifications of the world. If a game-theoretic model leads to bizarre predictions (or policy implications), you should probably go back to the drawing board for a more appropriate game-theoretic model, rather than accept these predictions as truth. By the way, Dirk's approach of questioning the underlying model based on possible but highly improbable observations (given the model) is a standard tool of statistics. Indeed, in statistics, we tend to reject a null hypothesis (our current model of the world) if the kind of data that we observe is sufficiently hard to explain with the null hypothesis; that is, if the p-value is below the level of significance that you choose to begin with (you know, the often chosen $\alpha$ of 5%, for instance). So, while the way Dirk says this may appear ludicrous, he is really just stating sound scientific reasoning.
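To make this concrete, here is a minimal sketch in Python (my own toy example, not from the book): a one-sided exact binomial test, where we reject the null model once the p-value falls below the chosen significance level.

```python
# A toy illustration of rejecting a "model of the world": under the null
# model, a guess is right with probability 0.5 on each of 60 independent
# days; we then observe 58 correct guesses and compute the one-sided
# p-value, i.e. the probability of seeing data at least this extreme.
from math import comb

def binomial_p_value(successes: int, trials: int, p_null: float = 0.5) -> float:
    """P(at least `successes` successes in `trials` trials under the null)."""
    return sum(
        comb(trials, k) * p_null**k * (1 - p_null) ** (trials - k)
        for k in range(successes, trials + 1)
    )

ALPHA = 0.05  # the often-chosen significance level
p = binomial_p_value(successes=58, trials=60)
print(f"p-value = {p:.2e}")  # astronomically small
print("reject the null model" if p < ALPHA else "cannot reject the null model")
```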
The second part of Dirk's statement that I like is that he puts human incentives even above physical laws (of a possibly flawed physical model). Why would this ten-year-old girl maintain such a "laborious hoax" that is "of no benefit to herself"? Taking human incentives into account is what game-theoretic models (with a long tradition in economics) are really good at. Douglas Adams would have made a great game theorist (or even economist ;).
-
Dirk Gently’s Holistic Decision Theory – Model Choice

On pages 146-150 of my edition of Douglas Adams' "Dirk Gently's Holistic Detective Agency," Dirk Gently has a phone conversation with one of his clients, whose cat he has not been able to find in the past seven years. They are debating the bill that the holistic detective had sent, and we only hear his side of the conversation. I will give you the key snippets: "Sadly, no sign as yet of young Roderick, I'm afraid, but the search is intensifying as it moves into what I am confident are its closing stages." … "I grant you, Mrs. Sauskind, that nineteen years is, shall we say, a distinguished age for a cat to reach, yet can we allow ourselves to believe that a cat such as Roderick has not reached it?" And after some very entertaining bits about Dirk's "quantum mechanical view" of the world and the psychological cost his client's skepticism puts on him (also itemized on the bill) we get to Dirk's decision-theoretic view: "I do appreciate, Mrs. Sauskind, that the cost of the investigation has strayed somewhat from its original estimate, but I am sure that you will in your turn appreciate that a job that takes seven years to do must clearly be more difficult than one that can be pulled off in an afternoon and must therefore be charged at a higher rate. I have continually to revise my estimate of how difficult the task is in the light of how difficult it has so far proved to be."
The pleasure one gets from reading this derives from the strong suspicion that Dirk’s implicitly stated model of the underlying problem is probably not appropriate, certainly different from his client’s model, and probably also not Dirk’s true model.
Dirk's implicit model could be something like this. The cat is equally likely to be in any one of a large number of, say $k$, places. [Let us ignore the complication that arises from the possibility that the cat could, in reality, also move while the search is going on.] The cost of searching differs from place to place. Dirk seems to work under the hypothesis that Mrs. Sauskind attaches a very high value to finding her cat, so that the search should go on at any cost. Assuming the detective searches optimally (given his model), he goes to the place with the lowest search cost first, then the second lowest, and so on. Given this optimal behavior, the revised expected total cost of searching for the cat increases with every additional fruitless search, as the sketch below illustrates. Moreover, the daily additional search cost goes up every time. Under this model, Dirk is then indeed right in saying that he has to "continually revise [his] estimate of how difficult the task is in the light of how difficult it has so far proved to be".
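Here is a minimal numerical sketch of this claim, with made-up search costs (the numbers are mine, purely for illustration):

```python
# Dirk's implicit model, as sketched above (all numbers made up): the
# cat is equally likely to be in each of k places, searched in order of
# increasing cost. After every fruitless search, the expected remaining
# cost of finding the cat (conditional on not having found it yet) rises.
costs = sorted([1, 2, 4, 8, 16, 32])  # hypothetical per-place search costs

k = len(costs)
for failures in range(k):
    remaining = costs[failures:]  # places not yet searched
    n = len(remaining)
    # The cat is equally likely to be in each remaining place, so the
    # expected remaining cost is the average of the cumulative costs
    # paid until the cat is found.
    cumulative, expected_remaining = 0.0, 0.0
    for c in remaining:
        cumulative += c
        expected_remaining += cumulative / n
    print(f"after {failures} fruitless searches: "
          f"expected remaining cost = {expected_remaining:.2f}")
```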
Now, Mrs. Sauskind's model of the problem seems to differ in at least one crucial aspect. She appears to take the view that, while the cat could be in the same set of places, there is also a non-negligible possibility that the cat is already deceased (or simply unfindable). This means she attaches a total probability of less than one to the cat being at any of the $k$ places. Assume, therefore, that Mrs. Sauskind's model is such that the probability of the cat being at place $i$ is $p_i$, with, importantly, $\sum_{i=1}^{k} p_i < 1$. Then $p_0 = 1 - \sum_{i=1}^{k} p_i$ is the ex-ante probability that the cat is dead. Finally, it seems evident from the dialogue that Mrs. Sauskind attaches positive value to the cat being found, but that she is also quite cost-sensitive. That is, at any moment in time (as long as the cat is not yet found), she would weigh the future chance of finding the cat against the expected (additional) costs of searching for it.
Given her model, as time goes by without any sign of the cat, Mrs. Sauskind attaches a higher and higher probability to the cat being already deceased. Let us label the places in the order in which the detective searches them. Then, if the cat was not found in the first place, by Bayes' law, the probability of the cat being dead increases to $\frac{p_0}{1 - p_1}$; after two unsuccessful searches it increases to $\frac{p_0}{1 - p_1 - p_2}$; and so on. All this without even the concern that the cat is getting older and older while the search goes on over, apparently, a number of years.
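A minimal numerical sketch of Mrs. Sauskind's updating, with invented probabilities:

```python
# Mrs. Sauskind's Bayesian updating, with invented numbers: p[i] is the
# prior probability that the cat is at place i (searched in this order);
# p0 is the prior probability that the cat is dead (or unfindable).
p = [0.30, 0.25, 0.15, 0.10, 0.05]  # hypothetical place probabilities
p0 = 1 - sum(p)                      # here: 0.15

searched_mass = 0.0
print(f"before searching: P(cat dead) = {p0:.3f}")
for i, p_i in enumerate(p, start=1):
    searched_mass += p_i
    # Bayes' law: condition on the cat not being in places 1..i
    posterior_dead = p0 / (1 - searched_mass)
    print(f"after {i} unsuccessful searches: "
          f"P(cat dead) = {posterior_dead:.3f}")
```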
During all this time, Mrs. Sauskind's expected total costs are continually rising, and, given the detective's statements, it seems her expected future costs are constantly rising as well. Mrs. Sauskind's optimal search policy, or optimal stopping time, as the literature on these problems likes to call it, would now depend on her exact utility function over the cat being found and over how much money she has left. But it seems clear from the conversation that Mrs. Sauskind would have preferred to stop the search some time ago.
So, the holistic detective and his client really have a fundamental disagreement about the true model of the world. It is therefore not surprising that they cannot agree on the reasonableness of the bill.
-
Dirk Gently’s Holistic Decision Theory – Bayes’ law

After watching a season of Dirk Gently's Holistic Detective Agency on Netflix with the kids, I re-read Douglas Adams' original books. I find them exceptionally funny, but also full of wisdom, especially in the realm of decision theory. In a short series of posts, I want to go through some fine examples of this wisdom.
On page 115 of my edition of “The long dark tea-time of the soul” we overhear a psychologist talking to a client over the phone. We only hear the psychologist’s side of the conversation.
“Yes, it is true that sometimes unusually intelligent and sensitive children can appear to be stupid. But, Mrs. Benson, stupid children can sometimes appear to be stupid as well. I think that’s something you might have to consider.”
A bit harsh, of course, but probably true. I especially like the repetition of "sometimes". We understand what the psychologist is trying to say, of course. But, to make it probably unnecessarily clear, let me sketch a simple decision-theoretic model of the psychologist's thinking. There is a true state of the world that the child is in: it is either unusually intelligent (UI) or stupid (S). Presumably, the child can also be something in between, but let me ignore this in my simple model. Observing the child for a bit provides us with some information, which can be described by what Blackwell would have called an information structure, or an experiment, or perhaps a signal-generating system. Ultimately, we obtain a signal. The signal is either that the child appears stupid (AS) or that it does not appear stupid (NAS). The probability that each signal is generated depends on the state. These probabilities are known to the expert psychologist. There is the probability $P(AS \mid UI)$ that an unusually intelligent child appears stupid, which, the psychologist admits, is positive (the first "sometimes" in the quote above). [In probability theory, $P(AS \mid UI)$ is often referred to as the probability of appearing stupid (AS) conditional on the child being unusually intelligent.] And there is the probability $P(AS \mid S)$ that a stupid child appears stupid, which, the psychologist claims, is also positive (the second "sometimes" in the above quote). Apparently, it may be less than one.

When the psychologist says that "that's something you might have to consider," he means that you should compute the probability $P(S \mid AS)$ of the child being stupid conditional on the child appearing stupid, and that this also depends heavily on the probability $P(AS \mid S)$ that a stupid child appears stupid. Formally, using Bayes' law, we get that

$$P(S \mid AS) = \frac{P(AS \mid S)\,P(S)}{P(AS \mid S)\,P(S) + P(AS \mid UI)\,P(UI)}.$$

Presumably, and the first word in "unusually intelligent" seems to suggest this, the ex-ante (or a priori, as Bayesians like to say) probability $P(UI)$ of a child being unusually intelligent is low. This means that $P(AS \mid UI)\,P(UI)$ is probably almost negligible in the calculation relative to $P(AS \mid S)\,P(S)$, which would really suggest the ex-post (or a posteriori, as Bayesians like to say) probability $P(S \mid AS)$ of a child being stupid when it appears stupid to be definitely positive, if not even relatively close to one.
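To put rough numbers on this (the numbers are mine, purely for illustration), here is the calculation in Python:

```python
# Bayes' law for the psychologist's problem, with invented numbers.
P_UI = 0.02            # prior: child is unusually intelligent (rare)
P_S = 1 - P_UI         # prior: child is stupid (the only other state here)
P_AS_given_UI = 0.30   # "sometimes" a UI child appears stupid
P_AS_given_S = 0.70    # "sometimes" a stupid child appears stupid, too

# posterior probability that the child is stupid given it appears stupid
P_S_given_AS = (P_AS_given_S * P_S) / (
    P_AS_given_S * P_S + P_AS_given_UI * P_UI
)
print(f"P(S | AS) = {P_S_given_AS:.3f}")  # close to one with these numbers
```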
The psychologist ends the phone conversation with “I know it’s very painful, yes. Good day, Mrs. Benson.”
-
Innovative Strategies in Warfare

In C. S. Forester’s The Commodore, the main character Horatio Hornblower is in charge of a small squadron of ships of the (British) Royal Navy in the Baltic Sea. We are in the year 1812 and in the middle of the Napoleonic Wars. I am here interested in an episode in the book, in which Hornblower is assisting the Russians (who have just entered the war against Napoleon’s France) in their defense against the siege of Riga (which really happened).
One of the people in charge of the Russian defense in Riga (in the book, not in reality) is the Prussian officer Carl von Clausewitz. While he was probably not involved in the siege of Riga in real life, Clausewitz did leave the Prussian army (when it was part of the Napoleonic armies) in 1812 to support Russia against France. Clausewitz wrote a very influential treatise on warfare, mostly after the Napoleonic Wars, that emphasized strategic thinking. I find it quite interesting that Forester used Clausewitz as one of the main characters in the siege and through him we learn a lot about sieges.
By the 18th and early 19th century, the siege seems to have been developed to the point where every part of it was predictable. Over the years, soldiers had learned (and been trained) how to behave in a siege, so much so that, game-theoretically speaking, equilibrium play had been reached. Even if both sides know exactly what the other side is (and will be) doing, they can only do what the other side expects them to do in return. There is no better strategy out there for either side. This is nicely described in the book. When Clausewitz and Hornblower survey the siege from the gallery of a church, we are informed of (what are probably) Hornblower's thoughts: "To a doctrinaire soldier a siege was an intellectual exercise. It was mathematically possible to calculate the rate of progress of the approaches and the destructive effect of the batteries, to predict every move and countermove in advance, and to foretell within an hour the moment of the final assault."
So, what is the apparent equilibrium in a siege, then? Well, we should probably take a step back and briefly sketch the game we are talking about. First, the players. We are dealing with a war between two sides. Each side is, in reality, composed of a large group of individuals, but each group acts in a very coordinated manner as they all share the same goal (or are at least made to share the same goal). So, I feel it is safe to assume that there are simply two players: the besiegers and the besieged. It is also pretty clear what each side wants: the besiegers want to conquer Riga, the besieged want to prevent this from happening.
Ideally, from the besiegers' point of view, they would just run up to Riga and claim it as theirs. But that is not a good strategy, as the besieged are protected by walls and guns and would just shoot the besiegers. So, instead, the besiegers start digging trenches and putting up "gabions", something that "looked like a wall", along these trenches. Such a protected trench is called a "sap". The starting point of the sap is at a reasonably safe distance from the town's guns. Then the besiegers slowly extend the sap forward along "parallels," which, I guess, means in a zigzag manner. While the besiegers slowly but persistently do this, the besieged shoot at them with their big guns. This shooting is rather ineffective, and they can apparently only damage the newest bit of the wall while it is being erected. Hornblower at one point asks Clausewitz: "Why do your guns not stop the work on the sap?" to which Clausewitz replies: "They are trying, as you see. But a single gabion is not an easy target to hit at this range, and it is only the end which is vulnerable." This sapping allows the besiegers not only to come slowly closer to the town walls, but also to bring up some of their big guns. As Clausewitz explains further: "And by the time the sap approaches within easy range their battery-fire will be silencing our guns."
I expect that much of warfare has these very predictable aspects, especially if a war goes on for some time. And these predictable aspects can probably be well described by an equilibrium of an appropriate game between the two sides. Harder to understand using game theory (or anything else, really) are cases of innovative warfare. This is what Hornblower in all the books excels at, but you feel that these cases are rarer in real life. I once read that Hornblower is probably not modeled on any single real person in the British Royal Navy, but on many of them. One person can probably not come up with as many innovative strategies as Hornblower has throughout all the books. To be fair, most people probably didn’t even have the opportunity to do so.
In The Commodore, Hornblower, watching the siege operations alongside Clausewitz, is struggling to think how he could help with Riga’s defense. Clausewitz asks him at one point: “Can you not bring your ships up, sir? See how the water comes close to the works there. You could shoot them to pieces with your big guns.” But the problem was that the water there was way too shallow for Hornblower’s ships. Hornblower explains this to “an unsympathetic ear” and is frustrated by his inability to help. He walks around his cabin and is further frustrated by the restricted space, when suddenly, while just climbing over a rail, with “one leg in mid-air”, an “idea came to him”. [I quite like how Hornblower’s ideas come to him – it is very much like I (and many theorists, I believe) do research.] Hornblower realizes that there could be a way to lift his ships almost out of the water by attaching little loaded boats or barrels full of sand (or something like that) to the ship and then unloading them. He then has the two “bomb-ketches” in his squadron lifted in this way and brings them into action for a few hours to devastating effect on the sapping operation.
Hornblower's novel strategy was something that the besiegers were clearly not even aware of as being possible. For the besiegers, this was, in Donald Rumsfeld's terminology, an "unknown unknown". Similar surprising moves have been made recently by Ukraine bombing Russian bomber planes in a complex operation deep inside Russia, and by Israel with the exploding pagers. In those two cases, the other side was most likely also not even aware of these possibilities. Having done this once, however, one would assume that it cannot easily be done again, as the other side, now being aware of such possibilities, can probably put preventative measures in place. In Hornblower's case, the French react by bringing up a battery of guns towards the lifted ships within a few hours and keep them there for the remainder of the siege.
Interestingly, and again because of the highly predictable siege equilibrium, Clausewitz can precisely quantify the effect: Hornblower's innovative strategy has delayed the besiegers by no more than four days.
-
Predictable and Predictably Unpredictable Warfare

In C. S. Forester’s Hornblower and the Hotspur, Horatio Hornblower is a captain of a three-masted sloop (one of the smaller ships at the time), the Hotspur, in the (British) Royal Navy. It is 1803 and there is a temporary peace, the peace of Amiens, during which the Hotspur is patrolling some parts of the coast of France. The episode I want to study begins with a French ship, the Loire, a frigate that is a good deal bigger than the Hotspur, leaving her anchorage in the direction of the Hotspur. Captain Hornblower correctly suspects (through an interesting series of what one could describe as Bayesian probabilistic inferences) that war has been declared and, given the size disadvantage, sets a course to avoid a confrontation.
This situation can now be described as a game between two players, the two ships (or their two captains), with opposing preferences: the Loire wants to catch up with the Hotspur, and the Hotspur wants to evade the Loire. To finalize our model, we need to specify the available strategies. These are all the different directions that the ships could go in, taking into account some geographical (and weather) constraints. From the goals that the two captains have, we can derive payoffs for any strategy combination. These are presumably such that the Loire always prefers to go in the same direction as the Hotspur, and the Hotspur prefers to go in any direction that is different from the direction the Loire takes. A more careful reading of the book suggests a secondary payoff-relevant concern. Given the possibility of something (exogenously, as we like to say) happening, such as bad weather or the sudden appearance of another (most likely British) ship, the Loire would probably like to catch up with the Hotspur as quickly as possible, while the Hotspur would like to delay such an event as much as possible. This is highly relevant in the present case, because it quickly emerges that the Loire is the (slightly) faster ship.
Having thus verbally fully described the game between the two ships, we can turn to the equilibrium analysis. Equilibrium play seems very plausible in this case for two reasons. One, the game is relatively simple, and two, this is not the first time in the history of naval warfare that one ship tries to catch another ship. Together, these two observations make it quite likely that each of the two captains of the two respective navies (from their training and their experience) has learned to behave optimally given the other captain’s behavior. In short, it seems likely that they have learned Nash equilibrium behavior.
It seems that, at least in the present case, the best course of action for the Hotspur and, thus, the Hotspur’s equilibrium behavior, assuming (correctly) that the Loire would follow wherever the Hotspur goes, is to sail into the wind. I don’t know that much about sailing, but I have been given to understand that you cannot sail directly into the wind. You can only sail at some (maximally acute) angle against the direction that the wind is coming from. The geographical realities in the present case are such that the two ships cannot go in the same direction against the wind for too long before they would run aground just off the coast. The escaping ship, therefore, has to occasionally “tack”, that is, to turn to move along the opposite (maximally) acute angle against the wind, thereby going into the wind in a zigzag fashion. In equilibrium and, indeed, also in the book, the Loire tacks whenever the Hotspur does to lose as little time as possible. This highly predictable behavior now goes on for quite some time. During this time the officers of the Hotspur make fairly precise and worrying predictions as to how long they have before the Loire catches them.
But then there is an interesting twist. Some small isolated low clouds appear just above the water. Noticing these, Hornblower decides to delay tacking his ship beyond what would otherwise be seen as optimal until the Hotspur is hidden by one of these clouds. When the Hotspur comes out of the cloud on the other tack, Hornblower realizes to his surprise that the Loire is also already on the other tack. Her captain has wisely predicted the Hotspur’s movements and has tacked at the same time as the Hotspur, thereby not losing any time at all. Hornblower tells himself that he will not use this trick again, and when a second cloud covers the Hotspur, he does indeed not tack; he does not even think about tacking, in fact. After coming out of this cloud, Hornblower finds that, again to his surprise, the Loire has tacked. Presumably, the captain of the Loire thought that the Hotspur would be tacking and did the same. But in this case, the Loire made a mistake and lost some valuable time.
The presence of the clouds has changed the nature of the game. Without the clouds, that is, with full visibility, the game is essentially a sequential move game. The Loire can simply observe what the Hotspur does and make her choice afterwards. This makes it easy for the Loire to match her action to that of the Hotspur, while making it impossible for the Hotspur to prevent that. With the clouds, the game is better described by a simultaneous move game. While one of the ships is hidden from view within a cloud, neither captain can see the other captain’s move. The game they are now playing is an instance of the so-called matching-pennies game. Both captains have two choices: they can tack or not tack. The captain of the Loire would like the two ships’ actions to match; Hornblower would like them to mismatch. This game also has a Nash equilibrium, but it is in what game theorists call “mixed” strategies: this means both captains choose randomly whether to tack or not. Given the description in the book about Hornblower’s thought process, it does not sound like he is actually randomizing. All that is really needed, however, is that the two players’ actions are unpredictable for their opponent. And it seems that Hornblower’s thought process was not exactly what the captain of the Loire thought it was (at least not in the second cloud instance). In any case, with just the two data points that we have (there being only two instances with clouds), we cannot reject that the two captains are playing the equilibrium of this game.
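A small sketch of this game (the plus-or-minus-one payoffs are the standard matching-pennies convention, not something from the book) verifies that the 50-50 mix leaves the opponent exactly indifferent, which is what makes it an equilibrium:

```python
# Matching pennies, cast as the cloud game: the Loire wants the two
# ships' actions to match, the Hotspur wants them to mismatch. Payoffs
# are the standard +1/-1 of matching pennies (my choice, not from the book).
ACTIONS = ("tack", "hold")

def loire_payoff(loire_action: str, hotspur_action: str) -> float:
    return 1.0 if loire_action == hotspur_action else -1.0

def expected_loire_payoff(loire_action: str, p_hotspur_tacks: float) -> float:
    """Loire's expected payoff against a Hotspur that tacks with prob p."""
    return (p_hotspur_tacks * loire_payoff(loire_action, "tack")
            + (1 - p_hotspur_tacks) * loire_payoff(loire_action, "hold"))

# If the Hotspur randomizes 50-50, the Loire is exactly indifferent:
for a in ACTIONS:
    print(a, expected_loire_payoff(a, 0.5))  # both print 0.0
# Any other mix gives the Loire a strictly better pure reply, which the
# Hotspur would in turn want to exploit. Hence tacking with probability
# 1/2 for both captains is the unique Nash equilibrium of this game.
```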
The game without clouds had an equilibrium in pure strategies, in which both players were able to predict each other’s moves precisely. The game with clouds also had an equilibrium, but in mixed strategies. In this case, the two captains should not expect to be able to fully predict their opponent’s choice, but they should be able to predict the extent of unpredictability. They should know that there is essentially a 50-50 chance of either move and, therefore, should never really be super-surprised by anything the other captain does.
Donald Rumsfeld, as US Secretary of Defense, once puzzled the world a bit with a statement about the differences between “known knowns”, “known unknowns”, and “unknown unknowns”. One can give a purely decision-theoretic discussion of this statement, but one can also see this in game-theoretic terms. In the equilibrium of the game without clouds, every future move is known to the two captains. In the game with clouds, the captains (should) know that they don’t quite know their opponent’s next move (in the cloud). In another blog post, I will use another Hornblower story to discuss situations in which players don’t even know what they don’t know.
-
Why happiness is elusive

There is an Austrian saying that “happiness is a bird” (“Das Glück is a Vogerl”). The idea, I think, is that happiness is hard to catch and even harder to keep hold of. In this blog post, I want to offer a formal model and definition of happiness that is able to generate the fleeting nature of happiness.
First, a bit of casual introspection to set the stage for modeling ideas. Imagine that sometime around early winter you finalize your summer holiday plans. You are planning a road trip through Australia (in their winter and your summer), something you are very excited about. Imagine that a month later – still (your) winter – you learn that something happened that is not a big problem in itself, but that will prevent you from going to Australia after all. Say, it turns out you can’t take that particular time off after all, but that is the only time that would have worked for the other people you were planning to go with on this trip. You will probably be very unhappy. And you will be unhappy right then and there, in (your) winter, months before you were supposed to be going on this trip. You don’t wait to be unhappy until the summer comes along. In fact, when the summer does come along, you will probably already be less unhappy, you have already “worked through” your grief.
The key element that I want to take up from this casual introspection is that humans are very forward-looking. They create expectations of the future and, in a sense, “consume” at least part of their future expectations before these are realized. And, as I will argue, humans who are good at forming correct expectations will likely only be happy for short amounts of time. They will not be able to live in a permanent state of bliss. [On the flip side they will also only be unhappy for short amounts of time.]
One way to see things is that there are many possible paths that your life could take. You control some aspects of which path you get, but no matter how much you control things, how much you “take life into your own hands,” there is always some leftover uncertainty. In fact, there is probably a lot of leftover uncertainty. You make educational choices, you decide what to study, and you decide which jobs you apply for, but what job you end up getting and where is not only up to you. You make friends and have a family, but who exactly they are and what happens to them, something you also care about, is again not all down to you.
Turning to a mathematical description of your life, we can collect all possible paths of life that could happen to you in one big set $\Omega$. At the beginning of your life, you have a belief about the likelihood of these various possible paths, which we capture by a probability distribution over $\Omega$. Ok, you probably have to grow up a bit before your belief forms, but, at some point, you will probably have one. And yes, you might not exactly be able to write it down and articulate it fully, and maybe you have a more diffuse notion of your future that you don't feel you can capture by a probability distribution, but I think you will see that this is a useful notion. The next ingredient to studying your happiness is to consider how much you would value different paths of life $\omega \in \Omega$. Here, I am not sure whether what I propose is the best way to model this – I am following standard models of intertemporal choice in economics and I haven't thought deeply enough about possibly better alternatives. The idea is that a path in life $\omega$ gives you a level of instantaneous satisfaction at any moment of time $t$. I will, for simplicity, count time discretely in, say, days. A path $\omega$ would then give you a sequence of instantaneous levels of satisfaction for all days, from day zero (now) until the end of days. Call these levels of satisfaction $u_0(\omega), u_1(\omega), u_2(\omega), \dots$. I am using $u$ because in economic models this is what you often see, with $u$ for utility. As people are forward-looking, at any time $t$ they care not only about the time-$t$ instantaneous level of satisfaction but also about those in the future. A nice and simple way to capture this idea is that you do what firms are supposed to do when they consider long-term investment decisions: you compute the net present value of, in your case, all your future levels of instantaneous satisfaction. Each path of life $\omega$, for every moment of your life $t$, then yields a time-$t$ lifetime satisfaction, let's call it

$$U_t(\omega) = \sum_{s=t}^{\infty} \delta^{s-t} u_s(\omega),$$

for some discount factor $\delta$ with $0 < \delta < 1$. Note that you can potentially live forever here. However, we can interpret the so-called discount factor $\delta$ as at least partly reflecting your less-than-certain chance of surviving until the next day. In that case, even if you could live forever in theory, the chances of that happening are zero. The discount factor can partly also reflect your degree of impatience.
So, we have formalized the possible paths of life and their consequences for us in terms of lifetime satisfaction. I have not yet mentioned happiness. And happiness will not be the same as lifetime satisfaction. I guess this is a bit controversial, but I believe that happiness is what we experience when things turn out better than expected. And we are unhappy when things turn out worse than we expected. To capture this, we introduce events – things that can happen to us. One way to see this is that, as time goes on, we can rule out more and more paths in life. This can be captured by a filtration, an increasing family of information sets. It has the property that whatever you know to be true at time $t$, you also know to be true at time $t'$ for all $t' \ge t$. You don't forget and you may learn new things. Suppose we call $\mathcal{F}_t$ the information (about your path in life) that you have received up to and including time $t$. I can now finally define your happiness as the difference between your "updated" expected time-$t$ lifetime satisfaction and your "original" expected time-$t$ lifetime satisfaction. Formally, happiness is given by

$$H_t = E\left[U_t \mid \mathcal{F}_t\right] - E\left[U_t \mid \mathcal{F}_{t-1}\right].$$
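Here is a minimal simulation sketch of this definition; the i.i.d. uniform daily satisfaction is my own illustrative assumption, not part of the model above:

```python
# A toy version of the model: daily satisfaction u_t is i.i.d. uniform
# on [0, 1] (an illustrative assumption), so before a day's value is
# revealed its expectation is 0.5. Happiness H_t is the revision in
# expected lifetime satisfaction when day t's value becomes known.
import random
from typing import Optional

DELTA = 0.9    # discount factor
HORIZON = 60   # long enough that DELTA**HORIZON is negligible

def expected_lifetime_satisfaction(u_today: Optional[float]) -> float:
    """E[U_t | info]: today's u if already known (else its mean 0.5),
    plus the discounted means of the not-yet-revealed future days."""
    first = u_today if u_today is not None else 0.5
    return first + sum(DELTA**s * 0.5 for s in range(1, HORIZON))

random.seed(1)
for t in range(5):
    u_t = random.random()
    before = expected_lifetime_satisfaction(None)  # expectation under F_{t-1}
    after = expected_lifetime_satisfaction(u_t)    # expectation under F_t
    print(f"day {t}: u_t = {u_t:.2f}, happiness H_t = {after - before:+.2f}")
# H_t is positive exactly when the day turns out better than expected,
# and it averages to zero: a good forecaster is never happy for long.
```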
I should probably cite some literature now that justifies my definition of happiness. The best I can do is to point you to the work of Arthur Robson on the biological basis of human (economic) behavior. I am not sure he would quite agree with my model here, but it is partly based on my, possibly imperfect, reading of his work. I came to the belief that happiness is not the same as lifetime satisfaction and that mother nature uses our pursuit of happiness (through the clever use of short-lived dopamine bursts) not to make us happy but to make us always want to achieve more and more and more – ever to increase our evolutionary fitness. Given mother nature's biological constraints, she chose to make happiness have less to do with the level of lifetime satisfaction and more with how it changes when certain events happen to you.
If happiness is given like this, continued happiness (undermining mother nature’s goals) would be best achieved by maintaining low expectations. Sage advice I would think, but hard to follow. Ideally, you would never expect a good meal and always be surprised when you get one. “Oh boy, I am getting something nice for breakfast!” This is difficult to keep up when you get a good breakfast every day. But it would quite possibly be a happier life.
When I lived in Chicago, I flew back to Austria to see family about twice a year. I collected air miles and fairly soon had a good stash thereof. I don’t know if it was a glitch in the airline’s system, but when just before boarding I asked for an upgrade based on my air miles, I often got one without the airline ever taking any miles off my account. I kept getting upgraded. The first time this happened to me I was extremely happy. It was one of the best flight experiences I ever had. This is so, I believe, because it came as a surprise – I did not expect to be upgraded. But after a while, the experience became more routine and did not give me that much happiness. I came to expect an upgrade. When I then did not get one, I was pretty unhappy. I was, in fact, much less happy than in the earlier days when I was never upgraded and never expected to be upgraded.
My kids form high expectations almost too easily. We had ice cream after lunch one Friday, and happened to have ice cream after lunch on the following Friday as well. When the kids didn’t get ice cream after lunch on the next Friday, they were unhappy and were asking us “what happened to Friday ice cream?”
Of course, I have described only one aspect of happiness. I am, for instance, ignoring things like clinical depression, which I would find harder to model and even harder to explain. I am also ignoring happiness that stems from achieving something. For instance, I would value the view on a mountain peak very differently depending on how I got to this peak. I believe I would get much more “out of” the view at the peak if it came as a reward at the end of a long and challenging hike rather than being the result of being dropped off by a helicopter. All I wanted to offer in this post was a formal model that can generate the fleeting nature of happiness, at least as I perceive it. But there is a lot more that could be said about the strange nature of human happiness.
-
Giving tenure to researchers on non-tenure track positions

At Austrian universities, many (young) researchers are employed on fixed-term contracts without a clearly specified path to tenure (a permanent position). Young researchers on such fixed-term contracts are rightly worried about their future and would, of course, love to get permanent contracts. Some time ago the Austrian minister for Education, Science, and Research publicly said that universities should consider giving tenure to a substantial number of researchers currently on fixed-term contracts. I don’t think that this is a good idea. To be more specific, I believe that there is a much better way of giving young researchers long-term career perspectives: The universities should offer more tenure-track positions, perhaps, as I have seen in the USA and my field of economics, even for researchers who have just finished their doctoral studies.
At first glance you might say that this is exactly what the minister said. Surely, there is not much difference between giving people fixed-term contracts and then giving some of them tenure after all, and giving them tenure-track positions from the start. But there is a huge difference. The difference can be explained with two notions from economics: adverse selection and moral hazard.
Let me first explain the adverse selection problem. Put yourself in the shoes of a promising young researcher (somewhere in the world) who has just finished their PhD and is now looking for a job. They are looking through the job adverts and find two categories of jobs: fixed-term positions (without any apparent possibility of being tenured) and tenure-track positions. Which would they prefer? Of course, there are other considerations, such as salary, the quality and quantity of the group of researchers at this place, the location, and so on. But I would conjecture that for many, ceteris paribus as economists like to say, tenure-track beats fixed-term by a large margin. Considering this problem from the point of view of the university, this means that by offering fixed-term positions when others offer tenure-track positions, the university will probably, on average, not receive the best applicants for these jobs. If the university then ultimately and surprisingly gives tenure to some of the fixed-term employed researchers, the university is likely not giving the job to the best people they could have found if they had offered tenure-track positions to begin with. This is the adverse selection problem.
In addition, there is also the moral hazard problem. Now put yourself in the shoes of a (young) researcher employed in a fixed-term position who is told there may be a chance to get tenure after all. You would ask yourself and your boss(es) what you should do to improve your chances of this. I suspect that, without clearly pre-specified criteria for getting tenure, it is down to this (young) researcher's boss to lobby the higher university authorities for the (young) researcher to get tenure. Would your boss use the same (unstated) criteria that a (universally, or at least within the university) agreed and publicly communicated tenure-track contract would specify? Not necessarily. I would conjecture that some bosses would favor pushing those young researchers who help their bosses rather than those who do great independent research. As a young researcher, hoping that your boss will lobby for you to get tenure, what would you do if this boss asks you to jump in to teach their class tomorrow or to replace them at a meeting and to keep notes for them? Well, you would probably do it. A pre-specified catalog of achievements and obligations necessary for getting tenure will, however, likely not have such items on its list. I have a strong feeling that many (young) researchers in fixed-term positions who hope to be given tenure after all end up wasting valuable time on things that are irrelevant for them and for science as a whole.
In short, I have argued that giving tenure to researchers on non-tenure track positions entails an adverse selection problem and a moral hazard problem. The effect of this is that the university ultimately does not hire the best researchers they could have hired, and these researchers do lots of work that has little to do with excellent research per se.
-
Meeting inside a crowded stadium without communication

Three of us have just been to Emirates Stadium to see a football game. We witnessed and participated in some interesting rational herding while walking to the stadium, as I described in my previous post. But once inside, we had another game-theoretic problem. The problem was caused by the fact that we didn't all have seats together. After a quick bite to eat at one of the food stalls inside the stadium before kickoff, we went to our separate seats without communicating how we would meet again at halftime. It didn't occur to us at the time that maybe we should have talked about this. When halftime came and people started flooding back from their seats into the food stall area, I realized that we hadn't arranged where we would meet, and I wondered for a moment how we would manage to do so. The food stall area is huge. It goes all the way around the stadium, I believe. So where was I supposed to go?
I thought about it a bit, and realized that there is really only one place that stuck out, most likely not only in my mind, but quite possibly also in the minds of the family members I was trying to meet: the stand-up table where we had our food together, which was also the last place at which we were together before the game started. And this is indeed where we found each other, pretty quickly, and without having to resort to communicating with our (Austrian) phones (which don't work very reliably in London).
In principle, we had a difficult coordination problem. We could have met anywhere in the stadium. And the stadium is huge and full of people. If I had thought for some reason that my family members would go to, say, Stand-Up-Table 157 (counting from the entrance, say), I should have gone there as well. If I had thought that my family members would all come to my seat, I should have waited for them at my seat. If I had thought that my family members would go to the entrance we came in through, I should have done the same. Game-theoretically, this situation is well described by a large pure coordination game in which all three of us have the same large strategy space (the set of all possible places we could meet) with payoffs such that we all get the highest possible payoff, say 1, if we all choose the same strategy, and, for simplicity, 0 otherwise. Such a game has at least as many (pure strategy) Nash equilibria as it has strategies. A Nash equilibrium is a strategy for each of us such that if the others follow it, I also want to do so (and the same is true for the others). So, for every place that we could have met at, the strategy of going there is a Nash equilibrium strategy, as the sketch below verifies.
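A tiny sketch of this game in Python (the list of meeting places is a hypothetical stand-in for the real options in the stadium):

```python
# The stadium coordination game: three players each pick a meeting
# place; everyone gets payoff 1 if all picks coincide and 0 otherwise.
PLACES = ["food table", "my seat", "entrance we came in", "table 157"]

def payoff(profile: tuple) -> int:
    return 1 if len(set(profile)) == 1 else 0

def is_nash(profile: tuple) -> bool:
    """No single player can gain by unilaterally switching places."""
    for player in range(len(profile)):
        for alternative in PLACES:
            deviated = list(profile)
            deviated[player] = alternative
            if payoff(tuple(deviated)) > payoff(profile):
                return False
    return True

# As claimed above: for every place, "everyone goes there" is a Nash
# equilibrium. The game alone cannot select among them; that selection
# is exactly what Schelling's focal point does.
for place in PLACES:
    print(place, "->", is_nash((place, place, place)))  # all True
```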
Contrary to popular belief, game theory does not generally predict (Nash) equilibrium play, even if the players are assumed to be extremely rational. In fact, even common knowledge of rationality does not imply equilibrium play. Common knowledge of rationality means that everyone involved is rational (which in turn means that everyone has clear goals and chooses actions that best achieve these goals), that everyone knows that everyone is rational, that everyone knows that everyone knows that everyone is rational, and so on ad infinitum (as we like to say). We probably rarely have common knowledge of rationality in actual real-life situations of strategic interaction (as in our case here). But it would, in any case, not imply that we would play an equilibrium. Common knowledge of rationality only implies that the outcome will be what is called rationalizable. Without telling you what rationalizable strategies are, I can tell you that there are some games in which there is only one rationalizable strategy profile, and that one would then have to be a Nash equilibrium. But in coordination games the assumption of common knowledge of rationality yields the following prediction: anything is possible.
So, how did we manage to meet after all in these difficult circumstances? In some situations of strategic interaction there are strategies that Thomas Schelling called "focal points"; see the Wikipedia entry for a starting point. As you will notice when you read this entry, no generally workable definition of a focal point is given there. I once attempted to provide such a definition in a paper with Carlos Alós-Ferrer, and while I like it a lot, it is perhaps only partially satisfying. The idea is relatively straightforward, though. A focal point is a strategy (profile) that, among all other strategies, jumps out at all of you: it is specially earmarked, relative to all other options, in the minds of all the people involved. In our case, the table at which we had food together, and which was also the last place we were at before we went our different ways to our seats, was so earmarked in all our minds. And given this earmarking, we all followed the strategy of "go to the place you have collectively earmarked." This strategy only works if you indeed all earmark the same thing. In reality, many situations lack such a clear focal point, with the implication that you do not manage to meet, or at least not quickly, or not without further communication. One of the problems with the theory of focal points is that it is a bit difficult to state when it should and should not work. But it did work in our case, and I was happy about that. One could say that because of Newton we did not float into space, and because of Schelling we managed to meet.

