-
Gambling laws

There seems to be a universal consensus that gambling is something people should be wary of. Through gambling, you can lose a lot of money in a short amount of time; you can also become addicted and lose money steadily; and by doing so, you may not only negatively affect your own well-being but also that of your spouse and kids and maybe even other people. To protect people from gambling, at least to some extent, most countries have a legal definition of gambling and additional legislation to regulate it. [Countries differ in how much they regulate gambling: some countries have very permissive gambling laws, others (like Austria) even make it a state monopoly so that no one can offer gambling services unless they have a (rarely given) special state license.]
My interest in this matter arose when a lawyer asked me to help him understand some statistical jargon that an expert witness was using in a court case. The court case centered around the question of whether a given online game was a “game of chance” as defined in the Austrian legal text. The definition of a “game of chance” in Austria is similar to that of many other countries. It says that a game of chance is (translated fairly literally by ChatGPT) “a game in which players are required to provide a consideration of monetary value and in which the decision on the outcome of the game depends solely or predominantly on chance.” In plain language, a game is a game of chance, according to Austrian law, if there is something of monetary value at stake and the outcome of the game is mostly driven by chance. And, at least in Austria, offering such “games of chance” as a business is not allowed (unless the business is run by the government).
I don’t find this definition of a “game of chance” very satisfying. Luckily, I have a game-theoretic definition of a “bad game” and in what follows I will try to persuade you that my definition is much better than the current definition of a “game of chance.”
One could probably be even more philosophical than I will be here, debating if there even is such a thing as chance. But I accept that chance is something we can meaningfully talk about. However, I would at least differentiate between three forms of chance (as the current literature on decision theory does).
There is risk (or objective uncertainty), which is the type of chance you encounter in casinos, where we (mostly) all agree about how to quantify chance by means of probabilities. Most of us would agree, for instance, that a ball thrown into an (officially checked to be fair) spinning roulette wheel has an equal chance (of 1/37) of landing in any of the 37 numbered holes. Similarly, most of us would agree that the probability of drawing, say, the ace of hearts from a properly shuffled deck of 52 cards is 1/52.
Then there is subjective uncertainty (or Knightian uncertainty, or ambiguity), which is such that we all agree about the possible outcomes, but do not necessarily all agree about their likelihoods. Think of a football (soccer) game, for instance. We all know that the final outcome is that either one team wins, or the other, or that there is a draw. But we don’t necessarily all have the same opinion about how likely a draw, say, is.
Finally, there is the type of uncertainty where we don’t even all agree about the possibilities that could happen, let alone agree on how to attach probabilities to these possibilities. This uncertainty is often referred to as unawareness in decision theory; see Burkhard Schipper’s unawareness project.
If we are talking about chance, especially in a supposedly well-defined legal context, I would have liked to see these formal distinctions made and addressed. It matters, I feel. Compare the following two games, for instance: blackjack, for which most of us would agree that all chance is objective, and rock-paper-scissors, in which there is no actual device used to generate chance. Blackjack would fall under the legal definition of a “game of chance” if played for money. I am not so sure about rock-paper-scissors (if played for money). [You know the game: rock beats scissors, scissors beat paper, and paper beats rock; and these are your only three choices.] One could argue that there are only two components in this (and any) game that generate outcomes: chance (of which there is none in this game) and the players’ behavior. Now we know that the minimax recommendation for playing rock-paper-scissors (which is also its unique Nash equilibrium) is to play uniformly randomly, that is, to use each of the three strategies with a probability of 1/3 each. If that is what the players do, then there is chance in this game. But would the players play like this? At the very least, I would say that the game exhibits subjective uncertainty, and this uncertainty exclusively derives from the other player’s strategy. Austrian law is often interpreted as if the only factors that can determine the outcome of any game are chance and the players’ skill. But the only chance in rock-paper-scissors is that generated by the players, which in turn is determined by the players’ skill. A bit of a muddle, I find.
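The equilibrium claim is easy to verify numerically. Here is a minimal sketch (the payoff encoding is my own, purely illustrative): no pure strategy earns more than zero against an opponent who mixes uniformly, so there is no profitable deviation from uniform play.

```python
# Payoff matrix for rock-paper-scissors from the row player's perspective:
# +1 = win, 0 = draw, -1 = loss. Rows/columns: rock, paper, scissors.
PAYOFF = [
    [0, -1, 1],   # rock vs (rock, paper, scissors)
    [1, 0, -1],   # paper
    [-1, 1, 0],   # scissors
]

def expected_payoffs_against(mix):
    """Expected payoff of each pure strategy against an opponent's mixed strategy."""
    return [sum(PAYOFF[i][j] * mix[j] for j in range(3)) for i in range(3)]

uniform = [1/3, 1/3, 1/3]
# Every pure strategy earns exactly 0 against the uniform mix, so no
# deviation is profitable: uniform play is a Nash equilibrium.
print(expected_payoffs_against(uniform))  # [0.0, 0.0, 0.0]
```

The same check against any non-uniform mix shows some pure strategy earning a strictly positive payoff, which is exactly why the minimax recommendation is to mix uniformly.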
Now, consider tic-tac-toe. Most of us would probably agree that the game has absolutely no chance component. The game indeed has a (pretty simple) optimal strategy for both players and, if both play that, the game ends in a draw. But now suppose that an entrepreneur offers a platform that allows people to play tic-tac-toe for money (and the entrepreneur keeps a percentage share). Suppose, for the sake of the argument, that a range of people play this game for money on this platform, with some who understand the optimal strategies and some who do not. Then these games do not all necessarily end in a draw. I do not see, however, how this game could satisfy the legal definition of a game of chance, as it is hard to argue that there is any chance in this game at all. Just as there isn’t in chess (or checkers, or Connect Four). Yet, if these games are played for money and organized by an entrepreneur who keeps some of this money, to me this seems rather similar to playing blackjack in a casino (at least for some players).
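The claim that optimal play in tic-tac-toe ends in a draw can be checked by brute force. The following is an illustrative sketch of a standard exhaustive minimax search (my own code, not part of any legal analysis), which evaluates the empty board at 0, i.e., a draw under optimal play by both sides.

```python
from functools import lru_cache

WIN_LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Game value for X (+1 win, 0 draw, -1 loss) under optimal play by both sides."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0  # board full: draw
    nxt = 'O' if player == 'X' else 'X'
    values = [minimax(board[:i] + player + board[i+1:], nxt)
              for i, cell in enumerate(board) if cell == ' ']
    return max(values) if player == 'X' else min(values)

print(minimax(' ' * 9, 'X'))  # 0: with optimal play, tic-tac-toe is a draw
```

The memoization (`lru_cache`) is just to keep the exhaustive search fast; it does not change the game value.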
There are games that are played for money (to an extent) that are not typically classified as “games of chance,” even though they satisfy the legal definition of a game of chance if we allow subjective uncertainty also to count as chance. Take any sport, really. Consider a football game or a game of golf, for instance. The existence of a betting market about these sports proves that there is “chance” at least in the eyes of (most of) the beholders. And there is indeed a chance component to these games, probably not only derived from the players’ strategies, but also from varying weather conditions and other factors (think of gusts of wind in golf, for instance). If we accept “chance” to also cover subjective uncertainty and if these games are offered with money at stake, then these games also qualify as “games of chance.” [Most sports have the feature that players’ “salaries” depend on their success; so, there is clearly some money at stake.]
In fact, sports betting and even financial (stock market) trading satisfy the legal definition of a game of chance. This is definitely so if we allow that other people’s behavior, which is typically the only thing that determines the betting odds and the prices of financial assets, counts as “chance.” How much a bet or a financial asset pays out depends on nothing more than subjective uncertainty (about how the sporting event turns out or about how the price of the financial asset is revised over time, which in turn is determined by some people’s behavior). If we were to believe in the so-called efficient market hypothesis (which we probably should, at least to a high degree, see a previous post of mine), then the betting odds or the prices of assets accurately reflect all pertinent information that there is out there. This would make the uncertainty involved in sports betting and on the stock market almost objective. In fact, much of financial theory assumes that this is so. But then, nobody could really know more than all there is to know – nobody can be much better at betting or at financial investments than anyone who does not do silly things, so skill doesn’t really come into it. But then, sports betting as well as financial trading are games of chance according to the legal definition.
Apparently, neither sports betting nor financial trading is currently considered a game of chance in Austria. Offering such services seems to be allowed at the moment. Maybe this is the case because sports betting and financial trading have been classified (incorrectly, in my opinion) as games, in which the skill component is higher than the chance component.
There is a nice attempt to assess how much skill versus chance there is in games in a paper by Peter Duersch, Marco Lambrecht, and Joerg Oechssler, “Measuring skill and chance in games,” European Economic Review, Volume 127, 2020, 103472. The idea is simple: for whatever game you are interested in, consider or construct an (ELO) ranking of the players who play this game. Let me here suppose that we are dealing with a two-player game. Then, using data on game play, estimate the probability of one of the players winning the game as a function of the two players’ (ELO) ranks. If a player with a low rank still has a decent probability of winning against a player with a high rank, then the game has a large chance component. I believe that if one were to extend this approach to sports betting and financial trading, one would find that there is mostly chance in both of these “games.”
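To give a flavor of the approach, here is a small sketch using the standard textbook Elo formula (an assumption on my part; Duersch, Lambrecht, and Oechssler estimate win probabilities from actual game-play data rather than from this formula). The flatter the win probability is in the rating difference, the larger the chance component of the game.

```python
def elo_win_prob(rating_a, rating_b):
    """Textbook Elo formula: probability that player A beats player B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# Evenly matched players win half the time; a 400-point underdog
# still wins about 9% of the time under this formula.
print(elo_win_prob(1500, 1500))            # 0.5
print(round(elo_win_prob(1200, 1600), 3))  # 0.091
```

In a nearly pure game of chance, even a large rating gap would leave the underdog's win probability close to one half; in a nearly pure game of skill, it would push the probability towards zero.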
In any case, this is not the direction I want to take here – although it would be interesting to have a look at. I would recommend a different definition, maybe not of a game of chance, but of a “bad game” that governments might want to regulate. My definition would be this: A game is “bad” if it is subzero-sum for the contestants. In other words, the total net payout to the players of this game is negative. No mention of chance is necessary. Nobody has to assess how much chance versus skill there is, either. All we would need to check is whether the total net payout to players is negative. Doing this is very easy.
Let me revisit the examples of games I used above to see whether they are “good” or “bad” games. Let’s begin with games in the casino: they are all “bad.” This is because the players put in money, and that money is collected and paid out again, with the casino keeping a percentage share. According to my definition, poker is just as bad as blackjack or roulette. Note, however, and this is important, that if poker is played live on TV and many people watch it (and at least indirectly pay for this privilege by watching commercials), such that the total net payout to the poker players is positive, then this is a “good” game. Most sports are “good” games like this. Sports competitions are performed for the benefit of spectators who pay to watch the event. The money this generates is (partially) used to pay the contestants’ wages or prizes. Similarly, there are two ways one could offer a platform for people playing rock-paper-scissors for money. If all that happens is that the platform makes this possible and takes a cut of the winnings, then this is a “bad” game. If people are paying to watch this game, such that some added value is generated and redistributed to the players to make the game more than zero-sum, then this is a “good” game.
The definition of a bad vs. a good game also neatly differentiates sports betting from financial market trading. Sports betting is typically offered by a broker who simply keeps a cut of the money that is placed as bets before paying out the rest. So, the game is subzero-sum and “bad.” Financial assets typically grow in value over time; that is, they have a positive return on average. The game of financial trading is super-zero-sum and thus “good.”
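The check my definition requires really is trivial. A sketch, with illustrative numbers (the 97.3% expected payout corresponds to the single-zero roulette house edge of 1/37; the stock-return figure is simply an assumed positive average return):

```python
def is_bad_game(total_stakes, total_payouts):
    """A game is 'bad' (subzero-sum) if players collectively get back less than they put in."""
    return total_payouts - total_stakes < 0

# European roulette: of every 100 units staked, about 97.3 are paid back on average.
print(is_bad_game(100.0, 97.3))   # True  -> "bad" game
# A broad stock index with a positive average return:
print(is_bad_game(100.0, 105.0))  # False -> "good" (super-zero-sum) game
```

No ranking data, no chance-versus-skill estimation: one subtraction settles the classification.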
Now, you could still have qualms even about “good” games. In “good” games, people could also lose a lot of money, even though, on average, net winnings are positive. If lawmakers worry about this, they could add limits to how much you can bet or trade. One should, however, probably have different limits depending on the perceived risks of financial assets (one could lose much more trading options than trading stocks, for instance). Indeed, this is what the Basel framework does for banks (adopted in most countries): banks are required to hold adequate capital to cover their risk exposure; the minimum capital requirement depends on the risk of the bank’s portfolio of asset holdings. Something like this could also be imposed on private financial traders if one is worried about them losing too much even in “good” games.
Anyway, I feel my definition of a “bad game,” games that are subzero-sum, would be a much better basis for gambling law than the hard-to-assess current legal definition of a “game of chance.”
-
Dirk Gently’s Holistic Decision Theory – Model Uncertainty

Dirk Gently talks a lot about the impossibility of certain events that have actually happened. In “Dirk Gently’s Holistic Detective Agency,” Dirk is much taken by a conjuring trick a professor performs at a high table dinner at one of the Cambridge colleges. Only he seems to realize that the particular trick is actually totally impossible, at least given what we know about the world. The professor makes a “salt cellar” disappear and then reappear in a 200-year-old pot that needed cracking open to get to the salt cellar (on pages 34-37 of my edition of the book). Dirk calls this event (p. 189) “completely and utterly impossible.” After asserting that, luckily, “there is no such word as ‘impossible’ in my dictionary” (in fact, a lot of pages are missing), he rephrases this to “completely and utterly – well let us say, inexplicable … [I]t cannot be explained by anything we know.” What he means is that the model of the world that we have cannot explain this phenomenon. Because of this fact we should entertain a different model of the world. Indeed, eventually it becomes clear that the professor was able to travel back in time 200 years, something our model of the world does not allow, and to have the salt cellar put into the pot when it was made. Dirk’s ability to think beyond the current model of the world made it possible for him to understand the seemingly impossible.
In “The long dark tea-time of the soul” Dirk goes further, claiming that he often prefers an impossible explanation over possible but highly improbable ones. Among a number of interesting patients in the “Woodshead hospital,” there is a ten-year-old girl sitting in a wheelchair and murmuring constantly and “soundlessly to herself.” She is murmuring stock market prices, but yesterday’s (with a precise 24-hour delay). She has no apparent access to outside news of any kind. Yet, the psychologist explains that “[w]ell, as a scientist, I have to take the view that since the information is freely available, she is acquiring it through normal channels.” All this happens on pages 121-123 of my edition of the book.
When told about this, Dirk is confronted with Sherlock Holmes’ (I guess really Sir Arthur Conan Doyle’s) statement that “[o]nce you have discounted the impossible, then whatever remains, however improbable, must be the truth.” Dirk responds with “I reject that entirely, the impossible often has a kind of integrity to it which the merely improbable lacks.” Applied to the present case of the girl in the wheelchair he states that “[t]he idea that she is somehow receiving yesterday’s stock market prices out of thin air is merely impossible, and therefore must be the case, because the idea that she is maintaining an immensely complex and laborious hoax of no benefit to herself is hopelessly improbable. The first idea merely supposes that there is something we don’t know about, and God knows there are enough of those. The second, however, runs contrary to something fundamental and human which we do know about. We should therefore be very suspicious of it and all its specious rationality.”
I like Dirk’s statement a lot, for two reasons. The first is that Dirk points out that we should be aware of model uncertainty. We should always entertain the possibility that the model that we have of the world is not completely correct, and sometimes new evidence should lead us to reconsider the model. This is somewhat comical in Dirk’s context, where we are really talking about the physics of the world (although of course physics models can also be wrong). The idea is much more important, however, in game-theoretic models (as used most heavily in economics), as these models are typically radical simplifications of the world. If a game-theoretic model leads to bizarre predictions (or policy implications), you should probably go back to the drawing board for a more appropriate game-theoretic model, rather than accept these predictions as truth. By the way, Dirk’s approach of questioning the underlying model based on possible but highly improbable observations given the model is a standard tool of statistics. Indeed, in statistics, we tend to reject a null hypothesis (our current model of the world) if the kind of data that we observe is sufficiently hard to explain with the null hypothesis; that is, if the p-value is below the level of significance that you choose to begin with (you know, the often chosen α of 5%, for instance). So, while the way Dirk says this may appear ludicrous, he is really just stating sound scientific reasoning.
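This statistical logic can be made concrete with a toy example (the numbers are mine, purely for illustration): observing 9 heads in 10 tosses of a supposedly fair coin yields a one-sided p-value of about 1.1%, so at the usual 5% level we would reject the fair-coin model, which is exactly Dirk’s move of abandoning a model that makes the observed data too improbable.

```python
from math import comb

def binomial_p_value(n, k, p_null=0.5):
    """One-sided p-value: probability of k or more heads in n tosses under the null."""
    return sum(comb(n, i) * p_null**i * (1 - p_null)**(n - i) for i in range(k, n + 1))

p = binomial_p_value(10, 9)  # 11/1024, about 0.011
alpha = 0.05
print(p < alpha)  # True: reject the fair-coin null model
```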
The second part of Dirk’s statement that I like is that he puts human incentives even above physical laws (of a possibly flawed physical model). Why would this ten-year-old girl maintain such a “laborious hoax” that is “of no benefit to herself”? Taking human incentives into account is what game-theoretic models (with a longer tradition in economics) are really good at. Douglas Adams would have made a great game theorist (or even economist ;).
-
Dirk Gently’s Holistic Decision Theory – Model Choice

On pages 146-150 of my edition of Douglas Adams’ “Dirk Gently’s Holistic Detective Agency,” Dirk Gently has a phone conversation with one of his clients, whose cat he has not been able to find in the past seven years. They are debating the bill that the holistic detective had sent, and we only hear his side of the conversation. I will give you the key snippets: “Sadly, no sign as yet of young Roderick, I’m afraid, but the search is intensifying as it moves into what I am confident are its closing stages.” … “I grant you, Mrs. Sauskind, that nineteen years is, shall we say, a distinguished age for a cat to reach, yet can we allow ourselves to believe that a cat such as Roderick has not reached it?” And after some very entertaining bits about Dirk’s “quantum mechanical view” of the world and the psychological cost his client’s skepticism puts on him (also itemized on the bill) we get to Dirk’s decision-theoretic view: “I do appreciate, Mrs. Sauskind, that the cost of the investigation has somewhat strained from its original estimate, but I am sure that you will in your turn appreciate that a job that takes seven years to do must clearly be more difficult than one that can be pulled off in an afternoon and must therefore be charged at a higher rate. I have continually to revise my estimate of how difficult the task is in the light of how difficult it has so far proved to be.”
The pleasure one gets from reading this derives from the strong suspicion that Dirk’s implicitly stated model of the underlying problem is probably not appropriate, certainly different from his client’s model, and probably also not Dirk’s true model.
Dirk’s implicit model could be something like this. The cat is equally likely in any one of a large number of, say k, places. [Let us ignore the complication that arises from the possibility that the cat could, in reality, also move while the search is going on.] The cost of searching differs from place to place. Dirk seems to work under the hypothesis that Mrs. Sauskind attaches a very high value to finding her cat, so that the search should go on at any cost. Assuming the detective searches optimally (given his model), he goes to the place with the lowest search cost first, then the second lowest, and so on. Given this optimal behavior, the revised expected total cost of searching for the cat increases with every additional fruitless search. Moreover, the daily additional search cost goes up every time. Under this model, Dirk is then indeed right in saying that he has to “continually revise [his] estimate of how difficult the task is in the light of how difficult it has so far proved to be”.
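Dirk’s claim can be illustrated with a small numerical sketch (the uniform prior over the k places and the cost numbers are my own assumptions): searching cheapest-first, the revised estimate of the total cost, sunk costs plus expected remaining costs, rises after every fruitless search.

```python
def expected_remaining_cost(costs):
    """Expected remaining search cost when the cat is equally likely to be in
    any of the unsearched places and we search cheapest-first."""
    costs = sorted(costs)
    k = len(costs)
    total, cumulative = 0.0, 0.0
    for c in costs:
        cumulative += c          # cost paid if the cat turns out to be at this place
        total += cumulative / k  # each unsearched place has prior probability 1/k
    return total

costs = [1, 2, 4, 8, 16]  # hypothetical per-place search costs
sunk = 0.0
revised_totals = []
for m in range(len(costs)):  # m = number of fruitless searches so far
    revised_totals.append(sunk + expected_remaining_cost(costs[m:]))
    sunk += costs[m]
print(revised_totals)  # strictly increasing revised estimates of the total cost
```

So Dirk’s “I have continually to revise my estimate” is, under his model, entirely correct: each failure both adds sunk cost and concentrates the remaining probability on the more expensive places.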
Now, Mrs. Sauskind’s model of the problem seems to be different in at least one crucial aspect. She appears to have the view that, while there may be the same places that the cat could be in, there is also a non-negligible possibility that the cat is already deceased (or simply unfindable). This means she attaches a total probability to the cat being at any of the k places that is less than one. Assume, therefore, that Mrs. Sauskind’s model is such that the probability of the cat being at place i is p_i, with, importantly, p_1 + p_2 + … + p_k < 1. Then p_0 = 1 − (p_1 + p_2 + … + p_k) is the ex-ante probability that the cat is dead. Finally, it seems evident from the dialogue that Mrs. Sauskind attaches positive value to the cat being found, but that she is also quite cost-sensitive. That is, at any moment of time (as long as the cat is not yet found), she would weigh the future chance of finding the cat against the expected (additional) costs of searching for the cat.
Given her model, as time goes by without any sign of the cat, Mrs. Sauskind attaches a higher and higher probability to the cat being already deceased. Let us label the places in the order in which the detective searches them. Then, if the cat was not found in the first place, by Bayes’ law, the probability of the cat being dead increases to p_0 / (1 − p_1); after two unsuccessful searches it increases to p_0 / (1 − p_1 − p_2);
and so on. All this without even the concern that the cat is getting older and older, while the search is going on over, apparently, a number of years.
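A minimal numerical sketch of this updating (the place probabilities are invented for illustration): with ten places at probability 0.08 each, the prior probability of the cat being dead is p_0 = 0.2, and the posterior climbs towards one as the places are searched without success.

```python
def prob_dead_after(p_dead_prior, place_probs, searches):
    """Posterior probability the cat is dead after `searches` fruitless searches,
    by Bayes' law: p0 / (1 - p1 - ... - pm)."""
    return p_dead_prior / (1 - sum(place_probs[:searches]))

place_probs = [0.08] * 10  # hypothetical: ten places, so p0 = 0.2
posteriors = [prob_dead_after(0.2, place_probs, m) for m in range(11)]
# approximately 0.2, 0.217, 0.238, ... , 1.0: Mrs. Sauskind's belief that
# the cat is dead rises steadily with every fruitless search.
```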
During all this time, Mrs. Sauskind’s expected total costs are continually rising, and, given the detective’s statements, it seems also her expected future costs are constantly rising. It would now depend on the exact utility function over the cat being found and over how much money she has left to determine Mrs. Sauskind’s optimal search policy, or optimal stopping time, as the literature on these problems likes to call it. But it seems clear from the conversation that Mrs. Sauskind would have preferred to have stopped the search some time ago.
So, the holistic detective and his client really have a fundamental disagreement about the true model of the world. It is therefore not surprising that they cannot agree on the reasonableness of the bill.
-
Dirk Gently’s Holistic Decision Theory – Bayes’ law

After watching a season of Dirk Gently’s Holistic Detective Agency on Netflix with the kids, I re-read Douglas Adams’ original books. I find them exceptionally funny, but also full of wisdom, especially in the realm of decision theory. In a short series of posts, I want to go through some fine examples of these.
On page 115 of my edition of “The long dark tea-time of the soul” we overhear a psychologist talking to a client over the phone. We only hear the psychologist’s side of the conversation.
“Yes, it is true that sometimes unusually intelligent and sensitive children can appear to be stupid. But, Mrs. Benson, stupid children can sometimes appear to be stupid as well. I think that’s something you might have to consider.”
A bit harsh, of course, but probably true. I especially like the repetition of “sometimes”. We understand what the psychologist is trying to say, of course. But, to make it probably unnecessarily clear, let me sketch a simple decision-theoretic model of the psychologist’s thinking. There is a true state of the world that the child is in: it is either unusually intelligent (UI) or stupid (S). Presumably, the child can also be something in between, but let me ignore this in my simple model. Observing the child for a bit provides us with some information, which can be described by what Blackwell would have called an information structure, or an experiment, or perhaps a signal-generating system. Ultimately, we obtain a signal. The signal is either that the child appears stupid (AS) or that it does not appear stupid (NAS). The probability that each signal is generated depends on the state. These probabilities are known to the expert psychologist. There is the probability that an unusually intelligent child appears stupid, P(AS | UI), which, the psychologist admits, is positive (the first “sometimes” in the quote above). [In probability theory, P(AS | UI) is often referred to as the probability of appearing stupid (AS) conditional on the child being unusually intelligent.] And there is the probability that a stupid child appears stupid, P(AS | S), which, the psychologist claims, is also positive (the second “sometimes” in the above quote). Apparently, it may be less than one.
When the psychologist says that “that’s something you have to consider” he means that you should compute the probability of the child being stupid, conditional on the child appearing stupid, P(S | AS), and that this also depends heavily on the probability that a stupid child appears stupid, P(AS | S). Formally, using Bayes’ law, we get that P(S | AS) = P(AS | S) P(S) / [P(AS | S) P(S) + P(AS | UI) P(UI)].
Presumably, and the first word in “unusually intelligent” seems to suggest this, the ex-ante (or a priori, as Bayesians like to say) probability of a child being unusually intelligent, P(UI), is low. This means that P(AS | UI) P(UI) is probably almost negligible in the calculation relative to P(AS | S) P(S), which would suggest that the ex-post (or a posteriori, as Bayesians like to say) probability of a child being stupid when it appears stupid, P(S | AS), is definitely positive, if not even relatively close to one.
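Plugging in some invented numbers (2% of children unusually intelligent, P(AS | UI) = 0.5, P(AS | S) = 0.6; none of these come from the book) shows how strongly the low prior P(UI) drives the conclusion.

```python
def prob_stupid_given_appears_stupid(p_ui, p_as_given_ui, p_as_given_s):
    """Bayes' law: P(S | AS), with P(S) = 1 - P(UI)."""
    num = p_as_given_s * (1 - p_ui)     # P(AS | S) P(S)
    denom = num + p_as_given_ui * p_ui  # + P(AS | UI) P(UI)
    return num / denom

posterior = prob_stupid_given_appears_stupid(p_ui=0.02, p_as_given_ui=0.5, p_as_given_s=0.6)
print(round(posterior, 3))  # 0.983: appearing stupid is, alas, strong evidence of being stupid
```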
The psychologist ends the phone conversation with “I know it’s very painful, yes. Good day, Mrs. Benson.”
-
Innovative Strategies in Warfare

In C. S. Forester’s The Commodore, the main character Horatio Hornblower is in charge of a small squadron of ships of the (British) Royal Navy in the Baltic Sea. We are in the year 1812 and in the middle of the Napoleonic Wars. I am here interested in an episode in the book, in which Hornblower is assisting the Russians (who have just entered the war against Napoleon’s France) in their defense against the siege of Riga (which really happened).
One of the people in charge of the Russian defense in Riga (in the book, not in reality) is the Prussian officer Carl von Clausewitz. While he was probably not involved in the siege of Riga in real life, Clausewitz did leave the Prussian army (when it was part of the Napoleonic armies) in 1812 to support Russia against France. Clausewitz wrote a very influential treatise on warfare, mostly after the Napoleonic Wars, that emphasized strategic thinking. I find it quite interesting that Forester used Clausewitz as one of the main characters in the siege and through him we learn a lot about sieges.
A siege in the 18th and early 19th century seems to have developed to the point where every part of it was predictable. Over the years, soldiers had learned (and been trained) how to behave in a siege, so much so that, game-theoretically speaking, equilibrium play had been reached. Even if both sides know exactly what the other side is (and will be) doing, they can only do what the other side expects them to do in return. There is no better strategy out there for either side. This is nicely described in the book. When Clausewitz and Hornblower overlook the siege from the gallery of a church, we are informed of (what are probably) Hornblower’s thoughts: “To a doctrinaire soldier a siege was an intellectual exercise. It was mathematically possible to calculate the rate of progress of the approaches and the destructible effect of the batteries, to predict every move and countermove in advance, and to foretell within an hour the moment of the final assault.”
So, what is the apparent equilibrium in a siege, then? Well, we should probably take a step back and briefly sketch the game we are talking about. First, the players. We are dealing with a war between two sides. Each side is, in reality, composed of a large group of individuals, but each group acts in a very coordinated manner as they all share the same goal (or are at least made to share the same goal). So, I feel it is safe to assume that there are simply two players: the besiegers and the besieged. It is also pretty clear what each side wants: the besiegers want to conquer Riga, the besieged want to prevent this from happening.
Ideally, from the besiegers’ point of view, they would just run up to Riga and claim it as theirs. But that is not a good strategy, as the besieged are protected by walls and guns and would simply shoot the besiegers. So, instead, the besiegers start digging trenches and putting up “gabions,” something that “looked like a wall,” along these trenches. Such a protected trench is called a “sap.” The starting point of the sap is at a reasonably safe distance from the town’s guns. Then, slowly, the besiegers extend the sap forward along “parallels,” which, I guess, means in a zigzag manner. While the besiegers slowly but persistently do this, the besieged shoot at them with their big guns. This shooting is rather ineffective, and they can apparently only damage the newest bit of the wall while it is being erected. Hornblower at one point asks Clausewitz: “Why do your guns not stop the work on the sap?” to which Clausewitz replies: “They are trying, as you see. But a single gabion is not an easy target to hit at this range, and it is only the end which is vulnerable.” This sapping allows the besiegers not only to slowly come closer to the town walls, but also to bring up some of their big guns. As Clausewitz explains further: “And by the time the sap approaches within easy range their battery-fire will be silencing our guns.”
I expect that much of warfare has these very predictable aspects, especially if a war goes on for some time. And these predictable aspects can probably be well described by an equilibrium of an appropriate game between the two sides. Harder to understand using game theory (or anything else, really) are cases of innovative warfare. This is what Hornblower in all the books excels at, but you feel that these cases are rarer in real life. I once read that Hornblower is probably not modeled on any single real person in the British Royal Navy, but on many of them. One person can probably not come up with as many innovative strategies as Hornblower has throughout all the books. To be fair, most people probably didn’t even have the opportunity to do so.
In The Commodore, Hornblower, watching the siege operations alongside Clausewitz, is struggling to think how he could help with Riga’s defense. Clausewitz asks him at one point: “Can you not bring your ships up, sir? See how the water comes close to the works there. You could shoot them to pieces with your big guns.” But the problem was that the water there was way too shallow for Hornblower’s ships. Hornblower explains this to “an unsympathetic ear” and is frustrated by his inability to help. He walks around his cabin and is further frustrated by the restricted space, when suddenly, while just climbing over a rail, with “one leg in mid-air”, an “idea came to him”. [I quite like how Hornblower’s ideas come to him – it is very much like I (and many theorists, I believe) do research.] Hornblower realizes that there could be a way to lift his ships almost out of the water by attaching little loaded boats or barrels full of sand (or something like that) to the ship and then unloading them. He then has the two “bomb-ketches” in his squadron lifted in this way and brings them into action for a few hours to devastating effect on the sapping operation.
Hornblower’s novel strategy was something that the besiegers were clearly not even aware of as being possible. For the besiegers, this was, in Donald Rumsfeld’s terminology, an “unknown unknown”. Similarly surprising moves have been made recently by Ukraine, which bombed Russian bomber planes in a complex operation deep inside Russia, and by Israel with the exploding pagers. In those two cases, the other side was most likely also not even aware of these possibilities. Having done this once, however, one would assume that this cannot easily be done again, as the other side, now being aware of such possibilities, can probably put preventative measures in place. In Hornblower’s case, the French react by bringing up a battery of guns towards the lifted ships within a few hours and keep them there for the remainder of the siege.
Interestingly, and again because of the highly predictable siege equilibrium, Clausewitz can precisely quantify the effect: Hornblower’s innovative strategy has delayed the besiegers by no more than four days.
-
Predictable and Predictably Unpredictable Warfare

[Photo by Rafael Garcin on Unsplash]
In C. S. Forester’s Hornblower and the Hotspur, Horatio Hornblower is the captain of a three-masted sloop (one of the smaller ships at the time), the Hotspur, in the (British) Royal Navy. It is 1803 and there is a temporary peace, the Peace of Amiens, during which the Hotspur is patrolling some parts of the coast of France. The episode I want to study begins with a French ship, the Loire, a frigate that is a good deal bigger than the Hotspur, leaving her anchorage in the direction of the Hotspur. Captain Hornblower correctly suspects (through an interesting series of what one could describe as Bayesian probabilistic inferences) that war has been declared and, given the size disadvantage, sets a course to avoid a confrontation.
This situation can now be described as a game between two players, the two ships (or their two captains), with opposing preferences: The Loire wants to catch up with the Hotspur, and the Hotspur wants to evade the Loire. To finalize our model, we need to specify the available strategies. These are all the different directions that the ships could go in, taking into account some geographical (and weather) constraints. From the goals that the two captains have, we can derive payoffs for any strategy combination. These payoffs are presumably such that the Loire always prefers to go in the same direction as the Hotspur, and the Hotspur prefers to go in any direction that is different from the direction the Loire takes. A more careful reading of the book suggests a secondary payoff-relevant concern. Given the possibility of something (exogenously, as we like to say) happening, such as bad weather or the sudden appearance of another (most likely British) ship, the Loire would probably like to catch up with the Hotspur as quickly as possible, while the Hotspur would like to delay such an event as much as possible. This is highly relevant in the present case, because it quickly emerges that the Loire is the (slightly) faster ship.
Having thus verbally fully described the game between the two ships, we can turn to the equilibrium analysis. Equilibrium play seems very plausible in this case for two reasons. One, the game is relatively simple, and two, this is not the first time in the history of naval warfare that one ship tries to catch another ship. Together, these two observations make it quite likely that each of the two captains of the two respective navies (from their training and their experience) has learned to behave optimally given the other captain’s behavior. In short, it seems likely that they have learned Nash equilibrium behavior.
It seems that, at least in the present case, the best course of action for the Hotspur and, thus, the Hotspur’s equilibrium behavior, assuming (correctly) that the Loire would follow wherever the Hotspur goes, is to sail into the wind. I don’t know that much about sailing, but I have been given to understand that you cannot sail directly into the wind. You can only sail at some (maximally acute) angle against the direction that the wind is coming from. The geographical realities in the present case are such that the two ships cannot go in the same direction against the wind for too long before they would run aground just off the coast. The escaping ship, therefore, has to occasionally “tack”, that is, to turn to move along the opposite (maximally) acute angle against the wind, thereby going into the wind in a zigzag fashion. In equilibrium and, indeed, also in the book, the Loire tacks whenever the Hotspur does to lose as little time as possible. This highly predictable behavior now goes on for quite some time. During this time the officers of the Hotspur make fairly precise and worrying predictions as to how long they have before the Loire catches them.
But then there is an interesting twist. Some small isolated low clouds appear just above the water. Noticing these, Hornblower decides to delay tacking his ship beyond what would otherwise be seen as optimal until the Hotspur is hidden by one of these clouds. When the Hotspur comes out of the cloud on the other tack, Hornblower realizes to his surprise that the Loire is also already on the other tack. Her captain has wisely predicted the Hotspur’s movements and has tacked at the same time as the Hotspur, thereby not losing any time at all. Hornblower tells himself that he will not use this trick again, and when a second cloud covers the Hotspur, he does indeed not tack; he does not even think about tacking, in fact. After coming out of this cloud, Hornblower finds that, again to his surprise, the Loire has tacked. Presumably, the captain of the Loire thought that the Hotspur would be tacking and did the same. But in this case, the Loire made a mistake and lost some valuable time.
The presence of the clouds has changed the nature of the game. Without the clouds, that is, with full visibility, the game is essentially a sequential move game. The Loire can simply observe what the Hotspur does and make her choice afterwards. This makes it easy for the Loire to match her action to that of the Hotspur, while making it impossible for the Hotspur to prevent that. With the clouds, the game is better described by a simultaneous move game. While one of the ships is hidden from view within a cloud, neither captain can see the other captain’s move. The game they are now playing is an instance of the so-called matching-pennies game. Both captains have two choices: they can tack or not tack. The captain of the Loire would like the two ships’ actions to match; Hornblower would like them to mismatch. This game also has a Nash equilibrium, but it is in what game theorists call “mixed” strategies: this means both captains choose randomly whether to tack or not. Given the description in the book about Hornblower’s thought process, it does not sound like he is actually randomizing. All that is really needed, however, is that the two players’ actions are unpredictable for their opponent. And it seems that Hornblower’s thought process was not exactly what the captain of the Loire thought it was (at least not in the second cloud instance). In any case, with just the two data points that we have (there being only two instances with clouds), we cannot reject that the two captains are playing the equilibrium of this game.
The game without clouds had an equilibrium in pure strategies, in which both players were able to predict each other’s moves precisely. The game with clouds also had an equilibrium, but in mixed strategies. In this case, the two captains should not expect to be able to fully predict their opponent’s choice, but they should be able to predict the extent of unpredictability. They should know that there is essentially a 50-50 chance of either move and, therefore, should never really be super-surprised by anything the other captain does.
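The mixed equilibrium of the matching-pennies game between the two captains can be checked with a small computation. This is a minimal sketch under my own zero-sum payoff normalization (+1 to the Hotspur when the actions mismatch, −1 when they match); the function and variable names are illustrative, not from the book.

```python
# A minimal sketch of the matching-pennies game between the two captains.
# The Hotspur wants the two ships' actions to MISmatch; the Loire wants them
# to match. The game is zero-sum: the Loire's payoff is the negative.
import itertools

ACTIONS = ("tack", "hold")

def hotspur_payoff(hotspur_action: str, loire_action: str) -> int:
    """+1 if the actions mismatch (the Hotspur escapes cleanly), -1 if they match."""
    return 1 if hotspur_action != loire_action else -1

def expected_payoff(p_tack_hotspur: float, p_tack_loire: float) -> float:
    """Expected payoff to the Hotspur when both captains randomize over tacking."""
    probs_h = {"tack": p_tack_hotspur, "hold": 1 - p_tack_hotspur}
    probs_l = {"tack": p_tack_loire, "hold": 1 - p_tack_loire}
    return sum(
        probs_h[a] * probs_l[b] * hotspur_payoff(a, b)
        for a, b in itertools.product(ACTIONS, ACTIONS)
    )

# At the mixed equilibrium (each captain tacks with probability 1/2), both are
# indifferent: no pure deviation by either captain changes the expected payoff.
for deviation in (0.0, 1.0):
    assert expected_payoff(deviation, 0.5) == expected_payoff(0.5, 0.5) == 0.0
    assert expected_payoff(0.5, deviation) == 0.0
print(expected_payoff(0.5, 0.5))  # → 0.0
```

The indifference check is exactly what makes 50-50 randomization an equilibrium: if either captain tacked with any probability other than one half, the other could exploit that bias.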
Donald Rumsfeld, as US Secretary of Defense, once puzzled the world a bit with a statement about the differences between “known knowns”, “known unknowns”, and “unknown unknowns”. One can give a purely decision-theoretic discussion of this statement, but one can also see this in game-theoretic terms. In the equilibrium of the game without clouds, every future move is known to the two captains. In the game with clouds, the captains (should) know that they don’t quite know their opponent’s next move (in the cloud). In another blog post, I will use another Hornblower story to discuss situations in which players don’t even know what they don’t know.
-
Why happiness is elusive

There is an Austrian saying that “happiness is a bird” (“Das Glück is a Vogerl”). The idea, I think, is that happiness is hard to catch and even harder to keep hold of. In this blog post, I want to offer a formal model and definition of happiness that is able to generate the fleeting nature of happiness.
First, a bit of casual introspection to set the stage for modeling ideas. Imagine that sometime around early winter you finalize your summer holiday plans. You are planning a road trip through Australia (in their winter and your summer), something you are very excited about. Imagine that a month later – still (your) winter – you learn that something happened that is not a big problem in itself, but that will prevent you from going to Australia after all. Say, it turns out you can’t take that particular time off after all, but that is the only time that would have worked for the other people you were planning to go with on this trip. You will probably be very unhappy. And you will be unhappy right then and there, in (your) winter, months before you were supposed to be going on this trip. You don’t wait to be unhappy until the summer comes along. In fact, when the summer does come along, you will probably already be less unhappy, you have already “worked through” your grief.
The key element that I want to take up from this casual introspection is that humans are very forward-looking. They create expectations of the future and, in a sense, “consume” at least part of their future expectations before these are realized. And, as I will argue, humans who are good at forming correct expectations will likely only be happy for short amounts of time. They will not be able to live in a permanent state of bliss. [On the flip side they will also only be unhappy for short amounts of time.]
One way to see things is that there are many possible paths that your life could take. You control some aspects of which path you get, but no matter how much you control things, how much you “take life into your own hands,” there is always some leftover uncertainty. In fact, there is probably a lot of leftover uncertainty. You make educational choices, you decide what to study, and you decide which jobs you apply for, but what job you end up getting and where is not only up to you. You make friends and have a family, but who exactly they are and what happens to them, something you also care about, is again not all down to you.
Turning to a mathematical description of your life, we can collect all possible paths of life that could happen to you in one big set Ω. At the beginning of your life, you have a belief about the likelihood of these various possible paths, which we capture by a probability distribution over Ω. Ok, you probably have to grow up a bit before your beliefs form, but, at some point, you will probably have one. And yes, you might not exactly be able to write it down and articulate it fully, and maybe you have a more diffuse notion of your future that you don’t feel you can capture by a probability distribution, but I think you will see that this is a useful notion. The next ingredient to studying your happiness is to consider how much you would value different paths of life ω in Ω.
Here, I am not sure whether what I propose is the best way to model this – I am following standard models of intertemporal choice in economics and I haven’t thought deeply enough about possibly better alternatives. The idea is that a path in life ω gives you a level of instantaneous satisfaction at any moment of time t. I will, for simplicity, count time discretely in, say, days. A path ω would then give you a sequence of instantaneous levels of satisfaction for all days, from day zero (now) until the end of days. Call these levels of satisfaction u_0(ω), u_1(ω), u_2(ω), and so on. I am using u because in economic models this is what you often see, with u for utility. As people are forward-looking, at any time t they care not only about the time-t instantaneous level of satisfaction but also about those in the future. A nice and simple way to capture this idea is that you do what firms are supposed to do when they consider long-term investment decisions: you compute the net present value of, in your case, all your future levels of instantaneous satisfaction. Each path of life ω, at every moment of your life t, then yields a time-t lifetime satisfaction, let’s call it U_t(ω) = u_t(ω) + δ·u_{t+1}(ω) + δ²·u_{t+2}(ω) + …, for some discount factor δ between zero and one. Note that you can potentially live forever here. However, we can interpret the so-called discount factor δ as at least partly reflecting your less-than-certain chance of surviving until the next day. In that case, even if you could live forever in theory, the chances of that happening are zero. The discount factor can partly also reflect your degree of impatience.
So, we have formalized the possible paths of life and their consequences for us in terms of lifetime satisfaction. I have not yet mentioned happiness. And happiness will not be the same as lifetime satisfaction. I guess this is a bit controversial, but I believe that happiness is what we experience when things turn out better than expected. And we are unhappy when things turn out worse than we expected. To capture this, we introduce events – things that can happen to us. One way to see this is that, as time goes on, we can rule out more and more paths in life. This can be captured by what probabilists call a filtration: a growing family of information sets with the property that whatever you know to be true at time t, you also know to be true at any later time t′ ≥ t. You don’t forget and you may learn new things. Suppose we call F_t the information (about your path in life) that you have received up to and including time t. I can now finally define your happiness as the difference between your “updated” expected time-t lifetime satisfaction and your “original” expected time-t lifetime satisfaction, that is, the expectation given yesterday’s information. Formally, happiness is given by H_t = E[U_t | F_t] − E[U_t | F_{t−1}].
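This definition of happiness, as the gap between today’s and yesterday’s expectation of lifetime satisfaction, can be sketched numerically. The sketch below uses strong simplifying assumptions of my own (not part of the original model): a finite horizon as a stand-in for an infinite life, and daily satisfaction levels drawn independently with a known mean, revealed one day at a time. DELTA, HORIZON, and MEAN_U are illustrative parameter choices.

```python
# A minimal numerical sketch of happiness as H_t = E[U_t | F_t] - E[U_t | F_{t-1}],
# under simplifying assumptions: finite horizon, i.i.d. daily satisfaction draws
# with known mean, one draw revealed per day.
import random

DELTA = 0.95      # daily discount factor
HORIZON = 200     # finite stand-in for "the end of days"
MEAN_U = 0.5      # known expected value of each day's satisfaction draw

def expected_lifetime_satisfaction(t: int, known: list) -> float:
    """E[U_t | information so far]: use the realized u_s where it is already
    known (s < len(known)), and the mean MEAN_U where it is not."""
    total = 0.0
    for s in range(t, HORIZON):
        u_s = known[s] if s < len(known) else MEAN_U
        total += DELTA ** (s - t) * u_s
    return total

random.seed(1)
path = [random.random() for _ in range(HORIZON)]  # one realized path of life

# Happiness on day t: updated expectation (knowing u_0..u_t) minus yesterday's
# expectation (knowing only u_0..u_{t-1}).
happiness = [
    expected_lifetime_satisfaction(t, path[: t + 1])
    - expected_lifetime_satisfaction(t, path[:t])
    for t in range(1, HORIZON)
]

# With i.i.d. daily shocks the definition collapses to "today's surprise",
# H_t = u_t - MEAN_U: happiness is fleeting by construction, since yesterday's
# good news is already baked into today's baseline expectation.
for t, h in enumerate(happiness, start=1):
    assert abs(h - (path[t] - MEAN_U)) < 1e-9
```

The final assertion makes the fleetingness concrete: once an event has been absorbed into expectations, it contributes nothing further to happiness, which is exactly the air-miles-upgrade and Friday-ice-cream pattern described below.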
I should probably cite some literature now that justifies my definition of happiness. The best I can do is to point you to the work of Arthur Robson on the biological basis of human (economic) behavior. I am not sure he would quite agree with my model here, but it is partly based on my, possibly imperfect, reading of his work. I have come to believe that happiness is not the same as lifetime satisfaction and that mother nature uses our pursuit of happiness (through the clever use of short-lived dopamine bursts) not to make us happy but to make us always want to achieve more and more and more – ever to increase our evolutionary fitness. Given mother nature’s biological constraints, she chose to make happiness have less to do with the level of lifetime satisfaction and more with how it changes when certain events happen to you.
If happiness is given like this, continued happiness (undermining mother nature’s goals) would be best achieved by maintaining low expectations. Sage advice I would think, but hard to follow. Ideally, you would never expect a good meal and always be surprised when you get one. “Oh boy, I am getting something nice for breakfast!” This is difficult to keep up when you get a good breakfast every day. But it would quite possibly be a happier life.
When I lived in Chicago, I flew back to Austria to see family about twice a year. I collected air miles and fairly soon had a good stash thereof. I don’t know if it was a glitch in the airline’s system, but when just before boarding I asked for an upgrade based on my air miles, I often got one without the airline ever taking any miles off my account. I kept getting upgraded. The first time this happened to me I was extremely happy. It was one of the best flight experiences I ever had. This is so, I believe, because it came as a surprise – I did not expect to be upgraded. But after a while, the experience became more routine and did not give me that much happiness. I came to expect an upgrade. When I then did not get one, I was pretty unhappy. I was, in fact, much less happy than in the earlier days when I was never upgraded and never expected to be upgraded.
My kids form high expectations almost too easily. We had ice cream after lunch one Friday, and happened to have ice cream after lunch on the following Friday as well. When the kids didn’t get ice cream after lunch on the next Friday, they were unhappy and were asking us “what happened to Friday ice cream?”
Of course, I have described only one aspect of happiness. I am, for instance, ignoring things like clinical depression, which I would find harder to model and even harder to explain. I am also ignoring happiness that stems from achieving something. For instance, I would value the view from a mountain peak very differently depending on how I got to this peak. I believe I would get much more “out of” the view at the peak if it came as a reward at the end of a long and challenging hike rather than being the result of being dropped off by a helicopter. All I wanted to offer in this post was a formal model that can generate the fleeting nature of happiness, at least as I perceive it. But there is a lot more that could be said about the strange nature of human happiness.
-
Giving tenure to researchers on non-tenure track positions

At Austrian universities, many (young) researchers are employed on fixed-term contracts without a clearly specified path to tenure (a permanent position). Young researchers on such fixed-term contracts are rightly worried about their future and would, of course, love to get permanent contracts. Some time ago the Austrian minister for Education, Science, and Research publicly said that universities should consider giving tenure to a substantial number of researchers currently on fixed-term contracts. I don’t think that this is a good idea. To be more specific, I believe that there is a much better way of giving young researchers long-term career perspectives: The universities should offer more tenure-track positions, perhaps, as I have seen in the USA and my field of economics, even for researchers who have just finished their doctoral studies.
At first glance you might say that this is exactly what the minister said. Surely, there is not much difference between giving people fixed-term contracts and then, after all, giving some of them tenure, and giving them tenure-track positions from the start. But there is a huge difference. The difference can be explained with two notions from economics: adverse selection and moral hazard.
Let me first explain the adverse selection problem. Put yourself in the shoes of a promising young researcher (somewhere in the world) who has just finished their PhD and is now looking for a job. They are looking through the job adverts and find two categories of jobs: fixed-term positions (without any apparent possibility of being tenured) and tenure-track positions. Which would they prefer? Of course, there are other considerations, such as salary, the quality and quantity of the group of researchers at this place, the location, and so on. But I would conjecture that for many, ceteris paribus as economists like to say, tenure-track beats fixed-term by a large margin. Considering this problem from the point of view of the university, this means that by offering fixed-term positions when others offer tenure-track positions, the university will probably, on average, not receive the best applicants for these jobs. If the university then ultimately and surprisingly gives tenure to some of the fixed-term employed researchers, the university is likely not giving the job to the best people they could have found if they had offered tenure-track positions to begin with. This is the adverse selection problem.
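The adverse selection logic can be illustrated with a toy simulation. This is entirely my own construction with made-up numbers: if all candidates rank tenure-track offers above fixed-term ones (ceteris paribus), the university offering only fixed-term contracts recruits from whatever pool remains after the tenure-track slots elsewhere are filled.

```python
# Toy illustration of adverse selection in hiring: a hypothetical market with
# 100 candidates of varying research ability, some tenure-track slots at
# competing universities, and some fixed-term slots at one university.
import random

random.seed(0)
# Each candidate's research ability, sorted from best to worst.
candidates = sorted((random.random() for _ in range(100)), reverse=True)

TENURE_TRACK_SLOTS = 10  # offered by competing universities
FIXED_TERM_SLOTS = 10    # offered by the fixed-term university

# Every candidate prefers tenure-track, so the best candidates fill those
# slots first; the fixed-term university hires from the leftover pool.
tenure_track_hires = candidates[:TENURE_TRACK_SLOTS]
fixed_term_hires = candidates[TENURE_TRACK_SLOTS:TENURE_TRACK_SLOTS + FIXED_TERM_SLOTS]

def avg(xs):
    return sum(xs) / len(xs)

# Even if the fixed-term university later tenures its very best hire, that
# hire is still weaker than every tenure-track hire elsewhere.
assert max(fixed_term_hires) < min(tenure_track_hires)
assert avg(fixed_term_hires) < avg(tenure_track_hires)
```

The point of the sketch is not the specific numbers but the ordering: as long as candidates share the preference for tenure-track positions, selectively tenuring fixed-term hires afterwards cannot undo the initial sorting.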
In addition, there is also the moral hazard problem. Now put yourself in the shoes of a (young) researcher employed in a fixed-term position who is told there may be a chance to get tenure after all. You would ask yourself and your boss(es) what you should do to improve your chances of this. I suspect that, without clearly pre-specified criteria for getting tenure, it is down to this (young) researcher’s boss to lobby the higher university authorities for the (young) researcher to get tenure. Would your boss use the same (unstated) criteria that a (universally, or at least within the university) agreed and publicly communicated tenure-track contract would specify? Not necessarily. I would conjecture that some bosses would favor pushing those young researchers who help their bosses rather than those who do great independent research. As a young researcher, hoping that your boss will lobby for you to get tenure, what would you do if this boss asks you to jump in to teach their class tomorrow or to replace them at a meeting and to keep notes for them? Well, you would probably do it. A pre-specified catalog of achievements and obligations necessary for getting tenure will, however, likely not have such items on its list. I have a strong feeling that many (young) researchers in fixed-term positions who hope to be given tenure after all end up wasting valuable time on things that are irrelevant for them and for science as a whole.
In short, I have argued that giving tenure to researchers on non-tenure track positions entails an adverse selection problem and a moral hazard problem. The effect of this is that the university ultimately does not hire the best researchers they could have hired, and these researchers do lots of work that has little to do with excellent research per se.

