1.
Game theory
–
Game theory is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. It is used in economics, political science, and psychology, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations and is now an umbrella term for the science of logical decision making in humans and animals. Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility. This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields: the Nobel Memorial Prize in Economic Sciences went to game theorist Jean Tirole in 2014, and John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern mathematical game theory. The first known discussion of game theory occurred in a letter written in 1713 by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat. In this letter, Waldegrave provides a mixed strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation. 
In 1913, Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems; the Danish mathematician Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was later proved false. Game theory did not really exist as a unique field until John von Neumann published his paper on the topic in 1928.
2.
Chicken (game)
–
The game of chicken, also known as the hawk-dove game or snowdrift game, is a model of conflict for two players in game theory. From a game-theoretic point of view, chicken and hawk-dove are identical. The game has also been used to describe the mutual assured destruction of nuclear warfare, especially the sort of brinkmanship involved in the Cuban Missile Crisis. The game of chicken models two drivers, both headed for a single-lane bridge from opposite directions. The first to swerve away yields the bridge to the other; if neither player swerves, the result is a costly deadlock in the middle of the bridge, or a potentially fatal head-on collision. It is presumed that the best thing for each driver is to stay straight while the other swerves; additionally, a crash is presumed to be the worst outcome for both players. This yields a situation where each player, in attempting to secure his best outcome, risks the worst. The phrase "game of chicken" is also used as a metaphor for a situation where two parties engage in a showdown where they have nothing to gain, and only pride stops them from backing down. This is adapted from a sport which, in Bertrand Russell's telling, is practiced by some youthful degenerates. It is played by choosing a long road with a white line down the middle. Each car is expected to keep its wheels on one side of the white line; as the cars approach each other, mutual destruction becomes more and more imminent. If one of them swerves from the line before the other, the driver who has swerved becomes an object of contempt. As played by irresponsible boys, this game is considered decadent and immoral, though only the lives of the players are risked. Both sides are to blame for playing such a dangerous game. The game may be played without misfortune a few times, but the moment will come when neither side can face the derisive cry of "Chicken!". 
When that moment comes, the statesmen of both sides will plunge the world into destruction. Brinkmanship involves the introduction of an element of uncontrollable risk: even if all players act rationally in the face of risk, uncontrollable events can still trigger the catastrophic outcome. In the "chickie run" scene from the film Rebel Without a Cause, such a crash occurs because one driver cannot escape his car in time; the opposite scenario occurs in Footloose, where Ren McCormack is stuck in his tractor and hence "wins" the game because he cannot swerve. The basic game-theoretic formulation of chicken has no element of variable, potentially catastrophic risk. The hawk-dove version of the game imagines two players contesting an indivisible resource who can choose between two strategies, one more escalated than the other. They can use threat displays (Dove), or physically attack each other (Hawk); if both players choose the Hawk strategy, then they fight until one is injured and the other wins.
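The hawk-dove payoffs described above can be made concrete with a small sketch. The following is a minimal Python example, assuming illustrative numbers (resource value V = 2, injury cost C = 4, chosen so that injury outweighs the prize; the function name is mine), which finds the pure-strategy Nash equilibria by checking that neither player gains from a unilateral deviation:

```python
# Hawk-Dove with illustrative payoffs: resource value V = 2, injury cost C = 4.
# Payoffs are (row, column): Hawk vs Hawk gives each (V - C) / 2; Hawk vs Dove
# gives the Hawk the whole resource V and the Dove 0; Dove vs Dove splits it.
V, C = 2, 4
payoff = {
    ("Hawk", "Hawk"): ((V - C) / 2, (V - C) / 2),
    ("Hawk", "Dove"): (V, 0),
    ("Dove", "Hawk"): (0, V),
    ("Dove", "Dove"): (V / 2, V / 2),
}

def pure_nash_equilibria(payoff):
    """Profiles where neither player gains by unilaterally deviating."""
    strategies = ["Hawk", "Dove"]
    equilibria = []
    for r in strategies:
        for c in strategies:
            row_ok = all(payoff[(r, c)][0] >= payoff[(r2, c)][0] for r2 in strategies)
            col_ok = all(payoff[(r, c)][1] >= payoff[(r, c2)][1] for c2 in strategies)
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoff))  # the two asymmetric profiles
```

With C > V, the only pure equilibria are the two asymmetric profiles in which one player escalates while the other yields, which is what makes chicken a model of brinkmanship.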
3.
Prisoner's dilemma
–
It was originally framed by Merrill Flood and Melvin Dresher working at RAND in 1950. Albert W. Tucker formalized the game with prison sentence rewards and named it the "prisoner's dilemma", presenting it as follows. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge, and they hope to get both sentenced to a year in prison on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain: each prisoner is given the opportunity either to betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent. The interesting part of this result is that pursuing individual reward logically leads both of the prisoners to betray, when they would get a better reward if they both kept silent. In reality, humans display a systemic bias towards cooperative behavior in this and similar games, much more so than predicted by simple models of rational self-interested action. If the number of times the game will be played is known to the players, backward induction implies that rational players will defect in every round; in an infinite or unknown-length game there is no fixed optimum strategy, and prisoner's dilemma tournaments have been held to compete and test algorithms. The prisoner's dilemma game can be used as a model for many real world situations involving cooperative behaviour. In the canonical setup, the prisoners cannot communicate: they are separated in two individual rooms. Regardless of what the other decides, each gets a higher reward by betraying the other. The reasoning involves an argument by dilemma: B will either cooperate or defect. If B cooperates, A should defect, because going free is better than serving 1 year. If B defects, A should also defect, because serving 2 years is better than serving 3. So either way, A should defect. 
Parallel reasoning will show that B should defect. Because defection always results in a better payoff than cooperation regardless of the other player's choice, it is a dominant strategy. Mutual defection is the only strong Nash equilibrium in the game. The structure of the traditional prisoner's dilemma can be generalized from its original prisoner setting. Suppose that the two players are represented by the colors red and blue, and that each player chooses to either Cooperate or Defect. If both players cooperate, they both receive the reward R for cooperating. If both players defect, they both receive the punishment payoff P. If one defects while the other cooperates, the defector receives the temptation payoff T and the cooperator receives the sucker's payoff S. The donation game is a form of prisoner's dilemma in which cooperation corresponds to offering the other player a benefit b at a personal cost c, with b > c. The payoffs are thus R = b − c, T = b, S = −c, and P = 0. Note that 2R > T + S, which qualifies the donation game to be an iterated game. The donation game may be applied to markets.
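The argument by dilemma above can be checked mechanically. A short sketch, using the sentence lengths quoted in the text (go free, 1, 2, or 3 years served; the helper name is mine), confirms that Defect strictly dominates Cooperate for each player:

```python
# Years served (lower is better), using the sentences from the text:
# both silent -> 1 year each; both betray -> 2 years each;
# a lone betrayer goes free while the other serves 3 years.
years = {
    ("Cooperate", "Cooperate"): (1, 1),
    ("Cooperate", "Defect"):    (3, 0),
    ("Defect",    "Cooperate"): (0, 3),
    ("Defect",    "Defect"):    (2, 2),
}

def dominates(payoffs, better, worse):
    """True if 'better' yields strictly fewer years than 'worse'
    against every choice the other player can make."""
    return all(payoffs[(better, other)][0] < payoffs[(worse, other)][0]
               for other in ("Cooperate", "Defect"))

print(dominates(years, "Defect", "Cooperate"))  # True: defection is dominant
```

Since the game is symmetric, the same check applies to the column player, so mutual defection is the predicted outcome even though mutual cooperation would leave both better off.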
4.
Stag hunt
–
In game theory, the stag hunt is a game that describes a conflict between safety and social cooperation. Other names for it or its variants include the assurance game and the coordination game. Jean-Jacques Rousseau described a situation in which two individuals go out on a hunt. Each can individually choose to hunt a stag or hunt a hare, and each player must choose an action without knowing the choice of the other. If an individual hunts a stag, they must have the cooperation of their partner in order to succeed; an individual can get a hare by themselves, but a hare is worth less than a stag. This has been taken to be an analogy for social cooperation. The stag hunt differs from the prisoner's dilemma in that there are two pure strategy Nash equilibria: one where both players cooperate and one where both players defect. In the prisoner's dilemma, in contrast, despite the fact that both players cooperating is Pareto efficient, the only pure Nash equilibrium is when both players choose to defect. An example of the payoff matrix for the stag hunt is pictured in Figure 2. Formally, a stag hunt is a game with two pure strategy Nash equilibria, one that is risk dominant and another that is payoff dominant. The payoff matrix in Figure 1 illustrates a generic stag hunt. Often, games with a similar structure but without a risk dominant Nash equilibrium are called assurance games. For instance, if a=2, b=1, c=0, and d=1, then while (Stag, Stag) remains a Nash equilibrium, it is no longer risk dominant; nonetheless many would call this game a stag hunt. In addition to the pure strategy Nash equilibria there is one mixed strategy Nash equilibrium. This equilibrium depends on the payoffs, but the risk dominance condition places a bound on it: no payoffs can generate a mixed strategy equilibrium where Stag is played with a probability higher than one half. The best response correspondences are pictured here. There is a substantial relationship between the stag hunt and the prisoner's dilemma. 
In biology, many circumstances that have been described as a prisoner's dilemma might also be interpreted as a stag hunt. It is also the case that some human interactions that seem like prisoner's dilemmas may in fact be stag hunts. For example, suppose we have a prisoner's dilemma as pictured in Figure 3. The payoff matrix would need adjusting if players who defect against cooperators might be punished for their defection. For instance, if the expected punishment is −2, then the imposition of this punishment turns the above prisoner's dilemma into the stag hunt given in the introduction.
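The punishment transformation in the last paragraph can be verified numerically. The following is a sketch, not taken from the article's Figure 3: the prisoner's-dilemma payoffs R=2, T=3, S=0, P=1 are my own illustrative choice, picked so that subtracting a punishment of 2 from a defector facing a cooperator yields exactly the a=2, b=1, c=0, d=1 stag hunt mentioned earlier:

```python
# An illustrative prisoner's dilemma (higher is better):
# R = 2 (mutual cooperation), T = 3 (temptation), S = 0 (sucker), P = 1.
pd = {
    ("C", "C"): (2, 2),
    ("C", "D"): (0, 3),
    ("D", "C"): (3, 0),
    ("D", "D"): (1, 1),
}

def pure_equilibria(g):
    """Pure strategy profiles from which no unilateral deviation is profitable."""
    strats = ("C", "D")
    return [(r, c) for r in strats for c in strats
            if all(g[(r, c)][0] >= g[(r2, c)][0] for r2 in strats)
            and all(g[(r, c)][1] >= g[(r, c2)][1] for c2 in strats)]

# Punish a player who defects against a cooperator by 2, as in the text:
punished = dict(pd)
punished[("D", "C")] = (pd[("D", "C")][0] - 2, pd[("D", "C")][1])
punished[("C", "D")] = (pd[("C", "D")][0], pd[("C", "D")][1] - 2)

print(pure_equilibria(pd))        # only mutual defection
print(pure_equilibria(punished))  # mutual cooperation appears: a stag hunt
```

Before the punishment, defection dominates and (D, D) is the only pure equilibrium; after it, (C, C) becomes a second pure equilibrium, which is the defining structure of the stag hunt.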
5.
Normal-form game
–
In game theory, normal form is a description of a game. Unlike extensive form, normal-form representations are not graphical per se; while this approach can be of greater use in identifying strictly dominated strategies and Nash equilibria, some information is lost as compared to extensive-form representations. The normal-form representation of a game includes all perceptible and conceivable strategies. In static games of complete, perfect information, a normal-form representation of a game is a specification of players' strategy spaces and payoff functions. The matrix to the right is a representation of a game in which players move simultaneously. In each cell, the first number represents the payoff to the row player, and the second number represents the payoff to the column player; for example, if player 1 plays Top and player 2 plays Left, player 1 receives 4. Often, symmetric games are represented with only one payoff: the payoff for the row player. For example, the payoff matrices on the right and left below represent the same game. The payoff matrix facilitates elimination of dominated strategies, and it is often used to illustrate this concept. For example, in the prisoner's dilemma, we can see that each prisoner can either cooperate or defect. If exactly one prisoner defects, he gets off easily while the other prisoner is locked up for a long time. However, if they both defect, they will both be locked up for a shorter time. To determine that Cooperate is strictly dominated by Defect for the row player, one must compare the first numbers in each column, in this case 0 > −1 and −2 > −5. This shows that no matter what the column player chooses, the row player does better by choosing Defect. Similarly, one compares the second payoff in each row; again 0 > −1 and −2 > −5, which shows that no matter what row does, column does better by choosing Defect. This demonstrates that the unique Nash equilibrium of this game is (Defect, Defect). These matrices only represent games in which moves are simultaneous. 
The above matrix does not represent the game in which player 1 moves first and this move is observed by player 2. In order to represent this sequential game, we must specify all of player 2's actions, even in contingencies that can never arise in the course of the game. In this game, player 2 has the same actions as before, but unlike before he has four strategies, contingent on player 1's actions. Accordingly, to completely specify a game, the payoff function has to be specified for each player in the player set P. D. Fudenberg and J. Tirole, Game Theory
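The elimination argument above can be written out directly. This is a minimal sketch using the payoffs quoted in the text (0 > −1 and −2 > −5); the helper names are mine:

```python
from itertools import product

# Prisoner's dilemma payoffs from the text, as (row payoff, column payoff).
payoff = {
    ("Cooperate", "Cooperate"): (-1, -1),
    ("Cooperate", "Defect"):    (-5,  0),
    ("Defect",    "Cooperate"): ( 0, -5),
    ("Defect",    "Defect"):    (-2, -2),
}
strategies = ["Cooperate", "Defect"]

def undominated_rows():
    """Row strategies not strictly dominated: compare first payoffs column by column."""
    return [s for s in strategies
            if not any(all(payoff[(t, c)][0] > payoff[(s, c)][0] for c in strategies)
                       for t in strategies if t != s)]

def undominated_cols():
    """Column strategies not strictly dominated: compare second payoffs row by row."""
    return [s for s in strategies
            if not any(all(payoff[(r, t)][1] > payoff[(r, s)][1] for r in strategies)
                       for t in strategies if t != s)]

# Only (Defect, Defect) survives elimination of strictly dominated strategies.
print(list(product(undominated_rows(), undominated_cols())))
```

Because a strictly dominated strategy can never be a best response, the single surviving profile is the game's unique Nash equilibrium, matching the comparison of payoffs described in the text.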
6.
Nash equilibrium
–
The Nash equilibrium is one of the foundational concepts in game theory. The reality of the Nash equilibrium of a game can be tested using experimental economics methods. Game theorists use the Nash equilibrium concept to analyze the outcome of the strategic interaction of several decision makers. The simple insight underlying John Nash's idea is that one cannot predict the result of the choices of multiple decision makers if one analyzes those decisions in isolation; instead, one must ask what each player would do, taking into account the decision-making of the others. Nash equilibrium has been used to analyze hostile situations like war and arms races, and it has also been used to study to what extent people with different preferences can cooperate, and whether they will take risks to achieve a cooperative outcome. It has also been used to study the adoption of technical standards. The Nash equilibrium was named after John Forbes Nash, Jr. A version of the Nash equilibrium concept was first known to be used in 1838 by Antoine Augustin Cournot in his theory of oligopoly. In Cournot's theory, firms choose how much output to produce to maximize their own profit. However, the best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when each firm's output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium. Cournot also introduced the concept of best response dynamics in his analysis of the stability of equilibrium. However, Nash's definition of equilibrium is broader than Cournot's; it is also broader than the definition of a Pareto-efficient equilibrium. The modern game-theoretic concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible actions. 
The concept of the mixed-strategy Nash equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior; however, their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition of equilibrium. According to Nash, an equilibrium point is an n-tuple such that each player's mixed strategy maximizes his payoff if the strategies of the others are held fixed. Thus each player's strategy is optimal against those of the others. Since the development of the Nash equilibrium concept, game theorists have discovered that it makes misleading predictions in certain circumstances, and they have proposed many related solution concepts designed to overcome perceived flaws in the Nash concept. One particularly important issue is that some Nash equilibria may be based on threats that are not credible. In 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing his or her strategy. To see what this means, imagine that each player is told the strategies of the others.
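The informal definition in the last paragraph can be illustrated with matching pennies, a classic zero-sum game of the kind von Neumann and Morgenstern analyzed, which has no pure equilibrium. The following minimal sketch (using the standard ±1 payoff convention) checks that at the 50/50 mixed profile neither pure deviation improves the row player's expected payoff:

```python
# Matching pennies: the row player wins 1 if the coins match, loses 1 otherwise;
# the column player receives the negative. No pure strategy profile is stable,
# but randomizing 50/50 is a mixed-strategy Nash equilibrium.
A = [[1, -1],
     [-1, 1]]

def expected(row_mix, col_mix):
    """Row player's expected payoff under mixed strategies."""
    return sum(row_mix[i] * col_mix[j] * A[i][j]
               for i in range(2) for j in range(2))

# Against the opponent's 50/50 mix, every pure deviation earns the same
# expected payoff as the equilibrium mix itself, so no unilateral change helps.
mix = [0.5, 0.5]
print(expected([1, 0], mix), expected([0, 1], mix), expected(mix, mix))
```

This is exactly the fixed-point property in Nash's definition: each player's strategy is a best response when the strategies of the others are held fixed, even though here the "best response" is indifference over both actions.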