1.
Game theory
–
Game theory is the study of mathematical models of conflict and cooperation between intelligent, rational decision-makers. It is used in economics, political science, and psychology, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations and has become an umbrella term for the science of logical decision making in humans and animals. Modern game theory began with the idea of the existence of mixed-strategy equilibria in two-person zero-sum games. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility. The theory was developed extensively in the 1950s by many scholars, and it was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields: the 2014 Nobel Memorial Prize in Economic Sciences went to game theorist Jean Tirole, and John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of the modern mathematical theory. The first known discussion of game theory occurred in a letter written in 1713 by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat. In this letter, Waldegrave provides a mixed-strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation. 
In 1913 Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems; the Danish mathematician Zeuthen later proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was proved false. Game theory did not really exist as a field until John von Neumann published a paper in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.
2.
Nash equilibrium
–
The Nash equilibrium is one of the foundational concepts in game theory, and whether a game actually reaches a Nash equilibrium can be tested using experimental economics methods. Game theorists use the Nash equilibrium concept to analyze the outcome of the strategic interaction of several decision makers. The simple insight underlying John Nash's idea is that one cannot predict the result of the choices of multiple decision makers if one analyzes those decisions in isolation; instead, one must ask what each player would do, taking into account the decision-making of the others. Nash equilibrium has been used to analyze hostile situations like war and arms races, and it has also been used to study to what extent people with different preferences can cooperate, and whether they will take risks to achieve a cooperative outcome. It has also been used to study the adoption of technical standards. The Nash equilibrium was named after John Forbes Nash, Jr. A version of the concept was first used in 1838 by Antoine Augustin Cournot in his theory of oligopoly. In Cournot's theory, firms choose how much output to produce to maximize their own profit; however, the best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when each firm's output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium. Cournot also introduced the concept of best response dynamics in his analysis of the stability of equilibrium. However, Nash's definition of equilibrium is broader than Cournot's, and also broader than the definition of a Pareto-efficient equilibrium: the modern game-theoretic concept of Nash equilibrium is defined in terms of mixed strategies, where players choose a probability distribution over possible actions. 
The concept of the mixed-strategy Nash equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior; however, their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition of equilibrium. According to Nash, an equilibrium point is an n-tuple such that each player's mixed strategy maximizes his payoff if the strategies of the others are held fixed. Thus each player's strategy is optimal against those of the others. Since the development of the Nash equilibrium concept, game theorists have discovered that it makes misleading predictions in certain circumstances, and they have proposed many related solution concepts designed to overcome perceived flaws in the Nash concept. One particularly important issue is that some Nash equilibria may be based on threats that are not credible. In 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing his or her strategy. To see what this means, imagine that each player is told the strategies of the others.
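The unilateral-deviation test described above is easy to check mechanically. A minimal sketch, using an assumed prisoner's-dilemma payoff table (the numbers are illustrative, not from the text):

```python
# Hypothetical sketch: find the pure-strategy Nash equilibria of a bimatrix game.
# A profile is a Nash equilibrium if no player gains by deviating unilaterally.

def is_pure_nash(payoffs, profile):
    """payoffs[(r, c)] = (row player's payoff, column player's payoff)."""
    r, c = profile
    rows = {i for i, _ in payoffs}
    cols = {j for _, j in payoffs}
    # Row player: does any alternative row beat r against column c?
    if any(payoffs[(i, c)][0] > payoffs[(r, c)][0] for i in rows):
        return False
    # Column player: does any alternative column beat c against row r?
    if any(payoffs[(r, j)][1] > payoffs[(r, c)][1] for j in cols):
        return False
    return True

# Assumed prisoner's dilemma payoffs in years lost (C = cooperate, D = defect).
pd = {('C', 'C'): (-1, -1), ('C', 'D'): (-3, 0),
      ('D', 'C'): (0, -3), ('D', 'D'): (-2, -2)}

nash = [p for p in pd if is_pure_nash(pd, p)]  # only mutual defection survives
```

Running the check on this table returns mutual defection as the single pure-strategy equilibrium, matching the informal "no player can do better by unilaterally changing his or her strategy" definition.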
3.
Best response
–
In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. Reaction correspondences, also known as best response correspondences, are used in the proof of the existence of mixed strategy Nash equilibria: one constructs a correspondence b, for each player, from the set of opponent strategy profiles into the set of the player's strategies. So, for any set of opponents' strategies σ−i, b_i(σ−i) represents player i's best responses to σ−i. Response correspondences for all 2x2 normal form games can be drawn with a line for each player in a unit square strategy space; Figures 1 to 3 graph the best response correspondences for the stag hunt game. The dotted line in Figure 1 shows the probability that player Y plays Stag, and in Figure 2 the dotted line shows the probability that player X plays Stag. There are three distinctive reaction correspondence shapes, one for each of the three types of symmetric 2x2 games: coordination games, discoordination games, and games with dominated strategies; any payoff-symmetric 2x2 game will take one of these three forms. Games in which players score highest when both players choose the same strategy, such as the stag hunt and battle of the sexes, are called coordination games. Games such as the game of chicken and the hawk-dove game, in which players score highest when they choose opposite strategies, are called discoordination games; there the third Nash equilibrium is a mixed strategy which lies along the diagonal from the bottom left to the top right corner. If the players do not know which one of them is which, then the mixed Nash equilibrium is an evolutionarily stable strategy (ESS). Otherwise an uncorrelated asymmetry is said to exist, and the corner Nash equilibria are ESSes. Games with dominated strategies have reaction correspondences which cross at only one point, which will be in either the bottom left or the top right corner in payoff-symmetric 2x2 games. 
For instance, in the prisoner's dilemma, the Cooperate move is not optimal for any probability of opponent cooperation. Figure 5 shows the correspondence for such a game, where the dimensions are the probabilities of playing Cooperate. A wider range of reaction correspondence shapes is possible in 2x2 games with payoff asymmetries. For each player there are five possible best response shapes, shown in Figure 6; from left to right these are: dominated strategy, dominated strategy, rising, and falling. While there are only four possible types of payoff-symmetric 2x2 games, the five different best response curves per player allow for a larger number of payoff-asymmetric game types. Many of these are not truly different from each other; the dimensions may be redefined to produce symmetrical games which are logically identical. One well-known game with payoff asymmetries is the matching pennies game: player Y's reaction correspondence is that of a coordination game, while player X's is that of a discoordination game. The only Nash equilibrium is the combination of mixed strategies where both players independently choose heads and tails with probability 0.5 each.
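The stag hunt correspondences graphed in Figures 1 to 3 can be sketched numerically. A minimal example, assuming the usual stag hunt payoffs (3 for mutual Stag, 2 for Hare regardless of the opponent, 0 for Stag against Hare; these numbers are an assumption, not from the text):

```python
# Sketch of a best response in a symmetric 2x2 game, given the opponent's
# probability of playing the first strategy ('Stag' in the stag hunt).

def best_response(payoff, p_opp_stag):
    """Return the move maximizing expected payoff against a mixing opponent."""
    eu_stag = p_opp_stag * payoff[('S', 'S')] + (1 - p_opp_stag) * payoff[('S', 'H')]
    eu_hare = p_opp_stag * payoff[('H', 'S')] + (1 - p_opp_stag) * payoff[('H', 'H')]
    if eu_stag > eu_hare:
        return 'Stag'
    if eu_hare > eu_stag:
        return 'Hare'
    return 'indifferent'   # the crossing point of the reaction correspondences

# Assumed stag hunt payoffs: payoff[(my move, opponent's move)].
stag_hunt = {('S', 'S'): 3, ('S', 'H'): 0, ('H', 'S'): 2, ('H', 'H'): 2}
```

With these payoffs, Stag is a best response only when the opponent plays Stag with probability above 2/3, which is exactly the kind of threshold the dotted lines in the figures trace out.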
4.
Simultaneous game
–
In game theory, a simultaneous game is a game where each player chooses his action without knowledge of the actions chosen by the other players. Normal form representations are used for simultaneous games. Rock-Paper-Scissors, a widely played game, is a real-life example of a simultaneous game: both players make a decision at the same time, randomly, without prior knowledge of the opponent's decision. There are two players in the game and each of them has three different strategies to choose from. Displaying Player 1's strategies as rows and Player 2's strategies as columns, the numbers in red in the table represent the payoff to Player 1, and the numbers in blue represent the payoff to Player 2. In game theory terms, the prisoner's dilemma is another example of a simultaneous game.
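The normal-form table described above can be built programmatically. A sketch, assuming the conventional payoffs of +1 for a win, −1 for a loss, and 0 for a tie (an assumption, not stated in the text):

```python
# Build the normal-form payoff matrix for two-player Rock-Paper-Scissors.
MOVES = ('Rock', 'Paper', 'Scissors')
BEATS = {'Rock': 'Scissors', 'Paper': 'Rock', 'Scissors': 'Paper'}

def payoff(m1, m2):
    """Return (Player 1's payoff, Player 2's payoff) for one simultaneous round."""
    if m1 == m2:
        return (0, 0)                         # tie
    return (1, -1) if BEATS[m1] == m2 else (-1, 1)

# Rows are Player 1's strategies, columns are Player 2's, as in the text.
matrix = {(m1, m2): payoff(m1, m2) for m1 in MOVES for m2 in MOVES}
```

Every cell sums to zero, which is why Rock-Paper-Scissors is also a standard example of a zero-sum game.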
5.
Perfect information
–
In economics, perfect information is a feature of perfect competition. Perfect information is importantly different from complete information, which implies common knowledge of each player's utility functions and payoffs. Chess is an example of a game with perfect information, as each player can see all of the pieces on the board at all times. Other examples of games with perfect information include tic-tac-toe, Irensei, and Go. Card games where each player's cards are hidden from the other players, as in contract bridge, are examples of games with imperfect information.
6.
Centipede game
–
The payoffs are arranged so that if one passes the pot to one's opponent and the opponent takes the pot on the next round, one receives slightly less than if one had taken the pot on this round. Although the traditional centipede game had a limit of 100 rounds, any game with this structure is called a centipede game, and its results are taken to show that subgame perfect equilibria and Nash equilibria fail to predict human play in some circumstances. One possible version of the game could be played as follows. At the start of the game, Alice has two piles of coins in front of her: one pile contains 4 coins and the other contains 1 coin. Each player has two moves available: either take the larger pile of coins and give the smaller pile to the other player, or push both piles across the table to the other player. Each time the piles of coins pass across the table, the quantity of coins in each pile doubles. For example, assume that Alice chooses to push the piles on her first move, handing the piles of 1 and 4 coins over to Bob and doubling them to 2 and 8. The game continues for a fixed number of rounds or until a player decides to end the game by pocketing a pile of coins. The addition of coins is taken to be an externality, as it is not contributed by either player. Standard game-theoretic tools predict that the first player will defect on the first round, taking the pile of coins for himself. In the centipede game, a pure strategy consists of a set of actions, one for each choice point. There are several pure-strategy Nash equilibria of the centipede game and infinitely many mixed-strategy Nash equilibria. However, there is only one subgame perfect equilibrium: in it, each player chooses to defect at every opportunity. This, of course, means defection at the first stage. In the Nash equilibria, however, the actions that would be taken after the initial choice opportunities may be cooperative. Defection by the first player is the subgame perfect equilibrium and is required by any Nash equilibrium. 
Suppose two players reach the final round of the game; the second player will do better by defecting and taking a slightly larger share of the pot. This reasoning proceeds backwards through the game tree until one concludes that the best action is for the first player to defect in the first round. The same reasoning can apply to any node in the game tree. For a game that ends after four rounds, the argument runs as follows. If we were to reach the last round of the game, Player 2 would do better by choosing d instead of r. Given that Player 2 will choose d, Player 1 should choose D in the second-to-last round, receiving 3 instead of 2. Given that Player 1 would choose D in the second-to-last round, Player 2 should choose d in the third-to-last round. But given this, Player 1 should choose D in the first round, receiving 1 instead of 0.
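The backward-induction argument above can be sketched for the coin-pile version described earlier (piles of 4 and 1 coins that double on each push; the number of rounds is a parameter of the sketch):

```python
# Backward induction on the coin-pile centipede: at each node the mover either
# takes the larger pile or pushes both piles across, doubling them.

def solve(piles, mover, rounds_left):
    """Return (payoff to Alice, payoff to Bob, mover's action) by backward induction."""
    big, small = max(piles), min(piles)
    take = (big, small) if mover == 'Alice' else (small, big)
    if rounds_left == 1:
        return (*take, 'take')               # the last mover must take
    nxt = 'Bob' if mover == 'Alice' else 'Alice'
    pushed = solve((piles[0] * 2, piles[1] * 2), nxt, rounds_left - 1)
    own = 0 if mover == 'Alice' else 1
    # Push only if the continuation strictly beats taking immediately.
    if pushed[own] > take[own]:
        return (pushed[0], pushed[1], 'push')
    return (*take, 'take')
```

For the piles (4, 1), taking now always beats the continuation, so the induction predicts that Alice defects immediately, exactly the "defect on the first round" prediction in the text, and the prediction does not change as the horizon grows.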
7.
Chicken (game)
–
The game of chicken, also known as the hawk-dove game or snowdrift game, is a model of conflict for two players in game theory. From a game-theoretic point of view, chicken and hawk-dove are identical. The game has also been used to describe the mutual assured destruction of nuclear warfare, especially the sort of brinkmanship involved in the Cuban Missile Crisis. The game of chicken models two drivers, both headed for a single-lane bridge from opposite directions. The first to swerve away yields the bridge to the other; if neither player swerves, the result is a costly deadlock in the middle of the bridge, or a potentially fatal head-on collision. It is presumed that the best thing for each driver is to stay straight while the other swerves; additionally, a crash is presumed to be the worst outcome for both players. This yields a situation where each player, in attempting to secure his best outcome, risks the worst. The phrase "game of chicken" is also used as a metaphor for a situation where two parties engage in a showdown where they have nothing to gain, and only pride stops them from backing down. Bertrand Russell described the driving version: it is adapted from a sport which, he was told, is practiced by some youthful degenerates. It is played by choosing a long road with a white line down the middle, and each car is expected to keep its wheels on one side of the white line; as the cars approach each other, mutual destruction becomes more and more imminent. If one of them swerves from the line before the other, the one who has swerved becomes an object of contempt. As played by irresponsible boys, this game is considered decadent and immoral, though only the lives of the players are risked, and both are to blame for playing such a dangerous game. The game may be played without misfortune a few times, but the moment will come when neither side can face the derisive cry of "Chicken!". 
When that moment comes, the statesmen of both sides will plunge the world into destruction. Brinkmanship involves the introduction of an element of uncontrollable risk: even if all players act rationally in the face of risk, uncontrollable events can still trigger the catastrophic outcome. The "chickie run" scene from the film Rebel Without a Cause illustrates this kind of risk; the opposite scenario occurs in Footloose, where Ren McCormack is stuck in his tractor and hence wins the game, as he cannot play chicken. The basic game-theoretic formulation of chicken has no element of variable, potentially catastrophic risk. The hawk-dove version of the game imagines two players contesting an indivisible resource who can choose between two strategies, one more escalated than the other: they can use threat displays, or physically attack each other. If both players choose the Hawk strategy, they fight until one is injured and the other wins.
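The hawk-dove payoffs can be written down and the mixed equilibrium checked numerically. The resource value V and fight cost C below are illustrative assumptions, chosen with C > V so that neither pure strategy is stable:

```python
# Standard hawk-dove payoff structure: resource value V, fight cost C.
V, C = 2.0, 4.0
payoff = {('H', 'H'): (V - C) / 2,   # fight: expected half share minus injury cost
          ('H', 'D'): V,             # hawk takes the resource from a dove
          ('D', 'H'): 0.0,           # dove retreats, gets nothing
          ('D', 'D'): V / 2}         # two doves share the resource

# With C > V the equilibrium mixes: playing Hawk with probability p = V/C
# leaves the opponent exactly indifferent between Hawk and Dove.
p = V / C
eu_hawk = p * payoff[('H', 'H')] + (1 - p) * payoff[('H', 'D')]
eu_dove = p * payoff[('D', 'H')] + (1 - p) * payoff[('D', 'D')]
```

With V = 2 and C = 4 the equilibrium plays Hawk half the time, and both strategies earn the same expected payoff against it, which is what makes the mix an equilibrium.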
8.
Coordination game
–
In game theory, coordination games are a class of games with multiple pure-strategy Nash equilibria in which players choose the same or corresponding strategies. If a game is a coordination game, then the following inequalities in payoffs hold for player 1: A > B and D > C, and analogous inequalities hold for player 2. In such a game the strategy profiles in which the players match are pure Nash equilibria, and this setup can be extended to more than two strategies, as well as to games with more than two players. A typical case of a coordination game is choosing which side of the road to drive on. In a simplified example, assume that two drivers meet on a narrow dirt road. Both have to swerve in order to avoid a head-on collision: if both execute the same swerving maneuver they will manage to pass each other, but if they choose different maneuvers they will collide. In the payoff matrix in Fig. 2, successful passing is represented by a payoff of 10. In this case there are two pure Nash equilibria: either both swerve to the left, or both swerve to the right. In this example it doesn't matter which side both players pick, as long as they both pick the same; this is not true for all coordination games, as the pure coordination game in Fig. 3 shows. Pure coordination is the game where the players prefer the same Nash equilibrium outcome: here both players prefer partying together over both staying at home to watch TV. The joint-party outcome Pareto dominates the joint-stay-home outcome, just as both Pareto dominate the two mismatched outcomes. This differs from another type of coordination game commonly called battle of the sexes. In that game both players prefer engaging in the same activity over going alone, but their preferences differ over which activity they should engage in: player 1 prefers that they both party while player 2 prefers that they both stay at home. Finally, the stag hunt game in Fig. 5 shows a situation in which both players can benefit if they cooperate. However, cooperation might fail, because each hunter has an alternative which is safer, since it does not require cooperation to succeed. 
This example of the conflict between safety and social cooperation is originally due to Jean-Jacques Rousseau. Coordination games also have mixed-strategy Nash equilibria: with generic payoffs a, b, c, d for the four outcomes, a player plays the first strategy with probability p = (d − b)/(a + d − b − c). Since d > b and d − b < a + d − b − c, p is always between zero and one, so existence is assured.
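The existence claim above can be verified numerically. A sketch using the generic payoffs a, b, c, d and, as a concrete instance, the driving-game numbers from Fig. 2 (successful passing pays 10, a collision pays 0):

```python
# Mixed-strategy equilibrium of a symmetric 2x2 coordination game with
# payoffs a (both first strategy), b (first vs second), c (second vs first),
# d (both second), assuming a > c and d > b as in a coordination game.

def mixed_equilibrium(a, b, c, d):
    """Probability p of the first strategy that leaves the opponent indifferent."""
    p = (d - b) / (a + d - b - c)
    assert 0 < p < 1                 # guaranteed by the coordination inequalities
    return p

# Driving example: both-left and both-right pay 10, mismatches pay 0.
p = mixed_equilibrium(10, 0, 0, 10)  # 0.5 by symmetry

# At p, both pure strategies earn the same expected payoff:
eu_left = p * 10 + (1 - p) * 0
eu_right = p * 0 + (1 - p) * 10
```

The indifference of both pure strategies at p is exactly what makes the mixed profile an equilibrium alongside the two pure ones.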
9.
Cournot game
–
The Cournot game is named after Antoine Augustin Cournot, who was inspired by observing competition in a spring water duopoly. All firms know N, the number of firms in the market, and each firm has a cost function c_i; normally the cost functions are treated as common knowledge, and they may be the same or different among firms. The market price is set at a level such that demand equals the total quantity produced by all firms, and each firm takes the quantity set by its competitors as given and evaluates its residual demand. The model was one of a number that Cournot set out explicitly and with mathematical precision in the volume. He then showed that a stable equilibrium occurs where the firms' reaction functions intersect. The consequence of this is that in equilibrium, each firm's expectations of how other firms will act are shown to be correct; when all is revealed, no firm wants to change its output decision. This idea of stability was later taken up and built upon as a description of Nash equilibria. This section presents an analysis of the model with two firms and constant marginal cost. What is firm 1's optimal quantity? If firm 1 decides not to produce anything, then the price is given by P = P(q2); if firm 1 produces q1′, then the price is given by P = P(q1′ + q2). More generally, for each quantity that firm 1 might decide to set, the price is given by the curve d1, called firm 1's residual demand, which gives the price for all possible quantities firm 1 might choose. To determine firm 1's optimum output, we must find where marginal revenue equals marginal cost; marginal cost is assumed to be constant, and marginal revenue is a curve, r1, with twice the slope of d1. The point at which the two curves intersect corresponds to the quantity q1″. Firm 1's optimum q1″ therefore depends on what it believes firm 2 is doing. To find an equilibrium, we derive firm 1's optimum for other possible values of q2. 
Diagram 2 considers two possible values of q2. If q2 = 0, then the first firm's residual demand is effectively the market demand, d1 = D, and the optimal solution is for firm 1 to choose the monopoly quantity, q1″ = qm. If firm 2 were instead to choose the quantity corresponding to perfect competition, q2 = qc such that P = c, then firm 1's optimum would be to produce nothing: this is the point at which marginal cost intercepts the marginal revenue corresponding to d1. It can be shown that, given linear demand and constant marginal cost, the function q1″ is also linear. Because we have two points, we can draw the entire function q1″; see Diagram 3.
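The reaction-function reasoning above can be sketched numerically under an assumed linear inverse demand P = A − (q1 + q2) and constant marginal cost c (both values illustrative):

```python
# Cournot duopoly sketch: linear inverse demand P = A - (q1 + q2),
# constant marginal cost c for both firms.
A, c = 100.0, 10.0

def best_response(q_other):
    """Profit-maximizing quantity against the rival's output (from MR = MC)."""
    return max(0.0, (A - c - q_other) / 2)

# Best-response dynamics: start anywhere and let each firm react in turn.
q1 = q2 = 0.0
for _ in range(100):
    q1 = best_response(q2)
    q2 = best_response(q1)
# The iteration converges to the Cournot equilibrium q* = (A - c) / 3 per firm.
```

The two anchor points from Diagram 2 fall out directly: against q2 = 0 the best response is the monopoly quantity (A − c)/2 = 45, and against the competitive quantity qc = A − c = 90 (where P = c) it is zero. Iterating the responses converges to the stable intersection, illustrating Cournot's stability argument.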
10.
El Farol Bar problem
–
The El Farol bar problem is a problem in game theory. The problem is as follows: there is a particular, finite population of people, and every Thursday night all of them want to go to the El Farol Bar. However, the El Farol is quite small, and it's no fun to go if it's too crowded. So much so, in fact, that the preferences of the population can be described as follows: if less than 60% of the population go to the bar, they'll all have a better time than if they stayed at home; if more than 60% go, they'll all have a worse time than if they stayed at home. Unfortunately, it is necessary for everyone to decide at the same time whether they will go to the bar or not; they cannot wait and see how many others go on a particular Thursday before deciding to go themselves on that Thursday. One troublesome aspect of the problem is that, no matter what method each person uses to decide whether to go to the bar, if everyone uses the same deterministic method it is guaranteed to fail. Often the solution to such problems in game theory is to permit each player to use a mixed strategy; there are also multiple Nash equilibria where one or more players use a pure strategy. Several variants are considered in Game Theory Evolving by Herbert Gintis. In some variants of the problem, the people are allowed to communicate with each other before deciding to go to the bar; however, they are not required to tell the truth. Based on a bar in Santa Fe, New Mexico, the problem was created in 1994 by W. Brian Arthur; it had been formulated and solved dynamically six years earlier by B. A. Huberman. One variant of the El Farol Bar problem is the minority game proposed by Yi-Cheng Zhang and Damien Challet from the University of Fribourg. In the minority game, an odd number of players must each choose one of two choices independently at each turn, and the players who end up on the minority side win. The minority game was featured in the manga Liar Game: in that multi-stage minority game, the majority was eliminated from the game until only one player was left. 
There, players were shown engaging in cooperative strategies. Another variant of the El Farol Bar problem is the Kolkata Paise Restaurant Problem, in which both the number of choices and the number of players are large, typically n = N. The game is repeated, and information regarding the history of choices made by different players for different restaurants is available to everyone; when a single restaurant is chosen on any evening by more than one player, one of them is randomly selected and served food while the others lose.
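A single round of the minority game described above can be simulated directly; the player count and random seed below are arbitrary choices for the sketch:

```python
import random

# One round of the minority game: an odd number of players each pick side 0
# or 1, and the side chosen by fewer players wins.
def play_round(choices):
    ones = sum(choices)
    minority = 1 if ones < len(choices) - ones else 0
    return [i for i, ch in enumerate(choices) if ch == minority]

random.seed(0)                      # arbitrary seed for reproducibility
players = 101                       # odd, so there is always a strict minority
choices = [random.randint(0, 1) for _ in range(players)]
winners = play_round(choices)
```

By construction fewer than half the players can ever win a round, which is what makes repeated play (and the use of adaptive strategies based on the game's history) interesting.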
11.
Game without a value
–
In game theory, and in particular the study of zero-sum continuous games, it is commonly assumed that a game has a minimax value: the value to one of the players when both play a perfect strategy. This article gives an example of a zero-sum game that has no value; it is due to Sion and Wolfe. Zero-sum games with a finite number of pure strategies are known to have a minimax value, but there follows an example of a game with no minimax value. The existence of such games is interesting because many of the results of game theory become inapplicable if there is no minimax value. Sometimes player I is referred to as the maximizing player and player II the minimizing player. If (x, y) is interpreted as a point on the unit square, the figure shows the payoff to player I. Now suppose that player I adopts a mixed strategy, choosing a number from a probability density function f. Player I seeks to maximize the payoff, player II to minimize it; note that each player is aware of the other's objective. Sion and Wolfe show that sup_f inf_g ∬ K df dg = 1/3 while inf_g sup_f ∬ K df dg = 3/7; these are the maximal and minimal expectations of the game's value for players I and II respectively. The sup and inf are taken over probability density functions on the unit interval, which represent player I's and player II's strategies. Thus, player I can assure himself of a payoff of at least 3/7 if he knows player II's strategy. There is clearly no epsilon equilibrium for sufficiently small ε, specifically if ε < 1/21 ≈ 0.0476. Dasgupta and Maskin assert that these values are achieved if player I puts probability weight only on a certain set. Glicksberg's theorem shows that any zero-sum game with an upper or lower semicontinuous payoff function has a value. Observe that the payoff function of Sion and Wolfe's example is clearly not semicontinuous; however, it may be made so by changing the value of K at its discontinuities to either +1 or −1, and if this is done, the game then has a value. 
Subsequent work by Heuer discusses a class of games in which the square is divided into three regions, the payoff function being constant in each of the regions
12.
Matching pennies
–
Matching pennies is the name for a simple game used in game theory. It is played between two players, Even and Odd: each player has a penny and must secretly turn the penny to heads or tails, and the players then reveal their choices simultaneously. If the pennies match, then Even keeps both pennies, so wins one from Odd; if the pennies do not match, Odd keeps both pennies, so wins one from Even. Matching pennies is a zero-sum game, since one player's gain is exactly equal to the other player's loss. The game can be written in a payoff matrix, where each cell shows the two players' payoffs with Even's payoff listed first. Matching pennies is used primarily to illustrate the concept of mixed strategies. This game has no pure-strategy Nash equilibrium, since there is no pure strategy that is a best response to a best response; in other words, there is no pair of pure strategies such that neither player would want to switch if told what the other would do. Instead, the unique Nash equilibrium of this game is in mixed strategies: each player makes the other indifferent between choosing heads or tails, so neither player has an incentive to try another strategy. The best-response functions for mixed strategies are depicted in Figure 1. Varying the payoffs in the matrix can change the equilibrium point. For example, in the table shown on the right, Even has a chance to win 7 if both he and Odd play Heads. To calculate the equilibrium point in this game, note that a player playing a mixed strategy must be indifferent between his two actions. Let x be the Heads-probability of Odd and y the Heads-probability of Even. Since Odd wins on a mismatch, Odd's expected payoff when playing Heads is −1·y + 1·(1 − y) and when playing Tails +1·y − 1·(1 − y). Working through the indifference conditions shows that the change in Even's payoff affects Odd's strategy and not his own. Human players do not always play the equilibrium strategy. 
Laboratory experiments reveal several factors that make players deviate from the equilibrium strategy, especially when matching pennies is played repeatedly. Players may try to produce random sequences by switching their actions from Heads to Tails and vice versa, which makes it possible for expert players to predict their next actions with more than 50% chance of success; in this way, a positive expected payoff might be attainable. Humans are trained to detect patterns, and they try to detect patterns in the opponent's sequence even when such patterns do not exist. Human behavior is also affected by framing effects: the fact that Even wins when there is a match gives him an advantage, since people are better at matching than at mismatching.
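The indifference calculation above can be carried out explicitly for the variant in which Even's Heads/Heads payoff is raised to +7 while Odd's payoffs stay at ±1 (this reading of the table is an assumption):

```python
# Mixed equilibrium of the matching-pennies variant via indifference conditions.
# Payoffs are keyed by (Even's move, Odd's move).
even = {('H', 'H'): 7, ('H', 'T'): -1, ('T', 'H'): -1, ('T', 'T'): 1}
odd = {('H', 'H'): -1, ('H', 'T'): 1, ('T', 'H'): 1, ('T', 'T'): -1}

# Odd's Heads-probability x is pinned down by EVEN's indifference between H and T:
x = (even[('T', 'T')] - even[('H', 'T')]) / (
    even[('H', 'H')] - even[('H', 'T')] - even[('T', 'H')] + even[('T', 'T')])

# Even's Heads-probability y is pinned down by ODD's indifference between H and T:
y = (odd[('T', 'T')] - odd[('T', 'H')]) / (
    odd[('H', 'H')] - odd[('T', 'H')] - odd[('H', 'T')] + odd[('T', 'T')])
```

Under these assumed payoffs, Odd's equilibrium mix shifts to x = 0.2 while Even's stays at y = 0.5: raising Even's payoff changes his opponent's strategy, not his own, exactly as stated above.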
13.
Platonia dilemma
–
In the Platonia dilemma, described in Douglas Hofstadter's Metamagical Themas, an eccentric trillionaire contacts 20 people and offers a billion dollars to whichever one of them sends him a telegram, provided exactly one does so. If he receives more than one telegram, or none at all, no one will get any money, and cooperation between players is forbidden. In this situation, the superrational thing to do is to send a telegram with probability 1/20. A similar game, referred to as a "Luring Lottery", was actually played by the editors of Scientific American in the 1980s. To enter the contest once, readers had to send in a postcard with the number 1 written on it; they were also explicitly permitted to submit as many entries as they wished by sending in a single postcard bearing the number of entries they wished to submit. The prize was one million dollars divided by the total number of entries received, so a reader who submitted a large number of entries increased his or her chances of winning but reduced the maximum possible value of the prize. It can be shown mathematically that one maximizes one's expected winnings in this game by submitting a number of entries equal to the total number of entries of all others. Of course, if others take this into account, then this strategy translates into a runaway reaction leading to an unbounded number of entries being submitted. According to the magazine, the superrational thing was for each contestant to roll a simulated die with the number of sides equal to the number of expected responders. Reputedly, the publisher and owners were concerned about betting the company on a game. Some contestants took this further by filling their postcards with mathematical expressions designed to evaluate to the largest possible number in the limited space allowed. The magazine was unable to tell who won, and the value of the prize would have been a minuscule fraction of a cent in any case.
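The 1/20 claim can be checked numerically: with N players who each send a telegram independently with probability p, the group succeeds only if exactly one telegram is sent, and the success probability N·p·(1 − p)^(N−1) peaks at p = 1/N:

```python
# Probability that exactly one of N independent players sends a telegram,
# each sending with probability p (the binomial probability of one success).
N = 20

def p_exactly_one(p):
    return N * p * (1 - p) ** (N - 1)

# Scan a grid of probabilities; the maximum lands at p = 1/N = 0.05.
best = max(range(1, 100), key=lambda k: p_exactly_one(k / 100))
```

At p = 1/20 the group's success chance is about 0.38; far from certain, but the best any symmetric uncoordinated strategy can do, which is the point of the dilemma.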
14.
Prisoner's dilemma
–
The prisoner's dilemma was originally framed by Merrill Flood and Melvin Dresher working at RAND in 1950. Albert W. Tucker formalized the game with prison sentence payoffs and named it the "prisoner's dilemma", presenting it as follows. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge, but they hope to get both sentenced to a year in prison on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain: each prisoner is given the opportunity either to betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent. The interesting part of this result is that pursuing individual reward logically leads both prisoners to betray, when they would get a better reward if they both kept silent. In reality, humans display a systematic bias towards cooperative behavior in this and similar games, much more so than predicted by simple models of rational self-interested action. If the number of times the game will be played is known to the players, backward induction implies that two classically rational players will betray each other repeatedly; in an infinite or unknown-length game there is no fixed optimum strategy, and Prisoner's Dilemma tournaments have been held to compete and test algorithms. The prisoner's dilemma game can be used as a model for many real-world situations involving cooperative behavior. The two prisoners cannot communicate, as they are separated in two individual rooms. Regardless of what the other decides, each gets a higher reward by betraying the other. The reasoning involves an argument by dilemma: B will either cooperate or defect. If B cooperates, A should defect, because going free is better than serving 1 year. If B defects, A should also defect, because serving 2 years is better than serving 3. So either way, A should defect. 
Parallel reasoning shows that B should defect. Because defection always results in a better payoff than cooperation regardless of the other player's choice, it is a dominant strategy, and mutual defection is the only strong Nash equilibrium in the game. The structure of the traditional prisoner's dilemma can be generalized from its original prisoner setting. Suppose that the two players are represented by the colors red and blue, and that each player chooses either to Cooperate or to Defect. If both players cooperate, they each receive the reward R for cooperating; if both players defect, they each receive the punishment payoff P; if one defects while the other cooperates, the defector receives the temptation payoff T and the cooperator the sucker's payoff S. The donation game is a form of prisoner's dilemma in which cooperation corresponds to offering the other player a benefit b at a personal cost c, with b > c, giving the payoffs R = b − c, T = b, S = −c, and P = 0. Note that 2R > T + S (that is, 2(b − c) > b − c), the condition that qualifies the donation game for iterated play; the donation game may also be applied to markets.
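The dominance argument can be verified mechanically using the prison sentences from the story (go free, 1, 2, or 3 years, written as negative payoffs). A minimal sketch:

```python
C, D = "cooperate", "defect"
# (row payoff, column payoff); sentences in years, as negative payoffs
payoff = {(C, C): (-1, -1), (C, D): (-3, 0),
          (D, C): (0, -3), (D, D): (-2, -2)}

def best_response(opponent):
    # Row player's best reply to the opponent's fixed choice.
    return max((C, D), key=lambda a: payoff[(a, opponent)][0])

# Defection dominates: it is the best reply to either choice.
assert best_response(C) == D and best_response(D) == D

# Because the game is symmetric, the column player's best reply to the
# row playing a is also best_response(a); a profile is a pure Nash
# equilibrium iff each side plays a best reply to the other.
nash = [(a, b) for a in (C, D) for b in (C, D)
        if a == best_response(b) and b == best_response(a)]
assert nash == [(D, D)]  # mutual defection is the unique equilibrium
```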
15.
Signaling game
–
In game theory, a signaling game is a simple type of dynamic Bayesian game. It is a game with two players, called the sender and the receiver. The sender can have one of several types; the sender's type, t, determines the sender's payoff function. The type is the private information of the sender: it is not known to the receiver. The receiver has only one type, so his payoff function is known to both players. The game has two steps. The sender plays in the first step; he can play one of several actions, called messages, drawn from the set of messages M = {m_1, …, m_J}. The receiver plays in the second step, after viewing the sender's message; his action is drawn from the set of actions A = {a_1, …, a_K}. The two players receive payoffs dependent on the sender's type, the message chosen by the sender, and the action chosen by the receiver. The equilibrium concept that is relevant for signaling games is perfect Bayesian equilibrium, a refinement of both Bayesian Nash equilibrium and subgame-perfect equilibrium. A sender of type t_j sends a message m*(t_j) in the set of probability distributions over M; the receiver, observing the message m, takes an action a*(m) in the space of probability distributions over A. A game is in perfect Bayesian equilibrium if it meets all four of the following requirements. First, the receiver must have a belief about which types could have sent message m; these beliefs can be described as a probability distribution μ(t_i | m), the probability that the sender has type t_i given that he chose message m, and the sum over all types t_i of these probabilities has to be 1 conditional on any message m. Second, the action the receiver chooses must maximize his expected utility given his beliefs about which type could have sent message m; that is, the sum ∑_{t_i} μ(t_i | m) U_R(t_i, m, a) is maximized, and the action a that maximizes this sum is a*(m). Third, for each type t, the sender chooses to send the message m*(t) that maximizes the sender's utility U_S given the strategy chosen by the receiver, a*. Fourth, on the equilibrium path, the receiver's beliefs must be derived from the sender's strategy via Bayes' rule. In a pooling equilibrium, senders of every type choose the same message; this means that the message does not give any information to the receiver.
A separating equilibrium is an equilibrium in which senders with different types always choose different messages. This means that the sender's message always reveals the sender's type, so the receiver's beliefs become deterministic after seeing the message.
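The separating case can be illustrated with a toy Spence-style job-market example. All the numbers here are illustrative assumptions, not from the article: a high-productivity type for whom a costly "educate" message is cheap, a low type for whom it is expensive, and a receiver who pays a wage equal to the productivity inferred from the message.

```python
# Toy separating equilibrium in a signaling game (illustrative numbers).
prod = {"high": 2.0, "low": 1.0}   # productivity by sender type
cost = {"high": 0.4, "low": 1.2}   # cost of sending "educate", by type

# Candidate separating profile: high educates, low does not. The
# receiver's beliefs after each message are then deterministic, so the
# wage offered equals the inferred type's productivity.
wage = {"educate": prod["high"], "none": prod["low"]}

def payoff(t, message):
    return wage[message] - (cost[t] if message == "educate" else 0.0)

# Separation requires each type to strictly prefer its own message.
assert payoff("high", "educate") > payoff("high", "none")
assert payoff("low", "none") > payoff("low", "educate")
```

Because the low type's signaling cost exceeds the wage gain while the high type's does not, each type reveals itself, which is exactly the deterministic-beliefs property described above.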
16.
Stag hunt
–
In game theory, the stag hunt is a game that describes a conflict between safety and social cooperation. Other names for it or its variants include the assurance game and the coordination game. Jean-Jacques Rousseau described a situation in which two individuals go out on a hunt. Each can individually choose to hunt a stag or hunt a hare, and each player must choose an action without knowing the choice of the other. If an individual hunts a stag, they must have the cooperation of their partner in order to succeed; an individual can get a hare by themselves, but a hare is worth less than a stag. This has been taken to be an analogy for social cooperation. The stag hunt differs from the prisoner's dilemma in that there are two pure-strategy Nash equilibria: one in which both players cooperate and one in which both players defect. In the prisoner's dilemma, in contrast, despite the fact that both players cooperating is Pareto efficient, the only pure Nash equilibrium is for both players to defect. An example of the payoff matrix for the stag hunt is pictured in Figure 2. Formally, a stag hunt is a game with two pure-strategy Nash equilibria, one that is risk dominant and another that is payoff dominant. The payoff matrix in Figure 1 illustrates a generic stag hunt. Often, games with a similar structure but without a risk-dominant Nash equilibrium are called assurance games. For instance, if a=2, b=1, c=0, and d=1, then (Hare, Hare) remains a Nash equilibrium but is no longer risk dominant; nonetheless many would call this game a stag hunt. In addition to the pure-strategy Nash equilibria there is one mixed-strategy Nash equilibrium. This equilibrium depends on the payoffs, but the risk dominance condition places a bound on it: no payoffs satisfying the conditions above can generate a mixed-strategy equilibrium in which Stag is played with a probability higher than one half. The best-response correspondences are pictured in the accompanying figure. There is a substantial relationship between the stag hunt and the prisoner's dilemma.
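The mixed-strategy equilibrium can be computed directly from the indifference condition. A sketch using the example payoffs a=2, b=1, c=0, d=1 mentioned above, with the article's labels (a: both hunt Stag, b: Hare against Stag, c: Stag against Hare, d: both hunt Hare):

```python
a, b, c, d = 2, 1, 0, 1  # the example payoffs from the text

# In the mixed equilibrium each player hunts Stag with probability p
# chosen so the opponent is indifferent between Stag and Hare:
#   p*a + (1-p)*c == p*b + (1-p)*d  =>  p = (d-c) / ((a-b) + (d-c))
p = (d - c) / ((a - b) + (d - c))

eu_stag = p * a + (1 - p) * c
eu_hare = p * b + (1 - p) * d
assert eu_stag == eu_hare  # indifference holds; here p == 0.5
```

With these payoffs the equilibrium puts probability exactly one half on Stag, consistent with the bound stated above.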
In biology, many circumstances that have been described as a prisoner's dilemma might also be interpreted as a stag hunt. It is also the case that human interactions that seem like prisoner's dilemmas may in fact be stag hunts. For example, suppose we have a prisoner's dilemma as pictured in Figure 3. The payoff matrix would need adjusting if players who defect against cooperators could be punished for their defection; for instance, if the expected punishment is −2, then imposing this punishment turns the above prisoner's dilemma into the stag hunt given in the introduction.
17.
Ultimatum game
–
The ultimatum game is a game frequently used in economic experiments. The first player receives a sum of money and proposes how to divide it between himself and the other player; the second player chooses either to accept or to reject this proposal. If the second player accepts, the money is split according to the proposal; if the second player rejects, neither player receives any money. The game is played only once, so that reciprocation is not an issue. For illustration, we suppose there is a smallest division of the good available, and that the total amount of money available is x. The first player chooses some amount p to keep for himself, in the interval [0, x]; the second player chooses some function f: [0, x] → {accept, reject}, that is, a response for every possible proposal. We represent the strategy profile as (p, f), where p is the proposal and f is the response function. If f(p) = accept, the first player receives p and the second x − p; otherwise both get zero. The profile (p, f) is a Nash equilibrium of the game if f(p) = accept and the second player would reject any higher demand: the first player would not want to unilaterally increase his demand, since the second would reject it, and the second would not want to reject the demand, since he would then get nothing. There is one other Nash equilibrium in which p = x and f(y) = reject for all y > 0; here both players get nothing, but neither could get more by unilaterally changing his strategy. However, only one of these Nash equilibria satisfies a more restrictive equilibrium concept, subgame perfection. Suppose that the first player demands a large amount, leaving the second only some small amount of money. By rejecting the demand, the second is choosing nothing rather than something, so it would be better for the second to accept any demand that gives him any amount whatsoever. If the first player knows this, he will give the second the smallest amount possible. When the game is carried out between members of a shared social group, people tend to offer fair splits, and offers of less than 30% are often rejected.
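The subgame-perfect argument above can be made concrete in a discrete version of the game. The total of 10 units and a smallest division of 1 unit are illustrative assumptions:

```python
x = 10  # total amount, in units of the smallest division

def responder_accepts(offer):
    # A payoff-maximizing responder accepts: something beats nothing.
    return offer > 0

# The proposer keeps the largest p whose leftover offer is still accepted.
best_keep = max(p for p in range(x + 1) if responder_accepts(x - p))
assert best_keep == x - 1  # keep 9 units, offer the smallest positive unit
```

This is the subgame-perfect outcome; as the experimental results below show, real responders routinely reject such minimal offers.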
It has also been found that delaying the responder's decision makes people accept unfair offers more often. Common chimpanzees behaved similarly to humans, proposing fair offers in one version of the game involving direct interaction between the chimpanzees. However, another study published in November 2012 showed that both kinds of chimpanzees, common chimpanzees and bonobos, did not reject unfair offers when a mechanical apparatus was used. As of February 2015, bonobos had not been studied using the protocol involving direct interaction. The highly mixed results have been taken as both evidence for and against the so-called Homo economicus assumptions of rational, utility-maximizing individual decisions.
18.
Ariel Rubinstein
–
Ariel Rubinstein is an Israeli economist who works in economic theory, game theory, and bounded rationality. He is a professor of economics at the School of Economics at Tel Aviv University; he studied mathematics and economics at the Hebrew University of Jerusalem, 1972–1979. In 1982, he published "Perfect equilibrium in a bargaining model"; the model is now known as the Rubinstein bargaining model. It describes two-person bargaining as a game with perfect information in which the players alternate offers. A key assumption is that the players are impatient; the main result gives conditions under which the game has a unique subgame perfect equilibrium and characterizes this equilibrium. He also co-wrote A Course in Game Theory with Martin J. Osborne. Rubinstein was elected a member of the Israel Academy of Sciences and Humanities, a Foreign Honorary Member of the American Academy of Arts and Sciences, and a member of the American Economic Association. In 1985 he was elected a fellow of the Econometric Society, and in 2002 he was awarded an honorary doctorate by Tilburg University. He has received the Bruno Prize, the Israel Prize for economics, the Nemmers Prize in Economics, the EMET Prize, and the Rothschild Prize. His books include Bargaining and Markets (with Martin J. Osborne, Academic Press, 1990), A Course in Game Theory (with Martin J. Osborne), Modeling Bounded Rationality (MIT Press, 1998), Economics and Language (Cambridge University Press, 2000), Lecture Notes in Microeconomic Theory: The Economic Agent (Princeton University Press, 2006), and Agadot Hakalkala (Kineret, Zmora-Bitan, 2009).
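The flavor of the unique subgame perfect equilibrium can be sketched numerically. This assumes the standard alternating-offers setting with a pie of size 1 and per-round discount factors; the specific discount factors are illustrative:

```python
d1, d2 = 0.9, 0.8  # per-round discount factors (illustrative assumption)

# Standard alternating-offers split: the first proposer's share when
# players 1 and 2 alternate offers over a pie of size 1.
x1 = (1 - d2) / (1 - d1 * d2)
x2 = 1 - x1  # responder's share, accepted immediately in equilibrium

# Consistency check: the responder is exactly indifferent between
# accepting now and receiving the proposer's share one round later.
y2 = (1 - d1) / (1 - d1 * d2)  # player 2's share when he proposes
assert abs(x2 - d2 * y2) < 1e-12
```

The indifference condition is what pins down the unique equilibrium: any smaller offer would be rejected, and any larger one leaves money on the table for the impatient proposer.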
19.
John Forbes Nash Jr.
–
John Forbes Nash Jr. was an American mathematician who made fundamental contributions to game theory, differential geometry, and the study of partial differential equations. Nash's work has provided insight into the factors that govern chance and decision-making inside complex systems, and his theories are widely used in economics. In 2015, he shared the Abel Prize with Louis Nirenberg for his work on nonlinear partial differential equations. In 1959, Nash began showing signs of mental illness. After 1970, his condition improved, allowing him to return to academic work by the mid-1980s. His struggles with his illness and his recovery became the basis for Sylvia Nasar's biography A Beautiful Mind. On May 23, 2015, Nash and his wife, Alicia Nash, were killed in a car crash while riding in a taxi on the New Jersey Turnpike. Nash was born on June 13, 1928, in Bluefield, West Virginia. His father, John Forbes Nash, was an engineer for the Appalachian Electric Power Company. His mother, Margaret Virginia Nash, had been a schoolteacher before she married; he was baptized in the Episcopal Church. He had a sister, Martha. Nash attended kindergarten and public school, and he also learned from books provided by his parents and grandparents. Nash's parents pursued opportunities to supplement their son's education, and arranged for him to take advanced mathematics courses at a local community college during his final year of high school. He attended the Carnegie Institute of Technology on a George Westinghouse Scholarship. He switched majors and eventually, on the advice of his teacher John Lighton Synge, took up mathematics. After graduating in 1948 with both a B.S. and an M.S. in mathematics, Nash accepted a scholarship to Princeton University, where he pursued further graduate studies in mathematics. Nash's adviser and former Carnegie professor Richard Duffin wrote a letter of recommendation for Nash's entrance to Princeton consisting of a single sentence: "This man is a genius." Nash was also accepted at Harvard University.
However, the chairman of the mathematics department at Princeton, Solomon Lefschetz, offered him the John S. Kennedy fellowship, and Nash also considered Princeton more favorably because of its proximity to his family in Bluefield. At Princeton, he began work on his equilibrium theory, later known as the Nash equilibrium. Nash earned a Ph.D. in 1950 with a 28-page dissertation on non-cooperative games. The thesis, written under the supervision of his doctoral advisor Albert W. Tucker, contained the definition and properties of the Nash equilibrium.
20.
Anatol Rapoport
–
Anatol Rapoport was a Russian-born American mathematical psychologist. He contributed to general systems theory, to mathematical biology, and to the mathematical modeling of social interaction. Rapoport was born in Lozova, Kharkov Governorate, Russia, into a secular Jewish family. In 1922, he came to the United States, and in 1928 he became a naturalized citizen. He started studying music in Chicago and continued with piano and conducting; however, due to the rise of Nazism, he found it impossible to make a career as a pianist. After the war, he joined the Committee on Mathematical Biology at the University of Chicago, publishing his first book, Science and the Goals of Man. He also received a one-year fellowship at the prestigious Center for Advanced Study in the Behavioral Sciences in Stanford, California. In 1970 Rapoport moved to Toronto to get away from the Vietnam-era United States. He was appointed professor of mathematics and psychology at the University of Toronto, and the university appointed him professor emeritus in 1980. He lived in bucolic Wychwood Park overlooking downtown Toronto, a neighbour of Marshall McLuhan. On his retirement from the University of Toronto, he became director of the Institute of Advanced Studies, serving until 1983. The University of Toronto appointed him professor of peace studies in 1984, a position he held until 1996. In 1984 he also co-founded Science for Peace, was elected its president, and remained on its executive until 1998. In 1954 Rapoport co-founded the Society for General Systems Research along with the researchers Ludwig von Bertalanffy and Ralph Gerard, and he became president of the Society for General Systems Research in 1965. Anatol Rapoport died of pneumonia in Toronto; he is survived by his wife Gwen, daughter Anya, and sons Alexander and Anthony.
He combined his mathematical expertise with psychological insights into the study of game theory and social networks. Rapoport extended these understandings into studies of conflict, dealing with nuclear disarmament. His autobiography, Certainties and Doubts: A Philosophy of Life, was published in 2001. Rapoport had a versatile mind, working in mathematics, psychology, biology, game theory, social network analysis, and peace and conflict studies. For example, he pioneered the modeling of parasitism and symbiosis, and this went on to give a conceptual basis for his lifelong work in conflict and cooperation. He wrote many other books on fights, games, and violence, and he analyzed contests in which there are more than two sets of conflicting interests, such as war, diplomacy, poker, or bargaining. His work led him to peace research, including the books The Origins of Violence and Peace: An Idea Whose Time Has Come.