1.
Game theory
–
Game theory is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. It is used in economics, political science, and psychology, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans and animals. Modern game theory began with the idea of the existence of mixed-strategy equilibria in two-person zero-sum games. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility. The theory was developed extensively in the 1950s by many scholars, and was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields, with the Nobel Memorial Prize in Economic Sciences going to game theorist Jean Tirole in 2014; John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern mathematical game theory. The first known discussion occurred in a letter written in 1713 by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat. In this letter, Waldegrave provides a mixed strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation.
In 1913 Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems; the Danish mathematician Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was proved false. Game theory did not really exist as a unique field until John von Neumann published a paper in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.
2.
Economic equilibrium
–
In economics, economic equilibrium is a state where economic forces such as supply and demand are balanced, and in the absence of external influences the values of economic variables will not change. For example, in the textbook model of perfect competition, equilibrium occurs at the point at which quantity demanded equals quantity supplied. However, the concept of equilibrium in economics also applies to imperfectly competitive markets. Three basic properties of equilibrium in general have been proposed by Huw Dixon: Equilibrium property P1, the behavior of agents is consistent; Equilibrium property P2, no agent has an incentive to change its behavior; and Equilibrium property P3, equilibrium is the outcome of some dynamic process. In a competitive equilibrium, supply equals demand. Property P1 is satisfied, because at the equilibrium price the amount supplied is equal to the amount demanded. Demand is chosen to maximize utility given the price, so no one on the demand side has any incentive to demand more or less at the prevailing price. Likewise, supply is determined by firms maximizing their profits at the market price; hence, agents on neither the demand side nor the supply side will have any incentive to alter their actions, and Property P2 is satisfied. To see whether Property P3 is satisfied, consider what happens when the price is above the equilibrium: in this case there is an excess supply, with the quantity supplied exceeding that demanded. This will tend to put downward pressure on the price to make it return to equilibrium. Likewise, where the price is below the equilibrium point there is a shortage in supply, leading to an increase in prices back to equilibrium. Not all equilibria are stable in the sense of Equilibrium property P3; it is possible to have competitive equilibria that are unstable. However, if an equilibrium is unstable, it raises the question of how the market might reach it: even if it satisfies properties P1 and P2, the absence of P3 means that the market can only be in the unstable equilibrium if it starts off there.
In most simple microeconomic stories of supply and demand, a static equilibrium is observed in a market; however, equilibrium may also be economy-wide or general, as opposed to the partial equilibrium of a single market. Equilibrium can change if there is a change in demand or supply conditions; for example, an increase in supply will disrupt the equilibrium, leading to lower prices. Eventually, a new equilibrium will be attained in most markets; then, there will be no change in price or in the amount of output bought and sold until there is an exogenous shift in supply or demand. That is, there are no endogenous forces leading to a change in the price or the quantity. The Nash equilibrium is widely used in economics as the main alternative to competitive equilibrium; it is used whenever there is a strategic element to the behavior of agents.
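The price-adjustment story behind property P3 can be sketched numerically: raise the price under excess demand and lower it under excess supply, and check that the process converges to the point where supply equals demand. The linear demand and supply curves below, and their coefficients, are hypothetical, chosen only for illustration.

```python
# Hypothetical linear market: demand D(p) = a - b*p, supply S(p) = c + d*p.
def demand(p, a=100.0, b=2.0):
    return a - b * p

def supply(p, c=10.0, d=1.0):
    return c + d * p

def tatonnement(p0=5.0, k=0.1, tol=1e-9, max_iter=10_000):
    """Raise price under excess demand, lower it under excess supply (property P3)."""
    p = p0
    for _ in range(max_iter):
        excess = demand(p) - supply(p)
        if abs(excess) < tol:
            break
        p += k * excess
    return p

p_star = tatonnement()
# Analytic equilibrium: a - b*p = c + d*p, so p* = (a - c) / (b + d) = 90 / 3 = 30.
```

Starting from any price, the excess-demand adjustment contracts toward the analytic equilibrium, which is exactly the stability property P3 for this market.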
3.
Nash equilibrium
–
The Nash equilibrium is one of the foundational concepts in game theory. The predictions of the Nash equilibrium of a game can be tested using experimental economics methods. Game theorists use the Nash equilibrium concept to analyze the outcome of the strategic interaction of several decision makers. The simple insight underlying John Nash's idea is that one cannot predict the result of the choices of multiple decision makers if one analyzes those decisions in isolation; instead, one must ask what each player would do, taking into account the decision-making of the others. Nash equilibrium has been used to analyze hostile situations like wars and arms races, and it has also been used to study to what extent people with different preferences can cooperate, and whether they will take risks to achieve a cooperative outcome. It has also been used to study the adoption of technical standards. The Nash equilibrium is named after John Forbes Nash, Jr. A version of the Nash equilibrium concept was first used in 1838 by Antoine Augustin Cournot in his theory of oligopoly. In Cournot's theory, firms choose how much output to produce to maximize their own profit; however, the best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when each firm's output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium. Cournot also introduced the concept of best response dynamics in his analysis of the stability of equilibrium; however, Nash's definition of equilibrium is broader than Cournot's. It is also broader than the definition of a Pareto-efficient equilibrium. The modern game-theoretic concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible actions.
The concept of the mixed-strategy Nash equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior; however, their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition of equilibrium. According to Nash, an equilibrium point is an n-tuple such that each player's mixed strategy maximizes his payoff if the strategies of the others are held fixed. Thus each player's strategy is optimal against those of the others. Since the development of the Nash equilibrium concept, game theorists have discovered that it makes misleading predictions in certain circumstances, and they have proposed many related solution concepts designed to overcome perceived flaws in the Nash concept. One particularly important issue is that some Nash equilibria may be based on threats that are not credible. In 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing his or her strategy. To see what this means, imagine that each player is told the strategies of the others.
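The informal definition, that no player can do better by unilaterally changing strategy, translates directly into a check over a payoff table. The sketch below uses hypothetical payoff numbers: a prisoner's-dilemma-style game, which has a unique pure equilibrium, and matching pennies, which has none in pure strategies.

```python
# Payoffs as (row player, column player) tuples; the games below are illustrative.
def pure_nash(payoffs):
    """Return all pure-strategy profiles where no player gains by unilateral deviation."""
    n_rows, n_cols = len(payoffs), len(payoffs[0])
    equilibria = []
    for r in range(n_rows):
        for c in range(n_cols):
            # Row player cannot improve by switching rows, column player by switching columns.
            row_ok = all(payoffs[r][c][0] >= payoffs[r2][c][0] for r2 in range(n_rows))
            col_ok = all(payoffs[r][c][1] >= payoffs[r][c2][1] for c2 in range(n_cols))
            if row_ok and col_ok:
                equilibria.append((r, c))
    return equilibria

# Prisoner's-dilemma-style payoffs: strategy 1 (defect) dominates; unique equilibrium (1, 1).
pd = [[(-1, -1), (-3, 0)],
      [(0, -3), (-2, -2)]]
# Matching pennies: no pure-strategy equilibrium exists (only a mixed one).
mp = [[(1, -1), (-1, 1)],
      [(-1, 1), (1, -1)]]
```

The empty result for matching pennies is exactly why the modern concept is defined over mixed strategies.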
4.
Minimax
–
Minimax is a decision rule used in decision theory, game theory, statistics and philosophy for minimizing the possible loss in a worst-case scenario. The maximin value of player i is formally defined as

v̲_i = max_{a_i} min_{a_{−i}} v_i(a_i, a_{−i})

where −i denotes all players other than player i, a_i is the action taken by player i, a_{−i} denotes the actions taken by all other players, and v_i is the payoff function of player i. Calculating the maximin value of a player takes a worst-case approach: for each possible action of player i, we consider all possible actions of the other players and find the combination that gives player i the smallest value. Then, we determine which action player i can take in order to make sure that this smallest value is the largest possible. For example, consider a game for two players, where the first player may choose any of three moves, labelled T, M, or B, and the second player may choose either of two moves, L or R. The result of each combination of moves is expressed in a payoff table. For the sake of example, we consider only pure strategies. Check each player in turn: the row player can play T, which guarantees him a payoff of at least 2, and the column player can play L and secure a payoff of at least 0. If both players play their maximin strategies, the payoff vector is given by the corresponding cell of the table; in contrast, the only Nash equilibrium of the game is a different strategy profile. The minimax value of player i is formally defined as

v̄_i = min_{a_{−i}} max_{a_i} v_i(a_i, a_{−i})

The definition is similar to that of the maximin value, only the order of the maximum and minimum operators is inverted. In the above example, the row player can get a value of 4 or 5, and the column player can get a value of 1, 1 or 4. In zero-sum games, the minimax solution is the same as the Nash equilibrium. Equivalently, Player 1's strategy guarantees him a payoff of V regardless of Player 2's strategy. The name minimax arises because each player minimizes the maximum payoff possible for the other; since the game is zero-sum, he or she also thereby minimizes his or her own maximum loss. See also: example of a game without a value. The following example of a zero-sum game, where A and B make simultaneous moves, illustrates minimax solutions. Suppose each player has three choices and consider the payoff matrix for A displayed on the right.
Assume the payoff matrix for B is the same matrix with the signs reversed, so that the game is zero-sum. Then, the maximin choice for A is A2, since the worst possible result is then having to pay 1. However, this pure-strategy choice is not stable, since each player can profit by deviating once the other's choice is anticipated, so a more stable strategy is needed. By randomizing over his moves, A can limit his expected loss no matter what B chooses; similarly, B can ensure an expected gain of at least 1/3, no matter what A chooses, by using a randomized strategy of choosing B1 with probability 1/3 and B2 with probability 2/3.
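The maximin and minimax computations described above can be sketched in code. Since the article's payoff tables are not reproduced here, the matrix below is a hypothetical zero-sum game (entries are payments from B to A); the gap between the two security levels is what motivates mixed strategies.

```python
# Hypothetical zero-sum payoff matrix for the row player A (illustrative numbers).
A_payoff = [[3, -2, 2],
            [-1, 0, 4],
            [-4, -3, 1]]

def maximin_row(m):
    """Worst-case guarantee for the row player: max over rows of each row's minimum."""
    return max(min(row) for row in m)

def minimax_col(m):
    """Worst-case cap the column player can enforce: min over columns of each column's maximum."""
    cols = list(zip(*m))
    return min(max(col) for col in cols)

v_lower = maximin_row(A_payoff)  # row player can guarantee at least this in pure strategies
v_upper = minimax_col(A_payoff)  # column player can hold the row player to at most this
```

Here the lower value is -1 and the upper value is 0; because they differ, no pure-strategy saddle point exists, and the game's value lies between them once mixed strategies are allowed.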
5.
Prisoner's dilemma
–
The prisoner's dilemma is a standard example of a game analyzed in game theory that shows why two rational individuals might not cooperate, even when it appears to be in their best interests to do so. It was originally framed by Merrill Flood and Melvin Dresher working at RAND in 1950. Albert W. Tucker formalized the game with prison sentence rewards and named it the prisoner's dilemma, presenting it as follows. Each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge, and they hope to get both sentenced to a year in prison on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain: each prisoner is given the opportunity either to betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent. The interesting part of this result is that pursuing individual reward logically leads both of the prisoners to betray, when they would get a better reward if they both kept silent. In reality, humans display a systematic bias towards cooperative behavior in this and similar games, much more so than predicted by simple models of rational self-interested action. If the number of times the game will be played is known to the players, then by backward induction two classically rational players will betray each other repeatedly; in an infinite or unknown-length game there is no fixed optimum strategy, and Prisoner's Dilemma tournaments have been held to compete and test algorithms. The prisoner's dilemma game can be used as a model for many real-world situations involving cooperative behaviour. In the game, the prisoners cannot communicate; they are separated in two individual rooms. Regardless of what the other decides, each gets a higher reward by betraying the other. The reasoning involves an argument by dilemma: B will either cooperate or defect. If B cooperates, A should defect, because going free is better than serving 1 year. If B defects, A should also defect, because serving 2 years is better than serving 3. So either way, A should defect.
Parallel reasoning will show that B should also defect. Because defection always results in a better payoff than cooperation, regardless of the other player's choice, it is a dominant strategy. Mutual defection is the only strong Nash equilibrium in the game. The structure of the traditional prisoner's dilemma can be generalized from its original prisoner setting. Suppose that the two players are represented by the colors red and blue, and that each player chooses either to Cooperate or to Defect. If both players cooperate, they both receive the reward R for cooperating; if both players defect, they both receive the punishment payoff P; if one defects while the other cooperates, the defector receives the temptation payoff T while the cooperator receives the sucker's payoff S. The donation game is a form of prisoner's dilemma in which cooperation corresponds to offering the other player a benefit b at a personal cost c, with b > c. The payoff matrix then has R = b − c, T = b, S = −c and P = 0. Note that 2R > T + S, which qualifies the donation game for study as an iterated game. The donation game may be applied to markets.
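The dominance argument above can be checked mechanically. The sketch below uses the canonical sentences from the story (1 year each for mutual silence, 2 each for mutual betrayal, 0 and 3 years in the mixed case) and confirms that betrayal is A's best reply to either action of B.

```python
# Sentences in years for the prisoner's dilemma described above, as
# (A's sentence, B's sentence); fewer years is better.
years = {
    ("silent", "silent"): (1, 1),
    ("silent", "betray"): (3, 0),
    ("betray", "silent"): (0, 3),
    ("betray", "betray"): (2, 2),
}

def best_reply(other_action):
    """A's sentence-minimizing action given B's action."""
    return min(("silent", "betray"), key=lambda a: years[(a, other_action)][0])

# Betrayal is a dominant strategy: it is A's best reply whatever B does,
# even though (silent, silent) would leave both better off than (betray, betray).
```

By the symmetry of the table the same holds for B, so mutual betrayal is the unique equilibrium despite being worse for both than mutual silence.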
6.
Best response
–
In game theory, the best response is the strategy which produces the most favorable outcome for a player, taking other players' strategies as given. Reaction correspondences, also known as best response correspondences, are used in the proof of the existence of mixed strategy Nash equilibria: one constructs a correspondence b_i for each player i from the set of opponent strategy profiles into the set of that player's strategies. So, for any set of opponents' strategies σ_{−i}, b_i(σ_{−i}) represents player i's best responses to σ_{−i}. Response correspondences for all 2x2 normal form games can be drawn with a line for each player in a unit square strategy space. Figures 1 to 3 graph the best response correspondences for the stag hunt game. The dotted line in Figure 1 shows the probability that player Y plays Stag; in Figure 2 the dotted line shows the probability that player X plays Stag. There are three distinctive reaction correspondence shapes, one for each of the three types of symmetric 2x2 games: coordination games, discoordination games and games with dominated strategies; any payoff-symmetric 2x2 game will take one of these three forms. Games in which players score highest when both players choose the same strategy, such as the stag hunt and battle of the sexes, are called coordination games. Games such as the game of chicken and the hawk-dove game, in which players score highest when they choose opposite strategies, are discoordination games; in these games the third Nash equilibrium is a mixed strategy which lies along the diagonal from the bottom left to the top right corner. If the players do not know which one of them is which, then the mixed Nash equilibrium is an evolutionarily stable strategy (ESS). Otherwise an uncorrelated asymmetry is said to exist, and the corner Nash equilibria are ESSes. Games with dominated strategies have reaction correspondences which cross at only one point, which will be in either the bottom left or top right corner in payoff-symmetric 2x2 games.
For instance, in the prisoner's dilemma, the Cooperate move is not optimal for any probability of opponent cooperation. Figure 5 shows the reaction correspondence for such a game, where the dimensions are the probabilities of playing Cooperate. A wider range of reaction correspondence shapes is possible in 2x2 games with payoff asymmetries. For each player there are five possible best response shapes, shown in Figure 6; from left to right these include dominated strategy, dominated strategy, rising, and falling. While there are only four possible types of payoff-symmetric 2x2 games, the five different best response curves per player allow for a larger number of payoff-asymmetric game types. Many of these are not truly different from each other; the dimensions may be redefined to produce symmetrical games which are logically identical. One well-known game with payoff asymmetries is the matching pennies game. In this game, player Y's reaction correspondence is that of a coordination game, while that of player X is that of a discoordination game. The only Nash equilibrium is the combination of mixed strategies where both players independently choose heads and tails with probability 0.5 each.
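A best response correspondence can be traced numerically by comparing expected payoffs as a function of the opponent's mixing probability. The sketch below does this for matching pennies from player X's side (X wins +1 on a match, loses 1 on a mismatch); the correspondence switches at the opponent's probability 0.5, which is where the mixed Nash equilibrium mentioned above sits.

```python
# Matching pennies from X's perspective: X gets +1 when the coins match, -1 otherwise.
def x_payoff_heads(q):
    """X's expected payoff from Heads when Y plays Heads with probability q."""
    return q * 1 + (1 - q) * (-1)

def x_payoff_tails(q):
    """X's expected payoff from Tails when Y plays Heads with probability q."""
    return q * (-1) + (1 - q) * 1

def x_best_response(q):
    """Best-response correspondence of X: 'H', 'T', or 'any' when indifferent."""
    h, t = x_payoff_heads(q), x_payoff_tails(q)
    if h > t:
        return "H"
    if t > h:
        return "T"
    return "any"
```

Because X's correspondence jumps at q = 0.5 and, by symmetry, Y's jumps at X's probability 0.5, the two correspondences cross only at the (0.5, 0.5) mixed profile.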
7.
Stackelberg competition
–
The Stackelberg leadership model is a strategic game in economics in which the leader firm moves first and then the follower firms move sequentially. It is named after the German economist Heinrich Freiherr von Stackelberg, who published Market Structure and Equilibrium (Marktform und Gleichgewicht) in 1934. In game theory terms, the players of this game are a leader and a follower, and they compete on quantity. The Stackelberg leader is sometimes referred to as the Market Leader. There are some further constraints upon the sustaining of a Stackelberg equilibrium. The leader must know ex ante that the follower observes its action, and the follower must have no means of committing to a future non-Stackelberg follower action, with the leader knowing this. Indeed, if the follower could commit to a Stackelberg leader action and the leader knew this, the leader's best response would be to play a Stackelberg follower action. Firms may engage in Stackelberg competition if one has some sort of advantage enabling it to move first. More generally, the leader must have commitment power. Moving observably first is the most obvious means of commitment: once the leader has made its move, it cannot undo it; it is committed to that action. Moving first may be possible if the leader is the incumbent monopolist of the industry. Holding excess capacity is another means of commitment. In very general terms, let the price function for the industry be P(q): price is a function of total output q = q1 + q2, where the subscript 1 represents the leader and 2 represents the follower. Suppose firm i has the cost structure C_i(q_i). The model is solved by backward induction. The leader considers what the best response of the follower is, then picks a quantity that maximises its payoff, anticipating the predicted response of the follower. The follower actually observes this and in equilibrium picks the expected quantity as a response. To calculate the subgame perfect Nash equilibrium (SPNE), the best response functions of the follower must first be calculated.
The profit of firm 2 (the follower) is revenue minus cost; revenue is the product of price and quantity and cost is given by the firm's cost structure, so profit is Π2 = P(q1 + q2)·q2 − C2(q2). The best response is to find the value of q2 that maximises Π2 given q1, i.e. given the output of the leader, the maximum of Π2 with respect to q2 is to be found. First differentiate Π2 with respect to q2:

∂Π2/∂q2 = (∂P/∂q2)·q2 + P − ∂C2/∂q2.

Setting this to zero for maximisation,

∂Π2/∂q2 = (∂P/∂q2)·q2 + P − ∂C2/∂q2 = 0,

the values of q2 that satisfy this equation are the best responses. Now the best response function of the leader is considered; this function is calculated by treating the follower's output as a function of the leader's output, as just computed. The profit of firm 1 (the leader) is Π1 = P(q1 + q2(q1))·q1 − C1(q1), where q2(q1) is the follower's quantity as a function of the leader's quantity, namely the function calculated above. The best response is to find the value of q1 that maximises Π1 given q2(q1), i.e. given the best response function of the follower, the maximum of Π1 with respect to q1 is to be found.
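The two-step backward induction just described can be sketched numerically. The parameters below are hypothetical: linear inverse demand P(q) = a − bq and a constant marginal cost c for both firms, for which the follower's first-order condition has the closed form q2(q1) = (a − c − b·q1) / (2b).

```python
# Hypothetical linear inverse demand P(q) = a - b*q, constant marginal cost c.
A, B, C = 120.0, 1.0, 20.0

def follower_best_response(q1):
    """Follower's first-order condition: a - b*q1 - 2*b*q2 - c = 0, solved for q2."""
    return (A - C - B * q1) / (2 * B)

def leader_profit(q1):
    """The leader substitutes the follower's best response before optimizing."""
    q2 = follower_best_response(q1)
    return (A - B * (q1 + q2) - C) * q1

# A grid search stands in for the leader's first-order condition.
grid = [i * 0.25 for i in range(0, 401)]  # q1 in [0, 100]
q1_star = max(grid, key=leader_profit)
q2_star = follower_best_response(q1_star)
# Closed form for these parameters: q1* = (A - C)/(2B) = 50, q2* = (A - C)/(4B) = 25.
```

The leader produces twice the follower's quantity here, which is the familiar first-mover advantage of the Stackelberg outcome relative to the symmetric Cournot split.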
8.
Subgame perfect equilibrium
–
In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Every finite extensive game has a subgame perfect equilibrium. A common method for determining subgame perfect equilibria in the case of a finite game is backward induction. Here one first considers the last actions of the game and determines which actions the final mover should take in each possible circumstance to maximize his or her utility. One then supposes that the last actor will take these actions, and this process continues backwards until one reaches the first move of the game. The strategies which remain are the set of all subgame perfect equilibria for finite-horizon extensive games of perfect information. However, backward induction cannot be applied to games of imperfect or incomplete information, because this would entail cutting through non-singleton information sets. A subgame perfect equilibrium necessarily satisfies the one-shot deviation principle. The set of subgame perfect equilibria for a given game is always a subset of the set of Nash equilibria for that game, and in some cases the sets can be identical. The ultimatum game provides an intuitive example of a game with fewer subgame perfect equilibria than Nash equilibria. An example of a game possessing an ordinary Nash equilibrium and a subgame perfect equilibrium is shown in Figure 1. Player 2's choice to be kind or unkind to player 1 might depend on the choice previously made by player 1. The payoff matrix of the game is shown in Table 1. Observe that there are two different Nash equilibria, given by the strategy profiles containing L and R respectively. Consider the equilibrium given by the strategy profile containing L: more formally, this equilibrium is not an equilibrium with respect to the subgame induced by node 22.
It is likely that in real life player 2 would choose the kind strategy instead, which would in turn inspire player 1 to change his strategy to R. The resulting profile containing R is not only a Nash equilibrium but is also an equilibrium in all subgames; it is therefore a subgame perfect equilibrium. Reinhard Selten proved that any game which can be broken into subgames containing a subset of all the choices in the main game will have a subgame perfect Nash equilibrium strategy. Subgame perfection is only used with games of complete information; it can be used with extensive form games of complete but imperfect information. One game in which the backward induction solution is well known is tic-tac-toe, but in theory even Go has such an optimum strategy for all players.
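The backward induction procedure described above can be sketched on a toy perfect-information tree. The tree and payoffs below are hypothetical, invented for illustration; each mover selects the branch that maximizes their own component of the payoff, starting from the last decisions.

```python
# Decision nodes are ("1", branches) or ("2", branches); leaves are (p1, p2) payoff tuples.
game = ("1", [
    ("L", ("2", [("l", (3, 1)), ("r", (0, 0))])),
    ("R", ("2", [("l", (1, 2)), ("r", (2, 3))])),
])

def backward_induction(node):
    """Return (payoffs, play path): each mover picks the branch maximizing own payoff."""
    if isinstance(node[1], list):           # decision node
        player, branches = node
        idx = 0 if player == "1" else 1     # which payoff component the mover cares about
        best_payoffs, best_path = None, None
        for action, child in branches:
            payoffs, path = backward_induction(child)
            if best_payoffs is None or payoffs[idx] > best_payoffs[idx]:
                best_payoffs, best_path = payoffs, [(player, action)] + path
        return best_payoffs, best_path
    return node, []                         # leaf: payoff tuple

payoffs, path = backward_induction(game)
```

In this toy tree, player 2 would answer L with l and R with r; anticipating this, player 1 plays L, and the surviving strategy profile is subgame perfect by construction.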
9.
Perfect information
–
In economics, perfect information is a feature of perfect competition. Perfect information is importantly different from complete information, which implies common knowledge of each player's utility functions and payoffs. Chess is an example of a game with perfect information, as each player can see all of the pieces on the board at all times. Other examples of perfect-information games include tic-tac-toe, Irensei, and Go. Card games where each player's cards are hidden from other players, as in contract bridge, are games of imperfect information.
10.
Bayesian game
–
In game theory, a Bayesian game is a game in which the players do not have complete information about the other players, but they have beliefs about them with known probability distributions. A Bayesian game can be converted into a game of complete but imperfect information. Harsanyi describes a Bayesian game in the following way: in addition to the players in the game, there is a special player called Nature. Nature assigns to each player a random variable which can take values of types for each player. Harsanyi's approach to modeling a Bayesian game in this way allows games of incomplete information to become games of imperfect information. The type of a player determines that player's payoff function; the probability associated with a type is the probability that the player for whom the type is specified is that type. In a Bayesian game, the incompleteness of information means that at least one player is unsure of the type of another player. Such games are called Bayesian because of the probabilistic analysis inherent in the game. The lack of information held by players and the modeling of beliefs mean that such games are also used to analyse imperfect information scenarios. The normal form representation of a non-Bayesian game with perfect information is a specification of the strategy spaces and payoff functions of the players. A strategy for a player is a complete plan of action that covers every contingency of the game, and the strategy space of a player is thus the set of all strategies available to that player. A payoff function is a function from the set of strategy profiles to the set of payoffs. In a Bayesian game, one has to specify strategy spaces, type spaces, payoff functions and beliefs. A strategy for a player is a complete plan of action that covers every contingency that might arise for every type that player might be; a strategy must specify the actions the player would take for every type he might be, not only for the type he actually is. Strategy spaces are defined accordingly. A type space for a player is just the set of all possible types of that player.
The beliefs of a player describe the uncertainty of that player about the types of the other players; each belief is the probability of the other players having particular types, given the type of the player with that belief. A payoff function is a 2-place function of strategy profiles and types: if a player has payoff function U and has type t, the payoff he receives is U(x*, t), where x* is the strategy profile played in the game. Ω is the set of states of nature; for instance, in a card game, it can be any order of the cards. A_i is the set of actions for player i; let A = A1 × A2 × ⋯ × A_N.
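The role of beliefs can be sketched with a hypothetical one-sided incomplete-information game: a player who does not know the opponent's type averages the type-contingent payoffs using the prior assigned by Nature. All names and numbers below are invented for illustration.

```python
# Hypothetical entry game: the entrant does not know whether the incumbent is a
# "fighter" or an "accommodator"; Nature draws the type with the prior below.
prior = {"fighter": 0.3, "accommodator": 0.7}

# Entrant's payoff for each (own action, incumbent type); numbers are illustrative.
payoff = {
    ("enter", "fighter"): -2.0,
    ("enter", "accommodator"): 3.0,
    ("stay_out", "fighter"): 0.0,
    ("stay_out", "accommodator"): 0.0,
}

def expected_payoff(action):
    """Average the type-contingent payoffs using the entrant's beliefs."""
    return sum(prior[t] * payoff[(action, t)] for t in prior)

best_action = max(("enter", "stay_out"), key=expected_payoff)
# E[enter] = 0.3 * (-2) + 0.7 * 3 = 1.5 > 0, so entering is optimal under these beliefs.
```

Shifting the prior sufficiently toward the "fighter" type would flip the expected-payoff comparison, which is exactly how beliefs drive behavior in a Bayesian game.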
11.
Information set (game theory)
–
In game theory, an information set is a set that, for a particular player, establishes all the possible moves that could have taken place in the game so far, given what that player has observed. If the game has perfect information, every information set contains only one member. Otherwise, it is the case that some players cannot be sure exactly what has taken place so far in the game. More specifically, in the extensive form, an information set is a set of decision nodes such that every node in the set belongs to one player, and, when the game reaches the set, that player cannot tell which node within it has been reached. The notion of an information set was introduced by John von Neumann. At the right are two versions of the battle of the sexes game, shown in extensive form. The first game is simply sequential: when player 2 has the chance to move, he or she is aware of player 1's choice. The second game is also sequential, but the dotted line shows player 2's information set; this is the way to show that, when player 2 moves, he or she is not aware of what player 1 did. This difference also leads to different predictions for the two games. In the first game, player 1 has the upper hand: they know that they can choose O (opera) safely, because once player 2 knows that player 1 has chosen opera, player 2 would rather go along for O and get 2 than choose F. Formally, that is applying subgame perfection to solve the game. In the second game, player 2 cannot observe what player 1 did, so this argument no longer applies.
12.
Bayes' theorem
–
In probability theory and statistics, Bayes' theorem describes the probability of an event, based on prior knowledge of conditions that might be related to the event. One of the applications of Bayes' theorem is Bayesian inference. When applied, the probabilities involved in Bayes' theorem may have different probability interpretations. With the Bayesian probability interpretation, the theorem expresses how a subjective degree of belief should rationally change to account for the availability of related evidence; Bayesian inference is fundamental to Bayesian statistics. Bayes' theorem is named after Rev. Thomas Bayes, who first provided an equation that allows new evidence to update beliefs. It was further developed by Pierre-Simon Laplace, who first published the modern formulation in his 1812 Théorie analytique des probabilités. Sir Harold Jeffreys put Bayes' algorithm and Laplace's formulation on an axiomatic basis. Jeffreys wrote that Bayes' theorem "is to the theory of probability what the Pythagorean theorem is to geometry." Bayes' theorem is stated mathematically as the equation

P(A|B) = P(B|A) P(A) / P(B)

where P(A) and P(B) are the probabilities of observing A and B without regard to each other; P(A|B), a conditional probability, is the probability of observing event A given that B is true; and P(B|A) is the probability of observing event B given that A is true. Bayes' theorem was named after the Reverend Thomas Bayes, who studied how to compute a distribution for the probability parameter of a binomial distribution. Bayes' unpublished manuscript was edited by Richard Price before it was posthumously read at the Royal Society. Price edited Bayes' major work "An Essay towards solving a Problem in the Doctrine of Chances" and wrote an introduction to the paper which provides some of the philosophical basis of Bayesian statistics.
In 1765 Price was elected a Fellow of the Royal Society in recognition of his work on the legacy of Bayes. The French mathematician Pierre-Simon Laplace reproduced and extended Bayes' results in 1774, apparently quite unaware of Bayes' work; the Bayesian interpretation of probability was developed mainly by Laplace. Stephen Stigler suggested in 1983 that Bayes' theorem was discovered by Nicholas Saunderson, a blind English mathematician, some time before Bayes; that interpretation, however, has been disputed. Martyn Hooper and Sharon McGrayne have argued that Richard Price's contribution was substantial: "By modern standards, Price discovered Bayes' work, recognized its importance, corrected it, contributed to the article, and found a use for it. The modern convention of employing Bayes' name alone is unfair but so entrenched that anything else makes little sense." Suppose a drug test is 99% sensitive and 99% specific: that is, the test will produce 99% true positive results for drug users and 99% true negative results for non-drug users. Suppose that 0.5% of people are users of the drug. If a randomly selected individual tests positive, what is the probability that he is a user? This surprising result arises because the number of non-users is very large compared to the number of users, so the number of false positives outweighs the number of true positives. To use concrete numbers, if 1000 individuals are tested, there are expected to be 995 non-users and 5 users; from the 995 non-users, 0.01 × 995 ≈ 10 false positives are expected, while the 5 users yield only about 5 true positives.
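The drug-test example can be computed directly from Bayes' theorem, with the positive-test probability in the denominator expanded over the two cases (user and non-user):

```python
def posterior(prior, sensitivity, specificity):
    """P(user | positive test) via Bayes' theorem."""
    p_pos_given_user = sensitivity
    p_pos_given_nonuser = 1.0 - specificity
    # Total probability of a positive test, over users and non-users.
    p_pos = p_pos_given_user * prior + p_pos_given_nonuser * (1.0 - prior)
    return p_pos_given_user * prior / p_pos

p = posterior(prior=0.005, sensitivity=0.99, specificity=0.99)
# p is about 0.332: despite the accurate test, a positive result implies only
# roughly a 1-in-3 chance of drug use, because non-users vastly outnumber users.
```

The computation mirrors the concrete count above: about 5 true positives against about 10 false positives per 1000 people tested.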
13.
Extensive-form game
–
Extensive-form games also allow representation of incomplete information, in the form of chance events encoded as moves by nature; the rest of this article follows this approach with motivating examples. This general definition was introduced by Harold W. Kuhn in 1953. Each player's subset of nodes is referred to as the nodes of that player. Each node of the Chance player has a probability distribution over its outgoing edges; at any given non-terminal node belonging to Chance, an outgoing branch is chosen according to the probability distribution. A pure strategy for a player thus consists of a selection: choosing precisely one class of outgoing edges for every information set. In a game of perfect information, the information sets are singletons. It is less evident how payoffs should be interpreted in games with Chance nodes; these interpretations can be made precise using epistemic modal logic (see Shoham & Leyton-Brown for details). A perfect information two-player game over a game tree can be represented as an extensive form game with outcomes. Examples of such games include tic-tac-toe, chess, and infinite chess. A game over an expectminimax tree, like that of backgammon, has no imperfect information but has moves of chance; poker, for example, has both moves of chance and imperfect information. In the diagrams, the numbers by every non-terminal node indicate to which player that decision node belongs, the numbers by every terminal node represent the payoffs to the players, and the labels by every edge of the graph are the name of the action that edge represents. The initial node belongs to player 1, indicating that player 1 moves first. Play according to the tree is as follows: player 1 chooses between U and D; player 2 observes player 1's choice and then chooses between U' and D'. The payoffs are as specified in the tree; there are four outcomes, represented by the four terminal nodes of the tree.
The payoffs associated with each outcome are as follows: if player 1 plays D, player 2 will play U to maximise his payoff, and so player 1 will receive only 1. However, if player 1 plays U, player 2 maximises his payoff by playing D; player 1 prefers 2 to 1, and so will play U, and player 2 will then play D. This is the subgame perfect equilibrium. An advantage of representing the game in this way is that the order of play is clear: the tree shows that player 1 moves first and that player 2 observes this move. However, in some games play does not occur like this: one player does not always observe the choice of another. An information set is a set of decision nodes such that the player whose turn it is to move cannot distinguish between them. In extensive form, an information set is indicated by a dotted line connecting all nodes in that set, or sometimes by a loop drawn around all the nodes in that set.
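The backward-induction reasoning above can be sketched generically. The tree below mirrors the described game; the exact terminal payoffs are assumed for illustration, chosen to be consistent with the outcomes the text derives:

```python
# Backward induction over a finite perfect-information game tree.
def backward_induction(node):
    """Return (payoff vector, action sequence) for optimal play from `node`."""
    if "payoffs" in node:              # terminal node
        return node["payoffs"], []
    player = node["player"]            # index of the player who moves here
    best = None
    for action, child in node["moves"].items():
        payoffs, plan = backward_induction(child)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [action] + plan)
    return best

# Player 1 (index 0) moves first; player 2 (index 1) observes and replies.
tree = {
    "player": 0,
    "moves": {
        "U": {"player": 1, "moves": {"U": {"payoffs": (0, 0)},
                                     "D": {"payoffs": (2, 1)}}},
        "D": {"player": 1, "moves": {"U": {"payoffs": (1, 2)},
                                     "D": {"payoffs": (3, 1)}}},
    },
}

payoffs, play = backward_induction(tree)
print(payoffs, play)  # (2, 1) ['U', 'D']: player 1 plays U, player 2 answers D
```

If player 1 instead played D, player 2 would answer U, leaving player 1 with only 1, which is exactly the comparison backward induction performs at the root.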
14.
Jean Tirole
–
Jean Tirole is a French professor of economics. He focuses on industrial organization, game theory, banking and finance. In 2014 he was awarded the Nobel Memorial Prize in Economic Sciences for his analysis of market power and regulation. Tirole received engineering degrees from the École Polytechnique in Paris in 1976, graduating as a member of the elite Corps of Bridges, Waters and Forests. He pursued graduate studies at the Paris Dauphine University and was awarded a DEA degree in 1976. In 1981, he received a Ph.D. in economics from the Massachusetts Institute of Technology for his thesis titled Essays in Economic Theory, under the supervision of Eric Maskin. After receiving his doctorate, he worked as a researcher at the École nationale des ponts et chaussées until 1984; from 1984 to 1991, he worked as Professor of Economics at MIT, and from 1994 to 1996 he was a Professor of Economics at the École Polytechnique. Tirole was involved with Jean-Jacques Laffont in the project of creating a new School of Economics in Toulouse. He was president of the Econometric Society in 1998 and of the European Economic Association in 2001. Tirole has been a member of the Académie des sciences morales et politiques since 2011, the Conseil d'analyse économique since 2008 and the Conseil stratégique de la recherche since 2013. In the early 2010s, he showed that banks generally tend to take short-term risks. Tirole was awarded the Nobel Memorial Prize in Economic Sciences in 2014 for his analysis of market power. He is a foreign honorary member of the American Academy of Arts and Sciences and of the American Economic Association. He has also been a Sloan Fellow and a Guggenheim Fellow; he was made a fellow of the Econometric Society in 1986 and an Economic Theory Fellow in 2011. In 2007 he was awarded the highest award of the French CNRS, and he is among the most influential economists in the world according to IDEAS/RePEc.
His research covers industrial organization, regulation, game theory, banking and finance, psychology and economics, and international finance. His books include: The Theory of Industrial Organization, MIT Press, 1988; Dynamic Models of Oligopoly (with Drew Fudenberg), Harwood Academic Publishers, 1986; A Theory of Incentives in Regulation and Procurement, MIT Press, 1993; The Prudential Regulation of Banks, MIT Press, 1994; Competition in Telecommunications, MIT Press, 1999; Financial Crises, Liquidity and the International Monetary System, Princeton University Press, 2002; The Theory of Corporate Finance, Princeton University Press, 2005 (Association of American Publishers 2006 Award for Excellence); Balancing the Banks, Princeton University Press, 2010; and Inside and Outside Liquidity, MIT Press, 2011.
15.
MIT Press
–
The MIT Press is a university press affiliated with the Massachusetts Institute of Technology in Cambridge, Massachusetts. MIT's publishing operations were first formally instituted with the creation of an imprint called Technology Press in 1932. This imprint was founded by James R. Killian, Jr., at the time editor of MIT's alumni magazine and later to become MIT president. Technology Press published eight titles independently, then in 1937 entered into an arrangement with John Wiley & Sons in which Wiley took over marketing. In 1962 the association with Wiley came to an end, after a further 125 titles had been published; the press acquired its current name after this separation, and has since functioned as an independent publishing house. A European marketing office was opened in 1969, and a journals division was added in 1972. Other areas, such as technology and design, have been added since; a recent addition is environmental science. In January 2010 the MIT Press published its 9,000th title, and it publishes about 200 books and 30 journals a year. In 2012 the Press celebrated its 50th anniversary, including by publishing a commemorative booklet on paper. The MIT Press is a distributor for publishers such as Zone Books. In 2000, the MIT Press created CogNet, an online resource for the study of the brain and the cognitive sciences. In 1981 the MIT Press published its first book under the Bradford Books imprint, Brainstorms: Philosophical Essays on Mind and Psychology by Daniel C. Dennett. The MIT Press also operates the MIT Press Bookstore, showcasing both its frontlist and backlist titles, along with a selection of complementary works from other academic publishers. Once extensive construction around its location is completed, the Bookstore is planned to be returned to a site adjacent to the subway entrance.
The Bookstore offers customized selections from the MIT Press at many conferences and symposia in the Boston area. The Press uses a colophon designed in 1962 by its longtime design director, Muriel Cooper; it later served as an important reference point for the 2015 redesign of the MIT Media Lab logo by Pentagram.
16.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration was devised in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966; the 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number, identifies periodical publications such as magazines. The ISBN was devised in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974, and the ISO online facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the second edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340-01381-8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Numbers (EAN-13).
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces; separating the parts of a 10-digit ISBN is likewise done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for a country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost, with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
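The check-digit arithmetic behind the SBN example above can be sketched as follows: the ISBN-10 check digit is computed from the first nine digits with weights 10 down to 2, brought up to a multiple of 11.

```python
def isbn10_check_digit(first_nine):
    """ISBN-10 check digit: weighted sum (weights 10..2) over nine digits, mod 11."""
    total = sum((10 - i) * int(d) for i, d in enumerate(first_nine))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)   # a value of 10 is written as X

def sbn_to_isbn10(sbn):
    """A nine-digit SBN becomes an ISBN-10 by prefixing the digit 0."""
    return "0" + sbn

print(sbn_to_isbn10("340013818"))       # 0340013818, i.e. ISBN 0-340-01381-8
print(isbn10_check_digit("034001381"))  # 8: the check digit survives the conversion
```

The prefixed 0 carries weight 10 but contributes nothing to the weighted sum, which is why the check digit need not be recalculated.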
17.
John Harsanyi
–
John Charles Harsanyi was a Hungarian-American economist and Nobel Memorial Prize in Economic Sciences winner. He made important contributions to the use of game theory; for this work, he was a co-recipient, along with John Nash and Reinhard Selten, of the 1994 Nobel Memorial Prize in Economics. Harsanyi was born on May 29, 1920 in Budapest, Hungary, the son of Alice and Charles Harsanyi; his parents converted from Judaism to Catholicism a year before he was born. He attended high school at the Lutheran Gymnasium in Budapest, where he became one of the best problem solvers of the KöMaL, the Mathematical and Physical Monthly for Secondary Schools. Founded in 1893, this periodical is generally credited with a share of Hungarian students' success in mathematics. He also won the first prize in the Eötvös mathematics competition for school students. Although he wanted to study mathematics and philosophy, his father sent him to France in 1939 to enroll in engineering at the University of Lyon. However, because of the start of World War II, Harsanyi returned to Hungary to study pharmacology at the University of Budapest. As a pharmacology student, Harsanyi escaped conscription into the Hungarian Army, which, for a person of Jewish descent, would have meant forced labor. However, in 1944 his military deferment was cancelled and he was compelled to join a labor unit on the Eastern Front. After the end of the war, Harsanyi returned to the University of Budapest for graduate studies in philosophy and sociology. Then a devout Catholic, he simultaneously studied theology, also joining the lay ranks of the Dominican Order; he later abandoned Catholicism, becoming an atheist for the rest of his life. Harsanyi spent the academic year 1947–1948 on the faculty of the Institute of Sociology of the University of Budapest, where he met Anne Klauber, his future wife.
He was forced to resign from the faculty for openly expressing his anti-Marxist opinions. Harsanyi remained in Hungary for the following two years, attempting to sell his family's pharmacy without losing it to the authorities. He and Klauber did not marry until they arrived in Australia, because Klauber's immigration papers would have needed to be changed to reflect her married name. The two arrived, with her parents, on December 30, 1950 and looked to marry immediately; Harsanyi and Klauber were married on January 2, 1951. Neither spoke much English, and they understood little of what they were told to say to each other; Harsanyi later explained to his new wife that she had promised to make better food than she usually did. Harsanyi's Hungarian degrees were not recognized in Australia, but they earned him credit at the University of Sydney for a master's degree. Harsanyi worked in a factory during the day and studied economics in the evening at the University of Sydney. While studying in Sydney, he started publishing research papers in economic journals, including the Journal of Political Economy and the Review of Economic Studies. The degree allowed him to take a position in 1954 at the University of Queensland in Brisbane.
18.
John Maynard Smith
–
John Maynard Smith FRS was a British theoretical evolutionary biologist and geneticist. Originally an aeronautical engineer during the Second World War, he later took a degree in genetics under the well-known biologist J. B. S. Haldane. Maynard Smith was instrumental in the application of game theory to evolution and theorised on other problems, such as the evolution of sex. John Maynard Smith was born in London, the son of the surgeon Sidney Maynard Smith, but following his father's death in 1928 the family moved to Exmoor. At Eton he discovered the work of J. B. S. Haldane, whose books were in the school's library despite the bad reputation Haldane had at Eton for his communism; he became an atheist at age 14. On leaving school, Maynard Smith joined the Communist Party of Great Britain; when the Second World War broke out in 1939, he defied his party's line and volunteered for service. He was rejected, however, because of his poor eyesight, and was told to finish his engineering degree. He later quipped that "under the circumstances, my poor eyesight was a selective advantage—it stopped me getting shot". In the year of his graduation, he married Sheila Matthew, and they later had two sons and one daughter. Between 1942 and 1947, he applied his degree to military aircraft design. Having decided that aircraft were "noisy and old-fashioned", Maynard Smith then changed career, entering University College London to study fruit fly genetics under Haldane. After graduating he became a lecturer in zoology at UCL between 1952 and 1965, where he directed the Drosophila lab and conducted research on population genetics. He published a popular Penguin book, The Theory of Evolution, in 1958. In 1962 he was one of the founding members of the University of Sussex and was a dean between 1965 and 1985; he subsequently became a professor emeritus. Prior to his death, the building housing much of Life Sciences at Sussex was renamed the John Maynard Smith Building in his honour.
In 1973 Maynard Smith formalised a central concept in evolutionary theory, the evolutionarily stable strategy (ESS). This area of research culminated in his 1982 book Evolution and the Theory of Games; the Hawk-Dove game is arguably his single most influential game-theoretical model. He was elected a Fellow of the Royal Society in 1977, and in 1986 he was awarded the Darwin Medal. Maynard Smith published a book entitled The Evolution of Sex, which explored the evolution of sexual reproduction in mathematical terms. During the late 1980s he also became interested in the other major evolutionary transitions, working with the evolutionary biologist Eörs Szathmáry; together they wrote an influential 1995 book, The Major Transitions in Evolution. A popular science version of the book, entitled The Origins of Life: From the Birth of Life to the Origin of Language, was published in 1999. In 1995 he was awarded the Linnean Medal by the Linnean Society, in 1999 he was awarded the Crafoord Prize jointly with Ernst Mayr, and in 2001 he was awarded the Kyoto Prize.
19.
Evolution and the Theory of Games
–
Evolution and the Theory of Games is a book by the British evolutionary biologist John Maynard Smith on evolutionary game theory. The book was published in December 1982 by Cambridge University Press. In it, Maynard Smith summarises work on evolutionary game theory that had developed in the 1970s. The book is noted for being well written and not overly mathematically challenging. So, for instance, suppose that in a population of frogs every individual fights to the death over resources; this would be an evolutionarily stable strategy (ESS) only if any one cowardly frog that does not fight to the death always fares worse. A more likely scenario is one where fighting to the death is not an ESS, because a frog might arise that stops fighting once it realises it is going to lose; this frog would then reap the benefits of fighting, but not the ultimate cost. Hence, fighting to the death would easily be invaded by a mutation that causes this sort of informed fighting. Much complexity can be built from this, and Maynard Smith is outstanding at explaining it in clear prose and with simple math.
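The logic of the frog example is that of Maynard Smith's Hawk-Dove game. A minimal sketch, with assumed payoff values V (value of the resource) and C (cost of injury), shows that when C > V the ESS is a mixed strategy playing Hawk with probability V/C, at which both strategies earn the same expected payoff:

```python
# Hawk-Dove, the canonical ESS example behind the frog story above.
# V (value of the resource) and C (cost of injury) are illustrative figures.
V, C = 2.0, 4.0

def payoff(strategy, opponent):
    """Expected payoff of `strategy` against `opponent` in classic Hawk-Dove."""
    table = {
        ("H", "H"): (V - C) / 2,  # escalated fight: win half the time, risk injury
        ("H", "D"): V,            # hawk takes the resource from a retreating dove
        ("D", "H"): 0.0,          # dove retreats empty-handed
        ("D", "D"): V / 2,        # two doves share
    }
    return table[(strategy, opponent)]

# With C > V, always fighting (pure Hawk) is not an ESS; the mixed ESS
# plays Hawk with probability p = V / C.
p = V / C
fitness_H = p * payoff("H", "H") + (1 - p) * payoff("H", "D")
fitness_D = p * payoff("D", "H") + (1 - p) * payoff("D", "D")
print(p, fitness_H, fitness_D)  # at the mixed ESS both strategies do equally well
```

The equal fitnesses are what make the mixture stable: any mutant shifting toward more or less Hawk play cannot do better against the population.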
20.
Ariel Rubinstein
–
Ariel Rubinstein is an Israeli economist who works in economic theory, game theory and bounded rationality. Rubinstein is a professor of economics at the School of Economics at Tel Aviv University; he studied mathematics and economics at the Hebrew University of Jerusalem from 1972 to 1979. In 1982, he published "Perfect equilibrium in a bargaining model", now known as the Rubinstein bargaining model. It describes two-person bargaining as a game with perfect information in which the players alternate offers; a key assumption is that the players are impatient. The main result gives conditions under which the game has a unique subgame perfect equilibrium, and characterizes this equilibrium. He also co-wrote A Course in Game Theory with Martin J. Osborne. Rubinstein was elected a member of the Israel Academy of Sciences and Humanities, a Foreign Honorary Member of the American Academy of Arts and Sciences, and a member of the American Economic Association. In 1985 he was elected a fellow of the Econometric Society, and in 2002 he was awarded an honorary doctorate by Tilburg University. He has received the Bruno Prize, the Israel Prize for economics, the Nemmers Prize in Economics, the EMET Prize and the Rothschild Prize. His books include: Bargaining and Markets, with Martin J. Osborne, Academic Press, 1990; A Course in Game Theory, with Martin J. Osborne, MIT Press, 1994; Modeling Bounded Rationality, MIT Press, 1998; Economics and Language, Cambridge University Press, 2000; Lecture Notes in Microeconomic Theory: The Economic Agent, Princeton University Press, 2006; and Agadot Hakalkala, Kineret, Zmora-Bitan, 2009.
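The unique subgame perfect equilibrium of the alternating-offers model has a closed form. A sketch, assuming discount factors d1 and d2 per round of delay and player 1 proposing first:

```python
# Unique SPE split in Rubinstein's alternating-offers bargaining model.
# d1, d2 are the players' per-round discount factors in (0, 1);
# player 1 makes the first offer, which is accepted immediately in equilibrium.

def rubinstein_split(d1, d2):
    """Return (player 1's share, player 2's share) of the unit pie."""
    x1 = (1 - d2) / (1 - d1 * d2)   # proposer's share
    return x1, 1 - x1

print(rubinstein_split(0.5, 0.5))  # with equal impatience, the proposer gets 2/3
```

Impatience is what pins the split down: the more patient a player (discount factor closer to 1), the larger the share that player can extract.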
21.
Reinhard Selten
–
Reinhard Justus Reginald Selten was a German economist who won the 1994 Nobel Memorial Prize in Economic Sciences. He is also known for his work in bounded rationality. Selten was born in Breslau in Lower Silesia, now in Poland, to a Jewish father, Adolf Selten, and was raised as a Protestant. He recalled later that as a schoolboy he would occupy his mind with problems of geometry and algebra while walking home. He studied mathematics at Goethe University Frankfurt and obtained his diploma in 1957; he then worked as scientific assistant to Heinz Sauermann until 1967. In 1959, he married Elisabeth Lang Reiner. In 1961, he received his doctorate in mathematics in Frankfurt with a thesis on the evaluation of n-person games. He was a visiting professor at Berkeley, taught from 1969 to 1972 at the Free University of Berlin and, from 1972 to 1984, at the University of Bielefeld. He then accepted a professorship at the University of Bonn, where he built the BonnEconLab, a laboratory for experimental economic research, in which he remained active even after his retirement. Selten was professor emeritus at the University of Bonn, Germany. He had been an Esperantist since 1959 and met his wife through the Esperanto movement. He was a member and co-founder of the International Academy of Sciences San Marino, and for the 2009 European Parliament election he was the top candidate for the German wing of Europe – Democracy – Esperanto. For his work in game theory, Selten won the 1994 Nobel Memorial Prize in Economic Sciences; he was Germany's first and, at the time of his death, only Nobel laureate in economics. He is also well known for his work in bounded rationality, and can be considered one of the founding fathers of experimental economics. With Gerd Gigerenzer he edited the book Bounded Rationality: The Adaptive Toolbox. He developed an example of a game called Selten's Horse because of its extensive-form representation.
His last work was Impulse Balance Theory and its Extension by an Additional Criterion. He is noted for publishing in non-refereed journals to avoid being forced to make unwanted changes to his work. His books include: A General Theory of Equilibrium Selection in Games, Cambridge, MA; Models of Strategic Rationality, Theory and Decision Library, Series C: Game Theory, Mathematical Programming and Operations Research, Dordrecht-Boston-London, Kluwer Academic Publishers; Enkonduko en la Teorion de Lingvaj Ludoj – Ĉu mi lernu Esperanton?, Berlin-Paderborn, Akademia Libroservo, Institut für Kybernetik (in Esperanto); Game Theory and Economic Behavior: Selected Essays, 2 volumes; a new edition of Models of Strategic Rationality with a Chinese introduction, in Outstanding Academic Works on Economics by Nobel Prize Winners; and a Chinese translation of Models of Strategic Rationality in the same series.
22.
Cambridge University Press
–
Cambridge University Press is the publishing business of the University of Cambridge. Granted letters patent by Henry VIII in 1534, it is the world's oldest publishing house; it also holds letters patent as the Queen's Printer. The Press's mission is to further the University's mission by disseminating knowledge in the pursuit of education, learning and research. Cambridge University Press is a department of the University of Cambridge and is both an academic and an educational publisher, with a global presence, publishing hubs, and offices in more than 40 countries. Its publishing includes journals, monographs, reference works and textbooks. Cambridge University Press is an enterprise that transfers part of its annual surplus back to the university. It is both the oldest publishing house in the world and the oldest university press: it originated from letters patent granted to the University of Cambridge by Henry VIII in 1534, and has been producing books continuously since the first University Press book was printed. Cambridge is one of the two privileged presses, the other being Oxford University Press. Authors published by Cambridge have included John Milton, William Harvey, Isaac Newton, Bertrand Russell, and Stephen Hawking. In 1591, Thomas's successor, John Legate, printed the first Cambridge Bible; the London Stationers objected strenuously, claiming that they had the monopoly on Bible printing, and the university's response was to point out the provision in its charter to print "all manner of books". In July 1697 the Duke of Somerset made a loan of £200 to the university towards the printing house and press, and James Halman, Registrary of the University, also contributed. It was in Bentley's time, in 1698, that a body of scholars was appointed to be responsible to the university for the Press's affairs. The Press Syndicate's publishing committee still meets regularly, and its role still includes the review of the Press's planned output. John Baskerville became University Printer in the mid-eighteenth century.
Baskerville's concern was the production of the finest possible books using his own type-design. A technological breakthrough was badly needed, and it came when Lord Stanhope perfected the making of stereotype plates; this involved making a mould of the whole surface of a page of type. The Press was the first to use this technique, and in 1805 produced a technically successful stereotyped edition of the Bible. Under the stewardship of C. J. Clay, who was University Printer from 1854 to 1882, the Press increased the size and scale of its academic and educational publishing operation; an important factor in this increase was the inauguration of its list of schoolbooks. During Clay's administration, the Press also undertook a sizable co-publishing venture with Oxford: the Revised Version of the Bible, which was begun in 1870 and completed in 1885. It was Wright who devised the plan for one of the most distinctive Cambridge contributions to publishing, the Cambridge Histories; the Cambridge Modern History was published between 1902 and 1912.
23.
Graphical game theory
–
In game theory, the common ways to describe a game are the normal form and the extensive form. The graphical form is a compact representation of a game that uses the structure of interaction among the participants. Consider a game with n players, each with m strategies. We represent the players as nodes in a graph G in which each player's utility function depends only on his own strategy and those of his neighbors; the fewer other players the utility depends on, the smaller the graphical representation. Each node i in G has a utility function u_i : {1, …, m}^(d_i + 1) → R, where d_i is the degree of node i; u_i specifies the utility of player i as a function of his strategy as well as those of his neighbors. For a general game with n players, each with m possible strategies, the size of the graphical representation is O(n m^(d+1)), where d is the maximal node degree in the graph, whereas the normal form requires n m^n payoff entries. If d ≪ n, the graphical representation is much smaller. In the case where each player's utility function depends on only one other player, the maximal degree of the graph is 1, so the size of the input will be n m². Finding a Nash equilibrium in a general game takes time exponential in the size of the representation; if the graphical representation of the game is a tree, we can find the equilibrium in polynomial time. In the general case, where the maximal degree of a node is 3 or more, the problem is NP-complete.
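The size comparison can be made concrete with a quick calculation; the values of n, m and d below are illustrative, not from the text:

```python
def normal_form_size(n, m):
    """Payoff entries needed to tabulate all n players over every strategy profile."""
    return n * m ** n

def graphical_form_size(n, m, d):
    """Entries needed when each player's utility involves at most d neighbors."""
    return n * m ** (d + 1)

n, m, d = 20, 2, 2   # 20 players, 2 strategies each, at most 2 neighbors
print(normal_form_size(n, m))        # 20,971,520 entries
print(graphical_form_size(n, m, d))  # 160 entries
```

The gap widens exponentially in n while the graphical size grows only linearly, which is exactly the point of the representation when d ≪ n.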
24.
Normal-form game
–
In game theory, normal form is a description of a game. Unlike extensive form, normal-form representations are not graphical per se; while this approach can be of greater use in identifying strictly dominated strategies and Nash equilibria, some information is lost as compared to extensive-form representations. The normal-form representation of a game includes all perceptible and conceivable strategies; in static games of complete, perfect information, a normal-form representation of a game is a specification of the players' strategy spaces and payoff functions. The matrix to the right is a representation of a game in which players move simultaneously. For example, if player 1 plays Top and player 2 plays Left, player 1 receives 4. In each cell, the first number represents the payoff to the row player, and the second number represents the payoff to the column player. Often, symmetric games are represented with only one payoff: the payoff for the row player. For example, the payoff matrices on the right and left below represent the same game. The payoff matrix facilitates elimination of dominated strategies, and it is often used to illustrate this concept. For example, in the prisoner's dilemma, each prisoner can either Cooperate or Defect. If exactly one prisoner defects, he gets off easily while the other prisoner is locked up for a long time; however, if they both defect, they will both be locked up for a shorter time. One can determine that Cooperate is strictly dominated by Defect by comparing the first numbers in each column, in this case 0 > −1 and −2 > −5. This shows that no matter what the column player chooses, the row player does better by choosing Defect. Similarly, one compares the second payoff in each row; again 0 > −1 and −2 > −5. This shows that no matter what row does, column does better by choosing Defect. This demonstrates that the unique Nash equilibrium of this game is (Defect, Defect). These matrices only represent games in which moves are simultaneous.
The above matrix does not represent the game in which player 1 moves first, observed by player 2; in order to represent this sequential game we must specify all of player 2's actions, even in contingencies that can never arise in the course of the game. In this game, player 2 has two actions, as before, Left and Right; unlike before, he has four strategies, contingent on player 1's actions. Accordingly, to specify a game, the payoff function has to be specified for each player in the player set P.
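The column-by-column comparison described above can be written out directly. The payoff numbers are the ones the text uses for the prisoner's dilemma (0, −1, −2, −5):

```python
# Strict dominance check in the prisoner's dilemma: both cooperate -> -1 each;
# a lone defector gets 0 while the cooperator gets -5; mutual defection -> -2 each.
payoffs = {
    ("C", "C"): (-1, -1),
    ("C", "D"): (-5, 0),
    ("D", "C"): (0, -5),
    ("D", "D"): (-2, -2),
}
strategies = ("C", "D")

def strictly_dominates(player, s, t):
    """True if strategy s beats t for `player` against every opponent choice."""
    def pay(own, other):
        profile = (own, other) if player == 0 else (other, own)
        return payoffs[profile][player]
    return all(pay(s, o) > pay(t, o) for o in strategies)

print(strictly_dominates(0, "D", "C"))  # True: Defect dominates Cooperate for row
print(strictly_dominates(1, "D", "C"))  # True for the column player as well
```

Since Defect strictly dominates Cooperate for both players, (Defect, Defect) is the unique Nash equilibrium, exactly as the matrix comparisons show.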
25.
Preference (economics)
–
In economics and other social sciences, preference is the ordering of alternatives based on their relative utility, a process which results in an optimal choice. The character of the preferences is determined purely by taste factors, independent of considerations of prices, income and availability of goods. With the help of the scientific method, many practical decisions of life can be modelled. In 1926 Ragnar Frisch developed, for the first time, a mathematical model of preferences in the context of economic demand and utility functions. Up to then, economists had developed a theory of demand that omitted primitive characteristics of people. This omission ceased when, at the end of the 19th century, theorists began to relate their concepts to observables; because binary choices are directly observable, the approach instantly appealed to economists. The search for observables in microeconomics is taken even further by revealed preference theory. Since the pioneering efforts of Frisch in the 1920s, one of the issues which has pervaded the theory of preferences is the representability of a preference structure with a real-valued function; this has been achieved by mapping it to the mathematical index called utility. Von Neumann and Morgenstern's 1944 book Theory of Games and Economic Behavior treated preferences as a formal relation whose properties can be stated axiomatically, so the economics of choice can be examined either at the level of utility functions or at the level of preferences. Suppose the set of all states of the world is X and an agent has a preference relation on X. It is common to mark the weak preference relation by ⪯. The symbol ∼ is used as a shorthand for the indifference relation: x ∼ y ⟺ (x ⪯ y and y ⪯ x), which reads "the agent is indifferent between x and y". The symbol ≺ is used as a shorthand for the strict preference relation: x ≺ y ⟺ (x ⪯ y and not y ⪯ x), which reads "the agent strictly prefers y to x".
In everyday speech, the statement "x is preferred to y" is generally understood to mean that someone chooses x over y; however, decision theory rests on more precise definitions of preferences, given that there are many experimental conditions influencing people's choices in many directions. Suppose a person is confronted with a mental experiment that she must solve with the aid of introspection. She is offered apples and oranges, and is asked to choose one of the two. A decision scientist observing this single event would be inclined to say that whichever is chosen is the preferred alternative. Under several repetitions of this experiment, if the scientist observes that apples are chosen 51% of the time, it would mean that x ≻ y; if each is chosen half of the time, then x ∼ y; finally, if she chooses oranges 51% of the time, it means that y ≻ x. Preference is here being identified with a greater frequency of choice.
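When a preference structure is representable by a utility function u, the relations above reduce to comparisons of u. A minimal sketch, with invented utility values for illustration:

```python
# Preference relations derived from a utility function u, following the
# convention that x ⪯ y means the agent likes y at least as much as x.

def weakly_preceded(u, x, y):    # x ⪯ y
    return u(x) <= u(y)

def indifferent(u, x, y):        # x ∼ y ⟺ (x ⪯ y and y ⪯ x)
    return weakly_preceded(u, x, y) and weakly_preceded(u, y, x)

def strictly_preceded(u, x, y):  # x ≺ y ⟺ (x ⪯ y and not y ⪯ x)
    return weakly_preceded(u, x, y) and not weakly_preceded(u, y, x)

# Invented utilities: apples and pears tie, oranges rank below both.
u = {"apple": 2, "orange": 1, "pear": 2}.get
print(strictly_preceded(u, "orange", "apple"))  # True: apple strictly preferred
print(indifferent(u, "apple", "pear"))          # True
```

Note that any strictly increasing transformation of u yields the same three relations, which is why ordinal utility is only defined up to monotone rescaling.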
26.
Simultaneous game
–
In game theory, a simultaneous game is a game where each player chooses his action without knowledge of the actions chosen by the other players. Normal-form representations are usually used for simultaneous games. Rock-Paper-Scissors, a widely played game, is a real-life example of a simultaneous game: both players make a decision at the same time, randomly, without prior knowledge of the opponent's decision. There are two players in this game, and each of them has three different strategies to choose from. We display Player 1's strategies as rows and Player 2's strategies as columns; in the table, the numbers in red represent the payoff to Player 1 and the numbers in blue represent the payoff to Player 2. Hence, the payoffs for the two-player game of Rock-Paper-Scissors can be laid out in such a table. In game-theory terms, the prisoner's dilemma is another example of a simultaneous game.
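Since the table itself is not reproduced here, a sketch of the Rock-Paper-Scissors payoff matrix can stand in for it; the win = 1, loss = −1, tie = 0 payoffs are the conventional ones, assumed rather than taken from the text:

```python
# Normal-form payoff matrix for simultaneous Rock-Paper-Scissors.
MOVES = ("Rock", "Paper", "Scissors")
BEATS = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}

def payoff(p1, p2):
    """Payoffs to (player 1, player 2) when both moves are revealed at once."""
    if p1 == p2:
        return (0, 0)               # tie
    return (1, -1) if BEATS[p1] == p2 else (-1, 1)

# Rows are Player 1's strategies, columns are Player 2's, as in the text.
matrix = {(a, b): payoff(a, b) for a in MOVES for b in MOVES}
print(matrix[("Rock", "Scissors")])  # (1, -1)
print(matrix[("Rock", "Paper")])     # (-1, 1)
```

Because every cell sums to zero, this is a zero-sum simultaneous game; its unique equilibrium is the mixed strategy playing each move with probability 1/3.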