1.
Game theory
–
Game theory is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. It is used in economics, political science, and psychology, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations and is now an umbrella term for the science of logical decision making in humans and animals. Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility. This theory was developed extensively in the 1950s by many scholars, and game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields, with the Nobel Memorial Prize in Economic Sciences going to game theorist Jean Tirole in 2014, and John Maynard Smith being awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern mathematical game theory; the first known discussion of game theory occurred in a letter written in 1713 by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat. In this letter, Waldegrave provides a mixed strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation.
In 1913 Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems; the Danish mathematician Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was later proved false. Game theory did not really exist as a unique field until John von Neumann published a paper in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.
2.
Chicken (game)
–
The game of chicken, also known as the hawk-dove game or snowdrift game, is a model of conflict for two players in game theory. From a game-theoretic point of view, chicken and hawk-dove are identical. The game has also been used to describe the mutual assured destruction of nuclear warfare, especially the sort of brinkmanship involved in the Cuban Missile Crisis. The game of chicken models two drivers, both headed for a bridge from opposite directions. The first to swerve away yields the bridge to the other; if neither player swerves, the result is a costly deadlock in the middle of the bridge, or a potentially fatal head-on collision. It is presumed that the best thing for each driver is to stay straight while the other swerves; additionally, a crash is presumed to be the worst outcome for both players. This yields a situation where each player, in attempting to secure his best outcome, risks the worst. The phrase "game of chicken" is also used as a metaphor for a situation where two parties engage in a showdown in which they have nothing to gain, and only pride stops them from backing down. Bertrand Russell described the game this way: "This sport is adapted from a sport which, I am told, is practiced by some youthful degenerates. It is played by choosing a long road with a white line down the middle. Each car is expected to keep the wheels on one side of the white line; as they approach each other, mutual destruction becomes more and more imminent. If one of them swerves from the line before the other, the one who has swerved becomes an object of contempt. As played by irresponsible boys, this game is considered decadent and immoral, though only the lives of the players are risked. Both are to blame for playing such a dangerous game. The game may be played without misfortune a few times, but the moment will come when neither side can face the derisive cry of 'Chicken!'"
When that moment comes, the statesmen of both sides will plunge the world into destruction. Brinkmanship involves the introduction of an element of uncontrollable risk: even if all players act rationally in the face of risk, uncontrollable events can still trigger the catastrophic outcome. This kind of chicken appears in the "chickie run" scene from the film Rebel Without a Cause; the opposite scenario occurs in Footloose, where Ren McCormack is stuck in his tractor and hence wins the game, as he can't play chicken. The basic game-theoretic formulation of chicken has no element of variable, potentially catastrophic risk. The hawk-dove version of the game imagines two players contesting an indivisible resource who can choose between two strategies, one more escalated than the other. They can use threat displays, or physically attack each other; if both players choose the Hawk strategy, they fight until one is injured and the other wins.
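The structure of chicken described above can be checked mechanically. Below is a minimal sketch in Python; the exact payoff numbers are an illustrative assumption (only their ordering matters: a crash is worst, yielding is bad, winning is best):

```python
# Pure-strategy Nash equilibria of the game of chicken. The payoff numbers
# here are an illustrative assumption; only their ordering matters.
PAYOFFS = {  # (row action, column action) -> (row payoff, column payoff)
    ("swerve", "swerve"): (0, 0),
    ("swerve", "straight"): (-1, 1),
    ("straight", "swerve"): (1, -1),
    ("straight", "straight"): (-10, -10),  # head-on collision: worst for both
}
ACTIONS = ("swerve", "straight")

def is_nash(row, col):
    """True if neither player gains by unilaterally deviating."""
    r, c = PAYOFFS[(row, col)]
    best_row = all(PAYOFFS[(a, col)][0] <= r for a in ACTIONS)
    best_col = all(PAYOFFS[(row, a)][1] <= c for a in ACTIONS)
    return best_row and best_col

equilibria = [(r, c) for r in ACTIONS for c in ACTIONS if is_nash(r, c)]
print(equilibria)  # the two asymmetric outcomes: one swerves, the other stays straight
```

The two pure equilibria are exactly the asymmetric outcomes, which is why each player wants to commit to staying straight and force the other to swerve.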
3.
Prisoner's dilemma
–
The prisoner's dilemma was originally framed by Merrill Flood and Melvin Dresher, working at RAND in 1950. Albert W. Tucker formalized the game with prison sentence rewards and named it the "prisoner's dilemma", presenting it as follows: each prisoner is in solitary confinement with no means of communicating with the other. The prosecutors lack sufficient evidence to convict the pair on the principal charge, and they hope to get both sentenced to a year in prison on a lesser charge. Simultaneously, the prosecutors offer each prisoner a bargain: each prisoner is given the opportunity either to betray the other by testifying that the other committed the crime, or to cooperate with the other by remaining silent. The interesting part of this result is that pursuing individual reward logically leads both prisoners to betray, even though they would get a better reward if they both kept silent. In reality, humans display a systemic bias towards cooperative behavior in this and similar games, much more so than predicted by simple models of rational self-interested action. If the number of times the game will be played is known to the players, backward induction leads two rational players to betray each other repeatedly; in an infinite or unknown-length game there is no fixed optimum strategy, and prisoner's dilemma tournaments have been held to compete and test algorithms. The prisoner's dilemma game can be used as a model for many real-world situations involving cooperative behaviour. In the standard setup the prisoners cannot communicate; they are separated in two individual rooms. Regardless of what the other decides, each gets a higher reward by betraying the other. The reasoning involves an argument by dilemma: B will either cooperate or defect. If B cooperates, A should defect, because going free is better than serving 1 year. If B defects, A should also defect, because serving 2 years is better than serving 3. So either way, A should defect.
Parallel reasoning will show that B should defect. Because defection always results in a better payoff than cooperation regardless of the other player's choice, it is a dominant strategy. Mutual defection is the only strong Nash equilibrium in the game. The structure of the traditional prisoner's dilemma can be generalized from its original prisoner setting. Suppose that the two players are represented by the colors red and blue, and that each player chooses to either Cooperate or Defect. If both players cooperate, they both receive the reward R for cooperating. If both players defect, they both receive the punishment payoff P. If one defects while the other cooperates, the defector receives the temptation payoff T and the cooperator receives the sucker's payoff S. The donation game is a form of prisoner's dilemma in which cooperation corresponds to offering the other player a benefit b at a personal cost c, with b > c: mutual cooperation yields b − c to each player, unilateral defection yields b to the defector and −c to the cooperator, and mutual defection yields 0 to both. Note that 2R > T + S (that is, 2b − 2c > b − c, which holds since b > c), which qualifies the donation game to be an iterated game. The donation game may be applied to markets.
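The donation-game payoffs and the dominance argument above can be written out numerically. The values b=3, c=1 below are an illustrative assumption; any b > c > 0 gives the same ordering:

```python
# Donation-game payoffs (a prisoner's dilemma): cooperating confers a
# benefit b on the other player at a personal cost c, with b > c > 0.
# The concrete values b=3, c=1 are an illustrative assumption.
b, c = 3, 1
R = b - c   # reward: both cooperate
S = -c      # sucker's payoff: cooperate against a defector
T = b       # temptation: defect against a cooperator
P = 0       # punishment: both defect

# The prisoner's dilemma ordering, and the iterated-game condition 2R > T + S.
assert T > R > P > S
assert 2 * R > T + S

# Defection strictly dominates cooperation: it pays strictly more against
# either choice by the opponent (T > R and P > S).
assert T > R and P > S
print("defect dominates; mutual defection pays", P, "vs", R, "for mutual cooperation")
```

The final comparison shows the dilemma itself: the dominant strategies lead both players to the payoff P, even though R would leave both better off.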
4.
Stag hunt
–
In game theory, the stag hunt is a game that describes a conflict between safety and social cooperation. Other names for it or its variants include "assurance game" and "coordination game". Jean-Jacques Rousseau described a situation in which two individuals go out on a hunt. Each can individually choose to hunt a stag or hunt a hare, and each player must choose an action without knowing the choice of the other. If an individual hunts a stag, they must have the cooperation of their partner in order to succeed. An individual can get a hare by themself, but a hare is worth less than a stag. This has been taken to be an analogy for social cooperation. The stag hunt differs from the prisoner's dilemma in that there are two pure-strategy Nash equilibria: one where both players cooperate and one where both players defect. In the prisoner's dilemma, in contrast, despite the fact that both players cooperating is Pareto efficient, the only pure-strategy Nash equilibrium is when both players choose to defect. An example of the matrix for the stag hunt is pictured in Figure 2. Formally, a stag hunt is a game with two pure-strategy Nash equilibria: one that is risk dominant and another that is payoff dominant. The payoff matrix in Figure 1 illustrates a generic stag hunt. Often, games with a similar structure but without a risk-dominant Nash equilibrium are called assurance games. For instance, if a=2, b=1, c=0, and d=1, then while (Hare, Hare) remains a Nash equilibrium, it is no longer risk dominant. Nonetheless many would call this game a stag hunt. In addition to the pure-strategy Nash equilibria there is one mixed-strategy Nash equilibrium. This equilibrium depends on the payoffs, but the risk dominance condition places a bound on the mixed-strategy Nash equilibrium: no payoffs can generate a mixed-strategy equilibrium where Stag is played with a probability higher than one half. The best response correspondences are pictured here. There is a substantial relationship between the stag hunt and the prisoner's dilemma.
In biology, many circumstances that have been described as a prisoner's dilemma might also be interpreted as a stag hunt. It is also the case that some human interactions that seem like prisoner's dilemmas may in fact be stag hunts. For example, suppose we have a prisoner's dilemma as pictured in Figure 3. The payoff matrix would need adjusting if players who defect against cooperators might be punished for their defection. For instance, if the expected punishment is −2, then the imposition of this punishment turns the above prisoner's dilemma into the stag hunt given at the introduction.
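The mixed-strategy equilibrium mentioned above can be computed directly from the generic payoffs. The concrete numbers below are an illustrative assumption chosen to satisfy the ordering a > b ≥ d > c:

```python
# Symmetric mixed-strategy equilibrium of a stag hunt. Payoffs follow the
# generic matrix: a = (Stag, Stag), c = Stag against Hare, b = Hare against
# Stag, d = (Hare, Hare). The concrete values are an illustrative assumption.
a, b, c, d = 4, 3, 0, 2
assert a > b >= d > c  # the generic stag hunt ordering

# In the symmetric mixed equilibrium, each player hunts stag with the
# probability p that makes the opponent indifferent between Stag and Hare:
#   p*a + (1-p)*c  ==  p*b + (1-p)*d
p = (d - c) / ((a - b) + (d - c))

stag_value = p * a + (1 - p) * c
hare_value = p * b + (1 - p) * d
assert abs(stag_value - hare_value) < 1e-12  # indifference confirmed
print(f"mixed equilibrium: hunt stag with probability {p:.3f}")
```

Both (Stag, Stag) and (Hare, Hare) remain pure equilibria of this matrix; the mixed equilibrium sits at the indifference threshold between them.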
5.
Normal-form game
–
In game theory, normal form is a description of a game. Unlike extensive form, normal-form representations are not graphical per se; while this approach can be of greater use in identifying strictly dominated strategies and Nash equilibria, some information is lost as compared to extensive-form representations. The normal-form representation of a game includes all perceptible and conceivable strategies, and their corresponding payoffs, for each player. In static games of complete, perfect information, a normal-form representation of a game is a specification of players' strategy spaces and payoff functions. The matrix to the right is a representation of a game in which players move simultaneously. In each cell, the first number represents the payoff to the row player, and the second number represents the payoff to the column player; for example, if player 1 plays top and player 2 plays left, player 1 receives 4. Often, symmetric games are represented with only one payoff: this is the payoff for the row player. For example, the payoff matrices on the right and left below represent the same game. The payoff matrix facilitates elimination of dominated strategies, and it is often used to illustrate this concept. For example, in the prisoner's dilemma, we can see that each prisoner can either cooperate or defect. If exactly one prisoner defects, he gets off easily while the other prisoner is locked up for a long time. However, if they both defect, they will both be locked up for a shorter time. To determine that Cooperate is strictly dominated by Defect, one must compare the first numbers in each column, in this case 0 > −1 and −2 > −5. This shows that no matter what the column player chooses, the row player does better by choosing Defect. Similarly, one compares the second payoff in each row; again 0 > −1 and −2 > −5. This shows that no matter what row does, column does better by choosing Defect. This demonstrates that the unique Nash equilibrium of this game is (Defect, Defect). These matrices only represent games in which moves are simultaneous.
The above matrix does not represent the game in which player 1 moves first, observed by player 2. In order to represent this sequential game we must specify all of player 2's actions, even in contingencies that can never arise in the course of the game. In this game, player 2 has two actions, as before, Left and Right; unlike before, he has four strategies, contingent on player 1's actions. Accordingly, to specify a game, the payoff function has to be specified for each player in the player set.
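The column-by-column comparison described above is mechanical, so it is easy to sketch in code. The payoffs are the prisoner's dilemma numbers quoted in the text (0 > −1 and −2 > −5):

```python
# Strict-dominance check in a normal-form prisoner's dilemma, using the
# payoffs quoted in the text: 0 > -1 and -2 > -5.
PAYOFFS = {  # (row strategy, column strategy) -> (row payoff, column payoff)
    ("C", "C"): (-1, -1), ("C", "D"): (-5, 0),
    ("D", "C"): (0, -5),  ("D", "D"): (-2, -2),
}
ACTIONS = ("C", "D")

def strictly_dominates(player, s1, s2):
    """True if s1 strictly dominates s2 for the given player (0=row, 1=column)."""
    for other in ACTIONS:
        prof1 = (s1, other) if player == 0 else (other, s1)
        prof2 = (s2, other) if player == 0 else (other, s2)
        if PAYOFFS[prof1][player] <= PAYOFFS[prof2][player]:
            return False  # s1 is not strictly better against this opponent choice
    return True

assert strictly_dominates(0, "D", "C")  # row: Defect dominates Cooperate (0 > -1, -2 > -5)
assert strictly_dominates(1, "D", "C")  # column: same comparison by symmetry
print("after eliminating dominated strategies, only ('D', 'D') survives")
```

Eliminating the dominated strategy for each player leaves a single cell, which is the unique Nash equilibrium of the matrix.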
6.
Nash equilibrium
–
The Nash equilibrium is one of the foundational concepts in game theory. The reality of the Nash equilibrium of a game can be tested using experimental economics methods. Game theorists use the Nash equilibrium concept to analyze the outcome of the strategic interaction of several decision makers. The simple insight underlying John Nash's idea is that one cannot predict the result of the choices of multiple decision makers if one analyzes those decisions in isolation; instead, one must ask what each player would do, taking into account the decision-making of the others. Nash equilibrium has been used to analyze hostile situations like war and arms races, and it has also been used to study to what extent people with different preferences can cooperate, and whether they will take risks to achieve a cooperative outcome. It has been used to study the adoption of technical standards. The Nash equilibrium was named after John Forbes Nash, Jr. A version of the Nash equilibrium concept was first known to be used in 1838 by Antoine Augustin Cournot in his theory of oligopoly. In Cournot's theory, firms choose how much output to produce to maximize their own profit; however, the best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when each firm's output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium. Cournot also introduced the concept of best response dynamics in his analysis of the stability of equilibrium. However, Nash's definition of equilibrium is broader than Cournot's. It is also broader than the definition of a Pareto-efficient equilibrium. The modern game-theoretic concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible actions.
The concept of the mixed-strategy Nash equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior; however, their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition of equilibrium. According to Nash, an equilibrium point is an n-tuple such that each player's mixed strategy maximizes his payoff if the strategies of the others are held fixed. Thus each player's strategy is optimal against those of the others. Since the development of the Nash equilibrium concept, game theorists have discovered that it makes misleading predictions in certain circumstances. They have proposed many related solution concepts designed to overcome perceived flaws in the Nash concept. One particularly important issue is that some Nash equilibria may be based on threats that are not credible. In 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing his or her strategy. To see what this means, imagine that each player is told the strategies of the others.
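The informal "no profitable unilateral deviation" test above is easy to state as code. Below is a minimal sketch for two-player games given as payoff tables; matching pennies is used as the example game because it illustrates why mixed strategies are needed (the concrete payoff table is the standard one, used here as an illustration):

```python
# A strategy profile is a Nash equilibrium if no player can do better by
# unilaterally changing his or her strategy. A minimal checker for
# two-player games given as payoff dictionaries.
def is_nash(payoffs, actions, profile):
    row, col = profile
    u_row, u_col = payoffs[profile]
    # Row player deviates while the column strategy is held fixed:
    if any(payoffs[(a, col)][0] > u_row for a in actions):
        return False
    # Column player deviates while the row strategy is held fixed:
    if any(payoffs[(row, a)][1] > u_col for a in actions):
        return False
    return True

# Matching pennies: a zero-sum game with no pure-strategy equilibrium,
# which is exactly why mixed strategies are needed.
pennies = {
    ("H", "H"): (1, -1), ("H", "T"): (-1, 1),
    ("T", "H"): (-1, 1), ("T", "T"): (1, -1),
}
acts = ("H", "T")
assert not any(is_nash(pennies, acts, (r, c)) for r in acts for c in acts)
print("matching pennies: no pure-strategy Nash equilibrium")
```

In every cell of matching pennies, one player gains by switching, so only the mixed profile (each side randomizing 50/50) is an equilibrium, consistent with von Neumann's zero-sum result.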
7.
Partha Dasgupta
–
He was born in Dhaka, in present-day Bangladesh, then moved to present-day India, and is the son of the noted economist Amiya Kumar Dasgupta. He is married to Carol Dasgupta, who is a psychotherapist; his father-in-law was the Nobel Laureate James Meade. Partha and Carol Dasgupta have three children: Zubeida Dasgupta-Clark, Shamik, and Aisha. He obtained a PhD in Economics at Cambridge in 1968 with a thesis titled Population, growth and non-transferable capital; his PhD supervisor was Sir James Mirrlees. At Cambridge he was a member of the Cambridge Apostles, a distinguished intellectual society. During 1989–92 he was on leave from the University of Cambridge and served as Professor of Economics and Professor of Philosophy at Stanford University. In October 1991 he returned to Cambridge, on leave from Stanford University, to re-assume his Chair at Cambridge. He resigned from Stanford in 1992 and has remained in Cambridge since then. During 1991–97 Dasgupta was Chairman of the Board of the Beijer International Institute of Ecological Economics of the Royal Swedish Academy of Sciences, Stockholm. During 1999–2009 he served as a Founder Member of the Management and Advisory Committee of the South Asian Network for Development and Environmental Economics. During 2008–2013 he was a Professorial Research Fellow at the University of Manchester's Sustainable Consumption Institute. He is a patron of the population concern charity Population Matters. During 2011–2014 he was Chairman of the Scientific Advisory Board of the International Human Dimensions Programme on Global Environmental Change, Bonn. Since 2011 he has been Chairman of the Advisory Board of the Wittgenstein Centre, and he served as Chairman of the Central Government Expert Group on Green National Accounting for India, which submitted its Report in 2013.
He is a cofounder of the Centre for the Study of Existential Risk at the University of Cambridge, and he was awarded the 2015 Blue Planet Prize for Environmental Research and the 2016 Tyler Prize. His publications include: Guidelines for Project Evaluation (United Nations, 1972); Economic Theory and Exhaustible Resources (Cambridge University Press, 1979); "Utilitarianism, information and rights", in Sen, Amartya and Williams, Bernard (eds.); The Control of Resources (Harvard University Press, 1982); An Inquiry into Well-Being and Destitution; Human Well-Being and the Natural Environment (Oxford: Oxford University Press, 2001; rev. ed. 2004); and Selected Papers of Partha Dasgupta (Vol. 1: Institutions, Innovations, and Human Values; Vol. 2: Poverty, Population, and Natural Resources).
8.
Eric Maskin
–
Eric Stark Maskin is an American economist and 2007 Nobel laureate, recognized with Leonid Hurwicz and Roger Myerson for having laid the foundations of mechanism design theory. He is the Adams University Professor at Harvard University. Until 2011, he was the Albert O. Hirschman Professor of Social Science at the Institute for Advanced Study, and a visiting lecturer with the rank of professor at Princeton University. Maskin was born in New York City on December 12, 1950, into a Jewish family. He graduated from Tenafly High School in Tenafly, New Jersey, in 1968, and attended Harvard University, where he earned an A.B. He continued on to earn a Ph.D. in applied mathematics at the same institution in 1976. After earning his doctorate, Maskin became a research fellow at Jesus College, Cambridge University. In the following year, he joined the faculty at the Massachusetts Institute of Technology. In 1985 he returned to Harvard as the Louis Berkman Professor of Economics, where he remained until 2000. That year, he moved to the Institute for Advanced Study in Princeton. In 2011, Maskin returned to Harvard. Maskin has worked in areas of economic theory such as game theory and the economics of incentives. He is particularly known for his papers on mechanism design/implementation theory. His current research projects include comparing different electoral rules and examining the causes of inequality. He is a Fellow of the American Academy of Arts and Sciences, the Econometric Society, and the European Economic Association, and a Corresponding Fellow of the British Academy. He was president of the Econometric Society in 2003. Maskin has suggested that software patents inhibit innovation rather than stimulate progress. Software, semiconductor, and computer industries have been innovative despite historically weak patent protection; innovation in those industries has been sequential and complementary, so competition can increase firms' future profits.
In such an industry, patent protection may reduce overall innovation. A natural experiment occurred in the 1980s when patent protection was extended to software; standard arguments would predict that R&D intensity and productivity should have increased among patenting firms. Consistent with this model, however, these increases did not occur. Other evidence supporting the model includes a distinctive pattern of cross-licensing and a positive relationship between rates of innovation and firm entry.
9.
Permutation
–
Permutations differ from combinations, which are selections of some members of a set where order is disregarded. For example, written as tuples, there are six permutations of a three-element set: these are all the possible orderings of the set. As another example, an anagram of a word, all of whose letters are different, is a permutation of its letters: the letters are already ordered in the original word, and the anagram is a reordering of the letters. The study of permutations of finite sets is an important topic in the field of combinatorics. Permutations occur, in more or less prominent ways, in almost every area of mathematics. For similar reasons, permutations arise in the study of sorting algorithms in computer science. The number of permutations of n distinct objects is n factorial, usually written as n!, which means the product of all positive integers less than or equal to n. In algebra, and particularly in group theory, a permutation of a set S is defined as a bijection from S to itself; that is, it is a function from S to S for which every element occurs exactly once as an image value. This is related to the rearrangement of the elements of S in which each element s is replaced by the corresponding f(s). The collection of such permutations forms a group called the symmetric group of S. The key to this group's structure is the fact that the composition of two permutations results in another rearrangement. Permutations may act on structured objects by rearranging their components, or by certain replacements of symbols. In elementary combinatorics, the k-permutations, or partial permutations, are the ordered arrangements of k distinct elements selected from a set. When k is equal to the size of the set, these are the permutations of the set. Fabian Stedman in 1677 described factorials when explaining the number of permutations of bells in change ringing.
Starting from two bells: "first, two must be admitted to be varied in two ways", which he illustrates by showing 12 and 21. He then explains that with three bells there are "three times two figures to be produced out of three", which again is illustrated. His explanation involves: "cast away 3, and 1.2 will remain; cast away 2, and 1.3 will remain; cast away 1, and 2.3 will remain". He then moves on to four bells and repeats the casting away argument, showing that there will be four different sets of three; effectively, this is a recursive process. He continues with five bells using the casting away method and tabulates the resulting 120 combinations. At this point he gives up and remarks, "Now the nature of these methods is such…". In modern mathematics there are many similar situations in which understanding a problem requires studying certain permutations related to it. There are two equivalent common ways of regarding permutations, sometimes called the "active" and "passive" forms, or in older terminology "substitutions" and "permutations"; which form is preferable depends on the type of questions being asked in a given discipline. The active way to regard permutations of a set S is to view them as the bijections from S to itself. Thus, the permutations are thought of as functions which can be composed with each other, forming groups of permutations.
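Stedman's casting-away argument is effectively the recursive identity n! = n × (n−1)!: fix which element is "cast away" to the front, then permute the rest. A direct recursive generator along those lines:

```python
# Stedman's "casting away" argument as a recursive permutation generator:
# for each choice of leading element, recursively permute the remaining ones.
def permutations(items):
    if len(items) <= 1:
        return [list(items)]
    result = []
    for i, cast_away in enumerate(items):
        rest = items[:i] + items[i + 1:]       # "cast away" one element
        for tail in permutations(rest):        # permute the remaining n-1
            result.append([cast_away] + tail)
    return result

perms3 = permutations([1, 2, 3])
print(perms3)  # the six orderings of a three-element set
assert len(permutations([1, 2, 3, 4, 5])) == 120  # Stedman's five-bell count
```

The recursion produces n! orderings because each of the n leading choices is followed by (n−1)! tails, exactly the structure of Stedman's tabulation. (Python's standard library provides the same enumeration as itertools.permutations.)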
10.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration of recognition was generated in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number, identifies periodical publications such as magazines. The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher and 01381 their serial number. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
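The SBN-to-ISBN conversion described above works because the ISBN-10 checksum weights the new leading 0 by 10, so it contributes nothing and the check digit is unchanged. A minimal sketch of both check-digit computations (the ISBN-10 rule is a weighted sum modulo 11; the ISBN-13 rule is the EAN-13 alternating 1/3 weighting modulo 10):

```python
# ISBN check digits, applied to the example from the text: the SBN
# 340 01381 8 converts to ISBN 0-340-01381-8 without recalculating the
# check digit, because the prefixed 0 is weighted by 10 and contributes 0.
def isbn10_check_digit(first9):
    """first9: string of the first 9 digits; returns the ISBN-10 check character."""
    total = sum((10 - i) * int(d) for i, d in enumerate(first9))
    r = (11 - total % 11) % 11
    return "X" if r == 10 else str(r)  # remainder 10 is written as Roman X

def isbn13_check_digit(first12):
    """first12: string of the first 12 digits; returns the EAN-13 check digit."""
    total = sum((3 if i % 2 else 1) * int(d) for i, d in enumerate(first12))
    return str((10 - total % 10) % 10)

assert isbn10_check_digit("034001381") == "8"  # matches ISBN 0-340-01381-8
print("ISBN-13 check digit for 978-0-340-01381:", isbn13_check_digit("978034001381"))
```

Note that the 13-digit check digit differs from the 10-digit one in general, which is why converting a 10-digit ISBN to the 978-prefixed EAN-13 form does require recalculating the final digit.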
11.
Extensive-form game
–
Extensive-form games also allow representation of incomplete information in the form of chance events encoded as moves by nature. The rest of this article follows this approach, with motivating examples. The general definition was introduced by Harold W. Kuhn in 1953. Each player's subset of decision nodes is referred to as the nodes of that player. Each node of the Chance player has a probability distribution over its outgoing edges; at any given non-terminal node belonging to Chance, an outgoing branch is chosen according to the probability distribution. A pure strategy for a player thus consists of a selection: choosing precisely one class of outgoing edges for every information set. In a game of perfect information, the information sets are singletons. It is less evident how payoffs should be interpreted in games with Chance nodes; these interpretations can be made precise using epistemic modal logic (see Shoham & Leyton-Brown for details). A perfect-information two-player game over a tree can be represented as an extensive-form game with outcomes. Examples of such games include tic-tac-toe, chess, and infinite chess. A game over an expectminimax tree, like that of backgammon, has no imperfect information but has moves of chance. Poker, for example, has both moves of chance and imperfect information. The numbers by every non-terminal node indicate to which player that decision node belongs. The numbers by every terminal node represent the payoffs to the players. The labels by every edge of the graph are the name of the action that edge represents. The initial node belongs to player 1, indicating that player 1 moves first. Play according to the tree is as follows: player 1 chooses between U and D; player 2 observes player 1's choice and then chooses between U and D. The payoffs are as specified in the tree. There are four outcomes, represented by the four terminal nodes of the tree.
The payoffs associated with each outcome are as specified in the tree. If player 1 plays D, player 2 will play U to maximise his payoff, and so player 1 will only receive 1. However, if player 1 plays U, player 2 maximises his payoff by playing D. Player 1 prefers 2 to 1 and so will play U, and player 2 will play D. This is the subgame perfect equilibrium. An advantage of representing the game in this way is that it is clear what the order of play is: the tree shows clearly that player 1 moves first and player 2 observes this move. However, in some games play does not occur like this: one player does not always observe the choice of another. An information set is a set of decision nodes among which the player to move cannot distinguish. In extensive form, an information set is indicated by a dotted line connecting all nodes in that set, or sometimes by a loop drawn around all the nodes in that set.
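The backward-induction reasoning above can be sketched in code. The exact payoff numbers in the tree below are an assumption, chosen only to reproduce the comparisons in the text (after D, player 2 prefers the reply that leaves player 1 with 1; after U, player 2's preferred reply leaves player 1 with 2):

```python
# Backward induction on a two-move perfect-information game. The payoff
# numbers are an illustrative assumption matching the text's comparisons.
TREE = {  # player 1's move -> {player 2's reply: (payoff to 1, payoff to 2)}
    "U": {"U": (0, 0), "D": (2, 1)},
    "D": {"U": (1, 2), "D": (0, 0)},
}

def backward_induction(tree):
    # Player 2 best-responds at each of her decision nodes...
    replies = {m: max(tree[m], key=lambda r: tree[m][r][1]) for m in tree}
    # ...and player 1, anticipating those replies, picks his best first move.
    first = max(tree, key=lambda m: tree[m][replies[m]][0])
    return first, replies[first], tree[first][replies[first]]

move1, move2, payoffs = backward_induction(TREE)
print(move1, move2, payoffs)  # U D (2, 1): the subgame perfect outcome
```

Solving the last mover's problem first and then folding back is exactly the procedure the text walks through, and it yields the subgame perfect equilibrium.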
12.
Graphical game theory
–
In game theory, the common ways to describe a game are the normal form and the extensive form. The graphical form is a compact representation of a game that exploits the structure of interaction among participants. Consider a game with n players, each with m strategies. We represent the players as nodes in a graph G in which each player has a utility function that depends only on his own strategy and those of his neighbors. As the utility depends on fewer other players, the graphical representation is smaller. Each node i in G has a utility function u_i : {1, …, m}^(d_i + 1) → R, where d_i is the degree of node i; u_i specifies the utility of player i as a function of his own strategy as well as those of his neighbors. For a general n-player game in which each player has m possible strategies, the size of the graphical representation is O(n m^(d+1)), where d is the maximal node degree in the graph. If d ≪ n, then the graphical representation is much smaller than the normal form. In the case where each player's utility function depends on only one other player, the maximal degree of the graph is 1, so the size of the input will be n m². Finding a Nash equilibrium in a general game takes time exponential in the size of the representation. If the graphical representation of the game is a tree, we can find an equilibrium in polynomial time; in the general case, where the maximal degree of a node is 3 or more, the problem is computationally intractable.
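The size comparison above is just arithmetic, so it can be made concrete. The particular values of n, m, and d below are an illustrative assumption:

```python
# Representation-size comparison between a normal-form game and its
# graphical form, following the counts in the text: n players, m strategies
# each, maximal degree d in the interaction graph.
def normal_form_size(n, m):
    return n * m ** n          # one payoff table with m**n entries per player

def graphical_size(n, m, d):
    return n * m ** (d + 1)    # each utility depends on at most d neighbors

n, m, d = 100, 2, 3            # illustrative assumption
print(normal_form_size(n, m))  # 100 * 2**100: astronomically large
print(graphical_size(n, m, d)) # 100 * 2**4 = 1600

# The special case in the text: each utility depends on one other player
# (degree 1), so the whole game needs n * m**2 numbers.
assert graphical_size(n, m, 1) == n * m ** 2
```

The gap between m^n and m^(d+1) is the whole point of the graphical form: when interactions are local, the input shrinks from exponential in n to exponential only in the degree.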
13.
Information set (game theory)
–
In game theory, an information set is a set that, for a particular player, establishes all the possible moves that could have taken place in the game so far, given what that player has observed. If the game has perfect information, every information set contains only one member. Otherwise, some players cannot know exactly what has taken place so far in the game. More specifically, in the extensive form, an information set is a set of decision nodes belonging to one player such that, when play reaches the set, the player with the move cannot tell which of its nodes has been reached. The notion of information set was introduced by John von Neumann. At the right are two versions of the battle of the sexes game, shown in extensive form. The first game is simply sequential: when player 2 has the chance to move, he knows which choice player 1 made. The second game is also sequential, but the dotted line shows player 2's information set: it indicates that when player 2 moves, he does not know what player 1 did. This difference also leads to different predictions for the two games. In the first game, player 1 has the upper hand: he knows that he can choose O safely, because once player 2 knows that player 1 has chosen opera, player 2 would rather go along with O and get 2 than choose F. Formally, that is applying subgame perfection to solve the game. In the second game, player 2 cannot observe what player 1 did, so this argument is not available
14.
Preference (economics)
–
In economics and other social sciences, preference is the ordering of alternatives based on their relative utility, a process which results in an optimal choice. The character of the preferences is determined purely by taste factors, independent of considerations of prices or income. With the help of the scientific method, many practical decisions of life can be modelled; in 1926 Ragnar Frisch developed for the first time a mathematical model of preferences in the context of economic demand and utility functions. Up to then, economists had developed a theory of demand that omitted primitive characteristics of people. This omission ceased when, at the end of the 19th century and the beginning of the 20th, economists sought to relate theoretical concepts to observables; because binary choices are directly observable, this approach instantly appealed to economists. The search for observables in microeconomics is taken further by revealed preference theory. Since the pioneering efforts of Frisch in the 1920s, one of the issues which has pervaded the theory of preferences is the representability of a preference structure with a real-valued function. This has been achieved by mapping it to the mathematical index called utility. Von Neumann and Morgenstern's 1944 book Theory of Games and Economic Behavior treated preferences as a formal relation whose properties can be stated axiomatically. The economics of choice can be examined either at the level of utility functions or at the level of preferences. Suppose the set of all states of the world is X and an agent has a preference relation on X. It is common to mark the weak preference relation by ⪯. The symbol ∼ is used as a shorthand for the indifference relation: x ∼ y ⟺ (x ⪯ y ∧ y ⪯ x), which reads "the agent is indifferent between y and x". The symbol ≺ is used as a shorthand for the strict preference relation: x ≺ y ⟺ (x ⪯ y ∧ ¬(y ⪯ x)).
In everyday speech, the statement "x is preferred to y" is generally understood to mean that someone chooses x over y; however, decision theory rests on more precise definitions of preferences, given that there are many experimental conditions influencing people's choices in many directions. Suppose a person is confronted with an experiment that she must solve with the aid of introspection. She is offered apples and oranges and is asked to choose one of the two. A decision scientist observing this event would be inclined to say that whichever is chosen is the preferred alternative. Under several repetitions of the experiment, if the scientist observes that apples are chosen 51% of the time, it would mean that x ≻ y. If oranges are chosen half of the time, then x ∼ y. Finally, if she chooses oranges 51% of the time, it means that y ≻ x. Preference is here being identified with a greater frequency of choice
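The derived relations defined above can be sketched in Python. The weak relation ⪯ is the primitive; indifference ∼ and strict preference ≺ are derived from it. The ranking dictionary that induces ⪯ here is an illustrative assumption.

```python
# Derive indifference and strict preference from a weak preference relation.
rank = {"apple": 2, "orange": 2, "banana": 1}  # higher rank = more preferred (assumed)

def weakly_preferred(a, b):
    """a ⪯ b: b is at least as good as a."""
    return rank[a] <= rank[b]

def indifferent(a, b):
    """a ∼ b ⟺ a ⪯ b and b ⪯ a."""
    return weakly_preferred(a, b) and weakly_preferred(b, a)

def strictly_preferred(a, b):
    """a ≺ b ⟺ a ⪯ b and not (b ⪯ a)."""
    return weakly_preferred(a, b) and not weakly_preferred(b, a)

print(indifferent("apple", "orange"))         # True
print(strictly_preferred("banana", "apple"))  # True
```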
15.
Simultaneous game
–
In game theory, a simultaneous game is a game where each player chooses his action without knowledge of the actions chosen by other players. Normal-form representations are usually used for simultaneous games. Rock-Paper-Scissors, a widely played game, is a real-life example of a simultaneous game. Both players make a decision at the same time, randomly, without prior knowledge of the opponent's decision. There are two players in the game, and each of them has 3 different strategies to choose from. We display Player 1's strategies as rows and Player 2's strategies as columns; in each cell of the table, the first number represents the payoff to Player 1 and the second the payoff to Player 2. Hence, the payoff table for the 2-player game of Rock-Paper-Scissors looks like this:

              Rock       Paper      Scissors
  Rock       (0, 0)     (−1, 1)    (1, −1)
  Paper      (1, −1)    (0, 0)     (−1, 1)
  Scissors   (−1, 1)    (1, −1)    (0, 0)

In game theory terms, the Prisoner's Dilemma is another example of a simultaneous game
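The payoff table above can be generated programmatically; this is a minimal sketch in Python, with a check that the game is zero-sum.

```python
# Build the Rock-Paper-Scissors payoff bimatrix: each cell holds
# (payoff to Player 1, payoff to Player 2).
moves = ["Rock", "Paper", "Scissors"]
beats = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}

def payoff(a, b):
    if a == b:
        return (0, 0)                       # tie
    return (1, -1) if beats[a] == b else (-1, 1)

table = {(a, b): payoff(a, b) for a in moves for b in moves}

# Zero-sum check: the two payoffs in every cell sum to zero.
assert all(u1 + u2 == 0 for u1, u2 in table.values())
print(table[("Rock", "Scissors")])  # (1, -1)
```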
16.
Economic equilibrium
–
In economics, economic equilibrium is a state where economic forces such as supply and demand are balanced and, in the absence of external influences, the values of economic variables will not change. For example, in the textbook model of perfect competition, equilibrium occurs at the point at which quantity demanded equals quantity supplied. However, the concept of equilibrium in economics also applies to imperfectly competitive markets. Three basic properties of equilibrium in general have been proposed by Huw Dixon: Equilibrium property P1, the behavior of agents is consistent; Equilibrium property P2, no agent has an incentive to change its behavior; Equilibrium property P3, equilibrium is the outcome of some dynamic process (stability). In a competitive equilibrium, supply equals demand. Property P1 is satisfied, because at the equilibrium price the amount supplied is equal to the amount demanded. Demand is chosen to maximize utility given the price, so no one on the demand side has any incentive to demand more or less at the prevailing price. Likewise, supply is determined by firms maximizing their profits at the market price; hence, agents on neither the demand side nor the supply side have any incentive to alter their actions, and Property P2 is satisfied. To see whether Property P3 is satisfied, consider what happens when the price is above the equilibrium: in this case there is an excess supply, with the quantity supplied exceeding that demanded. This will tend to put downward pressure on the price to make it return to equilibrium. Likewise, where the price is below the equilibrium point there is a shortage of supply, leading to an increase in prices back to equilibrium. Not all equilibria are stable in the sense of Equilibrium property P3, and it is possible to have competitive equilibria that are unstable. However, if an equilibrium is unstable, it raises the question of how one might reach it: even if it satisfies properties P1 and P2, the absence of P3 means that the market can only be in the unstable equilibrium if it starts off there.
In most simple microeconomic stories of supply and demand, a static equilibrium is observed in a market; however, equilibrium may also be economy-wide or general, as opposed to the partial equilibrium of a single market. Equilibrium can change if there is a change in demand or supply conditions; for example, an increase in supply will disrupt the equilibrium, leading to lower prices. Eventually, a new equilibrium will be attained in most markets; then, there will be no change in price or in the amount of output bought and sold until there is an exogenous shift in supply or demand. That is, there are no endogenous forces leading the price or the quantity to change. The Nash equilibrium is widely used in economics as the main alternative to competitive equilibrium. It is used when there is a strategic element to the behavior of agents
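The price-adjustment story behind property P3 can be sketched with assumed linear curves: demand D(p) = 10 − p and supply S(p) = 2p − 2, so the market-clearing price is p* = 4. Excess demand pushes the price up, excess supply pushes it down.

```python
# Simple tatonnement-style price adjustment toward a competitive equilibrium,
# under assumed linear demand and supply curves.
def demand(p):
    return 10 - p

def supply(p):
    return 2 * p - 2

p = 7.0  # start above equilibrium, so there is excess supply
for _ in range(200):
    p += 0.1 * (demand(p) - supply(p))  # excess demand raises p, excess supply lowers it

print(round(p, 6))  # converges to the equilibrium price 4.0
```

Starting below the equilibrium instead gives the same limit, which is the stability property P3 for this example.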
17.
Solution concept
–
In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called solutions and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium. Many solution concepts, for many games, will result in more than one solution. This puts any one of the solutions in doubt, so a game theorist may apply a refinement to narrow down the solutions; each successive solution concept presented in the following improves on its predecessor by eliminating implausible equilibria in richer games. Let Γ be the class of all games and, for each game G ∈ Γ, let S_G be the set of strategy profiles of G. A solution concept is an element of the direct product ∏_{G ∈ Γ} 2^(S_G), i.e. a function F : Γ → ⋃_{G ∈ Γ} 2^(S_G) such that F(G) ⊆ S_G for all G ∈ Γ. In the simplest solution concept, players are assumed to be rational and strictly dominated strategies are eliminated: a strategy is strictly dominated when there is some other strategy available to the player that always has a higher payoff, regardless of the strategies that the other players choose. For example, in the Prisoner's Dilemma, cooperate is strictly dominated by defect for both players, because either player is always better off playing defect, regardless of what his opponent does. A Nash equilibrium is a strategy profile in which every strategy is a best response to the strategies played by the other players. There are games that have multiple Nash equilibria, some of which are unrealistic. In the case of dynamic games, unrealistic Nash equilibria might be eliminated by applying backward induction, which assumes that future play will be rational. It therefore eliminates noncredible threats, because such threats would be irrational to carry out if a player were ever called upon to do so. For example, consider a game in which the players are an incumbent firm in an industry and a potential entrant.
As it stands, the incumbent has a monopoly over the industry. If the entrant chooses not to enter, the payoff to the incumbent is high and the entrant neither loses nor gains. If the entrant enters, the incumbent can fight or accommodate the entrant: it can fight by lowering its price, running the entrant out of business and damaging its own profits, or, if it accommodates the entrant, it will lose some of its sales. If the entrant enters, the best response of the incumbent is to accommodate; if the incumbent accommodates, the best response of the entrant is to enter. Hence the strategy profile in which the entrant enters and the incumbent accommodates if the entrant enters is a Nash equilibrium. However, if the incumbent is going to play fight, the best response of the entrant is to not enter, and if the entrant does not enter, it does not matter what the incumbent chooses to do; hence fight can be considered a best response of the incumbent if the entrant does not enter
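The strict-dominance test described in this entry can be sketched in Python; the Prisoner's Dilemma payoffs below (row player's payoff listed first) are the conventional illustrative numbers, an assumption for this example.

```python
# Check strict dominance in the Prisoner's Dilemma: "D" (defect) should
# strictly dominate "C" (cooperate) for the row player.
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
strategies = ["C", "D"]

def strictly_dominates(s, t):
    """Row strategy s strictly dominates t if it pays strictly more
    against every strategy the opponent might choose."""
    return all(payoffs[(s, o)][0] > payoffs[(t, o)][0] for o in strategies)

print(strictly_dominates("D", "C"))  # True: defect dominates cooperate
print(strictly_dominates("C", "D"))  # False
```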
18.
Subgame perfect equilibrium
–
In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Every finite extensive game has a subgame perfect equilibrium. A common method for determining subgame perfect equilibria in the case of a finite game is backward induction. Here one first considers the last actions of the game and determines which actions the final mover should take in each possible circumstance to maximize his/her utility. One then supposes that the last actor will take these actions, and this process continues until one reaches the first move of the game. The strategies which remain are the set of all subgame perfect equilibria for finite-horizon extensive games of perfect information. However, backward induction cannot be applied to games of imperfect or incomplete information, because this entails cutting through non-singleton information sets. A subgame perfect equilibrium necessarily satisfies the one-shot deviation principle. The set of subgame perfect equilibria for a given game is always a subset of the set of Nash equilibria for that game; in some cases the sets can be identical. The Ultimatum game provides an intuitive example of a game with fewer subgame perfect equilibria than Nash equilibria. An example of a game possessing an ordinary Nash equilibrium and a subgame perfect equilibrium is shown in Figure 1. The strategies for player 1 are shown in the figure, whereas player 2 has a choice between two strategies, as his choice to be kind or unkind to player 1 might depend on the choice previously made by player 1. The payoff matrix of the game is shown in Table 1. Observe that there are two different Nash equilibria, given by the strategy profiles involving L and R respectively. Consider the equilibrium given by the strategy profile involving L. More formally, this equilibrium is not an equilibrium with respect to the subgame induced by node 22.
It is likely that in real life player 2 would choose the other strategy instead, which would in turn inspire player 1 to change his strategy to R. The resulting profile involving R is not only a Nash equilibrium but is also an equilibrium in all subgames; it is therefore a subgame perfect equilibrium. Reinhard Selten proved that any game which can be broken into sub-games containing a sub-set of all the choices in the main game will have a subgame perfect Nash equilibrium strategy. Subgame perfection is used with games of complete information; it can also be used with extensive form games of complete but imperfect information. One game in which the backward induction solution is well known is tic-tac-toe, but in theory even Go has such an optimum strategy for all players
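The backward-induction procedure described above can be sketched on a small game tree. The tree and payoff numbers below, an entry game with payoffs written as (first mover, second mover), are illustrative assumptions.

```python
# Backward induction on a two-stage game: an entrant decides whether to
# enter a market, then the incumbent decides whether to fight or accommodate.
payoffs = {
    ("Out",):                 (0, 4),   # no entry: incumbent keeps its monopoly
    ("Enter", "Fight"):       (-1, 1),  # price war hurts both players
    ("Enter", "Accommodate"): (1, 2),   # entrant in, incumbent keeps some sales
}

# Step 1: solve the last mover's problem at the node reached after entry.
incumbent_best = max(["Fight", "Accommodate"],
                     key=lambda a: payoffs[("Enter", a)][1])

# Step 2: the first mover anticipates that response and best-responds to it.
enter_value = payoffs[("Enter", incumbent_best)][0]
entrant_best = "Enter" if enter_value > payoffs[("Out",)][0] else "Out"

print(entrant_best, incumbent_best)  # the backward-induction outcome
```

Note how the procedure discards the noncredible threat to fight: "Fight" is never chosen once the entry node is actually reached.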
19.
Bayesian Nash equilibrium
–
In game theory, a Bayesian game is a game in which the players do not have complete information about the other players, but they have beliefs with known probability distributions. A Bayesian game can be converted into a game of complete but imperfect information. Harsanyi describes a Bayesian game in the following way: in addition to the players in the game, there is a special player called Nature, and Nature assigns to each player a random variable which takes values in a set of types for that player. Harsanyi's approach to modeling a Bayesian game in such a way allows games of incomplete information to become games of imperfect information. The type of a player determines that player's payoff function, and the probability associated with a type is the probability that the player for whom the type is specified is that type. In a Bayesian game, the incompleteness of information means that at least one player is unsure of the type of another player. Such games are called Bayesian because of the probabilistic analysis inherent in the game. The lack of information held by players and the modeling of beliefs mean that such games are used to analyse imperfect information scenarios. The normal form representation of a game with perfect information is a specification of the strategy spaces and payoff functions of the players. A strategy for a player is a complete plan of action that covers every contingency of the game; the strategy space of a player is thus the set of all strategies available to that player. A payoff function is a function from the set of strategy profiles to the set of payoffs. In a Bayesian game, one has to specify strategy spaces, type spaces, payoff functions and beliefs. A strategy for a player is a complete plan of action that covers every contingency that might arise for every type that player might be: it must specify not only the actions of the player given the type that he is, but the actions he would take under each of his other possible types. Strategy spaces are defined as above. A type space for a player is simply the set of all possible types of that player.
The beliefs of a player describe the uncertainty of that player about the types of the other players; each belief is the probability of the other players having particular types, given the type of the player with that belief. A payoff function is a 2-place function of strategy profiles and types: if a player has payoff function U and type t, the payoff he receives is U(x∗, t), where x∗ is the strategy profile played in the game. Ω is the set of states of nature; for instance, in a card game, it can be any order of the cards. A_i is the set of actions for player i; let A = A_1 × A_2 × ⋯ × A_N
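The role of beliefs can be sketched as an expected-utility calculation: a player who does not know an opponent's type averages payoffs over Nature's type distribution. The types, actions, probabilities and payoffs below are illustrative assumptions, not taken from the text.

```python
# Expected utility under type uncertainty in a (toy) Bayesian game.
type_prob = {"friend": 0.7, "enemy": 0.3}  # Nature's assumed distribution

# Payoff to the uninformed player for each (own action, opponent type).
u = {("trust", "friend"): 2, ("trust", "enemy"): -3,
     ("avoid", "friend"): 0, ("avoid", "enemy"): 0}

def expected_utility(action):
    # Average the payoff of `action` over the opponent's possible types.
    return sum(type_prob[t] * u[(action, t)] for t in type_prob)

print(expected_utility("trust"))  # 0.7*2 + 0.3*(-3) = 0.5, up to float rounding
print(expected_utility("avoid"))
```

With these numbers, trusting is optimal; lowering the prior on "friend" below 0.6 would flip that.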
20.
Perfect Bayesian equilibrium
–
In game theory, a Perfect Bayesian Equilibrium (PBE) is an equilibrium concept relevant for dynamic games with incomplete information. A PBE is a refinement of both Bayesian Nash equilibrium and subgame perfect equilibrium. A PBE has two components, strategies and beliefs: The strategy of a player in a given information set determines how this player acts in that information set. The action may depend on the history; this is similar to a sequential game. The belief of a player in a given information set determines what node in that information set the player believes he is playing at; the belief may be a probability distribution over the nodes in the information set. Formally, a belief system is an assignment of probabilities to every node in the game such that the sum of probabilities in any information set is 1. The strategies and beliefs should satisfy the following conditions: Sequential rationality, each strategy should be optimal in expectation given the beliefs; Consistency, each belief should be updated according to the strategies and Bayes' rule. Every PBE is both a SPE and a BNE, but the opposite is not necessarily true. A signaling game is the simplest kind of dynamic Bayesian game: there are two players, one of whom has only one possible type while the other has several possible types. The sender plays first, then the receiver. To calculate a PBE in a signaling game, we consider two kinds of equilibria: a separating equilibrium and a pooling equilibrium. Consider the following game. The sender has two possible types, either a friend or an enemy. Each type has two strategies, either give a gift or not give. The receiver has only one type and two strategies, either accept the gift or reject it. The sender's utility is 1 if his gift is accepted and -1 if his gift is rejected. The receiver's utility depends on who gives the gift: if the sender is a friend, then the receiver's utility is 1 (if she accepts) or 0 (if she rejects).
If the sender is an enemy, then the receiver's utility is -1 (if she accepts) or 0 (if she rejects). To analyze PBE in this game, let us look first at the following potential separating equilibria. The sender's strategy is: a friend gives and an enemy does not give. The receiver's beliefs are updated accordingly: if she receives a gift, she knows the sender is a friend. This is NOT an equilibrium, since the sender's strategy is not optimal: an enemy sender can increase his payoff from 0 to 1 by sending a gift. The sender's strategy is: a friend does not give and an enemy gives. The receiver's beliefs are updated accordingly: if she receives a gift, she knows the sender is an enemy; otherwise, she knows the sender is a friend. Again, this is NOT an equilibrium, since the sender's strategy is not optimal. We conclude that in this game, there is no separating equilibrium. Now, let us look at the following potential pooling equilibria. The sender's strategy is: always give. The receiver's beliefs are not updated: she believes in the a-priori probability that the sender is a friend with probability p
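The pooling calculation can be sketched numerically: if both sender types always give, the receiver learns nothing and keeps the prior p that the sender is a friend, so her expected payoff from accepting is p·1 + (1 − p)·(−1) = 2p − 1, versus 0 from rejecting. The utilities follow the text; p is a free parameter.

```python
# Receiver's best response in the pooling candidate "always give".
def receiver_accepts(p):
    """True if accepting is optimal when the prior on 'friend' is p."""
    expected_accept = p * 1 + (1 - p) * (-1)  # expected payoff of accepting
    return expected_accept >= 0               # rejecting is always worth 0

print(receiver_accepts(0.8))  # True: accepting is optimal when p >= 1/2
print(receiver_accepts(0.3))  # False
```

So whether "always give" can be part of a PBE depends on the prior: the receiver accepts only when p ≥ 1/2.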
21.
Pareto efficiency
–
The concept is named after Vilfredo Pareto, the Italian engineer and economist who used it in his studies of economic efficiency and income distribution. The concept has applications in fields such as economics and engineering. The Pareto frontier is the set of all Pareto efficient allocations; an allocation is defined as Pareto efficient or Pareto optimal when no further Pareto improvements can be made. The notion of Pareto efficiency can also be applied to the selection of alternatives in engineering: each option is first assessed under multiple criteria, and then a subset of options is identified with the property that no other option can categorically outperform any of its members. Pareto optimality is a formally defined concept used to determine when an allocation is optimal: a reallocation counts as an improvement if it makes at least one agent better off without making any other agent worse off. If there is a transfer that satisfies this condition, the reallocation is called a Pareto improvement; when no further Pareto improvements are possible, the allocation is a Pareto optimum. A formal definition for an economy is as follows: consider an economy with i agents and j goods. In this economy, feasibility refers to an allocation where the total amount of each good that is allocated sums to no more than the total amount of the good in the economy. It is important to note that a change from a generally inefficient economic allocation to an efficient one is not necessarily a Pareto improvement: even if there are overall gains in the economy, if a single agent is disadvantaged by the reallocation, the change is not a Pareto improvement. For instance, if a change in economic policy eliminates a monopoly and that market subsequently becomes competitive, the gains to others may be large. However, since the monopolist is disadvantaged, this is not a Pareto improvement. Thus, in practice, to ensure that nobody is disadvantaged by a change aimed at achieving Pareto efficiency, compensation of one or more parties may be required. However, in the real world, such compensations may have unintended consequences.
They can lead to incentive distortions over time as agents anticipate such compensations. Under certain idealized conditions, it can be shown that a system of free markets, also called a competitive equilibrium, will lead to a Pareto efficient outcome. This is called the first welfare theorem, and it was first demonstrated mathematically by economists Kenneth Arrow and Gérard Debreu. However, the result only holds under the assumptions necessary for the proof: in the absence of perfect information or complete markets, outcomes will generally be Pareto inefficient. In addition to the first welfare theorem linking the concepts of Pareto optimal allocations and free markets, there is a second welfare theorem that is essentially the reverse of the first. It states that under similar ideal assumptions, any Pareto optimum can be obtained by some competitive equilibrium, or free market system. A weak Pareto optimum is an allocation for which there are no possible alternative allocations whose realization would cause every individual to gain
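The definitions of Pareto improvement and the Pareto frontier can be sketched over a finite set of candidate allocations; each tuple below lists one assumed utility per agent.

```python
# Pareto dominance and the Pareto frontier over a finite allocation set.
def dominates(a, b):
    """True if allocation a is a Pareto improvement over b: every agent is
    at least as well off and some agent is strictly better off."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def pareto_frontier(allocations):
    """Keep the allocations that no alternative Pareto-dominates."""
    return [a for a in allocations
            if not any(dominates(b, a) for b in allocations)]

allocs = [(3, 3), (4, 1), (2, 2), (1, 4)]
print(pareto_frontier(allocs))  # (2, 2) is dominated by (3, 3) and drops out
```

Note that (4, 1) and (1, 4) both survive: neither dominates the other, which mirrors the point in the text that moving between Pareto optima disadvantages someone.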