1.
Extensive-form game
–
Extensive-form games also allow the representation of incomplete information in the form of chance events encoded as moves by nature; the rest of this article follows this approach with motivating examples. The general definition was introduced by Harold W. Kuhn in 1953. Each player's subset of nodes is referred to as the nodes of the player. Each node of the Chance player has a probability distribution over its outgoing edges; at any given non-terminal node belonging to Chance, an outgoing branch is chosen according to that probability distribution. A pure strategy for a player thus consists of a selection: choosing precisely one class of outgoing edges for every information set. In a game of perfect information, the information sets are singletons. It is less evident how payoffs should be interpreted in games with Chance nodes; these notions can be made precise using epistemic modal logic (see Shoham & Leyton-Brown for details). A two-player game of perfect information over a tree can be represented as an extensive-form game with outcomes; examples of such games include tic-tac-toe, chess, and infinite chess. A game over an expectiminimax tree, like that of backgammon, has no imperfect information but has moves of chance, while poker has both moves of chance and imperfect information. In a diagram of such a game, the numbers by every non-terminal node indicate to which player that decision node belongs, the numbers by every terminal node represent the payoffs to the players, and the labels by every edge of the graph are the names of the actions those edges represent. The initial node belongs to player 1, indicating that player 1 moves first. Play according to the tree is as follows: player 1 chooses between U and D; player 2 observes player 1's choice and then chooses between U and D. The payoffs are as specified in the tree, and there are four outcomes, represented by the four terminal nodes of the tree.
The payoffs associated with each outcome are as follows. If player 1 plays D, player 2 will play U to maximise his payoff, and so player 1 will receive only 1. However, if player 1 plays U, player 2 maximises his payoff by playing D. Player 1 prefers 2 to 1, and so will play U, and player 2 will play D. This is the subgame perfect equilibrium. An advantage of representing the game in this way is that the order of play is clear: the tree shows that player 1 moves first and that player 2 observes this move. However, in some games play does not occur like this: one player does not always observe the choice of another. An information set is a set of decision nodes belonging to one player such that, when play reaches the set, that player cannot tell which of its nodes has been reached. In extensive form, an information set is indicated by a dotted line connecting all nodes in that set, or sometimes by a loop drawn around all the nodes in that set.
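The backward-induction reasoning above can be sketched in code. The tree below is hypothetical: player 2's payoffs are invented only so that they match the narrative (player 2 prefers U after D and D after U), and the function is a generic backward-induction solver for two-player perfect-information trees, not a construction taken from the source.

```python
# Backward induction on a small perfect-information game tree.
# Leaves are (payoff_p1, payoff_p2); internal nodes map actions to subtrees.

def backward_induction(node, player):
    """Return (payoffs, actions) of optimal play from `node` for `player`."""
    if isinstance(node, tuple):          # terminal node: a payoff pair
        return node, []
    best = None
    for action, child in node.items():
        payoffs, line = backward_induction(child, 3 - player)
        if best is None or payoffs[player - 1] > best[0][player - 1]:
            best = (payoffs, [action] + line)
    return best

# Hypothetical payoffs consistent with the text: after D, player 2 plays U
# (leaving player 1 with 1); after U, player 2 plays D (giving player 1 a 2).
tree = {
    "U": {"U": (0, 0), "D": (2, 1)},
    "D": {"U": (1, 2), "D": (3, 0)},
}

payoffs, play = backward_induction(tree, player=1)
print(play, payoffs)   # player 1 chooses U, player 2 answers D
```

Solving from the leaves upward mirrors the prose: player 2's best replies are computed first, and player 1 then chooses among the resulting continuation payoffs.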
2.
Chicken (game)
–
The game of chicken, also known as the hawk-dove game or snowdrift game, is a model of conflict for two players in game theory. From a game-theoretic point of view, chicken and hawk-dove are identical. The game has also been used to describe the mutual assured destruction of nuclear warfare, especially the sort of brinkmanship involved in the Cuban Missile Crisis. The game of chicken models two drivers, both headed for a bridge from opposite directions. The first to swerve away yields the bridge to the other; if neither player swerves, the result is a costly deadlock in the middle of the bridge, or a potentially fatal head-on collision. It is presumed that the best thing for each driver is to stay straight while the other swerves, and that a crash is the worst outcome for both players. This yields a situation where each player, in attempting to secure his best outcome, risks the worst. The phrase "game of chicken" is also used as a metaphor for a situation where two parties engage in a showdown where they have nothing to gain, and only pride stops them from backing down. One description of the sport from which the game takes its name runs as follows: "This is adapted from a sport which, I am told, is practiced by some youthful degenerates. It is played by choosing a long road with a white line down the middle. Each car is expected to keep the wheels on one side of the white line. As they approach each other, mutual destruction becomes more and more imminent. If one of them swerves from the line before the other, the one who has swerved becomes an object of contempt." As played by irresponsible boys, this game is considered decadent and immoral, though only the lives of the players are risked, and both are to blame for playing such a dangerous game. The game may be played without misfortune a few times, but the moment will come when neither side can face the derisive cry of "Chicken!".
When that moment comes, the statesmen of both sides will plunge the world into destruction. Brinkmanship involves the introduction of an element of uncontrollable risk: even if all players act rationally in the face of risk, uncontrollable events can still trigger the catastrophic outcome. A chicken-style contest appears in the chickie-run scene from the film Rebel Without a Cause; the opposite scenario occurs in Footloose, where Ren McCormack is stuck in his tractor and hence wins the game, as he cannot play chicken. The basic game-theoretic formulation of chicken has no element of variable, potentially catastrophic, risk. The hawk-dove version of the game imagines two players contesting an indivisible resource, who can choose between two strategies, one more escalated than the other: they can use threat displays, or physically attack each other. If both players choose the Hawk strategy, they fight until one is injured and the other wins.
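The hawk-dove structure can be made concrete with a payoff matrix. The values below are illustrative, using the standard parameterisation with resource value V and fight cost C where C > V; the equilibrium check simply tests that no player gains by a unilateral switch.

```python
# Hawk-Dove payoff matrix with illustrative values: resource V = 2,
# fight cost C = 4 (C > V, so mutual Hawk is the worst joint outcome).
V, C = 2, 4
payoffs = {
    ("Hawk", "Hawk"): ((V - C) / 2, (V - C) / 2),   # fight: expected loss
    ("Hawk", "Dove"): (V, 0),                       # hawk takes the resource
    ("Dove", "Hawk"): (0, V),
    ("Dove", "Dove"): (V / 2, V / 2),               # share the resource
}

def is_pure_nash(row, col):
    """No player gains by unilaterally switching strategies."""
    strategies = ("Hawk", "Dove")
    r_pay, c_pay = payoffs[(row, col)]
    no_row_dev = all(payoffs[(r, col)][0] <= r_pay for r in strategies)
    no_col_dev = all(payoffs[(row, c)][1] <= c_pay for c in strategies)
    return no_row_dev and no_col_dev

equilibria = [(r, c) for r in ("Hawk", "Dove") for c in ("Hawk", "Dove")
              if is_pure_nash(r, c)]
print(equilibria)   # the two asymmetric outcomes: one escalates, one yields
```

The two pure equilibria are the asymmetric profiles, matching the chicken story: each equilibrium has exactly one driver swerving.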
3.
Stag hunt
–
In game theory, the stag hunt is a game that describes a conflict between safety and social cooperation. Other names for it or its variants include the assurance game and the coordination game. Jean-Jacques Rousseau described a situation in which two individuals go out on a hunt. Each can individually choose to hunt a stag or hunt a hare, and each player must choose an action without knowing the choice of the other. If an individual hunts a stag, they must have the cooperation of their partner in order to succeed; an individual can get a hare by themself, but a hare is worth less than a stag. This has been taken to be an analogy for social cooperation. The stag hunt differs from the Prisoner's Dilemma in that there are two pure-strategy Nash equilibria: one where both players cooperate and one where both players defect. In the Prisoner's Dilemma, in contrast, despite the fact that both players cooperating is Pareto efficient, the only pure Nash equilibrium is when both players choose to defect. An example of the payoff matrix for the stag hunt is pictured in Figure 2. Formally, a stag hunt is a game with two pure-strategy Nash equilibria: one that is risk dominant and another that is payoff dominant. The payoff matrix in Figure 1 illustrates a generic stag hunt. Often, games with a similar structure but without a risk-dominant Nash equilibrium are called assurance games. For instance, if a=2, b=1, c=0, and d=1, then while (Stag, Stag) remains a Nash equilibrium, it is no longer risk dominant; nonetheless many would call this game a stag hunt. In addition to the pure-strategy Nash equilibria there is one mixed-strategy Nash equilibrium. This equilibrium depends on the payoffs, but the risk-dominance condition places a bound on the mixed-strategy Nash equilibrium: no payoffs can generate a mixed-strategy equilibrium where Stag is played with a probability higher than one half. The best-response correspondences are pictured here. There is a substantial relationship between the stag hunt and the prisoner's dilemma.
In biology, many circumstances that have been described as prisoner's dilemmas might also be interpreted as stag hunts. It is also the case that some human interactions that seem like prisoner's dilemmas may in fact be stag hunts. For example, suppose we have a dilemma as pictured in Figure 3. The payoff matrix would need adjusting if players who defect against cooperators might be punished for their defection. For instance, if the expected punishment is −2, then the imposition of this punishment turns the above prisoner's dilemma into the stag hunt given in the introduction.
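The mixed-strategy equilibrium mentioned above can be computed from the generic payoffs directly. With the convention that a is the payoff for Stag meeting Stag, c for Stag meeting Hare, b for Hare meeting Stag and d for Hare meeting Hare, each player is indifferent between the two actions when the opponent hunts stag with probability p = (d − c) / ((a − b) + (d − c)). The sketch below uses the a=2, b=1, c=0, d=1 example from the text; the function names are my own.

```python
# Mixed-strategy equilibrium of a symmetric stag hunt.
# Payoff convention: a = Stag vs Stag, c = Stag vs Hare,
#                    b = Hare vs Stag, d = Hare vs Hare.

def mixed_equilibrium_stag_prob(a, b, c, d):
    """Probability of hunting Stag that makes the opponent indifferent."""
    return (d - c) / ((a - b) + (d - c))

def expected_payoffs(p, a, b, c, d):
    """Expected payoffs of Stag and Hare against an opponent who
    plays Stag with probability p."""
    stag = p * a + (1 - p) * c
    hare = p * b + (1 - p) * d
    return stag, hare

# The assurance-game example from the text: a=2, b=1, c=0, d=1.
p = mixed_equilibrium_stag_prob(2, 1, 0, 1)
stag, hare = expected_payoffs(p, 2, 1, 0, 1)
print(p)                         # 0.5, consistent with the one-half bound
assert abs(stag - hare) < 1e-9   # both actions yield the same payoff at p
```

The indifference check confirms that p really is an equilibrium mixing probability: at p, neither action does strictly better.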
4.
Game theory
–
Game theory is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. Game theory is used in economics, political science, and psychology, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations and is now an umbrella term for the science of logical decision making in humans and animals. Modern game theory began with the idea regarding the existence of equilibria in two-person zero-sum games. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility. This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields, with the Nobel Memorial Prize in Economic Sciences going to game theorist Jean Tirole in 2014; John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern mathematical game theory. The first known discussion of game theory occurred in a letter written in 1713 by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat. In this letter, Waldegrave provides a mixed-strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation.
In 1913, Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems; the Danish mathematician Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was proved false. Game theory did not really exist as a unique field until John von Neumann published a paper on the subject in 1928.
5.
Permutation
–
These differ from combinations, which are selections of some members of a set where order is disregarded. For example, written as tuples, there are six permutations of the three-element set {1, 2, 3}, namely (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2), and (3,2,1); these are all the possible orderings of this set. As another example, an anagram of a word all of whose letters are different is a permutation of its letters: the letters are already ordered in the original word, and the anagram is a reordering of the letters. The study of permutations of finite sets is a topic in the field of combinatorics. Permutations occur, in more or less prominent ways, in almost every area of mathematics. For similar reasons permutations arise in the study of sorting algorithms in computer science. The number of permutations of n distinct objects is n factorial, usually written as n!, which means the product of all positive integers less than or equal to n. In algebra, and particularly in group theory, a permutation of a set S is defined as a bijection from S to itself; that is, it is a function from S to S for which every element occurs exactly once as an image value. This is related to the rearrangement of the elements of S in which each element s is replaced by the corresponding f(s). The collection of such permutations forms a group called the symmetric group of S. The key to this structure is the fact that the composition of two permutations results in another rearrangement. Permutations may act on structured objects by rearranging their components, or by certain replacements of symbols. In elementary combinatorics, the k-permutations, or partial permutations, are the ordered arrangements of k distinct elements selected from a set. When k is equal to the size of the set, these are the permutations of the set. Fabian Stedman in 1677 described factorials when explaining the number of permutations of bells in change ringing.
Starting from two bells: "first, two must be admitted to be varied in two ways", which he illustrates by showing 12 and 21. He then explains that with three bells there are "three times two figures to be produced out of three", which again is illustrated. His explanation involves: "cast away 3, and 1.2 will remain; cast away 2, and 1.3 will remain; cast away 1, and 2.3 will remain". He then moves on to four bells and repeats the casting-away argument, showing that there will be four different sets of three; effectively, this is a recursive process. He continues with five bells using the casting-away method and tabulates the resulting 120 combinations. At this point he gives up and remarks, "Now the nature of these methods is such...". In modern mathematics there are many similar situations in which understanding a problem requires studying certain permutations related to it. There are two equivalent common ways of regarding permutations, sometimes called the active and passive forms, or in older terminology substitutions and permutations; which form is preferable depends on the type of questions being asked in a given discipline. The active way to regard permutations of a set S is to view them as the bijections from S to itself. Thus, the permutations are thought of as functions which can be composed with each other, forming groups of permutations.
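The counts above are easy to check in code. This sketch enumerates the permutations of a three-element set, verifies Stedman's five-bell count via the n! formula, and composes two permutations written as dictionaries to illustrate the group structure; the helper names are my own.

```python
from itertools import permutations
from math import factorial

# All orderings of a three-element set: there are 3! = 6 of them.
perms = list(permutations([1, 2, 3]))
assert len(perms) == factorial(3)

# Stedman's change-ringing count for five bells: 5! = 120 orderings.
assert factorial(5) == 120

# A permutation of S as a bijection S -> S, written as a dict.
f = {1: 2, 2: 3, 3: 1}
g = {1: 3, 2: 1, 3: 2}

def compose(f, g):
    """The permutation 'first apply g, then f': also a bijection."""
    return {x: f[g[x]] for x in g}

# Composing two permutations yields another permutation (group closure);
# here f and g happen to be mutual inverses, so the result is the identity.
assert compose(f, g) == {1: 1, 2: 2, 3: 3}
```

Writing permutations as functions in this way is exactly the "active" view described in the text: composition is function composition, and closure under it is what makes the symmetric group a group.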
6.
Partha Dasgupta
–
He was born in Dhaka, in present-day Bangladesh, and later moved to present-day India; he is the son of the noted economist Amiya Kumar Dasgupta. He is married to Carol Dasgupta, who is a psychotherapist, and his father-in-law was the Nobel Laureate James Meade. Partha and Carol Dasgupta have three children: Zubeida Dasgupta-Clark, Shamik, and Aisha. He obtained a PhD in Economics at Cambridge in 1968 with a thesis titled "Population, growth and non-transferable capital"; his PhD supervisor was Sir James Mirrlees. At Cambridge he was a member of the Cambridge Apostles, a distinguished intellectual society. During 1989–92 he was on leave from the University of Cambridge and served as Professor of Economics and Professor of Philosophy at Stanford University. In October 1991 he returned to Cambridge, on leave from Stanford, to re-assume his Chair; he resigned from Stanford in 1992 and has remained in Cambridge since then. During 1991–97 Dasgupta was Chairman of the Board of the Beijer International Institute of Ecological Economics of the Royal Swedish Academy of Sciences, Stockholm. During 1999–2009 he served as a Founder Member of the Management and Advisory Committee of the South Asian Network for Development and Environmental Economics, and during 2008–2013 he was a Professorial Research Fellow at the University of Manchester's Sustainable Consumption Institute. He is a patron of the population concern charity Population Matters. During 2011–2014 he was Chairman of the Scientific Advisory Board of the International Human Dimensions Programme on Global Environmental Change, Bonn. Since 2011 he has been Chairman of the Advisory Board of the Wittgenstein Centre, and he served as Chairman of the Central Government Expert Group on Green National Accounting for India, which submitted its Report in 2013.
He is a cofounder of the Centre for the Study of Existential Risk at the University of Cambridge, and he was awarded the 2015 Blue Planet Prize for Environmental Research and the 2016 Tyler Prize. His publications include Guidelines for Project Evaluation (United Nations, 1972); Economic Theory and Exhaustible Resources (Cambridge University Press, 1979); "Utilitarianism, information and rights", in Sen, Amartya and Williams, Bernard (eds.); The Control of Resources (Harvard University Press, 1982); An Inquiry into Well-Being and Destitution; Human Well-Being and the Natural Environment (Oxford: Oxford University Press, 2001; rev. ed. 2004); and Selected Papers of Partha Dasgupta (Vol. 1: Institutions, Innovations, and Human Values; Vol. 2: Poverty, Population, and Natural Resources).
7.
Eric Maskin
–
Eric Stark Maskin is an American economist and 2007 Nobel laureate, recognized jointly with Leonid Hurwicz and Roger Myerson "for having laid the foundations of mechanism design theory". He is the Adams University Professor at Harvard University; until 2011 he was the Albert O. Hirschman Professor of Social Science at the Institute for Advanced Study and a visiting lecturer with the rank of professor at Princeton University. Maskin was born in New York City on December 12, 1950, into a Jewish family. He graduated from Tenafly High School in Tenafly, New Jersey, in 1968 and attended Harvard University, where he earned an A.B. He went on to earn a Ph.D. in applied mathematics at the same institution in 1976. After earning his doctorate, Maskin became a research fellow at Jesus College, Cambridge University. The following year he joined the faculty at the Massachusetts Institute of Technology, and in 1985 he returned to Harvard as the Louis Berkman Professor of Economics, where he remained until 2000. That year he moved to the Institute for Advanced Study in Princeton, and in 2011 he returned to Harvard. Maskin has worked in many areas of economic theory, such as game theory and the economics of incentives. He is particularly known for his papers on mechanism design and implementation theory. His current research projects include comparing different electoral rules and examining the causes of inequality. He is a Fellow of the American Academy of Arts and Sciences, the Econometric Society, and the European Economic Association, and a Corresponding Fellow of the British Academy. He was president of the Econometric Society in 2003. Maskin has suggested that software patents inhibit innovation rather than stimulate progress: the software, semiconductor, and computer industries have been innovative despite historically weak patent protection, and innovation in those industries has been sequential and complementary, so competition can increase firms' future profits.
In such an industry, patent protection may reduce overall innovation. A natural experiment occurred in the 1980s when patent protection was extended to software; standard arguments would predict that R&D intensity and productivity should have increased among patenting firms. Consistent with this model, however, these increases did not occur. Other evidence supporting the model includes a distinctive pattern of cross-licensing and a positive relationship between rates of innovation and firm entry.
8.
Preference (economics)
–
In economics and other social sciences, preference is the ordering of alternatives based on their relative utility, a process which results in an optimal choice. The character of the preferences is determined purely by taste factors, independent of considerations of prices or income. With the help of the scientific method, many practical decisions of life can be modelled. In 1926 Ragnar Frisch developed for the first time a mathematical model of preferences in the context of economic demand and utility functions. Up to then, economists had developed a theory of demand that omitted primitive characteristics of people; this omission ceased at the end of the 19th century. Because binary choices are directly observable, the approach instantly appealed to economists. The search for observables in microeconomics is taken further by revealed preference theory. Since the pioneering efforts of Frisch in the 1920s, one of the issues which has pervaded the theory of preferences is the representability of a preference structure by a real-valued function. This has been achieved by mapping it to the mathematical index called utility. Von Neumann and Morgenstern's 1944 book Theory of Games and Economic Behavior treated preferences as a formal relation whose properties can be stated axiomatically. The economics of choice can be examined either at the level of utility functions or at the level of preferences. Suppose the set of all states of the world is X and an agent has a preference relation on X. It is common to mark the weak preference relation by ⪯. The symbol ∼ is used as a shorthand for the indifference relation: x ∼ y ⟺ (x ⪯ y and y ⪯ x), which reads "the agent is indifferent between x and y". The symbol ≺ is used as a shorthand for the strict preference relation: x ≺ y ⟺ (x ⪯ y and not y ⪯ x).
In everyday speech, the statement "x is preferred to y" is generally understood to mean that someone chooses x over y. However, decision theory rests on more precise definitions of preferences, given that there are many experimental conditions influencing people's choices in many directions. Suppose a person is confronted with an experiment that she must solve with the aid of introspection. She is offered apples and oranges, and is asked to choose one of the two. A decision scientist observing this event would be inclined to say that whichever is chosen is the preferred alternative. Under several repetitions of this experiment, if the scientist observes that apples are chosen 51% of the time, it would mean that x ≻ y. If oranges are chosen half the time, then x ∼ y. Finally, if she chooses oranges 51% of the time, it means that y ≻ x. Preference is here being identified with a greater frequency of choice.
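The definitions of ∼ and ≺ in terms of the weak relation ⪯ translate directly into code. In this sketch the weak preference is a hypothetical caller-supplied function (here induced by an invented utility index); the derived relations follow the shorthand definitions given above.

```python
# Derive indifference (~) and strict preference (<) from a weak
# preference relation (<=), following the standard shorthand:
#   x ~ y  iff  x <= y and y <= x
#   x < y  iff  x <= y and not (y <= x)

def indifferent(weakly_below, x, y):
    """x ~ y: each is weakly dispreferred to the other."""
    return weakly_below(x, y) and weakly_below(y, x)

def strictly_below(weakly_below, x, y):
    """x < y: x is weakly dispreferred to y, but not vice versa."""
    return weakly_below(x, y) and not weakly_below(y, x)

# Hypothetical example: a weak relation induced by a utility index,
# where weakly_below(x, y) means "x is at most as good as y".
utility = {"apple": 2, "orange": 2, "banana": 1}
weak = lambda x, y: utility[x] <= utility[y]

assert indifferent(weak, "apple", "orange")       # equal utility: x ~ y
assert strictly_below(weak, "banana", "apple")    # apple strictly preferred
assert not strictly_below(weak, "apple", "orange")
```

Any utility representation induces a weak preference in this way, which is the representability question the section raises: when can a preference structure be mapped to such a real-valued index?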
9.
Simultaneous game
–
In game theory, a simultaneous game is a game where each player chooses his action without knowledge of the actions chosen by other players. Normal-form representations are typically used for simultaneous games. Rock-paper-scissors, a widely played game, is a real-life example of a simultaneous game: both players make a decision at the same time, randomly, without prior knowledge of the opponent's decision. There are two players in this game, and each of them has three different strategies from which to choose. If we display Player 1's strategies as rows and Player 2's strategies as columns, then in the table the numbers in red represent the payoff to Player 1 and the numbers in blue represent the payoff to Player 2; the payoff matrix for two-player rock-paper-scissors can be written out accordingly. In game-theoretic terms, the Prisoner's Dilemma is another example of a simultaneous game.
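The normal-form payoff matrix described above can be constructed explicitly. This sketch uses the usual zero-sum convention of +1 for a win, −1 for a loss, and 0 for a draw; the identifiers are my own.

```python
# Rock-paper-scissors in normal form: each entry (u1, u2) gives the
# payoffs to Player 1 (rows) and Player 2 (columns).
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(row, col):
    """Payoff pair for one simultaneous play: +1 win, -1 loss, 0 draw."""
    if row == col:
        return (0, 0)
    return (1, -1) if BEATS[row] == col else (-1, 1)

moves = ("rock", "paper", "scissors")
matrix = {(r, c): payoff(r, c) for r in moves for c in moves}

print(matrix[("rock", "scissors")])   # (1, -1): rock beats scissors
# Zero-sum check: in every cell the two payoffs cancel out.
assert all(u1 + u2 == 0 for u1, u2 in matrix.values())
```

Because neither player's choice can depend on the other's, the whole game is captured by this single 3×3 table, which is exactly what the normal-form representation provides.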
10.
Nash equilibrium
–
The Nash equilibrium is one of the foundational concepts in game theory. The reality of the Nash equilibrium of a game can be tested using experimental economics methods. Game theorists use the Nash equilibrium concept to analyze the outcome of the strategic interaction of several decision makers. The simple insight underlying John Nash's idea is that one cannot predict the result of the choices of multiple decision makers if one analyzes those decisions in isolation; instead, one must ask what each player would do, taking into account the decision-making of the others. Nash equilibrium has been used to analyze hostile situations like war and arms races, and it has also been used to study to what extent people with different preferences can cooperate, and whether they will take risks to achieve a cooperative outcome. It has also been used to study the adoption of technical standards. The Nash equilibrium is named after John Forbes Nash, Jr. A version of the Nash equilibrium concept was first used in 1838 by Antoine Augustin Cournot in his theory of oligopoly. In Cournot's theory, firms choose how much output to produce to maximize their own profit; however, the best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when each firm's output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium. Cournot also introduced the concept of best response dynamics in his analysis of the stability of equilibrium. However, Nash's definition of equilibrium is broader than Cournot's; it is also broader than the definition of a Pareto-efficient equilibrium. The modern game-theoretic concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible actions.
The concept of the mixed-strategy Nash equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior; however, their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition of equilibrium. According to Nash, an equilibrium point is an n-tuple such that each player's mixed strategy maximizes his payoff if the strategies of the others are held fixed; thus each player's strategy is optimal against those of the others. Since the development of the Nash equilibrium concept, game theorists have discovered that it makes misleading predictions in certain circumstances, and they have proposed many related solution concepts designed to overcome perceived flaws in the Nash concept. One particularly important issue is that some Nash equilibria may be based on threats that are not credible; in 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing his or her strategy. To see what this means, imagine that each player is told the strategies of the others.
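The informal definition just given (no profitable unilateral deviation) translates directly into a check over a normal-form game. The sketch below finds the pure-strategy Nash equilibria of a Prisoner's Dilemma with standard illustrative payoffs; it ignores mixed strategies, which would require an indifference calculation rather than enumeration.

```python
# Pure-strategy Nash equilibria by checking unilateral deviations.
# A profile is an equilibrium if neither player can improve by
# switching strategies while the other's strategy is held fixed.

def pure_nash_equilibria(strategies, payoffs):
    eqs = []
    for r in strategies:
        for c in strategies:
            row_ok = all(payoffs[(d, c)][0] <= payoffs[(r, c)][0]
                         for d in strategies)
            col_ok = all(payoffs[(r, d)][1] <= payoffs[(r, c)][1]
                         for d in strategies)
            if row_ok and col_ok:
                eqs.append((r, c))
    return eqs

# Prisoner's Dilemma with standard illustrative payoffs.
pd = {
    ("Cooperate", "Cooperate"): (3, 3),
    ("Cooperate", "Defect"):    (0, 5),
    ("Defect",    "Cooperate"): (5, 0),
    ("Defect",    "Defect"):    (1, 1),
}
print(pure_nash_equilibria(("Cooperate", "Defect"), pd))
# Only mutual defection survives, even though (3, 3) Pareto-dominates it.
```

This also illustrates the point made earlier about Pareto efficiency: the unique Nash equilibrium here is not the Pareto-efficient outcome.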
11.
Pareto efficiency
–
The concept is named after Vilfredo Pareto, the Italian engineer and economist who used it in his studies of economic efficiency and income distribution. The concept has applications in fields such as economics and engineering. The Pareto frontier is the set of all Pareto-efficient allocations; an allocation is defined as Pareto efficient or Pareto optimal when no further Pareto improvements can be made. The notion of Pareto efficiency can also be applied to the selection of alternatives in engineering: each option is first assessed under multiple criteria, and then a subset of options is identified with the property that no other option can categorically outperform any of its members. Pareto optimality is a formally defined concept used to determine when an allocation is optimal. If there is a transfer that makes at least one agent better off without making any other agent worse off, the reallocation is called a Pareto improvement; when no further Pareto improvements are possible, the allocation is a Pareto optimum. A formal definition for an economy is as follows: consider an economy with i agents and j goods. In this economy, feasibility refers to an allocation where the total amount of each good that is allocated sums to no more than the total amount of the good in the economy. It is important to note that a change from a generally inefficient economic allocation to an efficient one is not necessarily a Pareto improvement: even if there are overall gains in the economy, if a single agent is disadvantaged by the reallocation, the change is not a Pareto improvement. For instance, a change in economic policy may eliminate a monopoly, after which that market becomes competitive. However, since the monopolist is disadvantaged, this is not a Pareto improvement. Thus, in practice, to ensure that nobody is disadvantaged by a change aimed at achieving Pareto efficiency, compensation of one or more parties may be required. However, in the real world such compensations may have unintended consequences.
They can lead to incentive distortions over time as agents anticipate such compensations. Under certain idealized conditions, it can be shown that a system of free markets, also called a competitive equilibrium, will lead to a Pareto-efficient outcome. This is called the first welfare theorem, and it was first demonstrated mathematically by the economists Kenneth Arrow and Gérard Debreu. However, the result only holds under the assumptions necessary for the proof: in the absence of perfect information or complete markets, outcomes will generally be Pareto inefficient. In addition to the first welfare theorem linking the concepts of Pareto-optimal allocations and free markets, the second welfare theorem is essentially the reverse of the first: it states that under similar ideal assumptions, any Pareto optimum can be obtained by some competitive equilibrium, or free-market system. A weak Pareto optimum is an allocation for which there are no possible alternative allocations whose realization would cause every individual to gain.
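The definitions above can be sketched as a direct check over utility profiles: one profile Pareto-improves on another if every agent is at least as well off and some agent is strictly better off, and a profile is Pareto optimal within a feasible set if nothing in the set improves on it. The numeric profiles below are purely illustrative, not from the source.

```python
# Pareto improvements and Pareto optima over utility profiles,
# where each profile is a tuple of utilities, one per agent.

def pareto_improves(new, old):
    """True if `new` makes no agent worse off and some agent better off."""
    return (all(n >= o for n, o in zip(new, old))
            and any(n > o for n, o in zip(new, old)))

def pareto_optima(profiles):
    """Profiles that no other profile in the set Pareto-improves on."""
    return [p for p in profiles
            if not any(pareto_improves(q, p) for q in profiles)]

# Illustrative utility profiles for two agents.
profiles = [(3, 3), (5, 1), (1, 5), (2, 2)]

assert pareto_improves((3, 3), (2, 2))       # both agents gain
assert not pareto_improves((5, 1), (3, 3))   # agent 2 is made worse off
print(pareto_optima(profiles))               # (2, 2) is the only non-optimum
```

Note how (5, 1) and (1, 5) are both Pareto optimal despite being very unequal: Pareto efficiency says nothing about fairness, which is why the compensation issues discussed above arise.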