In economics and other social sciences, preference is the ordering of alternatives based on their relative utility, a process which results in an optimal "choice". The character of individual preferences is determined purely by taste factors, independent of considerations of prices, income, or availability of goods. With the help of the scientific method, many practical decisions of life can be modelled, resulting in testable predictions about human behavior. Although economists are not interested in the underlying causes of the preferences in themselves, they are interested in the theory of choice because it serves as a background for empirical demand analysis. In 1926 Ragnar Frisch developed for the first time a mathematical model of preferences in the context of economic demand and utility functions. Up to then, economists had developed an elaborate theory of demand that omitted primitive characteristics of people; this omission ceased when, at the end of the 19th and the beginning of the 20th century, logical positivism posited the need for theoretical concepts to be related to observables.
Whereas economists in the 18th and 19th centuries felt comfortable theorizing about utility, with the advent of logical positivism in the 20th century, they felt that it needed more of an empirical structure. Because binary choices are directly observable, they appealed to economists; the search for observables in microeconomics is taken even further by revealed preference theory. Since the pioneering efforts of Frisch in the 1920s, one of the major issues which has pervaded the theory of preferences is the representability of a preference structure with a real-valued function; this has been achieved by mapping it to the mathematical index called utility. Von Neumann and Morgenstern's 1944 book "Theory of Games and Economic Behavior" treated preferences as a formal relation whose properties can be stated axiomatically; this type of axiomatic handling of preferences soon began to influence other economists: Marschak adopted it by 1950, Houthakker employed it in a 1950 paper, and Kenneth Arrow perfected it in his 1951 book "Social Choice and Individual Values".
Gérard Debreu, influenced by the ideas of the Bourbaki group, championed the axiomatization of consumer theory in the 1950s, and the tools he borrowed from the mathematical field of binary relations have become mainstream since then. Though the economics of choice can be examined either at the level of utility functions or at the level of preferences, moving from one to the other can be useful. For example, shifting the conceptual basis from an abstract preference relation to an abstract utility scale results in a new mathematical framework, allowing new kinds of conditions on the structure of preference to be formulated and investigated. Another historical turning point can be traced back to 1895, when Georg Cantor proved that any countable linearly ordered set is isomorphically embeddable in the ordered real numbers; this notion would become influential for the theory of preferences in economics: by the 1940s prominent authors such as Paul Samuelson would theorize about people having weakly ordered preferences.
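For a finite set of alternatives, representability is easy to illustrate: given a complete and transitive weak order, counting how many alternatives each item weakly beats yields a utility function that represents the order. The following sketch uses assumed example data, not anything from a specific study:

```python
# A complete, transitive weak preference over a finite set, given as
# ordered pairs (y, x) meaning "x is weakly preferred to y".
weak = {("a", "a"), ("b", "b"), ("c", "c"),
        ("a", "b"), ("a", "c"), ("b", "c")}  # c best, then b, then a
items = {"a", "b", "c"}

def u(x):
    """Utility of x = number of alternatives weakly dispreferred to x.
    For a finite weak order, u(x) >= u(y) exactly when y is weakly
    dispreferred to x, so u represents the preference relation."""
    return sum(1 for y in items if (y, x) in weak)

assert u("c") > u("b") > u("a")
```

This rank-counting construction only works on finite (or countable) sets; the representability results of Cantor and Debreu mentioned above are exactly about extending such representations to richer domains.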
Suppose the set of all states of the world is X and an agent has a preference relation on X. It is common to mark the weak preference relation by ⪯, so that x ⪯ y means "the agent wants y at least as much as x" or "the agent weakly prefers y to x". The symbol ∼ is used as a shorthand for the indifference relation: x ∼ y ⟺ (x ⪯ y ∧ y ⪯ x), which reads "the agent is indifferent between y and x". The symbol ≺ is used as a shorthand for the strong preference relation: x ≺ y ⟺ (x ⪯ y ∧ ¬(y ⪯ x)), which reads "the agent prefers y to x". In everyday speech, the statement "x is preferred to y" is understood to mean that someone chooses x over y. However, decision theory rests on more precise definitions of preferences, given that there are many experimental conditions influencing people's choices in many directions. Suppose a person is confronted with a mental experiment that she must solve with the aid of introspection: she is offered apples and oranges and is asked to verbally choose one of the two. A decision scientist observing this single event would be inclined to say that whichever is chosen is the preferred alternative.
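The derived relations ∼ and ≺ follow mechanically from ⪯, which can be sketched for a finite set of alternatives (the apple/orange data here is an illustrative assumption):

```python
# Weak preference over a finite set, given as ordered pairs (x, y)
# meaning "the agent weakly prefers y to x" (x ⪯ y).
weak = {("apple", "apple"), ("orange", "orange"),
        ("apple", "orange")}  # orange weakly preferred to apple

def indifferent(x, y):
    """x ~ y  iff  x ⪯ y and y ⪯ x."""
    return (x, y) in weak and (y, x) in weak

def strictly_prefers(x, y):
    """x ≺ y  iff  x ⪯ y and not y ⪯ x (the agent prefers y to x)."""
    return (x, y) in weak and (y, x) not in weak

assert strictly_prefers("apple", "orange")  # orange strictly preferred
assert not indifferent("apple", "orange")
```

The same two definitions apply verbatim to any weak preference relation, finite or not; the finite-set encoding just makes them checkable.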
Under several repetitions of this experiment, if the scientist observes that apples are chosen 51% of the time, it would mean that x ≻ y. If half of the time oranges are chosen, then x ∼ y. If 51% of the time she chooses oranges, it means that y ≻ x. Preference is here being identified with a greater frequency of choice; this experiment implicitly assumes that a choice is made on every repetition. Otherwise, out of 100 repetitions, some of them will give as a result that neither apples nor oranges are chosen; these few cases of indecision will ruin any preference information resulting from the frequency attributes of the other valid cases. However, this example was used
International Standard Serial Number
An International Standard Serial Number is an eight-digit serial number used to uniquely identify a serial publication, such as a magazine. The ISSN is helpful in distinguishing between serials with the same title. ISSNs are used in ordering, interlibrary loans, and other practices in connection with serial literature; the ISSN system was first drafted as an International Organization for Standardization international standard in 1971 and published as ISO 3297 in 1975. ISO subcommittee TC 46/SC 9 is responsible for maintaining the standard; when a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published both in print and in electronic media; the ISSN system refers to these types as print ISSN and electronic ISSN, respectively. Additionally, as defined in ISO 3297:2007, every serial in the ISSN system is also assigned a linking ISSN (ISSN-L), typically the same as the ISSN assigned to the serial in its first published medium, which links together all ISSNs assigned to the serial in every medium.
The format of the ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers. As an integer number, it can be represented by the first seven digits; the last code digit, which may be 0-9 or an X, is a check digit. Formally, the general form of the ISSN code can be expressed as follows: NNNN-NNNC, where N is a digit character and C is a digit character or an upper case X; the ISSN of the journal Hearing Research, for example, is 0378-5955, where the final 5 is the check digit, C=5. To calculate the check digit, the following algorithm may be used: calculate the sum of the first seven digits of the ISSN, each multiplied by its position in the number counting from the right—that is, by 8, 7, 6, 5, 4, 3, 2, respectively: 0 ⋅ 8 + 3 ⋅ 7 + 7 ⋅ 6 + 8 ⋅ 5 + 5 ⋅ 4 + 9 ⋅ 3 + 5 ⋅ 2 = 0 + 21 + 42 + 40 + 20 + 27 + 10 = 160. The modulus 11 of this sum is then calculated; if the remainder is 0, the check digit is 0, otherwise the remainder is subtracted from 11 to give the check digit. Here 160 mod 11 = 6, and 11 − 6 = 5, the check digit. An upper case X in the check digit position indicates a check digit of 10. To confirm the check digit, calculate the sum of all eight digits of the ISSN multiplied by their position in the number, counting from the right.
The modulus 11 of the sum must be 0. There is an online ISSN checker. ISSN codes are assigned by a network of ISSN National Centres located at national libraries and coordinated by the ISSN International Centre based in Paris; the International Centre is an intergovernmental organization created in 1974 through an agreement between UNESCO and the French government. The International Centre maintains a database of all ISSNs assigned worldwide, the ISDS Register, otherwise known as the ISSN Register. At the end of 2016, the ISSN Register contained records for 1,943,572 items. ISSN and ISBN codes are similar in concept. An ISBN might be assigned for particular issues of a serial, in addition to the ISSN code for the serial as a whole. An ISSN, unlike the ISBN code, is an anonymous identifier associated with a serial title, containing no information as to the publisher or its location. For this reason a new ISSN is assigned to a serial each time it undergoes a major title change. Since the ISSN applies to an entire serial, a new identifier, the Serial Item and Contribution Identifier, was built on top of it to allow references to specific volumes, articles, or other identifiable components.
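The check-digit calculation and the validation rule described above can be sketched as:

```python
def issn_check_digit(first_seven: str) -> str:
    """Check digit for the first seven ISSN digits: weight them 8..2,
    take the sum mod 11, and subtract a nonzero remainder from 11
    (a result of 10 is written as X)."""
    total = sum(int(d) * w for d, w in zip(first_seven, range(8, 1, -1)))
    remainder = total % 11
    if remainder == 0:
        return "0"
    return "X" if 11 - remainder == 10 else str(11 - remainder)

def is_valid_issn(issn: str) -> bool:
    """Validate by weighting all eight characters 8..1; the sum must be
    divisible by 11 (X counts as 10)."""
    chars = issn.replace("-", "")
    if len(chars) != 8:
        return False
    total = sum((10 if c == "X" else int(c)) * w
                for c, w in zip(chars, range(8, 0, -1)))
    return total % 11 == 0

# The Hearing Research example from the text:
assert issn_check_digit("0378595") == "5"
assert is_valid_issn("0378-5955")
```

Note that validation with weights 8..1 is equivalent to recomputing the check digit: adding the check digit times 1 to the weighted sum makes the total divisible by 11 exactly when the digit is correct.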
Separate ISSNs are needed for serials in different media. Thus, the print and electronic media versions of a serial need separate ISSNs, and a CD-ROM version and a web version of a serial require different ISSNs since two different media are involved. However, the same ISSN can be used for different file formats of the same online serial; this "media-oriented identification" of serials made sense in the 1970s. In the 1990s and onward, with personal computers, better screens, and the Web, it makes sense to consider only content, independent of media; this "content-oriented identification" of serials was a repressed demand for a decade, but no ISSN update or initiative occurred. A natural extension of the ISSN, the unique identification of the articles in the serials, was the main demanded application. An alternative model for serials' contents arrived with the indecs Content Model and its application, the digital object identifier, an ISSN-independent initiative consolidated in the 2000s. Only in 2007 was ISSN-L defined in the
In game theory, a sequential game is a game where one player chooses their action before the others choose theirs. The players must have some information about the first player's choice, otherwise the difference in time would have no strategic effect. Sequential games are hence governed by the time axis and represented in the form of decision trees. Unlike sequential games, simultaneous games do not have a time axis: players choose their moves without being sure of the others', and such games are represented in the form of payoff matrices. Extensive form representations are used for sequential games, since they explicitly illustrate the sequential aspects of a game. Combinatorial games are sequential games. Games such as chess, infinite chess, tic-tac-toe and Go are examples of sequential games; the size of the decision trees can vary according to game complexity, ranging from the small game tree of tic-tac-toe to an immensely complex game tree of chess so large that computers cannot map it completely. In sequential games with perfect information, a subgame perfect equilibrium can be found by backward induction.
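Backward induction evaluates a perfect-information game tree from the leaves upward: at each decision node, the mover picks the subtree whose induced payoff is best for them. A minimal sketch, on a toy tree whose structure and payoffs are illustrative assumptions rather than any specific game:

```python
def backward_induction(node):
    """Return the payoff tuple reached under optimal play.
    Terminal nodes are payoff tuples, e.g. (3, 1);
    internal nodes are [player_index, [child, ...]] lists."""
    if isinstance(node, tuple):          # terminal node: payoffs per player
        return node
    player, children = node
    # The mover picks the child whose induced payoff maximizes
    # their own component.
    return max((backward_induction(c) for c in children),
               key=lambda payoffs: payoffs[player])

# Player 0 moves first; player 1 responds in each subtree.
tree = [0, [
    [1, [(3, 1), (0, 0)]],   # if player 0 goes left
    [1, [(1, 2), (2, 1)]],   # if player 0 goes right
]]
print(backward_induction(tree))  # (3, 1)
```

Here player 1 would answer left with (3, 1) and right with (1, 2), so player 0 goes left; the resulting play is the subgame perfect equilibrium outcome.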
See also: Simultaneous game, Subgame perfection, Sequential auction
In game theory, a simultaneous game is a game where each player chooses his action without knowledge of the actions chosen by other players. Simultaneous games contrast with sequential games, which are played by the players taking turns. Normal form representations are used for simultaneous games. Rock-paper-scissors, a commonly played hand game, is an example of a simultaneous game. Both players make a decision without knowledge of the opponent's decision and reveal their hands at the same time. There are two players in this game and each of them has three different strategies to make their decision. We will display Player 1's strategies as rows and Player 2's strategies as columns; in each cell, the first number represents the payoff to Player 1 and the second the payoff to Player 2. Hence, the payoff matrix for a 2-player game of rock-paper-scissors will look like this:

          Rock     Paper    Scissors
Rock      0, 0     -1, 1    1, -1
Paper     1, -1    0, 0     -1, 1
Scissors  -1, 1    1, -1    0, 0

The prisoner's dilemma is also an example of a simultaneous game; some variants of chess that belong to this class of games include Synchronous chess and Parity chess.
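The payoff structure above can be generated programmatically; a short sketch using a zero-sum encoding (+1 win, -1 loss, 0 tie):

```python
# Rock-paper-scissors payoffs; the game is zero-sum, so Player 2's
# payoff is always the negation of Player 1's.
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(p1: str, p2: str) -> tuple:
    """Return (Player 1's payoff, Player 2's payoff)."""
    if p1 == p2:
        return (0, 0)
    return (1, -1) if BEATS[p1] == p2 else (-1, 1)

# Full payoff matrix: rows are Player 1's moves, columns Player 2's.
matrix = [[payoff(r, c) for c in MOVES] for r in MOVES]
```

Encoding the "beats" relation as a dictionary keeps the payoff function symmetric and makes it easy to extend to variants like rock-paper-scissors-lizard-Spock.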
See also: Sequential game, Simultaneous action selection. Bibliography: Pritchard, D. B. The Classified Encyclopedia of Chess Variants. Beasley, John (ed.). John Beasley. ISBN 978-0-9555168-0-1.
Game theory is the study of mathematical models of strategic interaction between rational decision-makers. It has applications in all fields of social science, as well as in computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations and is now an umbrella term for the science of logical decision making in humans and computers. Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.
Game theory was developed extensively in the 1950s by many scholars. It was explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields; as of 2014, with the Nobel Memorial Prize in Economic Sciences going to game theorist Jean Tirole, eleven game theorists have won the economics Nobel Prize. John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern, mathematical game theory; the first known discussion of game theory occurred in a letter written in 1713 by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat. In this letter, Waldegrave provides a minimax mixed strategy solution to a two-person version of the card game le Her; the problem is now known as the Waldegrave problem. In his 1838 Recherches sur les principes mathématiques de la théorie des richesses, Antoine Augustin Cournot considered a duopoly and presented a solution that is a restricted version of the Nash equilibrium.
In 1913, Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, which proved that the optimal chess strategy is strictly determined; this paved the way for more general theorems. In 1938, the Danish mathematical economist Frederik Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games only when the pay-off matrix was symmetric. Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was proved false. Game theory did not exist as a unique field until John von Neumann published the paper On the Theory of Games of Strategy in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.
The second edition of this book provided an axiomatic theory of utility, which reincarnated Daniel Bernoulli's old theory of utility as an independent discipline. Von Neumann's work in game theory culminated in this 1944 book; this foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. During the following time period, work on game theory was focused on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies. In 1950, the first mathematical discussion of the prisoner's dilemma appeared, and an experiment was undertaken by notable mathematicians Merrill M. Flood and Melvin Dresher as part of the RAND Corporation's investigations into game theory. RAND pursued the studies because of possible applications to global nuclear strategy. Around this same time, John Nash developed a criterion for mutual consistency of players' strategies, known as Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern.
Nash proved that every finite n-player, non-zero-sum, non-cooperative game has what is now known as a Nash equilibrium. Game theory experienced a flurry of activity in the 1950s, during which time the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. In addition, the first applications of game theory to philosophy and political science occurred during this time. In 1979 Robert Axelrod tried setting up computer programs as players and found that in tournaments between them the winner was a simple "tit-for-tat" program that cooperates on the first step and on subsequent steps just does whatever its opponent did on the previous step; the same winner was often obtained by natural selection. In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium. In 1994 Nash, Selten and Harsanyi became Economics Nobel Laureates for their contributions to game theory.
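The tit-for-tat strategy described above is simple enough to sketch directly (the alternating opponent in the example is an assumption for illustration):

```python
def tit_for_tat(opponent_history):
    """Cooperate ('C') on the first step; afterwards repeat whatever
    the opponent did on the previous step."""
    return "C" if not opponent_history else opponent_history[-1]

# A short illustrative match against an alternating opponent:
opponent_moves = ["C", "D", "C", "D"]
my_moves = [tit_for_tat(opponent_moves[:i]) for i in range(len(opponent_moves))]
print(my_moves)  # tit-for-tat lags the opponent by one step: ['C', 'C', 'D', 'C']
```

The entire strategy is one line of state-free logic, which is part of why its success in Axelrod's tournaments was so striking.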
Matching pennies is the name for a simple game used in game theory. It is played between two players, Even and Odd; each player must secretly turn a penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match, Even keeps both pennies, so wins one from Odd. If the pennies do not match, Odd keeps both pennies, so receives one from Even. Matching Pennies is a zero-sum game because each participant's gain or loss of utility is exactly balanced by the losses or gains of the utility of the other participant. If the participants' total gains are added up and their total losses subtracted, the sum will be zero; the game can be written in a payoff matrix. Each cell of the matrix shows the two players' payoffs. Matching pennies is used to illustrate the concept of mixed strategies and a mixed strategy Nash equilibrium; this game has no pure strategy Nash equilibrium since no pure strategy is a best response to a best response. In other words, there is no pair of pure strategies such that neither player would want to switch if told what the other would do.
Instead, the unique Nash equilibrium of this game is in mixed strategies: each player chooses heads or tails with equal probability. In this way, each player makes the other indifferent between choosing heads or tails, so neither player has an incentive to try another strategy; the best-response functions for mixed strategies are depicted in Figure 1 below: When either player plays the equilibrium, everyone's expected payoff is zero. Varying the payoffs in the matrix can change the equilibrium point. For example, in the table shown on the right, Even has a chance to win 7 if both he and Odd play Heads. To calculate the equilibrium point in this game, note that a player playing a mixed strategy must be indifferent between his two actions; this gives us two equations: For the Even player, the expected payoff when playing Heads is 7 ⋅ x − 1 ⋅ (1 − x) and when playing Tails −1 ⋅ x + 1 ⋅ (1 − x); these must be equal, so x = 0.2. For the Odd player, the expected payoff when playing Heads is −1 ⋅ y + 1 ⋅ (1 − y) and when playing Tails 1 ⋅ y − 1 ⋅ (1 − y); these must be equal, so y = 0.5.
Note that x is the Heads-probability of Odd and y is the Heads-probability of Even. So the change in Even's payoff affects Odd's equilibrium strategy, not his own. Human players do not always play the equilibrium strategy. Laboratory experiments reveal several factors that make players deviate from the equilibrium strategy if matching pennies is played repeatedly: Humans are not good at randomizing. They may try to produce "random" sequences by switching their actions from Heads to Tails and vice versa, but they switch their actions too often. This makes it possible for expert players to predict their next actions with more than 50% chance of success. In this way, a positive expected payoff might be attainable. Humans are trained to detect patterns. They try to detect patterns in the opponent's sequence even when such patterns do not exist, and adjust their strategy accordingly. Humans' behavior is affected by framing effects: when the Odd player is named "the misleader" and the Even player is named "the guesser", the former focuses on trying to randomize and the latter focuses on trying to detect a pattern, and this increases the chances of success of the guesser.
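The indifference conditions above can be checked with exact arithmetic; a sketch, using the payoffs of the asymmetric example (which are assumptions tied to that particular table):

```python
from fractions import Fraction

# Even's payoffs: +7 on (Heads, Heads), +1 on (Tails, Tails),
# -1 on mismatches; Odd wins +1 on mismatches, loses 1 on matches.
# x = probability that Odd plays Heads. Even is indifferent when
#   7x - (1 - x) = -x + (1 - x)   =>   10x = 2.
x = Fraction(2, 10)
assert 7 * x - (1 - x) == -x + (1 - x)

# y = probability that Even plays Heads. Odd is indifferent when
#   -y + (1 - y) = y - (1 - y)    =>   y = 1/2.
y = Fraction(1, 2)
assert -y + (1 - y) == y - (1 - y)

print(x, y)  # 1/5 1/2
```

Using Fraction instead of floats keeps the indifference equalities exact, so the asserts verify the algebra rather than approximate it.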
Additionally, the fact that Even wins when there is a match gives him an advantage, since people are better at matching than at mismatching. Moreover, when the payoff matrix is asymmetric, other factors influence human behavior even when the game is not repeated: Players tend to increase the probability of playing an action which gives them a higher payoff; e.g. in the payoff matrix above, Even will tend to play more Heads. This is intuitively understandable, but it is not a Nash equilibrium: as explained above, the mixing probability of a player should depend only on the other player's payoff, not his own; this deviation can be explained as a quantal response equilibrium. In a quantal response equilibrium, the best-response curves are not sharp as in a standard Nash equilibrium. Rather, they change smoothly from the action whose probability is 0 to the action whose probability is 1; the equilibrium point is the intersection point of the smoothed curves of the two players, which is different from the Nash equilibrium point.
The own-payoff effects are mitigated by risk aversion. Players tend to underestimate high gains and overest
An extensive-form game is a specification of a game in game theory, allowing for the explicit representation of a number of key aspects, like the sequencing of players' possible moves, their choices at every decision point, the information each player has about the other player's moves when they make a decision, and their payoffs for all possible game outcomes. Extensive-form games also allow for the representation of incomplete information in the form of chance events modeled as "moves by nature". Some authors in introductory textbooks define the extensive-form game as being just a game tree with payoffs, and add the other elements in subsequent chapters as refinements. Whereas the rest of this article follows this gentle approach with motivating examples, we present upfront the finite extensive-form games as constructed here; this general definition was introduced by Harold W. Kuhn in 1953, who extended an earlier definition of von Neumann from 1928. Following the presentation from Hart, an n-player extensive-form game thus consists of the following: a finite set of n players; a rooted tree, called the game tree; for each terminal node of the game tree, an n-tuple of payoffs, meaning there is one payoff for each player at the end of every possible play; and a partition of the non-terminal nodes of the game tree into n+1 subsets, one for each player, with a special subset for a fictitious player called Chance.
Each player's subset of nodes is referred to as the "nodes of the player". Each node of the Chance player has a probability distribution over its outgoing edges. Each set of nodes of a rational player is further partitioned into information sets, which make certain choices indistinguishable for the player when making a move, in the sense that: there is a one-to-one correspondence between the outgoing edges of any two nodes of the same information set—thus the set of all outgoing edges of an information set is partitioned into equivalence classes, each class representing a possible choice for a player's move at some point—and every path in the tree from the root to a terminal node can cross each information set at most once. The complete description of the game specified by the above parameters is common knowledge among the players. A play is thus a path through the tree from the root to a terminal node. At any given non-terminal node belonging to Chance, an outgoing branch is chosen according to the probability distribution.
At any rational player's node, the player must choose one of the equivalence classes for the edges, which determines precisely one outgoing edge at each node of the information set, except that the player doesn't know which node is being followed. A pure strategy for a player thus consists of a selection—choosing one class of outgoing edges for every information set. In a game of perfect information, the information sets are singletons. Less evidently, it is assumed that each player has a von Neumann–Morgenstern utility function defined for every game outcome. The above presentation, while defining the mathematical structure over which the game is played, elides the more technical discussion of formalizing statements about how the game is played, like "a player cannot distinguish between nodes in the same information set when making a decision"; these can be made precise using epistemic modal logic. A perfect-information two-player game over a game tree can be represented as an extensive form game with outcomes. Examples of such games include tic-tac-toe and infinite chess.
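The components of the formal definition above map naturally onto a small data structure; a minimal sketch, where the field names and the toy tree are illustrative assumptions rather than a standard API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Terminal:
    payoffs: tuple          # n-tuple of payoffs, one per player

@dataclass
class Node:
    player: Optional[int]   # index of the mover; None marks a Chance node
    children: dict          # action label -> Node or Terminal
    information_set: int = 0            # nodes sharing an id are indistinguishable
    chance_probs: dict = field(default_factory=dict)  # action -> probability

# A two-player toy tree: player 0 moves first, then player 1.
game = Node(player=0, children={
    "U": Node(player=1, children={"U'": Terminal((0, 0)), "D'": Terminal((2, 1))}),
    "D": Node(player=1, children={"U'": Terminal((1, 2)), "D'": Terminal((3, 1))}),
})
```

In a perfect-information game every node gets its own information_set id (singletons); imperfect information is modeled by giving several of a player's nodes the same id, so a pure strategy must pick the same action label at all of them.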
A game over an expectiminimax tree, like that of backgammon, has no imperfect information but has moves of chance. Poker, for example, has both moves of chance and imperfect information. A complete extensive-form representation specifies: the players of a game; for every player, every opportunity they have to move; what each player can do at each of their moves; what each player knows for every move; and the payoffs received by every player for every possible combination of moves. The game on the right has two players: 1 and 2. The numbers by every non-terminal node indicate to which player that decision node belongs. The numbers by every terminal node represent the payoffs to the players; the labels by every edge of the graph are the name of the action. The initial node belongs to player 1. Play according to the tree is as follows: player 1 chooses between U and D, and then player 2 chooses between U' and D'. There are four outcomes represented by the four terminal nodes of the tree: (U,U'), (U,D'), (D,U') and (D,D'); the payoffs associated with each outcome are as specified in the tree. If player 1 plays D, player 2 will play U' to maximise their payoff and so player 1 will only receive 1.
However, if player 1 plays U, player 2 maximises their payoff by playing D' and player 1 receives 2. Player 1 prefers 2 to 1 and so will play U