1.
Chess
–
Chess is a two-player strategy board game played on a chessboard, a checkered gameboard with 64 squares arranged in an eight-by-eight grid. Chess is played by millions of people worldwide, both amateurs and professionals. Each player begins the game with 16 pieces: one king, one queen, two rooks, two knights, two bishops, and eight pawns. Each of the six piece types moves differently, with the most powerful being the queen. The objective is to checkmate the opponent's king by placing it under an inescapable threat of capture; to this end, a player's pieces are used to attack and capture the opponent's pieces. In addition to checkmate, the game can be won by the voluntary resignation of the opponent, which typically occurs when too much material has been lost or checkmate appears unavoidable. A game may also result in a draw in several ways. Chess is believed to have originated in India some time before the 7th century, deriving from the Indian game chaturanga; chaturanga is also the likely ancestor of the Eastern strategy games xiangqi, janggi, and shogi. The pieces took on their current powers in Spain in the late 15th century. The first generally recognized World Chess Champion, Wilhelm Steinitz, claimed his title in 1886; since 1948, the World Championship has been controlled by FIDE, the international governing body. There is also a Correspondence Chess World Championship and a World Computer Chess Championship, and online chess has opened amateur and professional competition to a wide and varied group of players. There are also many chess variants with different rules and different pieces. FIDE awards titles to skilled players, the highest of which is grandmaster. Many national chess organizations also have a title system; however, these are not recognised by FIDE. The term "master" may refer to a formal title or may be used more loosely for any skilled player. Until recently, chess was a sport of the International Olympic Committee. 
Chess was included in the 2006 and 2010 Asian Games. Since the 1990s, computer analysis has contributed significantly to chess theory, particularly in the endgame. The computer IBM Deep Blue was the first machine to overcome a reigning World Chess Champion in a match when it defeated Garry Kasparov in 1997, and the rise of strong computer programs that can be run on hand-held devices has led to increasing concerns about cheating during tournaments. The official rules of chess are maintained by FIDE, chess's international governing body; along with information on official chess tournaments, the rules are described in the Laws of Chess section of the FIDE Handbook. Chess is played on a board of eight rows and eight columns. The colors of the 64 squares alternate and are referred to as light and dark squares. The chessboard is placed with a light square at the right-hand end of the rank nearest to each player.

2.
Game theory
–
Game theory is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. Game theory is used in economics, political science, and psychology, as well as in logic and computer science. Originally it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations and is now an umbrella term for the science of logical decision making in humans, animals, and computers. Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets; his paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility. This theory was developed extensively in the 1950s by many scholars, and game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields: the Nobel Memorial Prize in Economic Sciences went to game theorist Jean Tirole in 2014, and John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern mathematical game theory. The first known discussion of game theory occurred in a letter written in 1713 by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat. In this letter, Waldegrave provides a mixed-strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation. 
In 1913, Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems; the Danish mathematician Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was proved false. Game theory did not really exist as a field until John von Neumann published a paper in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.
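The mixed-strategy equilibria of two-person zero-sum games that von Neumann's theorem guarantees can be computed in closed form for 2×2 games. Below is a minimal sketch (the function name is illustrative) using the standard formula for a 2×2 zero-sum game without a saddle point, applied to matching pennies:

```python
from fractions import Fraction

def solve_2x2_zero_sum(m):
    """Value and row player's optimal mixed strategy for a 2x2
    zero-sum game with payoff matrix m (payoffs to the row player),
    assuming no saddle point, so the denominator below is nonzero."""
    (a, b), (c, d) = m
    denom = a + d - b - c
    p = Fraction(d - c, denom)               # probability of playing row 0
    value = Fraction(a * d - b * c, denom)   # game value to the row player
    return value, (p, 1 - p)

# Matching pennies: the row player wins 1 on a match, loses 1 otherwise.
value, strategy = solve_2x2_zero_sum([[1, -1], [-1, 1]])
print(value, strategy)   # 0 (Fraction(1, 2), Fraction(1, 2))
```

As expected, the value of matching pennies is 0 and the optimal strategy mixes both actions equally.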

3.
Simultaneous game
–
In game theory, a simultaneous game is a game where each player chooses his action without knowledge of the actions chosen by other players. Normal-form representations are used for simultaneous games. Rock-Paper-Scissors, a widely played game, is a real-life example of a simultaneous game: both players make a decision at the same time, randomly, without prior knowledge of the opponent's decision. There are two players in the game, and each of them has three different strategies to choose from. We display Player 1's strategies as rows and Player 2's strategies as columns; in the table, the numbers in red represent the payoff to Player 1, and the numbers in blue represent the payoff to Player 2. In game theory terms, the prisoner's dilemma is another example of a simultaneous game.
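The full payoff table for two-player Rock-Paper-Scissors can be generated programmatically. A small sketch, assuming the usual convention of 1 for a win, -1 for a loss, and 0 for a tie:

```python
# Standard Rock-Paper-Scissors payoffs: win = 1, loss = -1, tie = 0.
MOVES = ("Rock", "Paper", "Scissors")
BEATS = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}

def payoff(p1, p2):
    """(payoff to Player 1, payoff to Player 2) for one joint choice."""
    if p1 == p2:
        return (0, 0)
    return (1, -1) if BEATS[p1] == p2 else (-1, 1)

# Build the normal-form table: rows are Player 1's strategies,
# columns are Player 2's strategies.
table = {(a, b): payoff(a, b) for a in MOVES for b in MOVES}
print(table[("Rock", "Scissors")])   # (1, -1): rock beats scissors
```

Note that every cell sums to zero, which is exactly the zero-sum property.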

4.
Extensive-form game
–
Extensive-form games also allow representation of incomplete information in the form of chance events encoded as moves by nature; the rest of this article follows this approach with motivating examples. The general definition was introduced by Harold W. Kuhn in 1953. Each player's subset of nodes is referred to as the nodes of the player. Each node of the Chance player has a probability distribution over its outgoing edges; at any given non-terminal node belonging to Chance, an outgoing branch is chosen according to the probability distribution. A pure strategy for a player thus consists of a selection: choosing precisely one class of outgoing edges for every information set. In a game of perfect information, the information sets are singletons. It is less evident how payoffs should be interpreted in games with Chance nodes; these can be made precise using epistemic modal logic (see Shoham & Leyton-Brown for details). A perfect-information two-player game over a tree can be represented as an extensive-form game with outcomes; examples of such games include tic-tac-toe, chess, and infinite chess. A game over an expectiminimax tree, like that of backgammon, has no imperfect information but has moves of chance. Poker, for example, has both moves of chance and imperfect information. The numbers by every non-terminal node indicate to which player that decision node belongs; the numbers by every terminal node represent the payoffs to the players; and the labels by every edge of the graph are the name of the action that edge represents. The initial node belongs to player 1, indicating that player 1 moves first. Play according to the tree is as follows: player 1 chooses between U and D; player 2 observes player 1's choice and then chooses between U and D. The payoffs are as specified in the tree, and there are four outcomes, represented by the four terminal nodes of the tree. 
The payoffs associated with each outcome are as follows: if player 1 plays D, player 2 will play U to maximise his payoff, and so player 1 will only receive 1. However, if player 1 plays U, player 2 maximises his payoff by playing D; player 1 prefers 2 to 1, and so will play U, and player 2 will play D. This is the subgame perfect equilibrium. An advantage of representing the game in this way is that the order of play is clear: the tree shows that player 1 moves first and that player 2 observes this move. However, in some games play does not occur like this; one player does not always observe the choice of another. An information set is a set of decision nodes belonging to a single player such that, when play reaches the set, that player cannot tell which node within it has been reached. In extensive form, an information set is indicated by a dotted line connecting all nodes in that set, or sometimes by a loop drawn around all the nodes in that set.
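The reasoning above is backward induction, and it can be made concrete in a few lines. The text does not give all four payoff pairs, so the numbers below are illustrative, chosen only to match the described reasoning (player 2 answers D with U, answers U with D, and player 1 then prefers U):

```python
# (player 1's move, player 2's move) -> (payoff to player 1, payoff to player 2)
# Hypothetical payoffs consistent with the description above.
PAYOFFS = {
    ("U", "U"): (0, 0),
    ("U", "D"): (2, 1),
    ("D", "U"): (1, 2),
    ("D", "D"): (0, 0),
}

def backward_induction():
    """Solve the two-stage game: player 2 best-responds to each of
    player 1's moves, then player 1 picks the move with the better outcome."""
    best = None
    for a1 in ("U", "D"):
        # Player 2 observes a1 and maximises his own payoff.
        a2 = max(("U", "D"), key=lambda a: PAYOFFS[(a1, a)][1])
        outcome = PAYOFFS[(a1, a2)]
        if best is None or outcome[0] > best[2][0]:
            best = (a1, a2, outcome)
    return best

print(backward_induction())   # ('U', 'D', (2, 1))
```

With these payoffs the procedure recovers exactly the equilibrium described: player 1 plays U, player 2 responds with D, and player 1 receives 2.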

5.
Combinatorial game
–
Combinatorial game theory (CGT) is a branch of mathematics and theoretical computer science that typically studies sequential games with perfect information. Study has largely been confined to two-player games that have a position in which the players take turns changing in defined ways or moves to achieve a defined winning condition. However, as mathematical techniques advance, the types of game that can be mathematically analyzed expand. In CGT, the moves in these and other games are represented as a game tree. CGT has a different emphasis than traditional or economic game theory, which was initially developed to study games with simple combinatorial structure. Essentially, CGT has contributed new methods for analyzing game trees, for example using surreal numbers. The type of games studied by CGT is also of interest in artificial intelligence, although in CGT there has been less emphasis on refining practical search algorithms and more emphasis on descriptive theoretical results. An important notion in CGT is that of the solved game. For example, tic-tac-toe is considered a solved game, as it can be proven that any game will result in a draw if both players play optimally. Deriving similar results for games with rich combinatorial structures is difficult; for instance, in 2007 it was announced that checkers has been weakly solved, with optimal play by both sides also leading to a draw, but this result was a computer-assisted proof. Other real-world games are too complicated to allow complete analysis today. Applying CGT to a position attempts to determine the sequence of moves for both players until the game ends, and by doing so discover the optimum move in any position. In practice, this process is difficult unless the game is very simple. However, a number of games fall into both categories: Nim, for instance, was instrumental in the foundation of CGT, and tic-tac-toe is still used to teach basic principles of game AI design to computer science students. 
CGT arose in relation to the theory of impartial games, in which any play available to one player must be available to the other as well. One very important such game is Nim, which can be solved completely. Nim is an impartial game for two players, subject to the normal play condition, which means that a player who cannot move loses. The results of Berlekamp, Conway, and Guy were published in their book Winning Ways for your Mathematical Plays in 1982; however, the first work published on the subject was Conway's 1976 book On Numbers and Games, also known as ONAG, which introduced the concept of surreal numbers and the generalization to games. On Numbers and Games was also a fruit of the collaboration between Berlekamp, Conway, and Guy. Combinatorial games are generally, by convention, put into a form where one player wins when the other has no moves remaining.
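Nim's complete solution is the classic Bouton/Sprague-Grundy analysis: under normal play, the player to move wins exactly when the XOR ("nim-sum") of the heap sizes is nonzero, and a winning move is one that restores the nim-sum to zero. A short sketch:

```python
from functools import reduce
from operator import xor

def nim_sum(heaps):
    """XOR of the heap sizes; nonzero means the player to move can win
    under the normal play condition."""
    return reduce(xor, heaps, 0)

def winning_move(heaps):
    """Return (heap index, new heap size) reaching nim-sum 0, or None
    if the position is already lost for the player to move."""
    s = nim_sum(heaps)
    if s == 0:
        return None
    for i, h in enumerate(heaps):
        target = h ^ s
        if target < h:           # a legal move must shrink the heap
            return (i, target)

print(winning_move([3, 4, 5]))   # (0, 1): the position 1, 4, 5 has nim-sum 0
```

For example, from heaps of 3, 4, and 5 the winning reply is to reduce the 3-heap to 1, since 1 XOR 4 XOR 5 = 0.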

6.
Backgammon
–
Backgammon is one of the oldest board games known. It is a two-player game in which playing pieces are moved according to the roll of dice, and a player wins by removing all of their pieces from the board before their opponent. Backgammon is a member of the tables family, one of the oldest classes of board games in the world. Backgammon involves a combination of strategy and luck: while the dice may determine the outcome of a single game, the better player will accumulate the better record over a series of many games, somewhat like poker. With each roll of the dice, players must choose from numerous options for moving their checkers. The optional use of a doubling cube allows players to raise the stakes during the game. Like chess, backgammon has been studied with great interest by computer scientists; owing to this research, backgammon software has been developed that is capable of beating world-class human players. Backgammon playing pieces are known variously as checkers, draughts, stones, men, counters, pawns, discs, pips, chips, or nips. The objective is to remove all of one's own checkers from the board before one's opponent can do the same. In the most often-played variants the checkers are scattered at first, and as the playing time for each individual game is short, it is often played in matches where victory is awarded to the first player to reach a certain number of points. Each side of the board has a track of 12 long triangles; the points form a continuous track in the shape of a horseshoe and are numbered from 1 to 24. In the most commonly used setup, each player begins with fifteen checkers. The two players move their checkers in opposing directions, from the 24-point towards the 1-point. Points 1 through 6 are called the home board or inner board, and points 7 through 12 are called the outer board. 
The 7-point is referred to as the bar point, and the 13-point as the midpoint. To start the game, each player rolls one die, and the player with the higher number moves first using the numbers shown on both dice; if the players roll the same number, they must roll again. Both dice must land completely flat on the side of the gameboard. The players then alternate turns, rolling two dice at the beginning of each turn. After rolling the dice, players must, if possible, move their checkers according to the number shown on each die. For example, if the player rolls a 6 and a 3, the player must move one checker six points forward, and another or the same checker three points forward. The same checker may be moved twice, as long as the two moves can be made separately and legally: six and then three, or three and then six.
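The opening-roll rule described above (one die each, reroll ties, higher roller moves first with both numbers) can be sketched in a few lines. This is only an illustration of that single rule, not a backgammon engine; the function name is illustrative:

```python
import random

def opening_roll(rng):
    """Each player rolls one die; ties are rerolled, and the higher
    roller moves first using both numbers, per the rule above."""
    while True:
        p1, p2 = rng.randint(1, 6), rng.randint(1, 6)
        if p1 != p2:
            first = "player 1" if p1 > p2 else "player 2"
            return first, (p1, p2)

first, dice = opening_roll(random.Random(0))
print(first, dice)
```

Passing a seeded random.Random makes the sketch reproducible for testing.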

7.
Tic-tac-toe
–
Tic-tac-toe is a paper-and-pencil game for two players, X and O, who take turns marking the spaces in a 3×3 grid. The player who succeeds in placing three of their marks in a horizontal, vertical, or diagonal row wins the game. Players soon discover that best play from both parties leads to a draw; hence, tic-tac-toe is most often played by young children. The game can be generalized to an m,n,k-game, in which two players alternate placing stones of their own color on an m×n board with the goal of getting k of their own color in a row. Harary's generalized tic-tac-toe is an even broader generalization of tic-tac-toe, and the game can also be generalized as an n^d game; tic-tac-toe is the game where n equals 3 and d equals 2. According to Claudia Zaslavsky's book Tic Tac Toe: And Other Three-In-A-Row Games from Ancient Egypt to the Modern Computer, tic-tac-toe can be traced back to ancient Egypt. Another closely related ancient game is Three Men's Morris, which is also played on a simple grid. An early variation of tic-tac-toe, called Terni Lapilli, was played in the Roman Empire; instead of having any number of pieces, each player had only three, and thus had to move them around to empty spaces to keep playing. The game's grid markings have been found chalked all over Rome. The different names of the game are more recent: "noughts and crosses" is the British name, and in his novel Can You Forgive Her? (1864), Anthony Trollope refers to a clerk playing "tit-tat-toe". Tic-tac-toe may also derive from "tick-tack", the name of an old version of backgammon first described in 1558; the U.S. renaming of noughts and crosses as tic-tac-toe occurred in the 20th century. In 1952, British computer scientist Alexander S. Douglas developed OXO for the EDSAC computer at the University of Cambridge; its computer player could play perfect games of tic-tac-toe against a human opponent. 
In 1975, tic-tac-toe was also used by MIT students to demonstrate the power of Tinkertoy elements: the Tinkertoy computer, made out of only Tinkertoys, is able to play tic-tac-toe perfectly, and it is currently on display at the Museum of Science, Boston. A position is merely a state of the board, while a game usually refers to the way a position is obtained. Naive counting leads to 19,683 possible board layouts and 362,880 possible games; however, two matters considerably reduce these numbers: the game ends when three-in-a-row is obtained, and, if X starts, the number of Xs is always either equal to or exactly one more than the number of Os. The complete analysis is complicated by the definitions used when setting the conditions.
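The naive counts quoted above are simple arithmetic (3^9 layouts, 9! move orders), and the parity constraint on X and O counts is easy to state as code. A small sketch, with an illustrative helper name:

```python
from math import factorial

# Naive upper bounds: every cell independently X, O, or empty gives
# 3**9 layouts; filling all nine cells in some order gives 9! sequences.
assert 3 ** 9 == 19_683
assert factorial(9) == 362_880

def counts_plausible(board):
    """X moves first, so any reachable board (string of 'X', 'O', '.')
    has either equal numbers of Xs and Os, or exactly one more X."""
    x, o = board.count("X"), board.count("O")
    return x == o or x == o + 1

print(counts_plausible("XOX......"))   # True
print(counts_plausible("XOO......"))   # False: two Os but only one X
```

This parity check is one of the two pruning rules named in the text; the other (stopping at three-in-a-row) requires tracking the move order.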

8.
Go (board game)
–
Go is an abstract strategy board game for two players, in which the aim is to surround more territory than the opponent. The game was invented in ancient China more than 2,500 years ago, and it was considered one of the four essential arts of the cultured aristocratic Chinese scholar caste in antiquity. The earliest written reference to the game is recognized as the historical annal Zuo Zhuan. The modern game of Go as we know it was formalized in Japan in the 15th century CE. Despite its relatively simple rules, Go is very complex, even more so than chess, and possesses more possibilities than the total number of atoms in the visible universe. Compared to chess, Go has both a larger board with more scope for play and longer games, and, on average, many more alternatives to consider per move. The playing pieces are called stones; one player uses the white stones and the other, black. The players take turns placing the stones on the vacant intersections of a board with a 19×19 grid of lines. Beginners often play on smaller 9×9 and 13×13 boards, and archaeological evidence shows that the game was played in earlier centuries on a board with a 17×17 grid; however, boards with a 19×19 grid had become standard by the time the game reached Korea in the 5th century CE. The objective of Go, as the translation of its name implies, is to fully surround a larger total area of the board than the opponent. Once placed on the board, stones may not be moved; capture happens when a stone or group of stones is surrounded by opposing stones on all orthogonally adjacent points. The game proceeds until neither player wishes to make another move. When a game concludes, the territory is counted along with captured stones and komi to determine the winner. Games may also be terminated by resignation. As of mid-2008, there were well over 40 million Go players worldwide, the overwhelming majority of them living in East Asia. 
As of December 2015, the International Go Federation has a total of 75 member countries. Go is an adversarial game with the objective of surrounding a larger total area of the board with one's stones than the opponent. As the game progresses, the players position stones on the board to map out formations; contests between opposing formations are often extremely complex and may result in the expansion, reduction, or wholesale capture and loss of formation stones. A basic principle of Go is that a group of stones must have at least one liberty to remain on the board; a liberty is an open point bordering the group. An enclosed liberty is called an eye, and a group of stones with two or more eyes is said to be unconditionally alive: such groups cannot be captured, even if surrounded. A group with one eye or no eyes is dead and cannot resist eventual capture. The general strategy is to expand one's territory, attack the opponent's weak groups, and always stay mindful of the life status of one's own groups. The liberties of groups are countable. Situations where mutually opposing groups must capture each other or die are called capturing races, or semeai. In a capturing race, the group with more liberties will ultimately be able to capture the opponent's stones. Capturing races and the elements of life or death are the primary challenges of Go.
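Because a liberty is just an open point bordering a group, counting liberties is a flood fill over same-colored orthogonal neighbors. A minimal sketch on a tiny board ('.' is empty, 'B'/'W' are stones; the function name is illustrative):

```python
def liberties(board, row, col):
    """Count the liberties of the group containing the stone at
    (row, col), using flood fill over same-colored neighbors."""
    color = board[row][col]
    assert color in "BW"
    size = len(board)
    group, libs, stack = set(), set(), [(row, col)]
    while stack:
        r, c = stack.pop()
        if (r, c) in group:
            continue
        group.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                if board[nr][nc] == ".":
                    libs.add((nr, nc))       # open point bordering the group
                elif board[nr][nc] == color:
                    stack.append((nr, nc))   # same color: part of the group
    return len(libs)

board = ["....",
         ".BB.",
         ".BW.",
         "...."]
print(liberties(board, 1, 1))   # 6: liberties of the three-stone black group
print(liberties(board, 2, 2))   # 2: the lone white stone is nearly surrounded
```

When a move reduces a group's liberty count to zero, that group is captured.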

9.
Game tree
–
In game theory, a game tree is a directed graph whose nodes are positions in a game and whose edges are moves. The diagram shows the first two levels, or plies, in the game tree for tic-tac-toe. Rotations and reflections of positions are equivalent, so the first player has three choices of move: in the center, at the edge, or in the corner. The second player has two choices for the reply if the first player played in the center, otherwise five choices. The number of leaf nodes in the complete game tree is the number of possible different ways the game can be played; for example, the game tree for tic-tac-toe has 255,168 leaf nodes. Game trees are important in artificial intelligence because one way to pick the best move in a game is to search the game tree using the minimax algorithm or its variants. The game tree for tic-tac-toe is easily searchable, but the complete trees for larger games like chess are much too large to search. Instead, a chess-playing program searches a partial game tree, typically as many plies from the current position as it can search in the time available. Except for the case of pathological game trees, increasing the search depth generally improves the chance of picking the best move. Two-person games can also be represented as and-or trees: for the first player to win a game, there must exist a winning move for all moves of the second player. This is represented in the and-or tree by using disjunction to represent the first player's alternative moves and conjunction to represent the second player's moves. With a complete game tree, it is possible to solve the game, that is to say, find a sequence of moves that either the first or second player can follow that will guarantee the best possible outcome for that player. The algorithm can be described recursively as follows. First, color the final ply of the game tree so that all wins for player 1 are colored one way, all wins for player 2 are colored another way, and all ties are colored a third way. Then look at the next ply up: if the player to move at a node has an immediately lower node colored for them, color this node for that player; if all immediately lower nodes are colored for the opposing player, color this node for the opposing player; otherwise, color this node a tie. Repeat for each ply, moving upwards, until all nodes are colored. The color of the root node will determine the nature of the game. The diagram shows a game tree for an arbitrary game, colored using the above algorithm. It is usually possible to solve a game using only a subset of the game tree; any subtree that can be used to solve the game is known as a decision tree, and the sizes of decision trees of various shapes are used as measures of game complexity.
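The tic-tac-toe leaf count of 255,168 can be verified directly, since the whole tree is small enough to enumerate: count every play sequence that stops as soon as a player completes three in a row or the board fills.

```python
# All eight winning lines on a 3x3 board, indexed 0..8 row by row.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def won(board, mark):
    return any(all(board[i] == mark for i in line) for line in LINES)

def count_games(board=None, mark="X"):
    """Count the leaves of the tic-tac-toe game tree: play sequences
    ending at a win or a full board."""
    if board is None:
        board = ["."] * 9
    total = 0
    for i in range(9):
        if board[i] == ".":
            board[i] = mark
            if won(board, mark) or "." not in board:
                total += 1   # terminal position: one leaf of the tree
            else:
                total += count_games(board, "O" if mark == "X" else "X")
            board[i] = "."   # undo the move (backtracking)
    return total

print(count_games())   # 255168
```

The enumeration visits at most 9! = 362,880 sequences, so it runs in well under a second.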

10.
Perfect information
–
In economics, perfect information is a feature of perfect competition. Perfect information is importantly different from complete information, which implies common knowledge of each player's utility functions and payoffs. Chess is an example of a game with perfect information, as each player can see all of the pieces on the board at all times. Other examples of perfect-information games include tic-tac-toe, Irensei, and Go. Card games where each player's cards are hidden from other players, as in contract bridge, are examples of games with imperfect information. Related concepts include complete information, extensive-form games, information asymmetry, partial knowledge, perfect competition, screening games, and signaling games.

11.
Subgame perfect equilibrium
–
In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Every finite extensive game has a subgame perfect equilibrium. A common method for determining subgame perfect equilibria in the case of a finite game is backward induction. Here one first considers the last actions of the game and determines which actions the final mover should take in each possible circumstance to maximize his or her utility. One then supposes that the last actor will take these actions, and this process continues backwards until one reaches the first move of the game. The strategies which remain are the set of all subgame perfect equilibria for finite-horizon extensive games of perfect information. However, backward induction cannot be applied to games of imperfect or incomplete information, because this entails cutting through non-singleton information sets. A subgame perfect equilibrium necessarily satisfies the one-shot deviation principle. The set of subgame perfect equilibria for a given game is always a subset of the set of Nash equilibria for that game; in some cases the sets can be identical. The ultimatum game provides an intuitive example of a game with fewer subgame perfect equilibria than Nash equilibria. An example of a game possessing an ordinary Nash equilibrium as well as a subgame perfect equilibrium is shown in Figure 1. Player 1 moves first, and player 2's choice of whether to be kind or unkind to player 1 may depend on the choice previously made by player 1. The payoff matrix of the game is shown in Table 1. Observe that there are two different Nash equilibria, given by the strategy profiles involving L and R respectively. Consider the equilibrium given by the strategy profile involving L: more formally, it is not an equilibrium with respect to the subgame induced by player 2's decision node. 
It is likely that in real life player 2 would choose the kind strategy instead, which would in turn lead player 1 to change his strategy to R. The resulting profile involving R is not only a Nash equilibrium but is also an equilibrium in all subgames; it is therefore a subgame perfect equilibrium. Reinhard Selten proved that any game which can be broken into sub-games containing a sub-set of all the choices in the main game will have a subgame perfect Nash equilibrium strategy. Subgame perfection is used with games of complete information, and it can be used with extensive-form games of complete but imperfect information. One game in which the backward induction solution is well known is tic-tac-toe, but in theory even Go has such an optimum strategy for all players.

12.
Claude Shannon
–
Claude Elwood Shannon was an American mathematician, electrical engineer, and cryptographer known as "the father of information theory". Shannon is noted for having founded information theory with his 1948 paper "A Mathematical Theory of Communication". Shannon also contributed to the field of cryptanalysis for national defense during World War II, including his work on codebreaking. Shannon was born in Petoskey, Michigan and grew up in Gaylord. His father, Claude Sr., a descendant of early settlers of New Jersey, was a self-made businessman and, for a while, a Judge of Probate; Shannon's mother, Mabel Wolf Shannon, was a language teacher. Most of the first 16 years of Shannon's life were spent in Gaylord, where he attended public school, graduating from Gaylord High School in 1932. Shannon showed an inclination towards mechanical and electrical things, and his best subjects were science and mathematics. At home he constructed such devices as models of planes and a model boat. While growing up, he worked under Andrew Coltrey as a messenger for the Western Union company. His childhood hero was Thomas Edison, who he learned was a distant cousin; both were descendants of John Ogden, a colonial leader and an ancestor of many distinguished people. Shannon was apolitical and an atheist. In 1932, Shannon entered the University of Michigan, where he was introduced to the work of George Boole. He graduated in 1936 with two degrees, one in electrical engineering and the other in mathematics. In 1936, Shannon began his graduate studies in electrical engineering at MIT, where he worked on Vannevar Bush's differential analyzer. While studying the complicated ad hoc circuits of this analyzer, Shannon designed switching circuits based on Boole's concepts. In 1937, he wrote his master's thesis, A Symbolic Analysis of Relay and Switching Circuits; a paper from this thesis was published in 1938. 
In this work, Shannon proved that his switching circuits could be used to simplify the arrangement of the electromechanical relays that were then used in telephone call routing switches. Next, he expanded this concept, proving that these circuits could solve all problems that Boolean algebra could solve. In the last chapter, he presents diagrams of several circuits. Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers, and Shannon's work became the foundation of digital design as it became widely known in the electrical engineering community. The theoretical rigor of Shannon's work superseded the ad hoc methods that had prevailed previously. Howard Gardner called Shannon's thesis "possibly the most important, and also the most noted, master's thesis of the century".
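Shannon's central observation can be sketched in modern terms: switches in series implement logical AND, switches in parallel implement OR, and a normally-closed contact implements NOT. Modeling each switch as a boolean (the names below are illustrative, not Shannon's notation):

```python
def series(a, b):
    """Two switches in series: current flows only if both are closed (AND)."""
    return a and b

def parallel(a, b):
    """Two switches in parallel: current flows if either is closed (OR)."""
    return a or b

def normally_closed(a):
    """A normally-closed contact: conducts exactly when the switch is open (NOT)."""
    return not a

def circuit(a, b, c):
    """A relay network computing (A AND B) OR (NOT C)."""
    return parallel(series(a, b), normally_closed(c))

print(circuit(True, True, True))    # True: the A-B series branch conducts
print(circuit(False, True, True))   # False: neither branch conducts
```

Any Boolean expression can be built this way, which is the sense in which relay circuits "could solve all problems that Boolean algebra could solve".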

13.
Graphical game theory
–
In game theory, the common ways to describe a game are the normal form and the extensive form. The graphical form, introduced by Michael Kearns, Michael L. Littman, and Satinder Singh in their paper Graphical Models for Game Theory, is a compact representation of a game that exploits the structure of interaction among the participants. Consider a game with n players, each with m strategies. We represent the players as nodes in a graph G in which each player has a utility function that depends only on his own strategy and those of his neighbors; the fewer other players a utility function depends on, the smaller the graphical representation. Each node i in G has a utility function u_i : {1, ..., m}^(d_i + 1) → R, where d_i is the number of neighbors of node i; u_i specifies the utility of player i as a function of his strategy as well as those of his neighbors. For a general n-player game in which each player has m possible strategies, the size of the graphical representation is O(n m^(d+1)), where d is the maximal node degree in the graph. If d ≪ n, then the graphical representation is much smaller than the normal form. In the case where each player's utility function depends on only one other player, the maximal degree of the graph is 1, so the size of the input will be n m^2. Finding a Nash equilibrium in a game takes time exponential in the size of the representation. If the graphical representation of the game is a tree, we can find the equilibrium in polynomial time; in the general case, where the maximal degree of a node is 3 or more, the problem is intractable.
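The size comparison above is simple arithmetic and can be made concrete. A normal-form table stores a payoff for every player at every joint strategy profile (n * m^n numbers), while a graphical game stores, for each player, a table over its own strategy and its neighbors' strategies (about n * m^(d+1) numbers when every node has degree d):

```python
def normal_form_size(n, m):
    """Payoff entries in a full normal-form table: one number per
    player per joint strategy profile."""
    return n * m ** n

def graphical_size(n, m, d):
    """Payoff entries in a graphical game where every player has
    d neighbors: each local table ranges over d + 1 players."""
    return n * m ** (d + 1)

n, m, d = 20, 2, 3   # 20 players, 2 strategies each, sparse interaction
print(normal_form_size(n, m))    # 20971520
print(graphical_size(n, m, d))   # 320
```

With 20 players the normal form needs over twenty million entries, while the graphical form with degree 3 needs only 320, illustrating the d ≪ n regime.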

14.
Information set (game theory)
–
In game theory, an information set is a set that, for a particular player, establishes all the possible moves that could have taken place in the game so far, given what that player has observed. If the game has perfect information, every information set contains only one member, namely the point actually reached at that stage of the game. Otherwise, some players cannot be sure exactly what has taken place so far in the game. More specifically, in the extensive form, an information set is a set of decision nodes among which the player whose turn it is to move cannot distinguish. The notion of an information set was introduced by John von Neumann. At the right are two versions of the battle of the sexes game, shown in extensive form. The first game is simply sequential: when player 2 has the chance to move, he or she knows whether player 1 chose O or F. The second game is also sequential, but the dotted line shows player 2's information set; this is the way to show that when player 2 moves, he or she does not know what player 1 did. This difference also leads to different predictions for the two games. In the first game, player 1 has the upper hand: they know that they can choose O safely, because once player 2 knows that player 1 has chosen opera, player 2 would rather go along for O and get 2 than choose F. Formally, that is applying subgame perfection to solve the game. In the second game, player 2 cannot observe what player 1 did, so this argument no longer applies.

15.
Normal-form game
–
In game theory, normal form is a description of a game. Unlike extensive form, normal-form representations are not graphical per se, but rather represent the game by way of a matrix. While this approach can be of greater use in identifying strictly dominated strategies and Nash equilibria, some information is lost as compared to extensive-form representations. The normal-form representation of a game includes all perceptible and conceivable strategies, and their corresponding payoffs, for each player. In static games of complete, perfect information, a normal-form representation of a game is a specification of players' strategy spaces and payoff functions. The matrix to the right is a representation of a game in which players move simultaneously. For example, if player 1 plays top and player 2 plays left, player 1 receives 4. In each cell, the first number represents the payoff to the row player, and the second number represents the payoff to the column player. Often, symmetric games are represented with only one payoff: this is the payoff for the row player. For example, the payoff matrices on the right and left below represent the same game. The payoff matrix facilitates elimination of dominated strategies, and it is often used to illustrate this concept. For example, in the prisoner's dilemma, we can see that each prisoner can either cooperate or defect. If exactly one prisoner defects, he gets off easily while the other prisoner is locked up for a long time. However, if they both defect, they will both be locked up for a shorter time. One can determine that Cooperate is strictly dominated by Defect: one compares the first numbers in each column, in this case 0 > −1 and −2 > −5. This shows that no matter what the column player chooses, the row player does better by choosing Defect. Similarly, one compares the second payoff in each row; again 0 > −1 and −2 > −5. This shows that no matter what row does, column does better by choosing Defect. This demonstrates that the unique Nash equilibrium of this game is (Defect, Defect). These matrices only represent games in which moves are simultaneous.
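The elimination argument above can be sketched in code; the payoff numbers are the ones implied by the comparisons in the text (0 > −1 and −2 > −5):

```python
# Strict dominance in the prisoner's dilemma. Payoffs (row, column), with
# C = cooperate, D = defect: mutual cooperation (-1, -1), mutual defection
# (-2, -2), unilateral defection (0, -5) / (-5, 0).
payoffs = {
    ('C', 'C'): (-1, -1), ('C', 'D'): (-5, 0),
    ('D', 'C'): (0, -5), ('D', 'D'): (-2, -2),
}
strategies = ['C', 'D']

def strictly_dominated(player, s):
    """True if some other strategy gives strictly more against every
    opposing strategy, regardless of what the opponent does."""
    def pay(own, opp):
        cell = (own, opp) if player == 0 else (opp, own)
        return payoffs[cell][player]
    return any(all(pay(t, opp) > pay(s, opp) for opp in strategies)
               for t in strategies if t != s)

# Cooperate is strictly dominated for both players; Defect survives,
# leaving ('D', 'D') as the unique equilibrium.
```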
The above matrix does not represent the game in which player 1 moves first, observed by player 2, who then moves, because it does not specify each of player 2's strategies in that case. In order to represent this sequential game we must specify all of player 2's actions, even in contingencies that can never arise in the course of the game. In this game, player 2 has two actions, as before: Left and Right. Unlike before, he has four strategies, contingent on player 1's actions. Accordingly, to specify a game, the payoff function has to be specified for each player in the player set. D. Fudenberg and J. Tirole, Game Theory

16.
Preference (economics)
–
In economics and other social sciences, preference is the ordering of alternatives based on their relative utility, a process which results in an optimal choice. The character of the preferences is determined purely by taste factors, independent of considerations of prices and income. With the help of the scientific method, many practical decisions of life can be modelled. In 1926 Ragnar Frisch developed for the first time a mathematical model of preferences in the context of economic demand and utility functions. Up to then, economists had developed a theory of demand that omitted primitive characteristics of people. This omission ceased at the end of the 19th century; because binary choices are directly observable, the approach instantly appealed to economists. The search for observables in microeconomics is taken further by revealed preference theory. Since the pioneer efforts of Frisch in the 1920s, one of the issues which has pervaded the theory of preferences is the representability of a preference structure with a real-valued function. This has been achieved by mapping it to the mathematical index called utility. Von Neumann and Morgenstern's 1944 book Theory of Games and Economic Behavior treated preferences as a formal relation whose properties can be stated axiomatically. The economics of choice can thus be examined either at the level of utility functions or at the level of preferences. Suppose the set of all states of the world is X and an agent has a preference relation on X. It is common to mark the weak preference relation by ⪯. The symbol ∼ is used as a shorthand for the indifference relation: x ∼ y ⟺ (x ⪯ y and y ⪯ x), which reads "the agent is indifferent between y and x". The symbol ≺ is used as a shorthand for the strict preference relation: x ≺ y ⟺ (x ⪯ y and not y ⪯ x), which reads "the agent strictly prefers y to x".
In everyday speech, the statement "x is preferred to y" is generally understood to mean that someone chooses x over y. However, decision theory rests on more precise definitions of preferences, given that there are many experimental conditions influencing people's choices in many directions. Suppose a person is confronted with an experiment that she must solve with the aid of introspection: she is offered apples and oranges, and is asked to choose one of the two. A decision scientist observing this event would be inclined to say that whichever is chosen is the preferred alternative. Under several repetitions of the experiment, if the scientist observes that apples are chosen 51% of the time, it would mean that x ≻ y. If oranges are chosen half of the time, then x ∼ y. Finally, if she chooses oranges 51% of the time, it means that y ≻ x. Preference is here being identified with a greater frequency of choice.
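The shorthand relations can be sketched directly from a weak preference; the fruit utilities below are illustrative assumptions:

```python
# Deriving indifference (~) and strict preference from a weak preference,
# given as weakly_preferred(a, b) meaning "b is at least as good as a".
def indifferent(weakly_preferred, x, y):
    # x ~ y  iff  x is weakly below y and y is weakly below x
    return weakly_preferred(x, y) and weakly_preferred(y, x)

def strictly_preferred(weakly_preferred, x, y):
    # x < y (strictly)  iff  x is weakly below y but not conversely
    return weakly_preferred(x, y) and not weakly_preferred(y, x)

# Illustrative utility-based preference over fruit (assumed numbers).
utility = {'apple': 2, 'orange': 3, 'pear': 2}
weak = lambda a, b: utility[b] >= utility[a]
```

With these numbers, oranges are strictly preferred to apples, while apples and pears are indifferent, mirroring the frequency story in the text.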

17.
Economic equilibrium
–
In economics, economic equilibrium is a state where economic forces such as supply and demand are balanced, and in the absence of external influences the values of economic variables will not change. For example, in the textbook model of perfect competition, equilibrium occurs at the point at which quantity demanded and quantity supplied are equal. However, the concept of equilibrium in economics also applies to imperfectly competitive markets. Three basic properties of equilibrium in general have been proposed by Huw Dixon. Equilibrium property P1: The behavior of agents is consistent. Equilibrium property P2: No agent has an incentive to change its behavior. Equilibrium property P3: Equilibrium is the outcome of some dynamic process (stability). In a competitive equilibrium, supply equals demand. Property P1 is satisfied, because at the equilibrium price the amount supplied is equal to the amount demanded. Demand is chosen to maximize utility given the price, so no one on the demand side has any incentive to demand more or less at the prevailing price; likewise, supply is determined by firms maximizing their profits at the market price. Hence, agents on neither the demand side nor the supply side will have any incentive to alter their actions. To see whether Property P3 is satisfied, consider what happens when the price is above the equilibrium. In this case there is an excess supply, with the quantity supplied exceeding that demanded. This will tend to put downward pressure on the price to make it return to equilibrium. Likewise, where the price is below the equilibrium point, there is a shortage in supply leading to an increase in prices back to equilibrium. Not all equilibria are stable in the sense of Equilibrium property P3; it is possible to have competitive equilibria that are unstable. However, if an equilibrium is unstable, it raises the question of how you might reach it. Even if it satisfies properties P1 and P2, the absence of P3 means that the market can only be in the unstable equilibrium if it starts off there.
In most simple microeconomic stories of supply and demand, a static equilibrium is observed in a market; however, equilibrium may also be economy-wide or general, as opposed to the partial equilibrium of a single market. Equilibrium can change if there is a change in demand or supply conditions. For example, an increase in supply will disrupt the equilibrium, leading to lower prices. Eventually, a new equilibrium will be attained in most markets. Then, there will be no change in price or the amount of output bought and sold, until there is an exogenous shift in supply or demand. That is, there are no endogenous forces leading to a change in the price or the quantity. The Nash equilibrium is widely used in economics as the main alternative to competitive equilibrium. It is used whenever there is a strategic element to the behavior of agents.
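A minimal sketch of a market-clearing computation, using assumed linear supply and demand curves (the functional forms and numbers are illustrative):

```python
# Competitive equilibrium for linear curves Qd = a - b*p and Qs = c + d*p.
# Setting Qd = Qs gives the market-clearing price p* = (a - c) / (b + d).
def equilibrium(a, b, c, d):
    p = (a - c) / (b + d)
    return p, a - b * p  # market-clearing price and quantity

p_star, q_star = equilibrium(a=100, b=2, c=10, d=1)  # p* = 30, q* = 40

# Property P3 in miniature: above p* there is excess supply (pressure on the
# price to fall), below p* a shortage (pressure on the price to rise).
def excess_supply(p):
    return (10 + 1 * p) - (100 - 2 * p)
```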

18.
Solution concept
–
In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called solutions, and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium. Many solution concepts, for many games, will result in more than one solution. This puts any one of the solutions in doubt, so a game theorist may apply a refinement to narrow down the solutions. Each successive solution concept presented in the following improves on its predecessor by eliminating implausible equilibria in richer games. Let Γ be the class of all games and, for each game G ∈ Γ, let S_G be the set of strategy profiles of G. A solution concept is an element of the direct product Π_{G ∈ Γ} 2^(S_G), i.e. a function F : Γ → ⋃_{G ∈ Γ} 2^(S_G) such that F(G) ⊆ S_G for all G ∈ Γ. One simple solution concept is the elimination of strictly dominated strategies: players are assumed to be rational, and a strategy is strictly dominated when there is some other strategy available to the player that always has a higher payoff, regardless of the strategies that the other players choose. For example, in the prisoner's dilemma, cooperate is strictly dominated by defect for both players, because either player is always better off playing defect, regardless of what his opponent does. A Nash equilibrium is a strategy profile in which every strategy is a best response to the other strategies played. There are games that have multiple Nash equilibria, some of which are unrealistic. In the case of dynamic games, unrealistic Nash equilibria might be eliminated by applying backward induction, which assumes that future play will be rational. It therefore eliminates noncredible threats, because such threats would be irrational to carry out if a player were ever called upon to do so. For example, consider a dynamic game in which the players are an incumbent firm in an industry and a potential entrant to that industry.
As it stands, the incumbent has a monopoly over the industry. If the entrant chooses not to enter, the payoff to the incumbent is high and the entrant neither loses nor gains. If the entrant enters, the incumbent can fight or accommodate the entrant: it fights by lowering its price, running the entrant out of business and damaging its own profits; if it accommodates the entrant, it will lose some of its sales. If the entrant enters, the best response of the incumbent is to accommodate; if the incumbent accommodates, the best response of the entrant is to enter. Hence the strategy profile in which the entrant enters and the incumbent accommodates if the entrant enters is a Nash equilibrium. However, if the incumbent is going to play fight, the best response of the entrant is to not enter, and if the entrant does not enter, it does not matter what the incumbent chooses to do. Hence fight can be considered a best response of the incumbent if the entrant does not enter.
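The backward-induction reasoning above can be sketched with assumed payoffs (the numbers below are illustrative, chosen so that accommodating beats fighting for the incumbent):

```python
# Entry-deterrence game, payoffs as (entrant, incumbent): staying out gives
# (0, 10); entry met by "fight" gives (-1, 2); entry met by "accommodate"
# gives (3, 5). All numbers are illustrative assumptions.
STAY_OUT = (0, 10)
FIGHT = (-1, 2)
ACCOMMODATE = (3, 5)

def solve_entry_game():
    # Backward induction starts with the last mover: the incumbent compares
    # its own payoff from fighting versus accommodating after entry.
    response = 'fight' if FIGHT[1] > ACCOMMODATE[1] else 'accommodate'
    after_entry = FIGHT if response == 'fight' else ACCOMMODATE
    # The entrant anticipates that response when deciding whether to enter.
    entry = 'enter' if after_entry[0] > STAY_OUT[0] else 'stay out'
    return entry, response
```

Backward induction discards the non-credible threat to fight: since accommodating pays the incumbent more once entry has occurred, the entrant enters.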

19.
Nash equilibrium
–
The Nash equilibrium is one of the foundational concepts in game theory. The reality of the Nash equilibrium of a game can be tested using experimental economics methods. Game theorists use the Nash equilibrium concept to analyze the outcome of the strategic interaction of several decision makers. The simple insight underlying John Nash's idea is that one cannot predict the result of the choices of multiple decision makers if one analyzes those decisions in isolation. Instead, one must ask what each player would do, taking into account the decision-making of the others. Nash equilibrium has been used to analyze hostile situations like wars and arms races, and it has also been used to study to what extent people with different preferences can cooperate, and whether they will take risks to achieve a cooperative outcome. It has been used to study the adoption of technical standards. The Nash equilibrium was named after John Forbes Nash, Jr. A version of the Nash equilibrium concept was first known to be used in 1838 by Antoine Augustin Cournot in his theory of oligopoly. In Cournot's theory, firms choose how much output to produce to maximize their own profit. However, the best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when each firm's output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium. Cournot also introduced the concept of best response dynamics in his analysis of the stability of equilibrium. However, Nash's definition of equilibrium is broader than Cournot's. It is also broader than the definition of a Pareto-efficient equilibrium. The modern game-theoretic concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible actions.
The concept of the mixed-strategy Nash equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book The Theory of Games and Economic Behavior; however, their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition of equilibrium. According to Nash, an equilibrium point is an n-tuple such that each player's mixed strategy maximizes his payoff if the strategies of the others are held fixed. Thus each player's strategy is optimal against those of the others. Since the development of the Nash equilibrium concept, game theorists have discovered that it makes misleading predictions in certain circumstances. They have proposed many related solution concepts designed to overcome perceived flaws in the Nash concept. One particularly important issue is that some Nash equilibria may be based on threats that are not credible. In 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing his or her strategy. To see what this means, imagine that each player is told the strategies of the others.
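The informal definition can be checked mechanically for small games; the coordination-game payoffs below are illustrative assumptions:

```python
# A direct check of the Nash condition for pure strategies: a profile is an
# equilibrium if no player gains by unilaterally deviating. The coordination
# game below has two pure equilibria, one better for both players.
from itertools import product

def is_nash(payoffs, strategies, profile):
    for i in range(len(profile)):
        for deviation in strategies[i]:
            trial = list(profile)
            trial[i] = deviation
            if payoffs[tuple(trial)][i] > payoffs[profile][i]:
                return False  # a profitable unilateral deviation exists
    return True

coord = {('A', 'A'): (2, 2), ('A', 'B'): (0, 0),
         ('B', 'A'): (0, 0), ('B', 'B'): (1, 1)}
strats = [['A', 'B'], ['A', 'B']]

equilibria = [p for p in product(*strats) if is_nash(coord, strats, p)]
```

This also illustrates why multiple equilibria motivate refinements: both coordinated profiles pass the Nash test, even though one Pareto-dominates the other.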

20.
Bayesian Nash equilibrium
–
In game theory, a Bayesian game is a game in which the players do not have complete information about the other players, but have beliefs about them with known probability distributions. Following Harsanyi, a Bayesian game can be converted into a game of complete but imperfect information. Harsanyi describes a Bayesian game in the following way: in addition to the actual players in the game, there is a special player called Nature. Nature assigns to each player a random variable which could take values of types for each player. Harsanyi's approach to modeling a Bayesian game in such a way allows games of incomplete information to become games of imperfect information. The type of a player determines that player's payoff function; the probability associated with a type is the probability that the player, for whom the type is specified, is that type. In a Bayesian game, the incompleteness of information means that at least one player is unsure of the type of another player. Such games are called Bayesian because of the probabilistic analysis inherent in the game. The lack of information held by players and the modeling of beliefs mean that such games are also used to analyse imperfect-information scenarios. The normal-form representation of a game with perfect information is a specification of the strategy spaces and payoff functions of the players. A strategy for a player is a complete plan of action that covers every contingency of the game. The strategy space of a player is thus the set of all strategies available to that player. A payoff function is a function from the set of strategy profiles to the set of payoffs. In a Bayesian game, one has to specify strategy spaces, type spaces, payoff functions, and beliefs. A strategy for a player is a complete plan of action that covers every contingency that might arise for every type that player might be. A strategy must not only specify the actions of the player given the type that he is, but must also specify the actions he would take if he were of another type. Strategy spaces are defined as above. A type space for a player is just the set of all possible types of that player.
The beliefs of a player describe the uncertainty of that player about the types of the other players. Each belief is the probability of the other players having particular types, given the type of the player with that belief. A payoff function is a two-place function of strategy profiles and types: if a player has payoff function U and type t, the payoff he receives is U(x*, t), where x* is the strategy profile played in the game. Ω is the set of states of nature; for instance, in a card game, it can be any order of the cards. A_i is the set of actions for player i; let A = A_1 × A_2 × ⋯ × A_N.
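How a player uses beliefs to evaluate a strategy can be sketched as an expected payoff over opponent types; the gift-game numbers below are illustrative assumptions:

```python
# Expected payoff in a Bayesian game: a player evaluates an action by
# averaging the type-dependent payoff over her beliefs about the other
# player's type. All numbers are illustrative assumptions.
def expected_payoff(action, beliefs, payoff):
    """beliefs maps opponent types to probabilities; payoff(action, type)."""
    return sum(prob * payoff(action, t) for t, prob in beliefs.items())

# Belief: the opponent is a friend with probability 0.75, an enemy with 0.25.
beliefs = {'friend': 0.75, 'enemy': 0.25}

def pay(action, sender_type):
    # Accepting a gift pays 1 from a friend and -1 from an enemy;
    # rejecting pays 0 either way.
    if action == 'reject':
        return 0
    return 1 if sender_type == 'friend' else -1

best = max(['accept', 'reject'],
           key=lambda a: expected_payoff(a, beliefs, pay))
```

With these beliefs, accepting yields an expected 0.5 versus 0 for rejecting, so a rational receiver accepts.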

21.
Perfect Bayesian equilibrium
–
In game theory, a Perfect Bayesian Equilibrium (PBE) is an equilibrium concept relevant for dynamic games with incomplete information. A PBE is a refinement of both Bayesian Nash equilibrium and subgame perfect equilibrium. A PBE has two components, strategies and beliefs. The strategy of a player in a given information set determines how this player acts in that information set; the action may depend on the history, similar to a sequential game. The belief of a player in a given information set determines what node in that information set the player believes he is playing at; the belief may be a probability distribution over the nodes in the information set. Formally, a belief system is an assignment of probabilities to every node in the game such that the sum of probabilities in any information set is 1. The strategies and beliefs should satisfy the following conditions. Sequential rationality: each strategy should be optimal in expectation, given the beliefs. Consistency: each belief should be updated according to the strategies and Bayes' rule. Every PBE is both a SPE and a BNE, but the opposite is not necessarily true. A signaling game is the simplest kind of dynamic Bayesian game: there are two players, one of whom has several possible types (the sender) and one of whom has only one possible type (the receiver). The sender plays first, then the receiver. To calculate a PBE in a signaling game, we consider two kinds of equilibria: a separating equilibrium and a pooling equilibrium. Consider the following game. The sender has two possible types, either a friend or an enemy. Each type has two strategies: either give a gift, or do not give. The receiver has only one type, and two strategies: either accept the gift, or reject it. The sender's utility is 1 if his gift is accepted and −1 if his gift is rejected. The receiver's utility depends on who gives the gift: if the sender is a friend, then the receiver's utility is 1 (if she accepts) or 0 (if she rejects).
If the sender is an enemy, then the receiver's utility is −1 (if she accepts) or 0 (if she rejects). To analyze PBE in this game, let us look first at the following potential separating equilibria. First, suppose the sender's strategy is: a friend gives and an enemy does not give. The receiver's beliefs are updated accordingly: if she receives a gift, she knows the sender is a friend; otherwise, she knows the sender is an enemy. This is NOT an equilibrium, since the sender's strategy is not optimal: an enemy sender can increase his payoff from 0 to 1 by sending a gift. Second, suppose the sender's strategy is: a friend does not give and an enemy gives. The receiver's beliefs are updated accordingly: if she receives a gift, she knows the sender is an enemy; otherwise, she knows the sender is a friend. Again, this is NOT an equilibrium, since the sender's strategy is not optimal. We conclude that in this game, there is no separating equilibrium. Now, let us look at the following potential pooling equilibrium. The sender's strategy is: always give. The receiver's beliefs are not updated: she still believes in the a-priori probability that the sender is a friend with probability p.
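The first separating candidate above can be checked in a few lines; the sender utilities follow the text, and the function name and the assumption that not giving yields 0 are illustrative:

```python
# Checking the candidate separating profile "a friend gives, an enemy does
# not" in the gift game. Sender utility: 1 if the gift is accepted, -1 if
# rejected; not giving is assumed to yield 0.
def sender_payoff(gives, gift_accepted):
    if not gives:
        return 0
    return 1 if gift_accepted else -1

# Under this profile a gift signals a friend, so the receiver accepts any
# gift she sees. An enemy can then profitably deviate by giving:
enemy_follows = sender_payoff(gives=False, gift_accepted=True)   # 0
enemy_deviates = sender_payoff(gives=True, gift_accepted=True)   # 1
# Since 1 > 0, the profile fails sequential rationality: not a PBE.
```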

22.
Pareto efficiency
–
The concept is named after Vilfredo Pareto, the Italian engineer and economist who used the concept in his studies of economic efficiency and income distribution. The concept has applications in fields such as economics and engineering. The Pareto frontier is the set of all Pareto-efficient allocations, and an allocation is defined as Pareto efficient or Pareto optimal when no further Pareto improvements can be made. The notion of Pareto efficiency can also be applied to the selection of alternatives in engineering: each option is first assessed under multiple criteria, and then a subset of options is identified with the property that no other option can categorically outperform any of its members. Pareto optimality is a formally defined concept used to determine when an allocation is optimal. An allocation is not Pareto optimal if there is an alternative allocation that improves at least one participant's well-being without reducing any other participant's well-being. If there is a transfer that satisfies this condition, the reallocation is called a Pareto improvement; when no further Pareto improvements are possible, the allocation is a Pareto optimum. A formal definition for an economy is as follows: consider an economy with i agents and j goods. Here, feasibility refers to an allocation where the total amount of each good that is allocated sums to no more than the total amount of the good in the economy. It is important to note that a change from a generally inefficient economic allocation to an efficient one is not necessarily a Pareto improvement: even if there are overall gains in the economy, if a single agent is disadvantaged by the reallocation, the change is not a Pareto improvement. For instance, if a change in economic policy eliminates a monopoly and the market subsequently becomes competitive, others may gain. However, since the monopolist is disadvantaged, this is not a Pareto improvement. Thus, in practice, to ensure that nobody is disadvantaged by a change aimed at achieving Pareto efficiency, compensation of one or more parties may be required. However, in the real world, such compensations may have unintended consequences.
They can lead to incentive distortions over time as agents anticipate such compensations. Under certain idealized conditions, it can be shown that a system of free markets, also called a competitive equilibrium, will lead to a Pareto-efficient outcome. This is called the first welfare theorem, and it was first demonstrated mathematically by economists Kenneth Arrow and Gérard Debreu. However, the result only holds under the assumptions necessary for the proof. In the absence of perfect information or complete markets, outcomes will generally be Pareto inefficient. In addition to the first welfare theorem, which links the concepts of Pareto-optimal allocations and free markets, there is a second welfare theorem that is essentially the reverse of the first. It states that under similar ideal assumptions, any Pareto optimum can be obtained by some competitive equilibrium, or free-market system. A weak Pareto optimum is an allocation for which there are no possible alternative allocations whose realization would cause every individual to gain.
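A minimal sketch of Pareto comparison between allocations, using assumed utility vectors:

```python
# Pareto comparison: allocation b Pareto-improves on a if it makes no agent
# worse off and at least one agent strictly better off. Each allocation is
# a tuple of agents' utilities; the numbers are illustrative assumptions.
def pareto_improves(a, b):
    return (all(y >= x for x, y in zip(a, b))
            and any(y > x for x, y in zip(a, b)))

def pareto_optimal(allocations):
    """The allocations not Pareto-dominated by any other in the set,
    i.e. the Pareto frontier of the given candidates."""
    return [a for a in allocations
            if not any(pareto_improves(a, b) for b in allocations if b != a)]

allocs = [(3, 1), (2, 2), (1, 1), (3, 2)]
frontier = pareto_optimal(allocs)  # only (3, 2) survives in this set
```

Note how the definition is one-sided: (3, 1) and (2, 2) do not Pareto-improve on each other, yet both are dominated by (3, 2), which makes every agent at least as well off.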