1.
Chess
–
Chess is a two-player strategy board game played on a chessboard, a checkered gameboard with 64 squares arranged in an eight-by-eight grid. Chess is played by millions of people worldwide, both amateurs and professionals. Each player begins the game with 16 pieces: one king, one queen, two rooks, two knights, two bishops, and eight pawns. Each of the six piece types moves differently, with the most powerful being the queen. The objective is to checkmate the opponent's king by placing it under an inescapable threat of capture. To this end, a player's pieces are used to attack and capture the opponent's pieces. In addition to checkmate, the game can be won by voluntary resignation of the opponent, which typically occurs when too much material is lost or checkmate appears unavoidable. A game may also result in a draw in several ways. Chess is believed to have originated in India some time before the 7th century, deriving from the game chaturanga; chaturanga is also the likely ancestor of the Eastern strategy games xiangqi, janggi, and shogi. The pieces took on their current powers in Spain in the late 15th century. The first generally recognized World Chess Champion, Wilhelm Steinitz, claimed his title in 1886. Since 1948, the World Championship has been controlled by FIDE, the game's international governing body. There is also a Correspondence Chess World Championship and a World Computer Chess Championship, and online chess has opened amateur and professional competition to a wide and varied group of players. There are also many chess variants, with different rules and different pieces. FIDE awards titles to skilled players, the highest of which is grandmaster; many national chess organizations also have a title system. However, these are not recognised by FIDE, and the term master may refer to a formal title or may be used more loosely for any skilled player. Until recently, chess was a recognized sport of the International Olympic Committee. 
Chess was included in the 2006 and 2010 Asian Games. Since the 1990s, computer analysis has contributed significantly to chess theory, particularly in the endgame. The computer IBM Deep Blue was the first machine to overcome a reigning World Chess Champion in a match when it defeated Garry Kasparov in 1997. The rise of strong computer programs that can be run on hand-held devices has led to increasing concerns about cheating during tournaments. The official rules of chess are maintained by FIDE, chess's international governing body; along with information on official chess tournaments, the rules are described in the Laws of Chess section of the FIDE Handbook. Chess is played on a board of eight rows (called ranks) and eight columns (called files). The colors of the 64 squares alternate and are referred to as light and dark squares. The chessboard is placed with a light square at the right-hand end of the rank nearest to each player.
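The alternating-color rule and the orientation convention can be sketched in a few lines. The following is an illustrative sketch (not part of any official rules text), assuming 0-based file and rank indices so that square a1 is (0, 0):

```python
def square_color(file_idx: int, rank_idx: int) -> str:
    """Return the color of a chessboard square.

    Files a-h map to 0-7 and ranks 1-8 map to 0-7, so a1 is (0, 0).
    A square is light when the index sum is odd; a1 itself is dark.
    """
    return "light" if (file_idx + rank_idx) % 2 == 1 else "dark"

# h1, the right-hand end of the rank nearest White, must be light:
assert square_color(7, 0) == "light"
assert square_color(0, 0) == "dark"  # a1 is a dark square
```

The parity test `(file + rank) % 2` is enough to recover the whole checkered pattern, which is why board-drawing code rarely stores square colors explicitly.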

2.
Backgammon
–
Backgammon is one of the oldest board games known. It is a two-player game in which playing pieces are moved according to the roll of dice, and a player wins by removing all of their pieces from the board before their opponent. Backgammon is a member of the tables family, one of the oldest classes of board games in the world. Backgammon involves a combination of strategy and luck: while the dice may determine the outcome of a single game, the better player will accumulate the better record over a series of many games, somewhat like poker. With each roll of the dice, players must choose from numerous options for moving their checkers. The optional use of a doubling cube allows players to raise the stakes during the game. Like chess, backgammon has been studied with great interest by computer scientists. Owing to this research, backgammon software has been developed that is capable of beating world-class human players. Backgammon playing pieces are known variously as checkers, draughts, stones, men, counters, pawns, discs, pips, chips, or nips. The objective is to remove all of one's own checkers from the board before one's opponent can do the same. In the most often-played variants the checkers are scattered at first, and as the playing time for each individual game is short, it is often played in matches where victory is awarded to the first player to reach a certain number of points. Each side of the board has a track of 12 long triangles, called points. The points form a continuous track in the shape of a horseshoe, and are numbered from 1 to 24. In the most commonly used setup, each player begins with fifteen checkers. The two players move their checkers in opposing directions, from the 24-point towards the 1-point. Points 1 through 6 are called the home board or inner board, and points 7 through 12 are called the outer board. 
The 7-point is referred to as the bar point, and the 13-point as the midpoint. To start the game, each player rolls one die, and the player with the higher number moves first using the numbers shown on both dice. If the players roll the same number, they must roll again. Both dice must land completely flat on the side of the gameboard. The players then alternate turns, rolling two dice at the beginning of each turn; after rolling the dice, players must, if possible, move their checkers according to the number shown on each die. For example, if the player rolls a 6 and a 3, the player must move one checker six points forward, and another or the same checker three points forward. The same checker may be moved twice, as long as the two moves can be made separately and legally: six and then three, or three and then six.
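The six-and-three example above can be made concrete with a small sketch. This assumes points are numbered 24 down to 1 from the mover's perspective and that a checker moves by subtracting pips; hits, blocked points, and bearing off are deliberately ignored, so this is an illustration of the dice rule only:

```python
def apply_roll(point: int, dice: tuple[int, int]) -> list[int]:
    """Destinations when one checker uses both dice, in either order.

    Ignores hits, blocked points, and bearing off; a checker simply
    moves toward the 1-point by the pip count of each die.
    """
    d1, d2 = dice
    results = set()
    for first, second in ((d1, d2), (d2, d1)):
        mid = point - first
        if mid >= 1:                 # intermediate landing must stay on the board
            final = mid - second
            if final >= 1:
                results.add(final)
    return sorted(results)

# A checker on the 24-point with a roll of 6 and 3 ends on the 15-point,
# whether it moves six-then-three or three-then-six.
assert apply_roll(24, (6, 3)) == [15]
```

In real play the two orders can differ in legality (one intermediate point may be blocked), which is exactly why the rules spell out "six and then three, or three and then six" as separate moves.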

3.
Claude Shannon
–
Claude Elwood Shannon was an American mathematician, electrical engineer, and cryptographer known as the father of information theory. Shannon is noted for having founded information theory with a landmark paper, A Mathematical Theory of Communication. He also contributed to the field of cryptanalysis for national defense during World War II, including his work on codebreaking. Shannon was born in Petoskey, Michigan and grew up in Gaylord. His father, Claude Sr., a descendant of early settlers of New Jersey, was a self-made businessman and, for a while, a Judge of Probate. Shannon's mother, Mabel Wolf Shannon, was a language teacher. Most of the first 16 years of Shannon's life were spent in Gaylord, where he attended public school, graduating from Gaylord High School in 1932. Shannon showed an inclination towards mechanical and electrical things, and his best subjects were science and mathematics. At home he constructed such devices as models of planes and a model boat. While growing up, he worked under Andrew Coltrey as a messenger for the Western Union company. His childhood hero was Thomas Edison, whom he later learned was a distant cousin; both were descendants of John Ogden, a colonial leader and an ancestor of many distinguished people. Shannon was apolitical and an atheist. In 1932, Shannon entered the University of Michigan, where he was introduced to the work of George Boole. He graduated in 1936 with two degrees, one in electrical engineering and the other in mathematics. In 1936, Shannon began his graduate studies in electrical engineering at MIT, where he worked on Vannevar Bush's differential analyzer. While studying the complicated ad hoc circuits of this analyzer, Shannon designed switching circuits based on Boole's concepts. In 1937, he wrote his master's thesis, A Symbolic Analysis of Relay and Switching Circuits; a paper from this thesis was published in 1938. 
In this work, Shannon proved that his switching circuits could be used to simplify the arrangement of the electromechanical relays that were then used in telephone call routing switches. Next, he expanded this concept, proving that these circuits could solve all problems that Boolean algebra could solve. In the last chapter, he presented diagrams of several circuits. Using this property of electrical switches to implement logic is the fundamental concept that underlies all electronic digital computers. Shannon's work became the foundation of digital circuit design, as it became widely known in the electrical engineering community during and after World War II. The theoretical rigor of Shannon's work superseded the ad hoc methods that had prevailed previously. Howard Gardner called Shannon's thesis "possibly the most important, and also the most noted, master's thesis of the century".
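The core observation of the thesis, that switches wired in series realize AND while switches wired in parallel realize OR, can be illustrated directly. This is a modern illustrative sketch, not Shannon's own notation:

```python
def series(a: bool, b: bool) -> bool:
    """Two switches in series conduct only if both are closed (logical AND)."""
    return a and b

def parallel(a: bool, b: bool) -> bool:
    """Two switches in parallel conduct if either is closed (logical OR)."""
    return a or b

def circuit(a: bool, b: bool, c: bool) -> bool:
    """A relay network for (a AND b) OR c, composed from the two primitives."""
    return parallel(series(a, b), c)

assert circuit(True, True, False) is True    # both series switches closed
assert circuit(False, True, False) is False  # series branch open, c open
assert circuit(False, False, True) is True   # parallel branch c closed
```

Because any Boolean formula is a composition of AND, OR, and NOT, any such formula has a corresponding relay network, which is the equivalence the thesis established.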

4.
Extensive-form game
–
Extensive-form games also allow representation of incomplete information in the form of chance events encoded as moves by nature; the rest of this article follows this approach with motivating examples. This general definition was introduced by Harold W. Kuhn in 1953. Each player's subset of nodes is referred to as the nodes of the player. Each node of the Chance player has a probability distribution over its outgoing edges; at any given non-terminal node belonging to Chance, an outgoing branch is chosen according to the probability distribution. A pure strategy for a player thus consists of a selection: choosing precisely one class of outgoing edges for every information set. In a game of perfect information, the information sets are singletons. It is less evident how payoffs should be interpreted in games with Chance nodes; these can be made precise using epistemic modal logic (see Shoham & Leyton-Brown for details). A perfect-information two-player game over a tree can be represented as an extensive-form game with outcomes. Examples of such games include tic-tac-toe, chess, and infinite chess. A game over an expectminimax tree, like that of backgammon, has no imperfect information but has moves of chance. Poker, for example, has both moves of chance and imperfect information. The numbers by every non-terminal node indicate to which player that decision node belongs. The numbers by every terminal node represent the payoffs to the players, and the labels by every edge of the graph are the names of the actions those edges represent. The initial node belongs to player 1, indicating that player 1 moves first. Play according to the tree is as follows: player 1 chooses between U and D; player 2 observes player 1's choice and then chooses between U and D. The payoffs are as specified in the tree; there are four outcomes, represented by the four terminal nodes of the tree. 
The payoffs associated with each outcome are as follows. If player 1 plays D, player 2 will play U to maximise his payoff, and so player 1 will only receive 1. However, if player 1 plays U, player 2 maximises his payoff by playing D. Player 1 prefers 2 to 1 and so will play U, and player 2 will then play D. This is the subgame perfect equilibrium. An advantage of representing the game in this way is that the order of play is clear: the tree shows that player 1 moves first and that player 2 observes this move. However, in some games play does not occur like this; one player does not always observe the choice of another. An information set is a set of decision nodes such that every node in the set belongs to the same player and, when play reaches the set, that player cannot tell which node in the set has been reached. In extensive form, an information set is indicated by a dotted line connecting all nodes in that set, or sometimes by a loop drawn around all the nodes in that set.
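The backward-induction argument above can be run mechanically on the two-stage tree. The payoff numbers below are illustrative assumptions (the original tree diagram is not reproduced here), chosen so that player 2 answers D with U, giving player 1 a payoff of 1, and answers U with D, giving player 1 a payoff of 2, as the text describes; player 2's actions are primed only to keep the two players' moves distinct:

```python
# Terminal payoffs (player 1, player 2) for each path through the tree.
# Illustrative values consistent with the reasoning in the text above.
payoffs = {
    ("U", "U'"): (0, 0),
    ("U", "D'"): (2, 1),
    ("D", "U'"): (1, 2),
    ("D", "D'"): (3, 1),
}

def best_reply(p1_move: str) -> str:
    """Player 2, moving second, maximizes her own (second) payoff."""
    return max(("U'", "D'"), key=lambda a: payoffs[(p1_move, a)][1])

def solve() -> tuple[str, str]:
    """Player 1 anticipates player 2's reply and maximizes his own payoff."""
    p1 = max(("U", "D"), key=lambda m: payoffs[(m, best_reply(m))][0])
    return p1, best_reply(p1)

assert solve() == ("U", "D'")  # player 1 plays U; player 2 replies D'
```

Solving the last mover's problem first and folding the result back up the tree is exactly backward induction, and the profile it returns is the subgame perfect equilibrium.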

5.
Perfect information
–
In economics, perfect information is a feature of perfect competition. Perfect information is importantly different from complete information, which implies common knowledge of each player's utility functions and payoffs. Chess is an example of a game with perfect information, as each player can see all of the pieces on the board at all times. Other examples of games with perfect information include tic-tac-toe, Irensei, and Go. Card games where each player's cards are hidden from other players, as in contract bridge, are examples of games with imperfect information.

6.
Game theory
–
Game theory is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. It is used in economics, political science, and psychology, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers. Modern game theory began with the idea regarding the existence of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility. This theory was developed extensively in the 1950s by many scholars, and game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields, with the Nobel Memorial Prize in Economic Sciences going to game theorist Jean Tirole in 2014; John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern mathematical game theory. The first known discussion of game theory occurred in a letter written in 1713 by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat. In this letter, Waldegrave provides a mixed-strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation. 
In 1913 Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems. The Danish mathematician Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was proved false. Game theory did not really exist as a field until John von Neumann published his paper in 1928.

7.
Simultaneous game
–
In game theory, a simultaneous game is a game in which each player chooses his action without knowledge of the actions chosen by the other players. Normal-form representations are used for simultaneous games. Rock-Paper-Scissors, a widely played game, is a real-life example of a simultaneous game: both players make a decision at the same time, randomly, without prior knowledge of the opponent's decision. There are two players in this game and each of them has three different strategies from which to choose. We display Player 1's strategies as rows and Player 2's strategies as columns; in each cell of the table, the first number represents the payoff to Player 1 and the second the payoff to Player 2. Hence, the payoff table for two-player Rock-Paper-Scissors looks like this:

              Rock       Paper      Scissors
  Rock       ( 0,  0)   (-1,  1)   ( 1, -1)
  Paper      ( 1, -1)   ( 0,  0)   (-1,  1)
  Scissors   (-1,  1)   ( 1, -1)   ( 0,  0)

In game theory terms, the Prisoner's dilemma is another example of a simultaneous game.
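The Rock-Paper-Scissors payoffs can be generated from the "beats" relation rather than written out by hand; a minimal sketch, using the standard win/lose/tie payoffs of 1, -1, and 0:

```python
# Bimatrix payoffs for Rock-Paper-Scissors, derived from which move
# beats which.  payoff(a, b) returns (payoff to Player 1, payoff to
# Player 2) when Player 1 plays a and Player 2 plays b.
MOVES = ("Rock", "Paper", "Scissors")
BEATS = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}

def payoff(p1: str, p2: str) -> tuple[int, int]:
    if p1 == p2:
        return (0, 0)
    return (1, -1) if BEATS[p1] == p2 else (-1, 1)

assert payoff("Rock", "Scissors") == (1, -1)   # rock crushes scissors
assert payoff("Rock", "Paper") == (-1, 1)      # paper covers rock

# The game is zero-sum: the two payoffs cancel in every cell.
assert all(sum(payoff(a, b)) == 0 for a in MOVES for b in MOVES)
```

The final check makes the zero-sum structure explicit: whatever one player wins, the other loses, in every one of the nine cells.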

8.
Preference (economics)
–
In economics and other social sciences, preference is the ordering of alternatives based on their relative utility, a process which results in an optimal choice. The character of the preferences is determined purely by taste factors, independent of considerations of prices or income. With the help of the scientific method many practical decisions of life can be modelled. In 1926 Ragnar Frisch developed for the first time a mathematical model of preferences in the context of economic demand and utility functions. Up to then, economists had developed a theory of demand that omitted primitive characteristics of people. This omission ceased at the end of the 19th century; because binary choices are directly observable, the approach instantly appealed to economists. The search for observables in microeconomics is taken further by revealed preference theory. Since the pioneering efforts of Frisch in the 1920s, one of the issues which has pervaded the theory of preferences is the representability of a preference structure with a real-valued function. This has been achieved by mapping it to the mathematical index called utility. Von Neumann and Morgenstern's 1944 book Theory of Games and Economic Behavior treated preferences as a formal relation whose properties can be stated axiomatically. The economics of choice can thus be examined either at the level of utility functions or at the level of preferences. Suppose the set of all states of the world is X and an agent has a preference relation on X. It is common to mark the weak preference relation by ⪯. The symbol ∼ is used as a shorthand for the indifference relation: x ∼ y ⟺ (x ⪯ y and y ⪯ x), which reads "the agent is indifferent between y and x". The symbol ≺ is used as a shorthand for the strict preference relation: x ≺ y ⟺ (x ⪯ y and not y ⪯ x). 
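The way indifference and strict preference are derived from a single weak preference relation can be sketched in code. The relation `weakly` below is a hypothetical example (an agent who simply prefers larger numbers), chosen only to make the definitions executable:

```python
def weakly(x: float, y: float) -> bool:
    """x ⪯ y: y is at least as good as x.  Illustrative relation only:
    this example agent prefers larger numbers."""
    return x <= y

def indifferent(x: float, y: float) -> bool:
    """x ∼ y  iff  x ⪯ y and y ⪯ x."""
    return weakly(x, y) and weakly(y, x)

def strictly(x: float, y: float) -> bool:
    """x ≺ y  iff  x ⪯ y and not y ⪯ x."""
    return weakly(x, y) and not weakly(y, x)

assert strictly(1.0, 2.0)         # 1 ≺ 2 for this agent
assert indifferent(3.0, 3.0)      # every alternative is indifferent to itself
assert not strictly(3.0, 3.0)     # strict preference is irreflexive
```

Only the weak relation is primitive; the other two are defined from it, which is exactly how the axiomatic treatment proceeds.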
In everyday speech, the statement "x is preferred to y" is generally understood to mean that someone chooses x over y. However, decision theory rests on more precise definitions of preferences, given that there are many experimental conditions influencing people's choices in many directions. Suppose a person is confronted with an experiment that she must solve with the aid of introspection: she is offered apples and oranges, and is asked to choose one of the two. A decision scientist observing this event would be inclined to say that whichever is chosen is the preferred alternative. Under several repetitions of the experiment, if the scientist observes that apples are chosen 51% of the time, it would mean that x ≻ y. If each is chosen half of the time, then x ∼ y. Finally, if 51% of the time she chooses oranges, it means that y ≻ x. Preference is here being identified with a greater frequency of choice.

9.
Nash equilibrium
–
The Nash equilibrium is one of the foundational concepts in game theory. The reality of the Nash equilibrium of a game can be tested using experimental economics methods. Game theorists use the Nash equilibrium concept to analyze the outcome of the strategic interaction of several decision makers. The simple insight underlying John Nash's idea is that one cannot predict the result of the choices of multiple decision makers if one analyzes those decisions in isolation; instead, one must ask what each player would do, taking into account the decision-making of the others. The Nash equilibrium has been used to analyze hostile situations like war and arms races, and it has also been used to study to what extent people with different preferences can cooperate, and whether they will take risks to achieve a cooperative outcome. It has also been used to study the adoption of technical standards. The Nash equilibrium was named after John Forbes Nash, Jr. A version of the Nash equilibrium concept was first known to be used in 1838 by Antoine Augustin Cournot in his theory of oligopoly. In Cournot's theory, firms choose how much output to produce to maximize their own profit; however, the best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when each firm's output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium. Cournot also introduced the concept of best response dynamics in his analysis of the stability of equilibrium. However, Nash's definition of equilibrium is broader than Cournot's, and it is also broader than the definition of a Pareto-efficient equilibrium. The modern game-theoretic concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible actions. 
The concept of the mixed-strategy Nash equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior; however, their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition of equilibrium: according to Nash, an equilibrium point is an n-tuple such that each player's mixed strategy maximizes his payoff if the strategies of the others are held fixed. Thus each player's strategy is optimal against those of the others. Since the development of the Nash equilibrium concept, game theorists have discovered that it makes misleading predictions in certain circumstances, and they have proposed many related solution concepts designed to overcome perceived flaws in the Nash concept. One particularly important issue is that some Nash equilibria may be based on threats that are not credible. In 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing his or her strategy. To see what this means, imagine that each player is told the strategies of the others.
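The informal test, that no player can do better by unilaterally changing strategy, can be checked mechanically for pure strategies. A sketch for a two-player game, using the standard Prisoner's Dilemma payoffs as an illustrative example (C = cooperate, D = defect; these numbers are conventional, not taken from the text above):

```python
# Payoff table: (payoff to player 1, payoff to player 2).
payoffs = {
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}
actions = ("C", "D")

def is_nash(profile: tuple[str, str]) -> bool:
    """True if neither player gains by a unilateral deviation."""
    a1, a2 = profile
    u1, u2 = payoffs[profile]
    no_gain_1 = all(payoffs[(d, a2)][0] <= u1 for d in actions)
    no_gain_2 = all(payoffs[(a1, d)][1] <= u2 for d in actions)
    return no_gain_1 and no_gain_2

assert is_nash(("D", "D"))        # mutual defection: no unilateral gain
assert not is_nash(("C", "C"))    # either player gains by switching to D
```

Note that (C, C) gives both players more than (D, D) yet is not an equilibrium, which is a compact illustration of why a Nash equilibrium need not be Pareto-efficient.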

10.
Pareto efficiency
–
The concept is named after Vilfredo Pareto, an Italian engineer and economist who used the concept in his studies of economic efficiency and income distribution. The concept has applications in fields such as economics and engineering. The Pareto frontier is the set of all Pareto efficient allocations; an allocation is defined as Pareto efficient or Pareto optimal when no further Pareto improvements can be made. The notion of Pareto efficiency can also be applied to the selection of alternatives in engineering: each option is first assessed under multiple criteria, and then a subset of options is identified with the property that no other option can categorically outperform any of its members. Pareto optimality is a formally defined concept used to determine when an allocation is optimal. If there is a transfer of goods that makes at least one agent better off without making any other agent worse off, the reallocation is called a Pareto improvement; when no further Pareto improvements are possible, the allocation is a Pareto optimum. A formal definition for an economy is as follows: consider an economy with i agents and j goods. In this economy, feasibility refers to an allocation in which the total amount of each good that is allocated sums to no more than the total amount of that good in the economy. It is important to note that a change from a generally inefficient economic allocation to an efficient one is not necessarily a Pareto improvement: even if there are overall gains in the economy, if a single agent is disadvantaged by the reallocation, the change is not a Pareto improvement. For instance, a change in economic policy may eliminate a monopoly, after which the market becomes competitive. However, since the monopolist is disadvantaged, this is not a Pareto improvement. Thus, in practice, to ensure that nobody is disadvantaged by a change aimed at achieving Pareto efficiency, compensation of one or more parties may be required. However, in the real world, such compensations may have unintended consequences. 
They can lead to incentive distortions over time as agents anticipate such compensations. Under certain idealized conditions, it can be shown that a system of free markets, also called a competitive equilibrium, will lead to a Pareto efficient outcome. This is called the first welfare theorem, and it was first demonstrated mathematically by economists Kenneth Arrow and Gérard Debreu. However, the result only holds under the assumptions necessary for the proof: in the absence of perfect information or complete markets, outcomes will generally be Pareto inefficient. In addition to the first welfare theorem linking the concepts of Pareto optimal allocations and free markets, there is a second welfare theorem that is essentially the reverse of the first. It states that under similar ideal assumptions, any Pareto optimum can be obtained by some competitive equilibrium, or free market system. A weak Pareto optimum is an allocation for which there are no possible alternative allocations whose realization would cause every individual to gain.
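The definitions of Pareto improvement and the Pareto frontier translate directly into a dominance check over utility vectors. A minimal sketch; the candidate allocations below are illustrative made-up utility pairs for two agents, not data from the text:

```python
def dominates(a: tuple[float, ...], b: tuple[float, ...]) -> bool:
    """Moving from b to a is a Pareto improvement: no agent is worse off
    and at least one agent is strictly better off."""
    pairs = list(zip(a, b))
    return all(x >= y for x, y in pairs) and any(x > y for x, y in pairs)

def pareto_frontier(allocs: list[tuple[float, ...]]) -> list[tuple[float, ...]]:
    """Keep the allocations that no other allocation dominates."""
    return [a for a in allocs if not any(dominates(b, a) for b in allocs)]

allocs = [(3, 1), (2, 2), (1, 3), (2, 1), (0, 0)]
# (2, 1) is dominated by (3, 1); (0, 0) is dominated by everything else.
assert pareto_frontier(allocs) == [(3, 1), (2, 2), (1, 3)]
```

The surviving allocations cannot be ranked against one another by the Pareto criterion alone, which is why choosing among frontier points requires some further criterion such as a social welfare function.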