1.
Game theory
–
Game theory is the study of mathematical models of conflict and cooperation between intelligent rational decision-makers. Game theory is used in economics, political science, and psychology, as well as in logic and computer science. Originally, it addressed zero-sum games, in which one person's gains result in losses for the other participants. Today, game theory applies to a wide range of behavioral relations, and is now an umbrella term for the science of logical decision making in humans, animals, and computers. Modern game theory began with the idea of the existence of mixed-strategy equilibria in two-person zero-sum games. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility. This theory was developed extensively in the 1950s by many scholars. Game theory was later explicitly applied to biology in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been recognized as an important tool in many fields: the Nobel Memorial Prize in Economic Sciences went to game theorist Jean Tirole in 2014, and John Maynard Smith was awarded the Crafoord Prize for his application of game theory to biology. Early discussions of examples of two-person games occurred long before the rise of modern mathematical game theory. The first known discussion of game theory occurred in a letter written in 1713 by Charles Waldegrave, an active Jacobite and uncle to James Waldegrave, a British diplomat. In this letter, Waldegrave provides a mixed strategy solution to a two-person version of the card game le Her. James Madison made what we now recognize as a game-theoretic analysis of the ways states can be expected to behave under different systems of taxation. 
In 1913 Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels, which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems. The Danish mathematician Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Borel conjectured the non-existence of mixed-strategy equilibria in two-person zero-sum games, a conjecture that was proved false. Game theory did not really exist as a field until John von Neumann published a paper in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, and his paper was followed by his 1944 book Theory of Games and Economic Behavior, co-authored with Oskar Morgenstern.
2.
Normal-form game
–
In game theory, normal form is a description of a game. Unlike extensive form, normal-form representations are not graphical per se; while this approach can be of greater use in identifying strictly dominated strategies and Nash equilibria, some information is lost as compared to extensive-form representations. The normal-form representation of a game includes all perceptible and conceivable strategies, and their corresponding payoffs, for each player. In static games of complete, perfect information, a normal-form representation of a game is a specification of players' strategy spaces and payoff functions. The matrix to the right is a representation of a game in which players move simultaneously. For example, if player 1 plays top and player 2 plays left, player 1 receives 4. In each cell, the first number represents the payoff to the row player, and the second number represents the payoff to the column player. Often, symmetric games are represented with only one payoff: the payoff for the row player. For example, the payoff matrices on the right and left below represent the same game. The payoff matrix facilitates elimination of dominated strategies, and it is used to illustrate this concept. For example, in the prisoner's dilemma, we can see that each prisoner can either cooperate or defect. If exactly one prisoner defects, he gets off easily and the other prisoner is locked up for a long time. However, if they both defect, they will both be locked up for a shorter time. To determine that Cooperate is strictly dominated by Defect, one must compare the first numbers in each column, in this case 0 > −1 and −2 > −5. This shows that no matter what the column player chooses, the row player does better by choosing Defect. Similarly, one compares the second payoff in each row; again 0 > −1 and −2 > −5. This shows that no matter what row does, column does better by choosing Defect. This demonstrates that the unique Nash equilibrium of this game is (Defect, Defect). These matrices only represent games in which moves are simultaneous. 
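The dominance comparison described above can be checked mechanically. Below is a minimal Python sketch, assuming the prisoner's-dilemma payoffs implied by the numbers in the text (0 > −1 and −2 > −5); the matrix layout is an illustrative assumption:

```python
# Row player's payoffs, indexed [row strategy][column strategy];
# strategy 0 = Cooperate, 1 = Defect (values assumed from the text).
ROW_PAYOFF = [[-1, -5],   # row cooperates
              [0, -2]]    # row defects

def strictly_dominates(payoff, s, t):
    """True if strategy s yields a strictly higher payoff than strategy t
    against every opponent strategy."""
    return all(payoff[s][j] > payoff[t][j] for j in range(len(payoff[t])))

print(strictly_dominates(ROW_PAYOFF, 1, 0))  # Defect strictly dominates Cooperate: True
```

Because the game is symmetric, the same check applied to the column player's payoffs gives the same conclusion, so mutual defection survives elimination of dominated strategies.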
The above matrix does not represent the game in which player 1 moves first, observed by player 2. In order to represent this sequential game we must specify all of player 2's actions, even in contingencies that can never arise in the course of the game. In this game, player 2 has two actions, as before: Left and Right. Unlike before, he has four strategies, contingent on player 1's actions. Accordingly, to specify a game, the payoff function has to be specified for each player in the player set P. D. Fudenberg and J. Tirole, Game Theory.
3.
Nash equilibrium
–
The Nash equilibrium is one of the foundational concepts in game theory. The reality of the Nash equilibrium of a game can be tested using experimental economics methods. Game theorists use the Nash equilibrium concept to analyze the outcome of the strategic interaction of several decision makers. The simple insight underlying John Nash's idea is that one cannot predict the result of the choices of multiple decision makers if one analyzes those decisions in isolation; instead, one must ask what each player would do, taking into account the decision-making of the others. Nash equilibrium has been used to analyze hostile situations like war and arms races, and it has also been used to study to what extent people with different preferences can cooperate, and whether they will take risks to achieve a cooperative outcome. It has also been used to study the adoption of technical standards. The Nash equilibrium was named after John Forbes Nash, Jr. A version of the Nash equilibrium concept was first known to be used in 1838 by Antoine Augustin Cournot in his theory of oligopoly. In Cournot's theory, firms choose how much output to produce to maximize their own profit. However, the best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when each firm's output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium. Cournot also introduced the concept of best response dynamics in his analysis of the stability of equilibrium. However, Nash's definition of equilibrium is broader than Cournot's. It is also broader than the definition of a Pareto-efficient equilibrium. The modern game-theoretic concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible actions. 
The concept of the mixed-strategy Nash equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book Theory of Games and Economic Behavior; however, their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition of equilibrium. According to Nash, an equilibrium point is an n-tuple such that each player's mixed strategy maximizes his payoff if the strategies of the others are held fixed. Thus each player's strategy is optimal against those of the others. Since the development of the Nash equilibrium concept, game theorists have discovered that it makes misleading predictions in certain circumstances. They have therefore proposed many related solution concepts designed to overcome perceived flaws in the Nash concept. One particularly important issue is that some Nash equilibria may be based on threats that are not credible. In 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing his or her strategy. To see what this means, imagine that each player is told the strategies of the others.
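The unilateral-deviation condition can be made concrete with a small Python sketch of a Cournot duopoly, the setting mentioned above. The linear demand p = a − (q1 + q2) and zero unit cost are illustrative assumptions, not taken from the text:

```python
# Checking a pure-strategy Nash equilibrium by ruling out unilateral deviations
# in a Cournot duopoly with assumed linear demand and zero cost.
a, c = 12.0, 0.0

def profit(q_own, q_other):
    # firm's profit: quantity times price, minus cost
    return q_own * (a - q_own - q_other) - c * q_own

def is_nash(q1, q2, grid):
    """True if neither firm can raise its profit by deviating to any output in grid."""
    return (all(profit(q1, q2) >= profit(q, q2) for q in grid) and
            all(profit(q2, q1) >= profit(q, q1) for q in grid))

grid = [i * 0.5 for i in range(25)]   # candidate outputs 0.0, 0.5, ..., 12.0
star = (a - c) / 3                    # textbook Cournot equilibrium output

print(is_nash(star, star, grid))      # True: no unilateral deviation helps
print(is_nash(6.0, 6.0, grid))        # False: e.g. producing 3 against 6 is better
```

The Cournot equilibrium (a − c)/3 per firm passes the test, while an arbitrary symmetric profile does not.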
4.
Graphical game theory
–
In game theory, the common ways to describe a game are the normal form and the extensive form. The graphical form is a compact representation of a game that exploits the interaction structure among participants. Consider a game with n players with m strategies each. We will represent the players as nodes in a graph G in which each player has a utility function that depends only on himself and his neighbors. As the utility depends on fewer other players, the graphical representation becomes smaller. Each node i in G has a utility function u_i : M^(d_i + 1) → R, where d_i is the degree of node i and M is the set of m strategies; u_i specifies the utility of player i as a function of his strategy as well as those of his neighbors. For a general n-player game in which each player has m possible strategies, the size of the graphical representation is O(n m^(d+1)), where d is the maximal node degree in the graph. If d ≪ n, then the graphical representation is much smaller than the normal form, whose size is exponential in n. In the case where each player's utility function depends only on one other player, the maximal degree of the graph is 1, so the size of the input will be n m^2. Finding a Nash equilibrium in a general game takes time exponential in the size of the representation; if the graphical representation of the game is a tree, we can find the equilibrium in polynomial time. In the general case, where the maximal degree of a node is 3 or more, the problem is NP-complete. Michael Kearns, Michael L. Littman, and Satinder Singh, Graphical Models for Game Theory. In Nisan, Noam; Roughgarden, Tim; Tardos, Éva; Vazirani, Vijay V. (eds.), Algorithmic Game Theory.
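A back-of-the-envelope Python sketch of the size comparison, using the n·m^(d+1) bound quoted above; counting one payoff entry per profile per player is an assumption made for illustration:

```python
# Comparing the size of a normal-form table with the graphical representation.
def normal_form_size(n, m):
    return n * m ** n             # every player tabulates all m**n profiles

def graphical_size(n, m, d):
    return n * m ** (d + 1)       # each table covers a player plus its d neighbors

print(normal_form_size(20, 2))    # 20971520
print(graphical_size(20, 2, 1))   # 80, i.e. n * m**2 when d = 1
```

With 20 players and 2 strategies each, bounded degree shrinks the representation from millions of entries to dozens.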
5.
Directed graph
–
In mathematics, and more specifically in graph theory, a directed graph is a graph that is a set of vertices connected by edges, where the edges have a direction associated with them. It differs from an ordinary or undirected graph, in that the latter is defined in terms of unordered pairs of vertices. Directed graphs with multiple arrows sharing the same source and target nodes are addressed as directed multigraphs. On the other hand, the definition allows a directed graph to have loops; directed graphs without loops are addressed as simple directed graphs. Symmetric directed graphs are directed graphs where all edges are bidirected. Simple directed graphs are directed graphs that have no loops and no multiple arrows with the same source and target nodes. As already introduced, in the case of multiple arrows the entity is usually addressed as a directed multigraph. Some authors describe digraphs with loops as loop-digraphs. Complete directed graphs are directed graphs where each pair of vertices is joined by a symmetric pair of directed arrows. It follows that a complete digraph is symmetric. Oriented graphs are directed graphs having no bidirected edges. It follows that a graph is an oriented graph if and only if it has no 2-cycle. Tournaments are oriented graphs obtained by choosing a direction for each edge in an undirected complete graph. Directed acyclic graphs (DAGs) are directed graphs with no directed cycles. Multitrees are DAGs in which no two directed paths from a single starting vertex meet back at the same ending vertex. Oriented trees or polytrees are DAGs formed by orienting the edges of undirected acyclic graphs. Rooted trees are oriented trees in which all edges of the underlying undirected tree are directed away from the root. Rooted directed graphs are digraphs in which a vertex has been distinguished as the root. Control flow graphs are rooted digraphs used in computer science as a representation of the paths that might be traversed through a program during its execution. 
Signal-flow graphs are directed graphs in which nodes represent system variables and branches represent functional connections between pairs of nodes. Flow graphs are digraphs associated with a set of linear algebraic or differential equations. State diagrams are directed multigraphs that represent finite state machines. Representations of a quiver label its vertices with vector spaces and its edges compatibly with linear transformations between them, and transform via natural transformations. If a path leads from x to y, then y is said to be a successor of x and reachable from x. The arrow (y, x) is called the inverted arrow of (x, y). The adjacency matrix of a digraph is unique up to identical permutation of rows and columns. Another matrix representation for a directed graph is its incidence matrix. For a vertex v, the number of head ends adjacent to v is called the indegree of the vertex, and the number of tail ends adjacent to v is called its outdegree. The indegree of v is denoted deg−(v) and its outdegree is denoted deg+(v).
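The indegree and outdegree definitions translate directly into code. A minimal Python sketch over an assumed example arc list:

```python
# Computing indegree and outdegree of each vertex from an arc list.
from collections import defaultdict

arcs = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")]   # example digraph

indeg, outdeg = defaultdict(int), defaultdict(int)
for tail, head in arcs:
    outdeg[tail] += 1   # the arc leaves its tail...
    indeg[head] += 1    # ...and enters its head

print(indeg["c"], outdeg["a"])   # 2 2: two arcs enter c, two arcs leave a
```

Summing either dictionary gives the number of arcs, since every arc contributes exactly one head end and one tail end.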
6.
Reduction (complexity)
–
In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem. A reduction from one problem to another may be used to show that the second problem is at least as difficult as the first. Intuitively, problem A is reducible to problem B if an algorithm for solving problem B efficiently could also be used as a subroutine to solve problem A efficiently; when this is true, solving A cannot be harder than solving B. "Harder" means having a higher estimate of the required computational resources in a given context. We write A ≤m B, usually with a subscript on the ≤ to indicate the type of reduction being used. There are two main situations where we need to use reductions. First, we find ourselves trying to solve a problem that is similar to a problem we've already solved; this is perhaps the most obvious use of reductions. Second, suppose we have a problem that we've proven is hard to solve, and we have a similar new problem. We might suspect that it, too, is hard to solve. We argue by contradiction: suppose the new problem is easy to solve. Then, if we can show that every instance of the old problem can be solved easily by transforming it into instances of the new problem and solving those, we have a contradiction. This establishes that the new problem is also hard. A very simple example of a reduction is from multiplication to squaring. Suppose all we know how to do is to add, subtract, take squares, and divide by two. We can use this knowledge to multiply any two numbers, since x · y = ((x + y)² − x² − y²) / 2. This seems to imply that these two problems are equally hard. This kind of reduction corresponds to Turing reduction. However, the reduction becomes much harder if we add the restriction that we can only use the squaring function one time, and only at the end. Going in the other direction, however, we can certainly square a number with just one multiplication. Using this limited form of reduction, we have shown the unsurprising result that multiplication is harder in general than squaring. 
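The multiplication-to-squaring reduction above can be written out directly. A minimal Python sketch, where `square` stands in for the black-box squaring subroutine:

```python
# Reduction from multiplication to squaring: multiply using only addition,
# subtraction, halving, and calls to a squaring subroutine.
def square(x):
    return x * x        # stands in for the black-box squaring routine

def multiply(x, y):
    # x*y = ((x + y)^2 - x^2 - y^2) / 2
    return (square(x + y) - square(x) - square(y)) // 2

print(multiply(7, 9))   # 63
```

Each call to `multiply` makes three calls to `square`, which is why this corresponds to a Turing reduction rather than the restricted one-call-at-the-end form.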
Given two subsets A and B of N and a set of functions F from N to N which is closed under composition, A is called reducible to B under F if there exists an f in F such that for all x in N: x ∈ A ⟺ f(x) ∈ B. We write A ≤_F B. Let S be a subset of P(N) and ≤ a reduction; then S is called closed under ≤ if, for all s ∈ S and A ⊆ N, A ≤ s implies A ∈ S. A reduction is a preordering, that is, a reflexive and transitive relation, on P(N) × P(N), where P(N) is the power set of the natural numbers. As described in the example above, there are two main types of reductions used in computational complexity: the many-one reduction and the Turing reduction. Many-one reductions map instances of one problem to instances of another; Turing reductions compute the solution to one problem assuming the other problem is easy to solve. The many-one reduction is a stronger type of Turing reduction, and is more effective at separating problems into distinct complexity classes.
7.
NP-completeness
–
In computational complexity theory, a decision problem is NP-complete when it is both in NP and NP-hard. The set of NP-complete problems is often denoted by NP-C or NPC. The abbreviation NP refers to nondeterministic polynomial time. Although a solution to an NP-complete problem can be verified quickly, there is no known way to find a solution quickly; that is, the time required to solve the problem using any currently known algorithm increases very quickly as the size of the problem grows. As a consequence, determining whether or not it is possible to solve these problems quickly remains open. In the meantime, NP-complete problems are addressed by using heuristic methods and approximation algorithms. A problem p in NP is NP-complete if every other problem in NP can be transformed into p in polynomial time. NP-complete problems are studied because the ability to quickly verify solutions to a problem seems to correlate with the ability to quickly solve that problem. It is not known whether every problem in NP can be quickly solved; this is called the P versus NP problem. Because of this, it is often said that NP-complete problems are harder or more difficult than NP problems in general. A decision problem C is NP-complete if (1) C is in NP, and (2) every problem in NP is reducible to C in polynomial time. C can be shown to be in NP by demonstrating that a candidate solution to C can be verified in polynomial time. Note that a problem satisfying condition (2) is said to be NP-hard, whether or not it satisfies condition (1). A consequence of this definition is that if we had a polynomial-time algorithm for C, we could solve all problems in NP in polynomial time. The concept of NP-completeness was introduced in 1971, though the term NP-complete was introduced later. At the 1971 STOC conference, there was a fierce debate among the computer scientists about whether NP-complete problems could be solved in polynomial time on a deterministic Turing machine. This is known as the question of whether P = NP. Nobody has yet been able to determine conclusively whether NP-complete problems are in fact solvable in polynomial time, making this one of the great unsolved problems of mathematics. 
The Clay Mathematics Institute is offering a US $1 million reward to anyone who has a formal proof that P = NP or that P ≠ NP. The Cook–Levin theorem states that the Boolean satisfiability problem is NP-complete. In 1972, Richard Karp proved that several other problems were also NP-complete; thus there is a class of NP-complete problems. For more details refer to Introduction to the Design and Analysis of Algorithms by Anany Levitin. An interesting example is the graph isomorphism problem, the graph theory problem of determining whether a graph isomorphism exists between two graphs. Two graphs are isomorphic if one can be transformed into the other simply by renaming vertices. Consider these two problems: Graph Isomorphism (is graph G1 isomorphic to graph G2?) and Subgraph Isomorphism (is graph G1 isomorphic to a subgraph of graph G2?). The Subgraph Isomorphism problem is NP-complete, while the graph isomorphism problem is suspected to be neither in P nor NP-complete; it is an example of a problem that is thought to be hard, but is not thought to be NP-complete.
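The "verifiable in polynomial time" half of NP-completeness is easy to illustrate with Graph Isomorphism, where a certificate is simply a vertex renaming. A Python sketch with assumed example graphs (a full verifier would also confirm the mapping is a bijection):

```python
# Verifying a candidate solution (certificate) in polynomial time.
def verify_isomorphism(edges1, edges2, mapping):
    """Check that `mapping` renames the vertices of graph 1 onto graph 2
    edge-for-edge; both graphs are given as collections of undirected edges."""
    relabeled = {frozenset((mapping[u], mapping[v])) for u, v in edges1}
    original = {frozenset(e) for e in edges2}
    return relabeled == original

G1 = {("a", "b"), ("b", "c")}    # path a-b-c
G2 = {(1, 2), (2, 3)}            # path 1-2-3
print(verify_isomorphism(G1, G2, {"a": 1, "b": 2, "c": 3}))  # True
```

Checking a proposed renaming takes time linear in the number of edges, even though no polynomial-time algorithm is known for finding one.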
8.
Treewidth
–
In graph theory, the treewidth of an undirected graph is a number associated with the graph. Treewidth is commonly used as a parameter in the parameterized complexity analysis of graph algorithms. The graphs with treewidth at most k are also called partial k-trees; many other well-studied graph families also have bounded treewidth. The concept of treewidth was originally introduced by Umberto Bertelé and Francesco Brioschi under the name of dimension. It was later rediscovered by Rudolf Halin, based on properties that it shares with a different graph parameter, the Hadwiger number. Later it was rediscovered again by Neil Robertson and Paul Seymour and has since been studied by many other authors. A tree decomposition of a graph G = (V, E) is a tree T with nodes X1, ..., Xn, where each Xi is a subset of V, satisfying the following properties: (1) The union of all sets Xi equals V; that is, each graph vertex is contained in at least one tree node. (2) If Xi and Xj both contain a vertex v, then all nodes Xk of T on the path between Xi and Xj contain v as well; equivalently, the tree nodes containing vertex v form a connected subtree of T. (3) For every edge (v, w) in the graph, there is a subset Xi that contains both v and w; that is, vertices are adjacent in the graph only when the corresponding subtrees have a node in common. The width of a tree decomposition is the size of its largest set Xi minus one. The treewidth tw(G) of a graph G is the minimum width among all possible tree decompositions of G. In this definition, the size of the largest set is diminished by one in order to make the treewidth of a tree equal to one. Equivalently, the treewidth of G is one less than the size of the largest clique in the chordal graph containing G with the smallest clique number. A chordal graph with this clique size may be obtained by adding to G an edge between every two vertices that both belong to at least one of the sets Xi. 
Treewidth may also be characterized in terms of havens, functions describing an evasion strategy for a certain pursuit-evasion game defined on a graph. A similar characterization can also be made using brambles, families of connected subgraphs that all touch each other; the order of a bramble is the smallest hitting set for the family of subgraphs. Every complete graph Kn has treewidth n − 1. This is most easily seen using the definition of treewidth in terms of chordal graphs: the complete graph is already chordal, and adding more edges cannot reduce the size of its largest clique. A connected graph with at least two vertices has treewidth 1 if and only if it is a tree; a tree has treewidth one by the same reasoning as for complete graphs. For any fixed constant k, the graphs of treewidth at most k are called the partial k-trees. Other families of graphs with bounded treewidth include the cactus graphs, pseudoforests, series-parallel graphs, outerplanar graphs, Halin graphs, and Apollonian networks. The control flow graphs arising in the compilation of structured programs also have bounded treewidth. The planar graphs, however, do not have bounded treewidth, because the n × n grid graph is a planar graph with treewidth exactly n.
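As a concrete illustration of the definition, here is a Python sketch that checks the three tree-decomposition properties in the simplified special case where the tree is a path (a path decomposition), and computes the width; the 4-cycle example is an assumption for illustration:

```python
# Checking a path-shaped tree decomposition and computing its width.
def width(bags):
    return max(len(b) for b in bags) - 1   # largest bag size minus one

def is_path_decomposition(vertices, edges, bags):
    # 1. every vertex appears in some bag
    if set().union(*bags) != set(vertices):
        return False
    # 2. every edge is contained in some bag
    if not all(any({v, w} <= b for b in bags) for v, w in edges):
        return False
    # 3. the bags containing each vertex are consecutive along the path
    for v in vertices:
        idx = [i for i, b in enumerate(bags) if v in b]
        if idx != list(range(idx[0], idx[-1] + 1)):
            return False
    return True

# The 4-cycle a-b-c-d-a has treewidth 2; one optimal decomposition:
bags = [{"a", "b", "c"}, {"a", "c", "d"}]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(is_path_decomposition("abcd", edges, bags), width(bags))  # True 2
```

The decomposition's width of 2 matches the treewidth of the cycle, which cannot be 1 because the cycle is not a tree.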
9.
AC0
–
AC0 is a complexity class used in circuit complexity. It is the smallest class in the AC hierarchy, and consists of all families of circuits of depth O(1) and polynomial size, with unlimited-fanin AND and OR gates. It thus contains NC0, which has only bounded-fanin AND and OR gates. Integer addition and subtraction are computable in AC0, but multiplication is not. In 1984 Furst, Saxe, and Sipser showed that calculating the parity of an input cannot be done by any AC0 circuit family, even with non-uniformity. It follows that AC0 is not equal to NC1, because a family of circuits in the latter class can compute parity. More precise bounds follow from the switching lemma. Using them, it has been shown that there is an oracle separation between the polynomial hierarchy and PSPACE.
10.
NP-hardness
–
NP-hardness, in computational complexity theory, is the defining property of a class of problems that are, informally, at least as hard as the hardest problems in NP. As a consequence, finding a polynomial-time algorithm to solve any NP-hard problem would give polynomial-time algorithms for all the problems in NP. A common misconception is that the NP in NP-hard stands for non-polynomial, when in fact it stands for non-deterministic polynomial acceptable problems. Although it is suspected that there are no polynomial-time algorithms for NP-hard problems, this has never been proven. Moreover, the class P, in which all problems can be solved in polynomial time, is contained in the NP class. A decision problem H is NP-hard when every problem L in NP can be solved in polynomial time by an oracle machine with an oracle for H; informally, we can think of an algorithm that can call such a machine as a subroutine for solving H. Another definition is to require that there is a polynomial-time reduction from an NP-complete problem G to H. As any problem L in NP reduces in polynomial time to G, L reduces in turn to H in polynomial time, so this new definition implies the previous one. Awkwardly, it does not restrict the class NP-hard to decision problems; for instance, it also includes search problems. If P ≠ NP, then NP-hard problems cannot be solved in polynomial time. Note that some NP-hard optimization problems can be polynomial-time approximated up to some constant approximation ratio, or even up to any approximation ratio. An example of an NP-hard problem is the subset sum problem. That is a decision problem, and happens to be NP-complete. Another example of an NP-hard problem is the problem of finding the least-cost cyclic route through all nodes of a weighted graph; this is commonly known as the traveling salesman problem. There are decision problems that are NP-hard but not NP-complete, for example the halting problem. This is the problem which asks, given a program and its input, will it run forever? That is a yes/no question, so this is a decision problem. It is easy to prove that the halting problem is NP-hard. 
It is also easy to see that the halting problem is not in NP, since all problems in NP are decidable in a finite number of operations, while the halting problem, in general, is undecidable. There are also NP-hard problems that are neither NP-complete nor undecidable. For instance, the language of true quantified Boolean formulas is decidable in polynomial space, but not known to be decidable in non-deterministic polynomial time. NP-hard problems do not have to be elements of the complexity class NP.
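The subset sum problem mentioned above makes the verify-versus-search gap concrete. A Python sketch with assumed example data: checking a proposed subset is fast, while the brute-force search below takes time exponential in the number of elements:

```python
# Subset sum: fast certificate verification vs. exponential brute-force search.
from itertools import combinations

def verify(nums, target, subset):
    """Polynomial-time check of a certificate (a proposed subset)."""
    return sum(subset) == target and all(x in nums for x in subset)

def subset_sum(nums, target):
    """Brute force over all 2**len(nums) subsets."""
    return any(sum(c) == target
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

print(subset_sum([3, 34, 4, 12, 5, 2], 9))    # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))   # False
```

The decision version shown here is NP-complete; the verifier runs in linear time, which is what places the problem in NP.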
11.
Turing machine
–
A Turing machine is an abstract model of computation. Despite the model's simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm's logic. The machine operates on an infinite memory tape divided into discrete cells; the machine positions its head over a cell and reads the symbol there. The Turing machine was invented in 1936 by Alan Turing, who called it an a-machine (automatic machine). Turing machines can be used to prove fundamental limitations on the power of mechanical computation. Turing completeness is the ability of a system of instructions to simulate a Turing machine. A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine capable of enumerating some arbitrary subset of valid strings of an alphabet. Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program. This is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing. The Turing machine is capable of processing an unrestricted grammar, which implies that it is capable of robustly evaluating first-order logic in an infinite number of ways. This is famously demonstrated through lambda calculus. A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine. The Church–Turing thesis states that Turing machines indeed capture the notion of effective methods in logic and mathematics. Studying their abstract properties yields many insights into computer science and complexity theory. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol; however, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. 
Any symbol on the tape may therefore eventually have an innings. The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time. In the original article, Turing imagines not a mechanism, but a person whom he calls the computer, who executes these deterministic mechanical rules slavishly. If δ is not defined on the current state and the current tape symbol, then the machine halts; q0 ∈ Q is the initial state, and F ⊆ Q is the set of final or accepting states. The initial tape contents is said to be accepted by M if it eventually halts in a state from F. Anything that operates according to these specifications is a Turing machine. The 7-tuple for the 3-state busy beaver looks like this: Q = {A, B, C, HALT}; Γ = {0, 1}; b = 0 (the blank symbol); Σ = {1}; q0 = A (the initial state); F = {HALT}; δ = see state-table below. Initially all tape cells are marked with 0. In the words of van Emde Boas, p. 6: the set-theoretical object provides only partial information on how the machine will behave and what its computations will look like. For instance, there will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
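The 7-tuple can be run directly. Below is a minimal Python simulator using the conventional 3-state busy beaver transition table (supplied here, since the text only references it); delta maps (state, scanned symbol) to (symbol to write, head move, next state):

```python
# Simulating a Turing machine on an unbounded tape of 0s.
from collections import defaultdict

delta = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "C"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "B"),
    ("C", 0): (1, -1, "B"), ("C", 1): (1, +1, "HALT"),
}

def run(delta, state="A", halt="HALT"):
    tape, head, steps = defaultdict(int), 0, 0   # defaultdict models blank cells
    while state != halt:
        write, move, state = delta[(state, tape[head])]
        tape[head] = write
        head += move
        steps += 1
    return steps, sum(tape.values())

print(run(delta))   # (13, 6): halts after 13 steps, leaving six 1s on the tape
```

The defaultdict plays the role of the two-way infinite tape: any cell not yet written reads as the blank symbol 0.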
12.
PSPACE
–
In computational complexity theory, PSPACE is the set of all decision problems that can be solved by a Turing machine using a polynomial amount of space. PSPACE is a superset of the set of context-sensitive languages. It turns out that allowing the Turing machine to be nondeterministic does not add any extra power: because of Savitch's theorem, NPSPACE is equivalent to PSPACE, essentially because a deterministic Turing machine can simulate a non-deterministic Turing machine without needing much more space. Also, the complements of all problems in PSPACE are in PSPACE, meaning that co-PSPACE = PSPACE. The following containments are known: NL ⊆ P ⊆ NP ⊆ PH ⊆ PSPACE ⊆ EXPTIME ⊆ EXPSPACE, and it is widely suspected that all are strict. The containments NL ⊊ PSPACE and PSPACE ⊊ EXPSPACE are both known to be strict: the first follows from direct diagonalization (the space hierarchy theorem) and the fact that PSPACE = NPSPACE via Savitch's theorem; the second follows simply from the space hierarchy theorem. The hardest problems in PSPACE are the PSPACE-complete problems; see PSPACE-complete for examples of problems that are suspected to be in PSPACE but not in NP. The class PSPACE is closed under union and complementation. An alternative characterization of PSPACE is the set of problems decidable by an alternating Turing machine in polynomial time. A logical characterization of PSPACE from descriptive complexity theory is that it is the set of problems expressible in second-order logic with the addition of a transitive closure operator. A full transitive closure is not needed; a commutative transitive closure and even weaker forms suffice. It is the addition of this operator that distinguishes PSPACE from PH. A major result of complexity theory is that PSPACE can be characterized as all the languages recognizable by a particular interactive proof system, the one defining the class IP. In this system, there is an all-powerful prover trying to convince a randomized polynomial-time verifier that a string is in the language. PSPACE can also be characterized as the quantum complexity class QIP. 
PSPACE is also equal to PCTC, problems solvable by classical computers using closed timelike curves, as well as to BQP_CTC, its quantum counterpart. PSPACE-complete problems are of importance to studying PSPACE problems because they represent the most difficult problems in PSPACE. Finding a simple solution to a PSPACE-complete problem would mean we have a simple solution to all other problems in PSPACE, because all PSPACE problems could be reduced to a PSPACE-complete problem. An example of a PSPACE-complete problem is the quantified Boolean formula problem. Introduction to the Theory of Computation, Chapter 19: Polynomial space, pp. 455–490. Introduction to the Theory of Computation, Chapter 8: Space Complexity. Complexity Zoo: PSPACE.
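The quantified Boolean formula problem mentioned above illustrates why polynomial space suffices: a formula can be evaluated recursively, and the recursion depth is just the number of variables. A minimal Python sketch (the encoding of formulas is an assumption for illustration):

```python
# Recursive evaluation of a fully quantified Boolean formula.
def eval_qbf(quantifiers, matrix, assignment=None):
    """`quantifiers` is a list of ('forall'|'exists', var) pairs;
    `matrix` maps an assignment dict to a bool. Space usage is proportional
    to the recursion depth, i.e. the number of variables."""
    assignment = assignment or {}
    if not quantifiers:
        return matrix(assignment)
    (q, var), rest = quantifiers[0], quantifiers[1:]
    branches = (eval_qbf(rest, matrix, {**assignment, var: b}) for b in (False, True))
    return all(branches) if q == "forall" else any(branches)

# Forall x exists y: x != y is true; exists y forall x: x != y is false.
phi = lambda a: a["x"] != a["y"]
print(eval_qbf([("forall", "x"), ("exists", "y")], phi))  # True
print(eval_qbf([("exists", "y"), ("forall", "x")], phi))  # False
```

The running time is exponential in the number of variables, but only one branch of the quantifier tree is held in memory at a time, which is the essence of the TQBF ∈ PSPACE argument.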
13.
Polynomial hierarchy
–
In computational complexity theory, the polynomial hierarchy is a hierarchy of complexity classes that generalize the classes P, NP and co-NP to oracle machines. It is a resource-bounded counterpart to the arithmetical hierarchy and analytical hierarchy from mathematical logic. There are multiple equivalent definitions of the classes of the polynomial hierarchy. If any Σ_kP = Σ_(k+1)P, or if any Σ_kP = Π_kP, then the hierarchy collapses to level k; in particular, if P = NP, then the hierarchy collapses completely. The union of all classes in the hierarchy is the complexity class PH. The polynomial hierarchy is an analogue, at much lower complexity, of the exponential hierarchy and the arithmetical hierarchy. It is known that PH is contained within PSPACE, but it is not known whether the two classes are equal. One useful reformulation of this problem is that PH = PSPACE if and only if second-order logic gains no additional power from the addition of a transitive closure operator. If the polynomial hierarchy has any complete problems, then it has only finitely many distinct levels. Since there are PSPACE-complete problems, we know that if PSPACE = PH, then the polynomial hierarchy must collapse. Each class in the hierarchy contains ≤_m^P-complete problems. Furthermore, each class in the hierarchy is closed under ≤_m^P-reductions, meaning that for a class C in the hierarchy, a language that reduces to a language in C is itself in C. These two facts together imply that if K_i is a complete problem for Σ_iP, then Σ_(i+1)P = NP^(K_i). For instance, Σ_2P = NP^SAT. In other words, if a language is defined based on some oracle in C, then we can assume that it is defined based on a complete problem for C. Complete problems therefore act as representatives of the class for which they are complete. The Sipser–Lautemann theorem states that the class BPP is contained in the second level of the polynomial hierarchy. Kannan's theorem states that for any fixed k, Σ_2 is not contained in SIZE(n^k). Toda's theorem states that the polynomial hierarchy is contained in P^#P. See also: EXPTIME, the exponential hierarchy, and the arithmetical hierarchy. A. R. Meyer and L. J. Stockmeyer. 
14.
Congestion game
–
Congestion games are a class of games in game theory first proposed by Rosenthal in 1973. In a congestion game we define players and resources, where the payoff of each player depends on the resources it chooses and on the number of players choosing the same resources. Congestion games are a special case of potential games. Rosenthal proved that any congestion game is a potential game, and Monderer and Shapley proved the converse: any potential game is isomorphic to a congestion game. Consider a traffic network where two players originate at point O and need to get to point T. Suppose that node O is connected to node T via connection points A and B, where A is a little closer than B. A good outcome in this game would be for the two players to coordinate and pass through different connection points; the question is whether they will, and if so, what the cost will be for each player. Discrete congestion games are defined by a finite set E of congestible elements, a set of players, and for each player a set of strategies, each strategy being a subset of E; each element e carries a delay function d_e, and we assume that each d_e is positive and monotone increasing. Consider the directed graph above, where each player has two available strategies (going through A or going through B), leading to a total of four possibilities. The existence of Nash equilibria can be shown by constructing a potential function that assigns a value to each outcome; moreover, this construction also shows that iterated best response finds a Nash equilibrium. Define Φ = Σ_{e∈E} Σ_{k=1}^{x_e} d_e(k), where x_e is the number of players using element e. Note that this function is not the social welfare Σ_{e∈E} x_e d_e(x_e), but rather a discrete integral of sorts. The critical property of a potential function for a congestion game is how it changes when one player switches strategy. Consider the case when player i switches from P_i to Q_i: elements that are in both strategies remain unaffected, elements that the player leaves decrease the potential by d_e(x_e), and elements the player joins increase the potential by d_e(x_e + 1). This change in potential is precisely the change in delay for player i. Now observe that any minimum of Φ is a pure Nash equilibrium.
Fixing all but one player, any improvement in strategy by that player corresponds to a decrease in Φ. Since there are a finite number of configurations and each d_e is monotone, an equilibrium exists. Continuous congestion games are the limiting case as n → ∞. In this setup, we consider players as infinitesimally small, and we keep E a finite set of congestible elements. Instead of n individual players, as in the discrete case, we have n types of players. Each type picks a strategy from a strategy set S_i. As before, assume that the d_e are monotone and positive, but add the assumption that they are continuous as well.
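The argument above can be sketched in code. This is a minimal model of the two-player O-to-T routing game, where the delay functions are illustrative assumptions (the text does not specify them): each path's delay grows with the number of players on it, and A is slightly closer than B. Iterated best response strictly lowers Rosenthal's potential at every step, so it terminates at a pure Nash equilibrium.

```python
# A sketch of the two-player network congestion game described above.
# The delay functions are assumed for illustration: d(k) is the delay on
# a path used by k players, with A slightly closer than B.
D = {"A": lambda k: 2 * k,      # through A: delay 2 alone, 4 if shared
     "B": lambda k: 2 * k + 1}  # through B: delay 3 alone (a bit farther)

def delays(profile):
    """Delay experienced by each player under a strategy profile."""
    load = {p: profile.count(p) for p in set(profile)}
    return [D[p](load[p]) for p in profile]

def potential(profile):
    """Rosenthal's potential: sum over elements e of d_e(1)+...+d_e(x_e)."""
    load = {p: profile.count(p) for p in set(profile)}
    return sum(D[e](k) for e, x in load.items() for k in range(1, x + 1))

def best_response_dynamics(profile):
    """Iterate unilateral improvements; each one strictly lowers the
    potential, so the process terminates at a pure Nash equilibrium."""
    improved = True
    while improved:
        improved = False
        for i in range(len(profile)):
            for s in ("A", "B"):
                cand = profile[:i] + [s] + profile[i + 1:]
                if delays(cand)[i] < delays(profile)[i]:
                    profile, improved = cand, True
    return profile

eq = best_response_dynamics(["A", "A"])  # both start on the shorter path
print(eq, delays(eq))  # the players split across A and B
```

Starting from both players on A, one player switches to B, after which no unilateral improvement remains: the coordinated outcome the text describes.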
15.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before then. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The ISBN was conceived in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966; the 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISBN was devised in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974, and the ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Numbers (EAN-13).
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, United States, and some other countries, publishers must pay for ISBNs, as the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
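The SBN-to-ISBN conversion and check digit described above can be sketched as follows. An ISBN-10 is valid when the weighted sum of its digits (weights 10 down to 1, with X standing for 10) is divisible by 11; prefixing an SBN with 0 leaves the check digit unchanged, since a leading zero contributes nothing to the sum.

```python
def isbn10_check_ok(isbn):
    """Validate an ISBN-10: the sum of digit*weight, with weights
    10 down to 1 and X meaning 10, must be divisible by 11."""
    digits = [10 if c == "X" else int(c) for c in isbn.replace("-", "")]
    return sum(d * w for d, w in zip(digits, range(10, 0, -1))) % 11 == 0

sbn = "340013818"             # Mr. J. G. Reeder Returns (Hodder, 1965)
isbn = "0" + sbn              # prefix 0: ISBN 0-340-01381-8
print(isbn10_check_ok(isbn))  # True
```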
16.
ArXiv
–
In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008; by 2014 the submission rate had grown to more than 8,000 per month. The arXiv was made possible by the low-bandwidth TeX file format. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Additional modes of access were added later: FTP in 1991 and Gopher in 1992. The term e-print was quickly adopted to describe the articles, and the repository's original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the rapidly expanding technology, in 1999 Ginsparg changed institutions to Cornell University, and the site is now hosted principally by Cornell, with 8 mirrors around the world. Its existence was one of the factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists regularly upload their papers to arXiv.org for worldwide access; Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv is approximately $826,000 for 2013 to 2017, funded jointly by Cornell University Library and institutional donations; annual donations were envisaged to vary in size between $2,300 and $4,000, based on each institution's usage. As of 14 January 2014, 174 institutions had pledged support for the period 2013–2017 on this basis. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it was supposed to be a three-hour tour; however, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee.
The lists of moderators for many sections of the arXiv are publicly available. Additionally, an endorsement system was introduced in 2004 as part of an effort to ensure content that is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors. New authors from recognized academic institutions generally receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for allegedly restricting scientific inquiry. Perelman appears content to forgo the traditional peer-reviewed journal process, stating, "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it." The arXiv generally re-classifies such works, e.g. into General mathematics. Papers can be submitted in any of several formats, including LaTeX and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the software if generating the final PDF file fails, or if any image file is too large. ArXiv now allows one to store and modify an incomplete submission; the time stamp on the article is set when the submission is finalized.
17.
International Standard Serial Number
–
An International Standard Serial Number (ISSN) is an eight-digit serial number used to uniquely identify a serial publication. The ISSN is especially helpful in distinguishing between serials with the same title. ISSNs are used in ordering, cataloguing, interlibrary loans, and other practices in connection with serial literature. The ISSN system was first drafted as an International Organization for Standardization international standard in 1971; ISO subcommittee TC 46/SC 9 is responsible for maintaining the standard. When a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published both in print and electronic media; the ISSN system refers to these types as print ISSN and electronic ISSN, respectively. The format of the ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers; as an integer, it can be represented by the first seven digits. The last digit, which may be 0–9 or an X, is a check digit. Formally, the general form of the ISSN code can be expressed as NNNN-NNNC, where N is a digit character and C is the check digit. The ISSN of the journal Hearing Research, for example, is 0378-5955, where the final 5 is the check digit. For calculations, an upper-case X in the check-digit position indicates a check digit of 10. To confirm the check digit, calculate the sum of all eight digits of the ISSN, each multiplied by its position in the number, counting from the right; the modulus 11 of the sum must be 0. There is an online ISSN checker that can validate an ISSN. ISSN codes are assigned by a network of ISSN National Centres, usually located at national libraries and coordinated by the ISSN International Centre based in Paris. The International Centre is an organization created in 1974 through an agreement between UNESCO and the French government. The International Centre maintains a database of all ISSNs assigned worldwide; at the end of 2016, the ISSN Register contained records for 1,943,572 items.
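The check computation described above can be sketched directly: each of the eight digits is multiplied by its position counting from the right (weights 8 down to 1, X standing for 10), and the sum must be divisible by 11.

```python
def issn_check_ok(issn):
    """Validate an ISSN such as '0378-5955': the positional weighted
    sum (weights 8 down to 1, X meaning 10) must be divisible by 11."""
    digits = [10 if c == "X" else int(c) for c in issn.replace("-", "")]
    return sum(d * w for d, w in zip(digits, range(8, 0, -1))) % 11 == 0

print(issn_check_ok("0378-5955"))  # True: Hearing Research
```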
ISSN and ISBN codes are similar in concept. Where ISBNs are assigned to individual books, an ISBN might be assigned for particular issues of a serial, in addition to the ISSN code for the serial as a whole. An ISSN, unlike the ISBN code, is an identifier associated with a serial title rather than with a single publication. For this reason a new ISSN is assigned to a serial each time it undergoes a major title change. Separate ISSNs are needed for serials in different media: the print and electronic versions of a serial need separate ISSNs. Also, a CD-ROM version and a web version of a serial require different ISSNs, since two different media are involved; however, the same ISSN can be used for different file formats of the same online serial.
18.
JSTOR
–
JSTOR is a digital library founded in 1995. Originally containing digitized back issues of academic journals, it now also includes books and primary sources. It provides full-text searches of almost 2,000 journals, and more than 8,000 institutions in more than 160 countries have access to JSTOR. Most access is by subscription, but some older public-domain content is freely available to anyone. JSTOR was founded by William G. Bowen, president of Princeton University from 1972 to 1988. JSTOR originally was conceived as a solution to one of the problems faced by libraries, especially research and university libraries, due to the increasing number of academic journals in existence. Most libraries found it prohibitively expensive in terms of cost and space to maintain a comprehensive collection of journals. By digitizing many journal titles, JSTOR allowed libraries to outsource the storage of journals with the confidence that they would remain available long-term; online access and full-text search ability improved access dramatically. Bowen initially considered using CD-ROMs for distribution. JSTOR was initiated in 1995 at seven different library sites, and originally encompassed ten economics and history journals. JSTOR access improved based on feedback from these sites, and special software was put in place to make pictures and graphs clear. With the success of this limited project, Bowen and Kevin Guthrie, then-president of JSTOR, wanted to expand the number of participating journals. They met with representatives of the Royal Society of London, and an agreement was made to digitize the Philosophical Transactions of the Royal Society dating from its beginning in 1665; the work of adding these volumes to JSTOR was completed by December 2000. The Andrew W. Mellon Foundation funded JSTOR initially. Until January 2009 JSTOR operated as an independent, self-sustaining nonprofit organization with offices in New York City and in Ann Arbor, Michigan.
JSTOR content is provided by more than 900 publishers; the database contains more than 1,900 journal titles in more than 50 disciplines. Each object is identified by an integer value, starting at 1. In addition to the main site, the JSTOR Labs group operates an open service that allows access to the contents of the archives for the purposes of corpus analysis via its Data for Research service. This site offers a search facility with graphical indication of the article coverage. Users may create focused sets of articles and then request a dataset containing word and n-gram frequencies; they are notified when the dataset is ready and may download it in either XML or CSV formats. The service does not offer full text, although academics may request that from JSTOR. JSTOR Plant Science is available in addition to the main site. The materials on JSTOR Plant Science are contributed through the Global Plants Initiative and are accessible only through JSTOR.
19.
Extensive-form game
–
Extensive-form games also allow representation of incomplete information in the form of chance events encoded as moves by nature. The rest of this article follows this approach, with motivating examples; the general definition was introduced by Harold W. Kuhn in 1953. Each player's subset of nodes is referred to as the nodes of that player. Each node of the Chance player has a probability distribution over its outgoing edges; at any given non-terminal node belonging to Chance, an outgoing branch is chosen according to the probability distribution. A pure strategy for a player thus consists of a selection: choosing precisely one class of outgoing edges for every information set. In a game of perfect information, the information sets are singletons. It is less evident how payoffs should be interpreted in games with Chance nodes; these can be made precise using epistemic modal logic (see Shoham & Leyton-Brown for details). A perfect-information two-player game over a tree can be represented as an extensive-form game with outcomes. Examples of such games include tic-tac-toe, chess, and infinite chess. A game over an expectiminimax tree, like that of backgammon, has no imperfect information but has moves of chance. Poker, for example, has both moves of chance and imperfect information. In a diagram of such a game, the numbers by every non-terminal node indicate to which player that decision node belongs, the numbers by every terminal node represent the payoffs to the players, and the labels by every edge of the graph are the name of the action that edge represents. The initial node belongs to player 1, indicating that player 1 moves first. Play according to the tree is as follows: player 1 chooses between U and D; player 2 observes player 1's choice and then chooses between U and D. The payoffs are as specified in the tree; there are four outcomes, represented by the four terminal nodes of the tree.
The payoffs associated with each outcome are as follows: if player 1 plays D, player 2 will play U to maximise his payoff, and so player 1 will only receive 1. However, if player 1 plays U, player 2 maximises his payoff by playing D. Player 1 prefers 2 to 1, and so will play U, and player 2 will play D. This is the subgame perfect equilibrium. An advantage of representing the game in this way is that it makes the order of play clear: the tree shows that player 1 moves first and player 2 observes this move. However, in some games play does not occur like this: one player does not always observe the choice of another. An information set is a set of decision nodes between which the player with the move cannot distinguish. In extensive form, an information set is indicated by a dotted line connecting all nodes in that set, or sometimes by a loop drawn around all the nodes in that set.
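The reasoning above is backward induction, and it can be sketched in code. The payoff pairs (to player 1, to player 2) below are illustrative assumptions chosen to match the text: after D, player 2 prefers U (leaving player 1 with 1); after U, player 2 prefers D (leaving player 1 with 2).

```python
# Leaves hold (payoff to player 1, payoff to player 2); the payoff
# numbers are assumed for illustration, consistent with the text.
TREE = {
    "U": {"U": (0, 0), "D": (2, 1)},
    "D": {"U": (1, 2), "D": (3, 1)},
}

def backward_induction(tree):
    """Player 2 best-responds at each node; player 1 anticipates this
    and picks the branch maximizing her resulting payoff."""
    p2 = {a1: max(sub, key=lambda a2: sub[a2][1]) for a1, sub in tree.items()}
    a1 = max(tree, key=lambda a: tree[a][p2[a]][0])
    return a1, p2[a1], tree[a1][p2[a1]]

print(backward_induction(TREE))  # ('U', 'D', (2, 1)): the SPE path
```

Player 1 plays U, player 2 replies D, and the equilibrium payoffs are (2, 1), matching the argument in the text.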
20.
Information set (game theory)
–
In game theory, an information set is a set that, for a particular player, establishes all the possible moves that could have taken place in the game so far, given what that player has observed. If the game has perfect information, every information set contains only one member. Otherwise, some players cannot be sure exactly what has taken place so far in the game. More specifically, in the extensive form, an information set is a set of decision nodes such that every node in the set belongs to one player and, when play reaches the information set, that player cannot tell which node in the set has been reached. The notion of information set was introduced by John von Neumann. At the right are two versions of the battle of the sexes game, shown in extensive form. The first game is simply sequential: when player 2 has the chance to move, he knows what player 1 has chosen. The second game is also sequential, but the dotted line shows player 2's information set: this is the way to show that when player 2 moves, he does not know what player 1 did. This difference also leads to different predictions for the two games. In the first game, player 1 has the upper hand: they know that they can choose O safely, because once player 2 knows that player 1 has chosen opera, player 2 would rather go along for O and get 2 than choose F. Formally, that's applying subgame perfection to solve the game. In the second game, player 2 can't observe what player 1 did before moving.
21.
Preference (economics)
–
In economics and other social sciences, preference is the ordering of alternatives based on their relative utility, a process which results in an optimal choice. The character of the preferences is determined purely by taste factors, independent of considerations of prices or income. With the help of the scientific method, many practical decisions of life can be modelled. In 1926 Ragnar Frisch developed for the first time a mathematical model of preferences in the context of economic demand and utility functions. Up to then, economists had developed a theory of demand that omitted primitive characteristics of people. This omission ceased at the end of the 19th century; because binary choices are directly observable, the choice-based approach instantly appealed to economists. The search for observables in microeconomics is taken even further by revealed preference theory. Since the pioneering efforts of Frisch in the 1920s, one of the issues which has pervaded the theory of preferences is the representability of a preference structure with a real-valued function. This has been achieved by mapping it to the mathematical index called utility. Von Neumann and Morgenstern's 1944 book Theory of Games and Economic Behavior treated preferences as a formal relation whose properties can be stated axiomatically, so the economics of choice can be examined either at the level of utility functions or at the level of preferences. Suppose the set of all states of the world is X and an agent has a preference relation on X. It is common to mark the weak preference relation by ⪯. The symbol ∼ is used as a shorthand for the indifference relation, x ∼ y ⟺ (x ⪯ y ∧ y ⪯ x), which reads "the agent is indifferent between y and x". The symbol ≺ is used as a shorthand for the strict preference relation, x ≺ y ⟺ (x ⪯ y ∧ ¬(y ⪯ x)).
In everyday speech, the statement "x is preferred to y" is generally understood to mean that someone chooses x over y. However, decision theory rests on more precise definitions of preferences, given that there are many experimental conditions influencing people's choices in many directions. Suppose a person is confronted with an experiment that she must solve with the aid of introspection. She is offered apples and oranges, and is asked to choose one of the two. A decision scientist observing this event would be inclined to say that whichever is chosen is the preferred alternative. Under several repetitions of the experiment, if the scientist observes that apples are chosen 51% of the time, it would mean that x ≻ y. If each is chosen half of the time, then x ∼ y. Finally, if she chooses oranges 51% of the time, it means that y ≻ x. Preference is here being identified with a greater frequency of choice.
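The relations ⪯, ∼ and ≺ defined above can be sketched as predicates. As an assumption for illustration, preferences here are derived from a numeric utility function u, although the text treats them axiomatically as a primitive relation.

```python
# A minimal sketch of the preference relations, assuming preferences
# are represented by a utility function u (an illustrative assumption).

def weakly_preferred(x, y, u):
    """x ⪯ y: the agent wants y at least as much as x."""
    return u(x) <= u(y)

def indifferent(x, y, u):
    """x ∼ y  ⟺  x ⪯ y and y ⪯ x."""
    return weakly_preferred(x, y, u) and weakly_preferred(y, x, u)

def strictly_preferred(x, y, u):
    """x ≺ y  ⟺  x ⪯ y but not y ⪯ x."""
    return weakly_preferred(x, y, u) and not weakly_preferred(y, x, u)

# Hypothetical utilities for the apples-and-oranges experiment.
u = {"apple": 2, "orange": 2, "banana": 1}.get
print(indifferent("apple", "orange", u))         # True
print(strictly_preferred("banana", "apple", u))  # True
```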
22.
Simultaneous game
–
In game theory, a simultaneous game is a game where each player chooses his action without knowledge of the actions chosen by other players. Normal-form representations are usually used for simultaneous games. Rock-paper-scissors, a widely played hand game, is a real-life example of a simultaneous game: both players make a decision at the same time, randomly, without prior knowledge of the opponent's decision. There are two players in this game, and each of them has three different strategies to choose from. We can display Player 1's strategies as rows and Player 2's strategies as columns; in such a table, the numbers in red represent the payoff to Player 1, and the numbers in blue represent the payoff to Player 2. In game-theory terms, the prisoner's dilemma is another example of a simultaneous game.
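The rock-paper-scissors payoff table described above can be sketched as code; the convention of scoring a win as 1, a loss as -1 and a tie as 0 is an illustrative assumption. Each cell of the normal-form matrix holds (payoff to Player 1, payoff to Player 2).

```python
# Normal-form payoff matrix for rock-paper-scissors; win = 1,
# loss = -1, tie = 0 (an assumed scoring convention).
STRATEGIES = ["Rock", "Paper", "Scissors"]
BEATS = {"Rock": "Scissors", "Paper": "Rock", "Scissors": "Paper"}

def payoff(s1, s2):
    """Return (Player 1's payoff, Player 2's payoff)."""
    if s1 == s2:
        return (0, 0)  # tie
    return (1, -1) if BEATS[s1] == s2 else (-1, 1)

# Rows are Player 1's strategies, columns Player 2's.
matrix = {(s1, s2): payoff(s1, s2) for s1 in STRATEGIES for s2 in STRATEGIES}
print(matrix[("Rock", "Scissors")])  # (1, -1): rock blunts scissors
```

Note that every cell sums to zero, which is what makes rock-paper-scissors a zero-sum game.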
23.
Economic equilibrium
–
In economics, economic equilibrium is a state where economic forces such as supply and demand are balanced, and in the absence of external influences the values of economic variables will not change. For example, in the textbook model of perfect competition, equilibrium occurs at the point at which quantity demanded equals quantity supplied. However, the concept of equilibrium in economics also applies to imperfectly competitive markets. Three basic properties of equilibrium in general have been proposed by Huw Dixon: Equilibrium property P1, the behavior of agents is consistent; Equilibrium property P2, no agent has an incentive to change its behavior; and Equilibrium property P3, equilibrium is the outcome of some dynamic process. In a competitive equilibrium, supply equals demand. Property P1 is satisfied, because at the equilibrium price the amount supplied is equal to the amount demanded. Demand is chosen to maximize utility given the price, so no one on the demand side has any incentive to demand more or less at the prevailing price. Likewise, supply is determined by firms maximizing their profits at the market price. Hence, agents on neither the demand side nor the supply side have any incentive to alter their actions, and Property P2 is satisfied. To see whether Property P3 is satisfied, consider what happens when the price is above the equilibrium. In this case there is an excess supply, with the quantity supplied exceeding that demanded. This will tend to put downward pressure on the price to make it return to equilibrium. Likewise, where the price is below the equilibrium point, there is a shortage in supply, leading to an increase in prices back to equilibrium. Not all equilibria are stable in the sense of Equilibrium property P3; it is possible to have competitive equilibria that are unstable. However, if an equilibrium is unstable, it raises the question of how you might get there: even if it satisfies properties P1 and P2, the absence of P3 means that the market can only be in the unstable equilibrium if it starts off there.
In most simple microeconomic stories of supply and demand, a static equilibrium is observed in a single market; however, equilibrium may also be economy-wide or general, as opposed to the partial equilibrium of a single market. Equilibrium can change if there is a change in demand or supply conditions. For example, an increase in supply will disrupt the equilibrium, leading to lower prices. Eventually, a new equilibrium will be attained in most markets; then there will be no change in price or in the amount of output bought and sold until there is an exogenous shift in supply or demand. That is, there are no endogenous forces leading to a change in the price or the quantity. The Nash equilibrium is widely used in economics as the main alternative to competitive equilibrium; it is used when there is a strategic element to the behavior of agents.
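The competitive equilibrium and the P3 adjustment story above can be worked through with a small numeric example. The linear demand and supply curves below are illustrative assumptions, not taken from the text: demand Qd = a - b·p and supply Qs = c + d·p.

```python
# A sketch of a single-market competitive equilibrium with assumed
# linear curves: demand Qd = a - b*p, supply Qs = c + d*p.

def equilibrium(a, b, c, d):
    """Solve a - b*p = c + d*p for the equilibrium price and quantity."""
    p = (a - c) / (b + d)
    q = a - b * p
    return p, q

# Example: demand Qd = 100 - 2p, supply Qs = 10 + p.
p, q = equilibrium(a=100, b=2, c=10, d=1)
print(p, q)  # price 30.0, quantity 40.0

# Property P3: above the equilibrium price there is excess supply,
# pushing the price back down toward equilibrium.
print((10 + 1 * 40) > (100 - 2 * 40))  # True: supply exceeds demand at p=40
```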
24.
Solution concept
–
In game theory, a solution concept is a formal rule for predicting how a game will be played. These predictions are called solutions, and describe which strategies will be adopted by players and, therefore, the result of the game. The most commonly used solution concepts are equilibrium concepts, most famously Nash equilibrium. Many solution concepts, for many games, will result in more than one solution. This puts any one of the solutions in doubt, so a game theorist may apply a refinement to narrow down the set of solutions. Each successive solution concept presented in the following improves on its predecessor by eliminating implausible equilibria in richer games. Let Γ be the class of all games and, for each game G ∈ Γ, let S_G be the set of strategy profiles of G. A solution concept is an element of the direct product ∏_{G∈Γ} 2^{S_G}, i.e. a function F: Γ → ⋃_{G∈Γ} 2^{S_G} such that F(G) ⊆ S_G for all G ∈ Γ. In the simplest solution concept, players are assumed to be rational and strictly dominated strategies are eliminated. A strategy is strictly dominated when there is some other strategy available to the player that always has a higher payoff, regardless of the strategies that the other players choose. For example, in the prisoner's dilemma, cooperate is strictly dominated by defect for both players, because either player is always better off playing defect, regardless of what his opponent does. A Nash equilibrium is a strategy profile in which every strategy is a best response to the other strategies played. There are games that have multiple Nash equilibria, some of which are unrealistic. In the case of dynamic games, unrealistic Nash equilibria can be eliminated by applying backward induction, which assumes that future play will be rational. Backward induction therefore eliminates non-credible threats, because such threats would be irrational to carry out if a player were actually called upon to do so. For example, consider a game in which the players are an incumbent firm in an industry and a potential entrant.
As it stands, the incumbent has a monopoly over the industry. If the entrant chooses not to enter, the payoff to the incumbent is high and the entrant neither loses nor gains. If the entrant enters, the incumbent can fight or accommodate the entrant: it can fight by lowering its price, running the entrant out of business but damaging its own profits, or it can accommodate the entrant and lose some of its sales. If the entrant enters, the best response of the incumbent is to accommodate; if the incumbent accommodates, the best response of the entrant is to enter. Hence the strategy profile in which the entrant enters and the incumbent accommodates is a Nash equilibrium. However, if the incumbent is going to play fight, the best response of the entrant is to not enter, and if the entrant does not enter, it does not matter what the incumbent chooses to do. Hence fight can be considered a best response of the incumbent if the entrant does not enter, giving a second Nash equilibrium that rests on a non-credible threat.
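The entry game above can be sketched in code. The payoff numbers are illustrative assumptions consistent with the text, written as (entrant, incumbent): staying out gives (0, 4) regardless of the incumbent's plan, entering against a fight gives (-1, 1), and entering against accommodation gives (2, 2).

```python
# Assumed payoffs (entrant, incumbent) for the entry game in the text.
PAYOFFS = {
    ("out", "fight"): (0, 4),
    ("out", "accommodate"): (0, 4),  # incumbent's plan is moot if no entry
    ("enter", "fight"): (-1, 1),
    ("enter", "accommodate"): (2, 2),
}

def nash_equilibria():
    """Return profiles where neither player gains by unilaterally deviating."""
    eqs = []
    for e in ("out", "enter"):
        for i in ("fight", "accommodate"):
            e_dev = PAYOFFS[("enter" if e == "out" else "out", i)][0]
            i_dev = PAYOFFS[(e, "accommodate" if i == "fight" else "fight")][1]
            if PAYOFFS[(e, i)][0] >= e_dev and PAYOFFS[(e, i)][1] >= i_dev:
                eqs.append((e, i))
    return eqs

print(nash_equilibria())  # [('out', 'fight'), ('enter', 'accommodate')]
```

Both Nash equilibria appear, but ('out', 'fight') rests on the non-credible threat to fight: backward induction discards it, leaving ('enter', 'accommodate') as the subgame perfect prediction.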
25.
Subgame perfect equilibrium
–
In game theory, a subgame perfect equilibrium is a refinement of a Nash equilibrium used in dynamic games. A strategy profile is a subgame perfect equilibrium if it represents a Nash equilibrium of every subgame of the original game. Every finite extensive game with perfect recall has a subgame perfect equilibrium. A common method for determining subgame perfect equilibria in the case of a finite game is backward induction. Here one first considers the last actions of the game and determines which actions the final mover should take in each possible circumstance to maximize his or her utility. One then supposes that the last actor will take these actions, and this process continues until one reaches the first move of the game. The strategies which remain are the set of all subgame perfect equilibria for finite-horizon extensive games of perfect information. However, backward induction cannot be applied to games of imperfect or incomplete information, because this would entail cutting through non-singleton information sets. A subgame perfect equilibrium necessarily satisfies the one-shot deviation principle. The set of subgame perfect equilibria for a given game is always a subset of the set of Nash equilibria for that game, and in some cases the sets are identical. The ultimatum game provides an intuitive example of a game with fewer subgame perfect equilibria than Nash equilibria. An example of a game possessing an ordinary Nash equilibrium and a subgame perfect equilibrium is shown in Figure 1. The strategies for player 1 are L and R, whereas player 2 has four strategies, since his choice to be kind or unkind to player 1 might depend on the choice previously made by player 1. The payoff matrix of the game is shown in Table 1. Observe that there are two different Nash equilibria, given by the strategy profiles containing L and R respectively. Consider the equilibrium given by the profile containing L: more formally, this is not an equilibrium with respect to the subgame induced by player 2's decision node.
It is likely that in real life player 2 would deviate to his better response, which would in turn inspire player 1 to change his strategy to R. The resulting profile containing R is not only a Nash equilibrium but is also an equilibrium in all subgames; it is therefore a subgame perfect equilibrium. Reinhard Selten proved that any game which can be broken into subgames containing a subset of all the choices in the main game will have a subgame perfect Nash equilibrium strategy. Subgame perfection is used with games of complete information; it can also be used with extensive-form games of complete but imperfect information. One game in which the backward-induction solution is well known is tic-tac-toe, but in theory even Go has such an optimal strategy for all players.