1.
Computational complexity theory
–
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication or the number of gates in a circuit. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. Closely related fields in computer science are analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved, whereas an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing: the instance is a number, and the solution is yes if the number is prime. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, as in a real-world computer, so mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. Although some proofs assume a concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of that choice; this can be achieved by ensuring that different representations can be transformed into each other efficiently.
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are exactly those instances whose output is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise, it is said to reject the input. An example of a decision problem is the following: the input is an arbitrary graph, and the problem consists in deciding whether the given graph is connected.
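The accept/reject behavior of a decider can be made concrete with a minimal Python sketch. Here the primality-testing problem from above serves as the example, with instances assumed to be encoded as decimal strings (the function name and encoding are illustrative, not standard):

```python
def decides_primality(instance: str) -> bool:
    """Decider for the language of primes: accept iff the instance
    string encodes a prime number."""
    n = int(instance)  # decode the problem instance
    if n < 2:
        return False   # reject: 0 and 1 are not prime
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False  # reject: a nontrivial factor was found
    return True           # accept: the instance is in the language
```

The language decided is exactly the set of accepted instances: the string "7" is accepted, while "9" is rejected.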

2.
Complexity class
–
In computational complexity theory, a complexity class is a set of problems of related resource-based complexity. A typical complexity class has a definition of the form: the set of problems that can be solved by an abstract machine M using O(f(n)) of resource R, where n is the size of the input. Complexity classes are concerned with the rate of growth of the requirement in resources as the input size n increases. It is an asymptotic measurement, and does not give time or space requirements in terms of seconds or bytes. The O is read as "order of"; for the purposes of computational complexity theory, some of the details of the function f(n) can be ignored, so that, for instance, many possible polynomials can be grouped together as a single class. The resource in question can either be time, essentially the number of operations on an abstract machine, or space, essentially the amount of memory used. The simplest complexity classes are defined by the following factors: the type of computational problem (the most common are decision problems, but complexity classes can also be defined based on function problems, counting problems, optimization problems, and promise problems); the model of computation; and the resources that are being bounded together with the bounds themselves. These last two properties are usually stated together, such as polynomial time, logarithmic space, or constant depth. Many complexity classes can also be characterized in terms of the logic needed to express them. Bounding the computation time above by some concrete function f often yields complexity classes that depend on the chosen machine model. For instance, the language {xx | x is any binary string} can be solved in linear time on a multi-tape Turing machine, but necessarily requires quadratic time on a single-tape Turing machine. If we allow polynomial variations in running time, the Cobham–Edmonds thesis states that the complexities in any two reasonable and general models of computation are polynomially related. This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time. The corresponding set of function problems is FP. The Blum axioms can be used to define complexity classes without referring to a concrete computational model.
Many important complexity classes can be defined by bounding the time or space used by the algorithm; P, NP, PSPACE, and EXPSPACE are among the important complexity classes of decision problems defined in this manner. It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE by Savitch's theorem. #P is an important complexity class of counting problems; classes like IP and AM are defined using interactive proof systems. ALL is the class of all decision problems. Many complexity classes are defined using the concept of a reduction.
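The point that a complexity class measures the growth rate of resource requirements, rather than absolute costs in seconds or bytes, can be illustrated with a toy step counter (an illustrative sketch only; the function names are assumptions, not part of the theory):

```python
def linear_scan_steps(n: int) -> int:
    """Steps taken by a single pass over an input of size n: O(n)."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def subset_enumeration_steps(n: int) -> int:
    """Steps taken enumerating all subsets of an n-element set: O(2**n)."""
    steps = 0
    for _ in range(2 ** n):
        steps += 1
    return steps
```

Doubling the input size doubles the first count but squares the second; it is this difference in growth rate, not the concrete counts, that separates the classes.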

3.
Decision problem
–
In computability theory and computational complexity theory, a decision problem is a question in some formal system that can be posed as a yes-no question, dependent on the input values. For example, "given two numbers x and y, does x evenly divide y?" is a decision problem. The answer can be yes or no, and depends upon the values of x and y. A method for solving a decision problem, given in the form of an algorithm, is called a decision procedure for that problem. A decision procedure for the problem "given two numbers x and y, does x evenly divide y?" would give the steps for determining whether x evenly divides y. One such algorithm is long division, taught to schoolchildren: if the remainder is zero, the answer produced is yes; otherwise, it is no. A decision problem which can be solved by an algorithm, such as this example, is called decidable. The field of computational complexity categorizes decidable decision problems by how difficult they are to solve. "Difficult", in this sense, is described in terms of the computational resources needed by the most efficient algorithm for a certain problem. The field of computability theory, meanwhile, categorizes undecidable decision problems by Turing degree. A decision problem is any arbitrary yes-or-no question on a set of inputs. Because of this, it is traditional to define the decision problem equivalently as the set of inputs for which the problem returns yes. These inputs can be natural numbers, but may also be values of some other kind, such as strings over the binary alphabet or over some other finite set of symbols. The subset of strings for which the problem returns yes is a formal language. Alternatively, using an encoding such as Gödel numbering, any string can be encoded as a natural number, via which a decision problem can be defined as a subset of the natural numbers. A classic example of a decision problem is the set of prime numbers. It is possible to decide whether a given natural number is prime by testing every possible nontrivial factor.
Although much more efficient methods of primality testing are known, the existence of any effective method is enough to establish decidability. A decision problem A is called decidable or effectively solvable if A is a recursive set. A problem is called partially decidable, semidecidable, solvable, or provable if A is a recursively enumerable set. Problems that are not decidable are called undecidable; the halting problem is an important undecidable decision problem (for more examples, see the list of undecidable problems). Decision problems can be ordered according to many-one reducibility and related feasible reductions such as polynomial-time reductions.
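The divisibility decision procedure described above can be sketched in Python. The remainder is computed here by repeated subtraction, a toy stand-in for long division, and positive integer inputs are assumed:

```python
def divides(x: int, y: int) -> bool:
    """Decision procedure for "given x and y, does x evenly divide y?",
    assuming x and y are positive integers.  Compute the remainder of
    y divided by x by repeated subtraction; answer yes iff it is zero."""
    remainder = y
    while remainder >= x:
        remainder -= x   # one subtraction step of the "long division"
    return remainder == 0  # zero remainder means x evenly divides y
```

For instance, `divides(3, 12)` answers yes while `divides(5, 12)` answers no, and the procedure terminates on every input, which is exactly what makes the problem decidable.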

4.
NP (complexity)
–
In computational complexity theory, NP is a complexity class used to describe certain types of decision problems. Informally, NP is the set of all decision problems for which the instances where the answer is yes have efficiently verifiable proofs; more precisely, these proofs have to be verifiable by deterministic computations that can be performed in polynomial time. Equivalently, NP can be defined as the set of decision problems solvable in polynomial time by a theoretical non-deterministic Turing machine. This second definition is the basis for the abbreviation NP, which stands for nondeterministic polynomial time. However, the verifier-based definition tends to be more intuitive and practical in common applications compared to the formal machine definition. A method for solving a decision problem is given in the form of an algorithm. In the above definitions for NP, polynomial time refers to the number of machine operations needed by an algorithm relative to the size of the problem; polynomial time is therefore a measure of the efficiency of an algorithm. Decision problems are commonly categorized into complexity classes based on the fastest known algorithms; as such, the class of a decision problem may change if a faster algorithm is discovered. The most important open question in complexity theory, the P versus NP problem, asks whether polynomial-time algorithms actually exist for solving NP-complete problems, and it is widely believed that this is not the case. The complexity class NP is also related to the complexity class co-NP; whether or not NP = co-NP is another outstanding question in complexity theory. The complexity class NP can be defined in terms of NTIME as the union of NTIME(n^k) over all constants k; alternatively, NP can be defined using deterministic Turing machines as verifiers. In particular, NP contains the decision versions of many interesting search problems.
Consider, for example, the subset sum problem: given a set of integers such as {−7, −3, −2, 5, 8}, does some non-empty subset of them sum to zero? Here the answer is yes, since the subset {−3, −2, 5} corresponds to the sum (−3) + (−2) + 5 = 0. To answer whether some of the integers add up to zero, we can create an algorithm which examines all possible subsets; as the number of integers that we feed into the algorithm becomes larger, the number of subsets grows exponentially, and so does the computation time. However, notice that if we are given a particular subset, we can easily check or verify whether the subset sum is zero. So if the sum is indeed zero, that particular subset is the proof or witness for the fact that the answer is yes. An algorithm that verifies whether a given subset has sum zero is called a verifier. More generally, a problem is said to be in NP if there exists a verifier V for the problem: given any instance I of problem P where the answer is yes, there must exist a certificate W such that, given the ordered pair (I, W) as input, V returns yes in polynomial time. Furthermore, if the answer to I is no, the verifier will return no with input (I, W) for all possible W.
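The asymmetry between verifying a certificate and searching for one can be sketched in Python (the function names and instance encoding are illustrative assumptions):

```python
from itertools import combinations
from collections import Counter

def verify(instance, certificate):
    """Polynomial-time verifier V: accept the pair (I, W) iff the
    certificate W is a non-empty sub-multiset of the instance I
    that sums to zero."""
    if not certificate:
        return False
    inst, cert = Counter(instance), Counter(certificate)
    if any(cert[v] > inst[v] for v in cert):
        return False  # W uses numbers not available in I
    return sum(certificate) == 0

def exhaustive_search(instance):
    """Exponential-time solver: try all 2**n - 1 non-empty subsets,
    returning a witness if one sums to zero, else None."""
    for r in range(1, len(instance) + 1):
        for subset in combinations(instance, r):
            if sum(subset) == 0:
                return subset
    return None
```

For the instance [−7, −3, −2, 5, 8], `verify` accepts the certificate (−3, −2, 5) in a single linear pass, whereas `exhaustive_search` may need to examine exponentially many subsets before finding the same witness.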

5.
Integer factorization
–
In number theory, integer factorization is the decomposition of a composite number into a product of smaller integers. If these integers are further restricted to prime numbers, the process is called prime factorization. When the numbers are sufficiently large, no efficient, non-quantum integer factorization algorithm is known; however, it has not been proven that no efficient algorithm exists. The presumed difficulty of this problem is at the heart of widely used algorithms in cryptography such as RSA. Many areas of mathematics and computer science have been brought to bear on the problem, including elliptic curves and algebraic number theory. Not all numbers of a given length are equally hard to factor: the hardest instances of these problems are semiprimes, the products of two prime numbers. Many cryptographic protocols are based on the difficulty of factoring large composite integers or a related problem, such as the RSA problem. An algorithm that efficiently factors an arbitrary integer would render RSA-based public-key cryptography insecure. By the fundamental theorem of arithmetic, every positive integer has a unique prime factorization. If the integer is prime, then it can be recognized as such in polynomial time. If it is composite, however, the theorem gives no insight into how to obtain the factors. Given a general algorithm for integer factorization, any integer can be factored down to its constituent prime factors simply by repeated application of this algorithm. The situation is more complicated with special-purpose factorization algorithms, whose benefits may not be realized as well or even at all with the factors produced during decomposition. For example, if N = 10 × p × q where p < q are very large primes, trial division will quickly produce the factors 2 and 5 but will take p divisions to find the next factor.
Among the b-bit numbers, the most difficult to factor in practice using existing algorithms are those that are products of two primes of similar size; for this reason, these are the integers used in cryptographic applications. The largest such semiprime yet factored was RSA-768, a 768-bit number with 232 decimal digits. This factorization was a collaboration of several research institutions, spanning two years and taking the equivalent of almost 2000 years of computing on a single-core 2.2 GHz AMD Opteron. Like all recent factorization records, this factorization was completed with a highly optimized implementation of the general number field sieve run on hundreds of machines. No algorithm has been published that can factor all integers in polynomial time. Neither the existence nor the non-existence of such an algorithm has been proved, but it is generally suspected that none exists, and hence that the problem is not in class P. The problem is clearly in class NP, but it has not been proved to be, or not to be, NP-complete; it is generally suspected not to be NP-complete. There are published algorithms that are faster than O((1 + ε)^b) for all positive ε, i.e., sub-exponential. The best published asymptotic running time is for the general number field sieve algorithm, which, for a b-bit number n, is O(exp(((64/9)^(1/3) + o(1)) · (ln n)^(1/3) · (ln ln n)^(2/3))). For current computers, GNFS is the best published algorithm for large n. For a quantum computer, however, Peter Shor discovered an algorithm in 1994 that solves integer factorization in polynomial time.
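Trial division, the simplest of the special-purpose methods mentioned above, can be sketched in a few lines of Python. It finds small factors quickly, but its running time is governed by the smallest remaining prime factor, which is exactly why semiprimes with two large factors are the hard case:

```python
def trial_division(n: int) -> list:
    """Factor n > 1 into primes by trial division: strip out each
    divisor d = 2, 3, 4, ... in turn; whatever remains above
    sqrt of the reduced n must itself be prime."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # the remaining factor is prime
    return factors
```

For N = 10 × 101 × 103 = 104030, the factors 2 and 5 are found on the very first candidates, while 101 is only reached after about a hundred trial divisions, mirroring the behavior described in the text.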

6.
Parity game
–
A parity game is played on a colored directed graph, where each node has been colored by a priority, one of finitely many natural numbers. Two players, 0 and 1, move a token along the edges of the graph: the owner of the node that the token falls on selects the successor node, resulting in a (possibly infinite) path, called the play. The winner of a finite play is the player whose opponent is unable to move. The winner of an infinite play is determined by the priorities appearing in the play: typically, player 0 wins an infinite play if the largest priority that occurs infinitely often in the play is even. This explains the word "parity" in the title. Parity games lie in the third level of the Borel hierarchy, and are consequently determined. Games related to parity games were used in Rabin's proof of decidability of the second-order theory of n successors. The Knaster–Tarski theorem leads to a relatively simple proof of determinacy of parity games. Moreover, parity games are history-free determined; this means that if a player has a winning strategy, then that player has a winning strategy that depends only on the current board position, and not on the history of the play. Solving a parity game played on a finite graph means deciding, for a given starting position, which of the two players has a winning strategy. It has been shown that this problem is in NP and co-NP, more precisely in UP and co-UP, and it remains an open question whether this decision problem is solvable in polynomial time. Given that parity games are determined, solving a given parity game is equivalent to solving a simple-looking graph-theoretic problem. Zielonka outlined a recursive algorithm that solves parity games. Let G = (V, V0, V1, E) be a parity game, where V0 resp. V1 are the sets of nodes belonging to player 0 resp. 1, V = V0 ∪ V1 is the set of all nodes, and E ⊆ V × V is the set of edges; in addition, every node carries a priority. Zielonka's algorithm is based on the notion of attractors. Let U ⊆ V be a set of nodes and i ∈ {0, 1} be a player.
The i-attractor of U, written Attr_i(U), is the least set of nodes containing U such that player i can force a visit to U from every node in Attr_i(U). Zielonka's algorithm is based on a recursive descent on the number of priorities. If the maximal priority is 0, it is immediate to see that player 0 wins the whole game; otherwise, let p be the largest priority occurring in the game and let i = p mod 2 be the player associated with the priority.
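The attractor computation at the heart of Zielonka's algorithm can be sketched as a backward fixpoint in Python. The game representation below (a node set, an edge set of pairs, and an owner map) is an assumed encoding for illustration, and every node is assumed to have at least one successor:

```python
def attractor(U, i, nodes, edges, owner):
    """Compute Attr_i(U): a node owned by player i joins the set if
    SOME successor is already in it (i can steer the token there),
    while a node owned by the opponent joins only if ALL of its
    successors are in it (the opponent cannot escape)."""
    succ = {v: [w for (u, w) in edges if u == v] for v in nodes}
    A = set(U)
    changed = True
    while changed:
        changed = False
        for v in nodes - A:
            s = succ[v]
            if s and ((owner[v] == i and any(w in A for w in s))
                      or (owner[v] != i and all(w in A for w in s))):
                A.add(v)
                changed = True
    return A
```

For example, in a three-node game a -> b -> c -> c with owner(a) = 0, owner(b) = 1, the 0-attractor of {c} is all of {a, b, c}; but if b is also given an escape edge b -> a, player 1 can avoid c forever and the attractor shrinks back to {c}.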