1.
Turing machine
–
Despite the model's simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm's logic. The machine operates on an infinite memory tape divided into discrete cells; it positions its head over a cell and reads the symbol there. The Turing machine was invented in 1936 by Alan Turing, who called it an a-machine (automatic machine). Turing machines prove fundamental limitations on the power of mechanical computation. Turing completeness is the ability of a system of instructions to simulate a Turing machine. A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine capable of enumerating some arbitrary subset of valid strings of an alphabet. Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program; this is because the halting problem is unsolvable, which has major implications for the theoretical limits of computing.

The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways. This is famously demonstrated through lambda calculus. A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine. The Church–Turing thesis states that Turing machines indeed capture the notion of effective methods in logic and mathematics. Studying their abstract properties yields many insights into computer science and complexity theory.

At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behavior of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually have an innings. The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time, using a tape head. In the original article, Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly.

Formally, a Turing machine can be defined as a 7-tuple M = ⟨Q, Γ, b, Σ, δ, q₀, F⟩, where Q is a finite set of states, Γ is the tape alphabet, b ∈ Γ is the blank symbol, Σ ⊆ Γ ∖ {b} is the set of input symbols, and δ is the transition function. If δ is not defined on the current state and the current tape symbol, the machine halts. q₀ ∈ Q is the initial state, and F ⊆ Q is the set of final or accepting states. The initial tape contents is said to be accepted by M if the machine eventually halts in a state from F. Anything that operates according to these specifications is a Turing machine.

The 7-tuple for the 3-state busy beaver looks like this: Q = {A, B, C, HALT}; Γ = {0, 1}; b = 0 (the blank symbol); Σ = {1}; q₀ = A (the initial state); F = {HALT}; δ = see the state table below. Initially all tape cells are marked with 0. In the words of van Emde Boas (p. 6), the set-theoretical object provides only partial information on how the machine will behave and what its computations will look like. For instance, there will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
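To make the 7-tuple concrete, here is a minimal sketch in Python of a Turing machine simulator running a 3-state busy beaver. The transition table is the standard one for this machine (an assumption here, since the excerpt defers the state table); the dictionary-backed tape stands in for the infinite tape of blank cells.

```python
from collections import defaultdict

# delta: (state, scanned symbol) -> (symbol to write, head move, next state).
# Standard 3-state busy beaver transitions; the halting state is HALT, b = 0.
DELTA = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "C"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "B"),
    ("C", 0): (1, -1, "B"), ("C", 1): (1, +1, "HALT"),
}

def run(delta, state="A", halt="HALT"):
    tape = defaultdict(int)          # unbounded tape; blank cells read as 0
    head = steps = 0
    while state != halt:             # halt on entering the accepting state
        write, move, state = delta[(state, tape[head])]
        tape[head] = write           # alter the scanned symbol
        head += move                 # move the head one cell left or right
        steps += 1
    return steps, sum(tape.values())

print(run(DELTA))   # (13, 6): halts with six 1s written on the tape
```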
2.
Logarithm
–
In mathematics, the logarithm is the inverse operation to exponentiation. That means the logarithm of a number is the exponent to which another fixed number, the base, must be raised to produce that number. In simple cases the logarithm counts factors in multiplication. For example, the base-10 logarithm of 1000 is 3, as 1000 = 10³. The logarithm of x to base b, denoted log_b(x), is the unique real number y such that bʸ = x. For example, log₂(64) = 6, as 64 = 2⁶. The logarithm to base 10 is called the common logarithm and has many applications in science and engineering. The natural logarithm has the number e as its base; its use is widespread in mathematics and physics. The binary logarithm uses base 2 and is used in computer science.

Logarithms were introduced by John Napier in the early 17th century as a means to simplify calculations, and they were rapidly adopted by navigators, scientists, engineers, and others to perform computations more easily, using slide rules and logarithm tables. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century.

Logarithmic scales reduce wide-ranging quantities to smaller scopes; for example, the decibel is a unit quantifying signal power log-ratios and amplitude log-ratios, and in chemistry, pH is a logarithmic measure of the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae and in measurements of the complexity of algorithms; they describe musical intervals, appear in formulas counting prime numbers, inform some models in psychophysics, and can aid in forensic accounting. In the same way as the logarithm reverses exponentiation, the complex logarithm is the inverse function of the exponential function applied to complex numbers. The discrete logarithm is another variant; it has uses in public-key cryptography.

The idea of logarithms is to reverse the operation of exponentiation, that is, raising a number to a power. For example, the third power of 2 is 8, because 8 is the product of three factors of 2: 2³ = 2 × 2 × 2 = 8. It follows that the logarithm of 8 with respect to base 2 is 3. Likewise, the third power of some number b is the product of three factors equal to b. More generally, raising b to the n-th power, where n is a natural number, is done by multiplying n factors equal to b. The n-th power of b is written bⁿ, so that bⁿ = b × b × ⋯ × b (n factors). Exponentiation may be extended to bʸ, where b is a positive number and the exponent y is any real number; for example, b⁻¹ is the reciprocal of b, that is, 1/b. The logarithm of a positive real number x with respect to base b (a positive real number not equal to 1) is the exponent to which b must be raised to yield x.
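A short Python sketch of these identities using the standard math module; log_base is a hypothetical helper added here to illustrate the change-of-base rule.

```python
import math

# The logarithm inverts exponentiation: if b**y == x, then log_b(x) == y.
print(math.log10(1000))       # 3.0, the common-logarithm example above
print(math.log2(64))          # 6.0, since 64 == 2**6
print(math.log(math.e ** 2))  # 2.0, the natural logarithm (base e)

def log_base(x: float, b: float) -> float:
    """Hypothetical helper: log_b(x) via change of base, ln(x) / ln(b)."""
    return math.log(x) / math.log(b)

print(log_base(8, 2))         # ≈ 3.0, since 2**3 == 8
```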
3.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing, and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem.

In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: "Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris," which translates as: "Algorism is the art by which at present we use those Indian figures, which number two times five." The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice (Talibus Indorum), or Hindu numerals.

An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. But humans can do something equally useful in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. (An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers.) The concept of algorithm is also used to define the notion of decidability, a notion that is central for explaining how formal systems come into being starting from a set of axioms. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to our customary physical dimension. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term.

Algorithms are essential to the way computers process data. Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute. According to Gurevich, Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine. According to Savage, an algorithm is a computational process defined by a Turing machine.

Typically, when an algorithm is associated with processing information, data can be read from an input source and written to an output device. Stored data are regarded as part of the state of the entity performing the algorithm; in practice, the state is stored in one or more data structures. For some such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be dealt with, case by case.
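As a concrete instance of "a set of rules that precisely defines a sequence of operations" and that stops eventually, here is Euclid's gcd algorithm in Python; it is a standard textbook example, not one drawn from this excerpt.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, well-defined sequence of steps that
    always halts, since the second argument strictly decreases."""
    while b != 0:
        a, b = b, a % b   # replace (a, b) with (b, a mod b)
    return a

print(gcd(252, 105))  # 21
```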
4.
Computational complexity theory
–
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication or the number of gates in a circuit. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. Closely related fields in computer science are analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources.

A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; in contrast, an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing: the instance is a number, and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances.

When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, as in a real-world computer. Mathematical objects other than bitstrings must be suitably encoded: for example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. One tries to keep the discussion independent of the choice of encoding; this can be achieved by ensuring that different representations can be transformed into each other efficiently.

Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is the following: given an arbitrary graph, decide whether the graph is connected or not.
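The primality-testing problem mentioned above can be phrased as a decider in a few lines of Python; this trial-division sketch is only meant to show the instance/solution split, not an efficient algorithm.

```python
def is_prime(n: int) -> bool:
    """Decider for the primality decision problem: the instance is the
    number n, the solution is the yes/no answer (trial division)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False    # reject: n has a nontrivial divisor
        d += 1
    return True             # accept: n is prime

# Each input number is one problem instance; the output is its solution.
print([n for n in range(2, 20) if is_prime(n)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```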
5.
Big O notation
–
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others, collectively called Bachmann–Landau notation. In computer science, big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows. Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω, and ω, to describe other kinds of bounds on asymptotic growth rates. Big O notation is also used in many other fields to provide similar estimates.

Let f and g be two functions defined on some subset of the real numbers. One writes f(x) = O(g(x)) as x → ∞ if and only if there exist a positive real number M and a real number x₀ such that |f(x)| ≤ M |g(x)| for all x ≥ x₀. In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated. If f is a product of several factors, any constants (factors that do not depend on x) can be omitted.

For example, let f(x) = 6x⁴ − 2x³ + 5, and suppose we wish to simplify this function, using O notation, to describe its growth rate as x approaches infinity. This function is the sum of three terms: 6x⁴, −2x³, and 5. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of x, namely 6x⁴. Now one may apply this rule: 6x⁴ is a product of 6 and x⁴, in which the first factor does not depend on x. Omitting this factor results in the simplified form x⁴. Thus, we say that f(x) is a "big O" of x⁴; mathematically, we can write f(x) = O(x⁴). One may confirm this calculation using the formal definition: let f(x) = 6x⁴ − 2x³ + 5 and g(x) = x⁴. Applying the formal definition from above, the statement that f(x) = O(x⁴) is equivalent to its expansion, |f(x)| ≤ M |x⁴| for all x ≥ x₀, for some choice of x₀ and M. To prove this, let x₀ = 1 and M = 13; then, for all x ≥ 1, |6x⁴ − 2x³ + 5| ≤ 6x⁴ + 2x³ + 5 ≤ 6x⁴ + 2x⁴ + 5x⁴ = 13x⁴.

Big O notation has two main areas of application. In mathematics, it is used to describe how closely a finite series approximates a given function. In computer science, it is useful in the analysis of algorithms. In both applications, the function g appearing within the O(·) is typically chosen to be as simple as possible, omitting constant factors and lower order terms.
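The witnesses x₀ = 1 and M = 13 from the worked example can be spot-checked numerically in Python; this is only a sanity check on a sampled range, not a proof.

```python
# Check |6x**4 - 2x**3 + 5| <= 13 * |x**4| for sampled x >= x0 = 1,
# the expansion of f(x) = O(x**4) with the M = 13 chosen in the text.
def f(x):
    return 6 * x**4 - 2 * x**3 + 5

assert all(abs(f(x)) <= 13 * abs(x**4) for x in range(1, 10_000))
print("bound holds for all sampled x in [1, 10000)")
```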
6.
Polynomial
–
In mathematics, a polynomial is an expression consisting of variables and coefficients that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents. An example of a polynomial of a single indeterminate x is x² − 4x + 7; an example in three variables is x³ + 2xyz² − yz + 1. Polynomials appear in a variety of areas of mathematics and science. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, central concepts in algebra. The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or "name". It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-. The word polynomial was first used in the 17th century.

The x occurring in a polynomial is commonly called either a variable or an indeterminate. When the polynomial is considered as an expression, x is a symbol which does not have any value; it is thus correct to call it an indeterminate. However, when one considers the function defined by the polynomial, then x represents the argument of the function. Many authors use these two words interchangeably. It is a convention to use uppercase letters for the indeterminates. A polynomial may be evaluated over any domain where addition and multiplication are defined; in particular, when the argument is the indeterminate x itself, the image of x under this function is the polynomial P itself. This equality allows writing "let P(x) be a polynomial" as a shorthand for "let P be a polynomial in the indeterminate x".

A polynomial is an expression that can be built from constants and indeterminates. The word indeterminate means that x represents no particular value, although any value may be substituted for it. The mapping that associates the result of substitution to the substituted value is a function. A polynomial in a single indeterminate x can be written a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ, expressed concisely in summation notation as Σ aₖxᵏ for k = 0 to n. That is, each term consists of the product of a number (called the coefficient of the term) and a finite number of indeterminates. Because x = x¹, the degree of an indeterminate without a written exponent is one. A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial; the degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial, 0, is generally treated as not defined.
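A polynomial in one indeterminate is determined by its coefficient list, and evaluation uses only the operations the definition allows. Here is a minimal Python sketch using Horner's rule (a standard technique, not from this excerpt).

```python
# A polynomial stored as its coefficient list [a0, a1, ..., an]. Horner's
# rule evaluates it with additions and multiplications only, matching the
# operations permitted in the definition of a polynomial.
def poly_eval(coeffs, x):
    result = 0
    for a in reversed(coeffs):    # (...(an*x + a_{n-1})*x + ...)*x + a0
        result = result * x + a
    return result

# P(x) = x**2 - 4*x + 7, the single-indeterminate example from the text.
print(poly_eval([7, -4, 1], 2))   # 3, since 4 - 8 + 7 = 3
```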
7.
NP (complexity)
–
In computational complexity theory, NP is a complexity class used to describe certain types of decision problems. Informally, NP is the set of all decision problems for which the instances where the answer is "yes" have efficiently verifiable proofs; more precisely, these proofs have to be verifiable by deterministic computations that can be performed in polynomial time. Equivalently, NP can be defined as the set of decision problems solvable in polynomial time by a theoretical non-deterministic Turing machine. This second definition is the basis for the abbreviation NP, which stands for "nondeterministic polynomial time". However, the verifier-based definition tends to be more intuitive and practical in common applications compared to the formal machine definition.

A method for solving a problem is given in the form of an algorithm. In the above definitions for NP, polynomial time refers to the number of machine operations needed by an algorithm relative to the size of the problem; polynomial time is therefore a measure of the efficiency of an algorithm. Decision problems are commonly categorized into complexity classes based on the fastest known machine algorithms; as such, the classification of a decision problem may change if a faster algorithm is discovered. The most important open question in complexity theory, the P versus NP problem, asks whether polynomial time algorithms actually exist for solving NP-complete problems; it is widely believed that this is not the case. The complexity class NP is also related to the complexity class co-NP, and whether or not NP = co-NP is another outstanding question in complexity theory.

The complexity class NP can be defined in terms of NTIME as follows: NP = ⋃ₖ NTIME(nᵏ). Alternatively, NP can be defined using deterministic Turing machines as verifiers. In particular, the decision versions of many interesting search problems are in NP. For example, does some non-empty subset of the integers {−7, −3, −2, 5, 8} add up to zero? In this example, the answer is yes, since the subset {−3, −2, 5} corresponds to the sum (−3) + (−2) + 5 = 0. The task of deciding whether such a subset with sum zero exists is called the subset sum problem. To answer whether some of the integers add to zero, we can create an algorithm which obtains all the possible subsets. As the number of integers that we feed into the algorithm becomes larger, the number of subsets grows exponentially, and so does the computation time. However, notice that if we are given a particular subset, we can easily check or verify whether the subset sum is zero. So if the sum is indeed zero, that particular subset is the proof or witness for the fact that the answer is yes. An algorithm that verifies whether a given subset has sum zero is called a verifier.

More generally, a problem is said to be in NP if there exists a verifier V for the problem. Given any instance I of problem P where the answer is yes, there must exist a certificate W such that, given the ordered pair (I, W) as input, V returns yes in polynomial time. Furthermore, if the answer to I is no, the verifier will return no with input (I, W) for all possible W.
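The verifier/search asymmetry is easy to see in code. The sketch below, assuming the instance {−7, −3, −2, 5, 8} used above, checks a certificate in polynomial time but finds one only by exponential brute force.

```python
from itertools import combinations

def verify(instance, witness):
    """Polynomial-time verifier V: given the pair (I, W), check that the
    certificate W is a non-empty subset of I summing to zero."""
    pool = list(instance)
    return (len(witness) > 0
            and all(w in pool for w in witness)
            and sum(witness) == 0)

I = [-7, -3, -2, 5, 8]
print(verify(I, (-3, -2, 5)))          # True: W is a witness for "yes"

# Finding a witness by brute force examines exponentially many subsets:
found = [s for r in range(1, len(I) + 1)
         for s in combinations(I, r) if sum(s) == 0]
print(found)                           # [(-3, -2, 5)]
```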
8.
Non-deterministic Turing machine
–
In theoretical computer science, a Turing machine is a theoretical machine that is used in thought experiments to examine the abilities and limitations of computers. In essence, a Turing machine is imagined to be a simple computer that reads and writes symbols, one at a time, on an endless tape, by strictly following a set of rules. It determines what action it should perform next according to its internal state and the symbol it currently sees. An example of one of a Turing machine's rules might thus be: "If you are in state 2 and you see an A, change it to B and move left."

In a deterministic Turing machine (DTM), the set of rules prescribes at most one action to be performed for any given situation. By contrast, a non-deterministic Turing machine (NTM) may have a set of rules that prescribes more than one action for a given situation. For example, an X on the tape in state 3 might make the DTM write a Y on the tape, move the head one position to the right, and switch to state 5. By contrast, an X on the tape in state 3 might allow the NTM to write a Y, move right, and switch to state 5, or to write an X, move left, and stay in state 3. In the transition rules, L is a movement to the left and R is a movement to the right. The difference from a standard Turing machine is that for those, the transition relation is a function.

How does the NTM "know" which of these actions it should take? There are two ways of looking at it. One is to say that the machine is the "luckiest possible guesser": it always picks a transition that eventually leads to an accepting state. The other is to imagine that the machine branches into many copies, each of which follows one of the possible transitions. Whereas a DTM has a single computation path that it follows, an NTM has a computation tree; if at least one branch of the tree halts with an accept condition, we say that the NTM accepts the input.

NTMs can compute the same results as DTMs; that is, they are capable of computing the same values. The time complexity of the two models varies, however, as is discussed below. NTMs effectively include DTMs as special cases, so it is clear that DTMs are not more powerful. In the other direction, an NTM can be simulated by a DTM, for instance by a multi-tape DTM that explores the NTM's computation tree, and 3-tape DTMs are easily simulated with a normal single-tape DTM. However, the length of an accepting computation of the simulating DTM is, in general, exponential in the length of the shortest accepting computation of the NTM. This exponential blow-up is considered to be an intrinsic property of simulations of NTMs by DTMs, and it is closely related to the most famous unresolved question in computer science, the P versus NP problem: it is not known whether the time complexity of NTMs matches that of DTMs. It is a common misconception that quantum computers are NTMs. It is believed, but has not been proven, that the power of quantum computers is incomparable to that of NTMs; that is, problems likely exist that an NTM could efficiently solve that a quantum computer cannot.
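The "machine branches into many copies" view can be mimicked directly. The sketch below uses a toy machine of my own (not one from the article): it explores every branch of a non-deterministic transition relation breadth-first and accepts if any branch does.

```python
from collections import deque

# Hypothetical non-deterministic transition relation:
# (state, symbol) -> set of (write, move, next_state) choices, echoing the
# example above: in state s3 reading X, the NTM may write Y and move right
# into s5, or keep the X, move left, and stay in s3.
DELTA = {
    ("s3", "X"): {("Y", +1, "s5"), ("X", -1, "s3")},
    ("s5", "_"): {("_", +1, "accept")},
}

def ntm_accepts(tape, start="s3", head=1, max_steps=10):
    frontier = deque([(start, head, tuple(tape))])
    for _ in range(max_steps):                  # bounded breadth-first search
        nxt_frontier = deque()
        while frontier:
            state, pos, cells = frontier.popleft()
            if state == "accept":               # one accepting branch suffices
                return True
            sym = cells[pos] if 0 <= pos < len(cells) else "_"
            for write, move, nxt in DELTA.get((state, sym), ()):
                new = list(cells)
                if 0 <= pos < len(new):
                    new[pos] = write
                nxt_frontier.append((nxt, pos + move, tuple(new)))
        frontier = nxt_frontier
    return False                                # no branch accepted in time

print(ntm_accepts(["_", "X", "_"]))   # True, via the branch s3 -> s5 -> accept
```

Note that the frontier can double at each step, which is exactly the exponential cost of the deterministic simulation described above.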
9.
BPP (complexity)
–
BPP is one of the largest practical classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine, since a deterministic machine is a special case of a probabilistic machine. Alternatively, BPP can be defined using only deterministic Turing machines; for some applications this definition is preferable since it does not mention probabilistic Turing machines.

In practice, an error probability of 1⁄3 might not be acceptable; however, the choice of 1⁄3 in the definition is arbitrary. It can be any constant between 0 and 1⁄2, and the set BPP will be unchanged. This makes it possible to create a highly accurate algorithm by merely running the algorithm several times and taking a majority vote of the answers. For example, if one defined the class with the restriction that the algorithm can be wrong with probability at most 1⁄2¹⁰⁰, this would result in the same class of problems.

Besides the problems in P, which are obviously in BPP, many problems were known to be in BPP but not known to be in P. The number of such problems is decreasing, and it is conjectured that P = BPP. For a long time, one of the most famous problems known to be in BPP but not known to be in P was the problem of determining whether a given polynomial is identically zero; in other words, is there an assignment of values to the variables such that when a nonzero polynomial is evaluated on these values, the result is nonzero? It suffices to choose each variable's value uniformly at random from a subset of at least d values, where d is the degree of the polynomial, to achieve bounded error probability.

If the access to randomness is removed from the definition of BPP, we get the complexity class P. If, in the definition of the class, we replace the ordinary Turing machine with a quantum computer, we get the class BQP. Adding postselection to BPP, or allowing computation paths to have different lengths, gives the class BPPpath; BPPpath is known to contain NP, and it is contained in its quantum counterpart PostBQP.

A Monte Carlo algorithm is a randomized algorithm which is likely to be correct; problems in the class BPP have Monte Carlo algorithms with polynomially bounded running time. This is compared to a Las Vegas algorithm, which is a randomized algorithm that either outputs the correct answer or outputs "fail" with low probability. Las Vegas algorithms with polynomially bounded running times are used to define the class ZPP. Alternatively, ZPP contains probabilistic algorithms that are always correct and have expected polynomial running time; this is weaker than saying it is a polynomial time algorithm, since it may run for super-polynomial time on some executions.

It is known that BPP is closed under complement; that is, BPP = co-BPP. BPP is low for itself, meaning that a BPP machine with the power to solve BPP problems instantly is not any more powerful than the machine without this extra power. The relationship between BPP and NP is unknown: it is not known whether BPP is a subset of NP, NP is a subset of BPP, or neither. If NP is contained in BPP, which is considered unlikely since it would imply practical solutions for NP-complete problems, then NP = RP. It is known that RP is a subset of BPP, and BPP is a subset of PP.
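The majority-vote amplification described above is easy to simulate. In this Python sketch, noisy_decider is a hypothetical stand-in for a BPP algorithm that is correct with probability 2⁄3; it is not a real hard problem.

```python
import random

# Sketch of BPP-style error amplification: run a 2/3-correct randomized
# decider many times and take a majority vote.
def noisy_decider(x: int, error: float = 1/3) -> bool:
    truth = (x % 2 == 0)          # hypothetical stand-in predicate
    return truth if random.random() > error else not truth

def amplified(x: int, runs: int = 101) -> bool:
    yes_votes = sum(noisy_decider(x) for _ in range(runs))
    return yes_votes > runs // 2  # wrong only if most of the runs erred

# By a Chernoff bound the failure probability shrinks exponentially in runs,
# which is why any constant error below 1/2 defines the same class.
print(amplified(42), amplified(7))   # almost certainly: True False
```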
10.
IP (complexity)
–
In computational complexity theory, the class IP is the class of problems solvable by an interactive proof system. The concept of an interactive proof system was first introduced by Shafi Goldwasser, Silvio Micali, and Charles Rackoff. Such a system consists of two machines: a prover P, which presents a proof that a given string w is a member of a language A, and a verifier V, which checks that the presented proof is correct. These two machines exchange a number, p, of messages, and once the interaction is completed, the verifier must decide whether or not w is in A, being correct with high probability. Protocols come in two types, depending on whether the verifier's coin tosses are private or public: at most two additional rounds of interaction are required to replicate the effect of a private-coin protocol with a public-coin one, and the opposite inclusion is straightforward, because the verifier can always send to the prover the results of their private coin tosses, which proves that the two types of protocols are essentially equivalent.

A celebrated result is that IP = PSPACE. The proof can be divided in two parts: we show that IP ⊆ PSPACE and PSPACE ⊆ IP.

In order to demonstrate that IP ⊆ PSPACE, we present a simulation of an interactive proof system by a polynomial-space machine. For each stream of messages M_j (the first j messages exchanged), we define a value N_{M_j}. When the next message comes from the verifier, N_{M_j} is the average of the values N_{M_{j+1}}, weighted by the probability that the verifier sent message m_{j+1}; when the next message comes from the prover, N_{M_j} is the maximum of the values N_{M_{j+1}}. Take M₀ to be the empty message sequence. Here we will show that N_{M₀} can be computed in polynomial space, and that N_{M₀} = Pr[V accepts w].

First, to compute N_{M₀}, an algorithm can recursively calculate the values N_{M_j} for every j and every message sequence M_j. Since the depth of the recursion is p, only polynomial space is necessary. The second requirement is that N_{M₀} = Pr[V accepts w], the value needed to determine whether w is in A. We use induction to prove this as follows: we must show that for every 0 ≤ j ≤ p and every M_j, N_{M_j} = Pr[V accepts w starting at M_j], and we will do this using induction on j. The base case is to prove the claim for j = p; then we will use induction to go from p down to 0.

The base case of j = p is fairly simple. Since m_p is either accept or reject: if m_p is accept, N_{M_p} is defined to be 1, and Pr[V accepts w starting at M_p] = 1 since the message stream indicates acceptance; thus the claim is true. If m_p is reject, the argument is very similar.

For the inductive hypothesis, we assume that for some j + 1 ≤ p and any message sequence M_{j+1}, N_{M_{j+1}} = Pr[V accepts w starting at M_{j+1}], and then prove the hypothesis for j and any message sequence M_j. If j is even, m_{j+1} is a message from V to P. By the definition of N_{M_j},

N_{M_j} = Σ_{m_{j+1}} Pr_r[m_{j+1}] · N_{M_{j+1}}.

Then, by the inductive hypothesis, we can say this is equal to Σ_{m_{j+1}} Pr_r[m_{j+1}] · Pr[V accepts w starting at M_{j+1}]. Finally, by definition, we can see that this is equal to Pr[V accepts w starting at M_j].

If j is odd, m_{j+1} is a message from P to V. By definition,

N_{M_j} = max_{m_{j+1}} N_{M_{j+1}}.

Then, by the inductive hypothesis, this equals max_{m_{j+1}} Pr[V accepts w starting at M_{j+1}]. This is equal to Pr[V accepts w starting at M_j], since max_{m_{j+1}} Pr[V accepts w starting at M_{j+1}] ≤ Pr[V accepts w starting at M_j], because the prover could send the message m_{j+1} that maximizes the expression on the left-hand side, and max_{m_{j+1}} Pr[V accepts w starting at M_{j+1}] ≥ Pr[V accepts w starting at M_j], since the prover cannot do any better than send that same message. Thus, the claim holds whether j is even or odd, and the proof that IP ⊆ PSPACE is complete.
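The even/odd recursion can be seen in miniature. The Python sketch below uses a hypothetical game tree (uniform verifier messages, nothing from the article's protocol): it maximizes over prover moves and averages over verifier moves, and its recursion depth equals the number of messages, echoing the polynomial-space bound.

```python
# Toy sketch of the recursion behind N_{M_j}: maximize over the prover's
# messages, average over the verifier's (uniform here, for simplicity).
def N(node, prover_turn):
    if isinstance(node, int):            # leaf: 1 = accept, 0 = reject
        return node
    if prover_turn:                      # odd j: prover picks its best message
        return max(N(child, False) for child in node)
    # even j: average over the verifier's equally likely messages
    return sum(N(child, True) for child in node) / len(node)

# Root: the verifier sends one of two messages with equal probability;
# then the prover replies, steering toward acceptance where it can.
tree = [[1, 0], [0, 0]]
print(N(tree, prover_turn=False))   # 0.5: the prover can force acceptance
                                    # only on the first verifier branch
```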