1.
Computational complexity theory
–
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and by quantifying the resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication or the number of gates in a circuit. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. Closely related fields in computer science are the analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance, and the input string for a problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; an instance of this problem is, in contrast, a concrete input. For example, consider the problem of primality testing: the instance is a number, and the solution is yes if the number is prime. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, as in a real-world computer, so mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. The choice of encoding should not matter, and this can be achieved by ensuring that different representations can be transformed into each other efficiently.
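The encoding of a graph via its adjacency matrix can be made concrete. The sketch below is an illustration only (the function name and the example graph are assumptions, not from the source): it flattens the matrix row by row into a bitstring, which is exactly the kind of binary representation the theory assumes.

```python
def graph_to_bitstring(n, edges):
    """Encode an undirected graph on vertices 0..n-1 as a bitstring
    by flattening its n x n adjacency matrix row by row."""
    matrix = [[0] * n for _ in range(n)]
    for u, v in edges:
        matrix[u][v] = 1
        matrix[v][u] = 1  # undirected: the matrix is symmetric
    return "".join(str(bit) for row in matrix for bit in row)

# A triangle on 3 vertices becomes a 9-bit string.
encoding = graph_to_bitstring(3, [(0, 1), (1, 2), (0, 2)])
print(encoding)  # "011101110"
```

Any two such reasonable encodings (adjacency matrix, adjacency list, edge list) can be converted into one another efficiently, which is why the theory can ignore the particular choice.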
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are exactly the instances whose output is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is primality testing: given a string encoding a number, decide whether that number is prime.
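The accept/reject terminology can be sketched in code. The following is a minimal illustration (the function names are assumptions, not from the source) of an algorithm deciding the language of binary strings that encode prime numbers:

```python
def is_prime(n):
    """Decide primality by testing every possible nontrivial factor."""
    if n < 2:
        return False
    for d in range(2, int(n**0.5) + 1):
        if n % d == 0:
            return False
    return True

def decide(input_string):
    """Accept the input string iff it encodes a prime in binary."""
    n = int(input_string, 2)
    return "accept" if is_prime(n) else "reject"

print(decide("111"))   # 7 is prime
print(decide("1000"))  # 8 is not
```

The set of all strings on which `decide` answers "accept" is precisely the language associated with the decision problem.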
2.
Complexity class
–
In computational complexity theory, a complexity class is a set of problems of related resource-based complexity. A typical complexity class has a definition of the form: the set of problems that can be solved by an abstract machine M using O(f(n)) of resource R. Complexity classes are concerned with the rate of growth of the requirement in resources as the input size n increases. This is an asymptotic measurement, and does not give time or space requirements in terms of seconds or bytes. The O is read as "order of": for the purposes of computational complexity theory, some of the details of the bounding function can be ignored; for instance, many possible polynomials can be grouped together as a single class. The resource in question can either be time, essentially the number of operations on an abstract machine, or space, essentially the amount of storage. The simplest complexity classes are defined by the following factors: the type of computational problem (most commonly decision problems, though complexity classes can also be defined based on function problems, counting problems, optimization problems, and promise problems), the model of computation, and the resource being bounded together with the bound itself. These last two properties are usually stated together, such as "polynomial time", "logarithmic space", or "constant depth". Many complexity classes can also be characterized in terms of the mathematical logic needed to express them. Bounding the computation time above by some concrete function f often yields complexity classes that depend on the chosen machine model: for instance, a language that can be solved in linear time on a multi-tape Turing machine may require more time on a single-tape machine. If we allow polynomial variations in running time, however, the Cobham–Edmonds thesis states that the time complexities in any two reasonable and general models of computation are polynomially related. This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time; the corresponding set of function problems is FP. The Blum axioms can be used to define complexity classes without referring to a concrete computational model.
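The claim that many polynomials collapse into one class under O-notation can be illustrated numerically. This is a hedged sketch (the particular function is an arbitrary example, not from the source): a function f is O(n^2) when f(n)/n^2 stays bounded as n grows.

```python
# Sketch: f(n) = 3n^2 + 5n + 7 is O(n^2), because the ratio f(n)/n^2
# stays bounded (it approaches the leading coefficient 3).
def f(n):
    return 3 * n**2 + 5 * n + 7

ratios = [f(n) / n**2 for n in (10, 100, 1000, 10000)]
print(ratios)  # decreasing toward 3: lower-order terms become negligible
```

This is why complexity classes record only the rate of growth: the constants and lower-order terms that vary between machine models disappear in the asymptotic view.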
Many important complexity classes can be defined by bounding the time or space used by the algorithm. For classes of decision problems defined in this manner, it turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE, by Savitch's theorem. #P is an important complexity class of counting problems, and classes like IP and AM are defined using interactive proof systems. ALL is the class of all decision problems. Many complexity classes are related to one another using the concept of a reduction.
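The notion of a reduction can be sketched with two toy problems (both problems and all names below are invented for illustration, not from the source): problem A reduces to problem B when a computable function maps instances of A to instances of B while preserving the yes/no answer.

```python
def is_even(n):
    """The "target" decision problem B."""
    return n % 2 == 0

def reduction(n):
    """Many-one reduction f from problem A ("is n odd?") to problem B:
    n is odd exactly when n + 1 is even, so f(n) = n + 1."""
    return n + 1

def is_odd(n):
    """Decide problem A by transforming the instance and asking B."""
    return is_even(reduction(n))

print(is_odd(7), is_odd(10))  # True False
```

A solver for B plus the reduction yields a solver for A; this is the sense in which reductions let one problem's difficulty bound another's.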
3.
Decision problem
–
In computability theory and computational complexity theory, a decision problem is a question in some formal system that can be posed as a yes–no question, dependent on the input values. For example, "given two numbers x and y, does x evenly divide y?" is a decision problem. The answer can be yes or no, and depends upon the values of x and y. A method for solving a decision problem, given in the form of an algorithm, is called a decision procedure for that problem. A decision procedure for the problem "given two numbers x and y, does x evenly divide y?" would give the steps for determining whether x evenly divides y. One such algorithm is long division, taught to schoolchildren: if the remainder is zero, the answer produced is yes; otherwise it is no. A decision problem which can be solved by an algorithm, such as this example, is called decidable. The field of computational complexity categorizes decidable decision problems by how difficult they are to solve; "difficult", in this sense, is described in terms of the computational resources needed by the most efficient algorithm for a certain problem. The field of computability theory, meanwhile, categorizes undecidable decision problems by Turing degree. Since a decision problem is any arbitrary yes-or-no question on a set of inputs, it is traditional to define the decision problem equivalently as the set of inputs for which the problem returns yes. These inputs can be natural numbers, but may also be values of some other kind, such as strings over the binary alphabet or over some other finite set of symbols. The subset of strings for which the problem returns yes is a formal language; alternatively, using an encoding such as Gödel numbering, any string can be encoded as a natural number, via which a decision problem can be defined as a subset of the natural numbers. A classic example of a decidable decision problem is the set of prime numbers: it is possible to decide whether a given natural number is prime by testing every possible nontrivial factor.
Although much more efficient methods of primality testing are known, the existence of any method is enough to establish decidability. A decision problem A is called decidable or effectively solvable if A is a recursive set; a problem is called partially decidable, semidecidable, solvable, or provable if A is a recursively enumerable set. Problems that are not decidable are called undecidable; the halting problem is an important undecidable decision problem, and for more examples, see the list of undecidable problems. Decision problems can be ordered according to many-one reducibility and related to feasible reductions such as polynomial-time reductions.
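The long-division decision procedure described above can be written out directly. This is a hedged sketch (the function name is an assumption): it computes the remainder by repeated subtraction, as in schoolbook division, and answers yes exactly when the remainder is zero.

```python
def divides(x, y):
    """Decision procedure for "does x evenly divide y?":
    compute the remainder by repeated subtraction and
    answer yes exactly when the remainder is zero."""
    if x == 0:
        return False  # zero divides nothing but itself; treat as "no"
    x, remainder = abs(x), abs(y)
    while remainder >= x:
        remainder -= x
    return remainder == 0

print(divides(3, 12))  # yes: 12 = 3 * 4, remainder 0
print(divides(5, 12))  # no: remainder 2
```

Because this procedure halts on every input with a definite yes or no, the problem it solves is decidable in the sense defined above.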
4.
Non-deterministic Turing machine
–
In theoretical computer science, a Turing machine is a theoretical machine that is used in thought experiments to examine the abilities and limitations of computers. In essence, a Turing machine is imagined to be a simple computer that reads and writes symbols, one at a time, on a tape. It determines what action it should perform next according to its internal state and the symbol it currently sees. An example of one of a Turing machine's rules might thus be: "If you are in state 2 and you see an 'A', change it to 'B' and move left." In a deterministic Turing machine (DTM), the set of rules prescribes at most one action to be performed for any given situation; by contrast, a non-deterministic Turing machine (NTM) may have a set of rules that prescribes more than one action for a given situation. For example, an X on the tape in state 3 might make the DTM write a Y on the tape, move the head one position to the right, and switch to state 5. The same X on the tape in state 3 might instead allow the NTM either to write a Y, move right, and switch to state 5, or to write an X, move left, and stay in state 3. Here L is a movement to the left and R a movement to the right; the difference from a standard Turing machine is that for the standard machine, the transition relation is a function. How does the NTM "know" which of these actions it should take? There are two ways of looking at it. One is to say that the machine is the "luckiest possible guesser": it always picks a transition that eventually leads to an accepting state, if such a transition exists. The other is to imagine that the machine branches into many copies, each of which follows one of the possible transitions. Whereas a DTM has a single computation path that it follows, an NTM has a computation tree; if at least one branch of the tree halts with an "accept" condition, we say that the NTM accepts the input. NTMs can compute the same results as DTMs, that is, they are capable of computing the same values. The time complexity of the two varies, however, as is discussed below. NTMs effectively include DTMs as special cases, so it is immediately clear that DTMs are not more powerful.
An NTM can be simulated by a 3-tape DTM that explores the computation tree, and 3-tape DTMs are easily simulated with a normal single-tape DTM; however, the length of an accepting computation of the simulating DTM is, in general, exponential in the length of the shortest accepting computation of the NTM. Whether this exponential slowdown is unavoidable in any simulation of NTMs by DTMs is, in essence, the P versus NP question, the most famous unresolved question in computer science. The time complexity of NTMs is thus not known to match that of DTMs, and it is a common misconception that quantum computers are NTMs. It is believed, but has not been proven, that the power of quantum computers is incomparable to that of NTMs; that is, problems likely exist that an NTM could efficiently solve that a quantum computer cannot.
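The "branching into many copies" view can be simulated deterministically by exploring all configurations breadth-first. The sketch below is an illustration under invented assumptions (the tiny machine, its states, and its rules are made up for the example, not taken from the source); it accepts if any branch of the computation tree reaches the accepting state.

```python
from collections import deque

def ntm_accepts(rules, accept_state, tape, max_steps=1000):
    """Deterministically simulate an NTM by breadth-first search over
    configurations (state, tape, head), mimicking the branching copies."""
    start = ("q0", tuple(tape), 0)
    queue = deque([(start, 0)])
    while queue:
        (state, cells, head), steps = queue.popleft()
        if state == accept_state:
            return True  # at least one branch accepted
        if steps >= max_steps:
            continue
        symbol = cells[head] if head < len(cells) else "_"
        # Follow every applicable rule: each is one branch of the tree.
        for new_symbol, move, new_state in rules.get((state, symbol), []):
            new_cells = list(cells) + ["_"] * (head + 1 - len(cells))
            new_cells[head] = new_symbol
            new_head = max(0, head + (1 if move == "R" else -1))
            queue.append(((new_state, tuple(new_cells), new_head), steps + 1))
    return False  # no branch accepted

# Toy nondeterministic rules: in q0 on "1", either keep scanning right
# or guess that the string ends here and check for the blank "_".
rules = {
    ("q0", "1"): [("1", "R", "q0"), ("1", "R", "q_check")],
    ("q_check", "_"): [("_", "R", "q_accept")],
}
print(ntm_accepts(rules, "q_accept", list("111")))  # accepts "111"
```

Note how the queue plays the role of the copies: a branch that applies no rule simply dies, and the whole input is rejected only if every branch dies.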
5.
Logarithm
–
In mathematics, the logarithm is the inverse operation to exponentiation. That means the logarithm of a number is the exponent to which another fixed number, the base, must be raised to produce that number. In simple cases the logarithm counts repeated factors in a multiplication. For example, the base-10 logarithm of 1000 is 3, because 1000 = 10 × 10 × 10. The logarithm of x to base b, denoted log_b(x), is the unique real number y such that b^y = x. For example, log2(64) = 6, as 2^6 = 64. The logarithm to base 10 is called the common logarithm and has many applications in science and engineering. The natural logarithm has the number e as its base; its use is widespread in mathematics and physics. The binary logarithm uses base 2 and is used in computer science. Logarithms were introduced by John Napier in the early 17th century as a means to simplify calculations, and they were rapidly adopted by navigators, scientists, engineers, and others to perform computations more easily, using slide rules and logarithm tables. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century. Logarithmic scales reduce wide-ranging quantities to tiny scopes; for example, the decibel is a unit quantifying signal power log-ratios and amplitude log-ratios. In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae and in measurements of the complexity of algorithms; they describe musical intervals, appear in formulas counting prime numbers, inform some models in psychophysics, and can aid in forensic accounting. In the same way as the logarithm reverses exponentiation, the complex logarithm is the inverse function of the exponential function applied to complex numbers. The discrete logarithm is another variant; it has uses in public-key cryptography. The idea of logarithms is to reverse the operation of exponentiation, that is, raising a number to a power. For example, the third power of 2 is 8, because 8 is the product of three factors of 2: 2^3 = 2 × 2 × 2 = 8.
It follows that the logarithm of 8 with respect to base 2 is 3: log2(8) = 3. The third power of some number b is the product of three factors equal to b. More generally, raising b to the n-th power, where n is a natural number, is done by multiplying n factors equal to b. The n-th power of b is written b^n, so that b^n = b × b × ⋯ × b (n factors). Exponentiation may be extended to b^y, where b is a positive number and the exponent y is any real number; for example, b^−1 is the reciprocal of b, that is, 1/b. The logarithm of a positive real number x with respect to base b, a positive real number not equal to 1, is the exponent by which b must be raised to yield x.
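These definitions translate directly into code. A small sketch using Python's standard math module, checking the examples from the text:

```python
import math

# The logarithm of x to base b is the exponent y with b**y == x.
print(math.log(1000, 10))  # base-10 (common) logarithm of 1000: 3
print(math.log2(64))       # binary logarithm: 2**6 == 64, so log2(64) == 6
print(math.log(math.e))    # natural logarithm of e: 1

# Inverse relationship: b ** log_b(x) recovers x.
b, x = 2.0, 8.0
print(b ** math.log(x, b))  # recovers 8 (up to floating-point rounding)
```

Note that `math.log(x, b)` computes the base-b value via a quotient of natural logarithms, so results like log10(1000) may differ from 3 by floating-point rounding.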
6.
Turing machine
–
Despite the model's simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm's logic. The machine operates on an infinite memory tape divided into discrete cells; the machine positions its head over a cell and reads the symbol there. The Turing machine was invented in 1936 by Alan Turing, who called it an "a-machine" (automatic machine), and with it Turing proved fundamental limitations on the power of mechanical computation. Turing completeness is the ability for a system of instructions to simulate a Turing machine. A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine capable of enumerating some arbitrary subset of valid strings of an alphabet. Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program; this is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing. The Turing machine is capable of processing an unrestricted grammar, which implies that it is capable of robustly evaluating first-order logic in an infinite number of ways; this is famously demonstrated through lambda calculus. A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine. The Church–Turing thesis states that Turing machines indeed capture the informal notion of effective methods in logic and mathematics, and studying their abstract properties yields many insights into computer science and complexity theory. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol; furthermore, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine.
Any symbol on the tape may therefore eventually have an innings. The Turing machine mathematically models a machine that mechanically operates on a tape; on this tape are symbols, which the machine can read and write, one at a time. In the original article, Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly. In the formal 7-tuple definition, the machine halts if δ is not defined on the current state and the current tape symbol; q0 ∈ Q is the initial state, and F ⊆ Q is the set of final or accepting states. The initial tape contents is said to be accepted by M if the machine eventually halts in a state from F. Anything that operates according to these specifications is a Turing machine. The 7-tuple for the 3-state busy beaver looks like this: Q = {A, B, C, HALT}; Γ = {0, 1}; b = 0 (the blank symbol); Σ = {1}; q0 = A; F = {HALT}; δ = see the state table below. Initially all tape cells are marked with 0. In the words of van Emde Boas (p. 6), the set-theoretical object provides only partial information on how the machine will behave and what its computations will look like. For instance, there will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
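The formal 7-tuple can be executed by a very small simulator. The sketch below (the helper name and dict-based tape are implementation assumptions) runs the standard 3-state busy beaver rule table and reports what it leaves on the tape:

```python
def run_turing_machine(delta, start, final_states):
    """Simulate a single-tape Turing machine; the tape is a dict
    from cell index to symbol, and unwritten cells read as blank (0)."""
    tape, head, state, steps = {}, 0, start, 0
    while state not in final_states:
        symbol = tape.get(head, 0)
        write, move, state = delta[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return tape, steps

# State table for the 3-state busy beaver: (write, move, next state).
delta = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "C"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "B"),
    ("C", 0): (1, "L", "B"), ("C", 1): (1, "R", "HALT"),
}
tape, steps = run_turing_machine(delta, "A", {"HALT"})
print(sum(tape.values()), steps)  # six 1s on the tape after 13 transitions
```

This also illustrates van Emde Boas's point: the 7-tuple alone says nothing about how the tape or symbols are represented, and the dict-of-cells choice here is just one of many workable implementations.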
7.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing, and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem. In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: "Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris", which translates as: "Algorism is the art by which at present we use those Indian figures." The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals. An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. Humans can do something equally useful in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability.
That notion is central for explaining how formal systems come into being, starting from a set of axioms. In logic, the time that an algorithm requires to complete cannot be measured, as it is not apparently related to our customary physical dimensions. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term. Algorithms are essential to the way computers process data; thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute (Gurevich): Turing's informal argument in favor of his thesis justifies a stronger thesis, and according to Savage, an algorithm is a computational process defined by a Turing machine. Typically, when an algorithm is associated with processing information, data can be read from an input source and written to an output device. Stored data are regarded as part of the internal state of the entity performing the algorithm; in practice, the state is stored in one or more data structures. For some such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be systematically dealt with, case by case.
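The idea of explicit instructions for determining the nth member of an enumerably infinite set can be sketched for the set of primes (a hedged illustration; the function name is an assumption, not from the source):

```python
def nth_member(n):
    """Explicit instructions for the nth prime (n = 1, 2, 3, ...):
    test candidates in order, by trial division, until n primes
    have been found, then report the last one."""
    count, candidate = 0, 1
    while count < n:
        candidate += 1
        if all(candidate % d != 0 for d in range(2, int(candidate**0.5) + 1)):
            count += 1
    return candidate

print([nth_member(n) for n in range(1, 6)])  # [2, 3, 5, 7, 11]
```

The procedure halts for every finite n, which is exactly what makes the instructions an algorithm rather than a mere description of the infinite set.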