1.
Computational complexity theory
–
A problem is regarded as inherently difficult if its solution requires significant resources, whatever algorithm is used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and by quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication or the number of gates in a circuit. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. Closely related fields in computer science are the analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; in contrast, an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing: the instance is a number, and the solution is yes if the number is prime. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, as in a real-world computer, so mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. The choice of encoding should not affect which problems are considered tractable, and this can be achieved by ensuring that different representations can be transformed into each other efficiently.
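The encodings mentioned above can be sketched concretely. This is a minimal illustration, not from the original text: one possible way to turn an integer and a graph into strings over the binary alphabet (the row-major adjacency-matrix convention is just one illustrative choice).

```python
def encode_int(n: int) -> str:
    """Encode a non-negative integer in binary notation."""
    return bin(n)[2:]

def encode_graph(adj: list[list[int]]) -> str:
    """Encode a graph as the row-major bits of its adjacency matrix."""
    return "".join(str(bit) for row in adj for bit in row)

# A triangle on 3 vertices:
triangle = [[0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]]

print(encode_int(13))          # 1101
print(encode_graph(triangle))  # 011101110
```

Any such encoding is acceptable as long as it can be converted to and from other reasonable encodings in polynomial time.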
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are exactly the instances whose output is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is primality testing: given a number, decide whether it is prime.
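The accept/reject behaviour described above can be sketched for the primality example. Trial division is used here purely for simplicity; it is a correct decider but not an asymptotically efficient one.

```python
def is_prime(n: int) -> bool:
    """Decide membership in the language of prime numbers by trial division."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def decide(instance: int) -> str:
    """An algorithm accepts instances in the language and rejects the rest."""
    return "accept" if is_prime(instance) else "reject"

print(decide(7))  # accept
print(decide(9))  # reject
```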

2.
Boolean satisfiability problem
–
In computer science, the Boolean satisfiability problem (SAT) is the problem of determining whether there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable; if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can choose the values a = TRUE and b = FALSE; in contrast, "a AND NOT a" is unsatisfiable. SAT is one of the first problems that was proven to be NP-complete. This means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. SAT instances arise in many application areas, e.g. artificial intelligence, circuit design, and automatic theorem proving. A propositional logic formula, also called a Boolean expression, is built from variables, the operators AND, OR, NOT, and parentheses. A formula is said to be satisfiable if it can be made TRUE by assigning appropriate logical values to its variables. The Boolean satisfiability problem is, given a formula, to decide whether it is satisfiable. This decision problem is of central importance in various areas of computer science, including theoretical computer science, complexity theory, algorithmics, and cryptography. There are several special cases of the Boolean satisfiability problem in which the formulas are required to have a particular structure. A literal is either a variable (a positive literal) or the negation of a variable (a negative literal); a clause is a disjunction of literals. A clause is called a Horn clause if it contains at most one positive literal. A formula is in conjunctive normal form (CNF) if it is a conjunction of clauses.
For example, the CNF formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is satisfiable: choosing x1 = FALSE, x2 = FALSE, and x3 arbitrarily, it becomes (FALSE ∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x3) ∧ ¬FALSE, which evaluates to (FALSE ∨ TRUE) ∧ (TRUE ∨ FALSE ∨ x3) ∧ TRUE, and in turn to TRUE ∧ TRUE ∧ TRUE. In contrast, the CNF formula a ∧ ¬a, consisting of two clauses of one literal each, is unsatisfiable, since for a = TRUE and a = FALSE it evaluates to TRUE ∧ ¬TRUE and FALSE ∧ ¬FALSE respectively, i.e. to FALSE in both cases. Different sets of allowed Boolean operators lead to different problem versions. As an example, an expression R applied to three literals is a generalized clause, and a conjunction of such generalized clauses, R ∧ R ∧ R, is a generalized conjunctive normal form. This form is used below, with R being the operator that is TRUE just if exactly one of its arguments is. Using the laws of Boolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive normal form, which may, however, be exponentially longer. For example, transforming the formula (x1 ∧ y1) ∨ (x2 ∧ y2) ∨ … ∨ (xn ∧ yn) into conjunctive normal form yields a conjunction of all clauses (z1 ∨ z2 ∨ … ∨ zn) with each zi chosen from {xi, yi}; while the former is a disjunction of n conjunctions of 2 variables, the latter consists of 2^n clauses of n variables.
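The satisfiable and unsatisfiable examples above can be checked mechanically by trying all assignments. This is a naive exponential-time sketch, not a practical SAT solver; the literal convention (a positive integer i for a variable, a negative integer for its negation) is borrowed from the common DIMACS style and is not part of the original text.

```python
from itertools import product

def satisfiable(clauses: list[list[int]], n_vars: int) -> bool:
    """Brute-force CNF satisfiability: try all 2^n truth assignments."""
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        # A clause is satisfied if some literal agrees with the assignment;
        # the formula is satisfied if every clause is.
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# a AND NOT b: satisfiable (a = TRUE, b = FALSE)
print(satisfiable([[1], [-2]], 2))  # True
# a AND NOT a: unsatisfiable
print(satisfiable([[1], [-1]], 1))  # False
```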

3.
Cook's theorem
–
In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the problem of determining whether a Boolean formula is satisfiable. The theorem is named after Stephen Cook and Leonid Levin. An important consequence is that if a deterministic polynomial-time algorithm for Boolean satisfiability existed, then every problem in NP could be solved in polynomial time; crucially, the same follows for any NP-complete problem. The question of whether such an algorithm exists is called the P versus NP problem. The concept of NP-completeness was developed in the late 1960s and early 1970s in parallel by researchers in the US and the USSR. In the US in 1971, Stephen Cook published his paper "The complexity of theorem proving procedures" in the conference proceedings of the newly founded ACM Symposium on Theory of Computing. Richard Karp's subsequent paper, "Reducibility among combinatorial problems", generated renewed interest in Cook's paper by providing a list of 21 NP-complete problems; Cook and Karp each received a Turing Award for this work. Baker, Gill, and Solovay later showed that the question relativizes: there exists an oracle A such that P^A ≠ NP^A; an equivalent result was also published in the USSR. Levin's paper, "Universal search problems", was published in 1973, although it was mentioned in talks earlier. Levin's approach was slightly different from Cook's and Karp's in that he considered search problems, which require finding solutions rather than simply determining existence. He provided 6 such NP-complete search problems, or universal problems; additionally, he found for each of these problems an algorithm that solves it in optimal time. A decision problem is in NP if it can be solved by a nondeterministic algorithm in polynomial time. An instance of the Boolean satisfiability problem is a Boolean expression that combines Boolean variables using Boolean operators; an expression is satisfiable if there is some assignment of truth values to the variables that makes the entire expression true.
Given any decision problem in NP, construct a nondeterministic machine that solves it in polynomial time. Then for each input to that machine, build a Boolean expression which says that the input is passed to the machine, the machine runs correctly, the machine halts, and the machine answers yes. This proof is based on the one given by Garey and Johnson. There are two parts to proving that the Boolean satisfiability problem is NP-complete. One is to show that SAT is an NP problem; the other is to show that every NP problem can be reduced to an instance of a SAT problem by a polynomial-time many-one reduction. SAT is in NP because any assignment of Boolean values to Boolean variables that is claimed to satisfy the given expression can be verified in polynomial time by a deterministic Turing machine. For the second part, suppose that a given problem in NP can be solved by a nondeterministic Turing machine M, and suppose further that M accepts or rejects an instance of the problem in time p(n), where n is the size of the instance. For each input I, we specify a Boolean expression which is satisfiable if and only if the machine M accepts I.
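The membership half of the argument above (SAT is in NP) rests on the fact that a claimed satisfying assignment, i.e. a certificate, can be checked in time linear in the formula size. A sketch of such a verifier, using an illustrative literal convention (+i for a variable, -i for its negation) not taken from the original text:

```python
def verify(clauses: list[list[int]], assignment: dict[int, bool]) -> bool:
    """Check in one pass over the clauses whether `assignment` satisfies
    the CNF formula: every clause must contain some agreeing literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (NOT x1 OR x2)
cnf = [[1, -2], [-1, 2]]
print(verify(cnf, {1: True, 2: True}))   # True
print(verify(cnf, {1: True, 2: False}))  # False
```

The hard half of the theorem, reducing an arbitrary NP machine's run to a formula, is what the p(n)-time tableau construction in the text accomplishes.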

4.
3SAT
–
3sat is a public, advertising-free television network in Central Europe. The programming is in German and is aimed primarily at audiences in Germany, Austria, and Switzerland. 3sat was established to broadcast cultural programmes, originally by satellite. The network was founded as a cooperative by broadcasters including Germany's ZDF and Austria's ORF. 3sat began broadcasting on 1 December 1984. ZDF leads the cooperative, though decisions are reached through consensus of the cooperative's partners. In 1990, DFF, the television broadcaster of the German Democratic Republic, became a member of 3sat. Eventually it was decided to keep the original 3sat name. DFF's membership of 3sat was dissolved on 31 December 1991, as DFF itself ceased to exist in accordance with Germany's Unification Treaty. On 1 December 1993, ARD joined 3sat as a cooperative member; this followed ARD's creation of its own satellite channel, Eins Plus. 3sat broadcasts programming 24 hours a day, 7 days a week. 3sat is available on the European Astra satellites at 19.2° east, on cable television, and in Austria and Germany on digital terrestrial television. Since 2003, it can be viewed by 40 million households in Germany, Austria and Switzerland and 85.5 million households in Europe.

5.
Binary Decision Diagram
–
In computer science, a binary decision diagram (BDD) or branching program is a data structure that is used to represent a Boolean function. On a more abstract level, BDDs can be considered as a compressed representation of sets or relations. Unlike other compressed representations, operations are performed directly on the compressed representation, without decompression. Other data structures used to represent a Boolean function include negation normal form and propositional directed acyclic graphs. A Boolean function can be represented as a rooted, directed, acyclic graph consisting of decision nodes and terminal nodes; there are two types of terminal nodes, called 0-terminal and 1-terminal. Each decision node N is labeled by a Boolean variable V_N and has two child nodes, called low child and high child. The edge from node N to its low child represents an assignment of V_N to 0, and the edge to its high child an assignment of V_N to 1. Such a BDD is called ordered if different variables appear in the same order on all paths from the root. A BDD is said to be reduced if the following two rules have been applied to its graph: merge any isomorphic subgraphs, and eliminate any node whose two children are isomorphic. In popular usage, the term BDD almost always refers to a Reduced Ordered Binary Decision Diagram (ROBDD). The advantage of an ROBDD is that it is canonical for a particular function and variable order; this property makes it useful in functional equivalence checking and other operations like functional technology mapping. A path from the root node to the 1-terminal represents a variable assignment for which the represented Boolean function is true: as the path descends to a low (respectively high) child from a node, that node's variable is assigned to 0 (respectively 1). The left figure below shows a binary decision tree and a truth table, each representing the function f. In the tree, the value of the function can be determined for a given variable assignment by following a path down the graph to a terminal. In the figures below, dotted lines represent edges to a low child and solid lines edges to a high child. Therefore, to evaluate f at x1 = 0, x2 = 1, x3 = 1, begin at x1, traverse down the dotted line to x2, then down two solid lines.
This leads to the 1-terminal, which is the value of f at that assignment. The binary decision tree of the left figure can be transformed into a binary decision diagram by maximally reducing it according to the two reduction rules; the resulting BDD is shown in the right figure. The basic idea from which the data structure was created is the Shannon expansion: a switching function is split into two sub-functions (cofactors) by assigning one variable. If such a sub-function is considered as a sub-tree, it can be represented by a binary decision tree. Binary decision diagrams were introduced by Lee, and further studied and made known by Akers. Applying these two concepts (ordering and reduction) results in an efficient data structure and efficient algorithms for the representation of sets and relations. By extending the sharing to several BDDs, i.e. letting one sub-graph be used by several BDDs, one obtains a shared reduced ordered BDD; the notion of a BDD is now generally used to refer to that particular data structure.
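The Shannon expansion and the two reduction rules above can be sketched together. This is a minimal illustrative implementation, not a production BDD package: the unique table realizes the "merge isomorphic subgraphs" rule via hash-consing, and the low == high test realizes the elimination rule. All names here are assumptions for the sketch.

```python
unique: dict = {}  # (var, low, high) -> node id; ids 0 and 1 are the terminals

def mk(var: int, low: int, high: int) -> int:
    """Return the node for (var, low, high), applying both reduction rules."""
    if low == high:                        # elimination rule
        return low
    key = (var, low, high)
    if key not in unique:                  # merge rule via hash-consing
        unique[key] = len(unique) + 2
    return unique[key]

def build(f, n: int, var: int = 1, env=None) -> int:
    """Build the ROBDD of f over variables 1..n by Shannon expansion."""
    env = {} if env is None else env
    if var > n:
        return 1 if f(env) else 0
    low = build(f, n, var + 1, {**env, var: False})   # cofactor f with var = 0
    high = build(f, n, var + 1, {**env, var: True})   # cofactor f with var = 1
    return mk(var, low, high)

# Canonicity: x1 AND x2 and its double-negated form yield the same node.
a = build(lambda e: e[1] and e[2], 2)
b = build(lambda e: not (not e[1] or not e[2]), 2)
print(a == b)  # True
```

Because equivalent functions map to the identical node, equivalence checking on ROBDDs reduces to a pointer comparison, which is the property the text highlights.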

6.
Theoretical computer science
–
It is not easy to circumscribe the theoretical areas precisely. Work in this field is often distinguished by its emphasis on mathematical technique and rigor. Despite this broad scope, the "theory people" in computer science self-identify as different from the "applied people". Some characterize themselves as doing the science underlying the field of computing; other theory-applied people suggest that it is impossible to separate theory and application. This means that the theory people regularly use experimental science done in less-theoretical areas such as software system research. It also means that there is more cooperation than mutually exclusive competition between theory and application. Early developments in mathematical logic and the formalization of computation led to the study of logic and computability. Information theory was added to the field with Claude Shannon's 1948 mathematical theory of communication. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain; with mounting biological data supporting this hypothesis with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin proved that there exist practically relevant problems that are NP-complete, a landmark result in computational complexity theory. With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction; in other words, one could compute functions on multiple states simultaneously. Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed. An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning. An algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function.
Starting from an initial state and initial input, the instructions describe a computation that proceeds through a series of successive states, eventually producing output and terminating at a final state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. A data structure is a particular way of organizing data in a computer so that it can be used efficiently. Different kinds of data structures are suited to different kinds of applications. For example, databases use B-tree indexes for small percentages of data retrieval, and compilers and databases use dynamic hash tables as lookup tables. Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and secondary memory. A problem is regarded as inherently difficult if its solution requires significant resources, whatever algorithm is used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and by quantifying the amount of resources needed to solve them, such as time and storage.
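The claim above that data-structure choice drives algorithmic efficiency can be seen in miniature: membership testing in a Python list is a linear scan, while a hash-based set answers in expected constant time. The sizes and timings below are illustrative only.

```python
import timeit

# The same 100,000 items stored two ways.
items = list(range(100_000))
as_set = set(items)

# Looking up the worst-case element: O(n) scan vs expected O(1) hash lookup.
t_list = timeit.timeit(lambda: 99_999 in items, number=100)
t_set = timeit.timeit(lambda: 99_999 in as_set, number=100)
print(f"list: {t_list:.4f}s  set: {t_set:.4f}s")  # the set is far faster
```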