1.
3sat
–
3sat is a public, advertising-free television network in Central Europe. The programming is in German and is aimed primarily at audiences in Germany, Austria and Switzerland. 3sat was established to broadcast cultural programmes, originally by satellite. The network was founded by Germany's ZDF and Austria's ORF, and began broadcasting on 1 December 1984. ZDF leads the cooperative, though decisions are reached through consensus of the cooperative's partners. In 1990, DFF, the television broadcaster of the German Democratic Republic, became a member of 3sat; it was eventually decided to keep the original 3sat name. DFF's membership of 3sat was dissolved on 31 December 1991, as DFF itself ceased to exist in accordance with Germany's Unification Treaty. On 1 December 1993, ARD joined 3sat as a cooperative member; this followed ARD's creation of its own satellite channel, Eins Plus. 3sat broadcasts programming 24 hours a day, 7 days a week. It is available on the European Astra satellites at 19.2° east, on cable television, and in Austria and Germany on digital terrestrial television. Since 2003, it can be viewed by 40 million households in Germany, Austria and Switzerland and 85.5 million households in Europe.
2.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternative, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges of implementing computation; human–computer interaction, for example, considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity, and algorithms for performing computations have likewise existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist. Charles Babbage started developing his programmable Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched-card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as "Babbage's dream come true".
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as an academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953. The first computer science program in the United States was formed at Purdue University in 1962. As practical computers became available, many applications of computing have become distinct areas of study in their own right, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers. Still, working with these machines was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has since seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
3.
Boolean logic
–
In mathematics and mathematical logic, Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0 respectively. It is thus a formalism for describing logical relations in the way that ordinary algebra describes numeric relations. Boolean algebra was introduced by George Boole in his first book, The Mathematical Analysis of Logic; according to Huntington, the term "Boolean algebra" was first suggested by Sheffer in 1913. Boolean algebra has been fundamental in the development of digital electronics, and it is also used in set theory and statistics. Boole's algebra predated the modern developments in abstract algebra and mathematical logic. In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, Huntington and others; in fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets. Shannon already had at his disposal the abstract mathematical apparatus, and thus he cast his switching algebra as the two-element Boolean algebra. In circuit engineering settings today there is little need to consider other Boolean algebras, so "switching algebra" and "Boolean algebra" are often used interchangeably. Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra; thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. Boolean algebra is not sufficient to capture logic formulas using quantifiers. The closely related model of computation known as a Boolean circuit relates time complexity to circuit complexity. Whereas in elementary algebra expressions denote mainly numbers, in Boolean algebra they denote the truth values false and true; these values are represented with the bits 0 and 1.
Addition and multiplication then play the Boolean roles of XOR (exclusive or) and AND respectively. Boolean algebra also deals with functions which have their values in the set {0, 1}; a sequence of bits is a commonly used example of such a function. Another common example is the subsets of a set E: to a subset F of E is associated the indicator function that takes the value 1 on F and 0 outside F. The most general example is the elements of a Boolean algebra. As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables. The basic operations of Boolean calculus are as follows: AND, denoted x∧y, satisfies x∧y = 1 if x = y = 1 and x∧y = 0 otherwise. OR, denoted x∨y, satisfies x∨y = 0 if x = y = 0 and x∨y = 1 otherwise. NOT, denoted ¬x, satisfies ¬x = 0 if x = 1 and ¬x = 1 if x = 0. Alternatively, the values of x∧y, x∨y, and ¬x can be expressed by tabulating their values with truth tables. The first secondary operation, x → y, or Cxy, is called material implication. If x is true, then the value of x → y is taken to be that of y; if x is false, then the value of x → y is taken to be true.
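The basic operations above can be tabulated directly. The following is a minimal Python sketch; the function names are illustrative, not drawn from any library:

```python
# Basic operations of the two-element Boolean algebra {0, 1},
# as defined in the text; names are illustrative.

def AND(x, y):      # x ∧ y = 1 iff x = y = 1
    return x & y

def OR(x, y):       # x ∨ y = 0 iff x = y = 0
    return x | y

def NOT(x):         # ¬x
    return 1 - x

def IMPLIES(x, y):  # material implication x → y, equivalent to ¬x ∨ y
    return OR(NOT(x), y)

def XOR(x, y):      # addition mod 2 plays the role of exclusive or
    return (x + y) % 2

# Tabulate the operations, as a truth table would.
print("x y | x∧y x∨y ¬x x→y x⊕y")
for x in (0, 1):
    for y in (0, 1):
        print(x, y, "|", AND(x, y), OR(x, y), NOT(x), IMPLIES(x, y), XOR(x, y))
```

Running the loop reproduces the truth tables described in the text, including the convention that x → y is true whenever x is false.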
4.
Formula (mathematical logic)
–
In mathematical logic, a well-formed formula, abbreviated wff, often simply formula, is a finite sequence of symbols from a given alphabet that is part of a formal language. A formal language can be identified with the set of formulas in the language. A formula is a syntactic object that can be given a semantic meaning by means of an interpretation. A key use of formulas is in propositional logic and predicate logics such as first-order logic. In those contexts, a formula is a string of symbols φ for which it makes sense to ask "is φ true?", once any free variables in φ have been instantiated. In formal logic, proofs can be represented by sequences of formulas with certain properties. Although the term "formula" may be used for written marks, it is more precisely understood as the sequence of symbols being expressed, with the marks being a token instance of the formula; thus the same formula may be written more than once. Formulas are given meanings by interpretations: for example, in a propositional formula, each propositional variable may be interpreted as a concrete proposition, so that the overall formula expresses a relationship between these propositions. A formula need not be interpreted, however, to be considered solely as a formula. The formulas of propositional calculus, also called propositional formulas, are expressions such as (A ∧ (B ∨ C)). Their definition begins with the choice of a set V of propositional variables. The alphabet consists of the letters in V along with the symbols for the propositional connectives and parentheses; the formulas will be certain expressions over this alphabet. The formulas are inductively defined as follows: each propositional variable is, on its own, a formula; if φ is a formula, then ¬φ is a formula; and if φ and ψ are formulas and • is any binary connective, then (φ • ψ) is a formula. Here • could be the usual operators ∨, ∧, →, or ↔.
The sequence of symbols "p))" is not a formula, because it does not conform to the grammar. A complex formula may be difficult to read, owing to, for example, the proliferation of parentheses. To alleviate this, precedence rules are assumed among the operators, for example, assuming the precedence (from most binding to least binding) 1. ¬, 2. →, 3. ∧, 4. ∨. Then a fully parenthesized formula may be abbreviated as p → q ∧ r → s ∨ ¬q ∧ ¬s. This is, however, only a notational convention. If the precedence were assumed instead to be left-right associative, in the following order 1. ¬, 2. ∧, 3. ∨, 4. →, then the same abbreviated formula would be parsed differently. The definition of a formula in first-order logic is relative to the signature of the theory at hand. This signature specifies the constant symbols, relation symbols, and function symbols of the theory at hand. The definition of a formula comes in several parts. First, the set of terms is defined recursively; terms, informally, are expressions that represent objects from the domain of discourse.
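The inductive definition above can be sketched as a small well-formedness check. The nested-tuple representation and the variable set used here are illustrative assumptions, not part of the original text:

```python
# Minimal sketch of the inductive definition of propositional formulas:
# a formula is a variable, a negation ¬φ, or (φ • ψ) for a binary
# connective •. Formulas are represented as nested tuples.

VARIABLES = {"p", "q", "r", "s"}          # the chosen set V
CONNECTIVES = {"∨", "∧", "→", "↔"}        # the usual binary connectives

def is_formula(e):
    if isinstance(e, str):                     # base case: a propositional variable
        return e in VARIABLES
    if isinstance(e, tuple) and len(e) == 2:   # ¬φ
        return e[0] == "¬" and is_formula(e[1])
    if isinstance(e, tuple) and len(e) == 3:   # (φ • ψ)
        left, op, right = e
        return op in CONNECTIVES and is_formula(left) and is_formula(right)
    return False

# (p → q) ∧ ¬s is a formula; a symbol outside V is not.
print(is_formula((("p", "→", "q"), "∧", ("¬", "s"))))  # True
print(is_formula("t"))                                  # False
```

Because the definition is inductive, the checker simply recurses through the structure, mirroring the three clauses of the grammar.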
5.
Contradiction
–
In classical logic, a contradiction consists of a logical incompatibility between two or more propositions. It occurs when the propositions, taken together, yield two conclusions which form the logical, usually opposite, inversions of each other. Illustrating a general tendency in applied logic, Aristotle's law of noncontradiction states that "One cannot say of something that it is and that it is not in the same respect and at the same time." By extension, outside of classical logic, one can speak of contradictions between actions when one presumes that their motives contradict each other. By creation of a paradox, Plato's Euthydemus dialogue demonstrates the need for the notion of contradiction: in the ensuing dialogue, Dionysodorus denies the existence of contradiction, all the while that Socrates is contradicting him. "I in my astonishment said: What do you mean Dionysodorus?" The dictum is that there is no such thing as a falsehood; a man must either say what is true or say nothing. Indeed, Dionysodorus agrees that there is no such thing as false opinion and no such thing as ignorance, and demands of Socrates to "Refute me." Socrates responds, "But how can I refute you, if, as you say, to tell a falsehood is impossible?" Note: the symbol ⊥ represents an arbitrary contradiction, with the dual tee symbol ⊤ used to denote an arbitrary tautology. Contradiction is sometimes symbolized by "Opq", and tautology by "Vpq". The turnstile symbol ⊢ is often read as "yields" or "proves". In classical logic, particularly in propositional and first-order logic, a proposition φ is a contradiction if and only if φ ⊢ ⊥. Since for contradictory φ it is true that ⊢ φ → ψ for all ψ, one may prove any proposition from a set of axioms which contains contradictions; this is called the principle of explosion, or ex falso quodlibet. In a complete logic, a formula is contradictory if and only if it is unsatisfiable. Therefore, a proof that ¬φ ⊢ ⊥ also proves that φ is true; the use of this fact constitutes the technique of proof by contradiction, which mathematicians use extensively.
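The technique of proof by contradiction described here can be illustrated with the classic argument that √2 is irrational, sketched briefly (the example is a standard one, not drawn from the text):

```latex
% Proof by contradiction (\neg\varphi \vdash \bot, hence \varphi)
% that \sqrt{2} is irrational.
Assume, for contradiction, that $\sqrt{2} = p/q$ with integers $p, q$
sharing no common factor. Then $p^2 = 2q^2$, so $p^2$ is even, hence
$p$ is even; write $p = 2r$. Substituting gives $4r^2 = 2q^2$, so
$q^2 = 2r^2$ and $q$ is even as well. Thus $p$ and $q$ share the
factor $2$, contradicting the assumption. Since the negation of the
claim yields $\bot$, the claim holds: $\sqrt{2}$ is irrational.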
This applies only in a logic using the excluded middle A ∨ ¬A as an axiom. In mathematics, the symbol used to represent a contradiction within a proof varies. A consistency proof requires an axiomatic system and a demonstration that it is not the case that both the formula p and its negation ~p can be derived in the system. Post's solution to the problem is described in the demonstration "An Example of a Successful Absolute Proof of Consistency" offered by Ernest Nagel and James R. Newman. They too observe a problem with respect to the notion of contradiction with its usual truth values of "truth" and "falsity". They observe that the property of being a tautology has been defined in notions of truth and falsity, yet these notions obviously involve a reference to something outside the formula calculus. Therefore, the procedure mentioned in the text in effect offers an interpretation of the calculus. This being so, the authors have not done what they promised. Proofs of consistency which are based on models, and which argue from the truth of axioms to their consistency, merely shift the problem. Given some primitive formulas, such as PM's primitives S1 ∨ S2, what then will be the definition of tautologous?
6.
NP-complete
–
In computational complexity theory, a decision problem is NP-complete when it is both in NP and NP-hard. The set of NP-complete problems is often denoted by NP-C or NPC; the abbreviation NP refers to "nondeterministic polynomial time". Although a solution to an NP-complete problem can be verified quickly, there is no known way to find a solution quickly: the time required to solve the problem using any currently known algorithm increases very quickly as the size of the problem grows. As a consequence, determining whether or not it is possible to solve these problems quickly, called the P versus NP problem, is one of the principal unsolved problems in computer science today. In the meantime, NP-complete problems are often addressed by using heuristic methods and approximation algorithms. A problem p in NP is NP-complete if every other problem in NP can be transformed into p in polynomial time. NP-complete problems are studied because the ability to quickly verify solutions to a problem seems to correlate with the ability to quickly solve that problem. It is not known whether every problem in NP can be quickly solved; because of this, it is often said that NP-complete problems are harder or more difficult than NP problems in general. A decision problem C is NP-complete if (1) C is in NP, and (2) every problem in NP is reducible to C in polynomial time. C can be shown to be in NP by demonstrating that a candidate solution to C can be verified in polynomial time. Note that a problem satisfying condition 2 alone is said to be NP-hard. A consequence of this definition is that if we had a polynomial-time algorithm for C, we could solve all problems in NP in polynomial time. The concept of NP-completeness was introduced in 1971, though the term "NP-complete" was introduced later. At the 1971 STOC conference there was a fierce debate among computer scientists about whether NP-complete problems could be solved in polynomial time on a deterministic Turing machine. This is known as the question of whether P = NP. Nobody has yet been able to determine conclusively whether NP-complete problems are in fact solvable in polynomial time, making this one of the great unsolved problems of mathematics.
The Clay Mathematics Institute is offering a US$1 million reward to anyone who has a formal proof that P = NP or that P ≠ NP. The Cook–Levin theorem states that the Boolean satisfiability problem is NP-complete. In 1972, Richard Karp proved that several other problems were also NP-complete; thus there is a whole class of NP-complete problems. For more details, refer to Introduction to the Design and Analysis of Algorithms by Anany Levitin. An interesting example is the graph isomorphism problem, the graph theory problem of determining whether a graph isomorphism exists between two graphs. Two graphs are isomorphic if one can be transformed into the other simply by renaming vertices. Consider these two problems: Graph Isomorphism: is graph G1 isomorphic to graph G2? Subgraph Isomorphism: is graph G1 isomorphic to a subgraph of graph G2? The Subgraph Isomorphism problem is NP-complete, while the graph isomorphism problem is suspected to be neither in P nor NP-complete; this is an example of a problem that is thought to be hard, but is not thought to be NP-complete.
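The asymmetry behind NP-completeness, namely that verifying a candidate solution is fast while the only obvious way to find one tries exponentially many candidates, can be sketched with the Boolean satisfiability problem of the Cook–Levin theorem. The CNF formula below is an illustrative example, not drawn from the text:

```python
# Verification vs. search for Boolean satisfiability (SAT).
from itertools import product

# A CNF formula as a list of clauses; each literal is (variable, polarity).
# This encodes (x1 ∨ ¬x2) ∧ (x2 ∨ x3) ∧ (¬x1 ∨ ¬x3).
cnf = [[("x1", True), ("x2", False)],
       [("x2", True), ("x3", True)],
       [("x1", False), ("x3", False)]]

def verify(cnf, assignment):
    """Polynomial-time check: every clause has a satisfied literal."""
    return all(any(assignment[v] == pol for v, pol in clause)
               for clause in cnf)

def brute_force_solve(cnf, variables):
    """Exponential-time search over all 2^n truth assignments."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if verify(cnf, assignment):
            return assignment
    return None

print(brute_force_solve(cnf, ["x1", "x2", "x3"]))
```

`verify` runs in time linear in the formula size, while `brute_force_solve` examines up to 2^n assignments; no essentially faster general algorithm is known, which is exactly the open P versus NP question.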
7.
NP (complexity)
–
In computational complexity theory, NP is a complexity class used to describe certain types of decision problems. Informally, NP is the set of all decision problems for which the instances where the answer is "yes" have efficiently verifiable proofs. More precisely, these proofs have to be verifiable by deterministic computations that can be performed in polynomial time. Equivalently, NP can be defined as the set of decision problems solvable in polynomial time by a theoretical non-deterministic Turing machine. This second definition is the basis for the abbreviation NP, which stands for "nondeterministic polynomial time"; however, the verifier-based definition tends to be more intuitive and practical in common applications compared to the formal machine definition. A method for solving a decision problem is given in the form of an algorithm. In the above definitions for NP, "polynomial time" refers to the number of machine operations needed by an algorithm relative to the size of the problem; polynomial time is therefore a measure of the efficiency of an algorithm. Decision problems are commonly categorized into complexity classes based on the fastest known algorithms; as such, the classification of a decision problem may change if a faster algorithm is discovered. The most important open question in complexity theory, the P versus NP problem, asks whether polynomial-time algorithms actually exist for solving NP-complete problems; it is widely believed that this is not the case. The complexity class NP is also related to the complexity class co-NP; whether or not NP = co-NP is another outstanding question in complexity theory. The complexity class NP can be defined in terms of NTIME; alternatively, NP can be defined using deterministic Turing machines as verifiers. In particular, NP contains the decision versions of many interesting search problems.
In this example, the answer is yes, since a subset of the given integers sums to zero; the task of deciding whether such a subset with sum zero exists is called the subset sum problem. To answer whether some of the integers add to zero, we can create an algorithm that examines all possible subsets. As the number of integers that we feed into the algorithm becomes larger, the number of subsets grows exponentially, and so does the computation time. However, notice that if we are given a particular subset, we can easily check or verify whether the subset sum is zero. So if the sum is indeed zero, that particular subset is the proof or witness for the fact that the answer is "yes". An algorithm that verifies whether a given subset has sum zero is called a verifier. More generally, a problem is said to be in NP if there exists a verifier V for the problem: given any instance I of problem P where the answer is "yes", there must exist a certificate W such that, given the ordered pair (I, W) as input, V returns "yes" in polynomial time. Furthermore, if the answer to I is "no", the verifier will return "no" with input (I, W) for all possible W.
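The verifier definition above can be sketched concretely for subset sum. The instance numbers below are illustrative, not those of the original (elided) example:

```python
# Verifier-based view of NP, using the subset sum problem.
from itertools import combinations

def verify_subset_sum(instance, witness):
    """Polynomial-time verifier: does the witness certify a 'yes' answer?"""
    pool = list(instance)
    for x in witness:            # witness must be drawn from the instance
        if x in pool:
            pool.remove(x)
        else:
            return False
    return len(witness) > 0 and sum(witness) == 0

def brute_force(instance):
    """Exponential-time search over all 2^n - 1 nonempty subsets."""
    for r in range(1, len(instance) + 1):
        for subset in combinations(instance, r):
            if sum(subset) == 0:
                return list(subset)
    return None

nums = [-7, -3, -2, 5, 8]
w = brute_force(nums)
print(w, verify_subset_sum(nums, w))   # [-3, -2, 5] True
```

The witness makes the "yes" answer easy to check, while finding it required searching the subsets; that gap between checking and finding is what membership in NP captures.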
8.
P versus NP problem
–
The P versus NP problem is a major unsolved problem in computer science. Informally speaking, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer. The underlying issues were first discussed in the 1950s, in letters from John Nash to the National Security Agency and from Kurt Gödel to John von Neumann. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute to carry a US$1,000,000 prize for the first correct solution. The general class of questions for which some algorithm can provide an answer in polynomial time is called "class P" or just "P". For some questions, there is no known way to find an answer quickly. The class of questions for which an answer can be verified in polynomial time is called NP. Consider the subset sum problem, an example of a problem that is easy to verify but whose answer may be difficult to compute: given a set of integers, does some nonempty subset of them sum to 0? An answer of "yes" can be verified with a few additions, but no algorithm is known that finds such a subset in polynomial time. An answer to the P = NP question would determine whether problems that can be verified in polynomial time, like the subset sum problem, can also be solved in polynomial time. Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved and of the difficulty of proof. In 1955, mathematician John Nash wrote a letter to the NSA; if the conjecture in it were proved, it would imply what we today would call P ≠ NP, since a proposed key can easily be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. In complexity analysis, the most common resources are time and space; in such analysis, a model of the computer for which time must be analyzed is required.
Typically such models assume that the computer is deterministic and sequential. Arguably the biggest open question in theoretical computer science concerns the relationship between those two classes: is P equal to NP? A poll of researchers on the question conducted in 2002 was repeated 10 years later, in 2012. To attack the P = NP question, the concept of NP-completeness is very useful: NP-complete problems are a set of problems to each of which any other NP problem can be reduced in polynomial time, and whose solution may still be verified in polynomial time. That is, any NP problem can be transformed into any of the NP-complete problems. Informally, an NP-complete problem is an NP problem that is at least as tough as any other problem in NP.
9.
Artificial intelligence
–
Artificial intelligence is intelligence exhibited by machines. Colloquially, the term "artificial intelligence" is applied when a machine mimics cognitive functions that humans associate with other human minds, such as learning and problem solving. As machines become increasingly capable, mental facilities once thought to require intelligence are removed from the definition; for instance, optical character recognition is no longer perceived as an example of artificial intelligence, having become a routine technology. AI research is divided into subfields that focus on specific problems, on specific approaches, on the use of a particular tool, or towards satisfying particular applications. The central problems of AI research include reasoning, knowledge, planning, learning, natural language processing, and perception; general intelligence is among the field's long-term goals. Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, logic, and methods based on probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience, and other disciplines. The field was founded on the claim that human intelligence can be so precisely described that a machine can be made to simulate it. Some people also consider AI a danger to humanity if it progresses unabatedly. While thought-capable artificial beings appeared as storytelling devices in antiquity, the idea of actually trying to build a machine to perform useful reasoning may have begun with Ramon Llull. With his calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating machine. Since the 19th century, artificial beings have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R.
The study of mechanical or formal reasoning began with philosophers and mathematicians in antiquity. In the 19th century, George Boole refined those ideas into propositional logic, and Gottlob Frege developed a notational system for mechanical reasoning. Around the 1940s, Alan Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis. Along with concurrent discoveries in neurology, information theory and cybernetics, it led researchers to consider building an electronic brain; the first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons". The field of AI research was born at a conference at Dartmouth College in 1956. Attendees Allen Newell, Herbert Simon, John McCarthy, Marvin Minsky and Arthur Samuel became the founders and leaders of AI research. At the conference, Newell and Simon, together with programmer J. C. Shaw, presented the first true artificial intelligence program, the Logic Theorist. This spurred tremendous research in the domain: computers were winning at checkers, solving word problems in algebra, and proving logical theorems. By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense, and laboratories had been established around the world. AI's founders were optimistic about the future: Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do", and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved". They failed to recognize the difficulty of some of the remaining tasks.
10.
Automatic theorem proving
–
Automated theorem proving is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science. While the roots of formalised logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic. Frege's Begriffsschrift introduced both a complete propositional calculus and what is essentially modern predicate logic. His Foundations of Arithmetic, published in 1884, expressed mathematics in formal logic. This approach was continued by Russell and Whitehead in their influential Principia Mathematica, first published 1910–1913, and with a revised second edition in 1927. Russell and Whitehead thought they could derive all mathematical truth using axioms and inference rules of formal logic. Shortly after World War II, the first general purpose computers became available. In 1954, Martin Davis programmed Presburger's algorithm for a JOHNNIAC vacuum tube computer at the Princeton Institute for Advanced Study; according to Davis, "Its great triumph was to prove that the sum of two even numbers is even." More ambitious was the Logic Theory Machine, a deduction system for the propositional logic of the Principia Mathematica, developed by Allen Newell, Herbert A. Simon and J. C. Shaw. The system used heuristic guidance and managed to prove 38 of the first 52 theorems of the Principia. The heuristic approach of the Logic Theory Machine tried to emulate human mathematicians, and could not guarantee that a proof could be found for every valid theorem even in principle. In contrast, other, more systematic algorithms achieved, at least theoretically, completeness for first-order logic; the resulting propositional formulas could then be checked for unsatisfiability using a number of methods.
Gilmore's program used conversion to disjunctive normal form, a form in which the satisfiability of a formula is obvious. Depending on the underlying logic, the problem of deciding the validity of a formula varies from trivial to impossible. For the frequent case of propositional logic, the problem is decidable but co-NP-complete. However, invalid formulas cannot always be recognized. The above applies to first-order theories, such as Peano Arithmetic. However, for a specific model that may be described by a first-order theory, some statements may be true yet not provable in the theory. Despite this theoretical limit, in practice, theorem provers can solve many hard problems. A simpler, but related, problem is proof verification, where an existing proof for a theorem is certified valid. For this, it is required that each individual proof step can be verified by a primitive recursive function or program. Proof assistants require a human user to give hints to the system. However, these successes are sporadic, and work on hard problems usually requires a proficient user. Other techniques include model checking, which, in the simplest case, involves brute-force enumeration of many possible states. There are hybrid theorem proving systems which use model checking as an inference rule, and there are also programs which were written to prove a particular theorem, with a proof that if the program finishes with a certain result, then the theorem is true.
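For the decidable propositional case mentioned above, validity can be settled by exhaustive truth tables. This is a minimal sketch of that systematic (non-heuristic) approach; representing formulas as Python functions is an illustrative choice, and Peirce's law serves as an example tautology:

```python
# Truth-table decision procedure for propositional validity.
from itertools import product

def is_tautology(formula, n_vars):
    """True iff the formula holds under all 2^n truth assignments."""
    return all(formula(*values)
               for values in product([False, True], repeat=n_vars))

implies = lambda a, b: (not a) or b          # material implication

# Peirce's law ((p → q) → p) → p is a classical tautology ...
peirce = lambda p, q: implies(implies(implies(p, q), p), p)
print(is_tautology(peirce, 2))                        # True

# ... while p → q alone is not valid.
print(is_tautology(lambda p, q: implies(p, q), 2))    # False
```

The procedure always terminates, illustrating decidability, but its 2^n cost reflects the co-NP-completeness of the validity problem.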
11.
Propositional logic
–
Logical connectives are found in natural languages; in English, for example, some examples are "and", "or", and "not". The following is an example of a very simple inference within the scope of propositional logic. Premise 1: If it's raining then it's cloudy. Premise 2: It's raining. Conclusion: It's cloudy. Both premises and the conclusion are propositions; the premises are taken for granted, and then with the application of modus ponens the conclusion follows. Not only that, but the premises and conclusion will also correspond with any other inference of this form. Propositional logic may be studied through a formal system in which formulas of a formal language may be interpreted to represent propositions. A system of inference rules and axioms allows certain formulas to be derived. These derived formulas are called theorems and may be interpreted to be true propositions. A constructed sequence of such formulas is known as a derivation or proof, and the last formula of the sequence is the theorem; the derivation may be interpreted as proof of the proposition represented by the theorem. When a formal system is used to represent formal logic, only statement letters are represented directly. Usually in truth-functional propositional logic, formulas are interpreted as having either a truth value of true or a truth value of false. Truth-functional propositional logic, and systems isomorphic to it, are considered to be zeroth-order logic. Although propositional logic had been hinted at by earlier philosophers, it was developed into a formal logic by Chrysippus in the 3rd century BC and expanded by his successor Stoics. The logic was focused on propositions; this advancement was different from the traditional syllogistic logic, which was focused on terms. However, later in antiquity, the propositional logic developed by the Stoics was no longer understood; consequently, the system was essentially reinvented by Peter Abelard in the 12th century.
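The modus ponens inference above can be sketched as a tiny forward-chaining routine that closes a set of premises under the rule "from A and A → B, infer B". The string propositions and function name are illustrative:

```python
# Minimal forward chaining with modus ponens over string propositions.

def forward_chain(facts, rules):
    """Close a set of facts under modus ponens.

    rules is a list of (antecedent, consequent) pairs, each standing
    for the conditional 'if antecedent then consequent'.
    """
    derived = set(facts)
    changed = True
    while changed:                 # repeat until no new formula is derivable
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)      # modus ponens step
                changed = True
    return derived

premises = {"it is raining"}
rules = [("it is raining", "it is cloudy")]
print(forward_chain(premises, rules))
```

Every formula returned beyond the premises is a theorem of this little system, and the sequence of modus ponens steps that produced it is its derivation.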
Propositional logic was eventually refined using symbolic logic. The 17th/18th-century mathematician Gottfried Leibniz has been credited with being the founder of symbolic logic for his work with the calculus ratiocinator. Although his work was the first of its kind, it was unknown to the larger logical community; consequently, many of the advances achieved by Leibniz were recreated by logicians like George Boole and Augustus De Morgan, completely independently of Leibniz. Just as propositional logic can be considered an advancement from the earlier syllogistic logic, Gottlob Frege's predicate logic can be considered an advancement from propositional logic; one author describes predicate logic as combining "the distinctive features of syllogistic logic and propositional logic". Consequently, predicate logic ushered in a new era in logic's history; however, advances in propositional logic were still made after Frege, including natural deduction and truth trees. Natural deduction was invented by Gerhard Gentzen and Jan Łukasiewicz; truth trees were invented by Evert Willem Beth. The invention of truth tables, however, is of controversial attribution: within works by Frege and Bertrand Russell are ideas influential to the invention of truth tables, but the actual tabular structure is credited to either Ludwig Wittgenstein or Emil Post.
12.
Logical conjunction
–
In logic and mathematics, and is the truth-functional operator of logical conjunction; the and of a set of operands is true if and only if all of its operands are true. The logical connective that represents this operator is typically written as ∧ or ⋅. "A and B" is true only if A is true and B is true. An operand of a conjunction is a conjunct. Related concepts in other fields are: in natural language, the coordinating conjunction "and"; in programming languages, the short-circuit and control structure. And is usually denoted by an infix operator: in mathematics and logic, ∧; in electronics, ⋅; and in programming languages, &, &&, or and. In Jan Łukasiewicz's prefix notation for logic, the operator is K. Logical conjunction is an operation on two logical values, typically the values of two propositions, that produces a value of true if and only if both of its operands are true: the truth table of A ∧ B has T∧T = T, T∧F = F, F∧T = F, and F∧F = F. The conjunctive identity is 1 (true), which is to say that AND-ing an expression with 1 will never change the value of the expression; in keeping with the concept of vacuous truth, when conjunction is defined as an operator or function of arbitrary arity, the empty conjunction is often defined as having the value true. As a rule of inference, conjunction introduction is a classically valid, simple argument form. The argument form has two premises, A and B; intuitively, it permits the inference of their conjunction: A, B, therefore A and B. In logical operator notation: A, B ⊢ A ∧ B. Here is an example of an argument that fits the form conjunction introduction: Bob likes apples. Bob likes oranges. Therefore, Bob likes apples and oranges. Conjunction elimination is another classically valid, simple argument form. Intuitively, it permits the inference, from any conjunction, of either element of that conjunction: A and B, therefore A. In logical operator notation: A ∧ B ⊢ A. Logical conjunction is falsehood-preserving: when all inputs are false, the output is false. If using binary values for true and false, then logical conjunction works exactly like normal arithmetic multiplication.
Many languages also provide short-circuit control structures corresponding to logical conjunction. Logical conjunction is used for bitwise operations, where 0 corresponds to false and 1 to true: 0 AND 0 = 0, 0 AND 1 = 0, 1 AND 0 = 0, 1 AND 1 = 1. The operation can also be applied to two binary words viewed as bitstrings of equal length, by taking the bitwise AND of each pair of bits at corresponding positions. For example, 11000110 AND 10100011 = 10000010. This can be used to select part of a bitstring using a bit mask; for example, 10011101 AND 00001000 = 00001000 extracts the fifth bit of an 8-bit bitstring
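The bitwise examples above can be reproduced directly in Python, whose & operator performs the bitwise AND (a small sketch using the exact values from the text):

```python
# Bitwise AND of two 8-bit words: 11000110 AND 10100011 = 10000010.
x = 0b11000110
y = 0b10100011
assert x & y == 0b10000010

# A bit mask selects part of a bitstring:
# 10011101 AND 00001000 = 00001000 keeps only the masked bit.
word = 0b10011101
mask = 0b00001000
assert word & mask == 0b00001000
print(format(x & y, "08b"), format(word & mask, "08b"))  # 10000010 00001000
```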
13.
Logical disjunction
–
In logic and mathematics, or is the truth-functional operator of disjunction, also known as alternation; the or of a set of operands is true if and only if one or more of its operands is true. The logical connective that represents this operator is typically written as ∨ or +. A or B is true if A is true, or if B is true, or if both A and B are true. In logic, or by itself means the inclusive or, distinguished from the exclusive or. An operand of a disjunction is called a disjunct. Related concepts in other fields are the coordinating conjunction or in natural language, and the short-circuit or control structure in programming languages. Or is usually expressed with an infix operator: in mathematics and logic, ∨; in electronics, +; and in most programming languages, |, ||, or or. In Jan Łukasiewicz's prefix notation for logic, the operator is A. Logical disjunction is an operation on two logical values, typically the values of two propositions, that has a value of false if and only if both of its operands are false. More generally, a disjunction is a logical formula that can have one or more literals separated only by ors. A single literal is often considered to be a degenerate disjunction. The disjunctive identity is false, which is to say that the or of an expression with false has the same value as the original expression. In keeping with the concept of vacuous truth, when disjunction is defined as an operator or function of arbitrary arity, the empty disjunction (OR-ing no operands) is defined as false. Disjunction is falsehood-preserving: the interpretation under which all variables are assigned a truth value of false produces a truth value of false as a result of disjunction. The mathematical symbol for logical disjunction varies in the literature: in addition to the word or, and the formula Apq, the symbol ∨, deriving from the Latin word vel, is commonly used for disjunction. For example, A ∨ B is read as A or B; such a disjunction is false if both A and B are false. In all other cases it is true. All of the following are disjunctions: A ∨ B, ¬A ∨ B, A ∨ ¬B ∨ ¬C ∨ D ∨ ¬E.
The corresponding operation in set theory is the set-theoretic union. Operators corresponding to logical disjunction exist in most programming languages. Disjunction is often used for bitwise operations; for example, x = x | 0b00000001 will force the final bit to 1 while leaving other bits unchanged. Logical disjunction is usually short-circuited: if the first operand evaluates to true, then the second operand is not evaluated. The logical disjunction operator thus usually constitutes a sequence point. In a parallel language, it is possible to evaluate both sides in parallel, with the whole expression yielding true as soon as one side terminates with the value true
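Short-circuit evaluation and the bitwise-or idiom above can both be demonstrated in Python (a minimal sketch; the helper name `tracked` is illustrative):

```python
# Short-circuit evaluation: when the first operand of `or` is true,
# the second operand is never evaluated.
calls = []

def tracked(value):
    calls.append(value)
    return value

result = tracked(True) or tracked(False)
assert result is True
assert calls == [True]  # the second operand was skipped

# Bitwise or, as in the text: force the final bit of x to 1.
x = 0b10100000
x = x | 0b00000001
assert x == 0b10100001
```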
14.
Negation
–
Negation is a unary logical connective. It may be applied as an operation on propositions, truth values, or semantic values more generally. In classical logic, negation is normally identified with the truth function that takes truth to falsity and vice versa. In intuitionistic logic, according to the Brouwer–Heyting–Kolmogorov interpretation, the negation of a proposition p is the proposition whose proofs are the refutations of p. Classical negation is an operation on one logical value, typically the value of a proposition, that produces a value of true when its operand is false and a value of false when its operand is true. So, if statement A is true, then ¬A would be false. Classical negation can be defined in terms of other logical operations. For example, ¬p can be defined as p → F; conversely, one can define F as p & ¬p for any proposition p, where & is logical conjunction. The idea here is that any contradiction is false. While these ideas work in both classical and intuitionistic logic, they do not work in paraconsistent logic, where contradictions are not necessarily false. In classical logic we also get a further identity: p → q can be defined as ¬p ∨ q. Algebraically, classical negation corresponds to complementation in a Boolean algebra, and intuitionistic negation to pseudocomplementation in a Heyting algebra; these algebras provide a semantics for classical and intuitionistic logic respectively. The negation of a proposition p is notated in different ways in various contexts of discussion and fields of application. Among these variants are the following: in set theory, \ is also used to indicate "not a member of": U \ A is the set of all members of U that are not members of A. No matter how it is notated or symbolized, the negation ¬p / −p can be read as "it is not the case that p" or simply "not p". Within a system of classical logic, double negation, that is, the negation of the negation of a proposition p, is logically equivalent to p. In intuitionistic logic, a proposition implies its double negation but not conversely; this marks one important difference between classical and intuitionistic negation.
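The classical identities above, ¬p ≡ (p → F) and (p → q) ≡ (¬p ∨ q), can be checked by brute force over all truth assignments; a minimal Python sketch (the helper name `implies` is an illustrative choice):

```python
# Material conditional: p → q is false only when p is true and q false.
def implies(p, q):
    return not (p and not q)

for p in (False, True):
    # ¬p can be defined as p → F
    assert (not p) == implies(p, False)
    # classical double negation: ¬¬p is equivalent to p
    assert (not (not p)) == p
    for q in (False, True):
        # p → q can be defined as ¬p ∨ q
        assert implies(p, q) == ((not p) or q)
print("identities verified")
```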
Algebraically, classical negation is called an involution of period two, since applying it twice returns the original value. In intuitionistic logic, ¬¬¬p is equivalent to ¬p (a result related to Glivenko's theorem). De Morgan's laws provide a way of distributing negation over disjunction and conjunction: ¬(a ∨ b) ≡ ¬a ∧ ¬b and ¬(a ∧ b) ≡ ¬a ∨ ¬b. In Boolean algebra, a linear function is one such that there exist a0, a1, …, an ∈ {0, 1} with f(b1, …, bn) = a0 ⊕ (a1 ∧ b1) ⊕ ⋯ ⊕ (an ∧ bn); another way to express this is that each variable either always makes a difference in the truth value of the operation or never makes a difference. Negation is a linear logical operator
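De Morgan's laws hold for every assignment of truth values, which a short exhaustive Python check can confirm (an illustrative sketch, not part of the original text):

```python
# Verify De Morgan's laws over every Boolean assignment:
# ¬(a ∨ b) ≡ ¬a ∧ ¬b  and  ¬(a ∧ b) ≡ ¬a ∨ ¬b.
from itertools import product

for a, b in product([False, True], repeat=2):
    assert (not (a or b)) == ((not a) and (not b))
    assert (not (a and b)) == ((not a) or (not b))
print("De Morgan's laws hold for all inputs")
```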
15.
Logical value
–
In logic and mathematics, a truth value, sometimes called a logical value, is a value indicating the relation of a proposition to truth. In classical logic, with its intended semantics, the truth values are true and false. This set of two values is called the Boolean domain. The corresponding semantics of logical connectives are truth functions, whose values are expressed in the form of truth tables; logical biconditional becomes the equality binary relation, and negation becomes a bijection which permutes true and false. Conjunction and disjunction are dual with respect to negation, which is expressed by De Morgan's laws. Assigning values to propositional variables is referred to as valuation. In intuitionistic logic, and more generally in constructive mathematics, statements are assigned a truth value only if they can be given a constructive proof. One starts with a set of axioms; a statement is true if one can build a proof of the statement from those axioms, and false if one can deduce a contradiction from it. This leaves open the possibility of statements that have not yet been assigned a truth value. Unproven statements in intuitionistic logic are not given an intermediate truth value; indeed, one can prove that they have no truth value. There are various ways of interpreting intuitionistic logic, including the Brouwer–Heyting–Kolmogorov interpretation. Multi-valued logics allow for more than two truth values, possibly containing some internal structure; for example, on the unit interval such structure is a total order. Not all logical systems are truth-valuational in the sense that logical connectives may be interpreted as truth functions, but even non-truth-valuational logics can associate values with logical formulae, as is done in algebraic semantics. The algebraic semantics of intuitionistic logic is given in terms of Heyting algebras. Intuitionistic type theory uses types in the place of truth values.
Topos theory uses truth values in a special sense: the truth values of a topos are the global elements of the subobject classifier. Having truth values in this sense does not make a logic truth-valuational
16.
Decision problem
–
In computability theory and computational complexity theory, a decision problem is a question in some formal system that can be posed as a yes–no question, dependent on the input values. For example, "given two numbers x and y, does x evenly divide y?" is a decision problem. The answer can be yes or no, and depends upon the values of x and y. A method for solving a decision problem, given in the form of an algorithm, is called a decision procedure for that problem. A decision procedure for the problem "given two numbers x and y, does x evenly divide y?" would give the steps for determining whether x evenly divides y. One such algorithm is long division, taught to schoolchildren: if the remainder is zero, the answer produced is yes; otherwise it is no. A decision problem which can be solved by an algorithm, such as this example, is called decidable. The field of computational complexity categorizes decidable decision problems by how difficult they are to solve. "Difficult", in this sense, is described in terms of the computational resources needed by the most efficient algorithm for a certain problem. The field of computability theory, meanwhile, categorizes undecidable decision problems by Turing degree. A decision problem is any arbitrary yes-or-no question on a set of inputs; because of this, it is traditional to define a decision problem equivalently as the set of inputs for which the answer is yes. These inputs can be natural numbers, but may also be values of some other kind, such as strings over the binary alphabet or over some other finite set of symbols. The subset of strings for which the problem returns yes is a formal language; alternatively, using an encoding such as Gödel numbering, any string can be encoded as a natural number, via which a decision problem can be defined as a subset of the natural numbers. A classic example of a decision problem is the set of prime numbers. It is possible to decide whether a given natural number is prime by testing every possible nontrivial factor.
Although much more efficient methods of primality testing are known, the existence of any effective method is enough to establish decidability. A decision problem A is called decidable or effectively solvable if A is a recursive set. A problem is called partially decidable, semidecidable, solvable, or provable if A is a recursively enumerable set. Problems that are not decidable are called undecidable; the halting problem is an important undecidable decision problem (for more examples, see the list of undecidable problems). Decision problems can be ordered according to many-one reducibility and related feasible reductions such as polynomial-time reductions
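The divisibility and primality questions discussed above make a concrete pair of decision procedures; a minimal Python sketch (the function names are illustrative, and the primality test is the naive trial-division method from the text, not an efficient one):

```python
# A decision procedure answers yes/no for every input instance.
def divides(x, y):
    """Decide: does x evenly divide y? Yes iff the remainder is zero."""
    return y % x == 0

def is_prime(n):
    """Decide primality by testing every possible nontrivial factor."""
    if n < 2:
        return False
    return not any(divides(d, n) for d in range(2, n))

assert divides(3, 12) and not divides(5, 12)
assert is_prime(13) and not is_prime(15)
```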
17.
Theoretical computer science
–
It is not easy to circumscribe the theoretical areas of computer science precisely. Work in this field is often distinguished by its emphasis on mathematical technique and rigor. Despite this broad scope, the "theory people" in computer science self-identify as different from the "applied people". Some characterize themselves as doing the science underlying the field of computing; other theory-applied people suggest that it is impossible to separate theory and application. This means that the theory people regularly use experimental science done in less-theoretical areas, such as software system research. It also means that there is more cooperation than mutually exclusive competition between theory and application. These developments led to the modern study of logic and computability. Information theory was added to the field with Claude Shannon's 1948 mathematical theory of communication. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain; with mounting biological data supporting this hypothesis with some modification, the field of neural networks was established. In 1971, Stephen Cook and, working independently, Leonid Levin proved that there exist practically relevant problems that are NP-complete – a landmark result in computational complexity theory. With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction; in other words, one could compute functions on multiple states simultaneously. Modern theoretical computer science research is based on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed. An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning; an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function.
The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. A data structure is a particular way of organizing data in a computer so that it can be used efficiently. Different kinds of data structures are suited to different kinds of applications; for example, databases use B-tree indexes for data retrieval, while compilers typically use hash tables to look up identifiers. Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms; some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and secondary memory. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. Computational complexity theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage
18.
Computational complexity theory
–
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication and the number of gates in a circuit. One of the roles of computational complexity theory is to determine the limits on what computers can and cannot do. Closely related fields in computer science are the analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; in contrast, an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing: the instance is a number, and the solution is yes if the number is prime and no otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, as in a real-world computer. Mathematical objects other than bitstrings must be suitably encoded; for example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. For the theory to be independent of the choice of encoding, one ensures that different representations can be transformed into each other efficiently.
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are instances whose answer is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language; if the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input. An example of such a problem is the following
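The accept/reject view of a decision problem as language membership can be sketched in Python; the language chosen here, binary strings that encode even numbers, is an illustrative assumption rather than an example from the text:

```python
# A decider for a formal language over the binary alphabet:
# accept exactly the strings that encode an even natural number.
def decide(instance: str) -> bool:
    """Accept iff the binary string's last bit is 0 (an even number)."""
    return instance[-1] == "0"

assert decide("1010")      # 10 is even: accept
assert not decide("111")   # 7 is odd: reject
```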
19.
Cryptography
–
Cryptography or cryptology is the practice and study of techniques for secure communication in the presence of third parties called adversaries. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include ATM cards, computer passwords, and electronic commerce. Cryptography prior to the modern age was effectively synonymous with encryption. The originator of an encrypted message shared the decoding technique needed to recover the information only with intended recipients. The cryptography literature often uses Alice for the sender and Bob for the intended recipient. It is theoretically possible to break such a system, but it is infeasible to do so by any known practical means. The growth of cryptographic technology has raised a number of legal issues in the information age. Cryptography's potential for use as a tool for espionage and sedition has led some governments to classify it as a weapon and to limit or even prohibit its use. In some jurisdictions where the use of cryptography is legal, laws permit investigators to compel the disclosure of encryption keys for documents relevant to an investigation. Cryptography also plays a major role in digital rights management and copyright infringement of digital media. Until modern times, cryptography referred almost exclusively to encryption, which is the process of converting ordinary information into unintelligible text; decryption is the reverse, in other words, moving from the unintelligible ciphertext back to plaintext. A cipher is a pair of algorithms that create the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a key. The key is a secret, usually a short string of characters. Historically, ciphers were often used directly for encryption or decryption without additional procedures such as authentication or integrity checks.
There are two kinds of cryptosystems: symmetric and asymmetric. In symmetric systems the same key is used to encrypt and decrypt a message. Data manipulation in symmetric systems is faster than in asymmetric systems, as they generally use shorter key lengths. Asymmetric systems use a public key to encrypt a message and a private key to decrypt it. Use of asymmetric systems enhances the security of communication. Examples of asymmetric systems include RSA and ECC; symmetric models include the commonly used AES, which replaced the older DES. In colloquial use, the term "code" is often used to mean any method of encryption or concealment of meaning. However, in cryptography, code has a more specific meaning: the replacement of a unit of plaintext with a code word. English is more flexible than several other languages, in which "cryptology" is always used in the second sense above. RFC 2828 advises that steganography is sometimes included in cryptology. The study of characteristics of languages that have some application in cryptography or cryptology is called cryptolinguistics
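The symmetric idea, one shared key for both directions, can be illustrated with a toy sketch (this is not a secure cipher and not any standard algorithm such as AES; it merely shows the same key encrypting and decrypting):

```python
# Toy symmetric "cipher": XOR each message byte with a repeating key.
# XOR is its own inverse, so applying the same key twice recovers
# the plaintext. For illustration only; never use this for security.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
key = b"secret"
ciphertext = xor_cipher(plaintext, key)
assert ciphertext != plaintext
# Decryption uses the very same key, the hallmark of a symmetric system.
assert xor_cipher(ciphertext, key) == plaintext
```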
20.
Boolean algebra (structure)
–
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets; it is also a special case of a De Morgan algebra and a Kleene algebra. The term "Boolean algebra" honors George Boole, a self-educated English mathematician. Boole's formulation differs from that described above in some important respects; for example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and Boolean-valued models. A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. It follows from the last three pairs of axioms above, or from the absorption axiom, that a = b ∧ a if and only if a ∨ b = b. The relation ≤, defined by a ≤ b if these equivalent conditions hold, is a partial order with least element 0. The meet a ∧ b and the join a ∨ b of two elements coincide with their infimum and supremum, respectively, with respect to ≤. The first four pairs of axioms constitute a definition of a bounded lattice.
It follows from the first five pairs of axioms that any complement is unique. The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra, one obtains another Boolean algebra with the same elements. Furthermore, every possible input-output behavior can be modeled by a suitable Boolean expression. In the algebra of subsets of a set S, the smallest element 0 is the empty set and the largest element 1 is the set S itself. Starting with the propositional calculus with κ sentence symbols, form the Lindenbaum algebra; this construction yields a Boolean algebra, and it is in fact the free Boolean algebra on κ generators. A truth assignment in propositional calculus is then a Boolean algebra homomorphism from this algebra to the two-element Boolean algebra. Interval algebras are useful in the study of Lindenbaum–Tarski algebras; every countable Boolean algebra is isomorphic to an interval algebra. For any natural number n, the set of all positive divisors of n, with a ≤ b defined as a divides b, forms a distributive lattice; this lattice is a Boolean algebra if and only if n is square-free
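The divisor example above can be checked computationally; a small Python sketch for the square-free case n = 30 (with gcd as meet, lcm as join, and n // a as the complement of a, which are the standard choices for this lattice):

```python
# The positive divisors of a square-free n, ordered by divisibility,
# form a Boolean algebra: meet = gcd, join = lcm, complement = n // a.
from math import gcd

n = 30  # square-free: 2 * 3 * 5
divisors = [d for d in range(1, n + 1) if n % d == 0]

def lcm(a, b):
    return a * b // gcd(a, b)

for a in divisors:
    complement = n // a
    # a meet its complement is the least element 1,
    # a join its complement is the greatest element n.
    assert gcd(a, complement) == 1
    assert lcm(a, complement) == n
```

For a non-square-free n such as 12, the element 2 has no such complement, which is why square-freeness is required.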
21.
Stephen Cook
–
Stephen Arthur Cook, OC OOnt, is an American-Canadian computer scientist and mathematician who has made major contributions to the fields of complexity theory and proof complexity. He is currently a University Professor at the University of Toronto, Department of Computer Science. Cook received his bachelor's degree in 1961 from the University of Michigan, and his master's degree and Ph.D. from Harvard University, in 1962 and 1966 respectively. He joined the University of California, Berkeley, mathematics department in 1966 as an assistant professor. Stephen Cook is considered one of the forefathers of computational complexity theory. During his PhD, Cook worked on the complexity of functions, mainly on multiplication. In his seminal 1971 paper he laid the foundations of the theory of NP-completeness; this theorem was proven independently by Leonid Levin in the Soviet Union, and has thus been given the name the Cook–Levin theorem. The paper also formulated the most famous problem in computer science, P vs. NP. Informally, the P vs. NP question asks whether every optimization problem whose answers can be efficiently verified for correctness/optimality can be solved optimally with an efficient algorithm. Given the abundance of such problems in everyday life, a positive answer to the P vs. NP question would likely have profound practical and philosophical consequences. Cook conjectures that there are optimization problems which cannot be solved by efficient algorithms. The conjecture remains open and is among the seven famous Millennium Prize Problems. In 1982, Cook received the Turing Award for his contributions to complexity theory. His citation reads, in part, "For his advancement of our understanding of the complexity of computation in a significant and profound way." His seminal paper, The Complexity of Theorem Proving Procedures, presented at the 1971 ACM SIGACT Symposium on the Theory of Computing, laid the foundations for the theory of NP-completeness.
The ensuing exploration of the boundaries and nature of the NP-complete class of problems has been one of the most active research areas in computer science. He made another major contribution to the field in his 1979 paper, joint with his student Robert A. Reckhow, in which they proved that the existence of a proof system in which every true formula has a short proof is equivalent to NP = coNP. Cook co-authored a book in this area with his student Phuong The Nguyen, titled Logical Foundations of Proof Complexity. His main research areas are complexity theory and proof complexity, with excursions into programming language semantics, parallel computation, and artificial intelligence. He named the complexity class NC after Nick Pippenger; the complexity class SC is named after him. The definitions of the complexity class AC0 and its hierarchy AC were also introduced by him. According to Don Knuth, the KMP algorithm was inspired by Cook's automata for recognizing concatenated palindromes in linear time. Cook was awarded a Steacie Fellowship in 1977 and a Killam Research Fellowship in 1982. He has won the John L. Synge Award and the Bernard Bolzano Medal, and is a fellow of the Royal Society of London and the Royal Society of Canada. Cook was elected to membership in the National Academy of Sciences, and he won the ACM Turing Award in 1982. The Association for Computing Machinery honored him as a Fellow of the ACM in 2008 for his contributions to the theory of computational complexity. The Government of Ontario appointed him to the Order of Ontario in 2013, and he won the 2012 Gerhard Herzberg Canada Gold Medal for Science and Engineering, the highest honor for scientists and engineers in Canada
22.
University of Toronto
–
The University of Toronto is a public research university in Toronto, Ontario, Canada, on the grounds that surround Queen's Park. It was founded by royal charter in 1827 as King's College. Originally controlled by the Church of England, the university assumed its present name in 1850 upon becoming a secular institution. As a collegiate university, it comprises twelve colleges, which differ in character and history, each with substantial autonomy on financial and institutional affairs. It has two satellite campuses, in Scarborough and Mississauga. Academically, the University of Toronto is noted for influential movements and curricula in literary criticism and communication theory. By a significant margin, it receives the most annual scientific research funding of any Canadian university. It is one of two members of the Association of American Universities outside the United States, the other being McGill University. The Varsity Blues are the athletic teams that represent the university in intercollegiate league matches, with long and storied ties to gridiron football and ice hockey. The university's Hart House is an early example of the North American student centre. The founding of a college had long been the desire of John Graves Simcoe, an Oxford-educated military commander who had fought in the American Revolutionary War. The Upper Canada Executive Committee recommended in 1798 that a college be established in York, the colonial capital. On March 15, 1827, a royal charter was formally issued by King George IV, proclaiming "from this time one College, with the style and privileges of a University ... for the education of youth in the principles of the Christian Religion". The granting of the charter was largely the result of intense lobbying by John Strachan, the influential Anglican Bishop of Toronto, who took office as the college's first president.
The original three-storey Greek Revival school building was built on the present site of Queen's Park. Under Strachan's stewardship, King's College was a religious institution closely aligned with the Church of England and the British colonial elite, known as the Family Compact. Reformist politicians opposed the clergy's control over colonial institutions and fought to have the college secularized. Having anticipated this decision, the enraged Strachan had resigned a year earlier to open Trinity College as a private Anglican seminary. University College was created as the nondenominational teaching branch of the University of Toronto. Established in 1878, the School of Practical Science was the precursor to the Faculty of Applied Science and Engineering. While the Faculty of Medicine opened in 1843, medical teaching was conducted by proprietary schools from 1853 until 1887, when the faculty absorbed the Toronto School of Medicine; meanwhile, the university continued to set examinations and confer medical degrees. The university opened the Faculty of Law in 1887, followed by the Faculty of Dentistry in 1888, when the Royal College of Dental Surgeons became an affiliate. Women were first admitted to the university in 1884. Over the next two decades, a collegiate system took shape as the university arranged federation with several ecclesiastical colleges, including Strachan's Trinity College in 1904. The university operated the Royal Conservatory of Music from 1896 to 1991. The University of Toronto Press was founded in 1901 as Canada's first academic publishing house
23.
Leonid Levin
–
Leonid Anatolievich Levin is a Soviet-American computer scientist. He obtained his master's degree at Moscow University in 1970, where he studied under Andrey Kolmogorov; his advisor at MIT was Albert R. Meyer. He and Stephen Cook independently discovered the existence of NP-complete problems. The Cook–Levin theorem was a breakthrough in computer science and an important step in the development of the theory of computational complexity. Levin was awarded the Knuth Prize in 2012 for his discovery of NP-completeness. He is currently a professor of computer science at Boston University, where he began teaching in 1980. His life is described in a chapter of the book Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists
24.
Russian Academy of Sciences
–
Headquartered in Moscow, the Academy is considered a civil, self-governed, non-commercial organization chartered by the Government of Russia. It combines the members of the RAS and the scientists employed by its institutions; the Academy currently includes around 650 institutions and 55,000 scientific researchers. There are three types of membership in the RAS: full members, corresponding members, and foreign members. Academicians and corresponding members must be citizens of the Russian Federation when elected; however, some academicians and corresponding members were elected before the collapse of the USSR and are now citizens of other countries. Members of the RAS are elected based on their scientific contributions, and election to membership is considered very prestigious. In the years 2005–2012, the academy had approximately 500 full and 700 corresponding members, but in 2013, after the Russian Academy of Agricultural Sciences and the Russian Academy of Medical Sciences became incorporated into the RAS, the number of RAS members accordingly increased. As of November 2016, after the last elections, there were 944 full members and 1,159 corresponding members in the renewed Russian Academy of Sciences. The RAS consists of 13 specialized scientific divisions, three territorial divisions (sometimes called branches), and 15 regional scientific centers. The Academy has numerous councils, committees, and commissions, all organized for different purposes. The Siberian Division of the Russian Academy of Sciences was established in 1957, with Mikhail Lavrentyev as founding chairman; its research centers are in Novosibirsk, Tomsk, Krasnoyarsk, Irkutsk, Yakutsk, Ulan-Ude, Kemerovo, and Tyumen. As of 2005, the Division employed over 33,000 people, 58 of whom were members of the Academy. The Ural Division of the Russian Academy of Sciences was established in 1932; its research centers are in Yekaterinburg, Perm, Cheliabinsk, Izhevsk, Orenburg, Ufa, and Syktyvkar.
As of 2007, the Division employed 3,600 scientists, including 590 full professors and 31 full members of the Academy. The Russian Space Science Internet (RSSI) started with just three members; it now has 3,100 members, including 57 from the largest research institutions. Russian universities and technical institutes are not under the supervision of the RAS, but the Academy is increasing its presence in the educational area: in 1990 the Higher Chemical College of the Russian Academy of Sciences was founded. The Academy gives out a number of prizes, medals and awards, among them the Lomonosov Gold Medal, the Lobachevsky Prize, the Demidov Prize, the Kurchatov Medal, the Pushkin Prize, and others. Expeditions to explore parts of the country had Academy scientists as their leaders or most active participants. A separate organization, called the Russian Academy, was created in 1783 to work on the study of the Russian language. Presided over by Princess Yekaterina Dashkova, the Russian Academy was engaged in compiling the six-volume Academic Dictionary of the Russian Language; it was merged into the Imperial Saint Petersburg Academy of Sciences in 1841. In December 1917, Sergey Fedorovich Oldenburg, a leading ethnographer and political activist in the Kadet party, met with Vladimir Lenin to discuss the future of the Academy. They agreed that the expertise of the Academy would be applied to addressing questions of state construction, and in 1925 the Soviet government recognized the Russian Academy of Sciences as the highest all-Union scientific institution and renamed it the Academy of Sciences of the USSR. The Soviet Academy of Sciences was affected, like all universities, by the rules the state imposed, particularly those pertaining to censorship; it ended up with a head of its philosophy department who was placed there simply to keep the man out of trouble. 
The government decided not to execute the famous writer or send him to the gulag because he had won the Stalin award; doing so would have discredited the award, and with it Stalin himself, the leader of the Communist Party.
25.
Complexity class
–
In computational complexity theory, a complexity class is a set of problems of related resource-based complexity. A typical complexity class has a definition of the form: the set of problems that can be solved by an abstract machine M using O(f(n)) of resource R. Complexity classes are concerned with the rate of growth of the resource requirement as the input size n increases; the measure is asymptotic and does not give time or space requirements in concrete terms such as seconds or bytes. The O is read "order of": for the purposes of computational complexity theory, some of the details of the bounding function can be ignored; for instance, all polynomials can be grouped together into a single class. The resource in question is typically either time, essentially the number of operations on an abstract machine, or memory space. The simplest complexity classes are defined by the following factors: the type of computational problem (most commonly decision problems, though complexity classes can also be defined based on function problems, counting problems, optimization problems, and promise problems); the model of computation; and the resource being bounded together with the bound itself. The last two properties are usually stated together, as in "polynomial time", "logarithmic space", or "constant depth". Many complexity classes can also be characterized in terms of the mathematical logic needed to express them. Bounding the computation time above by some concrete function f(n) often yields complexity classes that depend on the chosen machine model; for instance, a language solvable in linear time on a multi-tape Turing machine may require quadratic time on a single-tape machine. If we allow polynomial variations in running time, however, the Cobham–Edmonds thesis states that the complexities in any two reasonable and general models of computation are polynomially related. This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time; the corresponding set of function problems is FP. The Blum axioms can be used to define complexity classes without referring to a concrete computational model. 
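As a concrete illustration (my own sketch, not from the article): graph connectivity is a classic problem in the class P, decidable in time linear in the size of the input. The function name below is hypothetical.

```python
from collections import deque

def is_connected(n, edges):
    """Decide whether an undirected graph on vertices 0..n-1 (n >= 1)
    is connected. Runs in O(n + m) time for m edges, so the problem
    lies in P."""
    adj = {v: [] for v in range(n)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # Breadth-first search from vertex 0; the graph is connected
    # exactly when the search reaches every vertex.
    seen = {0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == n

# A path on 4 vertices is connected; removing the middle edge is not.
print(is_connected(4, [(0, 1), (1, 2), (2, 3)]))  # True
print(is_connected(4, [(0, 1), (2, 3)]))          # False
```

The running time is bounded by a polynomial in the input size regardless of which reasonable machine model executes it, which is exactly the robustness the Cobham–Edmonds thesis describes.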
Many important complexity classes can be defined by bounding the time or space used by the algorithm, and some important complexity classes of decision problems are defined in this manner. It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE, by Savitch's theorem. #P is an important complexity class of counting problems, and classes like IP and AM are defined using interactive proof systems. ALL is the class of all decision problems. Many complexity classes are defined using the concept of a reduction
26.
Reduction (complexity)
–
In computability theory and computational complexity theory, a reduction is an algorithm for transforming one problem into another problem. A reduction from one problem to another may be used to show that the second problem is at least as difficult as the first. Intuitively, problem A is reducible to problem B if an algorithm for solving problem B efficiently could also be used as a subroutine to solve problem A efficiently; when this is true, solving A cannot be harder than solving B. "Harder" here means having a higher estimate of the required computational resources in a given context. We write A ≤m B, usually with a subscript on the ≤ to indicate the type of reduction being used. There are two main situations where we need to use reductions. First, we find ourselves trying to solve a problem that is similar to a problem we have already solved; this is perhaps the most obvious use of reductions. Second, suppose we have a problem that we have proven is hard to solve, and we have a similar new problem. We might suspect that it, too, is hard to solve, and we argue by contradiction: suppose the new problem is easy to solve. Then, if we can show that every instance of the old problem can be solved easily by transforming it into instances of the new problem and solving those, we have a contradiction. This establishes that the new problem is also hard. A very simple example of a reduction is from multiplication to squaring. Suppose all we know how to do is to add, subtract, take squares, and divide by two; we can then still compute the product of any two numbers, which seems to imply that these two problems are equally hard. This kind of reduction corresponds to a Turing reduction. However, the reduction becomes much harder if we add the restriction that we can only use the squaring function one time, and only at the end. Going in the other direction, however, we can certainly square a number with just one multiplication at the end. Using this limited form of reduction, we have shown the unsurprising result that multiplication is harder in general than squaring. 
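The multiplication-to-squaring reduction described above can be written out concretely (an illustrative sketch of my own, using the identity a·b = ((a+b)² − (a−b)²) / 4, which needs only addition, subtraction, squaring, and two halvings):

```python
def square(n):
    # The one "hard" primitive we assume we are given access to.
    return n * n

def multiply(a, b):
    # Reduce multiplication to squaring:
    #   a*b = ((a+b)^2 - (a-b)^2) / 4
    # Only addition, subtraction, squaring, and halving are used.
    return (square(a + b) - square(a - b)) // 4

print(multiply(7, 6))   # 42
print(multiply(-3, 5))  # -15
```

Because squaring is called twice here, this matches the unrestricted (Turing-style) reduction; under the restriction that squaring may be used only once, at the end, no such formula works.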
Given two subsets A and B of ℕ and a set of functions F from ℕ to ℕ that is closed under composition, A is called reducible to B under F if there exists an f ∈ F such that, for all x, x ∈ A ⟺ f(x) ∈ B. In this case we write A ≤F B. Let S be a subset of P(ℕ) and ≤ a reduction; then S is called closed under ≤ if, for all s ∈ S and A ∈ P(ℕ), A ≤ s implies A ∈ S. A reduction is a preordering, that is, a reflexive and transitive relation, on P(ℕ) × P(ℕ). As described in the example above, there are two main types of reductions used in computational complexity: the many-one reduction and the Turing reduction. Many-one reductions map instances of one problem to instances of another; Turing reductions compute the solution to one problem assuming a subroutine for the other. The many-one reduction is a stronger type of reduction than the Turing reduction, and is more effective at separating problems into distinct complexity classes
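As a hypothetical illustration of a many-one reduction (this example is mine, not from the article): a graph on n vertices has a vertex cover of size k exactly when it has an independent set of size n − k, so transforming the instance as below is a polynomial-time many-one reduction between the two decision problems.

```python
from itertools import combinations

def vc_to_is(n, edges, k):
    """Many-one reduction: map a vertex-cover instance (G, k) to an
    independent-set instance (G, n - k). The graph is unchanged; only
    the target size is rewritten, in constant time."""
    return n, edges, n - k

# Brute-force deciders, used here only to check the reduction.
def has_vertex_cover(n, edges, k):
    return any(all(u in s or v in s for u, v in edges)
               for s in map(set, combinations(range(n), k)))

def has_independent_set(n, edges, k):
    return any(all(u not in s or v not in s for u, v in edges)
               for s in map(set, combinations(range(n), k)))

# Triangle: minimum vertex cover has size 2, maximum independent set size 1.
inst = (3, [(0, 1), (1, 2), (0, 2)], 2)
assert has_vertex_cover(*inst) == has_independent_set(*vc_to_is(*inst))
```

The transformation maps yes-instances to yes-instances and no-instances to no-instances, which is precisely the many-one requirement; a Turing reduction would instead be allowed to call an independent-set solver any number of times.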
27.
Graph coloring
–
In graph theory, graph coloring is a special case of graph labeling: it is an assignment of labels, traditionally called colors, to elements of a graph subject to certain constraints. In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices share the same color; this is called a vertex coloring. Vertex coloring is the starting point of the subject, and other coloring problems can be transformed into a vertex version. For example, an edge coloring of a graph is just a vertex coloring of its line graph. However, non-vertex coloring problems are often stated and studied as-is; that is partly for perspective, and partly because some problems are best studied in their non-vertex form, as is the case for edge coloring. The convention of using colors originates from coloring the countries of a map, which was generalized to coloring the faces of a graph embedded in the plane. By planar duality this became coloring the vertices, and in this form it generalizes to all graphs. In mathematical and computer representations, it is typical to use the first few positive or nonnegative integers as the colors; in general, one can use any finite set as the color set. The nature of the coloring problem depends on the number of colors. Graph coloring enjoys many practical applications as well as theoretical challenges. Beside the classical types of problems, different limitations can also be set on the graph, or on the way a color is assigned. It has even reached popularity with the general public in the form of the number puzzle Sudoku. Graph coloring is still an active field of research. Note: many terms used in this article are defined in the Glossary of graph theory. The first results about graph coloring deal almost exclusively with planar graphs, in the form of the coloring of maps. 
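A minimal sketch of vertex coloring (my own illustration, not from the article) is the greedy algorithm: visit the vertices in some order and give each the smallest color not already used by a colored neighbor. This always produces a proper coloring with at most Δ + 1 colors, where Δ is the maximum degree.

```python
def greedy_coloring(adj):
    """Greedily color the vertices of a graph.

    adj maps each vertex to its set of neighbors; returns a dict
    mapping vertex -> color, where colors are nonnegative integers."""
    color = {}
    for v in adj:
        # Smallest color not used by an already-colored neighbor.
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# A 4-cycle is bipartite, so two colors suffice.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
coloring = greedy_coloring(cycle)
assert all(coloring[u] != coloring[v] for u in cycle for v in cycle[u])
assert max(coloring.values()) <= 1
```

Note that the number of colors used depends on the visiting order; finding the minimum number of colors (the chromatic number) is NP-hard in general.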
Guthrie’s brother passed the question on to his mathematics teacher Augustus De Morgan at University College, and Arthur Cayley raised the problem at a meeting of the London Mathematical Society in 1879. The same year, Alfred Kempe published a paper that claimed to establish the result, and for this accomplishment Kempe was elected a Fellow of the Royal Society and later President of the London Mathematical Society. In 1890, Heawood pointed out that Kempe’s argument was wrong; however, in that paper he proved the five color theorem, saying that every planar map can be colored with no more than five colors, using ideas of Kempe. The eventual proof of the four color theorem went back to the ideas of Heawood and Kempe, and it is also noteworthy for being the first major computer-aided proof. Kempe had already drawn attention to the general, non-planar case in 1879. The strong perfect graph conjecture remained unresolved for 40 years, until it was established as the celebrated strong perfect graph theorem by Chudnovsky, Robertson, Seymour, and Thomas in 2002. One of the major applications of graph coloring, register allocation in compilers, was introduced in 1981
28.
Clique problem
–
In computer science, the clique problem is the computational problem of finding cliques in a graph. It has several different formulations depending on which cliques, and what information about the cliques, should be found. The clique problem arises in the following real-world setting: consider a social network, where the vertices represent people and the edges represent mutual acquaintance. Then a clique represents a subset of people who all know each other. Along with its applications in social networks, the clique problem also has many applications in bioinformatics and computational chemistry. Most versions of the clique problem are hard: the clique decision problem is NP-complete, and the problem of finding the maximum clique is both fixed-parameter intractable and hard to approximate. Moreover, listing all maximal cliques may require exponential time, as there exist graphs with exponentially many maximal cliques. To find a maximum clique, one can systematically inspect all subsets of vertices. Although no polynomial-time algorithm is known for this problem, algorithms more efficient than brute-force search are known; for instance, the Bron–Kerbosch algorithm can be used to list all maximal cliques in worst-case optimal time. The study of complete subgraphs in mathematics predates the clique terminology: for instance, complete subgraphs make an early appearance in the mathematical literature in the graph-theoretic reformulation of Ramsey theory by Erdős & Szekeres. Luce & Perry used graphs to model social networks and adapted the social science terminology to graph theory; they were the first to call complete subgraphs cliques. The first algorithm for solving the clique problem is that of Harary & Ross. Since the work of Harary and Ross, many others have devised algorithms for various versions of the clique problem, and in the 1970s researchers began studying these algorithms from the point of view of worst-case analysis. See, for instance, Tarjan & Trojanowski, a work on the worst-case complexity of the maximum clique problem. 
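The Bron–Kerbosch algorithm mentioned above can be sketched as follows (my own illustrative Python version, without the pivoting refinement that the worst-case-optimal variants use):

```python
def bron_kerbosch(r, p, x, adj, cliques):
    """List all maximal cliques of the graph given by adjacency sets.

    r: vertices in the clique being grown,
    p: candidate vertices that extend every vertex of r,
    x: vertices already processed (used to detect non-maximality)."""
    if not p and not x:
        cliques.append(frozenset(r))  # r cannot be extended: maximal
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, cliques)
        p = p - {v}
        x = x | {v}

# Triangle 0-1-2 with a pendant edge 2-3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(sorted(sorted(c) for c in cliques))  # [[0, 1, 2], [2, 3]]
```

Moving each explored vertex from p to x is what prevents the same maximal clique from being reported twice and prunes branches that could only rediscover already-found cliques.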
In the 1990s, a series of papers beginning with Feige et al. showed that the maximum clique problem is hard even to approximate, work that was reported in the New York Times. Clique-finding algorithms have been used in chemistry to find chemicals that match a target structure and to model molecular docking; they can also be used to find similar structures within different molecules. In these applications, one forms a graph in which each vertex represents a matched pair of atoms, one from each of two molecules. Two vertices are connected by an edge if the matches that they represent are compatible with each other
29.
NP-hard
–
NP-hardness, in computational complexity theory, is the defining property of a class of problems that are, informally, at least as hard as the hardest problems in NP. As a consequence, finding a polynomial-time algorithm to solve any NP-hard problem would give polynomial-time algorithms for all the problems in NP. A common misconception is that the NP in NP-hard stands for "non-polynomial", when in fact it stands for "non-deterministic polynomial acceptable problems". Although it is suspected that there are no polynomial-time algorithms for NP-hard problems, this has never been proven. Moreover, the class P, in which all problems can be solved in polynomial time, is contained in the class NP. A decision problem H is NP-hard when every problem L in NP can be reduced to H in polynomial time; informally, we can think of an algorithm that can call a subroutine for solving H and that solves L in polynomial time when the subroutine call is counted as a single step. Another definition is to require that there is a polynomial-time reduction from some NP-complete problem G to H. As any problem L in NP reduces in polynomial time to G, L reduces in turn to H in polynomial time, so this new definition implies the previous one. Awkwardly, it does not restrict the class NP-hard to decision problems; for instance, it also includes search problems. If P ≠ NP, then NP-hard problems cannot be solved in polynomial time. Note that some NP-hard optimization problems can nevertheless be polynomial-time approximated up to some constant approximation ratio, or even up to any approximation ratio. An example of an NP-hard problem is the decision version of the subset sum problem; that is a decision problem, and it happens to be NP-complete. Another example of an NP-hard problem is the optimization problem of finding the least-cost cyclic route through all nodes of a weighted graph, commonly known as the traveling salesman problem. There are decision problems that are NP-hard but not NP-complete, for example the halting problem. This is the problem which asks, given a program and its input, whether it will run forever; that is a yes/no question, so this is a decision problem. It is easy to prove that the halting problem is NP-hard. 
It is also easy to see that the halting problem is not in NP, since all problems in NP are decidable in a finite number of operations, while the halting problem, in general, is undecidable. There are also NP-hard problems that are neither NP-complete nor undecidable; for instance, the language of true quantified Boolean formulas is decidable in polynomial space, but not in non-deterministic polynomial time (unless NP = PSPACE). NP-hard problems do not have to be elements of the complexity class NP. NP-hard: the class of decision problems which are at least as hard as the hardest problems in NP
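The subset sum problem mentioned above has an easy but exponential-time decider. This sketch (my own illustration, not from the article) simply tries all 2ⁿ subsets, which is why it is practical only for small inputs and why the problem's membership in P remains open.

```python
from itertools import combinations

def subset_sum(nums, target):
    """Decide whether some subset of nums sums to target.

    Enumerates all 2^n subsets via combinations of every size,
    so the running time is exponential in len(nums)."""
    return any(sum(c) == target
               for r in range(len(nums) + 1)
               for c in combinations(nums, r))

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5 = 9)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```

A certificate (the subset itself) can be checked in polynomial time, which is what places the decision problem in NP; no known algorithm decides it in time polynomial in the input size.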
30.
Small o notation
–
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann and Edmund Landau. In computer science, big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows. Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω, ω, and Θ, and big O notation is also used in many other fields to provide similar estimates. Let f and g be two functions defined on some subset of the real numbers. One writes f(x) = O(g(x)) if and only if there exist a real number M and a real number x0 such that |f(x)| ≤ M|g(x)| for all x ≥ x0. In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated. If f is a product of several factors, any constants (factors that do not depend on x) can be omitted. For example, let f(x) = 6x⁴ − 2x³ + 5, and suppose we wish to simplify this function, using O notation, to describe its growth rate as x approaches infinity. This function is the sum of three terms: 6x⁴, −2x³, and 5. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of x, namely 6x⁴. Now one may apply the rule above: 6x⁴ is a product of 6 and x⁴, in which the first factor does not depend on x. Omitting this factor results in the simplified form x⁴. Thus, we say that f(x) is a big O of x⁴; mathematically, we can write f(x) = O(x⁴). One may confirm this calculation using the formal definition: let f(x) = 6x⁴ − 2x³ + 5 and g(x) = x⁴. 
Applying the formal definition from above, the statement that f(x) = O(x⁴) is equivalent to its expansion |f(x)| ≤ M|x⁴| for all x ≥ x0, for some choice of x0 and M. To prove this, let x0 = 1 and M = 13. Big O notation has two main areas of application. In mathematics, it is used to describe how closely a finite series approximates a given function. In computer science, it is useful in the analysis of algorithms. In both applications, the function g appearing within the O(...) is typically chosen to be as simple as possible, omitting constant factors and lower order terms
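The claimed witnesses x0 = 1 and M = 13 can be checked numerically over a sample range (an illustrative sketch of my own; the sample check is of course not a proof, which requires the algebraic argument above):

```python
def f(x):
    return 6 * x**4 - 2 * x**3 + 5

def g(x):
    return x**4

# f(x) = O(g(x)) with witnesses x0 = 1 and M = 13:
# |f(x)| <= M * |g(x)| must hold for all x >= x0.
M, x0 = 13, 1
assert all(abs(f(x)) <= M * abs(g(x)) for x in range(x0, 10_000))
print("bound |f(x)| <= 13*x^4 holds on the sampled range")
```

At x = 1 the bound is 9 ≤ 13, and for larger x the dominant 6x⁴ term leaves ever more slack, which is why the single constant M = 13 suffices.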