1.
Computational complexity theory
–
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and by quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication and the number of gates in a circuit. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. Closely related fields in theoretical computer science are the analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources.

A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; in contrast, an instance of this problem is a rather concrete input. For example, consider the problem of primality testing: the instance is a number, and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances.

When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet {0, 1}, as in a real-world computer. Mathematical objects other than bitstrings must be suitably encoded; for example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. The choice of encoding should not matter, and this can be achieved by ensuring that different representations can be transformed into each other efficiently.
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are exactly the instances whose output is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is primality testing: given the binary representation of an integer, decide whether it is prime.
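As a minimal sketch (an illustration, not taken from the text above), the primality decision problem can be written as an algorithm that accepts or rejects input strings, so that the language being decided is exactly the set of binary strings encoding primes:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test (inefficient but correct)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def decide_primes(instance: str) -> bool:
    """Accept exactly the binary strings that encode prime numbers, so the
    language being decided is PRIMES = {s in {0,1}* : s encodes a prime}."""
    return is_prime(int(instance, 2))

print(decide_primes("101"))  # True: accepted, since 5 is prime
print(decide_primes("100"))  # False: rejected, since 4 is composite
```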
2.
PSPACE
–
In computational complexity theory, PSPACE is the set of all decision problems that can be solved by a Turing machine using a polynomial amount of space. PSPACE is a strict superset of the set of context-sensitive languages. It turns out that allowing the Turing machine to be nondeterministic does not add any extra power: because of Savitch's theorem, NPSPACE is equivalent to PSPACE, essentially because a deterministic Turing machine can simulate a non-deterministic Turing machine without needing much more space. Also, the complements of all problems in PSPACE are themselves in PSPACE, meaning that co-PSPACE = PSPACE. PSPACE contains the classes NL, P, NP, and PH and is contained in EXPTIME and EXPSPACE, and it is widely suspected that all of these containments are strict. The containments NL ⊊ PSPACE and PSPACE ⊊ EXPSPACE are both known to be strict: the first follows from direct diagonalization (the space hierarchy theorem) and the fact that PSPACE = NPSPACE via Savitch's theorem; the second follows simply from the space hierarchy theorem.

The hardest problems in PSPACE are the PSPACE-complete problems; see PSPACE-complete for examples of problems that are suspected to be in PSPACE but not in NP. The class PSPACE is closed under union, complementation, and Kleene star. An alternative characterization of PSPACE is the set of problems decidable by an alternating Turing machine in polynomial time. A logical characterization of PSPACE from descriptive complexity theory is that it is the set of problems expressible in second-order logic with the addition of a transitive closure operator; a full transitive closure is not needed, as a commutative transitive closure suffices. It is the addition of this operator that (possibly) distinguishes PSPACE from PH. A major result of complexity theory is that PSPACE can be characterized as all the languages recognizable by a particular interactive proof system, the one defining the class IP: in this system, there is an all-powerful prover trying to convince a randomized polynomial-time verifier that a string is in the language. PSPACE is also equal to the quantum complexity class QIP, the quantum analogue of IP.
PSPACE is also equal to PCTC, the class of problems solvable by classical computers using closed timelike curves, as well as to BQPCTC, the class solvable by quantum computers using closed timelike curves. PSPACE-complete problems are of importance to studying PSPACE problems because they represent the most difficult problems in PSPACE: finding a simple solution to a PSPACE-complete problem would mean we have a simple solution to all other problems in PSPACE, because all PSPACE problems can be reduced to a PSPACE-complete problem. An example of a PSPACE-complete problem is the quantified Boolean formula problem (TQBF).
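A fully quantified Boolean formula can be evaluated with memory proportional to the number of variables, which illustrates why TQBF lies in PSPACE even though the computation tree has exponentially many leaves. A minimal sketch (an illustration, not from the text above):

```python
def eval_qbf(quantifiers, formula, assignment=None):
    """Evaluate a fully quantified Boolean formula recursively. The space
    used is proportional to the number of variables (the recursion depth),
    even though 2^n leaf assignments are visited over time."""
    assignment = dict(assignment or {})
    if not quantifiers:
        return formula(assignment)
    quantifier, var = quantifiers[0]
    branches = []
    for value in (False, True):
        assignment[var] = value
        branches.append(eval_qbf(quantifiers[1:], formula, assignment))
    return all(branches) if quantifier == 'forall' else any(branches)

# ∀x ∃y (x ≠ y) is true, while ∃x ∀y (x ≠ y) is false.
print(eval_qbf([('forall', 'x'), ('exists', 'y')],
               lambda a: a['x'] != a['y']))  # True
print(eval_qbf([('exists', 'x'), ('forall', 'y')],
               lambda a: a['x'] != a['y']))  # False
```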
3.
Shafi Goldwasser
–
Shafrira Goldwasser is an American-born Israeli computer scientist. She is a professor of electrical engineering and computer science at MIT. She joined MIT in 1983, and in 1997 became the first holder of the RSA Professorship; she also became a professor at the Weizmann Institute of Science, concurrent with her professorship at MIT, in 1993. She is a member of the Theory of Computation group at the MIT Computer Science and Artificial Intelligence Laboratory. Goldwasser was a co-recipient of the 2012 Turing Award.

Goldwasser's research areas include computational complexity theory, cryptography, and computational number theory. Her work in complexity theory includes the classification of approximation problems, showing that some problems in NP remain hard even when only an approximate solution is needed. Goldwasser has twice won the Gödel Prize in theoretical computer science, first in 1993 and again in 2001. In 2001 she was elected to the American Academy of Arts and Sciences, in 2004 she was elected to the National Academy of Sciences, and she was selected as an IACR Fellow in 2007. Goldwasser received the 2008–2009 Athena Lecturer Award of the Association for Computing Machinery's Committee on Women in Computing, and she is the recipient of The Franklin Institute's 2010 Benjamin Franklin Medal in Computer and Cognitive Science. She received the IEEE Emanuel R. Piore Award in 2011, and was awarded the 2012 Turing Award along with Silvio Micali for their work in the field of cryptography.
4.
Silvio Micali
–
Silvio Micali is an Italian computer scientist at MIT. His research centers on the theory of cryptography and information security. Micali won the Gödel Prize in 1993. In 2007, he was selected to be a member of the National Academy of Sciences and a Fellow of the IACR. He is also a member of the National Academy of Engineering, and he received the Turing Award for the year 2012 along with Shafi Goldwasser for their work in the field of cryptography. In 2015 the University of Salerno acknowledged his work by awarding him an honoris causa degree in Computer Science. He is a Fellow of the American Academy of Arts and Sciences.
5.
String (computer science)
–
In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed. A string is generally considered a data type and is often implemented as an array of bytes that stores a sequence of elements, typically characters, using some character encoding. A string may also denote more general arrays or other sequence data types and structures. When a string appears literally in source code, it is known as a string literal or an anonymous string.

In formal languages, which are used in mathematical logic and theoretical computer science, a string is a finite sequence of symbols. Let Σ be a non-empty finite set of symbols, called the alphabet; no assumption is made about the nature of the symbols. A string over Σ is any finite sequence of symbols from Σ. For example, if Σ = {0, 1}, then 01011 is a string over Σ. The length of a string s is the number of symbols in s and can be any non-negative integer; it is often denoted |s|. The empty string, denoted ε, is the string over Σ of length 0. The set of all strings over Σ of length n is denoted Σn. For example, if Σ = {0, 1}, then Σ2 = {00, 01, 10, 11}. Note that Σ0 = {ε} for any alphabet Σ. The set of all strings over Σ of any length is the Kleene closure of Σ and is denoted Σ*. In terms of Σn, Σ* = ⋃n≥0 Σn. Although the set Σ* itself is countably infinite, each element of Σ* is a string of finite length. A set of strings over Σ is called a language over Σ. For example, if Σ = {0, 1}, the set of strings with an even number of zeros is a formal language over Σ.

Concatenation is an important binary operation on Σ*: for any two strings s and t in Σ*, their concatenation, denoted st, is defined as the sequence of symbols in s followed by the sequence of symbols in t. For example, if s = bear and t = hug, then st = bearhug. String concatenation is an associative, but non-commutative, operation.
The empty string ε serves as the identity element: for any string s, εs = sε = s. Therefore, the set Σ* together with the concatenation operation forms a monoid, the free monoid generated by Σ. In addition, the length function defines a monoid homomorphism from Σ* to the non-negative integers, since |st| = |s| + |t|. A string s is said to be a substring or factor of t if there exist strings u and v such that t = usv.
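These monoid properties can be checked directly in Python, whose `+` on strings is exactly the concatenation operation described above (a quick illustration, using the bear/hug example from the text):

```python
s, t, u = "bear", "hug", "s"

# Concatenation is associative but not commutative.
assert (s + t) + u == s + (t + u)
assert s + t != t + s

# The empty string is the identity element, so Σ* forms a monoid.
assert s + "" == "" + s == s

# The length function is a monoid homomorphism: |st| = |s| + |t| and |ε| = 0.
assert len(s + t) == len(s) + len(t)
assert len("") == 0

print(s + t)  # bearhug
```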
6.
Formal language
–
In mathematics, computer science, and linguistics, a formal language is a set of strings of symbols together with a set of rules that are specific to it. The alphabet of a formal language is the set of symbols, letters, or tokens from which the strings of the language may be formed. The strings formed from this alphabet are called words, and the words that belong to a particular formal language are sometimes called well-formed words or well-formed formulas. A formal language is often defined by means of a formal grammar, such as a regular grammar or context-free grammar. The field of formal language theory studies primarily the purely syntactical aspects of such languages—that is, their internal structural patterns. Formal language theory sprang out of linguistics, as a way of understanding the syntactic regularities of natural languages. The first formal language is thought to be the one used by Gottlob Frege in his Begriffsschrift, literally meaning "concept writing". Axel Thue's early semi-Thue system, which can be used for rewriting strings, was influential on formal grammars.

The elements of an alphabet are called its letters. Alphabets may be infinite; however, most definitions in formal language theory specify finite alphabets, and most results apply only to them. A word over an alphabet can be any finite sequence of letters. The set of all words over an alphabet Σ is usually denoted by Σ*. The length of a word is the number of letters it is composed of; for any alphabet there is exactly one word of length 0, the empty word. By concatenation one can combine two words to form a new word, whose length is the sum of the lengths of the original words; the result of concatenating a word with the empty word is the original word. A formal language L over an alphabet Σ is a subset of Σ*, that is, a set of words over that alphabet. Sometimes the sets of words are grouped into expressions, whereas rules and constraints may be formulated for the creation of well-formed expressions.
In computer science and mathematics, which do not usually deal with natural languages, the adjective "formal" is often omitted as redundant. In practice, there are many languages that can be described by rules, such as regular languages or context-free languages. The notion of a formal grammar may be closer to the intuitive concept of a "language" described by syntactic rules. By an abuse of the definition, a particular formal language is often thought of as being equipped with a formal grammar that describes it.

The following rules describe a formal language L over the alphabet Σ = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, +, =}:

Every nonempty string that does not contain + or = and does not start with 0 is in L.
The string 0 is in L.
A string containing = is in L if and only if there is exactly one =, and it separates two valid strings of L.
A string containing + but not = is in L if and only if every + in the string separates two valid strings of L.
No string is in L other than those implied by the previous rules.
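A recursive membership test makes rules like these concrete. The sketch below is an illustration (not from the text): it assumes the alphabet is the decimal digits together with + and =, and that an atomic valid string is a digit string with no leading zero (or the single digit 0):

```python
def in_L(s: str) -> bool:
    """Membership test for a language L of well-formed sums and equations.
    Assumes the alphabet is the decimal digits plus '+' and '=', and that
    an atomic valid string is a digit string with no leading zero."""
    if '=' in s:
        # exactly one '=', separating two valid strings of L
        if s.count('=') != 1:
            return False
        left, right = s.split('=')
        return in_L(left) and in_L(right)
    if '+' in s:
        # every '+' must separate two valid strings of L
        return all(in_L(part) for part in s.split('+'))
    return s == '0' or (s.isdigit() and s[0] != '0')

print(in_L('23+4=27'))  # True
print(in_L('2=3=4'))    # False: more than one '='
```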
7.
BPP (complexity)
–
In computational complexity theory, BPP (bounded-error probabilistic polynomial time) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time with an error probability of at most 1/3 for all instances. BPP is one of the largest practical classes of problems, meaning most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine. Alternatively, BPP can be defined using only deterministic Turing machines; for some applications this definition is preferable since it does not mention probabilistic Turing machines.

In practice, an error probability of 1/3 might not be acceptable; however, the choice of 1/3 in the definition is arbitrary. It can be any constant strictly between 0 and 1/2 and the set BPP will be unchanged. This makes it possible to create a highly accurate algorithm by merely running the basic algorithm several times and taking a majority vote of the answers. For example, one could equivalently define the class with the restriction that the algorithm can be wrong with probability at most 2^-100.

Besides the problems in P, which are obviously in BPP, many problems were known to be in BPP but not known to be in P. The number of such problems is decreasing, and it is conjectured that P = BPP. For a long time, one of the most famous problems known to be in BPP but not known to be in P was polynomial identity testing: deciding whether a polynomial is identically equal to the zero polynomial. In other words, is there an assignment of values to the variables such that when the polynomial is evaluated on these values, the result is nonzero? It suffices to choose each variable's value uniformly at random from a subset of at least d values, where d is the degree of the polynomial, to achieve bounded error probability.

If the access to randomness is removed from the definition of BPP, we get the complexity class P. If, in the definition of the class, we replace the ordinary Turing machine with a quantum computer, we get the class BQP. Adding postselection to BPP, or allowing computation paths to have different lengths, gives the class BPPpath; BPPpath is known to contain NP, and it is contained in its quantum counterpart PostBQP. A Monte Carlo algorithm is a randomized algorithm which is likely to be correct.
Problems in the class BPP have Monte Carlo algorithms with polynomially bounded running time. This is contrasted with a Las Vegas algorithm, a randomized algorithm which either outputs the correct answer or outputs "fail" with low probability. Las Vegas algorithms with polynomially bounded running times are used to define the class ZPP. Alternatively, ZPP contains probabilistic algorithms that are always correct and have expected polynomial running time; this is weaker than saying it is a polynomial time algorithm, since it may run for super-polynomial time on some executions.

It is known that BPP is closed under complement, that is, BPP = co-BPP. BPP is low for itself, meaning that a BPP machine with the power to solve BPP problems instantly is not any more powerful than the machine without this extra power. The relationship between BPP and NP is unknown: it is not known whether BPP is a subset of NP, NP is a subset of BPP, or neither. If NP is contained in BPP, which is considered unlikely since it would imply practical solutions for NP-complete problems, then NP = RP. It is known that RP is a subset of BPP, and BPP is a subset of PP.
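The majority-vote amplification described above is easy to simulate. The sketch below (an illustration, not from the text; the "decider" is a stand-in coin flip, not a real algorithm) runs a decider that errs with probability 1/3 many times and takes a majority vote, driving the overall error far below the single-run guarantee:

```python
import random

def noisy_decider(truth: bool, error: float = 1/3) -> bool:
    """A BPP-style randomized decider that is wrong with probability `error`."""
    return truth if random.random() > error else (not truth)

def amplified(truth: bool, runs: int = 101) -> bool:
    """Run the noisy decider `runs` times and take a majority vote."""
    yes_votes = sum(noisy_decider(truth) for _ in range(runs))
    return yes_votes > runs // 2

random.seed(0)
trials = 500
correct = sum(amplified(True) for _ in range(trials))
print(correct / trials)  # very close to 1.0, far above the single-run 2/3 guarantee
```

By the Chernoff bound, the majority of 101 independent runs is wrong with probability roughly exp(-101/18), on the order of 0.4%.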
8.
Truth value
–
In logic and mathematics, a truth value, sometimes called a logical value, is a value indicating the relation of a proposition to truth. In classical logic, with its intended semantics, the truth values are true and untrue (or false); this set of two values is called the Boolean domain. The corresponding semantics of logical connectives are truth functions, whose values are expressed in the form of truth tables: logical biconditional becomes the equality binary relation, and negation becomes a bijection which permutes true and false. Conjunction and disjunction are dual with respect to negation, which is expressed by De Morgan's laws. Assigning values to propositional variables is referred to as valuation.

In intuitionistic logic, and more generally in constructive mathematics, statements are assigned a truth value only if they can be given a constructive proof. One starts with a set of axioms; a statement is true if one can build a proof of the statement from those axioms, and a statement is false if one can deduce a contradiction from it. This leaves open the possibility of statements that have not yet been assigned a truth value. Unproven statements in intuitionistic logic are not given an intermediate truth value; indeed, one can prove that they have no third truth value. There are various ways of interpreting intuitionistic logic, including the Brouwer–Heyting–Kolmogorov interpretation; see also Intuitionistic logic § Semantics.

Multi-valued logics allow for more than two truth values, possibly containing some internal structure; for example, on the unit interval such structure is a total order. Not all logical systems are truth-valuational in the sense that logical connectives may be interpreted as truth functions, but even non-truth-valuational logics can associate values with logical formulae, as is done in algebraic semantics. The algebraic semantics of intuitionistic logic is given in terms of Heyting algebras, and intuitionistic type theory uses types in the place of truth values.
Topos theory uses truth values in a special sense: the truth values of a topos are the global elements of the subobject classifier. Having truth values in this sense does not make a logic truth-valuational.
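The classical identities mentioned above, that the biconditional coincides with equality on the Boolean domain, that negation permutes the two values, and De Morgan's duality, can be verified exhaustively, since there are only four valuations of two variables (a quick check, not from the text):

```python
# On the Boolean domain {False, True}, the biconditional coincides with equality.
for p in (False, True):
    for q in (False, True):
        assert (p == q) == ((p and q) or (not p and not q))

# Negation is a bijection that permutes the two truth values.
assert (not True) is False and (not False) is True

# De Morgan's laws: conjunction and disjunction are dual under negation.
for p in (False, True):
    for q in (False, True):
        assert (not (p and q)) == ((not p) or (not q))
        assert (not (p or q)) == ((not p) and (not q))

print("all classical truth-table identities hold")
```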
9.
NP (complexity)
–
In computational complexity theory, NP is a complexity class used to describe certain types of decision problems. Informally, NP is the set of all decision problems for which the instances where the answer is "yes" have efficiently verifiable proofs; more precisely, these proofs have to be verifiable by deterministic computations that can be performed in polynomial time. Equivalently, NP is the set of decision problems solvable in polynomial time by a theoretical non-deterministic Turing machine. This second definition is the basis for the abbreviation NP, which stands for "nondeterministic polynomial time"; however, the verifier-based definition tends to be more intuitive and practical in common applications compared to the formal machine definition.

A method for solving a problem is given in the form of an algorithm. In the above definitions for NP, "polynomial time" refers to the number of machine operations needed by an algorithm relative to the size of the problem, so polynomial time is a measure of the efficiency of an algorithm. Decision problems are commonly categorized into complexity classes based on the fastest known algorithms; as such, the class of a decision problem may change if a faster algorithm is discovered. The most important open question in complexity theory, the P versus NP problem, asks whether polynomial time algorithms actually exist for solving NP-complete problems; it is widely believed that this is not the case. The complexity class NP is also related to the complexity class co-NP, and whether or not NP = co-NP is another outstanding question in complexity theory.

The complexity class NP can be defined in terms of NTIME as NP = ⋃k NTIME(n^k); alternatively, NP can be defined using deterministic Turing machines as verifiers. NP contains many important problems; in particular, the decision versions of many interesting search problems are in NP.
Consider, for example, the subset sum problem: given a finite set of integers, decide whether some nonempty subset of them sums to zero. For an instance where the answer is yes, a zero-sum subset of the given integers serves as the evidence. To answer whether some of the integers add to zero, we can create an algorithm which examines all possible subsets; as the number of integers that we feed into the algorithm becomes larger, the number of subsets grows exponentially, and so does the computation time. However, notice that if we are given a particular subset, we can easily check or verify whether the subset sum is zero. So if the sum is indeed zero, that particular subset is the proof or witness for the fact that the answer is yes. An algorithm that verifies whether a given subset has sum zero is called a verifier.

More generally, a problem is said to be in NP if there exists a verifier V for the problem. Given any instance I of problem P where the answer is yes, there must exist a certificate (witness) W such that, given the ordered pair (I, W) as input, V returns "yes" in polynomial time. Furthermore, if the answer to I is no, the verifier must return "no" on input (I, W) for all possible W.
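The verifier/decider distinction for subset sum can be sketched in a few lines (an illustration; the sample numbers below are illustrative, not from the text). The verifier runs in polynomial time given a witness, while the decider built from it tries exponentially many candidate witnesses:

```python
from itertools import chain, combinations

def verify(instance, witness) -> bool:
    """The verifier V: checks in polynomial time that `witness` is a
    nonempty selection of the instance's integers summing to zero."""
    return (bool(witness)
            and all(x in instance for x in witness)
            and sum(witness) == 0)

def brute_force_decide(instance) -> bool:
    """The exponential-time decider: tries every nonempty subset as a witness."""
    candidates = chain.from_iterable(
        combinations(instance, r) for r in range(1, len(instance) + 1))
    return any(verify(instance, list(c)) for c in candidates)

nums = [-2, -3, 15, 14, 7, -10]           # illustrative instance
print(brute_force_decide(nums))           # True: (-2) + (-3) + (-10) + 15 = 0
print(verify(nums, [-2, -3, -10, 15]))    # the witness checks out: True
```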
10.
Non-deterministic Turing machine
–
In theoretical computer science, a Turing machine is a theoretical machine that is used in thought experiments to examine the abilities and limitations of computers. In essence, a Turing machine is imagined to be a simple computer that reads and writes symbols on a tape, determining what action it should perform next according to its internal state and the symbol it currently sees. An example of one of a Turing machine's rules might thus be: "If you are in state 2 and you see an A, change it to B and move left."

In a deterministic Turing machine (DTM), the set of rules prescribes at most one action to be performed for any given situation; by contrast, a non-deterministic Turing machine (NTM) may have a set of rules that prescribes more than one action for a given situation. For example, an X on the tape in state 3 might make the DTM write a Y on the tape, move the head one position to the right, and switch to state 5. By contrast, an X on the tape in state 3 might allow the NTM to write a Y, move right, and switch to state 5, or to write an X, move left, and stay in state 3. (In the formal definition, L denotes a movement of the head to the left and R a movement to the right.) The difference from a standard Turing machine is that for the latter, the transition relation is a function: each situation determines exactly one action.

How does the NTM "know" which of these actions it should take? There are two ways of looking at it. One is to say that the machine is the "luckiest possible guesser": it always picks a transition that eventually leads to an accepting state, if there is one. The other is to imagine that the machine branches into many copies, each of which follows one of the possible transitions. Whereas a DTM has a single computation path that it follows, an NTM has a computation tree; if at least one branch of the tree halts with an accept condition, we say that the NTM accepts the input.

NTMs can compute the same results as DTMs; that is, they are capable of computing the same values. The time complexity of the two models differs, however, as is discussed below. NTMs effectively include DTMs as special cases, so it is immediately clear that DTMs are not more powerful.
An NTM can be simulated by a 3-tape DTM, and 3-tape DTMs are easily simulated with a normal single-tape DTM; in this simulation, however, the length of an accepting computation of the DTM is, in general, exponential in the length of the shortest accepting computation of the NTM. This exponential blowup is believed to be an unavoidable property of simulations of NTMs by DTMs; whether it can be avoided is essentially the most famous unresolved question in computer science, the P versus NP problem. The time complexity of NTMs is thus not the same as for DTMs, and it is a common misconception that quantum computers are NTMs. It is believed, but has not been proven, that the power of quantum computers is incomparable to that of NTMs; that is, problems likely exist that an NTM could efficiently solve that a quantum computer cannot, and vice versa.
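The "branching into many copies" view can be made concrete by deterministically exploring all computation branches, for instance with breadth-first search. A small sketch (an illustration, not from the text; the machine encoding is simplified, with `_` as the blank symbol):

```python
from collections import deque

def ntm_accepts(start, transitions, accept_states, max_steps=10000):
    """Deterministically simulate a nondeterministic machine by breadth-first
    search over all computation branches, accepting if any branch accepts.
    A configuration is a (state, tape, head) triple; `transitions` maps
    (state, symbol) to a *list* of (new_symbol, move, new_state) actions,
    which is where the nondeterminism lives."""
    queue = deque([start])
    steps = 0
    while queue and steps < max_steps:
        steps += 1
        state, tape, head = queue.popleft()
        if state in accept_states:
            return True
        symbol = tape[head] if 0 <= head < len(tape) else '_'
        for new_symbol, move, new_state in transitions.get((state, symbol), []):
            new_tape = list(tape)
            if 0 <= head < len(new_tape):
                new_tape[head] = new_symbol
            new_head = head + (1 if move == 'R' else -1)
            queue.append((new_state, tuple(new_tape), new_head))
    return False

# The rule from the text: an X in state 3 lets the machine either write Y,
# move right and enter state 5, or write X, move left and stay in state 3.
transitions = {(3, 'X'): [('Y', 'R', 5), ('X', 'L', 3)]}
print(ntm_accepts((3, ('X',), 0), transitions, accept_states={5}))  # True
```

The queue may grow exponentially with the depth of the computation tree, mirroring the exponential cost of simulating an NTM on deterministic hardware.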
11.
BQP
–
In computational complexity theory, BQP (bounded-error quantum polynomial time) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue of the complexity class BPP. In other words, there is an algorithm for a quantum computer that solves the decision problem with high probability and is guaranteed to run in polynomial time; on any given run of the algorithm, it has a probability of at most 1/3 of giving the wrong answer. Similarly to other bounded-error probabilistic classes, the choice of 1/3 in the definition is arbitrary: we can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. BQP can also be viewed as the class of languages associated with certain bounded-error uniform families of quantum circuits; for example, quantum algorithms are known for factoring an n-bit integer using just over 2n qubits.

Usually, computation on a quantum computer ends with a measurement, which leads to a collapse of the quantum state to one of the basis states; it can be said that the state is measured to be in the correct state with high probability. Quantum computers have gained widespread interest because some problems of practical interest are known to be in BQP but are not known to be solvable efficiently by classical means. Just like P and BPP, BQP is low for itself, which means BQP^BQP = BQP. Informally, this is true because polynomial time algorithms are closed under composition: if a polynomial time algorithm calls polynomially many polynomial time algorithms as subroutines, the resulting algorithm is still polynomial time. BQP contains P and BPP and is contained in AWPP and PP; the relation between BQP and NP is not known. Adding postselection to BQP results in the complexity class PostBQP, which is equal to PP.
12.
Quantum computing
–
Quantum computing studies theoretical computation systems (quantum computers) that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from binary digital electronic computers based on transistors. A quantum Turing machine is a theoretical model of such a computer, and is also known as the universal quantum computer. The field of quantum computing was initiated by the work of Paul Benioff and Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968.

There exist quantum algorithms, such as Simon's algorithm, that run faster than any possible probabilistic classical algorithm. A classical computer could in principle simulate a quantum algorithm, as quantum computation does not violate the Church–Turing thesis; on the other hand, quantum computers may be able to efficiently solve problems which are not practically feasible on classical computers.

A classical computer has a memory made up of bits, where each bit is represented by either a one or a zero. A quantum computer instead maintains a sequence of qubits; in general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously. A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand and by manipulating those qubits with a fixed sequence of quantum logic gates; the sequence of gates to be applied is called a quantum algorithm. The calculation ends with a measurement, collapsing the system of qubits into one of the 2^n pure states, where each qubit is zero or one, decomposing into a classical state. The outcome can therefore be at most n classical bits of information. Quantum algorithms are often probabilistic, in that they provide the correct solution only with a certain known probability. Note that the term "non-deterministic computing" must not be used in this case to mean "probabilistic".
An example of an implementation of qubits for a quantum computer could start with the use of particles with two spin states: "down" and "up". This works because any such system can be mapped onto an effective spin-1/2 system. A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits; the difference shows in how the machine's state is described and in what happens when that state is measured. To better understand this point, consider a classical computer that operates on a three-bit register. If there is no uncertainty over its state, then it is in exactly one of the 2^3 = 8 possible states with probability 1. However, if it is a probabilistic classical computer, then there is a possibility of it being in any one of a number of different states, described by eight non-negative probabilities that sum to 1. The state of a quantum computer is similarly described by an eight-dimensional vector of coefficients a_k. Here, however, the coefficients a_k are complex numbers, and it is the sum of the squares of their absolute values, ∑k |a_k|^2, that must equal 1.
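A small numerical sketch of this three-qubit description (an illustration, not from the text): the state is an 8-dimensional vector of complex amplitudes, the squared absolute values must sum to 1, and measurement yields a single 3-bit classical outcome sampled from those probabilities:

```python
import random

# The three-qubit register described above: its state is an 8-dimensional
# vector of complex amplitudes a_k whose squared absolute values sum to 1.
amps = [complex(1, 1)] * 8                      # arbitrary unnormalized amplitudes
norm = sum(abs(a) ** 2 for a in amps) ** 0.5
amps = [a / norm for a in amps]                 # enforce the normalization condition

probs = [abs(a) ** 2 for a in amps]             # measurement probabilities
assert abs(sum(probs) - 1.0) < 1e-12

# Measurement collapses the register to one of the 2^3 = 8 classical states,
# yielding only 3 classical bits of information.
outcome = random.choices(range(8), weights=probs)[0]
print(format(outcome, '03b'))                   # one classical 3-bit string
```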
13.
NP-completeness
–
In computational complexity theory, a decision problem is NP-complete when it is both in NP and NP-hard. The set of NP-complete problems is often denoted by NP-C or NPC; the abbreviation NP refers to "nondeterministic polynomial time". Although a candidate solution to an NP-complete problem can be verified quickly, there is no known way to find a solution quickly; that is, the time required to solve the problem using any currently known algorithm increases very quickly as the size of the problem grows. As a consequence, determining whether or not it is possible to solve these problems quickly is one of the principal unsolved problems in computer science today. In the meantime, NP-complete problems are often addressed by using heuristic methods and approximation algorithms.

A problem p in NP is NP-complete if every other problem in NP can be transformed into p in polynomial time. NP-complete problems are studied because the ability to quickly verify solutions to a problem seems to correlate with the ability to quickly solve that problem. It is not known whether every problem in NP can be quickly solved; this is called the P versus NP problem. Because of this, it is often said that NP-complete problems are harder or more difficult than NP problems in general.

More formally, a decision problem C is NP-complete if (1) C is in NP, and (2) every problem in NP is reducible to C in polynomial time. C can be shown to be in NP by demonstrating that a candidate solution to C can be verified in polynomial time. A problem satisfying condition (2), whether or not it satisfies condition (1), is said to be NP-hard. A consequence of this definition is that if we had a polynomial time algorithm for C, we could solve all problems in NP in polynomial time.

The concept of NP-completeness was introduced in 1971, though the term "NP-complete" was introduced later. At the 1971 STOC conference, there was a fierce debate among computer scientists about whether NP-complete problems could be solved in polynomial time on a deterministic Turing machine; this is known as the question of whether P = NP. Nobody has yet been able to determine conclusively whether NP-complete problems are in fact solvable in polynomial time, making this one of the great unsolved problems of mathematics.
The Clay Mathematics Institute is offering a US$1 million reward to anyone who can produce a proof that P = NP or that P ≠ NP. The Cook–Levin theorem states that the Boolean satisfiability problem is NP-complete. In 1972, Richard Karp proved that several other problems were also NP-complete; thus there is a whole class of NP-complete problems. For more details, refer to Introduction to the Design and Analysis of Algorithms by Anany Levitin.

An interesting example is the graph isomorphism problem, the graph theory problem of determining whether a graph isomorphism exists between two graphs. Two graphs are isomorphic if one can be transformed into the other simply by renaming vertices. Consider these two problems:

Graph Isomorphism: Is graph G1 isomorphic to graph G2?
Subgraph Isomorphism: Is graph G1 isomorphic to a subgraph of graph G2?

The Subgraph Isomorphism problem is NP-complete. The graph isomorphism problem is suspected to be neither in P nor NP-complete; this is an example of a problem that is thought to be hard, but is not thought to be NP-complete.
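The "renaming vertices" definition of graph isomorphism translates directly into a brute-force test (a sketch, not from the text; practical algorithms are far more sophisticated):

```python
from itertools import permutations

def isomorphic(g1, g2) -> bool:
    """Brute-force graph isomorphism: try every renaming of vertices.
    A graph is a (vertex list, set-of-frozenset-edges) pair. The search
    tries up to n! permutations, so it is only feasible for tiny graphs."""
    (v1, e1), (v2, e2) = g1, g2
    if len(v1) != len(v2) or len(e1) != len(e2):
        return False
    for perm in permutations(v2):
        rename = dict(zip(v1, perm))
        if {frozenset(rename[u] for u in edge) for edge in e1} == e2:
            return True
    return False

# A triangle is isomorphic to any other triangle, however its vertices are
# named, but not to a 3-vertex path.
tri1 = ([0, 1, 2], {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})
tri2 = (['a', 'b', 'c'],
        {frozenset({'a', 'b'}), frozenset({'b', 'c'}), frozenset({'a', 'c'})})
path = ([0, 1, 2], {frozenset({0, 1}), frozenset({1, 2})})
print(isomorphic(tri1, tri2))  # True
print(isomorphic(tri1, path))  # False
```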
14.
ArXiv
–
In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008, and by 2014 the submission rate had grown to more than 8,000 per month. The arXiv was made possible by the low-bandwidth TeX file format. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity; Paul Ginsparg responded in 1991 by setting up a central repository at the Los Alamos National Laboratory (LANL). Additional modes of access were soon added: FTP in 1991 and Gopher in 1992. The term e-print was quickly adopted to describe the articles, and the repository's original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the rapidly expanding technology, in 1999 Ginsparg changed institutions to Cornell University, and the archive is now hosted principally by Cornell, with 8 mirrors around the world. Its existence was one of the factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists regularly upload their papers to arXiv.org for worldwide access; Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv.

The annual budget for arXiv is approximately $826,000 for 2013 to 2017, funded jointly by Cornell University Library and member institutions; annual donations were envisaged to vary in size between $2,300 and $4,000, based on each institution's usage. As of 14 January 2014, 174 institutions had pledged support for the period 2013–2017 on this basis. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying "it was supposed to be a three-hour tour"; however, he remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee.
The lists of moderators for many sections of the arXiv are publicly available. Additionally, an endorsement system was introduced in 2004 as part of an effort to ensure content that is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors. New authors from recognized academic institutions generally receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for allegedly restricting scientific inquiry. Notably, Grigori Perelman, who posted his work to arXiv, appears content to forgo the traditional peer-reviewed journal process, stating, "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it." The arXiv generally re-classifies works it deems inappropriate for their submitted category, e.g. into General Mathematics. Papers can be submitted in any of several formats, including LaTeX and PDF printed from a word processor other than TeX or LaTeX. A submission is rejected by the software if generating the final PDF file fails or if any image file is too large. arXiv now allows one to store and modify an incomplete submission; the time stamp on the article is set when the submission is finalized.
15.
Complexity Zoo
–
Scott Joel Aaronson is a theoretical computer scientist whose primary area of research is quantum computing and, more generally, computational complexity theory. Aaronson grew up in the United States, though he spent a year in Asia when his father, a science writer turned public-relations executive, was posted to Hong Kong. He enrolled in a program for gifted youngsters run by Clarkson University. Aaronson had shown ability in mathematics from an early age, teaching himself calculus at the age of 11, provoked by symbols in a babysitter's textbook. He discovered computer programming around the same age and felt he lagged behind peers who had started earlier; partly for this reason, he felt drawn to theoretical computing, particularly computational complexity. At Cornell, he became interested in quantum computing and devoted himself to computational complexity. He held postdoctoral positions at the Institute for Advanced Study and the University of Waterloo. In the summer of 2016 he moved from MIT to the University of Texas at Austin as David J. Bruton Jr. Centennial Professor of Computer Science and as the founding director of UT Austin's new quantum computing center. Aaronson is one of two winners of the 2012 Alan T. Waterman Award, and he received the Best Paper Award at CSR 2011 for the paper "The Equivalence of Sampling and Searching". He is a founder of the Complexity Zoo wiki, which catalogs all classes of computational complexity. He is the author of the much-read blog Shtetl-Optimized as well as the essay "Who Can Name the Bigger Number?", and his writing weaves together seemingly disparate topics into a cohesive whole, including quantum mechanics, complexity, free will, time travel, the anthropic principle, and many others. Many of these applications of computational complexity were later fleshed out in his article "Why Philosophers Should Care About Computational Complexity".
An article of Aaronson's, "The Limits of Quantum Computers", was published in Scientific American, and Aaronson is frequently cited in the non-academic press, such as Science News, The Age, ZDNet, Slashdot, New Scientist, The New York Times, and Forbes magazine. Aaronson was the subject of attention in October 2007, when he accused Love Communications of plagiarizing a lecture he wrote on quantum mechanics in an advertisement of theirs. He alleged that a commercial for Ricoh Australia by the Sydney-based agency appropriated content almost verbatim from the lecture. Aaronson received an email from the agency claiming to have sought legal advice and saying they did not believe they were in violation of his copyright. Unsatisfied, Aaronson pursued the matter, and the agency settled the dispute, without admitting wrongdoing, by making a contribution to two science organizations of his choice. Concerning the matter, Aaronson quipped that someone had suggested he ask for a cameo with the models.
16.
Complexity class
–
In computational complexity theory, a complexity class is a set of problems of related resource-based complexity. A typical complexity class has a definition of the form: the set of problems that can be solved by an abstract machine M using O(f(n)) of resource R. Complexity classes are concerned with the rate of growth of the resource requirement as the input size n increases; the measure is asymptotic and does not give time or space requirements in terms of seconds or bytes. The O is read as "order of": for the purposes of computational complexity theory, some of the details of the bounding function can be ignored; for instance, all polynomial bounds can be grouped together into a single class. The resource in question can be time (essentially, the number of operations on an abstract machine) or memory space. The simplest complexity classes are defined by the following factors: the type of computational problem (typically decision problems), the model of computation, and the resource being bounded together with the bound itself. However, complexity classes can also be defined based on other kinds of problems: counting problems, optimization problems, promise problems. The resource being bounded and the bound are usually stated together, such as polynomial time, logarithmic space, or constant depth. Many complexity classes can also be characterized in terms of the logic needed to express them. Bounding the computation time above by some function f often yields complexity classes that depend on the chosen machine model; for instance, there are languages that can be solved in linear time on a multi-tape Turing machine but necessarily require quadratic time on a single-tape machine. If we allow polynomial variations in running time, the Cobham–Edmonds thesis states that the complexities in any two reasonable and general models of computation are polynomially related. This forms the basis for the complexity class P, which is the set of decision problems solvable by a deterministic Turing machine within polynomial time; the corresponding set of function problems is FP. The Blum axioms can be used to define complexity classes without referring to a concrete computational model.
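A standard example of this machine-model dependence is the language {xx : x a binary string}, decidable in linear time on a multi-tape Turing machine but requiring quadratic time on a single-tape machine. A minimal sketch of the linear-time decision procedure (the function name is ours):

```python
def in_xx_language(s):
    """Decide the language { xx : x a binary string }.
    A multi-tape Turing machine (or a RAM, as here) does this in O(n);
    a single-tape Turing machine provably needs quadratic time."""
    n = len(s)
    return n % 2 == 0 and s[: n // 2] == s[n // 2:]

print(in_xx_language("0101"))  # True  ("01" followed by "01")
print(in_xx_language("0110"))  # False
```

The language itself is fixed; only the cost of deciding it varies with the machine model, which is what the Cobham–Edmonds thesis smooths over by allowing polynomial variations.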
Many important complexity classes can be defined by bounding the time or space used by the algorithm, and several important classes of decision problems are defined in this manner. It turns out that PSPACE = NPSPACE and EXPSPACE = NEXPSPACE, by Savitch's theorem. #P is an important complexity class of counting problems; classes like IP and AM are defined using interactive proof systems. ALL is the class of all decision problems. Many complexity classes are defined using the concept of a reduction.
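The counting class #P mentioned above contains the counting versions of NP decision problems; its canonical complete problem is #SAT, counting the satisfying assignments of a Boolean formula. A minimal brute-force sketch (function name and clause encoding are ours):

```python
from itertools import product

def count_satisfying(clauses, n):
    """#SAT by exhaustive enumeration: count assignments of n Boolean
    variables satisfying a CNF formula. Each clause is a list of
    literals: +i means variable i is true, -i means variable i is false.
    Runs in O(2^n) time -- #SAT is #P-complete, so no polynomial-time
    algorithm is expected."""
    count = 0
    for assignment in product([False, True], repeat=n):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x2): satisfied exactly when x2 is true.
clauses = [[1, 2], [-1, 2]]
print(count_satisfying(clauses, 2))       # 2
print(count_satisfying(clauses, 2) > 0)   # the decision version (SAT): True
```

The last line shows the relationship between the classes: the decision problem asks only whether the count is nonzero, while the #P problem asks for the count itself.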
17.
AC0
–
AC0 is a complexity class used in circuit complexity. It is the smallest class in the AC hierarchy, and consists of all families of circuits of depth O(1) and polynomial size, with unlimited-fan-in AND and OR gates; it thus contains NC0, which has only bounded-fan-in AND and OR gates. Integer addition and subtraction are computable in AC0, but multiplication is not. In 1984, Furst, Saxe, and Sipser showed that the parity of the input cannot be computed by any family of AC0 circuits, even non-uniform ones. It follows that AC0 is not equal to NC1, because a family of circuits in the latter class can compute parity. More precise bounds follow from the switching lemma; using them, it has been shown that there is an oracle separation between the polynomial hierarchy and PSPACE.
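The claim that integer addition lies in AC0 can be made concrete via carry-lookahead: each carry bit is an OR over positions j of ("generate at j" AND "propagate everywhere between j and here"), a constant-depth pattern of unbounded fan-in gates. A minimal sketch, with Python loops standing in for parallel gates (the function name and bit encoding are ours):

```python
def ac0_add(a_bits, b_bits):
    """Carry-lookahead addition, structured the way a constant-depth,
    unbounded fan-in (AC0-style) circuit computes it.

    a_bits, b_bits: little-endian lists of 0/1 bits of equal length n.
    The carry into position i is an OR of ANDs over j < i, i.e. a
    depth-2 block of unbounded fan-in gates per output bit."""
    n = len(a_bits)
    g = [a & b for a, b in zip(a_bits, b_bits)]   # generate a carry here
    p = [a | b for a, b in zip(a_bits, b_bits)]   # propagate a carry through
    carry = [0] * (n + 1)
    for i in range(1, n + 1):
        # OR of ANDs: constant depth, unbounded fan-in
        carry[i] = int(any(g[j] and all(p[k] for k in range(j + 1, i))
                           for j in range(i)))
    return [a_bits[i] ^ b_bits[i] ^ carry[i] for i in range(n)] + [carry[n]]

# 5 + 6 = 11, bits little-endian: [1,0,1] + [0,1,1] -> [1,1,0,1]
print(ac0_add([1, 0, 1], [0, 1, 1]))  # [1, 1, 0, 1]
```

No such constant-depth trick exists for parity or multiplication, which is exactly the Furst–Saxe–Sipser separation the entry describes.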
18.
ACC0
–
ACC0, sometimes called ACC, is a class of computational models and problems defined in circuit complexity, a field of theoretical computer science. The class is defined by augmenting the class AC0 of constant-depth alternating circuits with the ability to count: specifically, a problem belongs to ACC0 if it can be solved by polynomial-size, constant-depth circuits of unbounded fan-in gates, including gates that count modulo a fixed integer. ACC0 corresponds to computation in any solvable monoid. More formally, a language belongs to AC0[m] if it can be computed by a family of circuits C1, C2, ... of the AC0 kind augmented with MOD-m gates, and a language belongs to ACC0 if it belongs to AC0[m] for some m. In some texts, ACCi refers to a hierarchy of circuit classes with ACC0 at its lowest level, where the circuits in ACCi have depth O(log^i n) and polynomial size. The class ACC0 can also be defined in terms of computations of nonuniform deterministic finite automata (NUDFAs) over monoids: in this framework, the input is interpreted as elements from a fixed monoid, and ACC0 is the family of languages accepted by a NUDFA over some monoid that does not contain an unsolvable group as a subsemigroup. ACC0 contains AC0, and this inclusion is strict, because a single MOD-2 gate computes the parity function; more generally, the function MOD-m cannot be computed in AC0[p] for prime p unless m is a power of p. The class ACC0 is included in TC0, and it is conjectured that ACC0 is unable to compute the majority function of its inputs, but this remained unresolved as of July 2014. Every problem in ACC0 can be solved by circuits of depth 2, with AND gates of polylogarithmic fan-in at the inputs feeding a single gate computing a symmetric function; the proof follows ideas of the proof of Toda's theorem. Williams proved that NEXPTIME is not contained in ACC0; the proof uses many results in complexity theory, including the time hierarchy theorem, IP = PSPACE, derandomization, and the representation of ACC0 via SYM+ circuits.
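The MOD-m gates that distinguish ACC0 from AC0 are easy to sketch. Under one common convention (conventions vary in the literature), a MOD-m gate outputs 1 exactly when the number of 1-inputs is not divisible by m, so parity collapses to a single MOD-2 gate (function names are ours):

```python
def mod_gate(m, inputs):
    """MOD-m gate under one common convention: outputs 1 iff the
    number of 1-inputs is NOT divisible by m."""
    return int(sum(inputs) % m != 0)

def parity(bits):
    """Parity -- provably outside AC0 (Furst-Saxe-Sipser) -- is a
    single unbounded fan-in MOD-2 gate, hence lies in ACC0."""
    return mod_gate(2, bits)

print(parity([1, 0, 1, 1]))  # 1 (three ones: odd)
print(parity([1, 1, 0, 0]))  # 0 (two ones: even)
```

This one-gate circuit is the witness that the inclusion AC0 ⊆ ACC0 is strict, as the entry notes.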
It is known that computing the permanent is impossible for logtime-uniform ACC0 circuits, which implies that the complexity class PP is not contained in logtime-uniform ACC0.
19.
CC (complexity)
–
In computational complexity theory, CC is the complexity class containing decision problems which can be solved by comparator circuits of polynomial size. The most important problem which is complete for CC is a variant of the stable marriage problem. A comparator circuit is a network of wires and gates; each comparator gate, which is a directed edge connecting two wires, takes its two inputs and outputs them in sorted order. The input to any wire can be either a variable, its negation, or a constant, and one of the wires is designated as the output wire. The comparator circuit value problem (CCVP) is the problem of evaluating a comparator circuit given an encoding of the circuit and its inputs. The complexity class CC is defined as the class of problems logspace-reducible to CCVP; an equivalent definition is the class of problems AC0-reducible to CCVP. Since there are sorting networks which can be constructed in AC0, this shows that the majority function is in CC. A problem in CC is CC-complete if every problem in CC can be reduced to it using a logspace reduction; the comparator circuit value problem is CC-complete. In the stable marriage problem, there are equal numbers of men and women, and each person ranks all members of the opposite sex. A matching between men and women is stable if there are no man and woman who prefer each other over their current partners. Among the stable matchings, there is one in which each woman gets the best man that she ever gets in any stable matching. The decision version of the stable matching problem asks, given the rankings of all men and women, whether a given pair is matched in this matching; this variant is CC-complete, although the classical Gale–Shapley algorithm cannot be implemented as a comparator circuit. Another problem which is CC-complete is lexicographically-first maximal matching. In this problem, we are given a bipartite graph with an order on the vertices, together with a designated edge.
The lexicographically-first maximal matching is obtained by successively matching vertices from the first bipartition to the minimal available vertices from the second bipartition; the problem asks whether the given edge belongs to this matching. Scott Aaronson showed that a pebble-based computational model is CC-complete: the problem is to decide whether any pebbles are present in a particular pile after executing the program. He used this to show that the problem of deciding whether any balls reach a designated sink vertex in a Digi-Comp II-like device is also CC-complete. The comparator circuit evaluation problem can be solved in polynomial time, so CC is contained in P; on the other hand, comparator circuits can solve directed reachability, and so CC contains NL. There is a relativized world in which CC and NC are incomparable.
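The observation that comparator circuits can be evaluated in polynomial time is easy to make concrete: on Boolean inputs, each comparator gate simply places the AND of its two wire values on one wire and the OR on the other, and each gate is touched once. A minimal evaluator (the function name and circuit encoding are ours):

```python
def evaluate_comparator_circuit(wires, gates, output_wire):
    """Comparator circuit value problem (CCVP), evaluated directly.

    wires: list of initial 0/1 values, one per wire.
    gates: sequence of (i, j) pairs; each comparator puts the minimum
           (AND) on wire i and the maximum (OR) on wire j.
    Each gate is applied once, so evaluation runs in polynomial time --
    consistent with CC being contained in P."""
    w = list(wires)
    for i, j in gates:
        w[i], w[j] = min(w[i], w[j]), max(w[i], w[j])
    return w[output_wire]

# Three comparators forming a small sorting network on 3 wires;
# after sorting, wire 2 holds the maximum (the OR of all inputs).
gates = [(0, 1), (1, 2), (0, 1)]
print(evaluate_comparator_circuit([1, 0, 0], gates, 2))  # 1
print(evaluate_comparator_circuit([0, 0, 0], gates, 2))  # 0
```

The sorting-network example also hints at why the majority function is in CC: sort the input bits with comparators and read the middle wire.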