1.
Polynomial hierarchy
–
In computational complexity theory, the polynomial hierarchy is a hierarchy of complexity classes that generalize the classes P, NP and co-NP to oracle machines. It is a counterpart to the arithmetical hierarchy and analytical hierarchy from mathematical logic. There are multiple equivalent definitions of the classes of the polynomial hierarchy. If any Σ_k^P = Σ_{k+1}^P, or if any Σ_k^P = Π_k^P, then the hierarchy collapses to level k; in particular, if P = NP, then the hierarchy collapses completely. The union of all classes in the hierarchy is the complexity class PH.

It is known that PH is contained within PSPACE, but it is not known whether the two classes are equal. A useful observation is that if the polynomial hierarchy has any complete problems, then it has only finitely many distinct levels; since there are PSPACE-complete problems, it follows that if PSPACE = PH, then the hierarchy must collapse. Each class in the hierarchy contains ≤_m^P-complete problems. Furthermore, each class in the hierarchy is closed under ≤_m^P-reductions, meaning that for a class C in the hierarchy and a language L in C, if A ≤_m^P L, then A is in C as well. These two facts together imply that if K_i is a complete problem for Σ_i^P, then Σ_{i+1}^P = NP^{K_i}. For instance, Σ_2^P = NP^SAT; in other words, if a language is defined based on some oracle in C, then we can assume that it is defined based on a complete problem for C. Complete problems therefore act as representatives of the class for which they are complete.

The Sipser–Lautemann theorem states that the class BPP is contained in the second level of the polynomial hierarchy. Kannan's theorem states that for any fixed k, Σ_2^P is not contained in SIZE(n^k). Toda's theorem states that the polynomial hierarchy is contained in P^#P.
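To make the quantifier view of the hierarchy's levels concrete, here is a minimal, purely illustrative Python sketch (brute force, exponential time): it checks a Σ_2-style statement "there exists x such that for all y, φ(x, y)" over length-n bit strings, with a hypothetical predicate phi standing in for any efficiently checkable property.

```python
from itertools import product

def sigma2_holds(phi, n):
    """Brute-force check of the Sigma_2-style statement
    'there exists x such that for all y, phi(x, y) holds',
    where x and y range over all bit strings of length n.
    Exponential time; only a toy illustration of the quantifier structure."""
    bitstrings = list(product([0, 1], repeat=n))
    return any(all(phi(x, y) for y in bitstrings) for x in bitstrings)

# Example with a hypothetical predicate: 'x dominates y bitwise'.
# The statement is true because x = (1, 1, 1) dominates every y.
phi = lambda x, y: all(xi >= yi for xi, yi in zip(x, y))
print(sigma2_holds(phi, 3))  # True
```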
2.
Computational complexity theory
–
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and by quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication or the number of gates in a circuit. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. Closely related fields in computer science are the analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources.

A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; in contrast, an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing: the instance is a number and the solution is "yes" if the number is prime. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, as in a real-world computer, so mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices; the choice of encoding matters little as long as different representations can be transformed into each other efficiently.

Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are exactly the instances whose output is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is the following: the input is an arbitrary graph, and the problem consists in deciding whether the given graph is connected.
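As an illustration of a decision problem viewed as a language of strings, the following minimal Python sketch (using naive trial division, not an efficient primality test) accepts exactly the binary strings that encode a prime number.

```python
def is_prime(n):
    """Trial division; sufficient for a toy example."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def decide(instance):
    """Decider for the language of binary encodings of primes.
    Returns True ('accept') or False ('reject') for every input string."""
    if not instance or any(c not in "01" for c in instance):
        return False  # malformed encodings are rejected
    return is_prime(int(instance, 2))

print(decide("101"))   # 5 is prime  -> accept
print(decide("1000"))  # 8 is composite -> reject
```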
3.
Boolean satisfiability problem
–
In computer science, the Boolean satisfiability problem (SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable; on the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can choose the values a = TRUE and b = FALSE; in contrast, "a AND NOT a" is unsatisfiable. SAT is one of the first problems that was proven to be NP-complete; this means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. In practice, SAT instances arising in, e.g., artificial intelligence, circuit design, and automatic theorem proving can often be solved by modern heuristic SAT solvers.

A propositional logic formula, also called a Boolean expression, is built from variables, the operators AND, OR, NOT, and parentheses. A formula is said to be satisfiable if it can be made TRUE by assigning appropriate logical values to its variables. The Boolean satisfiability problem is, given a formula, to decide whether it is satisfiable. This decision problem is of central importance in various areas of computer science, including theoretical computer science, complexity theory, algorithmics, and cryptography. There are several special cases of the Boolean satisfiability problem in which the formulas are required to have a particular structure. A literal is either a variable (a positive literal) or the negation of a variable (a negative literal); a clause is a disjunction of literals. A clause is called a Horn clause if it contains at most one positive literal. A formula is in conjunctive normal form (CNF) if it is a conjunction of clauses. For example, the formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is in conjunctive normal form. It is satisfiable: choosing x1 = FALSE, x2 = FALSE, and x3 arbitrarily, (FALSE ∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x3) ∧ ¬FALSE evaluates to (FALSE ∨ TRUE) ∧ (TRUE ∨ FALSE ∨ x3) ∧ TRUE, and in turn to TRUE ∧ TRUE ∧ TRUE (that is, to TRUE). In contrast, the CNF formula a ∧ ¬a, consisting of two clauses of one literal each, is unsatisfiable, since for a = TRUE and a = FALSE it evaluates to TRUE ∧ ¬TRUE and FALSE ∧ ¬FALSE respectively, both of which are FALSE.

Different sets of allowed Boolean operators lead to different problem versions. As an example, one can allow generalized clauses R(l1, …, lk) that apply an arbitrary Boolean operator R to literals; a conjunction of such generalized clauses is a generalized conjunctive normal form. This form is used below, with R being the operator that is TRUE just if exactly one of its arguments is. Using the laws of Boolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive normal form, which may, however, be exponentially longer. For example, transforming the formula (x1 ∧ y1) ∨ (x2 ∧ y2) ∨ … ∨ (xn ∧ yn) into conjunctive normal form yields 2^n clauses: while the former is a disjunction of n conjunctions of 2 variables, the latter consists of 2^n clauses of n variables.
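The definition of satisfiability for CNF formulas translates directly into a brute-force check; the sketch below (exponential in the number of variables, purely illustrative) encodes each clause as a list of signed integers, with k standing for xk and -k for ¬xk.

```python
from itertools import product

def satisfiable(clauses, num_vars):
    """Brute-force SAT check for a CNF formula.
    clauses: list of clauses, each a list of nonzero ints (k = x_k, -k = NOT x_k)."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 OR NOT x2) AND (NOT x1 OR x2 OR x3) AND (NOT x1): satisfiable
print(satisfiable([[1, -2], [-1, 2, 3], [-1]], 3))  # True
# a AND NOT a: unsatisfiable
print(satisfiable([[1], [-1]], 1))                  # False
```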
4.
Polynomial
–
In mathematics, a polynomial is an expression consisting of variables (also called indeterminates) and coefficients that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents. An example of a polynomial of a single indeterminate x is x^2 − 4x + 7; an example in three variables is x^3 + 2xyz^2 − yz + 1. Polynomials appear in a wide variety of areas of mathematics and science. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, central concepts in algebra and algebraic geometry. The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or "name". It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-. The word polynomial was first used in the 17th century.

The x occurring in a polynomial is commonly called either a variable or an indeterminate. When the polynomial is considered as an expression, x is a fixed symbol which does not have any value; it is thus more correct to call it an indeterminate. However, when one considers the function defined by the polynomial, x represents the argument of the function, and many authors use these two words interchangeably. It is a common convention to use uppercase letters for indeterminates. Substituting a value for the indeterminate in a polynomial P defines an associated function, and one may use it over any domain where addition and multiplication are defined; in particular, when the substituted value is the indeterminate x itself, the image of x by this function is the polynomial P itself. This equality allows writing "let P be a polynomial" as a shorthand for "let P be a polynomial in the indeterminate x".

A polynomial is an expression that can be built from constants and indeterminates; the word indeterminate means that x represents no particular value, although any value may be substituted for it. The mapping that associates the result of substitution to the substituted value is a function, called a polynomial function. A polynomial in one indeterminate can be expressed concisely using summation notation as ∑_{k=0}^{n} a_k x^k; that is, a polynomial is either zero or can be written as the sum of a finite number of nonzero terms. Each term consists of the product of a number, called the coefficient of the term, and a finite number of indeterminates raised to non-negative integer powers. Because x = x^1, the degree of an indeterminate without a written exponent is one. A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial. The degree of a constant term and of a nonzero constant polynomial is 0; the degree of the zero polynomial, 0, is generally treated as not defined.
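The coefficient form ∑_{k=0}^{n} a_k x^k corresponds directly to evaluation code; below is a minimal Python sketch using Horner's rule, with the polynomial x^2 − 4x + 7 from above as the example.

```python
def eval_poly(coeffs, x):
    """Evaluate sum_{k=0}^{n} a_k * x**k by Horner's rule.
    coeffs[k] is the coefficient a_k of x**k."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# x^2 - 4x + 7 at x = 3:  9 - 12 + 7 = 4
print(eval_poly([7, -4, 1], 3))  # 4
```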
5.
NP (complexity)
–
In computational complexity theory, NP is a complexity class used to describe certain types of decision problems. Informally, NP is the set of all decision problems for which the instances where the answer is "yes" have efficiently verifiable proofs; more precisely, these proofs have to be verifiable by deterministic computations that can be performed in polynomial time. Equivalently, NP is the set of decision problems solvable in polynomial time by a theoretical non-deterministic Turing machine. This second definition is the basis for the abbreviation NP, which stands for "nondeterministic polynomial time"; however, the verifier-based definition tends to be more intuitive and practical in common applications compared to the formal machine definition.

A method for solving a problem is given in the form of an algorithm. In the above definitions for NP, polynomial time refers to the number of machine operations needed by an algorithm relative to the size of the problem, so polynomial time is a measure of the efficiency of an algorithm. Decision problems are commonly categorized into complexity classes based on the fastest known algorithms; as such, the class into which a problem is placed may change if a faster algorithm is discovered. The most important open question in complexity theory, the P versus NP problem, asks whether polynomial-time algorithms actually exist for solving NP-complete problems; it is widely believed that this is not the case. The complexity class NP is also related to the complexity class co-NP; whether or not NP = co-NP is another outstanding question in complexity theory. The complexity class NP can be defined in terms of NTIME as NP = ∪_{k ∈ ℕ} NTIME(n^k); alternatively, NP can be defined using deterministic Turing machines as verifiers. In particular, the decision versions of many interesting search problems are in NP.

For example, suppose we are given the set of integers {−7, −3, −2, 5, 8} and asked whether some nonempty subset of them sums to zero. In this example, the answer is yes, since the subset {−3, −2, 5} corresponds to the sum (−3) + (−2) + 5 = 0. The task of deciding whether such a subset with sum zero exists is called the subset sum problem. To answer whether some of the integers add to zero, we could create an algorithm that examines all possible subsets; as the number of integers that we feed into the algorithm becomes larger, the number of subsets grows exponentially, and so does the computation time. However, notice that if we are given a particular subset, we can easily check or verify whether the subset sum is zero. So if the sum is indeed zero, that particular subset is the proof or witness for the fact that the answer is yes. An algorithm that verifies whether a given subset has sum zero is called a verifier. More generally, a problem is said to be in NP if there exists a verifier V for the problem with the following property: given any instance I of the problem where the answer is yes, there must exist a certificate W such that V, given the ordered pair (I, W) as input, returns yes in polynomial time; furthermore, if the answer to I is no, the verifier returns no on input (I, W) for all possible W.
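The verifier-based definition can be made concrete with the subset sum problem: the certificate W is a claimed subset, and the verifier merely adds it up, which takes polynomial time. A minimal Python sketch:

```python
def verify_subset_sum(instance, certificate):
    """Verifier for subset sum: accepts iff the certificate is a nonempty
    sub-multiset of the instance whose elements sum to zero.
    Runs in time polynomial in the size of the input pair (I, W)."""
    remaining = list(instance)
    for x in certificate:
        if x not in remaining:
            return False        # certificate uses numbers not in the instance
        remaining.remove(x)
    return len(certificate) > 0 and sum(certificate) == 0

print(verify_subset_sum([-7, -3, -2, 5, 8], [-3, -2, 5]))  # True: (-3) + (-2) + 5 = 0
print(verify_subset_sum([-7, -3, -2, 5, 8], [-7, 8]))      # False: sums to 1
```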
6.
NP-completeness
–
In computational complexity theory, a decision problem is NP-complete when it is both in NP and NP-hard. The set of NP-complete problems is often denoted by NP-C or NPC; the abbreviation NP refers to nondeterministic polynomial time. Although a solution to an NP-complete problem can be verified quickly, the time required to solve the problem using any currently known algorithm increases very quickly as the size of the problem grows. As a consequence, determining whether or not it is possible to solve these problems quickly, called the P versus NP problem, is one of the principal unsolved problems in computer science. In the meantime, NP-complete problems are often addressed using heuristic methods and approximation algorithms.

A problem p in NP is NP-complete if every problem in NP can be transformed into p in polynomial time. NP-complete problems are studied because the ability to quickly verify solutions to a problem (NP) seems to correlate with the ability to quickly solve that problem (P). It is not known whether every problem in NP can be quickly solved; this is called the P versus NP problem. Because of this, it is often said that NP-complete problems are harder or more difficult than NP problems in general. A decision problem C is NP-complete if (1) C is in NP, and (2) every problem in NP is reducible to C in polynomial time. C can be shown to be in NP by demonstrating that a candidate solution to C can be verified in polynomial time. Note that a problem satisfying condition (2) is said to be NP-hard, whether or not it satisfies condition (1). A consequence of this definition is that if we had a polynomial-time algorithm for C, we could solve all problems in NP in polynomial time.

The concept of NP-completeness was introduced in 1971, though the term NP-complete was introduced later. At the 1971 STOC conference, there was a fierce debate among the computer scientists about whether NP-complete problems could be solved in polynomial time on a deterministic Turing machine. This is known as the question of whether P = NP. Nobody has yet been able to determine conclusively whether NP-complete problems are in fact solvable in polynomial time, making this one of the great unsolved problems of mathematics. The Clay Mathematics Institute is offering a US $1 million reward to anyone who has a formal proof that P = NP or that P ≠ NP. The Cook–Levin theorem states that the Boolean satisfiability problem is NP-complete. In 1972, Richard Karp proved that several other problems were also NP-complete; thus there is a whole class of NP-complete problems. For more details, refer to Introduction to the Design and Analysis of Algorithms by Anany Levitin.

An interesting example is the graph isomorphism problem, the graph theory problem of determining whether a graph isomorphism exists between two graphs. Two graphs are isomorphic if one can be transformed into the other simply by renaming vertices. Consider these two problems: Graph Isomorphism (is graph G1 isomorphic to graph G2?) and Subgraph Isomorphism (is graph G1 isomorphic to a subgraph of graph G2?). The Subgraph Isomorphism problem is NP-complete, whereas the graph isomorphism problem is suspected to be neither in P nor NP-complete; it is an example of a problem that is thought to be hard but is not thought to be NP-complete.
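To make the idea of a polynomial-time transformation concrete, the sketch below shows one classic reduction between two NP-complete problems (Independent Set to Vertex Cover), not the general Cook–Levin construction: a graph with n vertices has an independent set of size k exactly when it has a vertex cover of size n − k.

```python
def independent_set_to_vertex_cover(vertices, edges, k):
    """Map an Independent Set instance (G, k) to a Vertex Cover instance (G, n - k).
    The graph is left unchanged; only the target size is rewritten, so the
    transformation clearly runs in polynomial time."""
    return vertices, edges, len(vertices) - k

# Path graph a-b-c: {a, c} is an independent set of size 2,
# and the complementary set {b} is a vertex cover of size 1.
vertices = ["a", "b", "c"]
edges = [("a", "b"), ("b", "c")]
print(independent_set_to_vertex_cover(vertices, edges, 2))
# (['a', 'b', 'c'], [('a', 'b'), ('b', 'c')], 1)
```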
7.
P versus NP problem
–
The P versus NP problem is a major unsolved problem in computer science. Informally speaking, it asks whether every problem whose solution can be quickly verified by a computer can also be quickly solved by a computer. The underlying issues were first discussed in the 1950s, in letters from John Nash to the National Security Agency. It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, carrying a US$1,000,000 prize for the first correct solution.

The general class of questions for which some algorithm can provide an answer in polynomial time is called the class P, or just P. For some questions, there is no known way to find an answer quickly. The class of questions for which an answer can be verified in polynomial time is called NP. Consider the subset sum problem, an example of a problem that is easy to verify but whose answer may be difficult to compute: given a set of integers, does some nonempty subset of them sum to 0? For instance, does a subset of the set {−2, −3, 15, 14, 7, −10} add up to 0? The answer "yes, because the subset {−2, −3, −10, 15} adds up to zero" can be verified with three additions. However, there is no known algorithm to find such a subset in polynomial time. An answer to the P = NP question would determine whether problems that can be verified in polynomial time, like the subset sum problem, can also be solved in polynomial time.

Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA speculating that cracking a sufficiently complex code would require time exponential in the length of the key; if proved, this would imply what we today would call P ≠ NP, since a proposed key can easily be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann.

In complexity theory, the most common resources are time and space. In such analysis, a model of the computer for which time must be analyzed is required; typically such models assume that the computer is deterministic and sequential. Arguably the biggest open question in theoretical computer science concerns the relationship between those two classes: is P equal to NP? In a 2002 poll of researchers, most believed the answer to be no; in 2012, 10 years later, the poll was repeated with a similar outcome. To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are a set of problems to each of which any other NP problem can be reduced in polynomial time, and whose solution may still be verified in polynomial time. That is, any NP problem can be transformed into any of the NP-complete problems. Informally, an NP-complete problem is an NP problem that is at least as tough as any other problem in NP.
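The gap between finding and verifying can be seen directly: verifying a proposed subset takes a few additions, while the only obvious way to find one is to try all 2^n − 1 nonempty subsets. A minimal, purely illustrative Python sketch of the exhaustive search:

```python
from itertools import combinations

def find_zero_subset(numbers):
    """Search every nonempty subset for one that sums to zero.
    Exponential in len(numbers): there are 2**n - 1 candidates to try."""
    for size in range(1, len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == 0:
                return subset
    return None

print(find_zero_subset([-2, -3, 15, 14, 7, -10]))  # (-2, -3, 15, -10)
```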
8.
Richard M. Karp
–
Richard Manning Karp is an American computer scientist and computational theorist at the University of California, Berkeley. Born to Abraham and Rose Karp in Boston, Massachusetts, Karp has three siblings: Robert, David, and Carolyn. He attended Harvard University, where he received his bachelor's degree in 1955 and his master's degree in 1956; he then started working at IBM's Thomas J. Watson Research Center. In 1968, he became Professor of Computer Science, Mathematics, and Operations Research at the University of California, Berkeley; apart from a 4-year period as a professor at the University of Washington, he has remained at Berkeley. From 1988 to 1995 and from 1999 to the present he has also been a Research Scientist at the International Computer Science Institute in Berkeley. In 1994 he was inducted as a Fellow of the Association for Computing Machinery, and he is the recipient of several honorary degrees. In 2012, Karp became the director of the Simons Institute for the Theory of Computing at the University of California, Berkeley.

He has made many other important discoveries in computer science and operations research in the area of combinatorial algorithms; his major current research interests include bioinformatics. In 1973 he and John Hopcroft published the Hopcroft–Karp algorithm, still the fastest known method for finding maximum cardinality matchings in bipartite graphs. In 1980, along with Richard J. Lipton, Karp proved the Karp–Lipton theorem. In 1987 he co-developed with Michael O. Rabin the Rabin–Karp string search algorithm. Karp introduced the now-standard methodology for proving problems to be NP-complete, which has led to the identification of many theoretical and practical problems as being computationally difficult.
9.
PSPACE
–
In computational complexity theory, PSPACE is the set of all decision problems that can be solved by a Turing machine using a polynomial amount of space. PSPACE is a superset of the set of context-sensitive languages. It turns out that allowing the Turing machine to be nondeterministic does not add any extra power: because of Savitch's theorem, NPSPACE is equivalent to PSPACE, essentially because a deterministic Turing machine can simulate a non-deterministic Turing machine without needing much more space. Also, the complements of all problems in PSPACE are in PSPACE, so co-PSPACE = PSPACE. The following containments between PSPACE and the classes NL, P, NP, PH, EXPTIME and EXPSPACE are known: NL ⊆ P ⊆ NP ⊆ PH ⊆ PSPACE and PSPACE ⊆ EXPTIME ⊆ EXPSPACE. In each of these two chains at least one containment must be strict, but it is not known which; it is widely suspected that all are strict. The containments NL ⊊ PSPACE and PSPACE ⊊ EXPSPACE are both known to be strict: the first follows from direct diagonalization (the space hierarchy theorem) together with the fact that PSPACE = NPSPACE via Savitch's theorem; the second follows simply from the space hierarchy theorem.

The hardest problems in PSPACE are the PSPACE-complete problems; see PSPACE-complete for examples of problems that are suspected to be in PSPACE but not in NP. The class PSPACE is closed under union and complementation. An alternative characterization of PSPACE is the set of problems decidable by an alternating Turing machine in polynomial time. A logical characterization of PSPACE from descriptive complexity theory is that it is the set of problems expressible in second-order logic with the addition of a transitive closure operator; a full transitive closure is not needed, as weaker forms of transitive closure suffice. It is the addition of this operator that (possibly) distinguishes PSPACE from PH. A major result of complexity theory is that PSPACE can be characterized as all the languages recognizable by a particular interactive proof system, the one defining the class IP; in this system, there is an all-powerful prover trying to convince a randomized polynomial-time verifier that a string is in the language. PSPACE can also be characterized as the quantum complexity class QIP. PSPACE is also equal to P_CTC, the class of problems solvable by classical computers using closed timelike curves, as well as to BQP_CTC, its quantum counterpart.

PSPACE-complete problems are of great importance to studying PSPACE problems because they represent the most difficult problems in PSPACE. Finding a simple solution to a PSPACE-complete problem would mean we have a simple solution to all other problems in PSPACE, because all PSPACE problems could be reduced to a PSPACE-complete problem. An example of a PSPACE-complete problem is the quantified Boolean formula problem.
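The quantified Boolean formula problem also illustrates why polynomial space suffices even when the obvious algorithm takes exponential time: a recursive evaluator tries both truth values of each quantified variable, so its space usage is dominated by a recursion depth proportional to the number of variables. A minimal Python sketch, using a signed-integer CNF convention for the matrix (k for xk, -k for ¬xk):

```python
def eval_qbf(quantifiers, clauses, assignment=()):
    """Evaluate a fully quantified Boolean CNF formula.
    quantifiers: list of ('E', var) or ('A', var) pairs, outermost first.
    clauses: CNF matrix, each clause a list of nonzero ints (k = x_k, -k = NOT x_k).
    Space usage is dominated by the recursion depth, one frame per variable."""
    if not quantifiers:
        values = dict(assignment)
        return all(any(values[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses)
    (q, var), rest = quantifiers[0], quantifiers[1:]
    branches = (eval_qbf(rest, clauses, assignment + ((var, b),)) for b in (False, True))
    return any(branches) if q == 'E' else all(branches)

# Exists x1 . Forall x2 . (x1 OR x2) AND (x1 OR NOT x2)   -- true (take x1 = TRUE)
print(eval_qbf([('E', 1), ('A', 2)], [[1, 2], [1, -2]]))   # True
# Forall x1 . Exists x2 . (x1 OR x2) AND (NOT x1 OR NOT x2) -- true (take x2 = NOT x1)
print(eval_qbf([('A', 1), ('E', 2)], [[1, 2], [-1, -2]]))  # True
```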
10.
BPP (complexity)
–
BPP is one of the largest practical classes of problems, meaning that most problems of interest in BPP have efficient probabilistic algorithms that can be run quickly on real modern machines. BPP also contains P, the class of problems solvable in polynomial time with a deterministic machine. Alternatively, BPP can be defined using only deterministic Turing machines; for some applications this definition is preferable since it does not mention probabilistic Turing machines. In practice, an error probability of 1/3 might not be acceptable; however, the choice of 1/3 in the definition is arbitrary. It can be any constant strictly between 0 and 1/2 and the set BPP will be unchanged. This makes it possible to create a highly accurate algorithm by merely running the algorithm several times and taking a majority vote of the answers. For example, if one defined the class with the restriction that the algorithm can be wrong with probability at most 1/2^100, this would result in the same class of problems.

Besides the problems in P, which are obviously in BPP, many problems have been known to be in BPP but not known to be in P. The number of such problems is decreasing, and it is conjectured that P = BPP. For a long time, one of the most famous problems known to be in BPP but not known to be in P was the problem of determining whether a given number is prime; it is now known to be in P. An example still not known to be in P is polynomial identity testing, the problem of deciding whether a polynomial, available only as a black box that can be evaluated, is identically zero; in other words, is there an assignment of values to the variables such that when a nonzero polynomial is evaluated on these values, the result is nonzero? It suffices to choose each variable's value uniformly at random from a finite subset of at least d values, where d is the total degree of the polynomial, to achieve a bounded error probability.

If the access to randomness is removed from the definition of BPP, we get the complexity class P. If, in the definition of the class, we replace the ordinary Turing machine with a quantum computer, we get the class BQP. Adding postselection to BPP, or allowing computation paths to have different lengths, gives the class BPP_path; BPP_path is known to contain NP, and it is contained in its quantum counterpart PostBQP. A Monte Carlo algorithm is a randomized algorithm which is likely to be correct; problems in the class BPP have Monte Carlo algorithms with polynomial-bounded running time. This is contrasted with a Las Vegas algorithm, which is a randomized algorithm that either outputs the correct answer or outputs "fail" with low probability. Las Vegas algorithms with polynomial-bounded running times are used to define the class ZPP. Alternatively, ZPP contains probabilistic algorithms that are always correct and have expected polynomial running time; this is weaker than saying it is a polynomial-time algorithm, since it may run for super-polynomial time, but only with low probability. It is known that BPP is closed under complement; that is, BPP = co-BPP. BPP is low for itself, meaning that a BPP machine with the power to solve BPP problems instantly is not any more powerful than the machine without this extra power. The relationship between BPP and NP is unknown: it is not known whether BPP is a subset of NP, NP is a subset of BPP, or neither. If NP is contained in BPP, which is considered unlikely since it would imply practical solutions for NP-complete problems, then NP = RP. It is known that RP is a subset of BPP, and BPP is a subset of PP.
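Polynomial identity testing also illustrates the BPP-style error guarantee: a nonzero polynomial of total degree d vanishes on at most a d/|S| fraction of points drawn from a finite set S, so evaluating at random points and repeating drives the error down. A minimal Python sketch, with hypothetical black-box polynomials standing in for whatever is being tested:

```python
import random

def probably_zero(p, num_vars, degree, trials=20):
    """One-sided randomized test of whether the black-box polynomial p
    is identically zero. Each trial draws a random point from a set of
    size 2 * degree + 1, so a nonzero p survives a single trial with
    probability below 1/2; after `trials` rounds the error is below 2**-trials."""
    sample_space = range(2 * degree + 1)
    for _ in range(trials):
        point = [random.choice(sample_space) for _ in range(num_vars)]
        if p(*point) != 0:
            return False   # definitely not the zero polynomial
    return True            # zero with high probability

# Hypothetical black box: (x + y)**2 - x**2 - 2*x*y - y**2 is identically zero.
print(probably_zero(lambda x, y: (x + y)**2 - x**2 - 2*x*y - y**2, 2, 2))  # True
# x*y - 1 is not identically zero.
print(probably_zero(lambda x, y: x*y - 1, 2, 2))  # almost surely False
```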
11.
Cantor's diagonal argument
–
Such sets are now known as uncountable sets, and the size of infinite sets is now treated by the theory of cardinal numbers, which Cantor began. The diagonal argument was not Cantor's first proof of the uncountability of the real numbers. Diagonalization arguments are often also the source of contradictions like Russell's paradox and Richard's paradox. Historically, the diagonal argument first appeared in the work of Paul du Bois-Reymond in 1875.

In his 1891 article, Cantor considered the set T of all infinite sequences of binary digits. He begins with a constructive proof of the following theorem: if s1, s2, …, sn, … is any enumeration of elements from T, then there is an element s of T which corresponds to no sn in the enumeration. To prove this, given an enumeration of elements from T, construct the sequence s whose nth digit is the complement of the nth digit of sn. By construction, s differs from each sn, since their nth digits differ; hence, s cannot occur in the enumeration. Based on this theorem, Cantor then uses a proof by contradiction to show that the set T is uncountable. He assumes for contradiction that T is countable; then all its elements could be written as an enumeration s1, s2, …, sn, …. Applying the previous theorem to this enumeration would produce a sequence s not belonging to the enumeration; however, s is an element of T and should therefore be in the enumeration. This contradicts the assumption, so T must be uncountable.

The interpretation of Cantor's result will depend upon one's view of mathematics. To constructivists, the argument shows no more than that there is no bijection between the natural numbers and T; it does not rule out the possibility that the latter are subcountable. The uncountability of the real numbers was already established by Cantor's first uncountability proof, but it also follows from the above result. To see this, we build a one-to-one correspondence between the set T of infinite binary strings and a subset of R; since T is uncountable, this subset of R must be uncountable. To construct this one-to-one correspondence, observe that a string in T, such as t = 0111…, appears after the binary point in the binary expansion of a number between 0 and 1. This suggests defining the function f(t) = 0.t, where t is a string in T; however, this function is not injective. For instance, f(1000…) = 0.1000… = 1/2 and f(0111…) = 0.0111… = 1/4 + 1/8 + 1/16 + … = 1/2, so two different strings map to the same number, 1/2. Modifying this function produces a bijection from T to the open interval (0, 1). The idea is to remove the problem elements from T and from (0, 1), and handle them separately. From (0, 1), remove the numbers having two binary expansions; these are the numbers of the form m/2^n, where m is an odd integer and n is a natural number. Put these numbers in a sequence a = (1/2, 1/4, 3/4, 1/8, 3/8, 5/8, 7/8, …). From T, remove the strings appearing after the binary point in the binary expansions of 0, 1, and the numbers in the sequence a.
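The diagonal construction itself is mechanical: given the first n sequences of an enumeration, represented here by their first n digits, flip the nth digit of the nth sequence. A minimal Python sketch:

```python
def diagonal(enumeration_prefix):
    """Given the first n sequences of an enumeration of binary sequences,
    each supplied as a string of at least n digits, return the first n digits
    of a sequence s that differs from the k-th sequence in its k-th digit."""
    return "".join("1" if seq[k] == "0" else "0"
                   for k, seq in enumerate(enumeration_prefix))

prefix = ["0000", "1111", "0101", "1010"]
print(diagonal(prefix))  # "1011" -- differs from each listed sequence on the diagonal
```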
12.
Scott Aaronson
–
Scott Joel Aaronson is a theoretical computer scientist. His primary area of research is quantum computing and computational complexity theory more generally. Aaronson grew up in the United States, though he spent a year in Asia when his father, a science writer turned public-relations executive, was posted to Hong Kong. He enrolled in a program for gifted youngsters run by Clarkson University. Aaronson had shown ability in mathematics from an early age, teaching himself calculus at the age of 11, provoked by symbols in a babysitter's textbook. He discovered computer programming at age 11 and felt he lagged behind peers; partly for this reason, he felt drawn to theoretical computing, particularly computational complexity. At Cornell, he became interested in quantum computing and devoted himself to computational complexity. After postdoctorates at the Institute for Advanced Study and the University of Waterloo, he took a faculty position at MIT. In the summer of 2016 he moved from MIT to the University of Texas at Austin as David J. Bruton Jr. Centennial Professor of Computer Science and as the founding director of UT Austin's new quantum computing center.

Aaronson is one of two winners of the 2012 Alan T. Waterman Award, and he received the Best Paper Award at CSR 2011 for the paper The Equivalence of Sampling and Searching. He is a founder of the Complexity Zoo wiki, which catalogs all classes of computational complexity. He is the author of the much-read blog Shtetl-Optimized as well as the essay "Who Can Name the Bigger Number?". His book Quantum Computing Since Democritus weaves together seemingly disparate topics into a cohesive whole, including quantum mechanics, complexity, free will, time travel, the anthropic principle, and many others; many of these applications of computational complexity were later fleshed out in his article "Why Philosophers Should Care About Computational Complexity". An article of Aaronson's, "The Limits of Quantum Computers", was published in Scientific American, and Aaronson is frequently cited in the non-academic press, such as Science News, The Age, ZDNet, Slashdot, New Scientist, The New York Times, and Forbes magazine.

Aaronson was the subject of attention in October 2007, when he accused Love Communications of plagiarizing a lecture he wrote on quantum mechanics in an advertisement of theirs. He alleged that a commercial for Ricoh Australia by the Sydney-based agency Love Communications appropriated content almost verbatim from the lecture. Aaronson received an email from the agency claiming to have sought legal advice and saying they did not believe that they were in violation of his copyright. Unsatisfied, Aaronson pursued the matter, and the agency settled the dispute without admitting wrongdoing by making a contribution to two science organizations of his choice. Concerning this matter, Aaronson remarked that someone had suggested a cameo with the models.