1.
Cambridge University Press
–
Cambridge University Press is the publishing business of the University of Cambridge. Granted letters patent by Henry VIII in 1534, it is both the world's oldest publishing house and the oldest university press, and it has been producing books continuously since the first University Press book was printed; it also holds letters patent as the Queen's Printer. The Press's mission is "to further the University's mission by disseminating knowledge in the pursuit of education, learning and research". A department of the University of Cambridge, it is both an academic and an educational publisher, with a global presence, publishing hubs, and offices in more than 40 countries. Its publishing includes journals, monographs, reference works, and textbooks. The Press is an enterprise that transfers part of its annual surplus back to the university. Cambridge is one of the two privileged presses; authors published by Cambridge have included John Milton, William Harvey, Isaac Newton, Bertrand Russell, and Stephen Hawking. In 1591, Thomas's successor, John Legate, printed the first Cambridge Bible. The London Stationers objected strenuously, claiming that they held the monopoly on Bible printing; the university's response was to point out the provision in its charter to print "all manner of books". In July 1697 the Duke of Somerset made a loan of £200 to the university towards the house and press, and James Halman, Registrary of the University, contributed as well. It was in Bentley's time, in 1698, that a body of scholars was appointed to be responsible to the university for the Press's affairs. The Press Syndicate's publishing committee still meets regularly, and its role still includes the review of what the Press publishes. John Baskerville became University Printer in the mid-eighteenth century.
Baskerville's concern was the production of the finest possible books using his own type-design. A technological breakthrough was badly needed, and it came when Lord Stanhope perfected the making of stereotype plates, which involved making a mould of the surface of a page of type. The Press was the first to use this technique, and in 1805 produced a technically successful stereotyped edition. Under the stewardship of C. J. Clay, University Printer from 1854 to 1882, the Press increased the size and scale of its academic and educational publishing operation. An important factor in this increase was the inauguration of its list of schoolbooks. During Clay's administration, the Press also undertook a sizable co-publishing venture with Oxford: the Revised Version of the Bible, begun in 1870 and completed in 1885. It was Wright who devised the plan for one of the most distinctive Cambridge contributions to publishing, the Cambridge Histories; the Cambridge Modern History was published between 1902 and 1912.

2.
Sanjeev Arora
–
Sanjeev Arora is an Indian American theoretical computer scientist best known for his work on probabilistically checkable proofs and, in particular, the PCP theorem. He is currently the Charles C. Fitzmorris Professor of Computer Science at Princeton University. He received a B.S. in Mathematics with Computer Science from MIT in 1990 and a Ph.D. in Computer Science from the University of California, Berkeley in 1994 under Umesh Vazirani. Earlier, in 1986, Arora had topped the prestigious IIT JEE. He was a visiting scholar at the Institute for Advanced Study in 2002-03. His Ph.D. thesis on probabilistically checkable proofs received the ACM Doctoral Dissertation Award in 1995. He was awarded the Gödel Prize for his work on the PCP theorem in 2001, and in 2008 he was inducted as a Fellow of the Association for Computing Machinery. In 2011 he was awarded the ACM Infosys Foundation Award, given to researchers in Computer Science. Arora was awarded the Fulkerson Prize for 2012 for his work on improving the approximation ratio for graph separators. He is a coauthor of the book Computational Complexity: A Modern Approach and is a founder, and on the Executive Board, of Princeton's Center for Computational Intractability. He and his coauthors have argued that certain financial products are associated with computational asymmetry, which under certain conditions may lead to market instability.

3.
AND gate
–
The AND gate is a basic digital logic gate that implements logical conjunction; it behaves according to the truth table to the right. A HIGH output results only if both inputs to the AND gate are HIGH; if neither or only one input is HIGH, a LOW output results. In another sense, the AND function effectively finds the minimum of two binary digits, just as the OR function finds the maximum. The output is therefore always 0 except when all the inputs are 1. There are three symbols for AND gates: the American (ANSI) symbol, the IEC symbol, and the deprecated DIN symbol; for more information see Logic Gate Symbols. An AND gate with inputs A and B and output C implements the logical expression C = A ⋅ B. An AND gate is usually designed using N-channel or P-channel MOSFETs; the digital inputs a and b cause the output F to follow the AND function. If no specific AND gates are available, one can be made from NAND or NOR gates, because NAND and NOR are considered the universal gates. AND gates are available in IC packages; the 7408 IC is a well-known quad 2-input AND gate package, containing four independent gates each of which performs the logic AND function. See also: OR gate, NOT gate, NAND gate, NOR gate, XOR gate, XNOR gate, IMPLY gate, Boolean algebra, Logic gate.
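The behaviour described above can be sketched in a few lines of Python; this is an illustrative model rather than gate-level code, and the function names are ours:

```python
def and_gate(a: int, b: int) -> int:
    """AND as the minimum of two binary digits: 1 only when both inputs are 1."""
    return min(a, b)

def nand_gate(a: int, b: int) -> int:
    """NAND, one of the universal gates: the negation of AND."""
    return 1 - min(a, b)

def and_from_nand(a: int, b: int) -> int:
    """AND built from NANDs alone: NAND(NAND(a, b), NAND(a, b))."""
    n = nand_gate(a, b)
    return nand_gate(n, n)

# Full truth table: output is HIGH (1) only for inputs (1, 1).
table = [(a, b, and_gate(a, b)) for a in (0, 1) for b in (0, 1)]
```

The NAND construction mirrors the claim in the text that an AND gate can be made when only universal gates are available.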

4.
OR gate
–
The OR gate is a digital logic gate that implements logical disjunction; it behaves according to the truth table to the right. A HIGH output results if one or both inputs to the gate are HIGH; if neither input is HIGH, a LOW output results. In another sense, the OR function effectively finds the maximum of two binary digits, just as the complementary AND function finds the minimum. There are two main symbols for OR gates, the American (ANSI) symbol and the IEC symbol, as well as the deprecated DIN symbol; for more information see Logic Gate Symbols. OR gates are basic logic gates, and as such they are available in TTL and CMOS ICs. The standard 4000-series CMOS IC is the 4071, which includes four independent two-input OR gates. The ancestral TTL device is the 7432; there are many offshoots of the original 7432 OR gate, all having the same pinout but different internal architecture, allowing them to operate in different voltage ranges and/or at higher speeds. In addition to the standard 2-input OR gate, 3- and 4-input OR gates are also available. If no specific OR gates are available, one can be made from NAND or NOR gates in the configuration shown in the image below; any logic gate can be made from a combination of NAND or NOR gates. With active-low open-collector logic outputs, as used for control signals in many circuits, an OR function can be produced by wiring together several outputs. This arrangement is called a wired OR; this implementation of an OR function is also typically found in integrated circuits of N- or P-type-only transistor processes. See also: AND gate, NOT gate, NAND gate, NOR gate, XOR gate, XNOR gate, Boolean algebra, Logic gate.
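As with AND, the OR behaviour and its NAND construction can be sketched directly; the `wired_or` helper is a simplified logical model of the wired-OR arrangement (all names are ours):

```python
def or_gate(a: int, b: int) -> int:
    """OR as the maximum of two binary digits: 0 only when both inputs are 0."""
    return max(a, b)

def nand_gate(a: int, b: int) -> int:
    return 1 - (a & b)

def or_from_nand(a: int, b: int) -> int:
    """OR built from NANDs alone: NAND(NAND(a, a), NAND(b, b))."""
    return nand_gate(nand_gate(a, a), nand_gate(b, b))

def wired_or(asserted_signals):
    """Simplified wired-OR model: with open-collector drivers sharing one
    line, the line is asserted whenever any single driver asserts it."""
    return 1 if any(asserted_signals) else 0
```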

5.
CC (complexity)
–
In computational complexity theory, CC is the complexity class containing decision problems which can be solved by comparator circuits of polynomial size. The most important problem which is complete for CC is a variant of the stable marriage problem. A comparator circuit is a network of wires and gates; each comparator gate, which is a directed edge connecting two wires, takes its two inputs and outputs them in sorted order. The input to any wire can be a variable, its negation, or a constant, and one of the wires is designated as the output wire. The comparator circuit value problem (CCVP) is the problem of evaluating a comparator circuit given an encoding of the circuit and its input. The complexity class CC is defined as the class of problems logspace-reducible to CCVP; an equivalent definition is the class of problems AC0-reducible to CCVP. Since there are sorting networks which can be constructed in AC0, the majority function is in CC. A problem in CC is CC-complete if every problem in CC can be reduced to it using a logspace reduction; the comparator circuit value problem is CC-complete. In the stable marriage problem, there is an equal number of men and women, and each person ranks all members of the opposite sex. A matching between men and women is stable if there are no unpaired man and woman who prefer each other over their current partners. Among the stable matchings, there is one in which each woman gets the best man that she gets in any stable matching. The decision version of the stable matching problem asks, given the rankings of all men and women, whether a given man and a given woman are matched in this matching. Although the classical Gale–Shapley algorithm cannot be implemented as a comparator circuit, the problem is nevertheless CC-complete. Another problem which is CC-complete is lexicographically-first maximal matching. In this problem, we are given a bipartite graph with an order on the vertices, and an edge.
The lexicographically-first maximal matching is obtained by successively matching vertices from the first bipartition to the minimal available vertices from the second bipartition; the problem asks whether the given edge belongs to this matching. Scott Aaronson showed that a pebble-based model of computation is CC-complete: the problem is to decide whether any pebbles are present in a particular pile after executing the program. He used this to show that the problem of deciding whether any balls reach a designated sink vertex in a Digi-Comp II-like device is also CC-complete. The comparator circuit evaluation problem can be solved in polynomial time, so CC is contained in P; on the other hand, comparator circuits can solve directed reachability, and so CC contains NL. There is a relativized world in which CC and NC are incomparable.
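The comparator-gate semantics above are easy to simulate. The sketch below (our own naming, with gates given as ordered wire pairs) also illustrates why majority is in CC: a three-element sorting network leaves the median, which for bits is the majority, on the middle wire.

```python
def evaluate_comparator_circuit(inputs, gates, output_wire):
    """Evaluate a comparator circuit on Boolean wire values.

    `inputs` holds the initial value of each wire; each gate (i, j)
    outputs its two inputs in sorted order, leaving the minimum (AND)
    on wire i and the maximum (OR) on wire j."""
    wires = list(inputs)
    for i, j in gates:
        wires[i], wires[j] = min(wires[i], wires[j]), max(wires[i], wires[j])
    return wires[output_wire]

# A sorting network on three wires; after it runs, wire 1 holds the
# median of the three input bits, i.e. their majority.
SORT3 = [(0, 1), (1, 2), (0, 1)]
```

Reading the majority of three bits off wire 1 of `SORT3` is a toy instance of the sorting-network argument mentioned in the text.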

6.
Polynomial
–
In mathematics, a polynomial is an expression consisting of variables (also called indeterminates) and coefficients that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents. An example of a polynomial of a single indeterminate x is x² − 4x + 7; an example in three variables is x³ + 2xyz² − yz + 1. Polynomials appear in a wide variety of areas of mathematics and science. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, central concepts in algebra. The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or "name". It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-; the word polynomial was first used in the 17th century. The x occurring in a polynomial is commonly called either a variable or an indeterminate. When the polynomial is considered as an expression, x is a fixed symbol which does not have any value; it is thus more correct to call it an indeterminate. However, when one considers the function defined by the polynomial, then x represents the argument of the function; many authors use these two words interchangeably. It is a common convention to use uppercase letters for the indeterminates. A polynomial P in the indeterminate x defines a function mapping each value a to the result of substituting a for x, and one may use this function over any domain where addition and multiplication are defined. In particular, when a is the indeterminate x itself, the image of x under this function is the polynomial P itself. This equality allows writing "let P be a polynomial" as a shorthand for "let P be a polynomial in the indeterminate x". A polynomial is an expression that can be built from constants and indeterminates; the word indeterminate means that x represents no particular value, although any value may be substituted for it. The mapping that associates the result of this substitution to the substituted value is a function. A polynomial in a single indeterminate can be expressed concisely using summation notation as ∑_{k=0}^{n} a_k x^k.
That is, each term consists of the product of a number, called the coefficient of the term, and a finite number of indeterminates raised to non-negative integer powers. Because x = x¹, the degree of an indeterminate without a written exponent is one. A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial; the degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial, 0, is generally treated as not defined.
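The summation form ∑ a_k x^k can be evaluated directly; the sketch below (the helper name is ours) uses Horner's rule, which rewrites the sum as nested multiplications to avoid computing powers explicitly:

```python
def eval_poly(coeffs, x):
    """Evaluate sum(a_k * x**k) where coeffs[k] is the coefficient a_k
    (constant term first), using Horner's rule:
    a_0 + x*(a_1 + x*(a_2 + ...))."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# The single-indeterminate example from the text, x**2 - 4*x + 7,
# has coefficient list [7, -4, 1].
```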

7.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before then. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration was devised in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966; it was devised in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The United Kingdom continued to use the 9-digit SBN code until 1974; the ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, an edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340 01381 8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Numbers (EAN-13).
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces; separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, United States, and some other countries, where the service is provided by non-government-funded organisations, the issuing of ISBNs requires the payment of a fee. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
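The SBN-to-ISBN conversion and check digit mentioned above can be sketched as follows. The function names are ours; the validator implements the standard ISBN-10 rule that the digit sum weighted 10 down to 1 must be divisible by 11 (with 'X' standing for the value 10):

```python
def isbn10_is_valid(isbn: str) -> bool:
    """Check an ISBN-10: 10*d1 + 9*d2 + ... + 1*d10 must be divisible
    by 11. Hyphens and spaces are ignored; 'X' counts as 10."""
    digits = [10 if c == 'X' else int(c) for c in isbn if c not in '- ']
    if len(digits) != 10:
        return False
    return sum(w * d for w, d in zip(range(10, 0, -1), digits)) % 11 == 0

def sbn_to_isbn10(sbn: str) -> str:
    """Convert a 9-digit SBN to an ISBN-10 by prefixing the digit 0;
    as noted in the text, the check digit is unchanged."""
    return '0' + sbn
```

Applying this to the Hodder example from the text, SBN 340013818 becomes the valid ISBN 0-340-01381-8.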

8.
NP-completeness
–
In computational complexity theory, a decision problem is NP-complete when it is both in NP and NP-hard. The set of NP-complete problems is often denoted by NP-C or NPC; the abbreviation NP refers to nondeterministic polynomial time. Although a candidate solution to an NP-complete problem can be verified quickly, the time required to solve the problem using any currently known algorithm increases very quickly as the size of the problem grows. As a consequence, determining whether or not it is possible to solve these problems quickly is one of the principal unsolved problems in computer science today. In practice, NP-complete problems are often addressed by using heuristic methods and approximation algorithms. A problem p in NP is NP-complete if every other problem in NP can be transformed into p in polynomial time. NP-complete problems are studied because the ability to quickly verify solutions to a problem seems to correlate with the ability to quickly solve that problem. It is not known whether every problem in NP can be quickly solved; this is called the P versus NP problem. If any NP-complete problem could be solved quickly, then so could every problem in NP; because of this, it is often said that NP-complete problems are harder or more difficult than NP problems in general. A decision problem C is NP-complete if: (1) C is in NP, and (2) every problem in NP is reducible to C in polynomial time. C can be shown to be in NP by demonstrating that a candidate solution to C can be verified in polynomial time. Note that a problem satisfying condition (2) alone is said to be NP-hard. A consequence of this definition is that if we had a polynomial-time algorithm for C, we could solve all problems in NP in polynomial time. The concept of NP-completeness was introduced in 1971, though the term NP-complete was introduced later. At the 1971 STOC conference, there was a fierce debate among computer scientists about whether NP-complete problems could be solved in polynomial time on a deterministic Turing machine. This is known as the question of whether P = NP; nobody has yet been able to determine conclusively whether NP-complete problems are in fact solvable in polynomial time, making this one of the great unsolved problems of mathematics.
The Clay Mathematics Institute is offering a US$1 million reward to anyone who has a formal proof that P = NP or that P ≠ NP. The Cook–Levin theorem states that the Boolean satisfiability problem is NP-complete. In 1972, Richard Karp proved that several other problems were also NP-complete; thus there is a whole class of NP-complete problems. For more details, refer to Introduction to the Design and Analysis of Algorithms by Anany Levitin. An interesting example is the graph isomorphism problem, the graph theory problem of determining whether a graph isomorphism exists between two graphs. Two graphs are isomorphic if one can be transformed into the other simply by renaming vertices. Consider these two problems: Graph Isomorphism (is graph G1 isomorphic to graph G2?) and Subgraph Isomorphism (is graph G1 isomorphic to a subgraph of graph G2?). The Subgraph Isomorphism problem is NP-complete, while the graph isomorphism problem is suspected to be neither in P nor NP-complete; this is an example of a problem that is thought to be hard, but is not thought to be NP-complete.
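The "quick verification" property that defines NP can be illustrated with Boolean satisfiability, the problem shown NP-complete by the Cook–Levin theorem. The sketch below (our own naming and clause encoding) is a polynomial-time verifier, not a decision procedure: checking a candidate assignment is fast even though finding one is believed to be hard.

```python
def verify_sat(clauses, assignment):
    """Verify a candidate solution to a CNF-SAT instance in linear time.

    Each clause is a list of nonzero integers: a positive literal k means
    variable k, a negative literal -k means its negation. `assignment`
    maps each variable number to a bool. The formula is satisfied when
    every clause contains at least one true literal."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3)
example = [[1, -2], [2, 3]]
```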

9.
AC0
–
AC0 is a complexity class used in circuit complexity. It is the smallest class in the AC hierarchy, and consists of all families of circuits of depth O(1) and polynomial size, with unlimited fan-in AND and OR gates; it thus contains NC0, which has only bounded fan-in AND and OR gates. Integer addition and subtraction are computable in AC0, but multiplication is not. In 1984 Furst, Saxe, and Sipser showed that calculating the parity of an input cannot be done by any AC0 circuit family, even with non-uniformity. It follows that AC0 is not equal to NC1, because a family of circuits in the latter class can compute parity. More precise bounds follow from the switching lemma; using them, it has been shown that there is an oracle separation between the polynomial hierarchy and PSPACE.
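The parity function at the heart of the Furst–Saxe–Sipser result is trivial to compute sequentially; the lower bound concerns constant-depth circuits, not algorithms. A minimal sketch (our own naming):

```python
from functools import reduce
from operator import xor

def parity(bits):
    """Parity of a bit vector: 1 iff an odd number of inputs are 1.
    Furst, Saxe, and Sipser showed no constant-depth, polynomial-size
    AND/OR/NOT circuit family computes this, so PARITY is outside AC0."""
    return reduce(xor, bits, 0)
```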

10.
Modulo operation
–
In computing, the modulo operation finds the remainder after division of one number by another. Given two positive numbers, a and n, a mod n is the remainder of the Euclidean division of a by n. Although typically performed with a and n both being integers, many computing systems allow other types of numeric operands. The range of values for an integer modulo operation with modulus n is 0 to n − 1; see modular arithmetic for an older and related convention applied in number theory. When either a or n is negative, the naive definition breaks down, and programming languages differ in how these values are defined. In mathematics, the result of the modulo operation is the remainder of the Euclidean division, but computers and calculators have various ways of storing and representing numbers, and their conventions differ accordingly. Usually, in number theory, the positive remainder is always chosen, but programming languages choose depending on the language and the signs of a or n. Standard Pascal and ALGOL 68 give a positive remainder even for negative divisors. a modulo 0 is undefined in most systems, although some do define it as a. Despite its widespread use, truncated division is shown to be inferior to the other definitions: when the result of a modulo operation has the sign of the dividend, it can lead to surprising mistakes. For special cases, on some hardware, faster alternatives exist: optimizing compilers may recognize expressions of the form expression % constant, where constant is a power of two, and automatically implement them as expression & (constant − 1). This can allow writing clearer code without compromising performance. This optimization is not possible for languages in which the result of the modulo operation has the sign of the dividend, unless the dividend is of an unsigned integer type; this is because, if the dividend is negative, the modulo will be negative, whereas the masked expression would be positive. Some modulo operations can be factored or expanded similarly to other mathematical operations.
This may be useful in cryptography proofs, such as the Diffie–Hellman key exchange. Identity: (a mod n) mod n = a mod n, and n^x mod n = 0 for all positive integer values of x. If p is a prime which is not a divisor of b, then a·b^(p−1) mod p = a mod p, due to Fermat's little theorem. Inverse: b^(−1) mod n denotes the modular multiplicative inverse, which is defined if and only if b and n are relatively prime. Distributive: (a + b) mod n = [(a mod n) + (b mod n)] mod n, and a·b mod n = [(a mod n)(b mod n)] mod n. Division: a/b mod n = [(a mod n)(b^(−1) mod n)] mod n, when the right-hand side is defined. Inverse multiplication: [(a·b mod n)(b^(−1) mod n)] mod n = a mod n. See also modulo (jargon), the many uses of the word modulo, all of which grew out of Carl F. Gauss's introduction of modular arithmetic in 1801, and modular exponentiation. Notes: Perl uses an arithmetic modulo operator that is machine-independent; for examples and exceptions, see the Perl documentation on multiplicative operators. Mathematically, these two choices are but two of the infinite number of choices available for the inequality satisfied by a remainder.
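The power-of-two optimization and the distributive identities above can be sketched as follows; the helper name is ours, and the sign-convention caveat is noted in the comments:

```python
def mod_pow2(x: int, k: int) -> int:
    """Remainder modulo 2**k via a bit mask: for a non-negative dividend,
    x % (2**k) == x & (2**k - 1), which is the rewrite an optimizing
    compiler performs for `expression % constant` with a power-of-two
    constant."""
    return x & ((1 << k) - 1)

# Sign conventions differ between languages: Python's % follows the sign
# of the divisor (-7 % 4 == 1), while C's % follows the sign of the
# dividend (-7 % 4 == -3 in C), which is why the mask rewrite is only
# safe there for unsigned or known-non-negative dividends.
```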

11.
ACC0
–
ACC0, sometimes called ACC, is a class of computational models and problems defined in circuit complexity, a field of theoretical computer science. The class is defined by augmenting the class AC0 of constant-depth alternating circuits with the ability to count; specifically, a problem belongs to ACC0 if it can be solved by polynomial-size, constant-depth circuits of unbounded fan-in gates, including gates that count modulo a fixed integer. ACC0 corresponds to computation in any solvable monoid. More formally, a language belongs to AC0[m] if it can be computed by a family of circuits C1, C2, ... of constant depth and polynomial size with unbounded fan-in AND, OR, NOT, and MOD-m gates; a language belongs to ACC0 if it belongs to AC0[m] for some m. In some texts, ACCi refers to a hierarchy of circuit classes with ACC0 at its lowest level, where the circuits in ACCi have depth O(log^i n) and polynomial size. The class ACC0 can also be defined in terms of computations of nonuniform deterministic finite automata (NUDFAs) over monoids. In this framework, the input is interpreted as elements from a fixed monoid, and ACC0 is the family of languages accepted by a NUDFA over some monoid that does not contain an unsolvable group as a subsemigroup. ACC0 includes AC0, and this inclusion is strict, because a single MOD-2 gate computes the parity function, which is known not to be computable in AC0; more generally, the function MOD-m cannot be computed in AC0[p] for prime p unless m is a power of p. The class ACC0 is included in TC0, and it is conjectured that ACC0 is unable to compute the majority function of its inputs, but this remained unresolved as of July 2014. Every problem in ACC0 can be solved by circuits of depth 2 with a symmetric function gate at the output and AND gates of polylogarithmic fan-in at the inputs (so-called SYM+ circuits); the proof follows ideas of the proof of Toda's theorem. Williams proved that ACC0 does not contain NEXPTIME; the proof uses many results in complexity theory, including the time hierarchy theorem, IP = PSPACE, derandomization, and the representation of ACC0 via SYM+ circuits.
It is known that computing the permanent is impossible for logtime-uniform ACC0 circuits, which implies that the complexity class PP is not contained in logtime-uniform ACC0
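A counting gate of the kind that defines ACC0 can be modeled directly. Conventions for MOD-m gates vary between texts; the sketch below (our own naming) uses the convention under which a single MOD-2 gate is exactly the parity function, matching the separation from AC0 described above.

```python
def mod_gate(bits, m):
    """A MOD-m counting gate under one common convention: output 1 iff
    the number of 1-inputs is NOT divisible by m. Some texts use the
    complementary convention (1 iff divisible); the two differ only by
    a negation. With m = 2 this is the parity function."""
    return 1 if sum(bits) % m != 0 else 0
```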