1.
Turing machine
–
Despite the model's simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm's logic. The machine operates on an infinite memory tape divided into discrete cells; it positions its head over a cell and reads the symbol there. The Turing machine was invented in 1936 by Alan Turing, who called it an "a-machine" (automatic machine). With this model, Turing was able to prove fundamental limitations on the power of mechanical computation. Turing completeness is the ability of a system of instructions to simulate a Turing machine. A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine capable of enumerating some arbitrary subset of valid strings of an alphabet. Given a black-box program, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset; this is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing. The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways; this is famously demonstrated through lambda calculus. A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine. The Church–Turing thesis states that Turing machines indeed capture the notion of effective methods in logic and mathematics. Studying their abstract properties yields many insights into computer science and complexity theory. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine.
Any symbol on the tape may therefore eventually have an innings. The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time. In the original article, Turing imagines not a mechanism but a person, whom he calls the "computer", who executes these deterministic mechanical rules slavishly. If δ is not defined on the current state and the current tape symbol, the machine halts. In the formal definition, q0 ∈ Q is the initial state and F ⊆ Q is the set of final or accepting states; the initial tape contents is said to be accepted by M if the machine eventually halts in a state from F. Anything that operates according to these specifications is a Turing machine. The 7-tuple for the 3-state busy beaver looks like this: Q = {A, B, C, HALT}; Γ = {0, 1}; b = 0 (the blank symbol); Σ = {1}; q0 = A; F = {HALT}; δ = see the state table below. Initially all tape cells are marked with 0. In the words of van Emde Boas (p. 6), the set-theoretical object provides only partial information on how the machine will behave and what its computations will look like. For instance, there will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
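The 7-tuple and state table above can be turned into a small simulator. The following is a minimal sketch (the function name, the sparse-tape representation, and the step limit are illustrative choices, not from the text); the transition table is the standard 3-state busy beaver, with δ mapping (state, scanned symbol) to (symbol to write, head move, next state):

```python
def run_turing_machine(delta, start, halt, blank=0, max_steps=10_000):
    tape = {}            # sparse tape: unwritten cells read as the blank symbol
    head, state = 0, start
    for _ in range(max_steps):
        if state == halt:
            return tape  # machine has halted; tape holds the result
        symbol = tape.get(head, blank)          # read the scanned symbol
        write, move, state = delta[(state, symbol)]
        tape[head] = write                      # alter the scanned symbol
        head += 1 if move == "R" else -1        # move the tape head
    raise RuntimeError("machine did not halt within max_steps")

# State table for the 3-state busy beaver: it halts leaving six 1s on the tape.
delta = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "C"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "B"),
    ("C", 0): (1, "L", "B"), ("C", 1): (1, "R", "HALT"),
}
tape = run_turing_machine(delta, start="A", halt="HALT")
print(sum(tape.values()))  # prints 6
```

The sparse-dictionary tape stands in for the infinite tape: any cell that was never written still reads as the blank symbol 0, matching the "initially all tape cells are marked with 0" convention above.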
2.
Cambridge University Press
–
Cambridge University Press is the publishing business of the University of Cambridge. Granted letters patent by Henry VIII in 1534, it is the world's oldest publishing house, and it also holds letters patent as the Queen's Printer. The Press's mission is "to further the University's mission by disseminating knowledge in the pursuit of education, learning and research". Cambridge University Press is a department of the University of Cambridge and is both an academic and educational publisher, with a global presence, publishing hubs, and offices in more than 40 countries. Its publishing includes journals, monographs, reference works, and textbooks. Cambridge University Press is an enterprise that transfers part of its annual surplus back to the university. It is both the oldest publishing house in the world and the oldest university press; it originated from letters patent granted to the University of Cambridge by Henry VIII in 1534, and has been producing books continuously since the first University Press book was printed. Cambridge is one of the two privileged presses, Oxford University Press being the other. Authors published by Cambridge have included John Milton, William Harvey, Isaac Newton, Bertrand Russell, and Stephen Hawking. In 1591, Thomas's successor, John Legate, printed the first Cambridge Bible; the London Stationers objected strenuously, claiming that they had the monopoly on Bible printing. The university's response was to point out the provision in its charter to print "all manner of books". In July 1697 the Duke of Somerset made a loan of £200 to the university "towards the house and presse", as did James Halman, Registrary of the University. It was in Bentley's time, in 1698, that a body of scholars was appointed to be responsible to the university for the Press's affairs. The Press Syndicate's publishing committee still meets regularly, and its role still includes the review of proposed publications. John Baskerville became University Printer in the mid-eighteenth century.
Baskerville's concern was the production of the finest possible books using his own type-design. A technological breakthrough was badly needed, and it came when Lord Stanhope perfected the making of stereotype plates. This involved making a mould of the surface of a page of type. The Press was the first to use this technique, and in 1805 produced the technically successful and much-reprinted Cambridge Stereotype Bible. Under the stewardship of C. J. Clay, who was University Printer from 1854 to 1882, the Press increased the size and scale of its academic and educational publishing operation. An important factor in this increase was the inauguration of its list of schoolbooks. During Clay's administration, the Press also undertook a sizable co-publishing venture with Oxford: the Revised Version of the Bible, which was begun in 1870 and completed in 1885. It was Wright who devised the plan for one of the most distinctive Cambridge contributions to publishing, the Cambridge Histories; the Cambridge Modern History was published between 1902 and 1912.
3.
Quantum computing
–
Quantum computing studies theoretical computation systems that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from binary digital electronic computers based on transistors. A quantum Turing machine is a theoretical model of such a computer, and is also known as the universal quantum computer. The field of quantum computing was initiated by the work of Paul Benioff and Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968. There exist quantum algorithms, such as Simon's algorithm, that run faster than any possible probabilistic classical algorithm. A classical computer could in principle simulate a quantum algorithm, as quantum computation does not violate the Church–Turing thesis; on the other hand, quantum computers may be able to efficiently solve problems which are not practically feasible on classical computers. A classical computer has a memory made up of bits, where each bit is represented by either a one or a zero. A quantum computer maintains a sequence of qubits; in general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously. A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand. The sequence of gates to be applied is called a quantum algorithm. The calculation ends with a measurement, collapsing the system of qubits into one of the 2^n pure states, where each qubit is zero or one, decomposing into a classical state. The outcome can therefore be at most n classical bits of information. Quantum algorithms are often probabilistic, in that they provide the correct solution only with a certain known probability. Note that the term non-deterministic computing must not be used in this case to mean probabilistic.
An example of an implementation of qubits for a quantum computer could start with the use of particles with two spin states, "down" and "up". This works because any such system can be mapped onto an effective spin-1/2 system. A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits; the difference becomes apparent when the state of the qubits is measured. To better understand this point, consider a classical computer that operates on a three-bit register. If there is no uncertainty over its state, then it is in exactly one of the eight possible states (000 through 111) with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states. The state of a quantum computer is similarly described by an eight-dimensional vector. Here, however, the coefficients a_k are complex numbers, and it is the sum of the squares of the absolute values, ∑_i |a_i|², that must equal 1.
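The normalization constraint above can be checked numerically. The sketch below (amplitude values and names are illustrative, not from the text) represents the three-qubit register as eight complex amplitudes, verifies that the squared absolute values sum to 1, and simulates a measurement that collapses the register to one classical 3-bit outcome with probability |a_k|²:

```python
import cmath
import random

# Eight complex amplitudes a_0..a_7, one per classical basis state 000..111.
# These particular phases are arbitrary; only the normalization matters.
amps = [cmath.exp(1j * k) / 8 ** 0.5 for k in range(8)]

# Normalization: the sum of the squares of the absolute values must equal 1.
norm = sum(abs(a) ** 2 for a in amps)
assert abs(norm - 1.0) < 1e-9

# Measurement yields basis state k with probability |a_k|^2, so the
# outcome carries at most 3 classical bits of information.
probs = [abs(a) ** 2 for a in amps]
outcome = random.choices(range(8), weights=probs)[0]
print(f"measured |{outcome:03b}>")
```

Note that storing this state classically already takes 2^3 = 8 complex numbers; for n qubits it takes 2^n, which is why classical simulation of quantum algorithms is possible in principle but quickly becomes impractical.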
4.
Sanjeev Arora
–
Sanjeev Arora is an Indian American theoretical computer scientist who is best known for his work on probabilistically checkable proofs and, in particular, the PCP theorem. He is currently the Charles C. Fitzmorris Professor of Computer Science at Princeton University. He received a B.S. in Mathematics with Computer Science from MIT in 1990 and a Ph.D. in Computer Science from the University of California, Berkeley in 1994 under Umesh Vazirani. Earlier, in 1986, Arora had topped the prestigious IIT JEE. He was a visiting scholar at the Institute for Advanced Study in 2002–03. His Ph.D. thesis on probabilistically checkable proofs received the ACM Doctoral Dissertation Award in 1995. He was awarded the Gödel Prize for his work on the PCP theorem in 2001, and in 2008 he was inducted as a Fellow of the Association for Computing Machinery. In 2011 he was awarded the ACM Infosys Foundation Award, given to researchers in Computer Science. Arora was awarded the Fulkerson Prize for 2012 for his work on improving the approximation ratio for graph separators. He is a coauthor of the book Computational Complexity: A Modern Approach, and is a founder, and on the Executive Board, of Princeton's Center for Computational Intractability. He and his coauthors have argued that certain financial products are associated with computational asymmetry, which under certain conditions may lead to market instability. External links: Sanjeev Arora's Homepage; Sanjeev Arora at the Mathematics Genealogy Project.
5.
Christos Papadimitriou
–
Christos Harilaos Papadimitriou is a Greek theoretical computer scientist and professor of Computer Science at the University of California, Berkeley. Papadimitriou studied at the National Technical University of Athens, where in 1972 he received his Bachelor of Arts degree in Electrical Engineering. He continued his studies at Princeton University, where he received his MS in Electrical Engineering in 1974 and then his PhD. Papadimitriou has taught at Harvard, MIT, the National Technical University of Athens, Stanford, and UCSD, and is currently the C. Lester Hogan Professor of Electrical Engineering and Computer Science at U.C. Berkeley. In 2001, Papadimitriou was inducted as a Fellow of the Association for Computing Machinery, and in 2002 he was awarded the Knuth Prize. He became a fellow of the U.S. National Academy of Engineering for contributions to complexity theory and database theory, and in 2009 he was elected to the US National Academy of Sciences. In 2012, during the International Colloquium on Automata, Languages and Programming, he, along with Elias Koutsoupias, was awarded the Gödel Prize for their joint work on the concept of the price of anarchy. Papadimitriou is the author of the textbook Computational Complexity, one of the most widely used textbooks in the field of complexity theory. He has also co-authored the textbook Algorithms with Sanjoy Dasgupta and Umesh Vazirani, and his name was listed in the 19th position on the CiteSeer academic database and digital library. In 2011, Papadimitriou received an honorary doctorate (honoris causa) from the National Technical University of Athens, and in 2013 another from the École polytechnique fédérale de Lausanne. Papadimitriou was awarded the IEEE John von Neumann Medal in 2016, the EATCS Award in 2015, and the Gödel Prize in 2012. 
His books include: Elements of the Theory of Computation (Prentice-Hall, 1982; second edition September 1997; Greek edition); Combinatorial Optimization: Algorithms and Complexity (Prentice-Hall, 1982; second edition, Dover, 1998); The Theory of Database Concurrency Control; a compilation of articles written for the Greek newspaper To Vima; Algorithms (McGraw-Hill, September 2006); and Logicomix: An Epic Search for Truth (Bloomsbury Publishing and Bloomsbury USA, September 2009). He co-authored a paper with Bill Gates, co-founder of Microsoft. At UC Berkeley, in 2006, he joined a professor-and-graduate-student band called Lady X and The Positive Eigenvalues.
6.
Computational complexity theory
–
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and by quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication (studied in communication complexity) and the number of gates in a circuit (studied in circuit complexity). One of the roles of computational complexity theory is to determine the limits on what computers can and cannot do. Closely related fields in computer science are analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; in contrast, an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing: the instance is a number, and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, as in a real-world computer. Mathematical objects other than bitstrings must be suitably encoded; for example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. One tries to keep the discussion independent of the choice of encoding, and this can be achieved by ensuring that different representations can be transformed into each other efficiently.
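The encodings mentioned above can be made concrete. The sketch below (helper names are illustrative, not from the text) turns an integer into its binary notation and a graph into a bitstring by concatenating the rows of its adjacency matrix:

```python
def encode_int(n: int) -> str:
    """Encode a non-negative integer in binary notation."""
    return bin(n)[2:]

def encode_graph(adj) -> str:
    """Encode a graph as a bitstring: the rows of its 0/1
    adjacency matrix (list of lists) concatenated in order."""
    return "".join(str(bit) for row in adj for bit in row)

print(encode_int(13))          # prints 1101

# A triangle: three vertices, each pair connected.
triangle = [[0, 1, 1],
            [1, 0, 1],
            [1, 1, 0]]
print(encode_graph(triangle))  # prints 011101110
```

Both encodings can be converted back efficiently (and into each other via the integer value of the bitstring), which is exactly the property that lets complexity theory stay independent of the representation chosen.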
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are exactly the instances whose output is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language; if the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input. An example of a decision problem is deciding whether a given number is prime.
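The language view of decision problems can be sketched directly. In the example below (a sketch; the function names and the deliberately naive trial-division decider are illustrative), the language is the set of binary strings encoding prime numbers, the primality-testing problem discussed above, and the decider accepts or rejects each input string:

```python
def is_prime(n: int) -> bool:
    """Naive trial division, enough for small instances."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def decide(instance: str) -> bool:
    """Accept the input string iff it is a member of the language,
    i.e. iff it is the binary encoding of a prime number."""
    return is_prime(int(instance, 2))

print(decide("111"))   # 7 is prime: the string is accepted (True)
print(decide("1000"))  # 8 is composite: the string is rejected (False)
```

Here the problem is the abstract question "is this number prime?", while each bitstring such as "111" is a concrete problem instance.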
7.
ArXiv
–
In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008; by 2014 the submission rate had grown to more than 8,000 per month. The arXiv was made possible by the low-bandwidth TeX file format. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Additional modes of access were later added: FTP in 1991 and Gopher in 1992. The term e-print was quickly adopted to describe the articles. The site's original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the rapidly expanding technology, in 1999 Ginsparg changed institutions to Cornell University, and the repository is now hosted principally by Cornell, with 8 mirrors around the world. Its existence was one of the factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists regularly upload their papers to arXiv.org for worldwide access; Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv is approximately $826,000 for 2013 to 2017, funded jointly by Cornell University Library and member institutions; annual donations were envisaged to vary in size between $2,300 and $4,000, based on each institution's usage. As of 14 January 2014, 174 institutions had pledged support for the period 2013–2017 on this basis. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying "it was supposed to be a three-hour tour"; however, he remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee.
The lists of moderators for many sections of the arXiv are publicly available. Additionally, an endorsement system was introduced in 2004 as part of an effort to ensure content that is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors. New authors from recognized academic institutions generally receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for allegedly restricting scientific inquiry. A notable case of an author bypassing traditional publication is Grigori Perelman, who posted his proof of the Poincaré conjecture only on the arXiv; Perelman appears content to forgo the traditional peer-reviewed journal process, stating, "If anybody is interested in my way of solving the problem, it's all there – let them go and read about it." The arXiv generally re-classifies questionable works, e.g. into General mathematics, rather than deleting them. Papers can be submitted in any of several formats, including LaTeX, and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the software if generating the final PDF file fails or if any image file is too large. ArXiv now allows one to store and modify an incomplete submission; the time stamp on the article is set when the submission is finalized.
8.
Nondeterministic algorithm
–
In computer science, a nondeterministic algorithm is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm. There are several ways an algorithm may behave differently from run to run. A concurrent algorithm can perform differently on different runs due to a race condition. A probabilistic algorithm's behavior depends on a random number generator. An algorithm that solves a problem in nondeterministic polynomial time can run in polynomial time or exponential time depending on the choices it makes during execution. Nondeterministic algorithms are often used to find an approximation to a solution. The notion was introduced by Robert W. Floyd. Often in computational theory, the term algorithm refers to a deterministic algorithm. A nondeterministic algorithm is different from its more familiar deterministic counterpart in its ability to arrive at outcomes using various routes; this property is captured mathematically in nondeterministic models of computation such as the nondeterministic finite automaton. In some scenarios, all possible paths are allowed to run simultaneously. In algorithm design, nondeterministic algorithms are used when the problem solved by the algorithm inherently allows multiple outcomes. Crucially, every outcome the nondeterministic algorithm produces is valid, regardless of which choices the algorithm makes while running. In computational complexity theory, nondeterministic algorithms are ones that, at every possible step, can allow for multiple continuations. These algorithms do not arrive at a solution for every possible path; however, they are guaranteed to arrive at a correct solution for some path. The choices can be interpreted as guesses in a search process. A large number of problems can be conceptualized through nondeterministic algorithms, including the most famous unresolved question in computing theory, P vs NP.
One way to simulate a nondeterministic algorithm N using a deterministic algorithm D is to treat sets of states of N as states of D; this means that D simultaneously traces all the possible execution paths of N. Another is randomization, which consists of letting all choices be determined by a random number generator; the result is then a probabilistic algorithm. Consider the problem: given a number n larger than two, determine whether it is prime. A nondeterministic algorithm for this problem, based on Fermat's little theorem, is the following. Repeat thirty times: pick a random integer a with 1 < a < n; if a^(n−1) mod n ≠ 1, return the answer "composite". Otherwise, return the answer "probably prime". If this algorithm returns the answer "composite", then the number is certainly not prime.
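The randomized primality test described above can be sketched as follows (the function name and the `rounds` parameter are illustrative; the test itself is the Fermat test from the text, which can be fooled by rare composites such as Carmichael numbers):

```python
import random

def fermat_test(n: int, rounds: int = 30) -> str:
    """Probabilistic primality test based on Fermat's little theorem:
    if n is prime, then a^(n-1) = 1 (mod n) for every 1 < a < n."""
    for _ in range(rounds):
        a = random.randrange(2, n)     # the randomized "guess"
        if pow(a, n - 1, n) != 1:      # a is a witness: n is certainly composite
            return "composite"
    return "probably prime"            # no witness found in `rounds` tries

print(fermat_test(101))        # 101 is prime, so every base passes
print(fermat_test(221))        # 221 = 13 * 17; a witness is almost surely found
```

A "composite" answer is always correct, while "probably prime" carries a small, quantifiable error probability: this is exactly the sense in which the algorithm is probabilistic rather than deterministic.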
9.
Closed timelike curve
–
In mathematical physics, a closed timelike curve is a world line, in a Lorentzian manifold, of a material particle in spacetime that is closed, returning to its starting point. When discussing the evolution of a system in general relativity, or more specifically Minkowski space, a light cone represents any possible future evolution of an object given its current state, or every possible location given its current location. An object's possible future locations are limited by the speed at which the object can move; for instance, an object located at position p at time t0 can only move to locations within p ± c(t1 − t0) by time t1. This is commonly represented on a graph with physical locations along the horizontal axis and time running vertically, in units chosen so that light travels along 45-degree lines. Light cones in this representation appear as lines at 45 degrees centered on the object; on such a diagram, every possible future location of the object lies within the cone. Additionally, every location has a future time, implying that an object may stay at any location in space indefinitely. Any single point on such a diagram is known as an event. Separate events are considered to be timelike if they are separated along the time axis, or spacelike if they differ along the space axis. If the object were in free fall, it would travel up the t-axis; if it accelerates, its path bends across the space axis as well. The actual path an object takes through spacetime, as opposed to the ones it could take, is known as its worldline. Another definition is that the light cone represents all possible worldlines. In simple examples of spacetime metrics the light cone is directed forward in time. This corresponds to the case that an object cannot be in two places at once, or alternately that it cannot move instantly to another location. In these spacetimes, the worldlines of physical objects are, by definition, timelike. However, this orientation is only true of locally flat spacetimes.
In curved spacetimes the light cone will be tilted along the spacetime's geodesic. For instance, while moving in the vicinity of a star, the star's gravity will pull on the object, affecting its worldline, so its possible future positions lie closer to the star. This appears as a slightly tilted light cone on the corresponding spacetime diagram. In extreme examples, in spacetimes with suitably high-curvature metrics, the light cone can be tilted beyond 45 degrees. That means there are potential future positions, from the object's frame of reference, that lie outside its ordinary light cone as seen from an outside frame of reference. From this outside viewpoint, the object can move instantaneously through space. In these situations the object would have to move, since its present spatial location would not be in its own future light cone. Additionally, with enough of a tilt, there are event locations that lie in the past as seen from the outside. With a suitable movement of what appears to it to be its own space axis, the object appears to travel through time as seen externally. If such tilted light cones are arranged so as to loop back on themselves, an object following them can return to the same place and time that it started from; this loop is a closed timelike curve. An object in such an orbit would repeatedly return to the same point in spacetime if it stays in free fall.
10.
Polynomial
–
In mathematics, a polynomial is an expression consisting of variables and coefficients that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents. An example of a polynomial of a single indeterminate x is x^2 − 4x + 7; an example in three variables is x^3 + 2xyz^2 − yz + 1. Polynomials appear in a variety of areas of mathematics and science. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, central concepts in algebra. The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or "name". It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-. The word polynomial was first used in the 17th century. The x occurring in a polynomial is commonly called either a variable or an indeterminate. When the polynomial is considered as an expression, x is a symbol which does not have any value; it is thus more correct to call it an indeterminate. However, when one considers the function defined by the polynomial, then x represents the argument of the function. Many authors use these two words interchangeably. It is a common convention to use uppercase letters for the indeterminates. The associated function may be evaluated over any domain where addition and multiplication are defined; in particular, when the argument a is the indeterminate x itself, the image of x by this function is the polynomial P itself. This equality allows writing "let P(x) be a polynomial" as a shorthand for "let P be a polynomial in the indeterminate x". A polynomial is an expression that can be built from constants and indeterminates. The word indeterminate means that x represents no particular value, although any value may be substituted for it. The mapping that associates the result of substitution to the substituted value is a function. A polynomial in a single indeterminate can be expressed concisely using summation notation as ∑_{k=0}^{n} a_k x^k, that is, as a sum of finitely many terms.
Each term consists of the product of a number, called the coefficient of the term, and a finite number of indeterminates. Because x = x^1, the degree of an indeterminate without a written exponent is one. A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial. The degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial, 0, is generally treated as not defined.
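The summation form and the degree conventions above can be sketched in a few lines (the coefficient-list representation and the function names are illustrative choices, not from the text):

```python
def evaluate(coeffs, x):
    """Evaluate a polynomial stored as its coefficient list
    [a_0, a_1, ..., a_n], i.e. compute the sum of a_k * x^k."""
    return sum(a_k * x ** k for k, a_k in enumerate(coeffs))

def degree(coeffs):
    """Largest k with a nonzero coefficient a_k; the degree of
    the zero polynomial is left undefined (returned as None)."""
    nonzero = [k for k, a_k in enumerate(coeffs) if a_k != 0]
    return max(nonzero) if nonzero else None

# x^2 - 4x + 7, the single-indeterminate example from the text:
p = [7, -4, 1]
print(evaluate(p, 2))   # 2^2 - 4*2 + 7 = 3
print(degree(p))        # 2
print(degree([0, 0]))   # None: the degree of the zero polynomial is not defined
```

Storing a polynomial as the list of its coefficients a_0, ..., a_n makes the summation notation literal: the index of each entry is the exponent of the corresponding term.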