1.
Turing machine
–
Despite the model's simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm's logic. The machine operates on an infinite memory tape divided into discrete cells; it positions its head over a cell and reads the symbol there. The Turing machine was invented in 1936 by Alan Turing, who called it an a-machine (automatic machine). Turing machines prove fundamental limitations on the power of mechanical computation, and Turing completeness is the ability of a system of instructions to simulate a Turing machine. A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is capable of enumerating some arbitrary subset of valid strings of an alphabet. Treated as a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program; this is because the halting problem is unsolvable, which has major implications for the theoretical limits of computing. The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways. This is famously demonstrated through lambda calculus. A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine. The Church–Turing thesis states that Turing machines indeed capture the notion of effective methods in logic and mathematics. Studying their abstract properties yields many insights into computer science and complexity theory. At any moment there is one symbol in the machine, called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol; the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine.
Any symbol on the tape may therefore eventually have an innings. The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time. In the original article, Turing imagines not a mechanism but a person, whom he calls the computer, who executes these deterministic mechanical rules slavishly. Formally, the machine is given as a 7-tuple in which δ is the transition function (if δ is not defined on the current state and the current tape symbol, the machine halts), q0 ∈ Q is the initial state, and F ⊆ Q is the set of final or accepting states. The initial tape contents is said to be accepted by M if the machine eventually halts in a state from F. Anything that operates according to these specifications is a Turing machine. The 7-tuple for the 3-state busy beaver looks like this: Q = {A, B, C, HALT}, Γ = {0, 1}, b = 0, Σ = {1}, q0 = A, F = {HALT}, and δ = see state table below; initially all tape cells are marked with 0. In the words of van Emde Boas (p. 6), the set-theoretical object provides only partial information on how the machine will behave and what its computations will look like. For instance, there will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
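The machine description above can be made concrete with a short simulator. The sketch below is a minimal Python rendering; the dictionary-based tape and the particular transition table (one common Σ-champion table for the 3-state busy beaver) are illustrative assumptions, not taken from the text's own state table.

```python
def run_turing_machine(delta, state, halt_states, max_steps=10_000):
    """Simulate a Turing machine whose tape is blank (0) by default."""
    tape, pos, steps = {}, 0, 0
    while state not in halt_states and steps < max_steps:
        symbol = tape.get(pos, 0)              # read the scanned symbol
        write, move, state = delta[(state, symbol)]
        tape[pos] = write                      # alter the scanned symbol
        pos += 1 if move == "R" else -1        # move the tape head
        steps += 1
    return tape, steps

# One standard 3-state busy beaver table: writes six 1s, then halts.
BB3 = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "C"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "B"),
    ("C", 0): (1, "L", "B"), ("C", 1): (1, "R", "H"),
}
tape, steps = run_turing_machine(BB3, "A", {"H"})
```

Running it leaves six 1s on the tape after 13 steps, matching the busy-beaver behavior this table is known for.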
2.
Quantum simulator
–
Quantum simulators permit the study of quantum systems that are difficult to study in the laboratory and impossible to model with a supercomputer. In this instance, simulators are special-purpose devices designed to provide insight into specific physics problems. A universal quantum simulator is a quantum computer proposed by Richard Feynman in 1982. Feynman showed that a classical Turing machine would experience an exponential slowdown when simulating quantum phenomena; David Deutsch, in 1985, took the ideas further and described a universal quantum computer. In 1996, Seth Lloyd showed that a quantum computer can be programmed to simulate any local quantum system efficiently. A quantum system of many particles is described by a Hilbert space whose dimension is exponentially large in the number of particles. Therefore, the obvious approach to simulating such a system requires exponential time on a classical computer. However, it is conceivable that a system of many particles could be simulated by a quantum computer using a number of quantum bits similar to the number of particles in the original system. As shown by Lloyd, this is true for a class of systems known as local quantum systems, and it has since been extended to much larger classes of quantum systems. Quantum simulators have been realized on a number of experimental platforms, including systems of ultracold quantum gases, trapped ions, photonic systems, and superconducting circuits. A trapped-ion simulator, built by a team that included NIST and reported in April 2012, can engineer interactions among hundreds of qubits; previous endeavors were unable to go beyond 30 quantum bits. As described in the scientific journal Nature, the capability of this simulator is 10 times more than previous devices. It has also passed a series of important benchmarking tests that indicate a capability to solve problems in materials science that are impossible to model on conventional computers.
Furthermore, many important problems in physics, especially low-temperature physics, remain poorly understood because conventional computers, including supercomputers, are inadequate for simulating quantum systems with as few as 30 particles. The trapped-ion simulator consists of a tiny, single-plane crystal of hundreds of beryllium ions, less than 1 millimeter in diameter, hovering inside a device called a Penning trap. The outermost electron of each ion acts as a quantum magnet and is used as a qubit. In the benchmarking experiment, physicists used laser beams to cool the ions to near absolute zero; carefully timed microwave and laser pulses then caused the qubits to interact, mimicking the quantum behavior of materials otherwise very difficult to study in the laboratory. Although the two systems may outwardly appear dissimilar, their behavior is engineered to be mathematically identical. In this way, simulators allow researchers to vary parameters that couldn't be changed in natural solids, such as atomic lattice spacing and geometry. Friedenauer et al. adiabatically manipulated 2 spins, showing their separation into ferromagnetic and antiferromagnetic states; Kim et al. used adiabatic quantum simulation to demonstrate the sharpening of a phase transition between paramagnetic and ferromagnetic ordering as the number of spins increased from 2 to 9. Britton et al. from NIST have experimentally benchmarked Ising interactions in a system of hundreds of qubits for studies of quantum magnetism.
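The claim that conventional computers fail at roughly 30 particles can be illustrated with simple arithmetic. The sketch below is a rough back-of-the-envelope estimate (the 16-bytes-per-amplitude figure, a double-precision complex number, is an assumption of this illustration):

```python
BYTES_PER_AMPLITUDE = 16  # assumption: one double-precision complex number

def state_vector_bytes(n_particles):
    """Classical memory needed to store the full state vector of
    n spin-1/2 particles: 2**n amplitudes, one per basis state."""
    return (2 ** n_particles) * BYTES_PER_AMPLITUDE

for n in (10, 20, 30, 40):
    print(n, state_vector_bytes(n))  # doubles with every added particle
```

At n = 30 this is already about 16 GiB, and each additional particle doubles the requirement, whereas a quantum simulator needs only on the order of n qubits.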
3.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing, and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem. In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that algorithm took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu; it begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: Algorism is the art by which at present we use those Indian figures. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals. An informal definition could be a set of rules that precisely defines a sequence of operations, which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually; still, humans can do something equally useful in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability.
That notion is central for explaining how formal systems come into being, starting from a set of axioms. In logic, the time that an algorithm requires to complete cannot be measured. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term. Algorithms are essential to the way computers process data; thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute (Gurevich: Turing's informal argument in favor of his thesis justifies a stronger thesis). According to Savage, an algorithm is a computational process defined by a Turing machine. Typically, when an algorithm is associated with processing information, data can be read from an input source and written to an output device. Stored data are regarded as part of the state of the entity performing the algorithm; in practice, the state is stored in one or more data structures. For such a computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be dealt with, case by case.
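A classic instance of an algorithm in the above sense, a finite, unambiguous, effective procedure that stops eventually, is Euclid's algorithm for the greatest common divisor. A minimal Python sketch (the example is a standard illustration, not one named in the text):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b).

    The second argument strictly decreases on every iteration, so the
    procedure always terminates -- it is an algorithm in the strict
    sense that it stops eventually.
    """
    while b != 0:
        a, b = b, a % b
    return a
```

For example, gcd(1071, 462) runs the loop three times and terminates.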
4.
Quantum computing
–
Quantum computing studies theoretical computation systems that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Quantum computers are different from binary digital electronic computers based on transistors. A quantum Turing machine is a theoretical model of such a computer, and is also known as the universal quantum computer. The field of quantum computing was initiated by the work of Paul Benioff and Yuri Manin in 1980 and Richard Feynman in 1982. A quantum computer with spins as quantum bits was also formulated for use as a quantum space–time in 1968. There exist quantum algorithms, such as Simon's algorithm, that run faster than any possible probabilistic classical algorithm. A classical computer could in principle simulate a quantum algorithm, as quantum computation does not violate the Church–Turing thesis; on the other hand, quantum computers may be able to efficiently solve problems which are not practically feasible on classical computers. A classical computer has a memory made up of bits, where each bit is represented by either a one or a zero. A quantum computer instead maintains a sequence of qubits; in general, a quantum computer with n qubits can be in an arbitrary superposition of up to 2^n different states simultaneously. A quantum computer operates by setting the qubits in a drift that represents the problem at hand and applying a fixed sequence of quantum logic gates; the sequence of gates to be applied is called a quantum algorithm. The calculation ends with a measurement, collapsing the system of qubits into one of the 2^n pure states, where each qubit is zero or one, decomposing into a classical state. The outcome can therefore be at most n classical bits of information. Quantum algorithms are often probabilistic, in that they provide the correct solution only with a certain known probability. Note that the term non-deterministic computing must not be used in this case to mean probabilistic.
An example of an implementation of qubits for a quantum computer could start with the use of particles with two spin states, down and up. This works because any such system can be mapped onto an effective spin-1/2 system. A quantum computer with a given number of qubits is fundamentally different from a classical computer composed of the same number of classical bits; when the state of the qubits is measured, only one of the possible classical configurations is obtained. To better understand this point, consider a classical computer that operates on a three-bit register. If there is no uncertainty over its state, then it is in exactly one of these states with probability 1. However, if it is a probabilistic computer, then there is a possibility of it being in any one of a number of different states. The state of a quantum computer is similarly described by an eight-dimensional vector. Here, however, the coefficients a_k are complex numbers, and it is the sum of the squares of the absolute values, ∑_i |a_i|^2, that must equal 1.
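The three-bit register above can be sketched directly: eight complex amplitudes whose squared magnitudes sum to 1, with measurement sampling one classical outcome. A minimal pure-Python illustration (the particular superposition chosen is an assumption of the sketch):

```python
import math
import random

# State of a 3-qubit register: 8 complex amplitudes, indexed by the
# classical bit strings 000..111, with sum of |a_i|^2 equal to 1.
# Example state: equal superposition of |000> and |111>.
amps = [1 / math.sqrt(2), 0, 0, 0, 0, 0, 0, 1 / math.sqrt(2)]

probs = [abs(a) ** 2 for a in amps]   # Born-rule probabilities

def measure(probabilities):
    """Collapse the register: sample one classical 3-bit outcome."""
    r, acc = random.random(), 0.0
    for idx, p in enumerate(probabilities):
        acc += p
        if r < acc:
            return format(idx, "03b")
    return format(len(probabilities) - 1, "03b")
```

Each call to measure(probs) yields at most 3 classical bits, here always "000" or "111", illustrating why the outcome of measuring n qubits is only n classical bits of information.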
5.
Ran Raz
–
Ran Raz is a computer scientist who works in the area of computational complexity theory. He was a professor in the faculty of mathematics and computer science at the Weizmann Institute and he is now a professor of computer science at Princeton University. Ran Raz is well known for his work on interactive proof systems, and his two most-cited papers are Raz on multi-prover interactive proofs and Raz & Safra on probabilistically checkable proofs. Ran Raz received the Erdős Prize in 2002, and his work has received awards at the top conferences in theoretical computer science. In 2004, he received the best paper award at the ACM Symposium on Theory of Computing for Raz (2004); in 2008, the work of Moshkovitz & Raz received the best paper award at the IEEE Symposium on Foundations of Computer Science.
Raz, Ran; Safra, Shmuel. A sub-constant error-probability low-degree test, STOC 1997, pp. 475–484, doi:10.1145/258533.258641, ISBN 0-89791-888-6.
Raz, Ran. A parallel repetition theorem, SIAM Journal on Computing 27: 763–803.
Raz, Ran. Multi-linear formulas for permanent and determinant are of super-polynomial size, Proc. STOC 2004, pp. 633–641, doi:10.1145/1007352.1007353.
Raz, Ran; Shpilka, Amir. Deterministic polynomial identity testing in non-commutative models, Proc. CCC 2004, pp. 215–222, doi:10.1109/CCC.2004.1313845.
Moshkovitz, Dana; Raz, Ran. Two query PCP with sub-constant error, Proc. FOCS 2008, pp. 314–323, doi:10.1109/FOCS.2008.60, ISBN 978-0-7695-3436-7.
6.
Jones polynomial
–
In the mathematical field of knot theory, the Jones polynomial is a knot polynomial discovered by Vaughan Jones in 1984. Specifically, it is an invariant of a knot or link which assigns to each oriented knot or link a Laurent polynomial in the variable t^(1/2) with integer coefficients. Suppose we have an oriented link L, given as a knot diagram; we will define the Jones polynomial, V(L), using Kauffman's bracket polynomial, which we denote by ⟨ ⟩. Note that here the bracket polynomial is a Laurent polynomial in the variable A with integer coefficients. First, we define the auxiliary polynomial X(L) = (−A^3)^(−w(L)) ⟨L⟩, where w(L) denotes the writhe of L in its given diagram. The writhe of a diagram is the number of positive crossings minus the number of negative crossings; the writhe is not a knot invariant. X(L), however, is a knot invariant, since it is invariant under changes of the diagram of L by the three Reidemeister moves. Invariance under type II and III Reidemeister moves follows from invariance of the bracket under those moves. The bracket polynomial is known to change by multiplication by −A^(±3) under a type I Reidemeister move; the definition of the X polynomial given above is designed to nullify this change, since the writhe changes appropriately, by +1 or −1, under type I moves. Now make the substitution A = t^(−1/4) in X(L) to get the Jones polynomial V(L); this results in a Laurent polynomial with integer coefficients in the variable t^(1/2). The construction of the Jones polynomial for tangles is a generalization of the Kauffman bracket of a link. The construction was developed by Vladimir G. Turaev and published in 1990 in the Journal of Mathematics and Science. Let k be an integer and S_k denote the set of all isotopic types of tangle diagrams, with 2k ends, having no crossing points. Jones' original formulation of his polynomial came from his study of operator algebras. A theorem of Alexander's states that every link is the trace closure of a braid, say with n strands.
Now define a representation ρ of the braid group on n strands, B_n, into the Temperley–Lieb algebra. The standard braid generator σ_i is sent to A·e_i + A^(−1)·1, and it can be checked easily that this defines a representation. Take the braid word σ obtained previously from L and compute δ^(n−1) tr ρ(σ), where tr is the Markov trace; this gives ⟨L⟩, where ⟨ ⟩ is the bracket polynomial. This can be seen by considering, as Kauffman did, the Temperley–Lieb algebra as a particular diagram algebra. An advantage of this approach is that one can pick similar representations into other algebras, such as the R-matrix representations, leading to generalized Jones invariants. Thus an amphichiral knot, a knot equivalent to its mirror image, has palindromic entries in its Jones polynomial. See the article on skein relation for an example of a computation using these relations. Another remarkable property of this invariant states that the Jones polynomial of an alternating link is an alternating polynomial.
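The normalization step in the construction above can be summarized in a short display (a sketch restating the definitions already given in the text):

```latex
% Writhe-normalized bracket and the Jones polynomial:
X(L) = \left(-A^{3}\right)^{-w(L)} \langle L \rangle ,
\qquad
V(L) = X(L)\big|_{A = t^{-1/4}} .

% Under a type I Reidemeister move the bracket changes by a factor of
% -A^{\pm 3} while the writhe changes by \pm 1, so the prefactor
% cancels the change exactly:
\langle L' \rangle = -A^{\pm 3}\,\langle L \rangle ,
\quad
w(L') = w(L) \pm 1
\;\Longrightarrow\;
X(L') = X(L) .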
7.
International Standard Serial Number
–
An International Standard Serial Number (ISSN) is an eight-digit serial number used to uniquely identify a serial publication. The ISSN is especially helpful in distinguishing between serials with the same title. ISSNs are used in ordering, cataloging, interlibrary loans, and other practices in connection with serial literature. The ISSN system was first drafted as an International Organization for Standardization international standard in 1971; ISO subcommittee TC 46/SC9 is responsible for maintaining the standard. When a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published both in print and electronic media; the ISSN system refers to these types as print ISSN and electronic ISSN, respectively. The format of the ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers. As an integer number, it can be represented by the first seven digits. The last code digit, which may be 0–9 or an X, is a check digit. Formally, the form of the ISSN code can be expressed as follows: NNNN-NNNC, where N is a digit character and C is the check digit. The ISSN of the journal Hearing Research, for example, is 0378-5955, where the final 5 is the check digit. For calculations, an upper-case X in the check-digit position indicates a check digit of 10. To confirm the check digit, calculate the sum of all eight digits of the ISSN, each multiplied by its position in the number counting from the right; the modulus 11 of the sum must be 0. There is an online ISSN checker that can validate an ISSN. ISSN codes are assigned by a network of ISSN National Centres, usually located at national libraries and coordinated by the ISSN International Centre based in Paris. The International Centre is an organization created in 1974 through an agreement between UNESCO and the French government. The International Centre maintains a database of all ISSNs assigned worldwide; at the end of 2016, the ISSN Register contained records for 1,943,572 items.
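The check-digit rule above is easy to mechanize. A minimal Python sketch (the function names are illustrative): the first seven digits are weighted by position from the right (8 down to 2), and the check digit brings the total to 0 modulo 11.

```python
def issn_check_digit(first7):
    """Compute the ISSN check digit for the first seven digits.

    Each digit is weighted by its position counting from the right
    (weights 8 down to 2); the check digit makes the grand total a
    multiple of 11.  An upper-case X stands for a check value of 10.
    """
    total = sum(int(d) * w for d, w in zip(first7, range(8, 1, -1)))
    value = (11 - total % 11) % 11
    return "X" if value == 10 else str(value)

def is_valid_issn(issn):
    """Validate an ISSN given in the form 'NNNN-NNNC'."""
    digits = issn.replace("-", "")
    return issn_check_digit(digits[:7]) == digits[7].upper()
```

For the Hearing Research example from the text, issn_check_digit("0378595") returns "5", so 0378-5955 validates.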
ISSN and ISBN codes are similar in concept; where ISBNs are assigned to individual books, an ISBN might be assigned for particular issues of a serial, in addition to the ISSN code for the serial as a whole. An ISSN, unlike the ISBN code, is an identifier associated with a serial title. For this reason a new ISSN is assigned to a serial each time it undergoes a major title change. Separate ISSNs are needed for serials in different media: thus, the print and electronic versions of a serial need separate ISSNs, and a CD-ROM version and a web version of a serial require different ISSNs since two different media are involved. However, the same ISSN can be used for different file formats of the same online serial.
8.
Computational complexity theory
–
Computational complexity theory focuses on classifying computational problems according to their inherent difficulty. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication or the number of gates in a circuit. One of the roles of computational complexity theory is to determine the limits on what computers can and cannot do. Closely related fields in computer science are the analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; in contrast, an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing: the instance is a number, and the solution is yes if the number is prime. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, as in a real-world computer; mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. To keep results independent of the choice of encoding, one ensures that different representations can be transformed into each other efficiently.
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language; if the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string, otherwise it is said to reject the input. An example of a decision problem is the following
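The primality-testing example from the previous paragraph can be phrased directly as a decision problem, with the language being the set of instances whose answer is yes. A small Python sketch (naive trial division, for illustration only, not an efficient decider):

```python
def is_prime(n):
    """Decide the problem PRIMES: accept n iff n is prime."""
    if n < 2:
        return False          # reject: 0 and 1 are not prime
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False      # reject: a proper divisor was found
        d += 1
    return True               # accept

# The corresponding language: the set of accepted instances.
primes_below_20 = [n for n in range(20) if is_prime(n)]
```

Here accepting an input corresponds to returning True, rejecting to returning False, and the language is exactly the set of primes.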
9.
ArXiv
–
In many fields of mathematics and physics, almost all scientific papers are self-archived on the arXiv repository. Begun on August 14, 1991, arXiv.org passed the half-million-article milestone on October 3, 2008; by 2014 the submission rate had grown to more than 8,000 per month. The arXiv was made possible by the low-bandwidth TeX file format. Around 1990, Joanne Cohn began emailing physics preprints to colleagues as TeX files, but the number of papers being sent soon filled mailboxes to capacity. Additional modes of access were later added: FTP in 1991 and Gopher in 1992. The term e-print was quickly adopted to describe the articles, and arXiv's original domain name was xxx.lanl.gov. Due to LANL's lack of interest in the rapidly expanding technology, in 1999 Ginsparg changed institutions to Cornell University; arXiv is now hosted principally by Cornell, with 8 mirrors around the world. Its existence was one of the factors that led to the current movement in scientific publishing known as open access. Mathematicians and scientists regularly upload their papers to arXiv.org for worldwide access. Ginsparg was awarded a MacArthur Fellowship in 2002 for his establishment of arXiv. The annual budget for arXiv is approximately $826,000 for 2013 to 2017, funded jointly by Cornell University Library; annual donations were envisaged to vary in size between $2,300 and $4,000, based on each institution's usage. As of 14 January 2014, 174 institutions have pledged support for the period 2013–2017 on this basis. In September 2011, Cornell University Library took overall administrative and financial responsibility for arXiv's operation and development. Ginsparg was quoted in the Chronicle of Higher Education as saying it was supposed to be a three-hour tour; however, Ginsparg remains on the arXiv Scientific Advisory Board and on the arXiv Physics Advisory Committee.
The lists of moderators for many sections of the arXiv are publicly available. Additionally, an endorsement system was introduced in 2004 as part of an effort to ensure content is relevant and of interest to current research in the specified disciplines. Under the system, for categories that use it, an author must be endorsed by an established arXiv author before being allowed to submit papers to those categories. Endorsers are not asked to review the paper for errors. New authors from recognized academic institutions generally receive automatic endorsement, which in practice means that they do not need to deal with the endorsement system at all. However, the endorsement system has attracted criticism for allegedly restricting scientific inquiry. Perelman appears content to forgo the traditional peer-reviewed journal process, stating: If anybody is interested in my way of solving the problem, it's all there – let them go and read about it. The arXiv generally re-classifies such works, e.g. into General mathematics. Papers can be submitted in any of several formats, including LaTeX and PDF printed from a word processor other than TeX or LaTeX. The submission is rejected by the software if generating the final PDF file fails or if any image file is too large. ArXiv now allows one to store and modify an incomplete submission; the time stamp on the article is set when the submission is finalized.
10.
Quantum state
–
In quantum physics, quantum state refers to the state of an isolated quantum system. A quantum state provides a probability distribution for the value of each observable. Knowledge of the quantum state together with the rules for the system's evolution in time exhausts all that can be predicted about the system's behavior. A mixture of quantum states is again a quantum state. Quantum states that cannot be written as a mixture of other states are called pure quantum states. Mathematically, a pure quantum state can be represented by a ray in a Hilbert space over the complex numbers. The ray is a set of nonzero vectors differing by just a scalar factor; any of them can be chosen as a state vector to represent the ray. A unit vector is usually picked, but its phase factor can be chosen freely anyway. Nevertheless, such factors are important when state vectors are added together to form a superposition. Hilbert space is a generalization of the ordinary Euclidean space and it contains all possible pure quantum states of the given system. If this Hilbert space, by choice of representation, is exhibited as a function space, its elements are called wave functions. A more complicated case is given by the spin part of a state vector |ψ⟩ = (1/√2)(|↑↓⟩ − |↓↑⟩), which involves superposition of joint spin states for two particles with spin 1⁄2. A mixed quantum state corresponds to a probabilistic mixture of pure states; however, different distributions of pure states can generate equivalent mixed states. Mixed states are described by so-called density matrices. A pure state can also be recast as a density matrix; in this way, pure states can be represented as a subset of the more general mixed states. For example, if the spin of an electron is measured in any direction, e.g. with a Stern–Gerlach experiment, there are two possible results, up or down; the Hilbert space for the electron's spin is therefore two-dimensional. A mixed state, in this case, is a 2×2 matrix that is Hermitian, positive-definite, and has trace 1. These probability distributions arise for both mixed states and pure states: it is impossible in quantum mechanics to prepare a state in which all properties of the system are fixed and certain.
This is exemplified by the uncertainty principle, and reflects a core difference between classical and quantum physics. Even in quantum theory, however, for every observable there are some states that have an exact and determined value for that observable. In the mathematical formulation of quantum mechanics, pure quantum states correspond to vectors in a Hilbert space, while each observable quantity is associated with an operator. The operator serves as a linear function which acts on the states of the system.
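The pure/mixed distinction described above can be checked numerically through the purity tr(ρ²), which equals 1 exactly for pure states and is strictly less than 1 for proper mixtures. A small pure-Python sketch (the helper names are illustrative):

```python
def density_matrix(state):
    """rho = |psi><psi| for a pure state given as a list of amplitudes."""
    return [[a * b.conjugate() for b in state] for a in state]

def purity(rho):
    """tr(rho^2): equals 1 for pure states, < 1 for proper mixtures."""
    n = len(rho)
    return sum(rho[i][j] * rho[j][i]
               for i in range(n) for j in range(n)).real

# A pure qubit state (equal superposition) recast as a density matrix,
# versus the maximally mixed single-qubit state.
pure = density_matrix([1 / 2 ** 0.5, 1 / 2 ** 0.5])
mixed = [[0.5, 0.0], [0.0, 0.5]]
```

Here purity(pure) is 1 (up to rounding) while purity(mixed) is 0.5, the minimum possible for a single qubit.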
11.
Polynomial
–
In mathematics, a polynomial is an expression consisting of variables and coefficients that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents. An example of a polynomial of a single indeterminate x is x^2 − 4x + 7; an example in three variables is x^3 + 2xyz^2 − yz + 1. Polynomials appear in a variety of areas of mathematics and science. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, central concepts in algebra. The word polynomial joins two diverse roots: the Greek poly, meaning many, and the Latin nomen, or name. It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-. The word polynomial was first used in the 17th century. The x occurring in a polynomial is commonly called either a variable or an indeterminate. When the polynomial is considered as an expression, x is a symbol which does not have any value; it is thus correct to call it an indeterminate. However, when one considers the function defined by the polynomial, then x represents the argument of the function; many authors use these two words interchangeably. It is a convention to use uppercase letters for the indeterminates. One may consider the function defined by a polynomial over any domain where addition and multiplication are defined; in particular, when the argument a is the indeterminate x itself, the image of x by this function is the polynomial P itself. This equality allows writing "let P be a polynomial" as a shorthand for "let P be a polynomial in the indeterminate x". A polynomial is an expression that can be built from constants and indeterminates. The word indeterminate means that x represents no particular value, although any value may be substituted for it; the mapping that associates the result of this substitution to the substituted value is a function. A polynomial can be expressed concisely by using summation notation: ∑_{k=0}^{n} a_k x^k.
Each term consists of the product of a number, called the coefficient of the term, and a finite number of indeterminates. Because x = x^1, the degree of an indeterminate without a written exponent is one. A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial; the degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial, 0, is generally treated as not defined.
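The summation form ∑_{k=0}^{n} a_k x^k suggests a direct evaluation routine. The sketch below uses Horner's rule, an equivalent rewriting not mentioned in the text, with coefficients listed from the constant term up:

```python
def eval_poly(coeffs, x):
    """Evaluate sum(a_k * x**k) for coeffs = [a_0, a_1, ..., a_n],
    using Horner's rule: a_0 + x*(a_1 + x*(a_2 + ...))."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

# The single-indeterminate example from the text, x^2 - 4x + 7,
# has coefficient list [7, -4, 1].
```

For instance, eval_poly([7, -4, 1], 2) computes 4 − 8 + 7 = 3, and the value at x = 0 is the constant term.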
12.
Quantum circuit
–
This analogous structure is referred to as an n-qubit register. The elementary logic gates of a classical computer, other than the NOT gate, are not reversible. A reversible gate is a reversible function on n-bit data that returns n-bit data. The set of n-bit data is the space {0,1}^n, which consists of 2^n strings of 0s and 1s. More precisely, a reversible n-bit gate is a bijective mapping f from the set {0,1}^n of n-bit data onto itself. An example of such a reversible gate f is a mapping that applies a fixed permutation to its inputs. For reasons of practical engineering, one typically studies gates only for small values of n, e.g. n = 1, n = 2, or n = 3; these gates can be described by tables. To define quantum gates, we first need to specify the quantum replacement of an n-bit datum. The quantized version of classical n-bit space {0,1}^n is the Hilbert space H_QB = ℓ²({0,1}^n). This is by definition the space of complex-valued functions on {0,1}^n and is naturally an inner product space; this space can also be regarded as consisting of linear superpositions of classical bit strings. Note that H_QB is a space over the complex numbers of dimension 2^n. The elements of this space are called n-qubits. Using Dirac ket notation, if x1, x2, …, xn is a classical bit string, then |x1, x2, …, xn⟩ is a special n-qubit called a computational basis state; all n-qubits are complex linear combinations of these computational basis states. Quantum logic gates, in contrast to classical logic gates, are always reversible. One requires a special kind of reversible function, namely a unitary mapping, that is, a linear transformation of a complex inner product space that preserves the Hermitian inner product. An n-qubit quantum gate is a unitary mapping U from the space H_QB of n-qubits onto itself; typically, we are only interested in gates for small values of n. A reversible n-bit classical logic gate gives rise to a reversible n-bit quantum gate as follows: to each reversible n-bit logic gate f corresponds a quantum gate W_f, defined on basis states by sending |x⟩ to |f(x)⟩ and extended linearly; note that W_f permutes the computational basis states.
Of particular importance is the controlled NOT gate W_CNOT, defined on the quantized 2-qubit space. Other examples of quantum logic gates derived from classical ones are the Toffoli gate and the Fredkin gate. However, the Hilbert-space structure of the qubits permits many quantum gates that are not induced by classical ones. Again, we consider first reversible classical computation: conceptually, there is no difference between a reversible n-bit circuit and a reversible n-bit logic gate; either one is just a function on the space of n-bit data.
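The correspondence f ↦ W_f described above can be sketched in code: a classical reversible gate becomes a permutation of the 2^n computational-basis amplitudes. A minimal Python illustration (the most-significant-bit-first ordering and the helper names are assumptions of this sketch):

```python
def cnot(bits):
    """Classical reversible CNOT on 2 bits: flip the target iff the
    control bit is 1."""
    control, target = bits
    return (control, target ^ control)

def lift_to_quantum(f, n):
    """W_f: permute the 2**n computational-basis amplitudes by f,
    i.e. send the basis state |x> to |f(x)> and extend linearly."""
    def W(amps):
        out = [0.0] * len(amps)
        for idx, a in enumerate(amps):
            bits = tuple((idx >> (n - 1 - k)) & 1 for k in range(n))
            new = f(bits)
            j = sum(b << (n - 1 - k) for k, b in enumerate(new))
            out[j] += a
        return out
    return W

W_cnot = lift_to_quantum(cnot, 2)
```

Applied to the basis state |10⟩ (amplitude 1 at index 2), W_cnot yields |11⟩, while |00⟩ is left unchanged, exactly the permutation behavior of the classical gate.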