1.
Parallel computing
–
Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has been employed for many years, mainly in high-performance computing. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture; specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance. A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law.

Traditionally, computer software has been written for serial computation: to solve a problem, an algorithm is constructed and implemented as a serial stream of instructions. These instructions are executed on a central processing unit on one computer; only one instruction may execute at a time, and after that instruction is finished, the next one is executed. Parallel computing, on the other hand, uses multiple processing elements simultaneously to solve a problem. This is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, or specialized hardware. Frequency scaling was the dominant reason for improvements in computer performance from the mid-1980s until 2004. The runtime of a program is equal to the number of instructions multiplied by the average time per instruction.
Holding everything else constant, increasing the clock frequency decreases the average time it takes to execute an instruction; an increase in frequency thus decreases runtime for all compute-bound programs. However, the power consumption P of a chip is given by the equation P = C × V² × F, where C is the capacitance being switched per clock cycle, V is the voltage, and F is the frequency, so increases in frequency increase the amount of power used in a processor. Moore's law is the observation that the number of transistors in a microprocessor doubles every 18 to 24 months; despite power consumption issues and repeated predictions of its end, it has continued to hold. With the end of frequency scaling, these additional transistors can be used to add extra hardware for parallel computing. Optimally, the speedup from parallelization would be linear: doubling the number of processing elements should halve the runtime. However, very few parallel algorithms achieve optimal speedup.
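The limit that Amdahl's law places on this speedup can be sketched directly from its formula. A minimal illustration (the function name and sample fractions are chosen for the example, not taken from the text):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: overall speedup of a program in which only
    'parallel_fraction' of the work can be spread over n processors;
    the remaining serial fraction is unaffected by adding processors."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# A program that is 90% parallelizable gains only modestly on 10 cores,
# and can never exceed 1 / (1 - 0.90) = 10x, even with a million cores.
print(round(amdahl_speedup(0.90, 10), 2))      # → 5.26
print(round(amdahl_speedup(0.90, 10**6), 2))   # → 10.0
```

This is why communication, synchronization, and the residual serial fraction, rather than raw processor count, usually dominate parallel program performance.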
2.
Cambridge University Press
–
Cambridge University Press is the publishing business of the University of Cambridge. Granted letters patent by Henry VIII in 1534, it is the world's oldest publishing house, and it also holds letters patent as the Queen's Printer. The Press's mission is to further the University's mission by disseminating knowledge in the pursuit of education, learning, and research. Cambridge University Press is a department of the University of Cambridge and is both an academic and educational publisher, with a global presence, publishing hubs, and offices in more than 40 countries. Its publishing includes journals, monographs, reference works, and textbooks. Cambridge University Press is an enterprise that transfers part of its annual surplus back to the university.

Cambridge University Press is both the oldest publishing house in the world and the oldest university press. It originated from the letters patent granted to the University of Cambridge by Henry VIII in 1534, and has been producing books continuously since the first University Press book was printed. Cambridge is one of the two privileged presses; authors published by Cambridge have included John Milton, William Harvey, Isaac Newton, Bertrand Russell, and Stephen Hawking. In 1591, Thomas's successor, John Legate, printed the first Cambridge Bible. The London Stationers objected strenuously, claiming that they had the monopoly on Bible printing; the university's response was to point out the provision in its charter to print "all manner of books". In July 1697 the Duke of Somerset made a loan of £200 to the university towards the house and press, with further support from James Halman, Registrary of the University. It was in Bentley's time, in 1698, that a body of scholars was appointed to be responsible to the university for the Press's affairs. The Press Syndicate's publishing committee still meets regularly, and its role still includes reviewing what the Press publishes. John Baskerville became University Printer in the mid-eighteenth century.
Baskerville's concern was the production of the finest possible books using his own type designs. A technological breakthrough was badly needed, and it came when Lord Stanhope perfected the making of stereotype plates. This involved making a mould of the surface of a page of type. The Press was the first to use this technique, and in 1805 produced a technically successful stereotyped edition. Under the stewardship of C. J. Clay, who was University Printer from 1854 to 1882, the Press increased the size and scale of its academic and educational publishing operation. An important factor in this increase was the inauguration of its list of schoolbooks. During Clay's administration, the Press also undertook a sizable co-publishing venture with Oxford: the Revised Version of the Bible, which was begun in 1870 and completed in 1885. It was Wright who devised the plan for one of the most distinctive Cambridge contributions to publishing, the Cambridge Histories; the Cambridge Modern History was published between 1902 and 1912.
3.
Stephen Cook
–
Stephen Arthur Cook, OC OOnt, is an American-Canadian computer scientist and mathematician who has made major contributions to the fields of complexity theory and proof complexity. He is currently a university professor at the University of Toronto, Department of Computer Science. Cook received his bachelor's degree in 1961 from the University of Michigan, and his master's degree and Ph.D. from Harvard University, in 1962 and 1966 respectively. He joined the University of California, Berkeley, mathematics department in 1966 as an assistant professor. Stephen Cook is considered one of the forefathers of computational complexity theory.

During his PhD, Cook worked on the complexity of functions, mainly on multiplication. His landmark theorem on NP-completeness was proven independently by Leonid Levin in the Soviet Union, and has thus been given the name the Cook–Levin theorem. The paper also formulated the most famous problem in computer science, P vs. NP. Informally, the P vs. NP question asks whether every optimization problem whose answers can be efficiently verified for correctness/optimality can be solved optimally with an efficient algorithm. Given the abundance of such problems in everyday life, a positive answer to the P vs. NP question would likely have profound practical and philosophical consequences. Cook conjectures that there are problems which cannot be solved by efficient algorithms. The conjecture remains open and is among the seven famous Millennium Prize Problems. In 1982, Cook received the Turing Award for his contributions to complexity theory. His citation reads: "For his advancement of our understanding of the complexity of computation in a significant and profound way." His seminal paper, "The Complexity of Theorem Proving Procedures", presented at the 1971 ACM SIGACT Symposium on the Theory of Computing, laid the foundations for the theory of NP-completeness.
The ensuing exploration of the boundaries and nature of the class of NP-complete problems has been one of the most active areas of research in computer science. He made another major contribution to the field in his 1979 paper, joint with his student Robert A. Reckhow, in which they proved that the existence of a proof system in which every true formula has a short proof is equivalent to NP = coNP. Cook co-authored a book in this area with his student Phuong The Nguyen titled Logical Foundations of Proof Complexity. His main research areas are complexity theory and proof complexity, with excursions into programming language semantics, parallel computation, and artificial intelligence. He named the complexity class NC after Nick Pippenger; the complexity class SC is named after him. The definition of the complexity class AC0 and its hierarchy AC were also introduced by him. According to Don Knuth, the KMP algorithm was inspired by Cook's automata for recognizing concatenated palindromes in linear time.

Cook was awarded a Steacie Fellowship in 1977 and a Killam Research Fellowship in 1982. He has won the John L. Synge Award and the Bernard Bolzano Medal, and is a fellow of the Royal Society of London and the Royal Society of Canada. Cook was elected to membership in the National Academy of Sciences, and won the ACM Turing Award in 1982. The Association for Computing Machinery honored him as a Fellow of ACM in 2008 for his contributions to the theory of computational complexity. The Government of Ontario appointed him to the Order of Ontario in 2013, and he won the 2012 Gerhard Herzberg Canada Gold Medal for Science and Engineering, the highest honor for scientists and engineers in Canada.
4.
Adder (electronics)
–
An adder is a digital circuit that performs addition of numbers. In many computers and other kinds of processors, adders are used in the arithmetic logic units; they are also utilized in other parts of the processor, where they are used to calculate addresses, table indices, increment and decrement operators, and similar operations. Although adders can be constructed for many number representations, such as binary-coded decimal or excess-3, the most common adders operate on binary numbers. In cases where two's complement or one's complement is being used to represent negative numbers, it is trivial to modify an adder into an adder–subtractor; other signed number representations require more logic around the basic adder.

The half adder adds two single binary digits A and B. It has two outputs, sum (S) and carry (C); the carry signal represents an overflow into the next digit of a multi-digit addition. The value of the sum in the decimal system is 2C + S. The simplest half-adder design incorporates an XOR gate for S and an AND gate for C; with the addition of an OR gate to combine their carry outputs, two half adders can be combined to make a full adder. The half adder adds two input bits and generates a carry and a sum, which are the two outputs of a half adder. The input variables of a half adder are called the augend and addend bits; the output variables are the sum and carry. The truth table for the half adder is:

  A B | C S
  0 0 | 0 0
  0 1 | 0 1
  1 0 | 0 1
  1 1 | 1 0

A full adder adds binary numbers and accounts for values carried in as well as out. A one-bit full adder adds three one-bit numbers, often written as A, B, and Cin; A and B are the operands, and Cin is a bit carried in from the previous, less-significant stage. The full adder is usually a component in a cascade of adders. The circuit produces a two-bit output: an output carry and a sum, typically represented by the signals Cout and S. A full adder can be implemented in many different ways, such as with a custom transistor-level circuit or composed of other gates.
One example implementation is with S = A ⊕ B ⊕ Cin and Cout = (A · B) + (Cin · (A ⊕ B)). In this implementation, the final OR gate before the carry-out output may be replaced by an XOR gate without altering the resulting logic. Using only two types of gates is convenient if the circuit is being implemented using simple IC chips which contain only one gate type per chip. The critical path of a full adder runs through both XOR gates and ends at the sum bit S. Assuming that an XOR gate takes one delay D to complete, the delay imposed by the critical path of a full adder is T_FA = 2 · T_XOR = 2D. The truth table for the full adder is:

  A B Cin | Cout S
  0 0  0  |  0   0
  0 0  1  |  0   1
  0 1  0  |  0   1
  0 1  1  |  1   0
  1 0  0  |  0   1
  1 0  1  |  1   0
  1 1  0  |  1   0
  1 1  1  |  1   1

It is possible to create a logical circuit using multiple full adders to add N-bit numbers. Each full adder inputs a Cin, which is the Cout of the previous adder. This kind of adder is called a ripple-carry adder, since each carry bit "ripples" to the next full adder.
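The gate equations and the carry "ripple" can be simulated in a few lines. A minimal sketch, using the sum equation from the text together with the standard majority-style carry Cout = (A · B) + (Cin · (A ⊕ B)); the least-significant-bit-first list encoding is a choice made for this example:

```python
def full_adder(a, b, cin):
    """One-bit full adder: S = A xor B xor Cin,
    Cout = (A and B) or (Cin and (A xor B))."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return cout, s

def ripple_carry_add(a_bits, b_bits):
    """Add two equal-length bit lists (least-significant bit first);
    each stage's carry-out ripples into the next stage's carry-in."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        carry, s = full_adder(a, b, carry)
        out.append(s)
    return out + [carry]  # final carry becomes the top output bit

# 6 is [0, 1, 1] LSB-first, 3 is [1, 1, 0]; 6 + 3 = 9 is [1, 0, 0, 1].
print(ripple_carry_add([0, 1, 1], [1, 1, 0]))  # → [1, 0, 0, 1]
```

Note how the carry variable threads through the loop exactly as the carry wire threads through a chain of hardware full adders.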
5.
Sanjeev Arora
–
Sanjeev Arora is an Indian American theoretical computer scientist who is best known for his work on probabilistically checkable proofs and, in particular, the PCP theorem. He is currently a professor of computer science at Princeton University. He received a B.S. in Mathematics with Computer Science from MIT in 1990 and a Ph.D. in Computer Science from the University of California, Berkeley, in 1994 under Umesh Vazirani. Earlier, in 1986, Sanjeev Arora had topped the prestigious IIT JEE. He was a visiting scholar at the Institute for Advanced Study in 2002–03. His Ph.D. thesis on probabilistically checkable proofs received the ACM Doctoral Dissertation Award in 1995. He was awarded the Gödel Prize for his work on the PCP theorem in 2001, and in 2008 he was inducted as a Fellow of the Association for Computing Machinery. In 2011 he was awarded the ACM Infosys Foundation Award, given to researchers in Computer Science. Arora was awarded the Fulkerson Prize for 2012 for his work on improving the approximation ratio for graph separators. He is a coauthor of the book Computational Complexity: A Modern Approach, and is a founder of, and on the Executive Board of, Princeton's Center for Computational Intractability. He and his coauthors have argued that certain products are associated with computational asymmetry, which under certain conditions may lead to market instability.
6.
Christos Papadimitriou
–
Christos Harilaos Papadimitriou is a Greek theoretical computer scientist and professor of Computer Science at the University of California, Berkeley. Papadimitriou studied at the National Technical University of Athens, where in 1972 he received his Bachelor of Arts degree in Electrical Engineering. He continued his studies at Princeton University, where he received his MS in Electrical Engineering in 1974 and his PhD in Electrical Engineering. Papadimitriou has taught at Harvard, MIT, the National Technical University of Athens, Stanford, and UCSD, and is currently the C. Lester Hogan Professor of Electrical Engineering and Computer Science at U.C. Berkeley. In 2001, Papadimitriou was inducted as a Fellow of the Association for Computing Machinery, and in 2002 he was awarded the Knuth Prize. He became a fellow of the U.S. National Academy of Engineering for contributions to complexity theory and database theory, and in 2009 he was elected to the US National Academy of Sciences. In 2012, he, along with Elias Koutsoupias, was awarded the Gödel Prize for their joint work on the concept of the price of anarchy. Papadimitriou is the author of the textbook Computational Complexity, one of the most widely used textbooks in the field of complexity theory. He has also co-authored the textbook Algorithms with Sanjoy Dasgupta and Umesh Vazirani, and his name was listed in the 19th position on the CiteSeer search engine academic database and digital library. In 2011, Papadimitriou received a doctorate honoris causa from the National Technical University of Athens, and in 2013 he received a doctorate honoris causa from the École polytechnique fédérale de Lausanne. Papadimitriou was awarded the IEEE John von Neumann Medal in 2016, the EATCS Award in 2015, and the Gödel Prize in 2012.

His books include Elements of the Theory of Computation (Prentice-Hall, 1982; second edition September 1997; Greek edition); Combinatorial Optimization: Algorithms and Complexity (Prentice-Hall, 1982; second edition Dover, 1998); The Theory of Database Concurrency Control; a compilation of articles written for the Greek newspaper To Vima (McGraw-Hill, September 2006); and Logicomix: An Epic Search for Truth (Bloomsbury Publishing and Bloomsbury USA, September 2009). He co-authored a paper with Bill Gates, co-founder of Microsoft. At UC Berkeley, in 2006, he joined a professor-and-graduate-student band called Lady X and The Positive Eigenvalues.
7.
International Standard Serial Number
–
An International Standard Serial Number (ISSN) is an eight-digit serial number used to uniquely identify a serial publication. The ISSN is especially helpful in distinguishing between serials with the same title. ISSNs are used in ordering, cataloging, interlibrary loans, and other practices in connection with serial literature. The ISSN system was first drafted as an International Organization for Standardization (ISO) international standard in 1971; ISO subcommittee TC 46/SC9 is responsible for maintaining the standard. When a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published both in print and electronic media; the ISSN system refers to these types as print ISSN and electronic ISSN, respectively.

The format of the ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers. As an integer number, it can be represented by the first seven digits. The last code digit, which may be 0–9 or an X, is a check digit. Formally, the general form of the ISSN code can be expressed as NNNN-NNNC, where N is a digit character and C is a digit character or the letter X. The ISSN of the journal Hearing Research, for example, is 0378-5955, where the final 5 is the check digit. For calculations, an upper-case X in the check digit position indicates a check digit of 10. To confirm the check digit, calculate the sum of all eight digits of the ISSN, each multiplied by its position in the number counting from the right; the sum modulo 11 must be 0. There is an online ISSN checker that can validate an ISSN. ISSN codes are assigned by a network of ISSN National Centres, usually located at national libraries and coordinated by the ISSN International Centre based in Paris. The International Centre is an organization created in 1974 through an agreement between UNESCO and the French government. The International Centre maintains a database of all ISSNs assigned worldwide; at the end of 2016, the ISSN Register contained records for 1,943,572 items.
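The check rule above can be turned into a short validator. A minimal sketch, assuming positions are counted from the right so that the leftmost digit gets weight 8 and the check digit itself gets weight 1 (which makes the Hearing Research example 0378-5955 sum to a multiple of 11):

```python
def issn_valid(issn):
    """Validate an ISSN: weight each of the eight characters by its
    position counted from the right (8 down to 1); the weighted sum
    must be divisible by 11.  An upper-case 'X' counts as 10."""
    chars = issn.replace("-", "").upper()
    if len(chars) != 8:
        return False
    digits = [10 if c == "X" else int(c) for c in chars]
    total = sum(w * d for w, d in zip(range(8, 0, -1), digits))
    return total % 11 == 0

print(issn_valid("0378-5955"))  # → True   (check digit 5 is correct)
print(issn_valid("0378-5954"))  # → False  (wrong check digit)
```

Changing any single digit changes the weighted sum by a non-multiple of 11, which is why the check digit catches single-digit transcription errors.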
ISSN and ISBN codes are similar in concept; where ISBNs are assigned to individual books, an ISBN might be assigned for particular issues of a serial, in addition to the ISSN code for the serial as a whole. An ISSN, unlike the ISBN code, is an identifier associated with a serial title. For this reason, a new ISSN is assigned to a serial each time it undergoes a major title change. Separate ISSNs are needed for serials in different media: thus, the print and electronic versions of a serial need separate ISSNs. Also, a CD-ROM version and a web version of a serial require different ISSNs, since two different media are involved; however, the same ISSN can be used for different file formats of the same online serial.
8.
Computational complexity theory
–
Computational complexity theory focuses on classifying computational problems according to their inherent difficulty. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication or the number of gates in a circuit. One of the roles of computational complexity theory is to determine the limits on what computers can and cannot do. Closely related fields in computer science are the analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources.

A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; in contrast, an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing: the instance is a number, and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, as in a real-world computer, so mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. Keeping the theory independent of the particular encoding chosen can be achieved by ensuring that different representations can be transformed into each other efficiently.
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are exactly those instances whose output is yes. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is the primality testing mentioned above: the instance is a representation of a number, and the answer is yes exactly when the number is prime.
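The primality example can be written as a concrete decision procedure. A minimal sketch (the trial-division method and function name are illustrative choices, not part of the text):

```python
def is_prime(n):
    """Decide the primality-testing problem: the instance is a
    natural number n, and the answer is yes exactly when n is prime."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # reject: n has a nontrivial divisor
        d += 1
    return True  # accept: n belongs to the language of primes

# The language of the problem is the set of instances answered "yes":
print([n for n in range(2, 20) if is_prime(n)])  # → [2, 3, 5, 7, 11, 13, 17, 19]
```

Accepting or rejecting each input is exactly the "member of the language" test described above.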
9.
Big O notation
–
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann and Edmund Landau. In computer science, big O notation is used to classify algorithms according to how their running time or space requirements grow as the input size grows. Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function. A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o, Ω, and ω. Big O notation is also used in many other fields to provide similar estimates.

Let f and g be two functions defined on some subset of the real numbers. One writes f(x) = O(g(x)) as x → ∞ if and only if there exist a positive real number M and a real number x0 such that |f(x)| ≤ M|g(x)| for all x ≥ x0. In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated.

If f is a product of several factors, any constants can be omitted. For example, let f(x) = 6x⁴ − 2x³ + 5, and suppose we wish to simplify this function, using O notation, to describe its growth rate as x approaches infinity. This function is the sum of three terms: 6x⁴, −2x³, and 5. Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of x, namely 6x⁴. Now one may apply the rule: 6x⁴ is a product of 6 and x⁴, in which the first factor does not depend on x. Omitting this factor results in the simplified form x⁴. Thus, we say that f(x) is a "big O" of x⁴; mathematically, we can write f(x) = O(x⁴). One may confirm this calculation using the formal definition: let f(x) = 6x⁴ − 2x³ + 5 and g(x) = x⁴.
Applying the formal definition from above, the statement that f(x) = O(x⁴) is equivalent to its expansion |f(x)| ≤ M|x⁴| for all x ≥ x0, for some choice of x0 and M. To prove this, let x0 = 1 and M = 13: for all x ≥ 1 we have |6x⁴ − 2x³ + 5| ≤ 6x⁴ + 2x³ + 5 ≤ 6x⁴ + 2x⁴ + 5x⁴ = 13x⁴ = 13|x⁴|. Big O notation has two main areas of application. In mathematics, it is used to describe how closely a finite series approximates a given function. In computer science, it is useful in the analysis of algorithms. In both applications, the function g appearing within the O is typically chosen to be as simple as possible, omitting constant factors and lower order terms.
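The choice x0 = 1, M = 13 can also be spot-checked numerically. A small sketch that samples the bound |6x⁴ − 2x³ + 5| ≤ 13|x⁴| at several points x ≥ 1 (sampling does not replace the proof; it just illustrates the definition):

```python
def f(x):
    """The example function f(x) = 6x^4 - 2x^3 + 5 from the text."""
    return 6 * x**4 - 2 * x**3 + 5

# Check |f(x)| <= M * |x^4| with M = 13 at sample points x >= x0 = 1.
samples = [1, 1.5, 2, 10, 100, 1000]
assert all(abs(f(x)) <= 13 * abs(x**4) for x in samples)
print("bound |f(x)| <= 13|x^4| holds at all sampled x >= 1")
```

At x = 1 the two sides are 9 and 13; as x grows, the ratio |f(x)|/x⁴ approaches 6, comfortably below M = 13.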
10.
NP-completeness
–
In computational complexity theory, a decision problem is NP-complete when it is both in NP and NP-hard. The set of NP-complete problems is often denoted by NP-C or NPC. The abbreviation NP refers to "nondeterministic polynomial time". Although a solution to an NP-complete problem can be verified quickly, there is no known efficient way to find one; the time required to solve the problem using any currently known algorithm increases very quickly as the size of the problem grows. As a consequence, determining whether or not it is possible to solve these problems quickly remains a fundamental open question, and in practice NP-complete problems are often addressed by using heuristic methods and approximation algorithms.

A problem p in NP is NP-complete if every problem in NP can be transformed into p in polynomial time. NP-complete problems are studied because the ability to quickly verify solutions to a problem seems to correlate with the ability to quickly solve that problem. It is not known whether every problem in NP can be quickly solved; this is called the P versus NP problem. Because of this, it is often said that NP-complete problems are harder or more difficult than NP problems in general. A decision problem C is NP-complete if: (1) C is in NP, and (2) every problem in NP is reducible to C in polynomial time. C can be shown to be in NP by demonstrating that a candidate solution to C can be verified in polynomial time. Note that a problem satisfying condition 2 alone is said to be NP-hard. A consequence of this definition is that if we had a polynomial-time algorithm for C, we could solve all problems in NP in polynomial time.

The concept of NP-completeness was introduced in 1971, though the term NP-complete was introduced later. At the 1971 STOC conference, there was a fierce debate among computer scientists about whether NP-complete problems could be solved in polynomial time on a deterministic Turing machine. This is known as the question of whether P = NP. Nobody has yet been able to determine conclusively whether NP-complete problems are in fact solvable in polynomial time, making this one of the great unsolved problems of mathematics.
The Clay Mathematics Institute is offering a US $1 million reward to anyone who has a proof that P = NP or that P ≠ NP. The Cook–Levin theorem states that the Boolean satisfiability problem is NP-complete. In 1972, Richard Karp proved that several other problems were also NP-complete; thus there is a class of NP-complete problems. For more details, refer to Introduction to the Design and Analysis of Algorithms by Anany Levitin.

An interesting example is the graph isomorphism problem, the graph-theory problem of determining whether a graph isomorphism exists between two graphs. Two graphs are isomorphic if one can be transformed into the other simply by renaming vertices. Consider these two problems. Graph Isomorphism: Is graph G1 isomorphic to graph G2? Subgraph Isomorphism: Is graph G1 isomorphic to a subgraph of graph G2? The Subgraph Isomorphism problem is NP-complete, while the graph isomorphism problem is suspected to be neither in P nor NP-complete. This is an example of a problem that is thought to be hard, but is not thought to be NP-complete.