Cambridge University Press
Cambridge University Press is the publishing business of the University of Cambridge. Granted letters patent by King Henry VIII in 1534, it is the world's oldest publishing house and the second-largest university press in the world, and it holds letters patent as the Queen's Printer. The press's stated mission is "to further the University's mission by disseminating knowledge in the pursuit of education and research at the highest international levels of excellence". Cambridge University Press is a department of the University of Cambridge and is both an academic and educational publisher. With a global sales presence, publishing hubs and offices in more than 40 countries, it publishes over 50,000 titles by authors from over 100 countries; its publishing includes academic journals, reference works and English language teaching and learning publications. Cambridge University Press is a charitable enterprise that transfers part of its annual surplus back to the university. It is the oldest university press as well as the oldest publishing house in the world.
It originated from letters patent granted to the University of Cambridge by Henry VIII in 1534, and it has been producing books continuously since the first University Press book was printed. Cambridge is one of the two privileged presses, the other being Oxford University Press. Authors published by Cambridge have included John Milton, William Harvey, Isaac Newton, Bertrand Russell and Stephen Hawking. University printing began in Cambridge when the first practising University Printer, Thomas Thomas, set up a printing house on the site of what became the Senate House lawn, a few yards from where the press's bookshop now stands. In those days, the Stationers' Company in London jealously guarded its monopoly of printing, which explains the delay between the date of the university's letters patent and the printing of the first book. In 1591, Thomas's successor, John Legate, printed the first Cambridge Bible, an octavo edition of the popular Geneva Bible; the London Stationers objected strenuously. The university's response was to point out the provision in its charter to print "all manner of books".
Thus began the press's tradition of publishing the Bible, a tradition that has endured for over four centuries, beginning with the Geneva Bible and continuing with the Authorized Version, the Revised Version, the New English Bible and the Revised English Bible. The restrictions and compromises forced upon Cambridge by the dispute with the London Stationers did not come to an end until the scholar Richard Bentley was given the power to set up a 'new-style press' in 1696. In July 1697 the Duke of Somerset made a loan of £200 to the university "towards the printing house and presse", and James Halman, Registrary of the University, lent £100 for the same purpose. It was in Bentley's time, in 1698, that a body of senior scholars was appointed to be responsible to the university for the press's affairs. The Press Syndicate's publishing committee still meets, and its role still includes the review and approval of the press's planned output. John Baskerville became University Printer in the mid-eighteenth century.
Baskerville's concern was the production of the finest possible books using his own type-design and printing techniques. Baskerville wrote, "The importance of the work demands all my attention." Caxton would have found nothing to surprise him if he had walked into the press's printing house in the eighteenth century: all the type was still being set by hand. A technological breakthrough was badly needed, and it came when Lord Stanhope perfected the making of stereotype plates; this involved making a mould of the whole surface of a page of type and casting plates from that mould. The press was the first to use this technique, and in 1805 it produced the technically successful and much-reprinted Cambridge Stereotype Bible. By the 1850s the press was using steam-powered machine presses, employing two to three hundred people, and occupying several buildings in the Silver Street and Mill Lane area, including the one that the press still occupies, the Pitt Building, built for the press in honour of William Pitt the Younger.
Under the stewardship of C. J. Clay, University Printer from 1854 to 1882, the press increased the size and scale of its academic and educational publishing operation. An important factor in this increase was the inauguration of its list of schoolbooks. During Clay's administration, the press undertook a sizeable co-publishing venture with Oxford: the Revised Version of the Bible, begun in 1870 and completed in 1885. It was in this period as well that the Syndics of the press turned down what became the Oxford English Dictionary, a proposal brought to Cambridge by James Murray before he turned to Oxford. The appointment of R. T. Wright as Secretary of the Press Syndicate in 1892 marked the beginning of the press's development as a modern publishing business with a defined editorial policy and administrative structure. It was Wright who devised the plan for one of the most distinctive Cambridge contributions to publishing, the Cambridge Histories. The Cambridge Modern History was published between 1902 and 1912.
Parallel computing
Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level and task parallelism. Parallelism has long been employed in high-performance computing, but it has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors. Parallel computing is closely related to concurrent computing: they are frequently used together and often conflated, though the two are distinct. It is possible to have parallelism without concurrency, and concurrency without parallelism. In parallel computing, a computational task is typically broken down into several similar sub-tasks that can be processed independently and whose results are combined upon completion.
In contrast, in concurrent computing, the various processes often do not address related tasks. Parallel computers can be classified according to the level at which the hardware supports parallelism: multi-core and multi-processor computers have multiple processing elements within a single machine, while clusters, MPPs and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks. In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common. Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.
A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law. Traditionally, computer software has been written for serial computation. To solve a problem, an algorithm is implemented as a serial stream of instructions; these instructions are executed on a central processing unit on one computer. Only one instruction may execute at a time; after that instruction is finished, the next one is executed. Parallel computing, on the other hand, uses multiple processing elements to solve a problem; this is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, specialized hardware, or any combination of the above. Historically, parallel computing was used for scientific computing and the simulation of scientific problems in the natural and engineering sciences, such as meteorology.
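Amdahl's law can be illustrated with a short calculation. The sketch below (a hypothetical helper, not from any particular library) computes the bound 1 / ((1 − p) + p/n) for a program whose fraction p of runtime is parallelizable across n processing elements:

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: upper bound on the speed-up when only a fraction
    of a program's runtime can be parallelized."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# Even with 95% of the work parallelizable, the 5% serial remainder
# caps the achievable speed-up at 1 / 0.05 = 20x, no matter how many
# processors are added.
print(round(amdahl_speedup(0.95, 8), 2))     # 5.93
print(round(amdahl_speedup(0.95, 1024), 2))  # 19.64
```

The serial fraction, not the processor count, quickly becomes the limiting term.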
This led to the design of parallel software, as well as high performance computing. Frequency scaling was the dominant reason for improvements in computer performance from the mid-1980s until 2004. The runtime of a program is equal to the number of instructions multiplied by the average time per instruction. Holding everything else constant, increasing the clock frequency decreases the average time it takes to execute an instruction; an increase in frequency thus decreases runtime for all compute-bound programs. However, the power consumption P of a chip is given by the equation P = C × V² × F, where C is the capacitance being switched per clock cycle, V is the voltage, and F is the processor frequency. Increases in frequency therefore increase the amount of power used in a processor. Increasing processor power consumption led to Intel's May 8, 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm. To deal with the problem of power consumption and overheating, the major central processing unit manufacturers started to produce power-efficient processors with multiple cores.
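The relationship P = C × V² × F can be checked numerically; the capacitance, voltage and frequency values below are purely illustrative, not taken from any real chip:

```python
def dynamic_power(capacitance, voltage, frequency):
    # Dynamic power dissipated by a switching chip: P = C * V^2 * F.
    return capacitance * voltage ** 2 * frequency

# Doubling the frequency at fixed voltage doubles the power; raising the
# voltage as well compounds the cost quadratically.
p_base = dynamic_power(1e-9, 1.2, 2.0e9)
p_fast = dynamic_power(1e-9, 1.2, 4.0e9)
p_hot = dynamic_power(1e-9, 1.5, 4.0e9)
print(p_fast / p_base)           # 2.0
print(round(p_hot / p_base, 4))  # 3.125
```

This quadratic dependence on voltage is why frequency scaling, which historically required voltage increases, became untenable.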
The core is the computing unit of the processor; in multi-core processors each core is independent and can access the same memory concurrently. Multi-core processors have brought parallel computing to desktop computers, and the parallelisation of serial programmes has thus become a mainstream programming task. In 2012 quad-core processors became standard for desktop computers, while servers had 10- and 12-core processors. From Moore's law it can be predicted that the number of cores per processor will double every 18–24 months; this could mean that after 2020 a typical processor will have hundreds of cores. An operating system can ensure that different tasks and user programmes are run in parallel on the available cores. However, for a serial software programme to take full advantage of the multi-core architecture, the programmer needs to restructure and parallelise the code. A speed-up of application software runtime will no longer be achieved through frequency scaling; instead, programmers will need to parallelise their software code to take advantage of the increasing number of cores.
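As a sketch of what such parallelisation looks like in practice (the function names and the toy-sized problem are our own, not from the source), the Python standard library's concurrent.futures module can spread independent sub-tasks across cores:

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (a CPU-bound sub-task)."""
    lo, hi = bounds
    def is_prime(n):
        if n < 2:
            return False
        f = 2
        while f * f <= n:
            if n % f == 0:
                return False
            f += 1
        return True
    return sum(1 for n in range(lo, hi) if is_prime(n))

if __name__ == "__main__":
    # Split one large problem into four independent chunks; each chunk can
    # run on its own core, and the partial counts are combined at the end.
    chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)  # 9592 primes below 100,000
```

The restructuring work the text describes is exactly this: identifying independent chunks and combining their results.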
Sanjeev Arora
Sanjeev Arora is an Indian American theoretical computer scientist, best known for his work on probabilistically checkable proofs and, in particular, the PCP theorem. He is the Charles C. Fitzmorris Professor of Computer Science at Princeton University. His research interests include computational complexity theory, uses of randomness in computation, probabilistically checkable proofs, computing approximate solutions to NP-hard problems, and geometric embeddings of metric spaces. He received a B.S. in Mathematics with Computer Science from MIT in 1990 and a Ph.D. in Computer Science from the University of California, Berkeley in 1994 under Umesh Vazirani. Earlier, in 1986, Arora had topped the prestigious IIT JEE, but he transferred to MIT after two years at IIT Kanpur. He was a visiting scholar at the Institute for Advanced Study in 2002–03. He was awarded the Gödel Prize for his work on the PCP theorem in 2001, and again in 2010 for the discovery of a polynomial time approximation scheme for the Euclidean travelling salesman problem.
In 2008 he was inducted as a Fellow of the Association for Computing Machinery. In 2011 he was awarded the ACM Infosys Foundation Award, given to mid-career researchers in computer science. Arora was awarded the Fulkerson Prize for 2012 for his work on improving the approximation ratio for graph separators and related problems. In 2012 he became a Simons Investigator. Arora was elected to the National Academy of Sciences on May 2, 2018. He is a coauthor of the book Computational Complexity: A Modern Approach and a founder of, and member of the Executive Board of, Princeton's Center for Computational Intractability. He and his coauthors have argued that certain financial products are associated with computational asymmetry, which under certain conditions may lead to market instability.
Computational complexity theory
Computational complexity theory focuses on classifying computational problems according to their inherent difficulty, and on relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by the mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e. the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication, the number of gates in a circuit and the number of processors. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. The P versus NP problem, one of the seven Millennium Prize Problems, belongs to the field of computational complexity.
Closely related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter asks what kinds of problems can, in principle, be solved algorithmically. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself.
In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing; the instance is a number and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, the solution is the output corresponding to the given input. To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
When considering computational problems, a problem instance is a string over an alphabet. The alphabet is taken to be the binary alphabet, thus the strings are bitstrings; as in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary. Though some proofs of complexity-theoretic theorems assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding; this can be achieved by ensuring that different representations can be transformed into each other efficiently. Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no, or alternately either 1 or 0. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, the non-members are those instances whose output is no.
The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is the following: the input is an arbitrary graph, and the problem consists in deciding whether the graph is connected. The formal language associated with this decision problem is the set of all connected graphs; to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings. A function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem; that is, the output is not just yes or no. Notable examples include the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not the case, since function problems can be recast as decision problems.
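To make the graph-connectivity example concrete, here is a small sketch of an algorithm deciding this language. For readability it encodes graphs as adjacency dictionaries rather than binary strings, and the function name is our own:

```python
from collections import deque

def accepts_connected(adjacency):
    """Decide the language of connected graphs: return True ("accept")
    when the input graph is connected, False ("reject") otherwise."""
    vertices = list(adjacency)
    if not vertices:
        return True  # convention: treat the empty graph as connected
    seen = {vertices[0]}
    frontier = deque(seen)
    while frontier:  # breadth-first search from an arbitrary start vertex
        v = frontier.popleft()
        for w in adjacency[v]:
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    # Accept exactly when the search reached every vertex.
    return len(seen) == len(vertices)

print(accepts_connected({0: [1], 1: [0, 2], 2: [1]}))  # True: a path graph
print(accepts_connected({0: [1], 1: [0], 2: []}))      # False: vertex 2 is isolated
```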
For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
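This recasting can be sketched directly (the function name is our own, purely for illustration):

```python
def in_multiplication_language(triple):
    """Membership test for the set of triples (a, b, c) with a * b = c."""
    a, b, c = triple
    return a * b == c

# The function problem "multiply 6 by 7" becomes a family of yes/no
# questions about membership in this set.
print(in_multiplication_language((6, 7, 42)))  # True
print(in_multiplication_language((6, 7, 41)))  # False
```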
Majority function
In Boolean logic, the majority function is a function from n inputs to one output. The value of the operation is false when n/2 or more arguments are false, and true otherwise. Alternatively, representing true values as 1 and false values as 0, we may use the formula Majority(x₁, …, xₙ) = ⌊1/2 + (x₁ + ⋯ + xₙ − 1/2)/n⌋. The "−1/2" in the formula serves to break ties in favor of zeros; if it is omitted, the formula can be used for a function that breaks ties in favor of ones. A majority gate is a logical gate used in circuit complexity and other applications of Boolean circuits. A majority gate returns true only if more than 50% of its inputs are true. For instance, in a full adder, the carry output is found by applying a majority function to the three inputs, although this part of the adder is often broken down into several simpler logical gates. Many systems have triple modular redundancy; they use the majority function for majority voting. A major result in circuit complexity asserts that the majority function cannot be computed by AC0 circuits of subexponential size.
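The floor formula above can be checked directly. The sketch below (our own helper, with a flag for the tie-breaking variant) implements both versions:

```python
from math import floor

def majority(bits, ties_to_one=False):
    """Majority = floor(1/2 + (sum(x_i) - 1/2) / n); dropping the -1/2
    term breaks ties in favor of ones instead of zeros."""
    n = len(bits)
    offset = 0 if ties_to_one else 0.5
    return floor(0.5 + (sum(bits) - offset) / n)

print(majority([1, 1, 0]))                       # 1: two of three inputs are true
print(majority([1, 0, 0, 0]))                    # 0: only one of four inputs is true
print(majority([1, 1, 0, 0]))                    # 0: tie broken in favor of zeros
print(majority([1, 1, 0, 0], ties_to_one=True))  # 1: tie broken in favor of ones
```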
For n = 1 the median operator is just the unary identity operation x. For n = 3 the ternary median operator can be expressed using conjunction and disjunction as xy + yz + zx. Remarkably, this expression denotes the same operation independently of whether the symbol + is interpreted as inclusive or exclusive or. For an arbitrary n there exists a monotone formula for majority of size O(n^5.3); this is proved using the probabilistic method, and thus the formula is non-constructive. However, one can obtain an explicit formula for majority of polynomial size using a sorting network of Ajtai, Komlós and Szemerédi. The majority function produces "1" when more than half of the inputs are 1. Most applications deliberately force an odd number of inputs so they do not have to deal with the question of what happens when exactly half the inputs are 0 and half the inputs are 1. The few systems that calculate the majority function on an even number of inputs are often biased towards "0"; they produce "0" when half the inputs are 0 (for example, a 4-input majority gate has a 0 output only when two or more 0's appear at its inputs).
In a few systems, a 4-input majority network instead randomly chooses "1" or "0" when two 0's appear at its inputs. For any x, y, z, the ternary median operator 〈x, y, z〉 satisfies the following equations: 〈x, y, y〉 = y; 〈x, y, z〉 = 〈z, x, y〉; 〈x, y, z〉 = 〈x, z, y〉; 〈〈x, w, y〉, w, z〉 = 〈x, w, 〈y, w, z〉〉. An abstract system satisfying these as axioms is a median algebra.
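These identities, and the inclusive-or/exclusive-or equivalence noted earlier, are easy to verify exhaustively over the two-element Boolean domain; the brute-force check below is a sketch of that verification:

```python
from itertools import product

def med_or(x, y, z):
    # Ternary median xy + yz + zx with + read as inclusive or.
    return (x & y) | (y & z) | (z & x)

def med_xor(x, y, z):
    # The same expression with + read as exclusive or.
    return (x & y) ^ (y & z) ^ (z & x)

for x, y, z, w in product((0, 1), repeat=4):
    assert med_or(x, y, z) == med_xor(x, y, z)  # both readings of + agree
    assert med_or(x, y, y) == y                  # <x, y, y> = y
    assert med_or(x, y, z) == med_or(z, x, y)    # cyclic symmetry
    assert med_or(x, y, z) == med_or(x, z, y)    # swap symmetry
    assert med_or(med_or(x, w, y), w, z) == med_or(x, w, med_or(y, w, z))
print("the Boolean median satisfies the median-algebra axioms")
```

The two readings of + agree because the products xy, yz, zx can never have exactly two of the three equal to 1.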
International Standard Serial Number
An International Standard Serial Number (ISSN) is an eight-digit serial number used to uniquely identify a serial publication, such as a magazine. The ISSN is helpful in distinguishing between serials with the same title. ISSNs are used in ordering, interlibrary loans and other practices in connection with serial literature. The ISSN system was first drafted as an International Organization for Standardization international standard in 1971 and published as ISO 3297 in 1975. ISO subcommittee TC 46/SC 9 is responsible for maintaining the standard. When a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published both in print and in electronic media; the ISSN system refers to these types as print ISSN and electronic ISSN, respectively. Additionally, as defined in ISO 3297:2007, every serial in the ISSN system is assigned a linking ISSN (ISSN-L), typically the same as the ISSN assigned to the serial in its first published medium, which links together all ISSNs assigned to the serial in every medium.
The format of the ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers. As an integer number, it can be represented by the first seven digits. The last code digit, which may be 0–9 or an X, is a check digit. Formally, the general form of the ISSN code can be expressed as follows: NNNN-NNNC, where N is a digit character and C is a digit character or the letter X. The ISSN of the journal Hearing Research, for example, is 0378-5955, where the final 5 is the check digit, i.e. C = 5. To calculate the check digit, the following algorithm may be used: calculate the sum of the first seven digits of the ISSN, each multiplied by its position in the number, counting from the right (that is, by the weights 8, 7, 6, 5, 4, 3 and 2, respectively): 0·8 + 3·7 + 7·6 + 8·5 + 5·4 + 9·3 + 5·2 = 0 + 21 + 42 + 40 + 20 + 27 + 10 = 160. The modulus 11 of this sum is then calculated: if there is no remainder, the check digit is 0; otherwise the remainder is subtracted from 11 to give the check digit. Here 160 mod 11 = 6, and 11 − 6 = 5, the check digit. An upper case X in the check digit position indicates a check digit of 10. To confirm the check digit, calculate the sum of all eight digits of the ISSN multiplied by their position in the number, counting from the right.
The modulus 11 of this sum must be 0. There is an online ISSN checker that can validate an ISSN. ISSN codes are assigned by a network of ISSN National Centres located at national libraries and coordinated by the ISSN International Centre, based in Paris. The International Centre is an intergovernmental organization created in 1974 through an agreement between UNESCO and the French government. The International Centre maintains a database of all ISSNs assigned worldwide, the ISDS Register, otherwise known as the ISSN Register. At the end of 2016, the ISSN Register contained records for 1,943,572 items. ISSN and ISBN codes are similar in concept; an ISBN might be assigned for particular issues of a serial, in addition to the ISSN code for the serial as a whole. An ISSN, unlike the ISBN code, is an anonymous identifier associated with a serial title, containing no information as to the publisher or its location. For this reason a new ISSN is assigned to a serial each time it undergoes a major title change. Since the ISSN applies to an entire serial, a new identifier, the Serial Item and Contribution Identifier, was built on top of it to allow references to specific volumes, articles, or other identifiable components.
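The check-digit calculation and confirmation described above can be sketched as follows (the function names are our own):

```python
def issn_check_digit(first_seven):
    """Compute the ISSN check digit from the first seven digits."""
    # Each digit is multiplied by its position counting from the right,
    # i.e. by the weights 8 down to 2.
    total = sum(int(d) * w for d, w in zip(first_seven, range(8, 1, -1)))
    remainder = total % 11
    if remainder == 0:
        return "0"
    check = 11 - remainder
    return "X" if check == 10 else str(check)

def issn_is_valid(issn):
    """Confirm an ISSN such as '0378-5955' against its check digit."""
    digits = issn.replace("-", "")
    return issn_check_digit(digits[:7]) == digits[7].upper()

print(issn_check_digit("0378595"))  # 5, as in the Hearing Research example
print(issn_is_valid("0378-5955"))   # True
print(issn_is_valid("0378-5954"))   # False
```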
Separate ISSNs are needed for serials in different media. Thus, the print and electronic media versions of a serial need separate ISSNs, and a CD-ROM version and a web version of a serial require different ISSNs, since two different media are involved. However, the same ISSN can be used for different file formats of the same online serial. This "media-oriented identification" of serials made sense in the 1970s. In the 1990s and onward, with personal computers, better screens and the Web, it makes more sense to consider only content, independent of media. This "content-oriented identification" of serials was an unmet demand for a decade, but no ISSN update or initiative occurred. A natural extension of the ISSN, the unique identification of the articles in serials, was the main demanded application. An alternative model for serials' contents arrived with the indecs Content Model and its application, the digital object identifier, an ISSN-independent initiative consolidated in the 2000s. Only in 2007 was the ISSN-L defined, in the revised standard ISO 3297:2007.
Christos Papadimitriou
Christos Harilaos Papadimitriou is a Greek theoretical computer scientist and professor of Computer Science at Columbia University. Papadimitriou studied at the National Technical University of Athens, where in 1972 he received his Bachelor of Arts degree in Electrical Engineering. He continued his studies at Princeton University, where he received his MS in Electrical Engineering in 1974 and his PhD in Electrical Engineering and Computer Science in 1976. Papadimitriou has taught at Harvard, MIT, the National Technical University of Athens, Stanford, UCSD and the University of California, Berkeley, and is the Donovan Family Professor of Computer Science at Columbia University. Papadimitriou co-authored a paper on pancake sorting with Bill Gates, then a Harvard undergraduate. Papadimitriou recalled: "Two years later, I called to tell him our paper had been accepted to a fine math journal. He sounded eminently disinterested. He had moved to Albuquerque, New Mexico to run a small company writing code for microprocessors, of all things.
I remember thinking: 'Such a brilliant kid. What a waste.'" In 2001, Papadimitriou was inducted as a Fellow of the Association for Computing Machinery, and in 2002 he was awarded the Knuth Prize. He became a fellow of the U.S. National Academy of Engineering for contributions to complexity theory, database theory and combinatorial optimization. In 2009 he was elected to the US National Academy of Sciences. During the 36th International Colloquium on Automata, Languages and Programming, there was a special event honoring Papadimitriou's contributions to computer science. In 2012, he and Elias Koutsoupias were awarded the Gödel Prize for their joint work on the concept of the price of anarchy. Papadimitriou is the author of the textbook Computational Complexity, one of the most widely used textbooks in the field of computational complexity theory. He has also co-authored the textbook Algorithms with Sanjoy Dasgupta and Umesh Vazirani, and the graphic novel Logicomix with Apostolos Doxiadis. His name was listed in the 19th position on the CiteSeer search engine academic database and digital library.
In 1997, Papadimitriou received a doctorate honoris causa from ETH Zurich; in 2011, one from the National Technical University of Athens; and in 2013, one from the École polytechnique fédérale de Lausanne. Papadimitriou was awarded the IEEE John von Neumann Medal in 2016, the EATCS Award in 2015, the Gödel Prize in 2012, the IEEE Computer Society Charles Babbage Award in 2004, and the Knuth Prize in 2002. His books include: Elements of the Theory of Computation (Prentice-Hall, 1982; also published in a Greek edition); Combinatorial Optimization: Algorithms and Complexity (Prentice-Hall, 1982); The Theory of Database Concurrency Control (CS Press, 1986); Computational Complexity (Addison Wesley, 1994); Turing (MIT Press, November 2003); Life Sentence to Hackers? (Kastaniotis Editions, 2004), a compilation of articles written for the Greek newspaper To Vima; Algorithms (McGraw-Hill, September 2006); and Logicomix: An Epic Search for Truth (Bloomsbury Publishing and Bloomsbury USA, September 2009).
He co-authored a paper on pancake sorting with Bill Gates, co-founder of Microsoft. At UC Berkeley, in 2006, he joined a professor-and-graduate-student band called Lady X and The Positive Eigenvalues.