The MIT Press is a university press affiliated with the Massachusetts Institute of Technology in Cambridge, Massachusetts. The MIT Press traces its origins back to 1926, when MIT published under its own name a lecture series entitled Problems of Atomic Dynamics given by the visiting German physicist and Nobel Prize winner Max Born. Six years later, in 1932, MIT's publishing operations were formally instituted with the creation of an imprint called Technology Press. This imprint was founded by James R. Killian, Jr., at the time editor of MIT's alumni magazine and later to become MIT president. Technology Press published eight titles independently, then in 1937 entered into an arrangement with John Wiley & Sons in which Wiley took over marketing and editorial responsibilities. In 1962 the association with Wiley came to an end; the press acquired its modern name after this separation and has since functioned as an independent publishing house. A European marketing office was opened in 1969, and a Journals division was added in 1972.
In the late 1970s, responding to changing economic conditions, the publisher narrowed the focus of its catalog to a few key areas: architecture; computer science and artificial intelligence; and cognitive science. In January 2010 the MIT Press published its 9,000th title, and in 2012 the Press celebrated its 50th anniversary, including by publishing a commemorative booklet on paper and online. The press co-founded the distributor TriLiteral LLC with Yale University Press and Harvard University Press; TriLiteral was acquired by LSC Communications in 2018. The MIT Press publishes academic titles in fields including art and architecture, and is a distributor for such publishers as Zone Books and Semiotext(e). In 2000, the MIT Press created CogNet, an online resource for the study of the brain and the cognitive sciences. In 1981 the MIT Press published its first book under the Bradford Books imprint, Brainstorms: Philosophical Essays on Mind and Psychology by Daniel C. Dennett.
In 2018, the Press and the MIT Media Lab launched the Knowledge Futures Group to develop and deploy open access publishing technology and platforms. The MIT Press operates the MIT Press Bookstore, showcasing both its frontlist and backlist titles, along with a large selection of complementary works from other academic and trade publishers. The retail storefront was located next to a subway entrance to Kendall/MIT station in the heart of Kendall Square, but has been temporarily moved to 301 Massachusetts Avenue in Cambridge, Massachusetts, a short distance north of the MIT Museum near Central Square. Once extensive construction around its former location is completed, the Bookstore is planned to return to a site adjacent to the subway entrance. The Bookstore offers customized selections from the MIT Press at many conferences and symposia in the Boston area, and sponsors occasional lectures and book signings at MIT. The Bookstore is known for its periodic "Warehouse Sales" offering deep discounts on surplus and returned books and journals from its own catalog, as well as remaindered books from other publishers.
The Press uses a colophon or logo designed in 1962 by its longtime design director, Muriel Cooper. The design is based on a highly abstracted version of the lower-case letters "mitp", with the ascender of the "t" at the fifth stripe and the descender of the "p" at the sixth stripe being the only differentiation. It served as an important reference point for the 2015 redesign of the MIT Media Lab logo by Pentagram.

The Press publishes in subject areas including the Arts and Humanities; Economics; International Affairs and Political Science; and Science and Technology. Notable books published by the MIT Press include:

The Image of the City by Kevin Lynch, 1960
Experiencing Architecture by Steen Eiler Rasmussen, 1962
Beyond the Melting Pot: The Negroes, Puerto Ricans, Jews and Irish of New York City by Nathan Glazer and Daniel P. Moynihan, 1963
The Character of Physical Law by Richard Feynman, 1967
Bauhaus: Weimar, Dessau, Berlin, Chicago by Hans M. Wingler, 1969
The Subjection of Women by John Stuart Mill, 1970
Theory of Colours by Johann Wolfgang von Goethe, 1970
Learning from Las Vegas by Robert Venturi, Denise Scott Brown and Steven Izenour, 1972
The Theory of Industrial Organization by Jean Tirole, 1988
Made in America: Regaining the Productive Edge by Michael L. Dertouzos, Robert M. Solow and Richard K. Lester, 1989
Introduction to Algorithms by Thomas H. Cormen, Charles E. Leiserson and Ronald L. Rivest, 1990
Understanding Media: The Extensions of Man by Marshall McLuhan, 1994
The Society of the Spectacle by Guy Debord, 1994
Financial Modeling by Simon Benninga, 1997
Out of the Crisis by W. Edwards Deming, 2000
The Elusive Quest for Growth: Economists' Adventures and Misadventures in the Tropics by William R. Easterly, 2001
The Language of New Media by Lev Manovich, 2001
The Laws of Simplicity by John Maeda, 2006
101 Things I Learned in Architecture School by Matthew Frederick, 2007
Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville, 2016
Dimensionism: Modern Art in the Age of Einstein, 2018
Charles E. Leiserson
Charles Eric Leiserson is a computer scientist specializing in the theory of parallel computing and distributed computing, and their practical applications. As part of this effort, he developed the Cilk multithreaded language. He invented the fat-tree interconnection network, a hardware-universal interconnection network used in many supercomputers, including the Connection Machine CM5, for which he was network architect. He helped pioneer the development of VLSI theory, including the retiming method of digital optimization with James B. Saxe and systolic arrays with H. T. Kung. He conceived of the notion of cache-oblivious algorithms, which are algorithms that have no tuning parameters for cache size or cache-line length, but use cache near-optimally. He developed the Cilk language for multithreaded programming, which uses a provably good work-stealing algorithm for scheduling. Leiserson coauthored the standard algorithms textbook Introduction to Algorithms together with Thomas H. Cormen, Ronald L. Rivest and Clifford Stein.
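To illustrate the cache-oblivious idea mentioned above, here is a minimal sketch (not Leiserson's own code) of a divide-and-conquer matrix transpose: by recursing until the subproblem is small, it exploits whatever cache hierarchy exists without taking the cache size as a parameter. Python hides real cache effects, so this only demonstrates the recursive structure; the function name and the base-case threshold of 8 are illustrative choices.

    # Cache-oblivious-style transpose: B[j][i] = A[i][j].
    # Recursively splits the larger dimension until the block is tiny,
    # so every level of the memory hierarchy is used without knowing
    # its size. The threshold 8 is an arbitrary illustrative base case.
    def transpose(A, B, r0=0, c0=0, rows=None, cols=None):
        rows = len(A) if rows is None else rows
        cols = len(A[0]) if cols is None else cols
        if rows <= 8 and cols <= 8:          # base case: transpose directly
            for i in range(r0, r0 + rows):
                for j in range(c0, c0 + cols):
                    B[j][i] = A[i][j]
        elif rows >= cols:                    # split the taller dimension
            half = rows // 2
            transpose(A, B, r0, c0, half, cols)
            transpose(A, B, r0 + half, c0, rows - half, cols)
        else:                                 # split the wider dimension
            half = cols // 2
            transpose(A, B, r0, c0, rows, half)
            transpose(A, B, r0, c0 + half, rows, cols - half)

    A = [[9 * i + j for j in range(9)] for i in range(12)]   # 12x9 matrix
    B = [[0] * 12 for _ in range(9)]                         # 9x12 result
    transpose(A, B)
    assert all(B[j][i] == A[i][j] for i in range(12) for j in range(9))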
Leiserson received a B.S. degree in computer science and mathematics from Yale University in 1975 and a Ph.D. degree in computer science from Carnegie Mellon University in 1981, where his advisors were Jon Bentley and H. T. Kung. He then joined the faculty of the Massachusetts Institute of Technology, where he is now a professor. In addition, he is a principal in the Theory of Computation research group in the MIT Computer Science and Artificial Intelligence Laboratory. He was Director of Research and Director of System Architecture for Akamai Technologies, and was Founder and Chief Technology Officer of Cilk Arts, Inc., a start-up that developed Cilk technology for multicore computing applications. Leiserson's dissertation, Area-Efficient VLSI Computation, won the first ACM Doctoral Dissertation Award. In 1985, the National Science Foundation awarded him a Presidential Young Investigator Award. He is a Fellow of the Association for Computing Machinery, the American Association for the Advancement of Science, the Institute of Electrical and Electronics Engineers, and the Society for Industrial and Applied Mathematics.
He received the 2014 Taylor L. Booth Education Award from the IEEE Computer Society "for worldwide computer science education impact through writing a best-selling algorithms textbook, developing courses on algorithms and parallel programming." He received the 2014 ACM-IEEE Computer Society Ken Kennedy Award for his "enduring influence on parallel computing systems and their adoption into mainstream use through scholarly research and development." He was also cited for "distinguished mentoring of computer science leaders and students." He received the 2013 ACM Paris Kanellakis Theory and Practice Award for "contributions to robust parallel and distributed computing."

See also: Thinking Machines Corporation; Thomas H. Cormen; Ronald L. Rivest; Clifford Stein.

Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. Introduction to Algorithms. MIT Press and McGraw-Hill. ISBN 978-0-262-03141-7.
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford. Introduction to Algorithms. MIT Press and McGraw-Hill. ISBN 978-0-262-53196-2.
Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford. Introduction to Algorithms. MIT Press. ISBN 978-0-262-03384-8.
Computer science

Computer science is the study of processes that interact with data and that can be represented as data in the form of programs. It enables the use of algorithms to manipulate and communicate digital information. A computer scientist studies the theory of computation and the practice of designing software systems. Its fields can be divided into theoretical and practical disciplines: computational complexity theory is highly abstract, while computer graphics emphasizes real-world applications. Programming language theory considers approaches to the description of computational processes, while computer programming itself involves the use of programming languages and complex systems. Human–computer interaction considers the challenges in making computers useful and accessible. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division.
Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner; he may be considered the first computer scientist and information theorist, among other reasons for documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he released his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer".
"A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, considered to be the first computer program. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, making all kinds of punched card equipment and was in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit; when the machine was finished, some hailed it as "Babbage's dream come true". During the 1940s, as new and more powerful computing machines were developed, the term computer came to refer to the machines rather than their human predecessors.
As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City; the renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. The close relationship between IBM and the university was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science degree program in the United States was formed at Purdue University in 1962.
Since practical computers became available, many applications of computing have become distinct areas of study in their own right. Although many initially believed it was impossible that computers themselves could be a scientific field of study, in the late fifties this view became accepted among the greater academic population. It is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and the IBM 709 computers, which were widely used during the exploration period of such devices. "Still, working with the IBM [computer] was frustrating... if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again". During the late 1950s, the computer science discipline was still very much in its developmental stages, and such issues were commonplace. Time has since seen significant improvements in the effectiveness of computing technology. Modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
Initially, computers were quite costly, and some degree of human aid was needed for efficient use, in part from professional computer operators. As computer adoption became more widespread and affordable, less human assistance was needed for common usage. Despite its short history as a formal academic discipline, computer science has made a number of fundamental contributions to science and society—in fact, along with electronics, it is a founding science of the current epoch of human history, the Information Age.
Comparison sort

A comparison sort is a type of sorting algorithm that only reads the list elements through a single abstract comparison operation that determines which of two elements should occur first in the final sorted list. The only requirement is that the operator forms a total preorder over the data, with: if a ≤ b and b ≤ c, then a ≤ c (transitivity); and, for all a and b, a ≤ b or b ≤ a (connexity). It is possible that both a ≤ b and b ≤ a; in a stable sort, the input order determines the sorted order in this case. A metaphor for thinking about comparison sorts is that someone has a set of unlabelled weights and a balance scale; their goal is to line up the weights in order by their weight without any information except that obtained by placing two weights on the scale and seeing which one is heavier. Some of the most well-known comparison sorts include quicksort, heapsort, Shellsort, merge sort, introsort, insertion sort, selection sort, bubble sort, odd–even sort, cocktail shaker sort, cycle sort, merge-insertion sort, smoothsort and Timsort. There are fundamental limits on the performance of comparison sorts.
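Before turning to those limits, here is a minimal sketch of insertion sort written so that it touches elements only through a single comparison function, making the abstraction above concrete; the parameter name leq and the default comparator are illustrative choices, not part of any standard definition.

    # Insertion sort that accesses elements only via the comparison `leq`.
    # Any total preorder works; stability follows because an element is
    # shifted only past items strictly greater than it under `leq`.
    def insertion_sort(items, leq=lambda a, b: a <= b):
        result = list(items)
        for i in range(1, len(result)):
            x = result[i]
            j = i
            while j > 0 and not leq(result[j - 1], x):
                result[j] = result[j - 1]   # shift larger element right
                j -= 1
            result[j] = x                    # insert x at its position
        return result

    print(insertion_sort([3, 1, 2]))                       # [1, 2, 3]
    print(insertion_sort([3, 1, 2], lambda a, b: a >= b))  # [3, 2, 1]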
A comparison sort must have an average-case lower bound of Ω(n log n) comparison operations, which is known as linearithmic time. This is a consequence of the limited information available through comparisons alone — or, to put it differently, of the vague algebraic structure of ordered sets. In this sense, mergesort and introsort are asymptotically optimal in terms of the number of comparisons they must perform, although this metric neglects other operations. Non-comparison sorts can achieve O(n) performance by using operations other than comparisons, allowing them to sidestep this lower bound. Comparison sorts may run faster on some lists; the Ω(n log n) lower bound applies only to the case in which the input can be in any possible order. Real-world measures of sorting speed may need to take into account the ability of some algorithms to optimally use fast cached computer memory, or the application may benefit from sorting methods where sorted data begins to appear to the user early, as opposed to sorting methods where no output is available until the whole list is sorted.
Despite these limitations, comparison sorts offer the notable practical advantage that control over the comparison function allows sorting of many different datatypes and fine control over how the list is sorted. For example, reversing the result of the comparison function allows the list to be sorted in reverse, and comparison sorts generally adapt more easily to complex orders such as the order of floating-point numbers. Additionally, once a comparison function is written, any comparison sort can be used without modification; this flexibility, together with the efficiency of the above comparison sorting algorithms on modern computers, has led to widespread preference for comparison sorts in most practical work. Some sorting problems admit a faster solution than the Ω(n log n) bound for comparison sorting; when the keys form a small range, counting sort is an example algorithm. Other integer sorting algorithms, such as radix sort, are not asymptotically faster than comparison sorting, but can be faster in practice; the problem of sorting pairs of numbers by their sum is not subject to the corresponding bound either.
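A minimal sketch of counting sort, the non-comparison example just mentioned: it assumes the keys are integers in a small known range 0..k, and it runs in O(n + k) time without ever comparing two elements to each other.

    # Counting sort: no comparisons between elements; assumes integer
    # keys in the range 0..k. Runs in O(n + k) time and O(k) extra space.
    def counting_sort(keys, k):
        counts = [0] * (k + 1)
        for key in keys:
            counts[key] += 1            # tally each key
        result = []
        for value in range(k + 1):      # emit keys in increasing order
            result.extend([value] * counts[value])
        return result

    print(counting_sort([3, 1, 2, 1, 0], k=3))  # [0, 1, 1, 2, 3]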
The number of comparisons that a comparison sort algorithm requires increases in proportion to n log(n), where n is the number of elements to sort. This bound is asymptotically tight. Given a list of distinct numbers, there are n! (n factorial) permutations, exactly one of which is the list in sorted order. The sort algorithm must gain enough information from the comparisons to identify the correct permutation. If the algorithm always completes after at most f comparisons, it cannot distinguish more than 2^f cases, because the keys are distinct and each comparison has only two possible outcomes. Therefore, 2^f ≥ n!, or equivalently f ≥ log₂(n!). By looking at the first n/2 factors of n! = n(n−1)⋯1, each of which is at least n/2, we obtain log₂(n!) ≥ log₂((n/2)^(n/2)) = (n/2)(log₂(n) − 1), which is Ω(n log n).
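A quick numeric check of this bound (a throwaway verification script, not from any source): the exact information-theoretic floor ⌈log₂(n!)⌉ grows like n log₂ n, as the derivation above predicts.

    # Compare the information-theoretic lower bound ceil(log2(n!))
    # with the rough estimate n*log2(n) for a few values of n.
    import math

    for n in [4, 8, 16, 64, 1024]:
        bound = math.ceil(math.log2(math.factorial(n)))
        print(n, bound, round(n * math.log2(n)))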
Computational complexity theory
Computational complexity theory focuses on classifying computational problems according to their inherent difficulty, and on relating these classes to each other. A computational problem is a task solved by a computer; such a problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e. the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication, the number of gates in a circuit, and the number of processors. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do; the P versus NP problem, one of the seven Millennium Prize Problems, is dedicated to the field of computational complexity.
Related fields in theoretical computer science are analysis of algorithms and computability theory. A key distinction between analysis of algorithms and computational complexity theory is that the former is devoted to analyzing the amount of resources needed by a particular algorithm to solve a problem, whereas the latter asks a more general question about all possible algorithms that could be used to solve the same problem. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. In turn, imposing restrictions on the available resources is what distinguishes computational complexity from computability theory: the latter theory asks what kinds of problems can, in principle, be solved algorithmically. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance, and should not be confused with the problem itself.
In computational complexity theory, a problem refers to the abstract question to be solved. In contrast, an instance of this problem is a rather concrete utterance, which can serve as the input for a decision problem. For example, consider the problem of primality testing: the instance is a number, and the solution is "yes" if the number is prime and "no" otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. To further highlight the difference between a problem and an instance, consider the following instance of the decision version of the traveling salesman problem: Is there a route of at most 2000 kilometres passing through all of Germany's 15 largest cities? The quantitative answer to this particular problem instance is of little use for solving other instances of the problem, such as asking for a round trip through all sites in Milan whose total length is at most 10 km. For this reason, complexity theory addresses computational problems and not particular problem instances.
When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, and thus the strings are bitstrings. As in a real-world computer, mathematical objects other than bitstrings must be suitably encoded. For example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices, or by encoding their adjacency lists in binary. Though some proofs of complexity-theoretic theorems assume some concrete choice of input encoding, one tries to keep the discussion abstract enough to be independent of the choice of encoding; this can be achieved by ensuring that different representations can be transformed into each other efficiently. Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no, or alternately either 1 or 0. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no.
The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the formal language under consideration. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is the following: the input is an arbitrary graph, and the problem consists in deciding whether the given graph is connected. The formal language associated with this decision problem is the set of all connected graphs — to obtain a precise definition of this language, one has to decide how graphs are encoded as binary strings. A function problem is a computational problem where a single output is expected for every input, but the output is more complex than that of a decision problem—that is, the output isn't just yes or no. Notable examples include the integer factorization problem. It is tempting to think that the notion of function problems is much richer than the notion of decision problems. However, this is not the case, since function problems can be recast as decision problems.
For example, the multiplication of two integers can be expressed as the set of triples (a, b, c) such that the relation a × b = c holds. Deciding whether a given triple is a member of this set corresponds to solving the problem of multiplying two numbers.
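A small sketch of this recasting (illustrative code; the function names are hypothetical): the function problem "compute a × b" becomes a yes/no membership test on triples.

    # Function problem: compute a * b.
    def multiply(a, b):
        return a * b

    # The same problem recast as a decision problem: is the triple
    # (a, b, c) a member of the language { (a, b, c) : a * b = c }?
    def is_member(triple):
        a, b, c = triple
        return a * b == c

    print(is_member((6, 7, 42)))  # True: a "yes" instance
    print(is_member((6, 7, 41)))  # False: a "no" instance

The product itself can be recovered from such a decider, for example by querying it once per bit of the output, which is why the two formulations are essentially equally difficult.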
Algorithm

In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, automated reasoning, and other tasks. As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input, the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. The concept of algorithm has existed for centuries. Greek mathematicians used algorithms in the sieve of Eratosthenes for finding prime numbers, and in the Euclidean algorithm for finding the greatest common divisor of two numbers. The word algorithm itself is derived from the name of the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī, Latinized Algoritmi.
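As a concrete illustration of the first ancient example just named, here is a minimal sketch of the sieve of Eratosthenes; the bound of 50 in the usage line is an arbitrary choice.

    # Sieve of Eratosthenes: find all primes up to n by repeatedly
    # crossing out the multiples of each prime found so far.
    def sieve(n):
        is_prime = [True] * (n + 1)
        is_prime[0] = is_prime[1] = False
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                for multiple in range(p * p, n + 1, p):
                    is_prime[multiple] = False
        return [i for i, prime in enumerate(is_prime) if prime]

    print(sieve(50))  # [2, 3, 5, 7, 11, ..., 47]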
A partial formalization of what would become the modern concept of algorithm began with attempts to solve the Entscheidungsproblem posed by David Hilbert in 1928. Subsequent formalizations were framed as attempts to define "effective calculability" or "effective method"; those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. The word 'algorithm' has its roots in Latinizing the name of Muhammad ibn Musa al-Khwarizmi in a first step to algorismus. Al-Khwārizmī was a Persian mathematician, astronomer and scholar in the House of Wisdom in Baghdad, whose name means 'the native of Khwarazm', a region that was part of Greater Iran and is now in Uzbekistan. About 825, al-Khwarizmi wrote an Arabic-language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name.
Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through another of his books, the Algebra. In late medieval Latin, English 'algorism', the corruption of his name, simply meant the "decimal number system". In the 15th century, under the influence of the Greek word ἀριθμός ('number'), the Latin word was altered to algorithmus, and the corresponding English term 'algorithm' is first attested in the 17th century. In English, algorism was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris. This translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals.
An informal definition could be "a set of rules that defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. A prototypical example of an algorithm is the Euclidean algorithm to determine the greatest common divisor of two integers; a sketch of it appears below. Boolos and Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation: No human being can write fast enough, or long enough, or small enough† to list all members of an enumerably infinite set by writing out their names, one after another, in some notation. But humans can do something useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human, capable of carrying out only elementary operations on symbols.
An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large, thus an algorithm can be an algebraic equation such as y = m + n – two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of: Precise instructions for a fast, efficient, "good" process that specifies the "moves" of "the computer" to find and process arbitrary input integers/symbols m and n, symbols + and =... and "effectively" produce, in a "reasonable" time, output-integer y at a specified place and in a specified format
The Art of Computer Programming
The Art of Computer Programming is a comprehensive monograph written by Donald Knuth that covers many kinds of programming algorithms and their analysis. Knuth began the project, originally conceived as a single book with twelve chapters, in 1962. The first three volumes of what was then expected to be a seven-volume set were published in 1968, 1969 and 1973. The first published installment of Volume 4 appeared in paperback as Fascicle 2 in 2005; the hardback Volume 4A, combining Volume 4, Fascicles 0–4, was published in 2011. Volume 4, Fascicle 6 was released in December 2015. Fascicles 5 and 6 are expected to comprise the first two-thirds of Volume 4B, scheduled to be published on May 17, 2019. After winning a Westinghouse Talent Search scholarship, Knuth enrolled at the Case Institute of Technology, where his performance was so outstanding that the faculty voted to award him a master of science degree upon his completion of the baccalaureate degree. During his summer vacations, Knuth was hired by the Burroughs Corporation to write compilers, earning more in his summer months than full professors did for an entire year.
Such exploits made Knuth a topic of discussion among the mathematics department, which included Richard S. Varga. Knuth started to write a book about compiler design in 1962, and soon realized that the scope of the book needed to be much larger. In June 1965, Knuth finished the first draft of what was planned to be a single volume of twelve chapters. His hand-written first-draft manuscript was 3,000 pages long: he had assumed that about five hand-written pages would translate into one printed page, but his publisher said instead that about 1.5 hand-written pages translated to one printed page. This meant the book would be about 2,000 pages in length. The publisher was nervous about accepting such a project from a graduate student. At this point, Knuth received support from Richard S. Varga, who was the scientific adviser to the publisher. Varga was visiting Olga Taussky-Todd and John Todd at Caltech. With Varga's enthusiastic endorsement, the publisher accepted Knuth's expanded plans. In its expanded version, the book would be published in seven volumes, each with just one or two chapters.
Due to the growth in the material, the plan for Volume 4 has since expanded to include Volumes 4A, 4B, 4C, 4D, and more. In 1976, Knuth prepared a second edition of Volume 2, requiring it to be typeset again, but the style of type used in the first edition was no longer available. In 1977, he decided to spend some time creating something more suitable. Eight years later, he returned with TeX, which is now used for all volumes. The offer of a so-called Knuth reward check worth "one hexadecimal dollar" (US$2.56) for any errors found, and the correction of these errors in subsequent printings, has contributed to the polished and still-authoritative nature of the work, long after its first publication. Another characteristic of the volumes is the variation in the difficulty of the exercises, which ranges from "warm-up" exercises to unsolved research problems. Knuth's dedication reads: This series of books is affectionately dedicated to the Type 650 computer once installed at Case Institute of Technology, with whom I have spent many pleasant evenings.
All examples in the books use a language called "MIX assembly language", which runs on the hypothetical MIX computer. The MIX computer is being replaced by the MMIX computer, a RISC version. Software such as GNU MDK exists to provide emulation of the MIX architecture. Knuth considers the use of assembly language necessary for the speed and memory usage of algorithms to be judged. Knuth was awarded the 1974 Turing Award "for his major contributions to the analysis of algorithms, in particular for his contributions to the 'art of computer programming' through his well-known books in a continuous series by this title." American Scientist has included this work among "100 or so Books that shaped a Century of Science", referring to the twentieth century, and within the computer science community it is regarded as the first and still the best comprehensive treatment of its subject. Covers of the third edition of Volume 1 quote Bill Gates as saying, "If you think you're a good programmer… read Art of Computer Programming… You should send me a résumé if you can read the whole thing."
The New York Times referred to it as "the profession's defining treatise".

Volume 1 – Fundamental Algorithms: Chapter 1 – Basic concepts; Chapter 2 – Information structures
Volume 2 – Seminumerical Algorithms: Chapter 3 – Random numbers; Chapter 4 – Arithmetic
Volume 3 – Sorting and Searching: Chapter 5 – Sorting; Chapter 6 – Searching
Volume 4A – Combinatorial Algorithms: Chapter 7 – Combinatorial searching
Volume 4B... – Combinatorial Algorithms: Chapter 7 – Combinatorial searching (continued); Chapter 8 – Recursion
Volume 5 – Syntactic Algorithms: Chapter 9 – Lexical scanning; Chapter 10 – Parsing techniques
Volume 6 – The Theory of Context-Free Languages
Volume 7 – Compiler Techniques

Chapter 1 – Basic concepts
1.1. Algorithms
1.2. Mathematical Preliminaries
1.2.1. Mathematical Induction
1.2.2. Numbers, Powers, and Logarithms
1.2.3. Sums and Products
1.2.4. Integer Functions and Elementary Number Theory
1.2.5. Permutations and Factorials
1.2.6. Binomial Coefficients
1.2.7. Harmonic Numbers
1.2.8. Fibonacci Numbers
1.2.9. Generating Functions
1.2.10. Analysis of an Algorithm