1.
Theoretical computer science
–
It is not easy to circumscribe the theoretical areas precisely. Work in this field is often distinguished by its emphasis on mathematical technique and rigor. Despite this broad scope, the theory people in computer science self-identify as different from the applied people; some characterize themselves as doing the science underlying the field of computing, while others suggest that it is impossible to separate theory and application. This means that the theory people regularly draw on experimental science done in less theoretical areas such as software systems research. It also means that there is more cooperation than mutually exclusive competition between theory and application.

Foundational work in mathematical logic led to the study of logic and computability. Information theory was added to the field with Claude Shannon's 1948 mathematical theory of communication. In the same decade, Donald Hebb introduced a mathematical model of learning in the brain; with mounting biological data supporting this hypothesis with some modification, the fields of neural networks and parallel distributed processing were established. In 1971, Stephen Cook and, working independently, Leonid Levin proved that there exist practically relevant problems that are NP-complete – a landmark result in computational complexity theory. With the development of quantum mechanics at the beginning of the 20th century came the concept that mathematical operations could be performed on an entire particle wavefunction; in other words, one could compute functions on multiple states simultaneously. Modern theoretical computer science research builds on these basic developments, but includes many other mathematical and interdisciplinary problems that have been posed.

An algorithm is a step-by-step procedure for calculations. Algorithms are used for calculation, data processing, and automated reasoning; an algorithm is an effective method expressed as a finite list of well-defined instructions for calculating a function.
The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.

A data structure is a particular way of organizing data in a computer so that it can be used efficiently. Different kinds of structures are suited to different kinds of applications. For example, databases use B-tree indexes for small percentages of data retrieval, and compilers and databases use dynamic hash tables as lookup tables. Data structures provide a means to manage large amounts of data efficiently for uses such as large databases and internet indexing services. Usually, efficient data structures are key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Storing and retrieving can be carried out on data stored in both main memory and secondary memory.

In computational complexity theory, a problem is regarded as inherently difficult if its solution requires significant resources. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage.
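As a concrete illustration of an algorithm as an effective method, a finite list of well-defined instructions for calculating a function, here is a minimal Python sketch of Euclid's algorithm for the greatest common divisor (the function name and formatting are our own):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite list of well-defined steps
    that computes the greatest common divisor of two integers."""
    while b != 0:
        a, b = b, a % b  # each iteration strictly shrinks |b|, so the loop halts
    return abs(a)

print(gcd(252, 105))  # -> 21
```

Every step is deterministic and mechanical, and termination is guaranteed because the second argument strictly decreases, which is exactly what makes this an effective method rather than an informal recipe.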
2.
Turing machine
–
Despite the model's simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm's logic. The machine operates on an infinite memory tape divided into discrete cells; the machine positions its head over a cell and reads the symbol there. The Turing machine was invented in 1936 by Alan Turing, who called it an "a-machine" (automatic machine). Turing machines can thus be used to prove fundamental limitations on the power of mechanical computation. Turing completeness is the ability of a system of instructions to simulate a Turing machine.

A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine capable of enumerating some arbitrary subset of valid strings of an alphabet. Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program; this is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing. The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways, as famously demonstrated through lambda calculus.

A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine. The Church–Turing thesis states that Turing machines indeed capture the notion of effective methods in logic and mathematics. Studying their abstract properties yields many insights into computer science and complexity theory.

At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol; however, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine.
Any symbol on the tape may therefore eventually have an innings. The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time. In the original article, Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly. If δ is not defined on the current state and the current tape symbol, the machine halts. q0 ∈ Q is the initial state, and F ⊆ Q is the set of final or accepting states; the initial tape contents is said to be accepted by M if the machine eventually halts in a state from F. Anything that operates according to these specifications is a Turing machine.

The 7-tuple for the 3-state busy beaver looks like this:

Q = {A, B, C, HALT}
Γ = {0, 1}
b = 0 (the blank symbol)
Σ = {1}
q0 = A (the initial state)
F = {HALT}
δ = see the state table below

Initially all tape cells are marked with 0. In the words of van Emde Boas (p. 6), the set-theoretical object provides only partial information on how the machine will behave and what its computations will look like. For instance, there will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
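The 7-tuple above can be exercised directly. Below is a minimal Python sketch of a Turing machine simulator (our own code, modelling the unbounded tape with a dictionary) running the standard 3-state busy beaver transition table; this machine halts after 13 steps with six 1s on the tape.

```python
# Transition table of the 3-state busy beaver:
# (state, scanned symbol) -> (symbol to write, head move, next state)
delta = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'C'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'B'),
    ('C', 0): (1, -1, 'B'), ('C', 1): (1, +1, 'HALT'),
}

def run(delta, state='A', halt='HALT'):
    tape, head, steps = {}, 0, 0      # the dict models the unbounded tape
    while state != halt:
        symbol = tape.get(head, 0)    # unmarked cells read as the blank, 0
        write, move, state = delta[(state, symbol)]
        tape[head] = write
        head += move
        steps += 1
    return steps, sum(tape.values())

steps, ones = run(delta)
print(steps, ones)  # -> 13 6
```

Each loop iteration performs exactly one elementary operation of the formal machine: read the scanned symbol, write, shift the head, and change state.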
3.
Integer
–
An integer is a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5 1⁄2, and √2 are not. The set of integers consists of zero, the positive natural numbers, also called whole numbers or counting numbers, and their additive inverses (the negative integers). This set is often denoted by a boldface Z or blackboard bold ℤ, standing for the German word Zahlen ("numbers"). ℤ is a subset of the sets of rational and real numbers and, like the natural numbers, is countably infinite. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes called rational integers to distinguish them from the more general algebraic integers; in fact, the (rational) integers are the algebraic integers that are also rational numbers.

Like the natural numbers, Z is closed under the operations of addition and multiplication; that is, the sum and product of any two integers is an integer. Moreover, with the inclusion of the negative natural numbers, and, importantly, 0, Z is also closed under subtraction. The integers form a ring which is the most basic one in the following sense: for any unital ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely to be an initial object in the category of rings, characterizes the ring Z. Z is not closed under division, since the quotient of two integers need not be an integer; and although the natural numbers are closed under exponentiation, the integers are not.

The following lists some of the properties of addition and multiplication for any integers a, b and c. In the language of abstract algebra, the first five properties listed above for addition say that Z under addition is an abelian group. As a group under addition, Z is a cyclic group; in fact, Z under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z. The first four properties listed above for multiplication say that Z under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse; e.g., there is no integer x such that 2x = 1, because the left-hand side is even.
This means that Z under multiplication is not a group. All the rules from the above property table, except for the last, taken together say that Z together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of such algebraic structure. The equalities of expressions that are true in Z for all values of the variables are exactly those that are true in any unital commutative ring; note that certain non-zero integers map to zero in certain rings. The lack of zero divisors in the integers means that the commutative ring Z is an integral domain.
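The ring properties listed above can be spot-checked mechanically. The following Python sketch (our own, and of course only a finite sample, not a proof) verifies closure, commutativity, associativity, and distributivity on a small range of integers, and illustrates why 2 has no multiplicative inverse in Z:

```python
from itertools import product

# Spot-check, on a small sample, the ring axioms the text lists for Z.
sample = range(-3, 4)
for a, b, c in product(sample, repeat=3):
    assert isinstance(a + b, int) and isinstance(a * b, int)  # closure
    assert a + b == b + a and a * b == b * a                  # commutativity
    assert (a + b) + c == a + (b + c)                         # associativity
    assert a * (b + c) == a * b + a * c                       # distributivity

# No multiplicative inverse for 2 in Z: 2x = 1 has no integer solution,
# because the left-hand side is always even.
assert all(2 * x != 1 for x in range(-1000, 1000))
print("all checks passed")
```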
4.
Computer
–
A computer is a device that can be instructed to carry out an arbitrary set of arithmetic or logical operations automatically. The ability of computers to follow a sequence of operations, called a program, makes them applicable to a wide range of tasks; such computers are used as control systems for a very wide variety of industrial and consumer devices. The Internet is run on computers and it connects millions of other computers.

Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century. The first digital electronic calculating machines were developed during World War II. The speed, power, and versatility of computers has increased continuously and dramatically since then.

Conventionally, a modern computer consists of at least one processing element, typically a central processing unit, and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices, output devices, and input/output devices that perform both functions; they allow information to be retrieved from an external source and enable the result of operations to be saved and retrieved.

Historically, the term "computer" referred to a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century; from the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations. The Online Etymology Dictionary gives the first attested use of computer in the 1640s, meaning "one who calculates", and states that the use of the term to mean "calculating machine" is from 1897. The Online Etymology Dictionary indicates that the modern use of the term, to mean "programmable digital electronic computer", dates from 1945 under this name, and in a theoretical sense from 1937, as Turing machine.

Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. Later record-keeping aids throughout the Fertile Crescent included calculi, which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was initially used for arithmetic tasks. The Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many forms of reckoning boards or tables have been invented; in a medieval European counting house, a checkered cloth would be placed on a table as an aid to calculation.

The Antikythera mechanism is believed to be the earliest mechanical analog computer, according to Derek J. de Solla Price. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC.
5.
Harvard architecture
–
The Harvard architecture is a computer architecture with physically separate storage and signal pathways for instructions and data. The term originated from the Harvard Mark I relay-based computer, which stored instructions on punched tape and data in electro-mechanical counters. These early machines had data storage entirely contained within the central processing unit, and provided no access to the instruction storage as data. Programs needed to be loaded by an operator; the processor could not initialize itself.

In a Harvard architecture, there is no need to make the two memories share characteristics. In particular, the word width, timing, implementation technology, and memory address structure can differ. In some systems, instructions for pre-programmed tasks can be stored in read-only memory while data memory generally requires read-write memory. In some systems, there is much more instruction memory than data memory, so instruction addresses are wider than data addresses.

Under pure von Neumann architecture the CPU can be either reading an instruction or reading/writing data from/to the memory; both cannot occur at the same time since the instructions and data use the same bus system. In a computer using the Harvard architecture, the CPU can both read an instruction and perform a data memory access at the same time, even without a cache. A Harvard architecture computer can thus be faster for a given circuit complexity because instruction fetches and data accesses do not contend for a single memory pathway.

Also, a Harvard architecture machine has distinct code and data address spaces: instruction address zero is not the same as data address zero. Instruction address zero might identify a twenty-four-bit value, while data address zero might indicate an eight-bit byte that is not part of that twenty-four-bit value.

The most common modification includes separate instruction and data caches backed by a common address space. While the CPU executes from cache, it acts as a pure Harvard machine; when accessing backing memory, it acts like a von Neumann machine.
This modification is widespread in modern processors, such as the ARM architecture. It is sometimes loosely called a Harvard architecture, overlooking the fact that it is actually "modified".

Another modification provides a pathway between the instruction memory and the CPU to allow words from the instruction memory to be treated as read-only data. This technique is used in some microcontrollers, including the Atmel AVR. It allows constant data, such as text strings or function tables, to be accessed without first having to be copied into data memory, preserving scarce data memory for read/write variables. Special machine language instructions are provided to read data from the instruction memory.

In recent years, the speed of the CPU has grown many times in comparison to the access speed of the main memory. Care needs to be taken to reduce the number of times main memory is accessed in order to maintain performance. If, for instance, every instruction run in the CPU requires an access to memory, the computer gains nothing from increased CPU speed. It is possible to make extremely fast memory, but this is only practical for small amounts of memory, for cost, power, and signal-routing reasons. The solution is to provide a small amount of very fast memory known as a CPU cache, which holds recently accessed data.
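The separation of instruction and data pathways can be sketched with a toy machine. The instruction set below is entirely our own invention for illustration; the point is that the program lives in a read-only structure and the data in a separate read-write one, so an instruction fetch never contends with a data access:

```python
# A toy Harvard-style machine: instructions in read-only program memory,
# data in a separate read-write data memory. (Hypothetical instruction set.)
program = (                  # instruction memory (read-only, hence a tuple)
    ('load',  0),            # acc <- data[0]
    ('add',   1),            # acc <- acc + data[1]
    ('store', 2),            # data[2] <- acc
    ('halt',  None),
)
data = [40, 2, 0]            # data memory (read-write)

pc, acc = 0, 0
while True:
    op, addr = program[pc]   # fetch from instruction memory
    pc += 1
    if op == 'load':
        acc = data[addr]     # data-memory read
    elif op == 'add':
        acc += data[addr]
    elif op == 'store':
        data[addr] = acc     # data-memory write
    elif op == 'halt':
        break

print(data[2])  # -> 42
```

In a real Harvard machine the two memories are separate physical arrays with separate buses, which is what allows the fetch and the data access to overlap in time; the sketch only models the separate address spaces.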
6.
Pointer machine
–
In theoretical computer science, a pointer machine is an "atomistic" abstract computational machine model akin to the random-access machine. Depending on the type, a pointer machine may be called a linking automaton, a KU-machine, an SMM, or an atomistic LISP machine. At least three major varieties exist in the literature: the Kolmogorov–Uspenskii model (KU-machine), the Knuth linking automaton, and the Schönhage Storage Modification Machine model (SMM). The SMM seems to be the most common.

From its read-only tape a pointer machine receives input, bounded symbol-sequences made of at least two symbols, e.g. {0, 1}, and it writes output symbol-sequences on an output write-only tape. To transform an input symbol-sequence to an output symbol-sequence the machine is equipped with a program, a finite-state machine. Via its state machine the program reads the input symbols and operates on its storage structure, a collection of nodes interconnected by edges (pointers). Computation proceeds only by reading input symbols, modifying and doing various tests on the storage structure, the pattern of nodes and pointers; all information is in the storage structure.

Both Gurevich and Ben-Amram list a number of very similar models of abstract machines. The pointer machine's attractiveness as a model for complexity theory is questionable: its time measure is based on uniform time in a context where this measure is known to underestimate the true time complexity. Potential uses for the model exist, however: Schönhage demonstrates some in his §6, and Gurevich wonders whether or not the parallel KU-machine "resembles somewhat the human brain".

Schönhage's SMM model seems to be the most common and most accepted. It is quite unlike the register machine model and other common computational models, e.g. the tape-based Turing machine or the labeled holes of the counter machine. The computer consists of a fixed alphabet of input symbols and a mutable directed graph with its arrows labelled by alphabet symbols. Each node of the graph has one outgoing arrow labelled with each symbol.
One fixed node of the graph is identified as the start or "active" node. Each word over the alphabet can then be identified with a path through the graph starting from the active node; the path can in turn be identified with the resulting node, but this identification will change as the graph changes during the computation. The machine can receive instructions which change the layout of the graph. The basic instructions are the "new w" instruction, which creates a new node which is the "result" of following the string w, and an instruction that redirects an edge. Here w and v represent words; v is a previously-created string of symbols, so that the redirected edge will point "backwards" to an old node that is the "result" of that string.
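The storage structure just described can be sketched in code. The following Python sketch is our own illustrative simplification of a Schönhage-style storage structure (the class, function names, and the choice to have a fresh node's edges initially point to itself are assumptions, not part of Schönhage's formal definition):

```python
# A directed graph in which every node has one outgoing edge per symbol.
ALPHABET = ('0', '1')

class Node:
    def __init__(self):
        # each symbol's edge of a fresh node initially points to the node itself
        self.out = {s: self for s in ALPHABET}

start = Node()  # the fixed start / "active" node

def follow(word):
    """Return the node reached from the start node along the path `word`."""
    node = start
    for s in word:
        node = node.out[s]
    return node

def new(word):
    """'new w': create a fresh node and make it the result of path `word`."""
    parent = follow(word[:-1])
    fresh = Node()
    parent.out[word[-1]] = fresh
    return fresh

def set_edge(word_v, word_w):
    """Redirect the last edge of path v to the (old) node at path w."""
    follow(word_v[:-1]).out[word_v[-1]] = follow(word_w)

a = new('0')           # start --0--> a
b = new('01')          # a --1--> b
set_edge('1', '01')    # start --1--> b as well (a "backwards" pointer)
print(follow('1') is b)  # -> True
```

Note how the identification of a word with a node is not stable: after `set_edge`, the word "1" names the same node as "01", and further graph edits could change that again.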
7.
Universal Turing machine
–
In computer science, a universal Turing machine (UTM) is a Turing machine that can simulate an arbitrary Turing machine on arbitrary input. The universal machine essentially achieves this by reading both the description of the machine to be simulated and the input thereof from its own tape. Alan Turing introduced this machine in 1936–1937. It is also known as a universal computing machine, universal machine, machine U, or simply U. In terms of computational complexity, a multi-tape universal Turing machine need only be slower by a logarithmic factor compared to the machines it simulates.

Every Turing machine computes a certain fixed partial computable function from the strings over its alphabet. In that sense it behaves like a computer with a fixed program. However, we can encode the action table of any Turing machine in a string, and so construct a machine that interprets such encodings. Turing described such a construction in complete detail in his 1936 paper. Davis quotes Time magazine to the effect that everyone who taps at a keyboard is working on an incarnation of a Turing machine, and that John von Neumann built on the work of Alan Turing. Davis makes a case that Turing's Automatic Computing Engine (ACE) computer anticipated the notions of microprogramming, and Knuth cites Turing's work on the ACE computer as designing "hardware to facilitate subroutine linkage"; Davis also references this work as Turing's use of a hardware "stack".

As the Turing machine was encouraging the construction of computers, the UTM was encouraging the development of the computer sciences. An early, if not the very first, assembler was proposed by a young hot-shot programmer for the EDVAC. Knuth observes that the subroutine return embedded in the program itself rather than in special registers is attributable to von Neumann and Goldstine. Knuth furthermore states that the first interpretive routine may be said to be the Universal Turing Machine, and that interpretive routines in the conventional sense were mentioned by John Mauchly in his lectures at the Moore School in 1946.
Turing took part in this development also; interpretive systems for the Pilot ACE computer were written under his direction. Davis briefly mentions operating systems and compilers as outcomes of the notion of program-as-data.

Some, however, might raise issues with this assessment. At the time, a small cadre of researchers were intimately involved with the architecture of the new digital computers. Hao Wang observed that these two aspects of theory and practice had been developed almost entirely independently of each other; the main reason is undoubtedly that logicians are interested in questions radically different from those with which the applied mathematicians and electrical engineers are primarily concerned. It cannot, however, fail to strike one as rather strange that often the same concepts are expressed by different terms in the two developments. Wang hoped that his paper would connect the two approaches. Indeed, Minsky confirms this: the first formulation of Turing-machine theory in computer-like models appears in Wang. Minsky goes on to demonstrate Turing equivalence of a counter machine.

With respect to the reduction of computers to simple Turing-equivalent models, Minsky's designation of Wang as having made "the first formulation" is open to debate. The names of mathematicians Hermes and Kaphengst appear in the bibliographies of both Shepherdson–Sturgis and Elgot–Robinson. Two other names of importance are Canadian researchers Melzak and Lambek.
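The core idea of the universal machine, that an action table can itself be passed around as data and interpreted by one fixed program, can be sketched briefly. The encoding below (a dictionary plus start, halt, and blank markers) is our own convenience, not Turing's original string encoding:

```python
def universal(encoded_machine, tape_input):
    """A fixed interpreter that simulates whatever machine it is handed."""
    delta, start, halt, blank = encoded_machine
    tape = dict(enumerate(tape_input))  # the simulated machine's tape
    head, state = 0, start
    while state != halt:
        write, move, state = delta[(state, tape.get(head, blank))]
        tape[head] = write
        head += move
    # read off the non-blank tape contents, left to right
    lo, hi = min(tape), max(tape)
    out = []
    for i in range(lo, hi + 1):
        s = tape.get(i, blank)
        if s != blank:
            out.append(s)
    return out

# An encoded one-state machine that flips bits until it runs off the input.
flipper = (
    {('S', 0): (1, +1, 'S'),
     ('S', 1): (0, +1, 'S'),
     ('S', None): (None, 0, 'H')},
    'S', 'H', None,
)
print(universal(flipper, [1, 0, 1]))  # -> [0, 1, 0]
```

`universal` never changes: to simulate a different machine, only the data handed to it changes, which is exactly the sense in which the first interpretive routine "may be said to be" the universal Turing machine.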
8.
Von Neumann architecture
–
The meaning of "von Neumann architecture" has evolved to be any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus. This is referred to as the von Neumann bottleneck and often limits the performance of the system.

A stored-program digital computer is one that keeps its program instructions, as well as its data, in read-write, random-access memory. The earliest computing machines had fixed programs; some very simple computers still use this design, either for simplicity or training purposes. For example, a desk calculator is a fixed-program computer: it can do mathematics, but it cannot be used as a word processor or a gaming console. Changing the program of a fixed-program machine requires rewiring, restructuring, or redesigning the machine. The earliest computers were not so much "programmed" as they were "designed"; it could take three weeks to set up a program on ENIAC and get it working. With the proposal of the stored-program computer, this changed. A stored-program computer includes, by design, an instruction set, and can store in memory a set of instructions (a program) that details the computation.

A stored-program design also allows for self-modifying code. One early motivation for such a facility was the need for a program to increment or otherwise modify the address portion of instructions; this became less important when index registers and indirect addressing became usual features of machine architecture. Another use was to embed frequently used data in the instruction stream using immediate addressing. Self-modifying code has largely fallen out of favor, since it is hard to understand and debug.

On a large scale, the ability to treat instructions as data is what makes assemblers, compilers, linkers, loaders, and other automated programming tools possible; one can write programs which write programs. This has allowed a sophisticated self-hosting computing ecosystem to flourish around von Neumann architecture machines. On a smaller scale, some repetitive operations such as BITBLT or pixel and vertex shaders could be accelerated on general-purpose processors with just-in-time compilation techniques.
This is one use of self-modifying code that has remained popular.

The stored-program concept traces back to Alan Turing's 1936 paper, in which he described a hypothetical machine which he called a "universal computing machine", now known as the universal Turing machine. The hypothetical machine had a store that contained both instructions and data. John von Neumann became acquainted with Turing during Turing's time at Princeton in 1936–1937; whether he knew of Turing's paper of 1936 at that time is not clear. In 1936, Konrad Zuse also anticipated in two patent applications that machine instructions could be stored in the same storage used for data. In planning a new machine, EDVAC, Eckert wrote in January 1944 that they would store data and programs in a new addressable memory device; this was the first time the construction of a practical stored-program machine was proposed.
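The early motivation mentioned above, a program rewriting the address portion of its own instructions before index registers existed, can be shown with a toy machine. Everything here (the opcode names, the tuple encoding, the memory layout) is our own illustrative invention; the point is that instructions and data sit in one memory, so the program can patch itself:

```python
# A toy von Neumann machine: one memory holds both instructions (tuples)
# and data (ints). The program sums the array at addresses 6-8 by
# repeatedly incrementing the address field of its own ADD instruction.
mem = [
    ('ADD', 6),          # 0: acc += mem[addr]; addr is patched each pass
    ('PATCH', 0),        # 1: rewrite mem[0] so its address field grows by 1
    ('JLT', (0, 9, 0)),  # 2: if address field of mem[0] < 9, jump to 0
    ('STORE', 9),        # 3: mem[9] <- acc
    ('HALT', None),      # 4
    0,                   # 5: unused padding
    10, 20, 12,          # 6-8: the array to sum
    0,                   # 9: the result goes here
]

acc, pc = 0, 0
while True:
    op, arg = mem[pc]
    pc += 1
    if op == 'ADD':
        acc += mem[arg]
    elif op == 'PATCH':                    # the self-modifying step
        target_op, target_addr = mem[arg]
        mem[arg] = (target_op, target_addr + 1)
    elif op == 'JLT':
        instr, limit, dest = arg
        if mem[instr][1] < limit:
            pc = dest
    elif op == 'STORE':
        mem[arg] = acc
    elif op == 'HALT':
        break

print(mem[9])  # -> 42
```

On a fixed-program or pure Harvard machine the PATCH step would be impossible, since instruction storage is not writable as data; index registers later made this idiom unnecessary.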
9.
Reduced instruction set computer
–
A computer based on this strategy is a reduced instruction set computer, also called RISC. The opposing architecture is called complex instruction set computing (CISC). Although a number of systems from the 1960s and 70s have been identified as forerunners of RISC, the modern version of the design dates to the 1980s. In particular, two projects at Stanford University and the University of California, Berkeley are most associated with the popularization of this concept. Stanford's design would go on to be commercialized as the successful MIPS architecture, while Berkeley's RISC gave its name to the entire concept. Another success from this era was IBM's effort that eventually led to the Power Architecture. RISC families include DEC Alpha, AMD Am29000, ARC, ARM, Atmel AVR, Blackfin, Intel i860 and i960, MIPS, Motorola 88000, PA-RISC, Power, RISC-V, SuperH, and SPARC. In the 21st century, the use of ARM architecture processors in smartphones and tablet computers such as the iPad provided a wide user base for RISC-based systems.

A number of systems, going back to the 1960s, have been credited as the first RISC architecture, partly based on their use of the load/store approach. The term RISC was coined by David Patterson of the Berkeley RISC project. Michael J. Flynn views the first RISC system as the IBM 801 design, begun in 1975 by John Cocke and completed in 1980. The 801 was eventually produced in single-chip form as the ROMP in 1981; as the name implies, this CPU was designed for smaller tasks, and was also used in the IBM RT PC in 1986. But the 801 inspired several research projects, including new ones at IBM that would eventually lead to the IBM POWER instruction set architecture. The most public RISC designs, however, were the results of university research programs run with funding from the DARPA VLSI Program. The VLSI Program, practically unknown today, led to a huge number of advances in chip design, fabrication, and even computer graphics.
The Berkeley RISC project started in 1980 under the direction of David Patterson. Berkeley RISC was based on gaining performance through the use of pipelining and an aggressive use of a technique known as register windowing. In a traditional CPU, one has a small number of registers, and a program can use any register at any time. In a CPU with register windows, there is a huge number of registers, e.g. 128, but a program can only use a small number of them, e.g. eight, at any one time; a procedure call simply slides the window to a fresh block of registers, and the return slides it back, avoiding memory traffic for saving and restoring registers.

The Berkeley RISC project delivered the RISC-I processor in 1982. Consisting of only 44,420 transistors, RISC-I had only 32 instructions, and yet completely outperformed any other single-chip design. They followed this up with the 40,760-transistor, 39-instruction RISC-II in 1983, which ran over three times as fast as RISC-I.

The MIPS architecture grew out of a graduate course taught by John L. Hennessy at Stanford University in 1981 and resulted in a functioning system in 1983. The MIPS approach emphasized an aggressive clock cycle and the use of the pipeline. The MIPS system was followed by the MIPS-X, and in 1984 Hennessy and his colleagues formed MIPS Computer Systems.
10.
Computer program
–
A computer program is a collection of instructions that performs a specific task when executed by a computer. A computer requires programs to function, and typically executes a program's instructions in a central processing unit.

A computer program is written by a computer programmer in a programming language. From the program in its human-readable form of source code, a compiler can derive machine code, a form consisting of instructions that the computer can directly execute. Alternatively, a program may be executed with the aid of an interpreter. A part of a program that performs a well-defined task is known as an algorithm. A collection of programs, libraries, and related data is referred to as software. Computer programs may be categorized along functional lines, such as application software or system software.

The earliest programmable machines preceded the invention of the digital computer. In 1801, Joseph-Marie Jacquard devised a loom that would weave a pattern by following a series of perforated cards; patterns could be woven and repeated by arranging the cards. In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine. The names of the components of the device were borrowed from the textile industry, where yarn was brought from the store to be milled. The device would have had a "store", memory to hold 1,000 numbers of 40 decimal digits each, and numbers from the store would then have been transferred to the "mill" for processing. It was programmed using two sets of perforated cards, one to direct the operation and the other for the input variables. However, after more than 17,000 pounds of the British government's money had been spent, the thousands of cogged wheels and gears never fully worked together.

During a nine-month period in 1842–43, Ada Lovelace translated the memoir of Italian mathematician Luigi Menabrea; the memoir covered the Analytical Engine.
The translation contained Note G, which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine. This note is recognized by some historians as the world's first written computer program.

In 1936, Alan Turing introduced the Universal Turing machine, a theoretical device that can model every computation that can be performed on a Turing-complete computing machine. It is a finite-state machine that has an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm.
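The compiler/interpreter distinction above can be made concrete with a toy language of our own invention: an interpreter reads each source instruction and carries it out directly, rather than first translating the whole program into machine code.

```python
# A toy interpreter for a made-up one-accumulator language with three
# instructions: "set n", "add n", "mul n".
def interpret(source):
    acc = 0
    for line in source.strip().splitlines():
        op, _, arg = line.partition(' ')
        if op == 'set':
            acc = int(arg)
        elif op == 'add':
            acc += int(arg)
        elif op == 'mul':
            acc *= int(arg)
        else:
            raise ValueError(f'unknown instruction: {line!r}')
    return acc

program = """
set 6
add 1
mul 6
"""
print(interpret(program))  # -> 42
```

A compiler for the same language would instead emit an executable translation of all three lines once, trading translation time up front for faster repeated execution.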
11.
Emil Leon Post
–
Emil Leon Post was a Polish-born American mathematician and logician. He is best known for his work in the field that eventually became known as computability theory.

Post was born in Augustów, Suwałki Governorate, Russian Empire, into a Polish-Jewish family that immigrated to New York City in May 1904. His parents were Arnold and Pearl Post. Post had been interested in astronomy, but at the age of twelve lost an arm in a car accident. This loss was a significant obstacle to being a professional astronomer, and he decided to pursue mathematics rather than astronomy. Post attended Townsend Harris High School and continued on to graduate from City College of New York in 1917 with a B.S. in mathematics. After completing his Ph.D. in mathematics at Columbia University, supervised by Cassius Jackson Keyser, Post became a high school mathematics teacher in New York City. Post married Gertrude Singer in 1929, with whom he had a daughter. Post spent at most three hours a day on research, on the advice of his doctor, in order to avoid the manic attacks he had been experiencing since his year at Princeton. In 1936, he was appointed to the mathematics department at the City College of New York. He died in 1954 of a heart attack following electroshock treatment for depression.

Post devised truth tables independently of Wittgenstein and C. S. Peirce; Jean van Heijenoort's well-known source book on mathematical logic reprinted Post's classic article setting out these results. While at Princeton, Post came very close to discovering the incompleteness of Principia Mathematica; he initially failed to publish his ideas, as he believed he needed a complete analysis for them to be accepted. In 1936, Post developed, independently of Alan Turing, a model of computation that was essentially equivalent to the Turing machine model.
Intending this as the first of a series of models of equivalent power but increasing complexity, Post devised a method of "auxiliary symbols" by which he could canonically represent any Post-generative language, and indeed any computable function or set at all. The unsolvability of his Post correspondence problem turned out to be exactly what was needed to obtain unsolvability results in the theory of formal languages. He also raised the question of whether there exists an uncomputable recursively enumerable set whose Turing degree is less than that of the halting problem; this question, which became known as Post's problem, stimulated much research. It was solved in the affirmative in the 1950s by the introduction of the priority method in recursion theory.

Post made a fundamental and still-influential contribution to the theory of polyadic, or n-ary, groups. His major theorem showed that a polyadic group is the iterated multiplication of elements of a normal subgroup of a group, such that the quotient group is cyclic of order n − 1. He also demonstrated that a polyadic group operation on a set can be expressed in terms of a group operation on the same set.
12.
Diophantine equation
–
In mathematics, a Diophantine equation is a polynomial equation, usually in two or more unknowns, such that only the integer solutions are sought or studied. A linear Diophantine equation is an equation between two sums of monomials of degree zero or one; an exponential Diophantine equation is one in which exponents on terms can be unknowns. Diophantine problems typically have fewer equations than unknown variables and involve finding integers that satisfy all the equations simultaneously. In more technical language, they define an algebraic curve, algebraic surface, or more general object, and ask about the lattice points on it. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis. For the linear Diophantine equation ax + by = c, the solutions are described by the following theorem: this equation has a solution if and only if c is a multiple of the greatest common divisor d of a and b. Moreover, writing a = ud and b = vd, if (x, y) is a solution, then the other solutions have the form (x + kv, y − ku), where k is an arbitrary integer. Proof: if d is this greatest common divisor, Bézout's identity asserts the existence of integers e and f such that ae + bf = d. If c is a multiple of d, then c = dh for some integer h, and (eh, fh) is a solution. On the other hand, for every pair of integers x and y, d divides ax + by; thus, if the equation has a solution, then c must be a multiple of d. If a = ud and b = vd, then for every solution (x, y) we have a(x + kv) + b(y − ku) = ax + by + k(av − bu) = ax + by + k(udv − vdu) = ax + by, showing that (x + kv, y − ku) is another solution. Finally, given two solutions such that ax1 + by1 = ax2 + by2 = c, one deduces that u(x2 − x1) + v(y2 − y1) = 0. As u and v are coprime, Euclid's lemma shows that v divides x2 − x1, that is, there exists an integer k such that x2 − x1 = kv. Therefore, x2 = x1 + kv and y2 = y1 − ku, which completes the proof. More generally, a system of linear Diophantine equations AX = C may be solved by computing the Smith normal form of A, that is, by finding unimodular integer matrices U and V such that B = UAV is diagonal; the system to be solved may thus be rewritten as B(V⁻¹X) = UC. Calling yi the entries of V⁻¹X and di those of D = UC, it follows that the system has a solution if and only if bi,i divides di for i ≤ k and di = 0 for i > k, where k is the number of nonzero diagonal entries of B. If this condition is fulfilled, the solutions of the given system are obtained by applying V to the solutions of the diagonal system.
Hermite normal form may also be used for solving systems of linear Diophantine equations. However, the Hermite normal form does not directly provide the solutions; to get the solutions from the Hermite normal form, one has to successively solve several linear equations. Nevertheless, Richard Zippel wrote that the Smith normal form is more than is actually needed to solve linear Diophantine equations: instead of reducing the equation to diagonal form, we only need to make it triangular, and the Hermite normal form is substantially easier to compute than the Smith normal form. Integer linear programming amounts to finding some integer solutions of systems that also include inequalities.
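The single-equation case described by the theorem above can be sketched in a few lines of Python. This is an illustration built on the standard extended Euclidean algorithm; the function names are illustrative, not from the source.

```python
def ext_gcd(a, b):
    """Return (d, e, f) with a*e + b*f == d == gcd(a, b) (Bezout's identity)."""
    if b == 0:
        return (a, 1, 0)
    d, e, f = ext_gcd(b, a % b)
    return (d, f, e - (a // b) * f)

def solve_linear_diophantine(a, b, c):
    """Return one integer solution (x, y) of a*x + b*y == c, or None if none exists."""
    d, e, f = ext_gcd(a, b)
    if c % d != 0:
        return None          # solvable iff gcd(a, b) divides c
    h = c // d
    return (e * h, f * h)    # scale the Bezout coefficients by h

# Every other solution is (x + k*(b//d), y - k*(a//d)) for an arbitrary integer k,
# matching the (x + kv, y - ku) family in the theorem.
```

For example, solve_linear_diophantine(6, 10, 8) returns a pair (x, y) with 6x + 10y = 8, while solve_linear_diophantine(4, 6, 3) returns None because gcd(4, 6) = 2 does not divide 3.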
13.
Hans Hermes
–
Hans Hermes was a German mathematician and logician who made significant contributions to the foundations of mathematical logic. Hermes was born in Neunkirchen, Germany. From 1931, Hermes studied mathematics, physics, chemistry, biology and philosophy at the University of Freiburg. In 1937 he passed the state examination in Münster, where he was in attendance in 1938 when the physicist Adolf Kratzer was present. After that he went on a scholarship to the University of Göttingen and then became an assistant at the University of Bonn. During World War II he was a soldier on the Channel Island of Jersey until 1943; at the end of the war he moved to Toplitzsee, where he was tasked with working on new encryption methods. In 1947, he became a lecturer at the University of Bonn, where he took his habilitation. In 1949 he became a professor at the University of Münster, where he turned back to the subject of mathematical logic. Hermes was a pioneer in using the Turing machine as the central concept of computability. In 1952, he published an encyclopedia article together with Heinrich Scholz. In 1953 he took over from Heinrich Scholz the management of the influential Institute for Mathematical Logic and Foundational Research at the University of Münster. Under his leadership, the Institute became a center for attracting young researchers. With Hermes there, among others, were Wilhelm Ackermann and Gisbert Hasenjaeger. Hermes's textbooks, as well as his scientific work, led Heinz-Dieter Ebbinghaus to note the originality, accuracy and intuitive clarity of his writing. He was also an academic teacher who knew how to convey difficult issues and complicated proofs. Hermes also worked on the compilation and publication of the papers of Gottlob Frege. In 1962 he was one of the founding members of the German association for mathematical logic and for basic research in the exact sciences.
In 1950, together with Arnold Schmidt and Jürgen von Kempski, he was a co-founder of the Archive for Mathematical Logic; in 1967, he became a member of the Heidelberg Academy of Sciences. Selected works: Semester Reports for the Care of the Relationship between University and School from the Seminars, Münster 1937, 110–123; Research on Logic and the Foundations of the Sciences, Issue 3; Machines for the Decision of Mathematical Problems, Mathematical and Physical Semester Reports, 179–189; The Universality of Program-Controlled Computing Machines, Mathematical and Physical Semester Reports 4, 42–53; Enumerability – Decidability – Computability: Introduction to the Theory of Recursive Functions, Berlin – Göttingen – Heidelberg 1955 (2nd, expanded edition 1967).
14.
Joachim Lambek
–
Joachim "Jim" Lambek was Peter Redpath Emeritus Professor of Pure Mathematics at McGill University, where he earned his Ph.D. degree in 1950 with Hans Zassenhaus as advisor. Lambek supervised 16 doctoral students and has 51 doctoral descendants; he has over 100 publications listed in the Mathematical Reviews, including 6 books. His earlier work was mostly in module theory, especially torsion theories and non-commutative localization. One of his earliest papers proved the Lambek–Moser theorem about integer sequences. His more recent work was in pregroups and formal languages; his last works were on pregroup grammar. Selected publications: Lambek, Joachim, Completions of Categories: Seminar Lectures Given in 1966 in Zürich, Lecture Notes in Mathematics, Vol. 177, Berlin, New York: Springer-Verlag, MR 0284459; Lambek, J. and Scott, P. J., Introduction to Higher Order Categorical Logic, Cambridge University Press, ISBN 978-0-521-35653-4, MR 856915; Anglin, W. S., 61, 454–458, doi:10.2307/2308078, ISSN 0002-9890, JSTOR 2308078, MR 0062777; Lambek, J., The Mathematics of Sentence Structure, The American Mathematical Monthly, Vol. 65.
15.
Marvin Minsky
–
Marvin Lee Minsky was born in New York City, to an eye surgeon father, Henry, and to a mother, Fannie, who was an activist in Zionist affairs. He attended the Ethical Culture Fieldston School and the Bronx High School of Science, and he later attended Phillips Academy in Andover, Massachusetts. He then served in the US Navy from 1944 to 1945. He received a B.A. in mathematics from Harvard University and a Ph.D. in mathematics from Princeton University. He was on the MIT faculty from 1958 until his death. In 1959 he and John McCarthy initiated what is now known as the MIT Computer Science and Artificial Intelligence Laboratory. He was the Toshiba Professor of Media Arts and Sciences, and professor of electrical engineering. Minsky's inventions include the first head-mounted graphical display and the confocal microscope. He developed, with Seymour Papert, the first Logo turtle. Minsky also built, in 1951, the first randomly wired neural network learning machine, SNARC. Minsky wrote the book Perceptrons, which became the foundational work in the analysis of artificial neural networks. He also created several other famous AI models, and his paper "A Framework for Representing Knowledge" created a new paradigm in programming. While his Perceptrons is now more a historical than practical book, Minsky has also written on the possibility that extraterrestrial life may think like humans, permitting communication. In Arthur C. Clarke's derivative novel of the same name, Minsky's work is referenced: "Probably no one would ever know this. In the 1980s, Minsky and Good had shown how neural networks could be generated automatically (self-replicated) in accordance with any arbitrary learning program. Artificial brains could be grown by a process strikingly analogous to the development of a human brain. In any given case, the details would never be known." In the early 1970s, at the MIT Artificial Intelligence Lab, Minsky began developing what came to be known as the Society of Mind theory. The theory attempts to explain how what we call intelligence could be a product of the interaction of non-intelligent parts.
Minsky said that the biggest source of ideas for the theory came from his work in trying to create a machine that uses a robotic arm, a video camera, and a computer to build with children's blocks. In 1986, Minsky published The Society of Mind, a book on the theory. Recent drafts of the book are available from his webpage. In 1952, Minsky married pediatrician Dr. Gloria Rudisch; together they had three children. Minsky was a talented improvisational pianist who published musings on the relations between music and psychology. Minsky was an atheist and a signatory to the Scientists' Open Letter on Cryonics, and he was a critic of the Loebner Prize for conversational robots.
16.
Yuri Matiyasevich
–
Yuri Vladimirovich Matiyasevich is a Russian mathematician and computer scientist. He is best known for his solution of Hilbert's tenth problem. As a winner of the International Mathematical Olympiad, Yuri Matiyasevich was accepted without exams to Leningrad State University (LSU). In 1966, he presented a talk at the International Congress of Mathematicians held in Moscow; he was an undergraduate student at that time. In 1969–1970, he pursued Ph.D. studies at the Leningrad Department of the Steklov Institute of Mathematics (LOMI) under the supervision of Sergey Maslov, and in 1970 he received his Ph.D. degree at LOMI. In 1970–1974, he was a researcher at LOMI; in 1972, he obtained a second doctoral degree; in 1974–1980, he was a senior researcher at LOMI. Since 1980, Yuri Matiyasevich has been the head of the Laboratory of Mathematical Logic at LOMI. Since 1995, he has been a professor at Saint Petersburg State University, initially in the chair of software engineering, later in the chair of algebra and number theory. In 1997, he was elected a member of the Russian Academy of Sciences. Since 1998, Yuri Matiyasevich has been a vice-president of the St. Petersburg Mathematical Society; since 2002, he has been the head of the St. Petersburg City Mathematical Olympiad; since 2003, he has been a co-director of the annual German–Russian student school JASS. In 2008, he was elected a full member of the Russian Academy of Sciences. Honors: 1964, gold medal at the International Mathematical Olympiad held in Moscow; 1970, young mathematician prize of the Leningrad Mathematical Society; 1980, Markov Prize of the Academy of Sciences of the USSR; 1998, Humboldt Research Award to Outstanding Scholars; 2003, honorary degree, Université Pierre et Marie Curie; 2007, member of the Bavarian Academy of Sciences. A polynomial related to the colourings of a triangulation of a sphere was named after Matiyasevich; see "The Matiyasevich polynomial, four colour theorem". Notable students include Eldar Musayev, Maxim Vsemirnov, Alexei Pastor and Dmitri Karpov.
Selected publications: Yuri Matiyasevich, Hilbert's 10th Problem, foreword by Martin Davis and Hilary Putnam; "Real-time recognition of the inclusion relation"; "Reduction of an arbitrary Diophantine equation to one in 13 unknowns"; "Decision problems for Semi-Thue systems with a few rules"; Yuri Matiyasevich, "Proof procedures as bases for metamathematical proofs in discrete mathematics"; Yuri Matiyasevich, "Elimination of bounded universal quantifiers standing in front of a quantifier-free arithmetical formula"; Personal Journal of Yuri Matiyasevich.
17.
Martin Davis
–
Martin David Davis is an American mathematician known for his work on Hilbert's tenth problem. Davis's parents were Jewish immigrants to the US from Łódź, Poland; Davis grew up in the Bronx, where his parents encouraged him to obtain a full education. He received his Ph.D. from Princeton University in 1950, and he is Professor Emeritus at New York University. Davis is the co-inventor of the Davis–Putnam and DPLL algorithms, and he is also known for his model of Post–Turing machines. In 1975, Davis won the Leroy P. Steele Prize and the Chauvenet Prize. He became a fellow of the American Academy of Arts and Sciences in 1982, and in 2012 he was selected as one of the inaugural fellows of the American Mathematical Society. Selected works: Davis, Martin; Weyuker, Elaine J.; Sigal, Ron, Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science; Engines of Logic: Mathematicians and the Origin of the Computer (hardcover edition published as The Universal Computer); a review of Engines of Logic: Wallace, Richard S., "Mathematicians who forget the mistakes of history: a review of Engines of Logic by Martin Davis", ALICE A.I. Articles: Davis, Martin, "Is mathematical insight algorithmic?", Behavioral and Brain Sciences, 13. See also: criticism of non-standard analysis; halting problem; influence of non-standard analysis; Martin Davis website.
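The DPLL procedure mentioned above can be sketched compactly in Python. This is a bare-bones illustration (unit propagation plus branching on a literal), not Davis, Putnam, Logemann and Loveland's original formulation; clauses are lists of nonzero integers, with a negative integer denoting a negated variable.

```python
def simplify(clauses, lit):
    """Assume lit is true: drop satisfied clauses, delete -lit from the rest."""
    out = []
    for c in clauses:
        if lit in c:
            continue                      # clause already satisfied
        out.append([l for l in c if l != -lit])
    return out

def dpll(clauses):
    """Return True iff the CNF formula (list of integer clauses) is satisfiable."""
    if not clauses:
        return True                       # no clauses left: satisfied
    if any(len(c) == 0 for c in clauses):
        return False                      # empty clause: contradiction
    for c in clauses:                     # unit propagation
        if len(c) == 1:
            return dpll(simplify(clauses, c[0]))
    lit = clauses[0][0]                   # branch: try lit true, then false
    return dpll(simplify(clauses, lit)) or dpll(simplify(clauses, -lit))
```

For instance, dpll([[1, 2], [-1, 2], [-2]]) returns False (the unit clause forces variable 2 false, leaving the contradictory pair 1 and not-1), while dpll([[1, -2], [2, 3], [-3]]) returns True.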
18.
Turing machine examples
–
The following are examples to supplement the article Turing machine. The following table is Turing's very first example: "1. A machine can be constructed to compute the sequence 0 1 0 1 0 1...". He makes this clear when he reduces the above table to a single instruction called "b". Instruction b has three different symbol possibilities. As stated in the article Turing machine, Turing proposed that his table be further atomized by allowing only a single print/erase followed by a single tape movement L/R/N. He gives us this example of the first little table converted; thus when printing, the machine skips every other square. The printed-on squares are called F-squares; the squares in between may be used for markers and are called E-squares, as in "liable to erasure". The F-squares in turn are his "figure squares" and will bear the symbols 1 or 0, symbols he called "figures". In this example the tape starts out blank, and the figures are then printed on it. Suppose, however, that his tape was not initially blank: the Turing machine would then read different values than the intended values. This is a very important subroutine used in the "multiply" routine. The example Turing machine handles a string of 0s and 1s; its task is to double any series of 1s encountered on the tape by writing a 0 between the original and the copy. For example, when the head reads "111", it will write a 0, then "111": the output will be "1110111". In order to accomplish its task, this Turing machine will need only 5 states of operation, called s1 to s5. State s3 skips over the sequence of 1s and replaces the first 0 it finds with a 1. State s4 moves back to the left, skipping over 1s until it finds a 0; s5 then continues to the left, skipping over 1s until it finds the 0 that was originally written by s1. It replaces that 0 with a 1 and moves one position to the right; this continues until s1 finds a 0, at which time the machine halts. Another description sees the problem as how to keep track of how many 1s there are. The basic way it works is by copying each 1 to the other side, moving back and forth.
In more detail, it carries each 1 across to the other side, by recognizing the separating 0 in the middle. It comes back using the same method, detecting the middle 0.
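The five-state doubling machine described above is easy to simulate in software. The transition table below is reconstructed from the prose description (s1 writes the 0, s3 copies a 1 across, s4 and s5 walk back), so it is an illustrative sketch rather than a table taken from the source; blank squares read as 0.

```python
# (state, symbol) -> (symbol to write, head move, next state); 'H' means halt.
RULES = {
    ('s1', 1): (0, +1, 's2'), ('s1', 0): (0,  0, 'H'),
    ('s2', 1): (1, +1, 's2'), ('s2', 0): (0, +1, 's3'),
    ('s3', 1): (1, +1, 's3'), ('s3', 0): (1, -1, 's4'),
    ('s4', 1): (1, -1, 's4'), ('s4', 0): (0, -1, 's5'),
    ('s5', 1): (1, -1, 's5'), ('s5', 0): (1, +1, 's1'),
}

def run(ones):
    """Run the 5-state doubling machine on a tape holding a block of `ones` 1s."""
    tape = {i: 1 for i in range(ones)}   # unwritten squares are blank (0)
    pos, state = 0, 's1'
    while state != 'H':
        write, move, state = RULES[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
    lo, hi = min(tape), max(tape)
    return ''.join(str(tape.get(i, 0)) for i in range(lo, hi + 1))

print(run(3))  # -> 1110111
```

Running it on a block of three 1s reproduces the behavior in the text: the input 111 is doubled into 1110111, with the separating 0 in the middle.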
19.
Partial function
–
In mathematics, a partial function from X to Y is a function f: X′ → Y, for some subset X′ of X. It generalizes the concept of a function f: X → Y by not forcing f to map every element of X to an element of Y. If X′ = X, then f is called a total function and is equivalent to a function. Partial functions are often used when the exact domain, X′, is not known. Specifically, we say that for any x ∈ X, either f(x) = y for a single y ∈ Y, or f(x) is undefined. For example, we can consider the square root function restricted to the integers: g: Z → Z, g(n) = √n. Thus g is defined only for those n that are perfect squares. So g(25) = 5, but g(26) is undefined. There are two distinct meanings in current mathematical usage for the notion of the domain of a partial function. Most mathematicians, including recursion theorists, use the term "domain of f" for the set of all values x such that f(x) is defined. But some, particularly category theorists, consider the domain of a partial function f: X → Y to be X. Similarly, the range can refer to either the codomain or the image of a function. Occasionally, a partial function with domain X and codomain Y is written as f: X ⇸ Y. A partial function is said to be injective or surjective when the function given by the restriction of the partial function to its domain of definition is injective or surjective, respectively. A partial function may be both injective and surjective; because a function is trivially surjective when restricted to its image, the term "partial bijection" denotes a partial function which is injective. An injective partial function may be inverted to a partial function. Furthermore, a function which is injective may be inverted to an injective partial function. The notion of transformation can be generalized to partial functions as well. A partial transformation is a function f: A → B, where both A and B are subsets of some set X. "Total function" is a synonym for function.
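The square-root example g above can be modeled directly in Python, using None as the conventional stand-in for "undefined" (this encoding is our assumption; the mathematics does not prescribe it):

```python
import math
from typing import Optional

def g(n: int) -> Optional[int]:
    """Partial function g: Z -> Z, g(n) = sqrt(n), defined only on perfect squares."""
    if n < 0:
        return None                   # negative integers have no integer square root
    r = math.isqrt(n)                 # integer square root, rounded down
    return r if r * r == n else None  # undefined unless n is a perfect square

print(g(25))  # -> 5
print(g(26))  # -> None
```

Here the domain of definition is the set of perfect squares {0, 1, 4, 9, 16, ...}, a proper subset of the domain Z, matching the distinction drawn in the text.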
20.
John von Neumann
–
John von Neumann was a Hungarian-American mathematician, physicist, inventor, computer scientist, and polymath. He made major contributions to a number of fields, including mathematics, physics, economics, computing, and statistics. He published over 150 papers in his life: about 60 in pure mathematics, 20 in physics, and 60 in applied mathematics. His last work, an unfinished manuscript written while he was in the hospital, was later published in book form as The Computer and the Brain. His analysis of the structure of self-replication preceded the discovery of the structure of DNA. Of his own work he wrote: "my work on various forms of operator theory, Berlin 1930 and Princeton 1935–1939; on the ergodic theorem, Princeton, 1931–1932." During World War II he worked on the Manhattan Project, developing the mathematical models behind the explosive lenses used in the implosion-type nuclear weapon. After the war, he served on the General Advisory Committee of the United States Atomic Energy Commission. Along with theoretical physicist Edward Teller, mathematician Stanislaw Ulam, and others, he worked out key steps in the nuclear physics involved in thermonuclear reactions and the hydrogen bomb. Von Neumann was born Neumann János Lajos to a wealthy, acculturated Jewish family. His place of birth was Budapest in the Kingdom of Hungary, which was then part of the Austro-Hungarian Empire. He was the eldest of three children; he had two younger brothers, Michael, born in 1907, and Nicholas, who was born in 1911. His father, Neumann Miksa, was a banker who held a doctorate in law; he had moved to Budapest from Pécs at the end of the 1880s. Miksa's father and grandfather were both born in Ond, Zemplén County, northern Hungary. John's mother was Kann Margit; her parents were Jakab Kann and Katalin Meisels. Three generations of the Kann family lived in apartments above the Kann-Heller offices in Budapest.
In 1913, his father was elevated to the nobility for his service to the Austro-Hungarian Empire by Emperor Franz Joseph; the Neumann family thus acquired the hereditary appellation Margittai, meaning "of Marghita". The family had no connection with the town; the appellation was chosen in reference to Margaret. Neumann János became Margittai Neumann János, which he later changed to the German Johann von Neumann. Von Neumann was a child prodigy: as a six-year-old, he could multiply and divide two eight-digit numbers in his head and could converse in Ancient Greek. When he once caught his mother staring aimlessly, the six-year-old von Neumann asked her, "What are you calculating?" Formal schooling did not start in Hungary until the age of ten. Instead, governesses taught von Neumann, his brothers and his cousins. Max believed that knowledge of languages other than Hungarian was essential, so the children were tutored in English, French, German and Italian. A copy was contained in a private library Max purchased; one of the rooms in the apartment was converted into a library and reading room, with bookshelves from ceiling to floor. Von Neumann entered the Lutheran Fasori Evangélikus Gimnázium in 1911. This was one of the best schools in Budapest, part of a brilliant education system designed for the elite.
21.
MIT Lincoln Laboratory
–
Research and development activities focus on long-term technology development as well as rapid system prototyping and demonstration. These efforts are aligned within key mission areas, and the laboratory works with industry to transition new concepts and technology for system development and deployment. The laboratory also maintains several field sites around the world. The laboratory's inception was prompted by the Air Defense Systems Engineering Committee's 1950 report, which concluded that the United States was unprepared for the threat of an air attack. The U.S. Air Force suggested that MIT could provide the research needed to develop an air defense that could detect, identify, and intercept hostile aircraft. James R. Killian, then president of MIT, was not eager for MIT to become involved in air defense. He asked the Air Force if MIT could first conduct a study to evaluate the need for a new laboratory. Killian's proposal was approved, and a study named Project Charles was carried out between February and August 1951. The Semi-Automatic Ground Environment (SAGE) air defense system marks the beginning of MIT Lincoln Laboratory's history of developing innovative technology; the system was conceived to meet the challenge of providing air defense for the continental United States. SAGE was designed to collect, analyze, and finally relay data from a multitude of radars; the key to this system was a computer that could perform reliably in real time. MIT's Whirlwind computer, built in the 1940s, looked to be a candidate for the system. However, the Whirlwind was not reliable or fast enough for the processing needed for analyzing data coming in from dozens of, perhaps even 100, radars. The magnetic-core memory revolutionized computing: computers became machines that were not just large and fast calculators. Industry followed this development closely, adopting the magnetic-core memory that expanded the capabilities of computers.
Lincoln Laboratory quickly established a reputation for pioneering advanced electronics in air defense systems. Many of the technical developments that later evolved into improved systems for the airborne detection and tracking of aircraft and ground vehicles have formed the basis for current research. Lincoln Laboratory conducts research and development pertinent to national security on behalf of the military services and the Office of the Secretary of Defense. Projects focus on the development and prototyping of new technologies and capabilities; program activities extend from fundamental investigations, through simulation and analysis, to design and field testing of prototype systems. Emphasis is placed on transitioning technology to industry. The dissemination of information to the government, academia, and industry is a principal focus of Lincoln Laboratory's technical mission; wide dissemination is achieved through annual technical workshops and seminars. Current news about laboratory technical milestones is featured on the laboratory's website. MIT Lincoln Laboratory maintains a strong relationship with the MIT campus. Approximately 1700 technical staff members work on research and prototype building. The technical staff come from a broad range of scientific and engineering fields, with electrical engineering, physics, computer science and mathematics being among the most prevalent. Two-thirds of the staff hold advanced degrees, and 60% of those degrees are at the doctoral level.
22.
Turing machine gallery
–
The following article is a supplement to the article Turing machine. The Turing machine shown here consists of a paper tape that can be erased as well as written with a tally mark. Perhaps the TABLE is made out of a read-only paper tape reader. Turing's biographer Andrew Hodges has written that Turing as a child liked typewriters; a "miraculous machine", a mechanical process which could work on Hilbert's decision problem, had been suggested by G. H. Hardy, one of Turing's teachers. Davis says that Turing built a binary multiplier out of electromechanical relays. As noted in the history section of the article algorithm, punched or printed paper tape and punched paper cards were commonplace in the 1930s. Boolos and Jeffrey note that being in one state or another might be a matter of having one or another cog of a certain gear uppermost, and that we are not concerned with the mechanics or the electronics of the matter. Perhaps the simplest way to picture the thing is quite crudely: inside the box there is a man, and the man has a list of m instructions written down on a piece of paper. Post's formulation was the first of its type to be published. Both descriptions, Post's and Boolos and Jeffrey's, use simpler 4-tuples rather than 5-tuples to define the m-configurations of their processes. This model was suggested by Stone: "Let us adopt the point of view that a computer is a robot that will perform any task that can be described as a sequence of instructions." Stone uses the robot to develop his notion of algorithm. See the main article Turing machine for references.
23.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time, and in a well-defined formal language, for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals. An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. But humans can do something equally useful in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability.
That notion is central for explaining how formal systems come into being, starting from a set of axioms. In logic, the time that an algorithm requires to complete cannot be measured. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term. Algorithms are essential to the way computers process data; thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute. According to Gurevich, Turing's informal argument in favor of his thesis justifies a stronger thesis; according to Savage, an algorithm is a computational process defined by a Turing machine. Typically, when an algorithm is associated with processing information, data can be read from an input source, written to an output device, and stored for further processing. Stored data are regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures. For such a computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be dealt with, case by case.
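A concrete instance of an algorithm in this sense, a finite list of well-defined instructions that is guaranteed to halt, is Euclid's algorithm for the greatest common divisor; the short Python rendering below is our illustration, not an example drawn from the source.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: gcd of two non-negative integers."""
    while b != 0:
        a, b = b, a % b  # the remainder strictly decreases, so the loop terminates
    return a

print(gcd(48, 18))  # -> 6
```

Each step is effective (a single remainder computation), the instruction list is finite, and a simple measure argument (the second argument strictly decreases) shows the procedure always stops, which is exactly what distinguishes an algorithm from an arbitrary program.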
24.
Stack machine
–
In computer science, computer engineering and programming language implementations, a stack machine is a type of computer. In some cases, the term refers to a software scheme that simulates a stack machine. The main difference from other computers is that most of its instructions operate on a stack of numbers rather than numbers held in registers. A stack computer is programmed with a reverse Polish notation instruction set. Most computer systems implement a stack in some form to pass parameters and link to subroutines; this does not make these computers stack machines. The common alternatives to stack machines are register machines, in which each instruction explicitly names specific registers for its operands and result. A stack machine is a computer that uses a last-in, first-out stack to hold short-lived temporary values. Most of its instructions assume that operands will be taken from the stack. For a typical instruction like Add, the computer takes both operands from the topmost values of the stack and replaces those two values with the sum, calculated when it performs the Add instruction: the instruction's operands are popped off the stack, and its result is then pushed back onto the stack, ready for the next instruction. Most stack instructions have only an opcode commanding an operation, with no additional fields to identify a constant. The stack easily holds more than two inputs or more than one result, so a richer set of operations can be computed. Integer constant operands are often pushed by separate Load Immediate instructions; memory is often accessed by separate Load or Store instructions containing a memory address or calculating the address from values in the stack. For speed, a stack machine often implements some part of its stack with registers: to execute quickly, the operands of the arithmetic logic unit may be the top two registers of the stack, and the result from the ALU is stored in the top register of the stack.
Some stack machines have a stack of limited size, implemented as a register file; the ALU accesses this with an index. Some machines have a stack of unlimited size, implemented as an array in RAM accessed by a top-of-stack address register. This is slower, but the number of flip-flops is smaller, making a less-expensive, more compact CPU. Its topmost N values may be cached for speed. A few machines have both an expression stack in memory and a separate register stack; in this case, software, or an interrupt, may move data between them. The instruction set carries out most ALU actions with postfix operations that work only on the expression stack, not on data registers or main memory cells. This can be convenient for executing high-level languages, because most arithmetic expressions can be easily translated into postfix notation. In contrast, register machines hold temporary values in a small, fast array of registers; accumulator machines have only one general-purpose register; belt machines use a FIFO queue to hold temporary values; memory-to-memory machines have no temporary registers usable by a programmer.
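The pop-operands/push-result discipline described above is easy to demonstrate with a small reverse-Polish evaluator; the Python sketch below simulates a stack machine in software (one of the senses mentioned at the start of this section) and is an illustration, not a description of any particular hardware.

```python
def eval_rpn(tokens):
    """Evaluate a reverse-Polish expression the way a stack machine would."""
    stack = []
    for tok in tokens:
        if tok in ('+', '-', '*'):
            b = stack.pop()          # top of stack is the right operand
            a = stack.pop()
            stack.append({'+': a + b, '-': a - b, '*': a * b}[tok])
        else:
            stack.append(int(tok))   # like a Load Immediate: push a constant
    return stack.pop()

# (2 + 3) * 4 in reverse Polish notation:
print(eval_rpn("2 3 + 4 *".split()))  # -> 20
```

Note that each operator token carries no operand fields at all, mirroring the point that most stack instructions consist of an opcode alone.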
25.
Alan Turing
–
Alan Mathison Turing OBE FRS was an English computer scientist, mathematician, logician, cryptanalyst and theoretical biologist. Turing is widely considered to be the father of computer science. During the Second World War, Turing worked for the Government Code and Cypher School at Bletchley Park; for a time he led Hut 8, the section responsible for German naval cryptanalysis. After the war, he worked at the National Physical Laboratory. He wrote a paper on the chemical basis of morphogenesis and predicted oscillating chemical reactions such as the Belousov–Zhabotinsky reaction, first observed in the 1960s. Turing was prosecuted in 1952 for homosexual acts, when, by the Labouchere Amendment, "gross indecency" was still criminal in the UK. He accepted chemical castration treatment, with DES, as an alternative to prison. Turing died in 1954, 16 days before his 42nd birthday; an inquest determined his death as suicide, but it has been noted that the known evidence is also consistent with accidental poisoning. In 2009, following an Internet campaign, British Prime Minister Gordon Brown made a public apology on behalf of the British government for "the appalling way he was treated". Queen Elizabeth II granted him a pardon in 2013. The "Alan Turing law" is now an informal term for a 2017 law in the United Kingdom that retroactively pardons men cautioned or convicted under historical legislation that outlawed homosexual acts. Turing's father, Julius, was the son of a clergyman, the Rev. John Robert Turing, from a Scottish family of merchants that had been based in the Netherlands. Turing's mother, Julius's wife, was Ethel Sara, daughter of Edward Waller Stoney. The Stoneys were a Protestant Anglo-Irish gentry family from both County Tipperary and County Longford, while Ethel herself had spent much of her childhood in County Clare. Julius's work with the ICS brought the family to British India. Turing had an elder brother, John.
At Hastings, Turing stayed at Baston Lodge, Upper Maze Hill, St Leonards-on-Sea. Very early in life, Turing showed signs of the genius that he was later to display prominently. His parents purchased a house in Guildford in 1927, and Turing lived there during school holidays; the location is also marked with a blue plaque. Turing's parents enrolled him at St Michael's, a day school at 20 Charles Road, St Leonards-on-Sea, where the headmistress recognised his talent early on, as did many of his subsequent educators. From January 1922 to 1926, Turing was educated at Hazelhurst Preparatory School; in 1926, at the age of 13, he went on to Sherborne School, an independent school in the market town of Sherborne in Dorset. Turing's natural inclination towards mathematics and science did not earn him respect from some of the teachers at Sherborne, and his headmaster wrote to his parents, "I hope he will not fall between two stools. If he is to stay at school, he must aim at becoming educated."
26.
Hal Abelson
–
Harold "Hal" Abelson is a Professor of Electrical Engineering and Computer Science at MIT, a fellow of the IEEE, and a founding director of both Creative Commons and the Free Software Foundation. He directed the first implementation of Logo for the Apple II, which made the language widely available on personal computers beginning in 1981. In March 2015, a copy of Abelson's 1969 implementation of Turtle graphics was sold at The Algorithm Auction. He is the co-author of the book App Inventor with David Wolber, Ellen Spertus, and Liz Looney, published by O'Reilly Media in 2011. Abelson holds an AB degree from Princeton University and obtained a PhD degree in mathematics from MIT under the tutelage of mathematician Dennis Sullivan. Abelson has a longstanding interest in using computation as a conceptual framework in teaching. Abelson and Sussman cooperate in codirecting the MIT Project on Mathematics and Computation, the goal of which is to create better computational tools for scientists and engineers. Even with powerful computers, exploring complex physical systems still requires substantial human effort and human judgement to prepare simulations. Programs such as these could form the basis for intelligent scientific instruments that monitor physical systems based upon high-level behavioural descriptions; at the same time, they incorporate computational formulations of scientific knowledge that can form the foundations of better ways to teach science and engineering. Abelson and Sussman have also been a part of the Free Software Movement, and the MIT OpenCourseWare project was spearheaded by Abelson and other MIT faculty.
In January 2013, open access activist Aaron Swartz committed suicide. He had previously been arrested near MIT and was facing up to 35 years' imprisonment for the alleged crime of downloading JSTOR articles through MIT's open access campus network. In response, MIT appointed professor Hal Abelson to lead an investigation of the school's choices. The report was delivered on July 26, 2013; it concluded that MIT did nothing legally wrong, but recommended that MIT consider changing some of its internal policies. Abelson is also a director of Creative Commons and Public Knowledge, and he received the Booth Education Award given by the IEEE Computer Society, cited for his contributions to pedagogy. His books include Structure and Interpretation of Computer Programs, Cambridge, Mass.: MIT Press, second edition 1996; Turtle Geometry: The Computer as a Medium for Exploring Mathematics, ISBN 978-0-262-01063-4; and, with Harry R. Lewis, Blown to Bits: Your Life, Liberty, and Happiness After the Digital Explosion, Upper Saddle River, NJ: Addison-Wesley, 2008.