1.
Binary number
–
The base-2 system is a positional notation with a radix of 2. Because of its straightforward implementation in digital electronic circuitry using logic gates, it is used internally by almost all modern computers. Each digit is referred to as a bit. The modern binary number system was devised by Gottfried Leibniz in 1679 and appears in his article Explication de l'Arithmétique Binaire. Systems related to binary numbers appeared earlier in multiple cultures, including ancient Egypt, China, and India; Leibniz was specifically inspired by the Chinese I Ching. The scribes of ancient Egypt used two different systems for their fractions, Egyptian fractions and Horus-Eye fractions, and the method used for ancient Egyptian multiplication is also closely related to binary numbers. This method can be seen in use, for instance, in the Rhind Mathematical Papyrus. The I Ching dates from the 9th century BC in China. The binary notation in the I Ching is used to interpret its quaternary divination technique and is based on the Taoist duality of yin and yang. Eight trigrams and a set of 64 hexagrams, analogous to three-bit and six-bit binary numerals, were in use at least as early as the Zhou Dynasty of ancient China. The Song Dynasty scholar Shao Yong rearranged the hexagrams in a format that resembles modern binary numbers. The Indian scholar Pingala developed a binary system for describing prosody, using binary numbers in the form of short and long syllables; Pingala's Hindu classic titled Chandaḥśāstra describes the formation of a matrix in order to give a unique value to each meter. The binary representations in Pingala's system increase towards the right. The residents of the island of Mangareva in French Polynesia were using a hybrid binary-decimal system before 1450. Slit drums with binary tones are used to encode messages across Africa, and sets of binary combinations similar to the I Ching have also been used in traditional African divination systems such as Ifá, as well as in medieval Western geomancy. The base-2 system used in geomancy had long been applied in sub-Saharan Africa. Leibniz's system uses 0 and 1, like the modern binary numeral system. Leibniz was first introduced to the I Ching through his contact with the French Jesuit Joachim Bouvet, who visited China in 1685 as a missionary. Leibniz saw the I Ching hexagrams as an affirmation of the universality of his own beliefs as a Christian. Binary numerals were central to Leibniz's theology; he believed that binary numbers were symbolic of the Christian idea of creatio ex nihilo, or creation out of nothing: a concept that "is not easy to impart to the pagans, is the creation ex nihilo through God's almighty power." In 1854, British mathematician George Boole published a paper detailing an algebraic system of logic that would become known as Boolean algebra
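As a brief illustration of the positional principle described above (our own sketch, not from the article; the helper binary_value is a hypothetical name), a bit string can be evaluated by its place values in a few lines of Python:

```python
# Evaluate a bit string positionally: reading left to right, each step
# multiplies the running value by the radix (2) and adds the next digit.
def binary_value(bits: str) -> int:
    value = 0
    for digit in bits:
        value = value * 2 + int(digit)
    return value

assert binary_value("1011") == 11              # 1*8 + 0*4 + 1*2 + 1*1
assert binary_value("1011") == int("1011", 2)  # matches Python's built-in parser
print(bin(11))                                 # '0b1011', the reverse direction
```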
2.
Signed zero
–
Signed zero is zero with an associated sign. In ordinary arithmetic, the number 0 does not have a sign, so that −0, +0, and 0 are identical. However, in computing, some number representations allow for the existence of two zeros; this occurs in the sign-and-magnitude and ones' complement signed number representations for integers, and in most floating-point number representations. The number 0 is usually encoded as +0, but can be represented by either +0 or −0. The IEEE 754 standard for floating-point arithmetic requires both +0 and −0. Real arithmetic with signed zeros can be considered a variant of the extended real number line such that 1/−0 = −∞ and 1/+0 = +∞; division is undefined only for ±0/±0. Negatively signed zero echoes the mathematical concept of approaching 0 from below as a one-sided limit. The notation −0 may be used informally to denote a negative number that has been rounded to zero. The concept of negative zero also has some theoretical applications in statistical mechanics. On the other hand, the concept of signed zero runs contrary to the assumption made in most mathematical fields that negative zero is the same thing as zero. The widely used two's complement encoding does not allow a negative zero. In a 1+7-bit sign-and-magnitude representation for integers, negative zero is represented by the bit string 10000000; in an 8-bit ones' complement representation, negative zero is represented by the bit string 11111111. In all three encodings, positive zero is represented by 00000000. In IEEE 754 binary floating-point numbers, zero values are represented by the biased exponent and significand both being zero; negative zero has the sign bit set to one. One may obtain negative zero as the result of certain computations, for instance as the result of arithmetic underflow on a negative number, or of −1.0 × 0.0. The IEEE 754 floating-point standard specifies the behavior of positive zero and negative zero under various operations; the outcome may depend on the current IEEE rounding mode settings. In systems that include both signed and unsigned zeros, the notation 0+ and 0− is sometimes used for signed zeros. Addition and multiplication are commutative, but there are special rules that have to be followed, so the usual rules for algebraic simplification may not apply; however, 0 + x can be replaced by x with rounding to nearest (except when x is −0). When an operation signals an IEEE exception, an exception handler is called if one is enabled for the corresponding flag. According to the IEEE 754 standard, negative zero and positive zero should compare as equal with the usual comparison operators, like the == operator of C
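These rules can be observed directly. Below is a small illustrative Python sketch (our own, not from the standard's text); note that Python traps float division by zero with ZeroDivisionError rather than returning the infinities that 1/+0 = +∞ and 1/−0 = −∞ would give in a non-trapping IEEE 754 implementation, so the sign of a zero is shown via math.copysign:

```python
import math

pos_zero = 0.0
neg_zero = -1.0 * 0.0            # one way to obtain a negative zero

# IEEE 754 requires the two zeros to compare equal...
assert pos_zero == neg_zero

# ...yet they are distinguishable through the sign bit.
assert math.copysign(1.0, pos_zero) == 1.0
assert math.copysign(1.0, neg_zero) == -1.0
assert str(neg_zero) == "-0.0"
```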
3.
Numeral system
–
A numeral system is a writing system for expressing numbers; that is, a mathematical notation for representing numbers of a given set, using digits or other symbols in a consistent manner. It can be seen as the context that allows the symbols 11 to be interpreted as the binary symbol for three, the decimal symbol for eleven, or a symbol for other numbers in different bases. The number the numeral represents is called its value. Ideally, a numeral system will: represent a useful set of numbers; give every number represented a unique representation (or at least a standard representation); and reflect the algebraic and arithmetic structure of the numbers. For example, the usual decimal representation of whole numbers gives every nonzero whole number a unique representation as a finite sequence of digits. However, when decimal representation is used for rational or real numbers, a number may have many representations, for example 2.31, 2.310, 2.3100, etc., all of which have the same meaning except in some scientific contexts where greater precision is implied by a larger number of figures. Numeral systems are sometimes called number systems, but that name can also refer to systems of numbers, such as the real numbers; such systems are, however, not the topic of this article. The most commonly used system of numerals is the Hindu–Arabic numeral system; two Indian mathematicians are credited with developing it. Aryabhata of Kusumapura developed the place-value notation in the 5th century, and a century later Brahmagupta introduced the symbol for zero. The numeral system and the zero concept, developed by the Hindus in India, slowly spread to other surrounding countries due to their commercial and military activities with India. The Arabs adopted and modified it; even today, the Arabs call the numerals which they use Rakam Al-Hind, or the Hindu numeral system. The Arabs translated Hindu texts on numerology and spread them to the Western world due to their trade links with them. The Western world modified them and called them the Arabic numerals; hence the current Western numeral system is the modified version of the Hindu numeral system developed in India. It also exhibits a great similarity to the Sanskrit–Devanagari notation, which is still used in India. The simplest numeral system is the unary numeral system, in which every natural number is represented by a corresponding number of symbols. If the symbol / is chosen, for example, then the number seven would be represented by ///////. Tally marks represent one such system still in common use. The unary system is only useful for small numbers, although it plays an important role in theoretical computer science; Elias gamma coding, which is commonly used in data compression, expresses arbitrary-sized numbers by using unary to indicate the length of a binary numeral. The unary notation can be abbreviated by introducing different symbols for certain new values. The ancient Egyptian numeral system was of this type, and the Roman numeral system was a modification of this idea
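To make the point about context concrete, this short illustrative Python snippet (ours, not the article's) interprets the numeral "11" under several bases and builds a unary numeral for seven:

```python
# The numeral "11" has no fixed value until a base supplies the context.
for base in (2, 8, 10, 16):
    print(f'"11" in base {base:2d} =', int("11", base))
# base 2 -> 3, base 8 -> 9, base 10 -> 11, base 16 -> 17

# A unary numeral for seven, with "/" as the single symbol, as in the text.
seven = "/" * 7
assert seven == "///////"
```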
4.
Bitwise operation
–
In digital computer programming, a bitwise operation operates on one or more bit patterns or binary numerals at the level of their individual bits. It is a fast, simple action directly supported by the processor. On simple low-cost processors, typically, bitwise operations are substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition. In the explanations below, any indication of a bit's position is counted from the right side. For example, the binary value 0001 has zeroes at every position but the first (rightmost) one. The bitwise NOT, or complement, is a unary operation that performs logical negation on each bit, forming the ones' complement of the given binary value: bits that are 0 become 1, and those that are 1 become 0. For example: NOT 0111 = 1000; NOT 10101011 = 01010100. The bitwise complement is equal to the two's complement of the value minus one. If two's complement arithmetic is used, then NOT x = −x − 1. For unsigned integers, the bitwise complement of a number is the mirror reflection of the number across the half-way point of the unsigned integer's range. A simple but illustrative use is to invert a grayscale image where each pixel is stored as an unsigned integer. A bitwise AND takes two equal-length binary representations and performs the logical AND operation on each pair of corresponding bits: if both bits in the compared position are 1, the bit in the resulting binary representation is 1; otherwise, the result is 0. For example: 0101 AND 0011 = 0001. The operation may be used to determine whether a particular bit is set or clear; this is often called bit masking. The bitwise AND may also be used to clear selected bits of a register in which each bit represents an individual Boolean state. This technique is an efficient way to store a number of Boolean values using as little memory as possible. For example, 0110 can be considered a set of four flags, where the first and fourth flags are clear, and the second and third flags are set. Using the example above, 0110 AND 0001 = 0000; because 6 AND 1 is zero, 6 is divisible by two and therefore even. A bitwise OR takes two bit patterns of equal length and performs the logical inclusive OR operation on each pair of corresponding bits. The result in each position is 0 if both bits are 0, while otherwise the result is 1. For example: 0101 OR 0011 = 0111. The bitwise OR may be used to set to 1 the selected bits of the register described above. A bitwise XOR takes two bit patterns of equal length and performs the logical exclusive OR operation on each pair of corresponding bits. The result in each position is 1 if only the first bit is 1 or only the second bit is 1; that is, the comparison of two bits yields 1 if the two bits are different, and 0 if they are the same. For example: 0101 XOR 0011 = 0110. The bitwise XOR may be used to invert selected bits in a register; any bit may be toggled by XORing it with 1. Assembly language programmers sometimes use XOR as a short-cut to setting the value of a register to zero: performing XOR on a value against itself always yields zero, and on many architectures this operation requires fewer clock cycles and less memory than loading a zero value. In bit-shift operations, the digits are moved, or shifted, to the left or right. In an arithmetic shift, the bits that are shifted out of either end are discarded
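The operations above map directly onto Python's bitwise operators; the following sketch reproduces the section's examples (since Python integers are unbounded, NOT is taken within a fixed width by masking):

```python
MASK4 = 0b1111                                  # work within four bits

# NOT: Python's ~ gives the ones' complement; masking keeps the width.
assert ~0b0111 & MASK4 == 0b1000                # NOT 0111 = 1000
assert ~0b10101011 & 0b11111111 == 0b01010100   # NOT 10101011 = 01010100

assert 0b0101 & 0b0011 == 0b0001                # AND: bit masking / testing
assert 0b0101 | 0b0011 == 0b0111                # OR: setting selected bits
assert 0b0101 ^ 0b0011 == 0b0110                # XOR: toggling selected bits

# Evenness test from the text: 6 AND 1 is zero, so 6 is even.
assert 6 & 1 == 0

# XOR of any value with itself yields zero (the register-clearing idiom).
x = 0b1010
assert x ^ x == 0
```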
5.
CDC 6600
–
The CDC 6600 was the flagship mainframe supercomputer of the 6000 series of computer systems manufactured by Control Data Corporation. It was notable in several respects: exceptionally fast for its day, it anticipated the RISC design philosophy and employed ones' complement representation of integers. The University of Texas at Austin had one delivered and installed underground on its campus, tucked into a hillside with one side exposed, for its computer science department. The CDC 6600 is generally considered to be the first successful supercomputer, outperforming the previous fastest machines by about a factor of three. With performance of up to three megaFLOPS, the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600. A CDC 6600 is on display at the Computer History Museum in Mountain View, California, and the only running CDC 6000 series machine has been restored by Living Computers: Museum + Labs. CDC's first products were based on the machines designed at ERA; after an experimental machine known as the Little Character, they delivered the CDC 1604, one of the first commercial transistor-based computers and one of the fastest machines on the market. Management was delighted and made plans for a new series of machines that were tailored to business use; they would include instructions for character handling. Cray was not interested in such a project and set himself the goal of producing a new machine that would be 50 times faster than the 1604, taking his core team to new offices near the original CDC headquarters. After much experimentation, they found that there was simply no way the germanium-based transistors could be run much faster than those used in the 1604. The business machine that management had originally wanted was now taking form as the CDC 3000 series. During this period, CDC grew from a startup into a large company, and Cray became increasingly frustrated with what he saw as ridiculous management requirements. Things became considerably more tense in 1962 when the new CDC 3600 started to near production quality and appeared to be exactly what management wanted. Cray eventually told CDC's CEO, William Norris, that something had to change, or he would leave the company. Norris felt he was too important to lose and gave Cray the green light to set up a new lab wherever he wanted. After a short search, Cray decided to return to his home town of Chippewa Falls, Wisconsin. Although this process introduced a fairly lengthy delay in the design of his new machine, once in the new lab, without management interference, work progressed quickly. By this time, the new transistors were becoming quite reliable, and modules built with them tended to work properly on the first try. Cray worked with Jim Thornton, who was the system architect and the hidden genius behind the 6600. More than 100 CDC 6600s were sold over the machine's lifetime; many of these went to various nuclear bomb-related labs, and quite a few found their way into university computing labs. Cray immediately turned his attention to its replacement, this time setting a goal of 10 times the performance of the 6600. The later CDC Cyber 70 and 170 computers were similar to the CDC 6600 in overall design and were nearly completely backwards compatible
6.
LINC
–
The LINC (Laboratory INstrument Computer) is a 12-bit, 2048-word computer. The LINC is considered the first minicomputer and a forerunner to the personal computer. Originally named the Linc, suggesting the project's origins at MIT's Lincoln Laboratory, it was renamed LINC after the project moved from the Lincoln Laboratory. The LINC was designed by Wesley A. Clark and Charles Molnar. The LINC and other MIT Group machines were designed at MIT and eventually built by Digital Equipment Corporation and Spear Inc. of Waltham, Massachusetts. The LINC sold for more than $40,000 at the time; a typical configuration included an enclosed 6×20 rack, four boxes holding tape drives, a small display, a control panel, and a keyboard. Although the LINC's instruction set was small, it was larger than the tiny PDP-8 instruction set. The LINC interfaced well with laboratory experiments: analog inputs and outputs were part of the basic design. It was designed in 1962 by Charles Molnar and Wesley Clark at Lincoln Laboratory, Massachusetts, for NIH researchers. The LINC's design was literally in the public domain, perhaps making it unique in the history of computers. The number of LINCs and who built them is a subject of debate in the 12-bit-word community. One account states that a dozen LINC computers were assembled by their eventual biologist users in a 1963 summer workshop at MIT; Digital Equipment Corporation and Spear Inc. of Waltham, Massachusetts, manufactured them commercially. DEC's pioneer C. Gordon Bell states that the LINC project began in 1961, with first delivery in March 1962, and that a total of 50 were built, most at Lincoln Labs, housing the desktop instruments in four wooden racks. The first LINC included two oscilloscope displays. Twenty-one were sold by DEC at $43,600, delivered in the Production Model design. The standard program development software was designed by Mary Allen Wilkes. The LINC control panel was used for single-stepping through programs and for program debugging: execution could be stopped when the program counter matched a set of switches, and another function allowed execution to be stopped when a particular address was accessed. The single-step and stop functions could be automatically repeated, and the repetition rate could be varied over four orders of magnitude by means of an analog knob. Running a program at one step per second and gradually accelerating it to full speed provided an extremely dramatic way to experience and appreciate the speed of the computer. A noteworthy feature of the LINC was the LINCtape: it was a fundamental part of the machine design, not an optional peripheral, and the machine's OS relied on it. The LINCtape can be compared to a linear diskette with a slow seek time. The magnetic tape drives on large machines of the day stored large quantities of data and took minutes to spool from end to end, but could not reliably update blocks of data in place. The LINCtape was formatted in fixed-size blocks and was used to hold a directory; a single hardware instruction could seek and then read or write multiple tape blocks, all in one operation
7.
PDP-1
–
The PDP-1 (Programmed Data Processor-1) was the first computer in Digital Equipment Corporation's PDP series and was first produced in 1959. It is famous for being the computer most important in the creation of hacker culture at MIT, BBN, and elsewhere. The PDP-1 was also the original hardware for playing history's first video game on a minicomputer, Steve Russell's Spacewar! The PDP-1 used an 18-bit word size and had 4096 words as standard main memory; signed numbers were represented in ones' complement. The PDP-1 had computing power roughly equivalent to a 1996 pocket organizer. The PDP-1 used 2,700 transistors and 3,000 diodes. It was built mostly of DEC 1000-series System Building Blocks, using micro-alloy and micro-alloy diffused transistors with a rated switching speed of 5 MHz. The System Building Blocks were packaged into several 19-inch racks, and the racks were themselves packaged into a large mainframe case with a hexagonal control panel containing switches. Above the control panel was the system's standard input/output solution, a punched tape reader and writer. The design of the PDP-1 was based on the pioneering TX-0 computer designed and built at MIT's Lincoln Laboratory; Benjamin Gurley was the lead engineer on the project. After building prototype models in December 1959, DEC delivered the first PDP-1 to Bolt, Beranek and Newman in November 1960, and it was formally accepted the next April. In 1961, DEC donated the engineering prototype PDP-1 to MIT, where it was placed next to its ancestor, the TX-0 computer. In this setting, the PDP-1 quickly replaced the TX-0 as the favorite machine among the hacker culture. Perhaps best known among the software written for it is one of the first computerized video games, but the list also includes the first text editor, word processor, interactive debugger, the first credible computer chess program, and some of the earliest computerized music. The PDP-1 sold in basic form for US$120,000. BBN's system was followed by orders from Lawrence Livermore and Atomic Energy of Canada. All of these machines were still being used in 1970. MIT's example was donated to The Computer Museum, Boston, where a program on paper tape was still tucked into the case. AECL's computer was sent to Science North, but was later scrapped. The PDP-1 used punched paper tape as its primary storage medium
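Since the text notes that the PDP-1 represented signed numbers in ones' complement, here is a hypothetical Python sketch of how an 18-bit ones' complement encoding works (only the word size comes from the text; the helper ones_complement_encode is our own illustration):

```python
WORD_BITS = 18                       # the PDP-1's word size, per the text
MASK = (1 << WORD_BITS) - 1

def ones_complement_encode(n: int) -> int:
    """Encode a small signed integer as an 18-bit ones' complement word."""
    if n >= 0:
        return n & MASK
    return ~(-n) & MASK              # negation = inverting every bit

assert ones_complement_encode(5) == 0b000000000000000101
assert ones_complement_encode(-5) == 0b111111111111111010
# Ones' complement has two zero patterns: all zeros (+0) and all ones (-0).
assert ones_complement_encode(0) == 0
assert ~0 & MASK == MASK             # the "minus zero" bit pattern
```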
8.
UNIVAC 1100/2200 series
–
The UNIVAC 1100/2200 series is a series of compatible 36-bit computer systems, beginning with the UNIVAC 1107 in 1962, initially made by Sperry Rand. The series continues to be supported today by Unisys Corporation as the ClearPath Dorado Series. The solid-state 1107 model number was in the same sequence as the earlier vacuum-tube computers, but the early computers were not compatible with their solid-state successors. The model numbers are based on the number system. The 128 registers of the general register stack map to the current data space in main storage starting at memory address zero. These registers include both user and executive copies of the A, X, R, and J registers. There are 15 index registers, 16 accumulators, and 15 special-function user registers; the 4 J registers and 3 staging registers are uses of some of the special-function R registers. One interesting feature is that the last four index registers and the first four accumulators overlap; this also results in four unassigned accumulators that can only be accessed by their memory address. Prior to the UNIVAC 1107, UNIVAC produced several vacuum-tube-based machines with model numbers from 1101 to 1105. These machines had different architectures and word sizes and were not compatible with each other. They all used vacuum tubes, and many used drum memory as their main memory. Some were designed by Engineering Research Associates, which was later purchased by and merged with the UNIVAC company. The UNIVAC 1101, or ERA 1101, was a system designed by ERA. It was developed under Navy Project 13, which is 1101 in binary. The UNIVAC 1102 or ERA 1102 was designed by Engineering Research Associates for the United States Air Force. The 36-bit UNIVAC 1103 was introduced in 1953, and an upgraded version, the 1103A, was released in 1956; this was the first commercial computer to use core memory instead of the Williams tube. The UNIVAC 1105 was the successor to the 1103A and was introduced in 1958. The UNIVAC 1104 system was a 30-bit version of the 1103, built for Westinghouse Electric in 1957 for use in the BOMARC missile program; however, by the time the BOMARC was deployed in the 1960s, a more modern computer had replaced the UNIVAC 1104. The solid-state machines of the series had a common architecture and word size. They all used transistorized electronics and integrated circuits, and early machines used core memory until that was replaced with semiconductor memory in 1975. The UNIVAC 1107 was the first solid-state member of Sperry Univac's UNIVAC 1100 series of computers; it was also known as the Thin-Film Computer because of its use of thin-film memory for its register storage. It represented a change of architecture: unlike previous models, it was not a strict two-address machine
9.
IEEE 754
–
The IEEE Standard for Floating-Point Arithmetic is a technical standard for floating-point computation established in 1985 by the Institute of Electrical and Electronics Engineers. The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably. Many hardware floating-point units now use the IEEE 754 standard. The international standard ISO/IEC/IEEE 60559:2011 has been approved for adoption through JTC1/SC25 under the ISO/IEEE PSDO Agreement and published. The binary formats in the original standard are included in the new standard along with three new basic formats, one binary and two decimal. To conform to the current standard, an implementation must implement at least one of the basic formats as both an arithmetic format and an interchange format. As of September 2015, the standard is being revised to incorporate clarifications and errata. An IEEE 754 format is a set of representations of numerical values and symbols; a format may also include how the set is encoded. A format comprises: finite numbers, which may be either base 2 or base 10; two infinities, +∞ and −∞; and two kinds of NaN, a quiet NaN and a signaling NaN. Each finite number is described by three integers: s = a sign, c = a significand, q = an exponent. The numerical value of a finite number is (−1)^s × c × b^q, where b is the base, also called the radix. For example, if the base is 10, the sign is 1 (indicating negative), the significand is 12345, and the exponent is −3, then the value of the number is −12.345. A NaN may carry a payload that is intended for diagnostic information indicating the source of the NaN; the sign of a NaN has no meaning, but it may be predictable in some circumstances. In the decimal32 format, for example, the significand has at most seven digits and the exponent q ranges from −101 to 90; hence the smallest non-zero positive number that can be represented is 1×10^−101 and the largest is 9999999×10^90. The numbers −b^(1−emax) and b^(1−emax) are the smallest (in magnitude) normal numbers; non-zero numbers between these smallest numbers are called subnormal numbers. Zero values are finite values with significand 0; these are signed zeros, and the sign bit specifies whether a zero is +0 or −0. Some numbers may have several representations in the model that has just been described: for instance, if b = 10 and p = 7, −12.345 can be represented by −12345×10^−3, −123450×10^−4, and −1234500×10^−5. However, for most operations, such as arithmetic operations, the result does not depend on the representation of the inputs. For the decimal formats, any representation is valid, and the set of these representations is called a cohort. When a result can have several representations, the standard specifies which member of the cohort is chosen. For the binary formats, the representation is made unique by choosing the smallest representable exponent. For numbers with an exponent in the normal range, the leading bit of the significand will always be 1. Consequently, the leading 1 bit can be implied rather than explicitly present in the memory encoding; this rule is called the leading bit convention, the implicit bit convention, or the hidden bit convention
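As a hedged illustration of the (−1)^s × c × b^q model for the binary case, the sketch below (our own, using Python's struct module and assuming the platform float is IEEE 754 binary64, as it is on virtually all modern systems) extracts the encoded fields and reconstructs the value, including the implicit leading bit:

```python
import struct

def decode_binary64(x: float):
    """Split a float (IEEE 754 binary64 on modern platforms) into fields."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                    # s: 1 bit
    biased_exp = (bits >> 52) & 0x7FF    # 11-bit biased exponent
    fraction = bits & ((1 << 52) - 1)    # 52 explicit significand bits
    return sign, biased_exp, fraction

sign, e, frac = decode_binary64(-12.345)
assert sign == 1                         # the number is negative

# Reconstruct the value as (-1)^s * c * 2^q with an integer significand c:
# restore the implicit leading 1 (hidden bit) and unbias the exponent.
c = (1 << 52) | frac
q = e - 1023 - 52
assert (-1) ** sign * c * 2.0 ** q == -12.345
```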
10.
Donald Knuth
–
Donald Ervin Knuth is an American computer scientist, mathematician, and professor emeritus at Stanford University. He is the author of the multi-volume work The Art of Computer Programming. He contributed to the development of the rigorous analysis of the computational complexity of algorithms and systematized formal mathematical techniques for it; in the process he also popularized the asymptotic notation. Knuth strongly opposes granting software patents, having expressed his opinion to the United States Patent and Trademark Office and European Patent Organization. Knuth was born in Milwaukee, Wisconsin, to German-Americans Ervin Henry Knuth and Louise Marie Bohning. His father had two jobs: running a small printing company and teaching bookkeeping at Milwaukee Lutheran High School. Donald, a student at Milwaukee Lutheran High School, received academic accolades there; for example, in eighth grade, he entered a contest to find the number of words that the letters in "Ziegler's Giant Bar" could be rearranged to create. Although the judges only had 2,500 words on their list, Donald found 4,500 words. As prizes, the school received a new television and enough candy bars for all of his schoolmates to eat. Knuth had a difficult time choosing physics over music as his major at Case Institute of Technology. He also joined the Beta Nu Chapter of the Theta Chi fraternity. While studying physics at the Case Institute of Technology, Knuth was introduced to the IBM 650, one of the early mainframes. After reading the manual, Knuth decided to rewrite the assembly and compiler code for the machine used in his school. In 1958, Knuth created a program to help his school's basketball team win their games: he assigned values to players in order to gauge their probability of getting points, a novel approach that Newsweek and CBS Evening News later reported on. Knuth was one of the editors of the Engineering and Science Review. In 1963, with mathematician Marshall Hall as his adviser, he earned a PhD in mathematics from the California Institute of Technology. After receiving his PhD, Knuth joined Caltech's faculty as an associate professor. He accepted a commission to write a book on computer programming language compilers, which he originally planned to publish as a single book. As Knuth developed his outline for the book, he concluded that he required six volumes; he published the first volume in 1968. Knuth then left this position to join the Stanford University faculty. Knuth is a writer as well as a computer scientist, and has been called the father of the analysis of algorithms. In the 1970s, Knuth described computer science as "a totally new field with no real identity. And the standard of available publications was not that high. A lot of the papers coming out were quite simply wrong. ... So one of my motivations was to put straight a story that had been very badly told." By 2013, the first three volumes and part one of volume four of his series had been published
11.
The Art of Computer Programming
–
The Art of Computer Programming is a comprehensive monograph written by Donald Knuth that covers many kinds of programming algorithms and their analysis. Knuth began the project, originally conceived as a single book with twelve chapters, in 1962. The first three volumes of what was then expected to be a seven-volume set were published in 1968, 1969, and 1973. The first installment of Volume 4 was published in 2005, and the hardback Volume 4A, combining Volume 4, Fascicles 0–4, was published in 2011. Additional fascicle installments of Volume 4 are planned for release approximately biannually. During his summer vacations, Knuth was hired by Burroughs to write compilers, earning more in his summer months than full professors did for an entire year. Such exploits made Knuth a topic of discussion among the mathematics department. Knuth started to write a book about compiler design in 1962, and soon realized that the scope of the book needed to be much larger. In June 1965, Knuth finished the first draft of what was planned to be a single volume of twelve chapters; this meant the book would be approximately 2,000 pages in length. The publisher was nervous about accepting such a project from a graduate student. At this point, Knuth received support from Richard S. Varga, who was the scientific adviser to the publisher. Varga was visiting Olga Taussky-Todd and John Todd at Caltech, and with Varga's enthusiastic endorsement, the publisher accepted Knuth's expanded plans. In its expanded version, the book would be published in seven volumes, each with just one or two chapters. Due to the growth in the material, the plan for Volume 4 has since expanded to include Volumes 4A, 4B, 4C, 4D, and possibly more. In 1976, Knuth prepared a second edition of Volume 2, requiring it to be typeset again, but the style of type used in the first edition was no longer available. In 1977, he decided to spend some time creating something more suitable; eight years later, he returned with TeX, which is currently used for all volumes. Another characteristic of the volumes is the variation in the difficulty of the exercises, which ranges from warm-up exercises to unsolved research problems. Knuth's dedication reads: "This series of books is affectionately dedicated to the Type 650 computer once installed at Case Institute of Technology, in remembrance of many pleasant evenings." All examples in the books use a language called MIX assembly language, which runs on the hypothetical MIX computer. Currently, the MIX computer is being replaced by the MMIX computer; software such as GNU MDK exists to provide emulation of the MIX architecture. Knuth considers the use of assembly language necessary for the speed and memory usage of algorithms to be judged. Covers of the third edition of Volume 1 quote Bill Gates as saying, "If you think you're a really good programmer... read (Knuth's) Art of Computer Programming... You should definitely send me a résumé if you can read the whole thing."