1.
Turing machine
–
Despite the model's simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm's logic. The machine operates on an infinite memory tape divided into discrete cells; it positions its head over a cell and reads the symbol there. The Turing machine was invented in 1936 by Alan Turing, who called it an a-machine (automatic machine). With this model, Turing was able to prove fundamental limitations on the power of mechanical computation. Turing completeness is the ability of a system of instructions to simulate a Turing machine. A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine capable of enumerating some arbitrary subset of valid strings of an alphabet. Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program; this is because the halting problem is unsolvable, which has major implications for the theoretical limits of computing. The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways; this is famously demonstrated through lambda calculus. A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine. The Church–Turing thesis states that Turing machines indeed capture the notion of effective methods in logic and mathematics. Studying their abstract properties yields many insights into computer science and complexity theory. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine.
Any symbol on the tape may therefore eventually have an innings. The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time. In the original article, Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly. If δ is not defined on the current state and the current tape symbol, then the machine halts. q0 ∈ Q is the initial state, and F ⊆ Q is the set of final or accepting states. The initial tape contents is said to be accepted by M if the machine eventually halts in a state from F. Anything that operates according to these specifications is a Turing machine. The 7-tuple for the 3-state busy beaver looks like this: Q = {A, B, C, HALT}; Γ = {0, 1}; b = 0 (the blank symbol); Σ = {1}; q0 = A; F = {HALT}; δ = see state-table below. Initially all tape cells are marked with 0. In the words of van Emde Boas (p. 6), the set-theoretical object provides only partial information on how the machine will behave and what its computations will look like. For instance, there will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
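The 7-tuple above can be sketched as a small simulator. This is an illustrative model, not Turing's own notation: the tape is a dictionary defaulting to the blank symbol 0, and the transition table is the standard 3-state busy beaver; the machine halts when it enters HALT or when δ is undefined for the current (state, symbol) pair.

```python
# Minimal Turing machine simulator for the 7-tuple described above.
# delta maps (state, scanned symbol) -> (symbol to write, head move, next state).
from collections import defaultdict

def run_turing_machine(delta, start, halt, max_steps=10_000):
    tape = defaultdict(int)          # blank symbol b = 0 on an unbounded tape
    head, state, steps = 0, start, 0
    while state != halt and steps < max_steps:
        key = (state, tape[head])
        if key not in delta:         # δ undefined: the machine halts
            break
        write, move, state = delta[key]
        tape[head] = write           # alter the scanned symbol
        head += move                 # move the tape one cell left or right
        steps += 1
    return tape, steps

# State table for the 3-state busy beaver (+1 = right, -1 = left).
delta = {
    ('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'C'),
    ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'B'),
    ('C', 0): (1, -1, 'B'), ('C', 1): (1, +1, 'HALT'),
}

tape, steps = run_turing_machine(delta, start='A', halt='HALT')
print(sum(tape.values()), steps)     # → 6 13 (six 1s written, then halt)
```

Running it shows the busy-beaver behavior the state table encodes: a handful of states suffices to write six 1s before halting.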

2.
Computer data storage
–
Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media used to retain digital data. It is a core function and fundamental component of computers. The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but larger and cheaper options farther away. In the von Neumann architecture, the CPU consists of two parts: the control unit and the arithmetic logic unit. The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result; it would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions; most modern computers are von Neumann machines. A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes with one byte per character. Data is encoded by assigning a bit pattern to each character or digit. By adding bits to each encoded unit, redundancy allows the computer to both detect errors in coded data and correct them based on mathematical algorithms.
A random bit flip is typically corrected upon detection. The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection; a detected error is then retried. Data compression methods allow in many cases representing a string of bits by a shorter bit string and reconstructing the original string when needed. This utilizes substantially less storage for many types of data at the cost of more computation; an analysis of the trade-off between the storage cost saving and the costs of related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data may be encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency. This traditional division of storage into primary, secondary, tertiary and off-line storage is also guided by cost per bit. In contemporary usage, memory is usually semiconductor read-write random-access memory, typically DRAM or other forms of fast but temporary storage.
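The CRC-based detect-and-retry scheme described above can be sketched in a few lines. This is a simplified illustration, not any particular storage controller's protocol: the sender appends a CRC-32 checksum (Python's built-in zlib.crc32) to the payload, and the receiver recomputes it to decide whether the frame must be retried.

```python
# Error detection with a cyclic redundancy check (CRC), as described above.
import zlib

def attach_crc(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 checksum to the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_crc(frame: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the stored one."""
    payload, stored = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == stored

frame = attach_crc(b"the works of Shakespeare")
print(check_crc(frame))                            # True: intact frame passes

corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]   # flip a single bit
print(check_crc(corrupted))                        # False: detected, then retried
```

Note that a plain CRC only detects errors; correcting a random bit flip, as mentioned above, requires a code with more redundancy, such as a Hamming code.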

3.
Field-programmable gate array
–
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence "field-programmable". The FPGA configuration is generally specified using a hardware description language (HDL). Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory. Contemporary field-programmable gate arrays have large resources of logic gates and RAM blocks to implement complex digital computations. As FPGA designs employ very fast I/Os and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time. Floor planning enables resource allocation within FPGAs to meet these time constraints. FPGAs can be used to implement any logical function that an ASIC could perform. Some FPGAs have analog features in addition to digital functions; fairly common are differential comparators on input pins designed to be connected to differential signaling channels. The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field; however, the programmable logic was hard-wired between logic gates. In the late 1980s, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates; Casselman was successful, and a patent related to the system was issued in 1992. Some of the foundational concepts and technologies for programmable logic arrays, gates, and logic blocks are founded in earlier patents. Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field-programmable gate array in 1985 – the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market.
The XC2064 had 64 configurable logic blocks (CLBs), each with two three-input lookup tables. More than 20 years later, Freeman was entered into the National Inventors Hall of Fame for his invention. Altera and Xilinx continued unchallenged and quickly grew from 1985 to the mid-1990s, when competitors sprouted up; by 1993, Actel was serving about 18 percent of the market. By 2010, Altera, Actel and Xilinx together represented approximately 77 percent of the FPGA market. The 1990s were an explosive period of time for FPGAs, both in sophistication and the volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking; by the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications. This work mirrors the architecture by Ron Perlof and Hana Potash of Burroughs Advanced Systems Group, which combined a reconfigurable CPU architecture on a chip called the SB24; that work was done in 1982. The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture.
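The three-input lookup tables mentioned above are what make a logic block "configurable": a k-input LUT stores 2^k configuration bits and can therefore realize any Boolean function of k inputs. A rough software model of a single 3-input LUT (not the XC2064's actual bitstream format, which is proprietary):

```python
# Model of a 3-input lookup table (LUT): 8 stored bits define the function.
def make_lut3(truth_table):
    """Configure a 3-input LUT from an 8-entry truth table.
    The inputs (a, b, c) index the table as the 3-bit number a<<2 | b<<1 | c."""
    assert len(truth_table) == 8
    def lut(a, b, c):
        return truth_table[(a << 2) | (b << 1) | c]
    return lut

# The same "hardware" configured as two different functions:
and3 = make_lut3([0, 0, 0, 0, 0, 0, 0, 1])   # 1 only when a = b = c = 1
xor3 = make_lut3([0, 1, 1, 0, 1, 0, 0, 1])   # parity of the three inputs

print(and3(1, 1, 1), xor3(1, 0, 1))          # → 1 0
```

Reprogramming the FPGA amounts to loading different truth-table bits into each LUT and rerouting the interconnect between blocks.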

4.
Reduced instruction set computer
–
A computer based on this strategy is a reduced instruction set computer, also called RISC. The opposing architecture is called complex instruction set computing (CISC). Although a number of systems from the 1960s and 70s have been identified as forerunners of RISC, the modern version of the design dates to the 1980s. In particular, two projects at Stanford University and the University of California, Berkeley are most associated with the popularization of this concept. Stanford's design would go on to be commercialized as the successful MIPS architecture, while Berkeley's RISC gave its name to the entire concept. Another success from this era was IBM's effort that eventually led to the Power Architecture. RISC families include DEC Alpha, AMD Am29000, ARC, ARM, Atmel AVR, Blackfin, Intel i860 and i960, MIPS, Motorola 88000, PA-RISC, Power, RISC-V, SuperH, and SPARC. In the 21st century, the use of ARM architecture processors in smart phones and tablet computers such as the iPad has provided a wide user base for RISC-based systems. A number of systems, going back to the 1960s, have been credited as the first RISC architecture, partly based on their use of the load/store approach. The term RISC was coined by David Patterson of the Berkeley RISC project. Michael J. Flynn views the first RISC system as the IBM 801 design, which was begun in 1975 by John Cocke and completed in 1980. The 801 was eventually produced in single-chip form as the ROMP in 1981. As the name implies, this CPU was designed for small tasks, and was also used in the IBM RT-PC in 1986. But the 801 inspired several research projects, including new ones at IBM that would eventually lead to the IBM POWER instruction set architecture. The most public RISC designs, however, were the results of university research programs run with funding from the DARPA VLSI Program. The VLSI Program, practically unknown today, led to a huge number of advances in chip design, fabrication, and even computer graphics.
The Berkeley RISC project started in 1980 under the direction of David Patterson. Berkeley RISC was based on gaining performance through the use of pipelining and an aggressive use of a technique known as register windowing. In a traditional CPU, one has a small number of registers; in a CPU with register windows, there is a huge number of registers, e.g. 128. The Berkeley RISC project delivered the RISC-I processor in 1982. Consisting of only 44,420 transistors, RISC-I had only 32 instructions, and yet completely outperformed any other single-chip design. They followed this up with the 40,760-transistor, 39-instruction RISC-II in 1983, which ran over three times as fast as RISC-I. The MIPS architecture grew out of a course by John L. Hennessy at Stanford University in 1981 and resulted in a functioning system in 1983. The MIPS approach emphasized an aggressive clock cycle and the use of the pipeline. The MIPS system was followed by the MIPS-X, and in 1984 Hennessy and his colleagues formed MIPS Computer Systems.
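The register-windowing idea above can be sketched as follows. This is an illustrative model only, not Berkeley's actual hardware: it assumes a window of 8 visible registers out of a 128-register physical file, with the window sliding by half its width on each call so that the overlapping registers pass arguments for free, instead of saving registers to memory.

```python
# Illustrative sketch of register windowing (assumed sizes, not RISC-I's).
WINDOW = 8       # registers visible to the running procedure at once
PHYSICAL = 128   # total physical registers, as in the text

class RegisterFile:
    def __init__(self):
        self.regs = [0] * PHYSICAL
        self.base = 0                   # where the current window starts

    def __getitem__(self, r):           # read visible register r
        return self.regs[self.base + r]

    def __setitem__(self, r, value):    # write visible register r
        self.regs[self.base + r] = value

    def call(self):                     # procedure call: slide the window
        self.base += WINDOW // 2        # half-window overlap passes arguments

    def ret(self):                      # return: slide the window back
        self.base -= WINDOW // 2

rf = RegisterFile()
rf[4] = 42        # caller puts an argument in an outgoing register
rf.call()
print(rf[0])      # → 42: the callee sees it without any memory traffic
rf.ret()
```

The design choice this illustrates: call and return become pointer arithmetic on the window base rather than a sequence of loads and stores, which is where the performance gain came from.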

5.
Pointer machine
–
In theoretical computer science a pointer machine is an atomistic abstract computational machine model akin to the random-access machine. Depending on the type, a pointer machine may be called a linking automaton, a KU-machine, an SMM, or an atomistic LISP machine. At least three major varieties exist in the literature: the Kolmogorov–Uspenskii (KU) model, the Knuth linking automaton, and the Schönhage Storage Modification Machine (SMM) model; the SMM seems to be the most common. From its read-only tape a pointer machine receives input (bounded symbol-sequences made of at least two symbols, e.g. {0, 1}), and it writes output symbol-sequences on an output write-only tape. To transform an input symbol-sequence to an output symbol-sequence, the machine is equipped with a program: a finite-state machine. Via its state machine the program reads the input symbols and operates on its storage structure, a collection of nodes interconnected by edges. Computation proceeds only by reading input symbols and by modifying and doing various tests on its storage structure, the pattern of nodes and pointers; information is in the storage structure. Both Gurevich and Ben-Amram list a number of very similar models of abstract machines. Its attractiveness as a model for complexity theory is questionable: its time measure is based on uniform time in a context where this measure is known to underestimate the true time complexity. However, Schönhage demonstrates potential uses for the model in his §6, and Gurevich wonders whether or not a parallel KU machine somewhat resembles the human brain. Schönhage's SMM model seems to be the most common and most accepted. It is quite unlike the register machine model and other common computational models, e.g. the tape-based Turing machine or the counter machine. The computer consists of an alphabet of input symbols and a directed graph; each node of the graph has one outgoing arrow labelled with each symbol.
One fixed node of the graph is identified as the start or active node. Each word over the alphabet names a path from the start node, and the path can in turn be identified with the resulting node, but this identification will change as the graph changes during the computation. The machine can receive instructions which change the layout of the graph. The basic instructions are the new w instruction, which creates a new node which is the result of following the string w, and an instruction which redirects an edge to point at a previously-created node that is the result of another string of symbols, so that the edge may even point backwards to an old node. Here w and v represent words over the alphabet.
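The SMM storage structure above can be sketched in code. This is a rough model under assumptions spelled out here, not Schönhage's formal definition: nodes are dictionary rows giving one outgoing edge per alphabet symbol, words name nodes by path-following from the start node, `new` creates a node at the end of a path, and `redirect` (the edge-redirection instruction paraphrased above) repoints the last edge of one path at the node named by another.

```python
# Sketch of a Storage Modification Machine's graph storage (illustrative).
class SMM:
    def __init__(self, alphabet=("0", "1")):
        self.alphabet = alphabet
        # Node 0 is the start node; initially all its edges loop to itself.
        self.nodes = [{a: 0 for a in alphabet}]
        self.start = 0

    def follow(self, word):
        """Return the node reached from the start node along word w."""
        node = self.start
        for symbol in word:
            node = self.nodes[node][symbol]
        return node

    def new(self, word):
        """new w: create a node and make it the result of following w."""
        fresh = len(self.nodes)
        self.nodes.append({a: fresh for a in self.alphabet})
        self.nodes[self.follow(word[:-1])][word[-1]] = fresh

    def redirect(self, word_v, word_w):
        """Repoint the last edge of path v at the node named by path w."""
        target = self.follow(word_w)
        self.nodes[self.follow(word_v[:-1])][word_v[-1]] = target

m = SMM()
m.new("0")              # create a node reachable as the word "0"
m.redirect("1", "0")    # make the word "1" name that same old node
print(m.follow("1") == m.follow("0"))   # → True
```

Note how `redirect` realizes the "point backwards to an old node" behavior: after it runs, two different words name one node, and the word-to-node identification has changed exactly as the text describes.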

6.
Finite-state machine
–
A finite-state machine (FSM) or finite-state automaton, finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to external inputs. An FSM is defined by a list of its states, its initial state, and the inputs that trigger each transition. The behavior of state machines can be observed in many devices in modern society that perform a predetermined sequence of actions depending on a sequence of events with which they are presented. The finite-state machine has less computational power than some other models of computation such as the Turing machine: there are tasks that a Turing machine can do but an FSM cannot. This is because an FSM's memory is limited by the number of states it has. FSMs are studied in the more general field of automata theory. An example of a simple mechanism that can be modeled by a state machine is a turnstile. A turnstile, used to control access to subways and amusement park rides, is a gate with three rotating arms at waist height, one across the entryway. Initially the arms are locked, blocking the entry and preventing patrons from passing through. Depositing a coin or token in a slot on the turnstile unlocks the arms, allowing a single customer to push through. After the customer passes through, the arms are locked again until another coin is inserted. Considered as a state machine, the turnstile has two possible states: Locked and Unlocked. There are two inputs that affect its state: putting a coin in the slot (coin) and pushing the arm (push). In the locked state, pushing on the arm has no effect, no matter how many times the input push is given. Putting a coin in, that is, giving the machine a coin input, shifts the state from Locked to Unlocked. In the unlocked state, putting additional coins in has no effect. However, a customer pushing through the arms, giving a push input, shifts the state back to Locked.
In the state diagram, each state is represented by a node; edges show the transitions from one state to another. Each arrow is labeled with the input that triggers that transition. An input that doesn't cause a change of state is represented by a circular arrow returning to the original state. The arrow into the Locked node from the black dot indicates it is the initial state.
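The turnstile described above is small enough to write down completely as a table-driven state machine: two states, two inputs, and four transitions fully define it.

```python
# The turnstile FSM from the text: states Locked/Unlocked, inputs coin/push.
TRANSITIONS = {
    ("Locked",   "coin"): "Unlocked",  # a coin unlocks the arms
    ("Locked",   "push"): "Locked",    # pushing a locked arm does nothing
    ("Unlocked", "coin"): "Unlocked",  # extra coins have no effect
    ("Unlocked", "push"): "Locked",    # passing through locks it again
}

def run(inputs, state="Locked"):
    """Feed a sequence of inputs to the turnstile, starting from Locked."""
    for event in inputs:
        state = TRANSITIONS[(state, event)]
    return state

print(run(["push", "coin", "push"]))   # → Locked
print(run(["coin", "coin"]))           # → Unlocked
```

The transition table is exactly the information the state diagram draws as labeled arrows, which is why FSMs are so convenient to implement directly.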

7.
Logic gate
–
Depending on the context, the term may refer to an ideal logic gate, one that has for instance zero rise time and unlimited fan-out, or it may refer to a non-ideal physical device. In modern practice, most gates are made from field-effect transistors (FETs), particularly MOSFETs. The compound logic gates AND-OR-Invert (AOI) and OR-AND-Invert (OAI) are often employed in circuit design because their construction using MOSFETs is simpler and more efficient than the sum of the individual gates. In reversible logic, Toffoli gates are used. To build a functionally complete logic system, relays, valves, or transistors can be used. The simplest family of logic gates using bipolar transistors is called resistor-transistor logic (RTL). Unlike simple diode logic gates, RTL gates can be cascaded indefinitely to produce more complex logic functions. RTL gates were used in early integrated circuits. For higher speed and better density, the resistors used in RTL were replaced by diodes, resulting in diode-transistor logic (DTL). As integrated circuits became more complex, bipolar transistors were replaced with smaller field-effect transistors. To reduce power consumption still further, most contemporary chip implementations of digital systems now use CMOS logic. CMOS uses complementary MOSFET devices to achieve high speed with low power dissipation. Increasingly, these fixed-function logic gates are being replaced by programmable logic devices, which allow designers to pack a large number of mixed logic gates into a single integrated circuit. Electronic logic gates differ significantly from their relay-and-switch equivalents: they are much faster, consume much less power, and are much smaller. Also, there is a structural difference: the switch circuit creates a continuous path for current to flow between its input and its output, whereas the semiconductor logic gate acts as a high-gain voltage amplifier.
It is not possible for current to flow between the output and the input of a semiconductor logic gate. Another important advantage of standardized integrated circuit logic families, such as the 7400 and 4000 families, is that they can be cascaded: the output of one gate can be wired to the inputs of one or several other gates, and so on. The output of one gate can drive only a finite number of inputs to other gates. Also, there is always a delay, called the propagation delay, between a change at a gate's input and the corresponding change at its output. When gates are cascaded, the total propagation delay is approximately the sum of the individual delays, an effect which can become a problem in high-speed circuits. The binary number system was refined by Gottfried Wilhelm Leibniz, who established that by using the binary system, the principles of arithmetic and logic could be joined. In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. Eventually, vacuum tubes replaced relays for logic operations.
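The AND-OR-Invert compound gate and the sum-of-delays rule for cascading can both be illustrated with idealized gates (zero rise time, as the text's "ideal logic gate" allows; the 10 ns per-gate delay below is an assumed figure for illustration, not a 7400-family datasheet value):

```python
# Ideal logic gates and the 2-2 AND-OR-Invert (AOI) compound gate.
def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def AOI22(a, b, c, d):
    """2-2 AND-OR-Invert: NOT(OR(AND(a, b), AND(c, d))), built as one gate
    in MOSFET logic rather than from three separate gates."""
    return NOT(OR(AND(a, b), AND(c, d)))

print(AOI22(1, 1, 0, 0))   # → 0: first AND pair is true, so output inverts to 0
print(AOI22(0, 1, 0, 0))   # → 1: neither AND pair is true

# Cascading: total propagation delay ≈ sum of the individual delays.
GATE_DELAY_NS = 10                    # assumed per-gate delay for illustration
chain = ["AND", "OR", "NOT"]          # three gates in series
print(len(chain) * GATE_DELAY_NS)     # → 30 (ns through the chain)
```

This is also why the single-gate AOI construction matters: realizing the same function as one gate instead of a three-gate cascade cuts both transistor count and propagation delay.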

8.
Auxiliary memory
–
In RAM devices, data can be directly deleted or changed. Auxiliary memory is used to store a large amount of data at lesser cost per byte than primary memory; secondary storage is typically two orders of magnitude less expensive than primary storage. The most common forms of auxiliary memory are flash memory, optical discs, magnetic disks and magnetic tape. The latest addition to the auxiliary memory family is flash memory, and this form is much faster than its predecessors, as it does not involve any moving parts. In some laptops, this type of memory is referred to as a solid-state drive (SSD). Flash memory: an electronic non-volatile computer storage device that can be erased and reprogrammed. Examples of this are flash drives, memory cards and solid-state drives; a version of this is implemented in many notebook and some desktop computers. Optical disc: a storage medium from which data is read by laser. Optical discs can store much more data (up to 6 gigabytes) than most portable magnetic media, such as floppies. There are three types of optical disks: CD/DVD/BD-ROM, WORM, and EO. Magnetic disk: a circular plate constructed of metal or plastic coated with magnetized material. Both sides of the disk are used, and several disks may be stacked on one spindle with read/write heads available on each surface. Bits are stored on the surface in spots along concentric circles called tracks; tracks are commonly divided into sections called sectors. Disks that are permanently attached and cannot be removed by the occasional user are called hard disks; a drive with removable disks is called a removable-disk drive. Magnetic tape: a magnetic tape transport consists of the electrical, mechanical and electronic components needed to move the tape. The tape itself is a strip of plastic coated with a magnetic recording medium. Bits are recorded as magnetic spots on the tape along several tracks; usually seven or nine bits are recorded together to form a character, along with a parity bit.
R/W heads are mounted one per track so that data can be recorded and read as a sequence of characters. See also: Computer data storage (Secondary storage), Data storage device, External storage, Mass storage.
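The character-plus-parity encoding described for magnetic tape can be shown concretely. A minimal sketch, assuming seven data bits per character with an even-parity eighth bit (real tape formats vary; odd parity was also common):

```python
# Even-parity encoding of 7-bit characters, as on seven-track tape.
def add_even_parity(ch: str) -> int:
    """Encode a 7-bit character with an eighth (even) parity bit."""
    bits = ord(ch) & 0x7F
    parity = bin(bits).count("1") % 2     # 1 if the count of 1 bits is odd
    return (parity << 7) | bits

def parity_ok(byte: int) -> bool:
    """Valid characters carry an even number of 1 bits in total."""
    return bin(byte).count("1") % 2 == 0

encoded = add_even_parity("A")            # 'A' = 0b1000001: two 1 bits already
print(parity_ok(encoded))                 # → True
print(parity_ok(encoded ^ 0b0000100))     # → False: a single flipped bit is caught
```

As with the CRC discussed earlier, a single parity bit detects any one-bit error in a character but cannot say which bit flipped, so a detected error triggers a reread rather than a correction.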