In computer architecture, a bus is a communication system that transfers data between components inside a computer, or between computers. The expression covers all related hardware components and software, including communication protocols. Early computer buses were parallel electrical wires with multiple hardware connections, but the term is now used for any physical arrangement that provides the same logical function as a parallel electrical bus. Modern computer buses can use both parallel and bit-serial connections, and can be wired in either a multidrop or daisy-chain topology, or connected by switched hubs, as in the case of USB. Computer systems consist of three main parts: the central processing unit (CPU) that processes data, memory that holds the programs and data to be processed, and I/O devices as peripherals that communicate with the outside world. An early computer might contain a hand-wired CPU of vacuum tubes, a magnetic drum for main memory, and a punched tape reader and a printer for reading and writing data, respectively.
A modern system might have a multi-core CPU, DDR4 SDRAM for memory, a solid-state drive for secondary storage, a graphics card and LCD as a display system, a mouse and keyboard for interaction, and a Wi-Fi connection for networking. In both examples, computer buses of one form or another move data between all of these devices. In most traditional computer architectures, the CPU and main memory tend to be tightly coupled. A microprocessor is conventionally a single chip which has a number of electrical connections on its pins that can be used to select an "address" in the main memory and another set of pins to read and write the data stored at that location. In most cases, the CPU and memory share signalling characteristics and operate in synchrony; the bus connecting the CPU and memory is one of the defining characteristics of the system, and is referred to as the system bus. It is possible to allow peripherals to communicate with memory in the same fashion, attaching adaptors in the form of expansion cards directly to the system bus.
This is accomplished through some sort of standardized electrical connector, several of these forming the expansion bus or local bus. However, as the performance difference between the CPU and peripherals varies widely, some solution is needed to ensure that peripherals do not slow overall system performance. Many CPUs feature a second set of pins similar to those for communicating with memory, but able to operate at different speeds and using different protocols. Others use smart controllers to place the data directly in memory, a concept known as direct memory access. Most modern systems combine both solutions. As the number of potential peripherals grew, using an expansion card for every peripheral became untenable. This has led to the introduction of bus systems designed to support multiple peripherals. Common examples are the SATA ports in modern computers, which allow a number of hard drives to be connected without the need for a card. However, these high-performance systems are too expensive to implement in low-end devices, like a mouse.
This has led to the parallel development of a number of low-performance bus systems for these solutions, the most common example being the standardized Universal Serial Bus (USB). All such examples may be referred to as peripheral buses, although this terminology is not universal. In modern systems the performance difference between the CPU and main memory has grown so great that increasing amounts of high-speed memory are built directly into the CPU, known as a cache. In such systems, CPUs communicate using high-performance buses that operate at speeds much greater than memory, and communicate with memory using protocols similar to those used for peripherals in the past. These system buses are also used to communicate with most other peripherals through adaptors, which in turn talk to other peripherals and controllers. Such systems are architecturally more similar to multicomputers, communicating over a bus rather than a network. In these cases, expansion buses are entirely separate and no longer share any architecture with their host CPU.
What would have been a system bus is now often known as a front-side bus. Given these changes, the classical terms "system", "expansion" and "peripheral" no longer have the same connotations. Other common categorization systems are based on the bus's primary role: connecting devices internally or externally, PCI vs. SCSI for instance. However, many common modern bus systems can be used for both; other examples, like InfiniBand and I²C, were designed from the start to be used both internally and externally. The internal bus, also known as the internal data bus, memory bus, system bus or front-side bus, connects all the internal components of a computer, such as the CPU and memory, to the motherboard. Internal data buses are also referred to as a local bus, because they are intended to connect to local devices; this bus is rather quick and is independent of the rest of the computer's operations. The external bus, or expansion bus, is made up of the electronic pathways that connect the different external devices, such as a printer, to the computer.
Buses can be parallel buses, which carry data words in parallel on multiple wires, or serial buses, which carry data in bit-serial form. The addition of extra power and control connections, differential
In computer architecture, 36-bit integers, memory addresses, or other data units are those that are 36 bits wide. 36-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size. Prior to the introduction of computers, the state of the art in precision scientific and engineering calculation was the ten-digit, electrically powered, mechanical calculator, such as those manufactured by Friden and Monroe. These calculators had a column of keys for each digit, and operators were trained to use all their fingers when entering numbers, so, while some specialized calculators had more columns, ten was a practical limit. Computers, as the new competitor, had to match that accuracy. Decimal computers sold in that era, such as the IBM 650 and the IBM 7070, had a word length of ten digits, as did ENIAC, one of the earliest computers. Early binary computers aimed at the same market therefore used a 36-bit word length; this was long enough to represent positive and negative integers to an accuracy of ten decimal digits.
It also allowed the storage of six alphanumeric characters encoded in a six-bit character code. Computers with 36-bit words included the MIT Lincoln Laboratory TX-2, the IBM 701/704/709/7090/7094, the UNIVAC 1103/1103A/1105 and 1100/2200 series, the General Electric GE-600/Honeywell 6000, the Digital Equipment Corporation PDP-6/PDP-10, and the Symbolics 3600 series. Smaller machines like the PDP-1/PDP-9/PDP-15 used 18-bit words, so a double word was 36 bits. These computers had addresses 12 to 18 bits in length. The addresses referred to 36-bit words, so the computers were limited to addressing between 4,096 and 262,144 words; the older 36-bit computers were limited to a similar amount of physical memory as well. Architectures that survived evolved over time to support larger virtual address spaces using memory segmentation or other mechanisms. The common character packings included:

- six 5.32-bit DEC Radix-50 characters, plus four spare bits
- six 6-bit Fieldata or IBM BCD characters
- six 6-bit ASCII characters, supporting the upper-case unaccented letters, digits, and most ASCII punctuation characters; this packing was used on the PDP-10 under the name sixbit
- five 7-bit characters and one unused bit
- four 8-bit characters, plus four spare bits
- four 9-bit characters

Characters were extracted from words either using machine code shift and mask operations or with special-purpose hardware supporting 6-bit, 9-bit, or variable-length characters. The Univac 1100/2200 used the partial word designator of the instruction, the "J" field, to access characters. The GE-600 used special indirect words to access 6- and 9-bit characters; the PDP-6/10 had special instructions to access arbitrary-length byte fields. The standard C programming language requires that the size of the char data type be at least 8 bits, and that all data types other than bitfields have a size that is a multiple of the character size, so standard C implementations on 36-bit machines would use 9-bit chars, although 12-bit, 18-bit, or 36-bit would also satisfy the requirements of the standard. By the time IBM introduced System/360 with 32-bit full words, scientific calculations had shifted to floating point, where double-precision formats offered more than 10-digit accuracy.
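As an illustration of one of these packings, the following sketch packs six 6-bit "sixbit" characters into a single 36-bit word and extracts them again with the shift-and-mask operations described above. The 0x20 offset follows the PDP-10 sixbit convention of mapping ASCII 0x20–0x5F to codes 0–63; the function names are hypothetical.

```python
def pack_sixbit(text):
    """Pack exactly six sixbit characters into one 36-bit integer word."""
    assert len(text) == 6
    word = 0
    for ch in text:
        code = ord(ch.upper()) - 0x20      # sixbit code, 0..63
        assert 0 <= code < 64
        word = (word << 6) | code
    return word                            # six 6-bit fields = 36 bits

def unpack_sixbit(word):
    """Recover the six characters from a 36-bit word."""
    chars = []
    for shift in range(30, -1, -6):        # leftmost character first
        chars.append(chr(((word >> shift) & 0o77) + 0x20))
    return "".join(chars)

w = pack_sixbit("HELLO ")
print(w < 2**36)            # True: the packed word fits in 36 bits
print(unpack_sixbit(w))     # "HELLO " round-trips
```

On a real 36-bit machine this would be done with shift and mask instructions or byte-pointer hardware rather than Python integers, but the bit layout is the same.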
The 360s also included instructions for variable-length decimal arithmetic for commercial applications, and the practice of using word lengths that were a power of two became commonplace. At least one line of 36-bit computer systems is still sold as of 2019: the Unisys ClearPath Dorado series, the continuation of the UNIVAC 1100/2200 series of mainframe computers. CompuServe was launched using 36-bit PDP-10 computers in the late 1960s; it continued using PDP-10 and DECSYSTEM-10-compatible hardware until retiring the service in the late 2000s. The LatticeECP3 FPGAs from Lattice Semiconductor include multiplier slices that can be configured to support the multiplication of two 36-bit numbers; the DSP block in Altera Stratix FPGAs can do 36-bit multiplications. See also: Physical Address Extension, PSE-36, UTF-9 and UTF-18.
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation, so as to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers, or which require fast processing times. A number is, in general, represented to a fixed number of significant digits and scaled using an exponent in some fixed base. A number that can be represented exactly is of the following form: significand × base^exponent, where the significand is an integer, the base is an integer greater than or equal to two, and the exponent is an integer. For example, 1.2345 = 12345 × 10^−4, where 12345 is the significand, 10 is the base, and −4 is the exponent. The term floating point refers to the fact that a number's radix point can "float"; its position is indicated by the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A floating-point system can be used to represent, with a fixed number of digits, numbers of very different orders of magnitude: e.g. the distance between galaxies or the diameter of an atomic nucleus can be expressed with the same unit of length.
The result of this dynamic range is that the numbers that can be represented are not uniformly spaced; the difference between two consecutive representable numbers grows with their magnitude. Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s the most commonly encountered representations are those defined by the IEEE. The speed of floating-point operations, commonly measured in terms of FLOPS, is an important characteristic of a computer system for applications that involve intensive mathematical calculations. A floating-point unit (FPU) is a part of a computer system specially designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number as a string of digits. There are several mechanisms. In common mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character there. If the radix point is not specified, the string implicitly represents an integer and the unstated radix point would be off the right-hand end of the string, next to the least significant digit.
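The non-uniform spacing can be observed directly in IEEE 754 double precision. The following small sketch (illustrative, not part of the original text) uses Python's `math.ulp`, which returns the gap between a value and the next representable double of larger magnitude:

```python
import math

# The gap ("unit in the last place") between consecutive representable
# doubles grows with magnitude, so representable numbers are not
# uniformly spaced on the real line.
for x in (1.0, 1000.0, 1e15):
    print(x, math.ulp(x))

# Near 1.0 the gap is 2**-52 (about 2.2e-16); near 1e15 it is 0.125.
```

`math.ulp` requires Python 3.9 or later; on older versions `math.nextafter(x, math.inf) - x` gives the same gap.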
In fixed-point systems, a position in the string is specified for the radix point. So a fixed-point scheme might be to use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345. In scientific notation, the given number is scaled by a power of 10, so that it lies within a certain range—typically between 1 and 10, with the radix point appearing after the first digit; the scaling factor, as a power of ten, is then indicated separately at the end of the number. For example, the orbital period of Jupiter's moon Io is 152,853.5047 seconds, a value that would be represented in standard-form scientific notation as 1.528535047 × 10^5 seconds. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of: a signed digit string of a given length in a given base. This digit string is referred to as the significand, mantissa, or coefficient. The length of the significand determines the precision; the radix point position is assumed always to be somewhere within the significand—often just after or just before the most significant digit, or to the right of the rightmost digit.
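The fixed-point scheme above can be sketched in a few lines. The function name is illustrative only; the implied decimal point sits after the fourth digit, as in the "00012345" example:

```python
def fixed_point_value(digits: str) -> float:
    """Interpret an 8-digit string with an implied decimal point in the middle."""
    assert len(digits) == 8 and digits.isdigit()
    return int(digits) / 10**4   # four digits fall after the implied point

print(fixed_point_value("00012345"))   # 1.2345, i.e. 0001.2345
```

Note that the radix point never moves: every value this scheme can represent has exactly four fractional digits, which is precisely the limitation floating point removes.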
This article follows the convention that the radix point is set just after the most significant digit. The second component of a floating-point number is a signed integer exponent. To derive the value of the floating-point number, the significand is multiplied by the base raised to the power of the exponent, equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent—to the right if the exponent is positive or to the left if the exponent is negative. Using base-10 as an example, the number 152,853.5047, which has ten decimal digits of precision, is represented as the significand 1,528,535,047 together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 10^5 to give 1.528535047 × 10^5, or 152,853.5047. In storing such a number, the base need not be stored, since it will be the same for the entire range of supported numbers, and can thus be inferred. Symbolically, this final value is s / b^(p−1) × b^e, where s is the significand (ignoring any implied decimal point), p is the precision (the number of digits in the significand), b is the base, and e is the exponent.
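The derivation above can be checked numerically. This small sketch (with values taken from the Io example and hypothetical variable names) computes s / b^(p−1) × b^e:

```python
# Significand, base, precision, and exponent from the Io orbital-period example.
s, b, p, e = 1528535047, 10, 10, 5

# Dividing by b**(p-1) places the decimal point after the first digit
# (1.528535047); multiplying by b**e then shifts it e places right.
value = s / b**(p - 1) * b**e
print(value)   # about 152853.5047 seconds
```

Because the computation is done in binary floating point, the printed result may differ from 152,853.5047 in the last few digits, which is itself a small demonstration of the approximation discussed in this article.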
Central processing unit
A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic and input/output operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit, distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. The form and implementation of CPUs have changed over the course of their history, but their fundamental operation remains almost unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching and execution of instructions by directing the coordinated operations of the ALU, registers and other components.
Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer. Some computers employ a multi-core processor, a single chip containing two or more CPUs called "cores". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". Since the term "CPU" is defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer had already been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that it could be finished sooner.
On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit.
The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both.
Most modern CPUs are von Neumann in design, but CPUs with the Harvard architecture are seen as well in embedded applications. Relays and vacuum tubes were used as switching elements; the overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were common at this time, limited by the speed of the switching de
In computer architecture, 60-bit integers, memory addresses, or other data units are those that are 60 bits wide. 60-bit CPU and ALU architectures are those that are based on registers, address buses, or data buses of that size. Computers designed with 60-bit words are quite rare, with Control Data Corporation being one of the few, or perhaps the only, manufacturers to use this size. Examples include the CDC 6000 series, the CDC 7600, and the CDC Cyber 70 and 170 series. Museum examples of 60-bit CDC machines exist, and there exists an emulator for the series which will simulate the CDC 60-bit machines on commodity hardware and operating systems.
The Hewlett-Packard Company, or Hewlett-Packard (HP), was an American multinational information technology company headquartered in Palo Alto, California. It developed and provided a wide variety of hardware components as well as software and related services to consumers, small- and medium-sized businesses and large enterprises, including customers in the government and education sectors. The company was founded in a one-car garage in Palo Alto by Bill Hewlett and David Packard, and initially produced a line of electronic test equipment. HP was the world's leading PC manufacturer from 2007 to Q2 2013, at which time Lenovo ranked ahead of HP. HP specialized in developing and manufacturing computing, data storage, and networking hardware, designing software and delivering services. Major product lines included personal computing devices and industry-standard servers, related storage devices, networking products, software and a diverse range of printers and other imaging products. HP directly marketed its products to households, small- to medium-sized businesses and enterprises as well as via online distribution, consumer-electronics and office-supply retailers, software partners and major technology vendors.
HP also had services and consulting business around its products and partner products. Notable Hewlett-Packard company events included the spin-off of its electronic and bio-analytical measurement instruments part of its business as Agilent Technologies in 1999, its merger with Compaq in 2002, and the acquisition of EDS in 2008, which led to combined revenues of $118.4 billion in 2008 and a Fortune 500 ranking of 9 in 2009. In November 2009, HP announced the acquisition of 3Com, with the deal closing on April 12, 2010. On April 28, 2010, HP announced the buyout of Palm, Inc. for $1.2 billion. On September 2, 2010, HP won its bidding war for 3PAR with a $33-a-share offer, which Dell declined to match. Hewlett-Packard spun off its enterprise products and services business as Hewlett Packard Enterprise on November 1, 2015. Hewlett-Packard held onto the PC and printer businesses, and was renamed HP Inc. Bill Hewlett and David Packard graduated with degrees in electrical engineering from Stanford University in 1935. The company originated in a garage in nearby Palo Alto during a fellowship they had with a past professor, Frederick Terman, at Stanford during the Great Depression.
They considered Terman a mentor in forming Hewlett-Packard. In 1938, Packard and Hewlett began part-time work in a rented garage with an initial capital investment of US$538. In 1939 Hewlett and Packard decided to formalize their partnership; they tossed a coin to decide whether the company they founded would be called Hewlett-Packard or Packard-Hewlett. HP incorporated on August 18, 1947, and went public on November 6, 1957. Of the many projects they worked on, their first financially successful product was a precision audio oscillator, the Model HP200A. Their innovation was the use of a small incandescent light bulb as a temperature-dependent resistor in a critical portion of the circuit, the negative feedback loop which stabilized the amplitude of the output sinusoidal waveform. This allowed them to sell the Model 200A for $89.40 when competitors were selling less stable oscillators for over $200. The Model 200 series of generators continued in production until at least 1972 as the 200AB, still tube-based but improved in design through the years.
One of the company's earliest customers was Walt Disney Productions, which bought eight Model 200B oscillators for use in certifying the Fantasound surround sound systems installed in theaters for the movie Fantasia. They worked on counter-radar technology and artillery shell fuses during World War II, which allowed Packard to be exempt from the draft. HP is recognized as the symbolic founder of Silicon Valley, although it did not investigate semiconductor devices until a few years after the "traitorous eight" had abandoned William Shockley to create Fairchild Semiconductor in 1957. Hewlett-Packard's HP Associates division, established around 1960, developed semiconductor devices for internal use. Instruments and calculators were some of the products using these devices. During the 1960s, HP partnered with Sony and the Yokogawa Electric companies in Japan to develop several high-quality products; the products were not a huge success, as there were high costs in building HP-looking products in Japan.
HP and Yokogawa formed a joint venture in 1963 to market HP products in Japan. HP bought Yokogawa Electric's share of Hewlett-Packard Japan in 1999. HP spun off Dynac to specialize in digital equipment. The name was picked so that the HP logo "hp" could be turned upside down to form a reverse image of the logo "dy" of the new company. Dynac later changed its name to Dymec and was folded back into HP in 1959. HP experimented with using Digital Equipment Corporation minicomputers with its instruments, but after deciding that it would be easier to build another small design team than deal with DEC, HP entered the computer market in 1966 with the HP 2100 / HP 1000 series of minicomputers. These had a simple accumulator-based design, with two accumulator registers and, in the HP 1000 models, two index registers. The series was produced for 20 years, in spite of several attempts to replace it, and was a forerunner of the HP 9800 and HP 250 series of desktop and business computers. The HP 3000 was an advanced stack-based design for a business computing server, later redesigned with RISC technology.
The HP 2640 series of smart and intelligent terminals introduced forms-based interfaces to ASCII terminals, introduced screen labeled functio
A 1-bit computer architecture is an instruction set architecture for a processor that has datapath widths and data register widths of 1 bit. Examples of 1-bit computers built from discrete-logic SSI chips are the Wang 500 and Wang 700 calculators, as well as the Wang 1200 word processor series of Wang Laboratories. An example of a 1-bit architecture that was marketed as a CPU is the Motorola MC14500B Industrial Control Unit, introduced in 1977 and manufactured at least up into the mid-1990s. One of the computers known to be based on this CPU was the WDR 1-bit computer. A typical sequence of instructions from a program for a 1-bit architecture might be: load digital input 1 into a 1-bit register. This architecture was considered superior for programs making decisions rather than performing arithmetic computations, for ladder logic, as well as for serial data processing. There are several design studies for 1-bit architectures in academia, and corresponding 1-bit logic can also be found in programming. Other examples of 1-bit architectures are programmable logic controllers, programmed in instruction list.
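As a rough illustration, the single step above can be extended into a tiny ladder-logic style sequence in Python. The sketch is loosely modeled on the MC14500B's single 1-bit result register; the pin numbering and the mapping of comments to instructions are illustrative assumptions, not the chip's actual programming model:

```python
# Simulated digital I/O pins (assumed layout for this sketch).
inputs = {1: True, 2: False}
outputs = {}

rr = False              # the single 1-bit result register
rr = inputs[1]          # "load digital input 1 into a 1-bit register"
rr = rr and inputs[2]   # AND the register with digital input 2
outputs[3] = rr         # store the decision bit to digital output 3

print(outputs[3])   # False: input 2 is low, so the output stays off
```

The entire machine state is one boolean, which is why such architectures suit decision-making and ladder logic far better than arithmetic.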
Several early massively parallel computers used 1-bit architectures for the processors as well; examples include the Connection Machine. By using a 1-bit architecture for the individual processors, a very large array could be constructed with the chip technology available at the time; in this case the slow computation of a 1-bit processor was traded off against the large number of processors. 1-bit CPUs can meanwhile be considered obsolete; not many kinds have been produced, and none are known to be available in the major computer component stores. See also: bit-serial architecture, bit slicing, Turing machine.