Central processing unit
A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic and input/output operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit, distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. The form and implementation of CPUs have changed over the course of their history, but their fundamental operation remains unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching and execution of instructions by directing the coordinated operations of the ALU, registers and other components.
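The interplay of control unit, registers and ALU described above can be illustrated with a minimal sketch. The instruction set below is entirely hypothetical (it is not any real ISA); the loop plays the role of the control unit, a small list stands in for the registers, and the arithmetic inside each opcode stands in for the ALU.

```python
def run(program):
    """Execute a toy program: a list of (opcode, a, b) tuples."""
    regs = [0] * 4              # processor registers
    pc = 0                      # program counter
    while pc < len(program):
        op, a, b = program[pc]  # fetch and decode the next instruction
        if op == "LOAD":        # load the immediate b into register a
            regs[a] = b
        elif op == "ADD":       # ALU operation: regs[a] += regs[b]
            regs[a] = regs[a] + regs[b]
        elif op == "SUB":       # ALU operation: regs[a] -= regs[b]
            regs[a] = regs[a] - regs[b]
        pc += 1                 # control unit advances to the next instruction
    return regs

# Compute (5 + 7) - 2, leaving the result in register 0.
print(run([("LOAD", 0, 5), ("LOAD", 1, 7), ("ADD", 0, 1),
           ("LOAD", 2, 2), ("SUB", 0, 2)]))  # prints [10, 7, 2, 0]
```

Real CPUs implement this same fetch–decode–execute cycle in hardware, with the control unit steering data between memory, the registers and the ALU on every cycle.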
Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces and other components of a computer. Some computers employ a multi-core processor, a single chip containing two or more CPUs called "cores". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". Since the term "CPU" is generally defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer had been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was omitted so that the machine could be finished sooner.
On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types; significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, namely the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC was not, however, the first stored-program computer. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit.
The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both.
Most modern CPUs are von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications. Early CPUs used relays and vacuum tubes as switching elements; the overall speed of such a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages they afforded outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were common at this time, limited largely by the speed of the switching devices they were built with.
The Intel 80186, also known as the iAPX 186 or just 186, is a microprocessor and microcontroller introduced in 1982. It was based on the Intel 8086 and, like it, had a 16-bit external data bus multiplexed with a 20-bit address bus. It was also available as the 80188, with an 8-bit external data bus. The 80186 series was intended for embedded systems, as microcontrollers with external memory. Therefore, to reduce the number of integrated circuits required, it included features such as a clock generator, interrupt controller, wait-state generator, DMA channels and external chip-select lines. The initial clock rate of the 80186 was 6 MHz, but due to more hardware being available for the microcode to use for address calculation, many individual instructions ran faster than on an 8086 at the same clock frequency. For instance, the common register+immediate addressing mode was faster than on the 8086, as was the case when a memory location was both operand and destination. Multiply and divide showed great improvement, being several times as fast as on the original 8086, and multi-bit shifts were done about four times as fast as on the 8086.
A few new instructions were introduced with the 80186: enter/leave, pusha/popa and ins/outs. A useful immediate mode was also added for the push and multi-bit shift instructions; these instructions were included in the contemporary 80286 and in successor chips. The CMOS version, the 80C186, introduced DRAM refresh, a power-save mode and a direct interface to the 8087 or 80187 floating-point coprocessor. The 80186 would have been a natural successor to the 8086 in personal computers. However, because its integrated hardware was incompatible with the hardware used in the original IBM PC, the 80286 was used as the successor instead in the IBM PC/AT. A few notable personal computers did use the 80186, such as the Australian Dulmont Magnum laptop, one of the first laptops. Acorn created a plug-in for the BBC Master range of computers containing an 80186-10 with 512 KB of RAM, the BBC Master 512 system. In addition to these stand-alone implementations of the 80186 for personal computers, there was at least one example of an "add-in" accelerator card implementation: the Orchid Technology PC Turbo 186, released in 1985.
It was intended for use with the original Intel 8088-based IBM PC. The Intel 80186 is, however, mostly intended to be embedded in electronic devices that are not computers. For example, the 80186 was used to control the Microtek 8086 in-circuit emulator. Its offshoot, the Intel 80188, was embedded inside the Intel 14.4EX modem released in 1991; the 16 MHz processor was used to perform the complex algorithms needed for forward error correction, Trellis modulation and echo cancellation in the modem. In May 2006, Intel announced that production of the 186 would cease at the end of September 2007. Pin- and instruction-compatible replacements might still be manufactured by various third-party sources.
See also: iAPX, for the iAPX name.
Attribution: This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.
External links: Intel datasheet; scan of the Intel 80186 data book at datasheetarchive.com; Intel 80186/80188 images and descriptions at cpu-collection.de; Chipdb.org.
x86 is a family of instruction set architectures based on the Intel 8086 microprocessor and its 8088 variant. The 8086 was introduced in 1978 as a 16-bit extension of Intel's 8-bit 8080 microprocessor, with memory segmentation as a solution for addressing more memory than can be covered by a plain 16-bit address. The term "x86" came into being because the names of several successors to Intel's 8086 processor end in "86", including the 80186, 80286, 80386 and 80486 processors. Many additions and extensions have been added to the x86 instruction set over the years, consistently with full backward compatibility. The architecture has been implemented in processors from Intel, Cyrix, AMD, VIA and many other companies. Of those, only Intel, AMD and VIA hold x86 architectural licenses and produce modern 64-bit designs. The term is not synonymous with IBM PC compatibility, as that implies a multitude of other computer hardware. As of 2018, the majority of personal computers and laptops sold are based on the x86 architecture, while other categories, especially high-volume mobile categories such as smartphones or tablets, are dominated by ARM.
In the 1980s and early 1990s, when the 8088 and 80286 were still in common use, the term x86 represented any 8086-compatible CPU. Today, however, x86 implies binary compatibility with the 32-bit instruction set of the 80386. This is because that instruction set has become something of a lowest common denominator for many modern operating systems, and also because the term became common after the introduction of the 80386 in 1985. A few years after the introduction of the 8086 and 8088, Intel added some complexity to its naming scheme and terminology, as the "iAPX" prefix of the ambitious but ill-fated Intel iAPX 432 processor was tried on the more successful 8086 family of chips, applied as a kind of system-level prefix. An 8086 system, including coprocessors such as the 8087 and 8089, as well as simpler Intel-specific system chips, was thereby described as an iAPX 86 system. There were also the terms iRMX, iSBC and iSBX, all together under the heading Microsystem 80. However, this naming scheme was quite temporary.
Although the 8086 was developed for embedded systems and small multi-user or single-user computers as a response to the successful 8080-compatible Zilog Z80, the x86 line soon grew in features and processing power. Today, x86 is ubiquitous in both stationary and portable personal computers and is used in midrange computers, workstations and most new supercomputer clusters of the TOP500 list. A large amount of software, including a long list of operating systems, runs on x86-based hardware. Modern x86 is, however, uncommon in embedded systems and small low-power applications, and low-cost microprocessor markets such as home appliances and toys lack any significant x86 presence. Simple 8-bit and 16-bit architectures are common there, although the x86-compatible VIA C7, VIA Nano, AMD's Geode, Athlon Neo and Intel Atom are examples of 32- and 64-bit designs used in some low-power and low-cost segments. There have been several attempts, including by Intel itself, to end the market dominance of the "inelegant" x86 architecture descended directly from the first simple 8-bit microprocessors.
Examples of this are the iAPX 432, the Intel 960, the Intel 860 and the Intel/Hewlett-Packard Itanium architecture. However, the continuous refinement of x86 microarchitectures and semiconductor manufacturing would make it hard to replace x86 in many segments. AMD's 64-bit extension of x86 and the scalability of x86 chips such as the eight-core Intel Xeon and 12-core AMD Opteron underline x86 as an example of how continuous refinement of established industry standards can resist the competition from new architectures. The table below lists processor models and model series implementing variations of the x86 instruction set, in chronological order. Each line item is characterized by improved or commercially successful processor microarchitecture designs. At various times, companies such as IBM, NEC, AMD, TI, STM, Fujitsu, OKI, Cyrix, Intersil, C&T, NexGen, UMC and DM&P started to design or manufacture x86 processors intended for personal computers as well as embedded systems. Such x86 implementations are seldom simple copies but often employ different internal microarchitectures as well as different solutions at the electronic and physical levels.
Early compatible microprocessors were 16-bit, while 32-bit designs were developed much later. For the personal computer market, real quantities started to appear around 1990 with i386- and i486-compatible processors, often named similarly to Intel's original chips. Other companies that designed or manufactured x86 or x87 processors include ITT Corporation, National Semiconductor, ULSI System Technology and Weitek. Following the pipelined i486, Intel introduced the Pentium brand name for its new set of superscalar x86 designs.
The 8086 is a 16-bit microprocessor chip designed by Intel between early 1976 and June 8, 1978, when it was released. The Intel 8088, released July 1, 1979, is a modified chip with an external 8-bit data bus, and is notable as the processor used in the original IBM PC design, including the widespread version called the IBM PC XT. The 8086 gave rise to the x86 architecture, which became Intel's most successful line of processors. On June 5, 2018, Intel released a limited-edition CPU celebrating the 40th anniversary of the Intel 8086, called the Intel Core i7-8086K. In 1972, Intel launched the 8008, the first 8-bit microprocessor. It implemented an instruction set designed by Datapoint Corporation with programmable CRT terminals in mind, which nevertheless proved to be general-purpose. The device needed several additional ICs to produce a functional computer, in part due to its being packaged in a small 18-pin "memory package", which ruled out the use of a separate address bus. Two years later, Intel launched the 8080, employing the new 40-pin DIL packages developed for calculator ICs to enable a separate address bus.
It has an extended instruction set that is source-compatible with the 8008 and includes some 16-bit instructions to make programming easier. The 8080 device was eventually replaced by the depletion-load-based 8085, which sufficed with a single +5 V power supply instead of the three different operating voltages of earlier chips. Other well-known 8-bit microprocessors that emerged during these years are the Motorola 6800, General Instrument PIC16X, MOS Technology 6502, Zilog Z80 and Motorola 6809. The 8086 project started in May 1976 and was intended as a temporary substitute for the ambitious and delayed iAPX 432 project. It was an attempt to draw attention from the less-delayed 16- and 32-bit processors of other manufacturers and at the same time to counter the threat from the Zilog Z80, which became very successful. Both the architecture and the physical chip were therefore developed rather quickly by a small group of people, using the same basic microarchitecture elements and physical implementation techniques as employed for the older 8085.
Marketed as source compatible, the 8086 was designed to allow assembly language for the 8008, 8080 or 8085 to be automatically converted into equivalent 8086 source code, with little or no hand-editing. The programming model and instruction set are based on the 8080. However, the 8086 design was expanded to support full 16-bit processing, instead of the limited 16-bit capabilities of the 8080 and 8085. New kinds of instructions were added as well; instructions directly supporting nested ALGOL-family languages such as Pascal and PL/M were among them. According to principal architect Stephen P. Morse, this was a result of a more software-centric approach than in the design of earlier Intel processors. Other enhancements included microcoded multiply and divide instructions and a bus structure better adapted to future coprocessors and multiprocessor systems. The first revision of the instruction set and high-level architecture was ready after about three months, and as no CAD tools were used, four engineers and 12 layout people worked on the chip at the same time.
The 8086 took a little more than two years from idea to working product, which was considered rather fast for a complex design in 1976–1978. The 8086 was sequenced using a mixture of random logic and microcode and was implemented using depletion-load nMOS circuitry with 20,000 active transistors. It was soon moved to a new refined nMOS manufacturing process called HMOS that Intel developed for manufacturing fast static RAM products. This was followed by HMOS-II and HMOS-III versions, and eventually a static CMOS version for battery-powered devices, manufactured using Intel's CHMOS processes. The original chip's minimum feature size was 3.2 μm. The architecture was defined by Stephen P. Morse, with help from Bruce Ravenel in refining the final revisions. Logic designers Jim McKevitt and John Bayliss were the lead engineers of the hardware-level development team, and Bill Pohlman was the manager for the project. The legacy of the 8086 is enduring in the basic instruction set of today's personal computers and servers.
All internal registers, as well as internal and external data buses, are 16 bits wide, which established the "16-bit microprocessor" identity of the 8086. A 20-bit external address bus provides a 1 MB physical address space; this address space is addressed by means of internal memory "segmentation". The data bus is multiplexed with the address bus in order to fit all of the control lines into a standard 40-pin dual in-line package. The chip provides a 16-bit I/O address bus, supporting 64 KB of separate I/O space. The maximum linear address space is limited to 64 KB, simply because internal address and index registers are only 16 bits wide.
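The segmentation scheme mentioned above forms a 20-bit physical address from two 16-bit values: a segment register is shifted left four bits and added to an offset. A minimal sketch, assuming the standard 8086 real-mode rule:

```python
def physical_address(segment, offset):
    """8086 real-mode address formation: (segment * 16 + offset) mod 2^20."""
    return ((segment << 4) + offset) & 0xFFFFF  # wrap to the 20-bit bus

# F000:FFF0 is the 8086 reset vector, 16 bytes below the top of the 1 MB space.
assert physical_address(0xF000, 0xFFF0) == 0xFFFF0

# Many segment:offset pairs alias the same physical address.
assert physical_address(0x1234, 0x0010) == physical_address(0x1235, 0x0000)

# Addresses past 1 MB wrap around to the bottom of memory on a 20-bit bus.
assert physical_address(0xFFFF, 0x0010) == 0x00000
```

Because each segment spans only 64 KB, the overlapping segments tile the 1 MB space at 16-byte intervals, which is why a single segment register cannot address more than 64 KB at a time.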
Harris Corporation is an American technology company, defense contractor and information technology services provider that produces wireless equipment, tactical radios, electronic systems, night vision equipment and both terrestrial and spaceborne antennas for use in the government and commercial sectors. It specializes in surveillance solutions, microwave weaponry and electronic warfare. Headquartered in Melbourne, Florida, the company has $7 billion of annual revenue and is the largest private-sector employer in Florida. The company was also the parent of Intersil. Most of the wireless start-ups in South Brevard County were founded by, and are staffed by, former Harris Corporation engineers and technicians. The company's Digital Telephone Systems division was sold to Teltronics. In 2016, Harris was named one of the top hundred federal contractors by Defense News. In January 2015, Wired Magazine ranked Harris Corporation, tied with the U.S. Marshals Service, as the number two threat to privacy and communications on the Internet.
The Harris Automatic Press Company was founded by Alfred S. Harris in Niles, Ohio, in 1895. The company spent the next 60 years developing lithographic processes and printing presses before acquiring the typesetting company Intertype Corporation. In 1957, Harris acquired Gates Radio, a producer of broadcast transmitters and associated electronics gear, and kept the Gates brand name alive by putting the Gates sticker on the back of numerous transmitters that were labeled Harris on the front panels. In 1959, it acquired the microwave technology company PRD Electronics of New York. In 1967, it merged with Radiation, Inc. of Melbourne, Florida, a developer of antenna, integrated circuit and modem technology used in the space race. The company headquarters was moved from Cleveland to Melbourne in 1978. In 1969, Harris Corporation acquired RF Communications and Farinon Electric Corporation, furthering its microwave assets. The printing operations are now known as GSS Printing Equipment. GSS Printing Equipment acquired Lanier Worldwide, which itself was spun off from Harris Corporation in the late 1990s.
In 1979, Harris formed a semiconductor joint venture, Matra Harris Semiconductors (MHS), from which Harris withdrew in 1989. After further changes, MHS was taken over by Atmel. In 1988, Harris acquired GE's semiconductor business, which at that time incorporated the Intersil and RCA semiconductor businesses; these were combined with Harris's existing semiconductor businesses. In 1996, Harris Corporation formed a joint venture with Shenzhen Telecom Company to produce and sell Harris's digital microwave radios and integrate them with other systems. In November 1998, Harris sold its commercial and standard military logic product lines to Texas Instruments, including the HC/HCT, CD4000, AC/ACT and FCT product families. Harris retained production of the radiation-hardened versions of these products. In 1999, Harris spun off its remaining semiconductor business as an independent company under the Intersil name. In 2005, the corporation spent $870 million on development. Harris Corporation developed a Hand Held Computer for use during the address canvassing portion of the 2010 United States Census.
Secured access via a fingerprint swipe guaranteed that only the verified user had access to the unit. A GPS capability was integral to the daily address management and the transfer of the information gathered. Of major importance was the security and integrity of the personal and private information of the populace. In January 2011, Harris re-opened its Calgary, Alberta avionics operation, Harris Canada Inc. The expanded facility's operations include, among others, support of the work to be completed under the company's six-year, $273 million services contract with the Government of Canada for the CF-18 Avionics Optimized Weapon System Support program. In December 2012, Harris Corporation sold its broadcast equipment operations to the Gores Group, which operated as Harris Broadcast and is now GatesAir. Harris received $225 million for the transaction, half of what it had paid seven years earlier for Leitch Technology, its final acquisition for the Broadcast division. On May 29, 2015, the purchase of competitor Exelis Inc. was finalized, doubling the size of the original company.
In July 2015, Harris Corporation sold its healthcare division, Harris Healthcare Solutions, to NantHealth. In January 2017, Harris sold off its government IT services division to Veritas Capital for $690 million. In October 2018, Harris announced an all-stock "merger of equals" with New York-based L3 Technologies, to be closed in mid-2019. The new company, tentatively called L3 Harris Technologies, Inc., will be based in Melbourne, where Harris is headquartered. In 2019, Elbit Systems of America, the American division of the Israeli Elbit Systems, agreed to purchase Harris's night vision product line for $350 million, contingent on the completion of the merger with L3. The Harris Communication Systems segment serves markets in tactical and airborne radios, night vision technology, and defense and public safety networks. The Harris Electronic Systems segment provides products and services in electronic warfare, air traffic management, wireless technology, C4I, undersea systems and aerostructures.
The Electronic Systems division provides the ALQ-214 radio-frequency jamming equipment for the U.S. Navy's F/A-18 Hornet aircraft; the ALQ-214 was developed by Exelis ES, which Harris acquired in 2015. ES is also a provider of components in the avionics package and targeting systems for the U.S. Navy's EA-18 Growlers. The Harris Space and Intelligence Systems segment was formed when Harris acquired Exelis in 2015.
The Intel 80188 microprocessor was a variant of the Intel 80186. The 80188 had an 8-bit external data bus instead of the 16-bit bus of the 80186; the 16-bit registers and the one-megabyte address range were, however, unchanged. It had a throughput of 1 million instructions per second. The 80188 series was intended for embedded systems, as microcontrollers with external memory. Therefore, to reduce the number of chips required, it included features such as a clock generator, interrupt controller, wait-state generator, DMA channels and external chip-select lines. While the N80188 was compatible with the 8087 numerics coprocessor, the 80C188 was not, as it did not have the ESC control codes integrated. The initial clock rate of the 80188 was 6 MHz, but due to more hardware being available for the microcode to use for address calculation, many individual instructions ran faster than on an 8086 at the same clock frequency. For instance, the common register+immediate addressing mode was faster than on the 8086, as was the case when a memory location was both operand and destination.
Multiply and divide showed great improvement, being several times as fast as on the original 8086, and multi-bit shifts were done about four times as fast as on the 8086. Along with hundreds of other processor models, Intel discontinued the 80188 processor on 30 March 2006, after a life of about 24 years.
External links: Intel 80186/80188 images and descriptions at cpu-collection.de; scan of the Intel 80188 data book at datasheetarchive.com.
Colorburst is an analog video and composite video signal, generated by a video-signal generator, used to keep the chrominance subcarrier synchronized in a color television signal. By synchronizing an oscillator with the colorburst at the back porch of each scan line, a television receiver is able to restore the suppressed carrier of the chrominance signals and in turn decode the color information. The most common use of colorburst is to genlock equipment together as a common reference with a vision mixer in a television studio using a multi-camera setup. In NTSC, its frequency is exactly 315/88 MHz ≈ 3.579545 MHz, with a phase of 180°. PAL uses a frequency of exactly 4.43361875 MHz, with its phase alternating between 135° and 225° from line to line. Since the colorburst signal has a known amplitude, it is sometimes used as a reference level when compensating for amplitude variations in the overall signal. SECAM is unique in not having a colorburst signal, since the chrominance signals are encoded using FM rather than QAM; thus, the signal phase is immaterial and no reference point is needed.
The original black-and-white NTSC television standard specified a frame rate of 30 Hz and 525 lines per frame, or 15750 lines per second. The audio was frequency modulated 4.5 MHz above the video signal. Because this was black and white, the video consisted only of luminance information. Although all of the space in between was occupied, the line-based nature of the video information meant that the luminance data was not spread uniformly across the frequency domain; plotting the video signal on a spectrogram gave a signature that looked like the teeth of a comb or a gear, rather than being smooth and uniform. RCA discovered that if the chrominance information, which had a similar spectrum, was modulated on a carrier that was a half-integer multiple of the line rate, its signal peaks would fit neatly between the peaks of the luminance data, and interference was minimized. It was not eliminated, but what remained was not apparent to human eyes. To provide sufficient bandwidth for the chrominance signal, yet interfere only with the highest-frequency portions of the luminance signal, a chrominance subcarrier near 3.6 MHz was desirable.
A carrier at 227.5 = 455/2 times the line rate was close to the right number, and 455's small factors (5 × 7 × 13) make a frequency divider easy to construct. However, additional interference could come from the audio signal. To minimize interference there, it was desirable to make the distance between the chrominance carrier frequency and the audio carrier frequency a half-integer multiple of the line rate as well; the sum of these two half-integers implies that the distance between the frequency of the luminance carrier and the audio carrier must be an integer multiple of the line rate. However, the original NTSC standard, with a 4.5 MHz carrier spacing and a 15750 Hz line rate, did not meet this requirement: the audio carrier was 285.714 times the line rate. While existing black-and-white receivers could not decode a signal with a different audio carrier frequency, they could use the copious timing information in the video signal to decode a slightly slower line rate. Thus, the new color television standard reduced the line rate by a factor of 1.001, to 1/286 of the 4.5 MHz audio subcarrier frequency, or about 15734.2657 Hz.
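The frequency relationships described above can be checked with exact rational arithmetic, so the identities hold with no rounding error (a quick sketch using Python's standard fractions module):

```python
from fractions import Fraction

audio = Fraction(4_500_000)                 # 4.5 MHz audio carrier spacing
line_rate = audio / 286                     # new color line rate
subcarrier = Fraction(455, 2) * line_rate   # 227.5 times the line rate
frame_rate = line_rate / 525                # 525 lines per frame

# The chrominance subcarrier comes out to exactly 315/88 MHz.
assert subcarrier == Fraction(315_000_000, 88)
# The frame rate comes out to exactly 30/1.001 Hz.
assert frame_rate == Fraction(30_000, 1001)

print(float(line_rate))   # ≈ 15734.2657 Hz
print(float(frame_rate))  # ≈ 29.9700 Hz
```

Dividing the line rate by 286 rather than retuning the audio carrier is what let existing black-and-white sets keep working: the 0.1% slower timing was within their tolerance, while the audio carrier stayed at its original frequency.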
This reduced the frame rate to 30/1.001 ≈ 29.9700 Hz and placed the color subcarrier at 227.5/286 = 455/572 = 35/44 of the 4.5 MHz audio subcarrier frequency. An NTSC or PAL television's color decoder contains a colorburst crystal oscillator. Because so many analog color TVs were produced from the 1960s to the early 2000s, economies of scale drove down the cost of colorburst crystals, so they came to be used in various other applications, such as oscillators for microprocessors or for amateur radio.
See also: Camera control unit; Color framing; Color killer; Sync pulse; Glossary of video terms.