IBM PC compatible
IBM PC compatible computers are computers similar to the original IBM PC, XT, and AT, able to use the same software and expansion cards. Such computers were referred to as PC clones, or IBM clones; they duplicated all the significant features of the PC architecture, facilitated by IBM's choice of commodity hardware components and by various manufacturers' ability to reverse engineer the BIOS firmware using a "clean room design" technique. Columbia Data Products built the first clone of the IBM personal computer with a clean-room implementation of its BIOS. Early IBM PC compatibles used the same computer bus as the PC and XT models; the IBM AT-compatible bus was later named the Industry Standard Architecture (ISA) bus by manufacturers of compatible computers. The term "IBM PC compatible" is now a historical description only, since IBM has ended its personal computer sales. Descendants of the IBM PC compatibles comprise the majority of personal computers on the market today, with Microsoft Windows the dominant operating system, although interoperability with the bus structure and peripherals of the original PC architecture may be limited or non-existent.
Some computers ran MS-DOS but had enough hardware differences that IBM-compatible software could not be used; only the Macintosh kept significant market share without compatibility with the IBM PC. IBM decided in 1980 to market a low-cost single-user computer as quickly as possible in response to Apple Computer's success in the burgeoning microcomputer market. On 12 August 1981, the first IBM PC went on sale. Three operating systems were available for it; the least expensive and most popular was PC DOS, made by Microsoft. In a crucial concession, IBM's agreement allowed Microsoft to sell its own version, MS-DOS, for non-IBM computers; the only component of the original PC architecture exclusive to IBM was the BIOS. IBM at first asked developers to avoid writing software that addressed the computer's hardware directly and instead to make standard calls to BIOS functions that carried out hardware-dependent operations; such software would run on any machine using MS-DOS or PC DOS. Software that directly addressed the hardware instead of making standard calls was faster, however.
Software addressing IBM PC hardware in this way would not run on MS-DOS machines with different hardware. The IBM PC was sold in high enough volumes to justify writing software specifically for it, and this encouraged other manufacturers to produce machines that could use the same programs, expansion cards and peripherals as the PC; the 808x computer marketplace soon excluded all machines that were not hardware- and software-compatible with the PC. The 640 KB barrier on "conventional" system memory available to MS-DOS is a legacy of that period. Rumors of "lookalike", compatible computers, created without IBM's approval, began almost immediately after the IBM PC's release. InfoWorld wrote on the first anniversary of the IBM PC: "The dark side of an open system is its imitators. If the specs are clear enough for you to design peripherals, they are clear enough for you to design imitations. Apple... has patents on two important components of its systems... IBM, which has no special patents on the PC, is more vulnerable. Numerous PC-compatible machines—the grapevine says 60 or more—have begun to appear in the marketplace."
By June 1983, PC Magazine defined "PC 'clone'" as "a computer [that can] accommodate the user who takes a disk home from an IBM PC, walks across the room, and plugs it into the 'foreign' machine". Because of a shortage of IBM PCs that year, many customers purchased clones instead. Columbia Data Products produced the first computer more or less compatible with the IBM PC standard in June 1982, soon followed by Eagle Computer. Compaq announced its first IBM PC compatible, the Compaq Portable; the Compaq was the first sewing-machine-sized portable computer that was 100% PC-compatible. The company could not copy the BIOS directly, as a result of the court decision in Apple v. Franklin, but it could reverse-engineer the IBM BIOS and write its own using clean room design. At the same time, many manufacturers such as Tandy/RadioShack, Hewlett-Packard, Digital Equipment Corporation, Texas Instruments, Tulip and Olivetti introduced personal computers that supported MS-DOS but were not software- or hardware-compatible with the IBM PC.
Tandy described the Tandy 2000, for example, as having a "'next generation' true 16-bit CPU", with "More speed. More disk storage. More expansion" than the IBM PC or "other MS-DOS computers". While admitting in 1984 that many MS-DOS programs did not support the computer, the company stated that "the most popular, sophisticated software on the market" was available either immediately or "over the next six months". Like IBM, Microsoft intended that application writers would write to the application programming interfaces in MS-DOS or the firmware BIOS, and that these would form what would now be termed a hardware abstraction layer; each computer would have its own Original Equipment Manufacturer (OEM) version of MS-DOS, customized to its hardware. Any software written for MS-DOS would then operate on any MS-DOS computer, despite variations in hardware design; this expectation seemed reasonable in the computer marketplace of the time. Until then, Microsoft was based on computer languages such as BASIC; the established small-system operating software was CP/M from Digital Research, in use both at the hobbyist level and by the more professional of t
RS-232
In telecommunications, RS-232 (Recommended Standard 232) refers to a standard introduced in 1960 for serial communication transmission of data. It formally defines the signals connecting between a DTE (data terminal equipment), such as a computer terminal, and a DCE (data circuit-terminating equipment), such as a modem; the standard defines the electrical characteristics and timing of signals, the meaning of signals, and the physical size and pinout of connectors. The current version of the standard is TIA-232-F Interface Between Data Terminal Equipment and Data Circuit-Terminating Equipment Employing Serial Binary Data Interchange, issued in 1997. The RS-232 standard had long been used in computer serial ports, and a serial port complying with the RS-232 standard was once a standard feature of many types of computers. Personal computers used them for connections not only to modems, but also to printers, computer mice, data storage, uninterruptible power supplies and other peripheral devices. Compared to interfaces such as RS-422, RS-485 and Ethernet, RS-232 has lower transmission speed, short maximum cable length, large voltage swing, large standard connectors, no multipoint capability and limited multidrop capability.
In modern personal computers, USB has displaced RS-232 from most of its peripheral interface roles. Many computers no longer come equipped with RS-232 ports and must use either an external USB-to-RS-232 converter or an internal expansion card with one or more serial ports to connect to RS-232 peripherals. Thanks to their simplicity and past ubiquity, however, RS-232 interfaces are still used, particularly in industrial machines, networking equipment and scientific instruments where a short-range, point-to-point, low-speed wired data connection is adequate. The Electronic Industries Association standard RS-232-C of 1969 defines: electrical signal characteristics such as voltage levels, signaling rate and slew rate of signals, voltage withstand level, short-circuit behavior and maximum load capacitance; interface mechanical characteristics, pluggable connectors and pin identification; functions of each circuit in the interface connector; and standard subsets of interface circuits for selected telecom applications.
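The bipolar signalling convention behind those electrical characteristics can be illustrated with a short sketch. The function name is invented here, and the thresholds are the commonly cited receiver limits rather than normative text from the standard:

```python
def rs232_data_level(voltage):
    """Classify a voltage seen on an RS-232 data line (illustrative sketch).

    Valid signals lie between 3 V and 15 V in magnitude at the receiver
    (which must withstand up to +/-25 V without damage); the band within
    +/-3 V of ground is the undefined transition region. Note the inverted
    logic on data lines: negative is a mark (logic 1, the idle state),
    positive is a space (logic 0).
    """
    if -15.0 <= voltage <= -3.0:
        return "mark"       # logic 1
    if 3.0 <= voltage <= 15.0:
        return "space"      # logic 0
    return "undefined"

# A reduced-voltage +/-5 V driver still lands inside the valid receiver
# window, which is why such transmitters could plausibly be labeled
# "RS-232 compatible":
print(rs232_data_level(-5.0), rs232_data_level(5.0))  # mark space
print(rs232_data_level(1.0))                          # undefined
```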
The standard does not define such elements as the character encoding, the framing of characters, the transmission order of bits, or error detection protocols. The character format and transmission bit rate are set by the serial port hardware, typically a UART, which may contain circuits to convert the internal logic levels to RS-232-compatible signal levels. The standard does not define bit rates for transmission, except to say that it is intended for bit rates lower than 20,000 bits per second. RS-232 was first introduced in 1960 by the Electronic Industries Association as a Recommended Standard. The original DTEs were electromechanical teletypewriters, and the original DCEs were modems. When electronic terminals began to be used, they were designed to be interchangeable with teletypewriters, and so supported RS-232. Because the standard did not foresee the requirements of devices such as computers, test instruments, POS terminals and so on, designers implementing an RS-232-compatible interface on their equipment often interpreted the standard idiosyncratically.
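Since character framing is chosen by the UART rather than by the standard, the on-wire cost of a framing choice is simple arithmetic. The following sketch (the function name is invented here) shows why classic "9600 8N1" moves 960 characters per second:

```python
def char_throughput(bit_rate, data_bits=8, parity_bits=0, stop_bits=1):
    """Characters per second for asynchronous serial framing.

    Each character on the wire costs 1 start bit, the data bits, an
    optional parity bit, and the stop bit(s); the framing itself is a
    property of the UART configuration, not of RS-232.
    """
    bits_per_char = 1 + data_bits + parity_bits + stop_bits
    return bit_rate / bits_per_char

# Classic "9600 8N1": 10 bits on the wire per 8-bit character.
print(char_throughput(9600))                              # 960.0
# "7E1" (7 data bits, even parity, 1 stop bit) costs the same:
print(char_throughput(9600, data_bits=7, parity_bits=1))  # 960.0
```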
The resulting common problems were non-standard pin assignment of circuits on connectors and incorrect or missing control signals. The lack of adherence to the standards produced a thriving industry of breakout boxes, patch boxes, test equipment and other aids for the connection of disparate equipment. A common deviation from the standard was to drive the signals at a reduced voltage; some manufacturers therefore built transmitters that supplied +5 V and −5 V and labeled them as "RS-232 compatible". Personal computers started to make use of the standard so that they could connect to existing equipment. For many years, an RS-232-compatible port was a standard feature for serial communications, such as modem connections, on many computers, and it remained in widespread use into the late 1990s. In personal computer peripherals, it has largely been supplanted by other interface standards, such as USB. RS-232 is still used to connect older designs of peripherals, industrial equipment, console ports and special-purpose equipment.
The standard has been renamed several times during its history as the sponsoring organization changed its name, and has been variously known as EIA RS-232, EIA 232 and, most recently, TIA 232. The standard continued to be revised and updated by the Electronic Industries Association and, since 1988, by the Telecommunications Industry Association. Revision C was issued in a document dated August 1969. Revision D was issued in 1986. The current revision is TIA-232-F Interface Between Data Terminal Equipment and Data Circuit-Terminating Equipment Employing Serial Binary Data Interchange, issued in 1997. Changes since Revision C have been in timing and details intended to improve harmonization with the CCITT standard V.24, but equipment built to the current standard will interoperate with older versions. Related ITU-T standards include V.24 and V.28. In revision D of EIA-232, the D-subminiature connector was formally included as part of the standard; the voltage range was extended to ±25 volts, and the circuit capacitance limit was expressly stated as 2500 pF.
Revision E of EIA-232 introduced a new, standard D-shell 26-pin "Alt A" connector, made other changes to improve compatibility w
Motorola 68000
The Motorola 68000 is a 16/32-bit CISC microprocessor designed and marketed by Motorola Semiconductor Products Sector. It implements a 32-bit instruction set, with 32-bit registers and a 32-bit internal data bus, but has a 16-bit data ALU, two 16-bit arithmetic ALUs and a 16-bit external data bus. Introduced in 1979 with HMOS technology as the first member of the successful 32-bit Motorola 68000 series, it is software forward-compatible with the rest of the line despite being limited to a 16-bit-wide external bus. After 39 years in production, the 68000 architecture is still in use. The 68000 grew out of the MACSS project, begun in 1976 to develop a new architecture without backward compatibility; it would be a higher-powered sibling complementing the existing 8-bit 6800 line rather than a compatible successor. In the end, the 68000 did retain a bus protocol compatibility mode for existing 6800 peripheral devices, and a version with an 8-bit data bus was produced. However, the designers focused on the future, or forward compatibility, which gave the 68000 design a head start against later 32-bit instruction set architectures.
For instance, the CPU registers are 32 bits wide, though few self-contained structures in the processor itself operate on 32 bits at a time. The MACSS team drew on the influence of minicomputer processor design, such as the PDP-11 and VAX systems, which were microcode-based. In the mid-1970s, the 8-bit microprocessor manufacturers raced to introduce the 16-bit generation. National Semiconductor had been first with its IMP-16 and PACE processors in 1973–1975, but these had issues with speed. Intel had worked on its advanced 16/32-bit Intel iAPX 432 since 1975 and its Intel 8086 since 1976. Arriving late to the 16-bit arena afforded the new processor more transistors, 32-bit macroinstructions and acclaimed general ease of use. The original MC68000 was fabricated using an HMOS process with a 3.5 µm feature size. Formally introduced in September 1979, initial samples were released in February 1980, with production chips available over the counter in November. Initial speed grades were 4, 6 and 8 MHz; 10 MHz chips became available during 1981, and 12.5 MHz chips by June 1982.
The 16.67 MHz "12F" version of the MC68000, the fastest version of the original HMOS chip, was not produced until the late 1980s. The 68k instruction set was well suited to implementing Unix, and the 68000 and its successors became the dominant CPUs for Unix-based workstations, including Sun workstations and Apollo/Domain workstations. The 68000 was also used for mass-market computers such as the Apple Lisa, Macintosh and Atari ST, in Microsoft Xenix systems, and in an early Unix-based NetWare server. The 68000 was used in the first generation of desktop laser printers, including the original Apple LaserWriter and the HP LaserJet. In 1982, the 68000 received an update to its instruction set architecture allowing it to support virtual memory and to conform to the Popek and Goldberg virtualization requirements; the updated chip was called the 68010. A further extended version, which exposed 31 bits of the address bus, was produced in small quantities as the 68012. To support lower-cost systems and control applications with smaller memory sizes, Motorola introduced the 8-bit-compatible MC68008 in 1982.
This was a 68000 with a smaller address bus. After 1982, Motorola devoted more attention to the 68020 and 88000 projects. Several other companies were second-source manufacturers of the HMOS 68000; these included Hitachi, who shrank the feature size to 2.7 µm for their 12.5 MHz version, as well as Rockwell, Thomson/SGS-Thomson and Toshiba. Toshiba was also a second-source maker of the CMOS 68HC000. Encrypted variants of the 68000, the Hitachi FD1089 and FD1094, store decryption keys for opcodes and opcode data in battery-backed memory; they were used in certain Sega arcade systems, including System 16, to prevent piracy and illegal bootleg games. The 68HC000, the first CMOS version of the 68000, was designed by Hitachi and jointly introduced in 1985. Motorola's version was called the MC68HC000, while Hitachi's was the HD68HC000. The 68HC000 was offered at speeds of 8–20 MHz. Except for using CMOS circuitry, it behaved identically to the HMOS MC68000, but the change to CMOS greatly reduced its power consumption: the original HMOS MC68000 consumed around 1.35 watts at an ambient temperature of 25 °C, regardless of clock speed, while the MC68HC000 consumed only 0.13 watts at 8 MHz and 0.38 watts at 20 MHz.
Apple selected the 68HC000 for use in the Macintosh Portable. Motorola replaced the MC68008 with the MC68HC001 in 1990. This chip resembled the 68HC000 in most respects, but its data bus could operate in either 16-bit or 8-bit mode, depending on the value of an input pin at reset. Thus, like the 68008, it could be used in systems with cheaper 8-bit memories. The later evolution of the 68000 focused on more modern embedded control applications and on-chip peripherals. The 68EC000 chip and SCM68000 core expanded the address bus to 32 bits, removed the M6800 peripheral bus, and excluded the MOVE from SR instruction from user mode programs. In 1996, Motorola updated the standalone core with static circuitry, drawing only 2 µW in l
Central processing unit
A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic and input/output operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, and more specifically to its processing unit and control unit, distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. The form and implementation of CPUs have changed over the course of their history, but their fundamental operation remains unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching and execution of instructions by directing the coordinated operations of the ALU, registers and other components.
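The cooperation of control unit, registers and ALU described above can be sketched with a toy accumulator machine. The four-instruction set and all names are invented for this illustration:

```python
# Hypothetical 4-instruction accumulator machine illustrating the
# fetch-decode-execute cycle. Each instruction is (opcode, operand).
LOAD, ADD, STORE, HALT = range(4)

def run(program, memory):
    """Play the control unit: fetch each instruction, decode its
    opcode, and direct the registers/ALU to execute it."""
    acc = 0   # accumulator register: holds ALU operands and results
    pc = 0    # program counter register
    while True:
        opcode, operand = program[pc]    # fetch
        pc += 1
        if opcode == LOAD:               # decode + execute
            acc = memory[operand]        # register load from memory
        elif opcode == ADD:
            acc = acc + memory[operand]  # ALU operation
        elif opcode == STORE:
            memory[operand] = acc        # write result back
        elif opcode == HALT:
            return memory

mem = [7, 35, 0]   # data memory: two operands and a result slot
prog = [(LOAD, 0), (ADD, 1), (STORE, 2), (HALT, 0)]
print(run(prog, mem))   # [7, 35, 42]
```

Because the program is just data in memory, changing the list of tuples changes the machine's behaviour: the stored-program idea discussed in the following paragraphs.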
Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit chip. An IC that contains a CPU may also contain memory, peripheral interfaces and other components of a computer. Some computers employ a multi-core processor, a single chip containing two or more CPUs called "cores". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". Since the term "CPU" is defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer had already been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner.
On June 30, 1945, before ENIAC was completed, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types; significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, namely the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities; this standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit.
The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design, using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both.
Most modern CPUs are von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications. In early computers, relays and vacuum tubes were commonly used as switching elements; the overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages they afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were common at this time, limited largely by the speed of the switching de
Sinclair QL
The Sinclair QL is a personal computer launched by Sinclair Research in 1984, as an upper-end counterpart to the Sinclair ZX Spectrum. The QL was aimed at the serious home user and at professional and executive users, from small to large businesses and higher educational establishments, but failed to achieve commercial success. Based on a Motorola 68008 processor clocked at 7.5 MHz, the QL included 128 KB of RAM, expandable to 640 KB and, in practice, to 896 KB. It could be connected to a TV for display. Two built-in Microdrive tape-loop cartridge drives provided mass storage, in place of the more expensive floppy disk drives found on similar systems of the era. Microdrives had been introduced for the Sinclair ZX Spectrum in July 1983, although the QL used a different logical tape format. Interfaces included an expansion slot, ROM cartridge socket, dual RS-232 ports, proprietary QLAN local area network ports, dual joystick ports and an external Microdrive bus. Two video modes were available: 256×256 pixels with 8 RGB colours and per-pixel flashing, or 512×256 pixels with four colours: black, red, green and white.
The supported colours could be stippled in 2×2 blocks to simulate up to 256 colours, an effect which did not reproduce reliably on a TV over an RF connection. Both screen modes used a 32 KB framebuffer in main memory. The hardware was capable of switching between two different areas of memory for the frame buffer, thus allowing double buffering; however, this would have used 64 KB of the standard machine's 128 KB of RAM, and there was no support for this feature in the QL's original firmware. The alternative and much improved operating system Minerva does provide full support for the second frame buffer. When connected to a normally adjusted TV or monitor, the QL's video output would overscan horizontally; this was reputed to have been due to the timing constants in the ZX8301 chip being optimised for the flat-screen CRT display originally intended for the QL. Internally, the QL comprised the CPU, two ULAs and an Intel 8049 microcontroller known as the IPC, or "Intelligent Peripheral Controller". The ZX8301, or "Master Chip", implemented the video display generator and provided DRAM refresh.
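The 32 KB figure follows directly from the two screen modes. A small sketch (the helper name is invented here) checks the arithmetic, treating the low-resolution mode's per-pixel flash attribute as a fourth bit per pixel:

```python
def framebuffer_bytes(width, height, states_per_pixel):
    """Bytes needed for a packed-pixel frame buffer."""
    bits_per_pixel = (states_per_pixel - 1).bit_length()  # ceil(log2)
    return width * height * bits_per_pixel // 8

# High-resolution mode: 512x256 pixels, 4 colours -> 2 bits/pixel.
print(framebuffer_bytes(512, 256, 4) // 1024)    # 32 (KB)

# Low-resolution mode: 256x256 pixels, 8 colours plus a flash bit
# -> 16 states -> 4 bits/pixel, filling the same 32 KB buffer.
print(framebuffer_bytes(256, 256, 16) // 1024)   # 32 (KB)
```

Double buffering would need two such regions, which is why it would have consumed 64 KB of the base machine's 128 KB.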
The ZX8302, or "Peripheral Chip", interfaced to the RS-232 ports, Microdrives, QLAN ports, real-time clock and the 8049 via a synchronous serial link. The 8049, included at a late stage in the QL's design (the ZX8302 having originally been intended to perform its functions), ran at 11 MHz and acted as a keyboard/joystick interface, RS-232 receive buffer and audio generator. The first pre-emptive multitasking operating system for a microcomputer, QDOS, designed by Tony Tebby, was included on ROM, as was an advanced structured BASIC interpreter named SuperBASIC, designed by Jan Jones and used as the command-line interpreter. The QL was bundled with an office suite, consisting of a word processor, spreadsheet and business graphics, written by Psion. Physically, the QL was the same black colour as the preceding ZX81 and Sinclair ZX Spectrum models, but introduced a new angular styling theme and keyboard design which would later be seen in the ZX Spectrum+. The QL used British Telecom type 631W plugs, of similar design to British telephone plugs, for serial cables, except for QLs built by Samsung for export markets, which had DE-9 sockets.
Joysticks connected to the QL with similar type 630W plugs. The QL was conceived in 1981 under the code-name ZX83, as a portable computer for business users, with a built-in ultra-thin flat-screen CRT display similar to the TV80 pocket TV, a printer and a modem. As development progressed, it became clear that the portability features were over-ambitious, and the specification was reduced to a conventional desktop configuration. The electronics were designed by David Karlin, who joined Sinclair Research in summer 1982; the industrial design was by Rick Dickinson, who had designed the ZX81 and ZX Spectrum range of products. Sinclair had commissioned GST Computer Systems to produce the operating system for the machine, but switched to Domesdos, developed by Tony Tebby as an in-house alternative, before launch. GST's OS, designed by Tim Ward, was later made available as 68K/OS, in the form of an add-on ROM card; the tools developed by GST for the QL would later be used on the Atari ST, where the GST object format became standard.
The QL was designed to be more powerful than the IBM Personal Computer and comparable to Apple's Macintosh. The QL was the first mass-market personal computer based on the Motorola 68000-series processor family. Rushed into production, the QL beat the Apple Macintosh to market by a month, the Atari ST by a year and the Commodore Amiga by a year and two months. While clock speeds were comparable, the 8-bit data bus and the cycle stealing of the ZX8301 gate array limited the QL's performance. However, at the time of its launch, on January 12, 1984, the QL was far from ready for production, there being no complete working prototype in existence. Although Sinclair started taking orders promising delivery within 28 days, first customer deliveries only started in April; this provoked much criticism of the company and attracted the attention of the Advertising Standards Authority. Due to its premature launch, the QL was plagued by a number of problems from the start. Early production QLs were shipped with preliminary versions of firmware containing numerous bugs in SuperBASIC.
Part of the firmware was held on an external 16 KB ROM cartridge known as the "kludge" or "dongle", until the QL was redesigned to accommodate the necessary 48 KB of ROM internally, instead of the 32 KB originally specified. The QL also suffered from reliability problems with its Microdrives; these problems were later
Field-programmable gate array
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence the term "field-programmable". The FPGA configuration is generally specified using a hardware description language, similar to that used for an application-specific integrated circuit (ASIC). Circuit diagrams were previously used to specify the configuration, but this is now rare due to the advent of electronic design automation tools. FPGAs contain an array of programmable logic blocks and a hierarchy of "reconfigurable interconnects" that allow the blocks to be "wired together", somewhat like many logic gates that can be inter-wired in different configurations. Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory. Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software.
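A logic block's configurability can be illustrated by modelling a lookup table (LUT) in miniature. In this sketch (all names invented here), the "configuration bitstream" is simply the list of output bits, one per input combination, so the same block implements AND or XOR depending only on its configuration:

```python
def make_lut2(config_bits):
    """Return a 2-input logic function defined by 4 configuration bits.

    A real FPGA logic block works the same way at larger scale: the
    inputs form an index into a small configuration memory, and the
    stored bit is the output. Reprogramming the device means rewriting
    these bits.
    """
    def lut(a, b):
        return config_bits[(a << 1) | b]   # inputs index the table
    return lut

AND = make_lut2([0, 0, 0, 1])   # output 1 only for inputs (1, 1)
XOR = make_lut2([0, 1, 1, 0])   # output 1 when inputs differ

print(AND(1, 1), AND(1, 0))     # 1 0
print(XOR(1, 1), XOR(0, 1))     # 0 1
```

Any 2-input Boolean function is one of the 16 possible 4-bit tables, which is why a LUT-based block can be configured to "perform complex combinational functions, or simple logic gates".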
Contemporary field-programmable gate arrays have large resources of logic gates and RAM blocks to implement complex digital computations. As FPGA designs employ very fast I/O rates and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time. Floor planning enables resource allocation within FPGAs to meet these time constraints. FPGAs can be used to implement any logical function. The ability to update the functionality after shipping, partial re-configuration of a portion of the design and the low non-recurring engineering costs relative to an ASIC design offer advantages for many applications. Some FPGAs have analog features in addition to digital functions. The most common analog feature is a programmable slew rate on each output pin, allowing the engineer to set low rates on lightly loaded pins that would otherwise ring or couple unacceptably, and to set higher rates on heavily loaded pins on high-speed channels that would otherwise run too slowly. Also common are quartz-crystal oscillators, on-chip resistance-capacitance oscillators and phase-locked loops with embedded voltage-controlled oscillators, used for clock generation and management as well as for high-speed serializer-deserializer transmit clocks and receiver clock recovery.
Also common are differential comparators on input pins designed to be connected to differential signaling channels. A few "mixed signal FPGAs" have integrated peripheral analog-to-digital converters and digital-to-analog converters with analog signal conditioning blocks, allowing them to operate as a system-on-a-chip. Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and a field-programmable analog array, which carries analog values on its internal programmable interconnect fabric. The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in the field; however, programmable logic was hard-wired between logic gates. Altera was founded in 1983 and delivered the industry's first reprogrammable logic device in 1984 – the EP300 – which featured a quartz window in the package that allowed users to shine an ultra-violet lamp on the die to erase the EPROM cells that held the device configuration.
In December 2015, Intel acquired Altera. Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field-programmable gate array in 1985 – the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market. It had 64 configurable logic blocks, each with two three-input lookup tables. More than 20 years later, Freeman was entered into the National Inventors Hall of Fame for his invention. In 1987, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful, and a patent related to the system was issued in 1992. Altera and Xilinx continued unchallenged and grew rapidly from 1985 to the mid-1990s, when competitors sprouted up, eroding significant market share. By 1993, Actel was serving about 18 percent of the market. By 2013, Altera and Xilinx together represented 77 percent of the FPGA market.
The 1990s were a period of rapid growth for FPGAs, both in circuit sophistication and the volume of production. In the early 1990s, FPGAs were used in telecommunications and networking. By the end of the decade, FPGAs found their way into consumer and industrial applications. A recent trend has been to take the coarse-grained architectural approach a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete "system on a programmable chip"; this work mirrors the architecture created by Ron Perlof and Hana Potash of Burroughs Advanced Systems Group in 1982 which combined a reconfigurable CPU architecture on a single chip called the SB24. Examples of such hybrid technologies can be found in the Xilinx Zynq-7000 All Programmable SoC, which includes a 1.0 GHz dual-core ARM Cortex-A9 MPCore processor embedded within the FPGA's logic fabric or in the Altera Arria V FPGA, which includes an 800 MHz dual-core ARM Cortex-A9 MPCore.
The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture.
This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later. The Motorola 68040 is a 32-bit microprocessor from Motorola, released in 1990. It is the successor to the 68030 and is followed by the 68060; there was no 68050. In keeping with general Motorola naming, the 68040 is referred to as the '040. In Apple Macintosh computers, the 68040 was introduced in the Macintosh Quadra, named for the chip; the fastest 68040 was clocked at 40 MHz and was used only in the Quadra 840AV. The more expensive models in the Macintosh Centris line used the 68040, while the cheaper Quadra and Macintosh Performa models used the 68LC040. The 68040 was also used in other personal computers, such as the Amiga 4000 and Amiga 4000T, as well as a number of workstations: Alpha Microsystems servers, the HP 9000/400 series, and versions of the NeXT computer. The 68040 was the first 680x0 family member with an on-chip floating-point unit (FPU).
It thus included all of the functionality that had previously required external chips, namely the FPU and the memory management unit (MMU), the latter of which had been added in the 68030. It had split instruction and data caches of 4 kilobytes each and was pipelined, with six stages. The 68040 ran into the transistor budget limit early in design. While the MMU did not take many transistors (indeed, having it on the same die as the CPU saved transistors), the FPU did. Motorola's external 68882 FPU was known as a high-performance unit, and Motorola did not wish to risk integrators pairing the "LC" version with a 68882 instead of buying the more profitable full "RC" unit. The FPU in the 68040 was thus made incapable of the IEEE transcendental functions, which were supported by both the 68881 and 68882 but were used by the popular fractal-generating software of the time and little else. The Motorola floating-point support package emulated these instructions in software under interrupt; as this was an exception handler, heavy use of the transcendental functions caused severe performance penalties.
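The trap-and-emulate pattern described above can be sketched abstractly: instructions the on-chip FPU implements execute directly, while an unimplemented transcendental instruction raises an exception whose handler computes the result in software. The Python sketch below only illustrates the pattern; the opcode names and dispatch tables are hypothetical, not Motorola's actual support package.

```python
import math

# Illustrative sketch of trap-and-emulate: when the 68040's FPU hit a
# transcendental instruction it did not implement, it raised an
# unimplemented-instruction exception and a software handler computed
# the result. Opcode names and dispatch here are hypothetical.

class UnimplementedInstruction(Exception):
    def __init__(self, op, operands):
        self.op, self.operands = op, operands

# Fast path: operations the on-chip FPU handles directly.
HARDWARE_OPS = {"fadd": lambda x, y: x + y, "fmul": lambda x, y: x * y}

# Handler table for instructions the on-chip FPU lacks. Each trap costs
# an exception entry/exit, which is why heavy transcendental use was slow.
EMULATED_OPS = {"fsin": math.sin, "fcos": math.cos, "fetox": math.exp}

def execute(op, *args):
    if op in HARDWARE_OPS:
        return HARDWARE_OPS[op](*args)        # executed "in hardware"
    raise UnimplementedInstruction(op, args)  # trap to software

def run(op, *args):
    try:
        return execute(op, *args)
    except UnimplementedInstruction as trap:  # exception handler emulates
        return EMULATED_OPS[trap.op](*trap.operands)
```

Here `run("fmul", 3.0, 2.0)` completes on the fast path, while `run("fsin", 0.0)` takes the exception path; on real hardware that detour through the handler is what made transcendental-heavy code so much slower.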
Heat was always a problem throughout the 68040's life. While it delivered over four times the per-clock performance of the 68020 and 68030, the chip's complexity and large caches required a large die and considerable power; this limited the scaling of the processor, which was never able to run at clock rates above 40 MHz. A 50 MHz variant was canceled. Overclocking enthusiasts reported success reaching 50 MHz by using a 100 MHz oscillator instead of an 80 MHz part and the then-novel technique of adding oversized heat sinks with fans. The 68040 offered the same features as the Intel 80486 and, on a clock-for-clock basis, could outperform the Intel chip in integer and floating-point instructions. However, the 80486 could be clocked faster without suffering from overheating problems. In late 1991, as the higher-end Macintosh desktop lineup transitioned to the '040, Apple was unable to offer the newer processor in its top-of-the-line PowerBooks until early 1994. With PowerBooks restricted to 68030s for several years, Macworld reviewers conceded that the best choice for power users was a PC-compatible Texas Instruments 80486 notebook rather than the top-of-the-line PowerBook 180.
Versions of the 68040 were created for specific market segments, including the 68LC040, which removed the FPU, and the 68EC040, which removed both the FPU and the MMU. Motorola had intended the EC variant for embedded use, but embedded designs of the 68040's era did not need its power, so EC variants of the 68020 and 68030 continued to be common in designs. Motorola produced several speed grades: the 16 MHz and 20 MHz parts were never qualified and were used as prototyping samples, and the 25 MHz and 33 MHz grades featured across the whole line, but until around 2000 the 40 MHz grade was available only for the "full" 68040. A planned 50 MHz grade was canceled. For more information on the instructions and architecture, see Motorola 68000. The 68EC040 is a version of the Motorola 68040 microprocessor intended for embedded controllers. It differs from the 68040 in that it has neither an FPU nor an MMU; this makes it less expensive, and it draws less power. The 68EC040 was used in the Cisco switch Supervisor Engine I, the heart of models 2900, 2948G, 2980G, 4000, 4500, 5000, 5500, 6000, 6500 and 7600.
The 68LC040 is a low-cost version of the Motorola 68040 microprocessor with no FPU; this makes it less expensive, and it draws less power. Although the CPU now fits a feature chart more like the Motorola 68020's, it retains the 68040's caches and pipeline and is thus faster than the 68020. Some mask revisions of the 68LC040 contained a bug that prevents the chip from operating correctly when a software FPU emulator is used; the fault relates to pending writes being lost. According to Motorola's errata, any chip with mask set 2E71M or later does not contain the bug; this new mask converted the 68LC040 chip to MC status. The buggy revisions are found in 68LC040-based Apple Macintosh computers. The 68040 cannot update its microcode in the manner of modern x86 chips, so the only way to use software that requires floating-point functionality on an affected machine is to replace the buggy 68LC040 with a fixed revision or with a full 68040.