Static random-access memory
Static random-access memory (SRAM) is a type of semiconductor memory that uses bistable latching circuitry to store each bit. SRAM exhibits data remanence, but it is still volatile in the conventional sense that data is eventually lost when the memory is not powered; the term static differentiates SRAM from DRAM, which must be periodically refreshed. SRAM is faster and more expensive than DRAM.

Advantages:
- Simplicity – a refresh circuit is not needed
- Performance
- Reliability
- Low idle power consumption

Disadvantages:
- Price
- Density
- High operational power consumption

The power consumption of SRAM varies depending on how it is accessed. When accessed at high frequencies, SRAM can consume as much power as DRAM; on the other hand, static RAM used at a somewhat slower pace, such as in applications with moderately clocked microprocessors, draws little power and can have a nearly negligible power consumption when sitting idle – in the region of a few microwatts. Several techniques have been proposed to manage the power consumption of SRAM-based memory structures.

SRAM appears in many forms:
- as general-purpose products with an asynchronous interface, such as the ubiquitous 28-pin 8K × 8 and 32K × 8 chips, as well as similar products, up to 16 Mbit per chip, with a synchronous interface, used for caches and other applications requiring burst transfers, up to 18 Mbit per chip;
- integrated on chip, as RAM or cache memory in microcontrollers, as the primary caches in powerful microprocessors such as the x86 family and many others, to store the registers and parts of the state machines used in some microprocessors, on application-specific ICs (ASICs), and in field-programmable gate arrays (FPGAs) and complex programmable logic devices (CPLDs).

Many categories of industrial and scientific subsystems, automotive electronics, and similar devices contain static RAM.
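The bistable storage element at the heart of each SRAM cell is, conceptually, a pair of cross-coupled inverters: each inverter's output drives the other's input, so the feedback loop holds either of two stable states indefinitely while powered. A minimal behavioral sketch (the function and variable names are ours, purely for illustration):

```python
def settle_latch(q, q_bar, steps=8):
    """Behavioral sketch of a bistable latch built from two cross-coupled
    inverters. Iterating the feedback loop shows that a valid state
    (q != q_bar) is self-reinforcing, which is what lets an SRAM cell
    hold a bit without refresh."""
    invert = lambda x: 0 if x else 1
    for _ in range(steps):
        # Each inverter's input is the other inverter's output.
        q, q_bar = invert(q_bar), invert(q)
    return q, q_bar
```

Starting from either valid state, e.g. `settle_latch(1, 0)` or `settle_latch(0, 1)`, the loop returns the same state it started in: the latch holds its bit.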
Some amount of SRAM is embedded in all modern appliances and other devices that implement an electronic user interface. Several megabytes may be used in complex products such as digital cameras, cell phones, etc. SRAM in its dual-ported form is sometimes used for real-time digital signal processing circuits. SRAM is also used in personal computers, workstations and peripheral equipment: CPU register files, internal CPU caches and external burst-mode SRAM caches, hard disk buffers, router buffers, etc. LCD screens and printers normally employ static RAM to hold the image displayed. Static RAM was used for the main memory of some early personal computers such as the ZX80, TRS-80 Model 100 and Commodore VIC-20. Hobbyists, specifically home-built processor enthusiasts, prefer SRAM due to the ease of interfacing: it is much easier to work with than DRAM, as there are no refresh cycles and the address and data buses are directly accessible rather than multiplexed. In addition to buses and power connections, SRAM requires only three control lines: Chip Enable (CE), Write Enable (WE) and Output Enable (OE).
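As a rough illustration of that three-signal interface, here is a behavioral sketch of an asynchronous SRAM chip (the class name, the 8K × 8 size and the active-low convention are our assumptions for the example; real parts also have setup/hold timing constraints that this model ignores):

```python
class AsyncSRAM:
    """Behavioral model of a hypothetical 8K x 8 asynchronous SRAM.

    Control lines are modeled active-low, as on typical parts:
    n_ce = Chip Enable, n_we = Write Enable, n_oe = Output Enable.
    """

    def __init__(self, size=8 * 1024):
        self.mem = bytearray(size)

    def cycle(self, addr, data_in=None, n_ce=1, n_we=1, n_oe=1):
        if n_ce:                      # chip deselected: data bus floats
            return None
        if not n_we:                  # write cycle: latch data into the array
            self.mem[addr] = data_in & 0xFF
            return None
        if not n_oe:                  # read cycle: drive the stored byte out
            return self.mem[addr]
        return None                   # selected, but neither writing nor reading
```

Usage: `ram.cycle(0x123, data_in=0xAB, n_ce=0, n_we=0)` performs a write, and `ram.cycle(0x123, n_ce=0, n_oe=0)` reads the byte back. Note that no refresh step ever appears, which is exactly the simplicity advantage over DRAM.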
In synchronous SRAM, a clock input is also included. Non-volatile SRAMs, or nvSRAMs, have standard SRAM functionality, but they save the data when the power supply is lost, ensuring preservation of critical information. NvSRAMs are used in a wide range of situations – networking and medical applications, among many others – where the preservation of data is critical and where batteries are impractical. Pseudostatic RAMs (PSRAMs) have a DRAM storage core combined with a self-refresh circuit, so they appear externally as a (slower) SRAM. They have a density/cost advantage over true SRAM, without the access complexity of DRAM.

By transistor type:
- Bipolar junction transistor – fast, but consumes a lot of power
- MOSFET – low power, and common today

By function:
- Asynchronous – independent of the clock frequency
- Synchronous – all timings are initiated by the clock edges; address, data-in and other control signals are associated with the clock signals

In the 1990s, asynchronous SRAM was employed for its fast access time. Asynchronous SRAM was used as main memory for small cache-less embedded processors in everything from industrial electronics and measurement systems to hard disks and networking equipment, among many other applications.
Nowadays, synchronous SRAM is employed instead, much as synchronous DRAM (DDR SDRAM) is used in preference to asynchronous DRAM. A synchronous memory interface is much faster, as access time can be reduced by employing a pipelined architecture. Furthermore, as DRAM is much cheaper than SRAM, DRAM replaces SRAM whenever a large volume of data is required. SRAM is, however, much faster for random access; it is therefore used for CPU caches, small on-chip memories, FIFOs and other small buffers.

By feature:
- Zero bus turnaround (ZBT) – the turnaround is the number of clock cycles it takes to change access to the SRAM from write to read and vice versa; the turnaround for ZBT SRAMs, or the latency between read and write cycles, is zero
- SyncBurst – features synchronous burst write access to the SRAM to speed up write operations
- DDR SRAM – synchronous, single read/write port, double data rate I/O
- Quad Data Rate SRAM – synchronous, separate read and write ports, quadruple data rate I/O

By flip-flop type:
- Binary SRAM
- Ternary SRAM

A typical SRAM cell is made up of six MOSFETs.
The Intel 80387SX is the math coprocessor for the Intel 80386SX microprocessor. It was used to perform floating-point arithmetic operations directly in hardware. The coprocessor was designed to work only with the SX variant of the i386, rather than with the standard 80386, because of the 80386SX's 16-bit data bus, narrowed from the original i386's 32-bit data bus.
The Intel 8080 was the second 8-bit microprocessor designed and manufactured by Intel, released in April 1974. It is an extended and enhanced variant of the earlier 8008 design, although without binary compatibility. The initial specified clock frequency limit was 2 MHz; with common instructions taking 4, 5, 7, 10, or 11 cycles, this meant that it operated at a typical speed of a few hundred thousand instructions per second. A faster variant, the 8080A-1, became available with a clock frequency limit of up to 3.125 MHz. The 8080 requires two support chips to function in most applications: the i8224 clock generator/driver and the i8228 bus controller. It is implemented in NMOS using non-saturated enhancement-mode transistors as loads, therefore demanding a +12 V and a −5 V supply in addition to the main TTL-compatible +5 V. Although earlier microprocessors were used for calculators, cash registers, computer terminals, industrial robots and other applications, the 8080 became one of the first widespread microprocessors.
Several factors contributed to its popularity: its 40-pin package made it easier to interface than the 18-pin 8008 and made its data bus more efficient; it became the engine of the Altair 8800 and subsequent S-100 bus personal computers, until it was replaced by the Z80 in this role; and it was the original target CPU for the CP/M operating system developed by Gary Kildall. The 8080 was successful enough that compatibility at the assembly-language level became a design requirement for the 8086 when its design was started in 1976. This means that the 8080 directly influenced the ubiquitous 32-bit and 64-bit x86 architectures of today. The Intel 8080 is the successor to the 8008. It uses the same basic instruction set and register model as the 8008, though it is neither source-code compatible nor binary-compatible with its predecessor; every instruction in the 8008, however, has an equivalent instruction in the 8080. The 8080 also adds a few 16-bit operations to its instruction set. Whereas the 8008 required the use of the HL register pair to indirectly access its 14-bit memory space, the 8080 added addressing modes to allow direct access to its full 16-bit memory space.
In addition, the internal 7-level push-down call stack of the 8008 was replaced by a dedicated 16-bit stack-pointer register. The 8080's large 40-pin DIP package permits it to provide a 16-bit address bus and an 8-bit data bus, allowing easy access to 64 KB of memory. The processor has seven 8-bit registers (A, B, C, D, E, H and L), where A is the primary 8-bit accumulator; the other six registers can be used as either individual 8-bit registers or as three 16-bit register pairs (BC, DE and HL), depending on the particular instruction. Some instructions enable the HL register pair to be used as a 16-bit accumulator, and a pseudo-register M can be used anywhere that any other register can be used, referring to the memory address pointed to by the HL pair. The processor also has a 16-bit stack pointer to memory and a 16-bit program counter. It maintains internal flag bits which indicate the results of arithmetic and logical instructions. The flags are:
- Sign, set if the result is negative
- Zero, set if the result is zero
- Parity, set if the number of 1 bits in the result is even
- Carry, set if the last addition operation resulted in a carry or if the last subtraction operation required a borrow
- Auxiliary carry, used for binary-coded decimal arithmetic
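The flag behavior described above can be sketched for an 8-bit ADD (a behavioral illustration, not a cycle-accurate emulator; the function name is ours):

```python
def add8_flags(a, b):
    """Compute the result and 8080-style flags for an 8-bit ADD a, b."""
    r = a + b
    result = r & 0xFF
    flags = {
        "S": (result >> 7) & 1,                        # Sign: bit 7 of result
        "Z": int(result == 0),                         # Zero
        "P": int(bin(result).count("1") % 2 == 0),     # Parity: even count of 1 bits
        "CY": int(r > 0xFF),                           # Carry out of bit 7
        "AC": int((a & 0x0F) + (b & 0x0F) > 0x0F),     # Auxiliary carry out of bit 3
    }
    return result, flags
```

For example, adding 0Fh and 01h produces 10h with the auxiliary carry set (the low nibble overflowed), which is exactly the condition BCD-adjust logic relies on; adding 80h and 80h produces 00h with both Zero and Carry set.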
The carry bit can be set or complemented by specific instructions. Conditional-branch instructions test the various flag status bits, and the flags can be copied as a group to the accumulator. The A accumulator and the flags together are called the program status word (PSW). As with many other 8-bit processors, all instructions are encoded in a single byte (including register numbers, but excluding immediate data), for simplicity. Some of them are followed by one or two bytes of data, which can be an immediate operand, a memory address, or a port number. Like larger processors, it has automatic CALL and RET instructions for multi-level procedure calls and returns, as well as instructions to save and restore any 16-bit register pair on the machine stack. There are also eight one-byte call instructions (RST) for subroutines located at the fixed addresses 00h, 08h, 10h, ..., 38h. These were intended to be supplied by external hardware in order to invoke a corresponding interrupt service routine, but were also often employed as fast system calls. The most sophisticated command is XTHL, which exchanges the register pair HL with the value stored at the address indicated by the stack pointer.
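The fixed RST targets fall directly out of the opcode encoding: the eight one-byte calls are encoded 11NNN111, and the target address is NNN × 8. A small sketch (the helper name is ours):

```python
def rst_target(opcode):
    """Decode an 8080 RST opcode (pattern 11NNN111, i.e. C7h, CFh, ..., FFh)
    and return the fixed call target NNN * 8 (00h, 08h, ..., 38h)."""
    assert opcode & 0b11000111 == 0b11000111, "not an RST opcode"
    return ((opcode >> 3) & 0b111) * 8
```

So RST 0 (opcode C7h) calls address 00h, and RST 7 (opcode FFh) calls 38h, which is why external interrupt hardware could jam an RST opcode onto the bus to vector directly to a service routine.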
Most 8-bit operations can only be performed on the 8-bit accumulator. For 8-bit operations with two operands, the other operand can be either an immediate value, another 8-bit register, or a memory byte addressed by the 16-bit register pair HL. Direct copying is supported between any two 8-bit registers and between any 8-bit register and an HL-addressed memory byte. Due to the regular encoding of the MOV instruction, there are redundant codes to copy a register into itself.
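That regular MOV encoding is 01 DDD SSS, where DDD and SSS are three-bit register codes. A sketch of the encoding (the helper name and table layout are ours):

```python
# 8080 register codes used in the MOV dst,src encoding 01 DDD SSS.
REGS = {"B": 0, "C": 1, "D": 2, "E": 3, "H": 4, "L": 5, "M": 6, "A": 7}

def mov_opcode(dst, src):
    """Return the one-byte opcode for MOV dst,src.

    Codes like MOV B,B (40h) are the redundant register-to-itself copies
    the text refers to. The one slot that would encode MOV M,M (76h) is
    instead assigned to the HLT instruction.
    """
    return 0x40 | (REGS[dst] << 3) | REGS[src]
```

For example, MOV A,B encodes as 78h, and the self-copy MOV B,B as 40h; M (code 6) addresses memory through the HL pair.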
Intel's i486SX was a modified Intel 486DX microprocessor with its floating-point unit (FPU) disabled. It was intended as a lower-cost CPU for use in low-end systems. Computer manufacturers that used these processors include Packard Bell, Compaq, ZEOS and IBM. In the early 1990s, common applications did not need or benefit from an FPU. Among the rare exceptions were CAD applications, which could simulate floating-point operations in software but benefited immensely from a hardware floating-point unit. Meanwhile, AMD had begun manufacturing its 386DX clone, which was faster than Intel's part. To respond to this new situation, Intel wanted to provide a lower-cost i486 CPU for system integrators, but without sacrificing the better profit margins of a "full" i486. This was accomplished through a debug feature called Disable Floating Point: grounding a certain bond wire in the CPU package disabled the FPU. The i486SX was introduced in mid-1991, 18 months after the i486DX. Some versions of the i486SX had the FPU removed entirely for cost-cutting reasons, and some systems allowed the user to upgrade the i486SX to a CPU with the FPU enabled.
The upgrade was shipped as the i487, a full-blown i486DX chip with one extra pin. The extra pin prevents the chip from being installed incorrectly. The NC# pin, one of the standard 168 pins, was used to shut off the i486SX. Although the i486SX device was not used at all once the i487 was installed, it was often hard to remove, because the i486SX was installed in a non-ZIF socket or came in a plastic package surface-mounted on the motherboard. This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.
Central processing unit
A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic and input/output operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit, distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. The form and implementation of CPUs have changed over the course of their history, but their fundamental operation remains unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching and execution of instructions by directing the coordinated operations of the ALU, registers and other components.
Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit chip. An IC that contains a CPU may also contain memory, peripheral interfaces and other components of a computer. Some computers employ a multi-core processor, a single chip containing two or more CPUs called "cores". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". Since the term "CPU" is defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer had been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner.
On June 30, 1945, before ENIAC was completed, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, namely the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit.
The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, used a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both.
Most modern CPUs are von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications. Relays and vacuum tubes were commonly used as switching elements; the overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages they afforded outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were common at this time, limited largely by the speed of the switching devices they were built with.
The Intel 80486, also known as the i486 or 486, is a higher-performance follow-up to the Intel 80386 microprocessor. Introduced in 1989, the 80486 was the first pipelined x86 design as well as the first x86 chip to use more than a million transistors, due to a large on-chip cache and an integrated floating-point unit. It represents a fourth generation of binary-compatible CPUs since the original 8086 of 1978. A 50 MHz 80486 executes around 40 million instructions per second on average and is able to reach 50 MIPS peak performance. The 80486 was announced at Spring Comdex in April 1989. At the announcement, Intel stated that samples would be available in the third quarter of 1989 and that production quantities would ship in the fourth quarter of 1989. The first 80486-based PCs were announced in late 1989, but some advised that people wait until 1990 to purchase an 80486 PC because there were early reports of bugs and software incompatibilities. The instruction set of the i486 is very similar to that of its predecessor, the Intel 80386, with the addition of only a few extra instructions, such as CMPXCHG, which implements a compare-and-swap atomic operation, and XADD, a fetch-and-add atomic operation that returns the original value.
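The semantics of these two atomic primitives can be sketched in Python (purely illustrative of what the instructions compute; the real CMPXCHG compares against the accumulator and reports the outcome in the ZF flag, details this sketch omits):

```python
def cmpxchg(mem, addr, expected, new):
    """Compare-and-swap sketch: store `new` at mem[addr] only if the
    current value equals `expected`. Returns (succeeded, old_value)."""
    old = mem[addr]
    if old == expected:
        mem[addr] = new
        return True, old
    return False, old

def xadd(mem, addr, delta):
    """Fetch-and-add sketch: add `delta` to mem[addr] and return the
    value that was there before the addition (32-bit wraparound)."""
    old = mem[addr]
    mem[addr] = (old + delta) & 0xFFFFFFFF
    return old
```

Both operations read, test or modify, and write back in one indivisible step, which is what makes them usable as building blocks for locks and atomic counters in multiprocessor code.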
From a performance point of view, the architecture of the i486 is a vast improvement over the 80386. It has an on-chip unified instruction and data cache, an on-chip floating-point unit and an enhanced bus interface unit. Due to the tight pipelining, sequences of simple instructions can sustain a single-clock-cycle throughput. These improvements yielded a rough doubling in integer ALU performance over the 386 at the same clock rate: a 16 MHz 80486 therefore had performance similar to a 33 MHz 386, and the older design had to reach 50 MHz to be comparable with a 25 MHz 80486 part. The improvements include:
- An 8 KB on-chip SRAM cache that stores the most commonly used instructions and data (the 386 supported only a slower off-chip cache).
- An enhanced external bus protocol to enable cache coherency, and a new burst mode for memory accesses that can fill a 16-byte cache line within five bus cycles (the 386 needed eight bus cycles to transfer the same amount of data).
- Tightly coupled pipelining that completes a simple instruction like ALU reg,reg or ALU reg,im every clock cycle.
The 386 needed two clock cycles to do this. Other enhancements include an integrated FPU with a dedicated local bus, improved MMU performance, and new instructions: XADD, BSWAP, CMPXCHG, INVD, WBINVD and INVLPG.

Just as in the 80386, a simple flat 4 GB memory model could be implemented by setting all "segment selector" registers to a neutral value in protected mode, or by setting the "segment registers" to zero in real mode, using only the 32-bit "offset registers" as a linear 32-bit virtual address and bypassing the segmentation logic. Virtual addresses were then normally mapped onto physical addresses by the paging system, except when it was disabled. Just as with the 80386, circumventing memory segmentation could improve performance in some operating systems and applications.

On a typical PC motherboard, either four matched 30-pin SIMMs or one 72-pin SIMM per bank were required to fit the 80486's 32-bit data bus. The address bus used 30 bits, complemented by four byte-select pins to allow for any 8/16/32-bit selection. There are several suffixes and variants.
Other variants include:
- Intel RapidCAD: a specially packaged Intel 486DX and a dummy floating-point unit, designed as pin-compatible replacements for an Intel 80386 processor and 80387 FPU.
- i486SL-NM: an i486SL based on the i486SX.
- i487SX: an i486DX with one extra pin, sold as an FPU upgrade for i486SX systems.
- i486 OverDrive: an i486SX, i486SX2, i486DX2 or i486DX4 marked as an upgrade processor; some models had different pinouts or voltage-handling abilities from "standard" chips of the same speed stepping. Fitted to a coprocessor or "OverDrive" socket on the motherboard, they worked the same way as the i487SX.

The specified maximal internal clock frequency ranged from 16 to 100 MHz. The 16 MHz i486SX model was used by Dell Computers. One of the few 80486 models specified for a 50 MHz bus had overheating problems and was moved to the 0.8-micrometre fabrication process. However, problems continued when the 486DX-50 was installed in local-bus systems due to the high bus speed, making it rather unpopular with mainstream consumers, as local-bus video was considered a requirement at the time, though it remained popular with users of EISA systems.
The 486DX-50 was soon eclipsed by the clock-doubled i486DX2, which, although running its internal CPU logic at twice the external bus speed, could be slower in bus-bound tasks because its external bus ran at only 25 MHz; the 486DX2 at 66 MHz was faster than the 486DX-50 overall. More powerful 80486 iterations such as the OverDrive and DX4 were less popular, as they came out after Intel had re
The Intel 80386, also known as i386 or just 386, is a 32-bit microprocessor introduced in 1985. The first versions had 275,000 transistors and were the CPU of many workstations and high-end personal computers of the time. As the original implementation of the 32-bit extension of the 80286 architecture, the 80386 instruction set, programming model and binary encodings are still the common denominator for all 32-bit x86 processors, termed the i386 architecture, x86, or IA-32, depending on context. The 32-bit 80386 can execute most code intended for the earlier 16-bit processors such as the 8086 and 80286 that were ubiquitous in early PCs. Over the years, successively newer implementations of the same architecture have become several hundred times faster than the original 80386. A 33 MHz 80386 was measured to operate at about 11.4 MIPS. The 80386 was introduced in October 1985, while manufacturing of the chips in significant quantities commenced in June 1986. Mainboards for 80386-based computer systems were cumbersome and expensive at first, but manufacturing was rationalized upon the 80386's mainstream adoption.
The first personal computer to make use of the 80386 was designed and manufactured by Compaq, and marked the first time a fundamental component in the IBM PC compatible de facto standard was updated by a company other than IBM. In May 2006, Intel announced that 80386 production would stop at the end of September 2007. Although it had long been obsolete as a personal computer CPU, Intel and others had continued making the chip for embedded systems; such systems using an 80386 or one of its many derivatives are common in aerospace technology and electronic musical instruments, among others. Some mobile phones also used the 80386 processor, such as the BlackBerry 950 and the Nokia 9000 Communicator. The processor was a significant evolution in the x86 architecture and extended a long line of processors that stretched back to the Intel 8008. The predecessor of the 80386 was the Intel 80286, a 16-bit processor with a segment-based memory management and protection system. The 80386 added a 32-bit architecture and a paging translation unit, which made it much easier to implement operating systems that used virtual memory.
It also offered support for register debugging. The 80386 featured three operating modes: real mode, protected mode and virtual 8086 mode. Protected mode, which debuted in the 286, was extended to allow the 386 to address up to 4 GB of memory. The all-new virtual 8086 mode made it possible to run one or more real-mode programs in a protected environment, although some programs were not compatible. The ability for a 386 to be set up to act as if it had a flat memory model in protected mode, despite the fact that it uses a segmented memory model in all modes, was arguably the most important feature change for the x86 processor family until AMD released x86-64 in 2003. Several new instructions were added in the 386: BSF, BSR, BT, BTS, BTR, BTC, CDQ, CWDE, LFS, LGS, LSS, MOVSX, MOVZX, SETcc, SHLD, SHRD. Two new segment registers (FS and GS) were added for general-purpose programs, and the single Machine Status Word of the 286 grew into eight control registers, CR0–CR7. Debug registers DR0–DR7 were added for hardware breakpoints. New forms of the MOV instruction are used to access them.
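Among the new 386 instructions, the bit-scan pair is easy to illustrate. A behavioral sketch of what BSF and BSR compute (not an emulator; the real instructions report a zero source operand via the ZF flag, which we model here by returning None):

```python
def bsf(value):
    """Sketch of BSF (Bit Scan Forward): index of the lowest set bit.
    For value == 0 the real instruction sets ZF and leaves its destination
    undefined; we return None in that case."""
    if value == 0:
        return None
    # value & -value isolates the lowest set bit.
    return (value & -value).bit_length() - 1

def bsr(value):
    """Sketch of BSR (Bit Scan Reverse): index of the highest set bit."""
    if value == 0:
        return None
    return value.bit_length() - 1
```

For example, scanning 10100b forward finds bit 2 and scanning it in reverse finds bit 4; these operations are commonly used for finding free slots in allocation bitmaps.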
The chief architect in the development of the 80386 was John H. Crawford. He was responsible for extending the 80286 architecture and instruction set to 32 bits, and led the microprogram development for the 80386 chip. The 80486 and P5 Pentium lines of processors were descendants of the 80386 design. The following data types are directly supported and thus implemented by one or more 80386 machine instructions:
- 8-bit integer, either signed or unsigned
- 16-bit integer, either signed or unsigned
- 32-bit integer, either signed or unsigned
- 64-bit integer, either signed or unsigned
- Offset, a 16- or 32-bit displacement referring to a memory location
- Pointer, a 16-bit selector together with a 16- or 32-bit offset
- Character
- String, a sequence of 8-, 16- or 32-bit words
- BCD, decimal digits represented by unpacked bytes
- Packed BCD, two BCD digits in one byte

The following 80386 assembly source code is for a subroutine named _strtolower that copies a null-terminated ASCIIZ character string from one location to another, converting all alphabetic characters to lower case.
The string is copied one byte at a time. The example code uses the EBP register to establish a call frame, an area on the stack that contains all of the parameters and local variables for the execution of the subroutine. This kind of calling convention supports reentrant and recursive code, and has been used by Algol-like languages since the late 1950s. A flat memory model is assumed; specifically, the DS and ES segments are assumed to address the same region of memory. In 1988, Intel introduced the 80386SX, most often referred to as the 386SX, a cut-down version of the 80386 with a 16-bit data bus, intended for lower-cost PCs aimed at the home and small-business markets, while the 386DX remained the high-end variant used in workstations and other demanding tasks. The CPU remained 32-bit internally, but the 16-bit