BIOS is non-volatile firmware used to perform hardware initialization during the booting process and to provide runtime services for operating systems and programs. The BIOS firmware comes pre-installed on a personal computer's system board and is the first software to run when the machine is powered on; the name originates from the Basic Input/Output System used in the CP/M operating system in 1975. The BIOS proprietary to the IBM PC has been reverse-engineered by companies looking to create compatible systems; the interface of that original system serves as a de facto standard. The BIOS in modern PCs initializes and tests the system hardware components and loads a boot loader from a mass storage device, which then initializes an operating system. In the era of DOS, the BIOS provided a hardware abstraction layer for the keyboard and other input/output devices that standardized an interface to application programs and the operating system. More recent operating systems do not use the BIOS after loading, instead accessing the hardware components directly.
Most BIOS implementations are designed to work with a particular computer or motherboard model by interfacing with the various devices that make up the complementary system chipset. Originally, BIOS firmware was stored in a ROM chip on the PC motherboard. In modern computer systems, the BIOS contents are stored on flash memory so they can be rewritten without removing the chip from the motherboard; this allows easy, end-user updates to the BIOS firmware so new features can be added or bugs fixed, but it also creates a possibility for the computer to become infected with BIOS rootkits. Furthermore, a BIOS upgrade that fails can brick the motherboard permanently, unless the system includes some form of backup for this case. The Unified Extensible Firmware Interface (UEFI) is a successor to the legacy PC BIOS, aiming to address its technical shortcomings. The term BIOS was created by Gary Kildall and first appeared in the CP/M operating system in 1975, describing the machine-specific part of CP/M loaded during boot time that interfaces directly with the hardware.
Versions of MS-DOS, PC DOS and DR-DOS contain a file called variously "IO.SYS", "IBMBIO.COM", "IBMBIO.SYS", or "DRBIOS.SYS". Together with the underlying hardware-specific but operating-system-independent "System BIOS", which resides in ROM, it represents the analogue to the "CP/M BIOS". With the introduction of PS/2 machines, IBM divided the System BIOS into real-mode and protected-mode portions. The real-mode portion was meant to provide backward compatibility with existing operating systems such as DOS and was therefore named "CBIOS" (Compatibility BIOS), whereas the "ABIOS" (Advanced BIOS) provided new interfaces suited for multitasking operating systems such as OS/2. The BIOS of the original IBM PC and XT had no interactive user interface. Error codes or messages were displayed on the screen, or coded series of sounds were generated to signal errors when the power-on self-test had not proceeded to the point of initializing a video display adapter. Options on the IBM PC and XT were set by switches and jumpers on the main board and on expansion cards.
Starting around the mid-1990s, it became typical for the BIOS ROM to include a "BIOS configuration utility" or "BIOS setup utility", accessed at system power-up by a particular key sequence. This program allowed the user to set system configuration options, of the type formerly set using DIP switches, through an interactive menu system controlled through the keyboard. In the interim period, IBM-compatible PCs, including the IBM AT, held configuration settings in battery-backed RAM and used a bootable configuration program on disk, not in the ROM, to set the configuration options contained in this memory. The disk was supplied with the computer, and if it was lost the system settings could not be changed. The same applied in general to computers with an EISA bus, for which the configuration program was called an EISA Configuration Utility (ECU). A modern Wintel-compatible computer provides a setup routine essentially unchanged in nature from the ROM-resident BIOS setup utilities of the late 1990s; when errors occur at boot time, a modern BIOS usually displays user-friendly error messages, often presented as pop-up boxes in a TUI style, and offers to enter the BIOS setup utility or to ignore the error and proceed if possible.
Instead of battery-backed RAM, the modern Wintel machine may store the BIOS configuration settings in flash ROM, possibly the same flash ROM that holds the BIOS itself. Early Intel x86 processors started execution at physical address 000FFFF0h, and systems provide logic to begin running the BIOS from the system ROM at that address. If the system has just been powered up or the reset button was pressed, the full power-on self-test (POST) is run. If Ctrl+Alt+Delete was pressed, a special flag value stored in nonvolatile BIOS memory and tested by the BIOS allows the lengthy POST and memory detection to be bypassed. The POST identifies and initializes system devices such as the CPU, RAM, interrupt and DMA controllers and other parts of the chipset, the video display card, hard disk drive, optical disc drive and other basic hardware. Early IBM PCs had a routine in the POST that would download a program into RAM through the keyboard port and run it; this feature was intended for factory test or diagnostic purposes.
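The address arithmetic behind that starting location can be illustrated with real-mode x86 segmentation, in which a physical address is the segment value times 16 plus the offset. This is a quick sketch of the arithmetic, not BIOS code:

```python
# Real-mode x86 address arithmetic: physical = segment * 16 + offset.
# The reset location F000:FFF0 therefore sits at physical 000FFFF0h,
# just 16 bytes below the top of the 1 MB real-mode address space,
# so the ROM there holds only a jump into the BIOS proper.

def physical(segment, offset):
    return (segment << 4) + offset  # shifting left by 4 multiplies by 16

top_of_real_mode = 1 << 20          # 1 MB
reset = physical(0xF000, 0xFFF0)    # = 0x000FFFF0
```

Only 16 bytes remain between that address and the top of the address space, which is why the instruction placed there is a far jump to the rest of the BIOS code lower in the ROM.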
Random-access memory (RAM) is a form of computer data storage that stores data and machine code currently being used. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of the data inside the memory. In contrast, with other direct-access data storage media such as hard disks, CD-RWs, DVD-RWs and the older magnetic tapes and drum memory, the time required to read and write data items varies depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement. RAM contains multiplexing and demultiplexing circuitry to connect the data lines to the addressed storage for reading or writing the entry. More than one bit of storage is usually accessed by the same address, and RAM devices therefore have multiple data lines and are said to be "8-bit" or "16-bit", etc. devices. In today's technology, random-access memory takes the form of integrated circuits. RAM is normally associated with volatile types of memory, where stored information is lost if power is removed, although non-volatile RAM has also been developed.
Other types of non-volatile memories exist that allow random access for read operations, but either do not allow write operations or have other kinds of limitations on them. These include most types of ROM and a type of flash memory called NOR flash. Integrated-circuit RAM chips came onto the market in the early 1970s, with the first commercially available DRAM chip, the Intel 1103, introduced in October 1970. Early computers used relays, mechanical counters or delay lines for main memory functions. Ultrasonic delay lines could only reproduce data in the order in which it was written. Drum memory could be expanded at low cost, but efficient retrieval of memory items required knowledge of the physical layout of the drum to optimize speed. Latches built out of vacuum tube triodes, and later out of discrete transistors, were used for smaller and faster memories such as registers; such registers were relatively large and too costly to use for large amounts of data. The first practical form of random-access memory was the Williams tube, starting in 1947.
It stored data as electrically charged spots on the face of a cathode ray tube. Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller and more power-efficient than using individual vacuum tube latches. Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored program was implemented in the Manchester Baby computer, which first ran a program on 21 June 1948. In fact, rather than the Williams tube memory being designed for the Baby, the Baby was a testbed to demonstrate the reliability of the memory. Magnetic-core memory, developed up until the mid-1970s, became a widespread form of random-access memory. Data could be stored by changing the sense of each ring's magnetization, with one bit stored per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible.
Magnetic-core memory was the standard form of memory system until displaced by solid-state memory in integrated circuits, starting in the early 1970s. Dynamic random-access memory (DRAM) allowed replacement of a 4- or 6-transistor latch circuit by a single transistor for each memory bit, greatly increasing memory density at the cost of volatility. Data was stored in the tiny capacitance of each transistor and had to be periodically refreshed every few milliseconds before the charge could leak away. The Toshiba Toscal BC-1411 electronic calculator, introduced in 1965, used a form of DRAM built from discrete components. DRAM in its modern single-transistor form was developed by Robert H. Dennard in 1968. Prior to the development of integrated read-only memory circuits, permanent random-access memory was constructed using diode matrices driven by address decoders, or specially wound core rope memory planes. The two main forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In SRAM, a bit of data is stored using the state of a six-transistor memory cell.
This form of RAM is more expensive to produce, but is generally faster and requires less dynamic power than DRAM. In modern computers, SRAM is often used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a DRAM cell. The capacitor holds a high or low charge, and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers. Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, read-only memory (ROM) stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writeable variants of ROM share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment; these persistent forms of semiconductor memory include USB flash drives, memory cards for cameras and portable devices, and solid-state drives.
ECC memory includes special circuitry to detect and/or correct random faults (memory errors) in the stored data.
A DIMM or dual in-line memory module comprises a series of dynamic random-access memory integrated circuits. These modules are mounted on a printed circuit board and designed for use in personal computers and servers. DIMMs began to replace SIMMs as the predominant type of memory module as Intel P5-based Pentium processors began to gain market share. While the contacts on SIMMs on both sides are redundant, DIMMs have separate electrical contacts on each side of the module. Another difference is that standard SIMMs have a 32-bit data path, while standard DIMMs have a 64-bit data path. Since Intel's Pentium, many processors have had a 64-bit bus width, requiring SIMMs to be installed in matched pairs in order to populate the data bus; the processor would then access the two SIMMs in parallel. DIMMs were introduced to eliminate this disadvantage. Variants of DIMM slots support DDR, DDR2, DDR3 and DDR4 RAM. Common types of DIMMs include the following:

70 to 200 pins:
72-pin SO-DIMM, used for FPM DRAM and EDO DRAM
100-pin DIMM, used for printer SDRAM
144-pin SO-DIMM, used for SDR SDRAM
168-pin DIMM, used for SDR SDRAM
172-pin MicroDIMM, used for DDR SDRAM
184-pin DIMM, used for DDR SDRAM
200-pin SO-DIMM, used for DDR SDRAM and DDR2 SDRAM
200-pin DIMM, used for FPM/EDO DRAM in some Sun workstations and servers

201 to 300 pins:
204-pin SO-DIMM, used for DDR3 SDRAM
214-pin MicroDIMM, used for DDR2 SDRAM
240-pin DIMM, used for DDR2 SDRAM, DDR3 SDRAM and FB-DIMM DRAM
244-pin MiniDIMM, used for DDR2 SDRAM
260-pin SO-DIMM, used for DDR4 SDRAM
260-pin SO-DIMM, with a different notch position than on DDR4 SO-DIMMs, used for UniDIMMs that can carry either DDR3 or DDR4 SDRAM
278-pin DIMM, used for HP high-density SDRAM
288-pin DIMM, used for DDR4 SDRAM

On the bottom edge of 168-pin DIMMs there are two notches, and the location of each notch determines a particular feature of the module. The first notch is the DRAM key position, which distinguishes registered, unbuffered and RFU (reserved for future use) DIMM types. The second notch is the voltage key position, which distinguishes 5.0 V, 3.3 V and RFU DIMM types. DDR, DDR2, DDR3 and DDR4 all have different pin counts and different notch positions. As of August 2014, DDR4 SDRAM was an emerging type of dynamic random-access memory with a high-bandwidth interface, in use since 2013. It is the higher-speed successor to DDR, DDR2 and DDR3. DDR4 SDRAM is neither forward nor backward compatible with any earlier type of random-access memory because of different signalling voltages, timings and other differing factors between the technologies and their implementations. A DIMM's capacity and other operational parameters may be identified with serial presence detect (SPD), an additional chip which contains information about the module type and timing so the memory controller can be configured correctly.
The SPD EEPROM connects to the System Management Bus and may also contain thermal sensors. ECC DIMMs are those that have extra data bits which can be used by the system memory controller to detect and correct errors. There are numerous ECC schemes, but the most common is Single Error Correct, Double Error Detect (SECDED), which uses an extra byte per 64-bit word. ECC modules therefore carry a multiple of 9 chips instead of a multiple of 8. Sometimes memory modules are designed with two or more independent sets of DRAM chips connected to the same address and data buses; each such set is called a rank. Of the ranks that share the same slot, only one may be accessed at any given time; the other ranks on the module are deactivated for the duration of the operation by having their corresponding CS (chip select) signals deasserted. DIMMs are currently being manufactured with up to four ranks per module. Consumer DIMM vendors have begun to distinguish between single- and dual-ranked DIMMs. After a memory word is fetched, the memory is typically inaccessible for an extended period of time while the sense amplifiers are charged for access of the next cell.
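The SECDED scheme mentioned above protects each 64-bit word with 8 check bits. A scaled-down sketch of the same idea is a plain Hamming code correcting a single flipped bit in one byte; the function names and the 12-bit layout here are illustrative, not taken from any DIMM standard:

```python
# Hamming(12,8): 8 data bits protected by 4 check bits, correcting any
# single-bit error. Positions are 1-based; check bits sit at the
# power-of-two positions 1, 2, 4 and 8.

CHECK_POS = (1, 2, 4, 8)
DATA_POS = tuple(p for p in range(1, 13) if p not in CHECK_POS)

def encode(byte):
    """Place 8 data bits into a 12-bit codeword (index 0 unused)."""
    code = [0] * 13
    for i, p in enumerate(DATA_POS):
        code[p] = (byte >> i) & 1
    for c in CHECK_POS:
        parity = 0
        for p in DATA_POS:
            if p & c:                # data positions covered by check bit c
                parity ^= code[p]
        code[c] = parity             # make each covered group XOR to zero
    return code

def decode(code):
    """Correct at most one flipped bit, then extract the data byte."""
    syndrome = 0
    for p in range(1, 13):
        if code[p]:
            syndrome ^= p            # XOR of the positions of all 1-bits
    if syndrome:                     # a nonzero syndrome names the bad position
        code[syndrome] ^= 1
    return sum(code[p] << i for i, p in enumerate(DATA_POS))
```

A real SECDED code adds one more overall parity bit so that double-bit errors are detected rather than silently miscorrected; that refinement is omitted here for brevity.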
By interleaving the memory, sequential memory accesses can be performed more rapidly because the sense amplifiers have three cycles of idle time for recharging between accesses. DIMMs are often referred to as "single-sided" or "double-sided" to describe whether the DRAM chips are located on one or both sides of the module's printed circuit board. However, these terms may cause confusion, as the physical layout of the chips does not necessarily relate to how they are logically organized or accessed. JEDEC decided that the terms "dual-sided", "double-sided" and "dual-banked" were not correct when applied to registered DIMMs. Most DIMMs are built with "×8" memory chips, with nine chips per side. In the case of "×4" registered DIMMs, the data width per side is 36 bits, so the memory controller (which requires 72 bits) must address both sides at the same time; in this case, the two-sided module is single-ranked. For "×8" registered DIMMs, each side is 72 bits wide, so the memory controller only addresses one side at a time, and the two-sided module is dual-ranked. The above example applies to ECC memory that stores 72 bits instead of the more common 64; there would be one extra chip per group of eight, which is not counted.
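The overlap that interleaving buys can be sketched with a toy timing model; the single shared bus, one-cycle access and three-cycle recovery are simplifying assumptions for illustration, not real controller behavior:

```python
# Toy timing model for memory interleaving: each bank needs `recover`
# idle cycles after an access while its sense amplifiers recharge, and
# consecutive addresses map to consecutive banks, so one bank's recovery
# overlaps with accesses to the others.

def total_cycles(addresses, banks, access=1, recover=3):
    ready = [0] * banks        # cycle at which each bank is next usable
    t = 0
    for a in addresses:
        b = a % banks          # low address bits select the bank
        t = max(t, ready[b])   # stall if that bank is still recovering
        t += access
        ready[b] = t + recover
    return t
```

In this model, eight sequential accesses complete in 8 cycles with four banks, while a single bank takes 29 cycles for the same accesses because every access must wait out the recovery time.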
For various technologies, there are certain bus and device clock frequencies that are standardized.
Synchronous dynamic random-access memory
Synchronous dynamic random-access memory (SDRAM) is any dynamic random-access memory where the operation of its external pin interface is coordinated by an externally supplied clock signal. DRAM integrated circuits produced from the early 1970s to the mid-1990s used an asynchronous interface, in which input control signals have a direct effect on internal functions, delayed only by the trip across the semiconductor pathways. SDRAM has a synchronous interface, whereby changes on control inputs are recognised after a rising edge of its clock input. In SDRAM families standardized by JEDEC, the clock signal controls the stepping of an internal finite state machine that responds to incoming commands; these commands can be pipelined to improve performance, with previously started operations completing while new commands are received. The memory is divided into several equally sized but independent sections called banks, allowing the device to operate on a memory access command in each bank and speed up access in an interleaved fashion.
This allows SDRAMs to achieve greater concurrency and higher data transfer rates than asynchronous DRAMs could. Pipelining means that the chip can accept a new command before it has finished processing the previous one. For a pipelined write, the write command can be immediately followed by another command without waiting for the data to be written into the memory array. For a pipelined read, the requested data appears a fixed number of clock cycles after the read command, during which additional commands can be sent. SDRAM is widely used in computers. Beyond the original SDRAM, further generations of double data rate RAM have entered the mass market: DDR, DDR2, DDR3 and DDR4, with the latest generation released in the second half of 2014. Although the concept of synchronous DRAM was well understood by the 1970s and was used with early Intel processors, it was only in 1993 that SDRAM began its path to universal acceptance in the electronics industry. In 1993, Samsung introduced its KM48SL2000 synchronous DRAM, and by 2000 SDRAM had replaced all other types of DRAM in modern computers because of its greater performance.
SDRAM latency is not inherently lower than asynchronous DRAM. Indeed, early SDRAM was somewhat slower than contemporaneous burst EDO DRAM due to the additional logic; the benefits of SDRAM's internal buffering come from its ability to interleave operations to multiple banks of memory, thereby increasing effective bandwidth. Today all SDRAM is manufactured in compliance with standards established by JEDEC, an electronics industry association that adopts open standards to facilitate interoperability of electronic components. JEDEC formally adopted its first SDRAM standard in 1993 and subsequently adopted other SDRAM standards, including those for DDR, DDR2 and DDR3 SDRAM. SDRAM is available in registered varieties, for systems that require greater scalability such as servers and workstations. Today, the world's largest manufacturers of SDRAM include: Samsung Electronics, Micron Technology, Hynix. There are several limits on DRAM performance. Most noted is the time between successive read operations to an open row.
This time decreased from 10 ns for 100 MHz SDRAM to 5 ns for DDR-400, but has remained unchanged through DDR2-800 and DDR3-1600 generations. However, by operating the interface circuitry at higher multiples of the fundamental read rate, the achievable bandwidth has increased rapidly. Another limit is the CAS latency, the time between supplying a column address and receiving the corresponding data. Again, this has remained constant at 10–15 ns through the last few generations of DDR SDRAM. In operation, CAS latency is a specific number of clock cycles programmed into the SDRAM's mode register and expected by the DRAM controller. Any value may be programmed, but the SDRAM will not operate if it is too low. At higher clock rates, the useful CAS latency in clock cycles increases. 10–15 ns is 2–3 cycles of the 200 MHz clock of DDR-400 SDRAM, CL4-6 for DDR2-800, CL8-12 for DDR3-1600. Slower clock cycles will allow lower numbers of CAS latency cycles. SDRAM modules have their own timing specifications, which may be slower than those of the chips on the module.
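The cycle counts quoted above follow directly from the roughly constant 10–15 ns latency; a small conversion sketch (the function name is illustrative) shows how the same nanosecond latency becomes more clock cycles as the data rate rises:

```python
import math

# Convert a fixed CAS latency in nanoseconds into clock cycles at a given
# DDR data rate; the clock runs at half the transfer rate (MT/s), and the
# controller must round up to a whole number of cycles.

def cas_cycles(cas_ns, data_rate_mts):
    clock_mhz = data_rate_mts / 2
    cycle_ns = 1000.0 / clock_mhz
    return math.ceil(cas_ns / cycle_ns)
```

The results reproduce the figures in the text: 10–15 ns is CL2–3 at DDR-400 (5 ns cycles), CL4–6 at DDR2-800 and CL8–12 at DDR3-1600.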
When 100 MHz SDRAM chips first appeared, some manufacturers sold "100 MHz" modules that could not reliably operate at that clock rate. In response, Intel published the PC100 standard, which outlines the requirements and guidelines for producing a memory module that can operate reliably at 100 MHz. This standard was influential, and the term "PC100" became a common identifier for 100 MHz SDRAM modules; modules are now commonly designated with "PC"-prefixed numbers. Originally simply known as SDRAM, single data rate (SDR) SDRAM can accept one command and transfer one word of data per clock cycle. Typical clock frequencies are 100 and 133 MHz. Chips are made with a variety of data bus sizes, but chips are generally assembled into 168-pin DIMMs that read or write 64 or 72 bits at a time. Use of the data bus requires a complex DRAM controller circuit; this is because data written to the DRAM must be presented in the same cycle as the write command, but reads produce output 2 or 3 cycles after the read command. The DRAM controller must ensure that the data bus is never required for a read and a write at the same time.
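That scheduling constraint can be pictured as bus-slot bookkeeping: a write occupies the data bus in the same cycle as its command, while a read occupies it a few cycles later. The following is a minimal sketch under assumed parameters (a two-cycle read latency and a simple command-list format), not a real controller:

```python
# Check whether a sequence of (issue_cycle, op) commands would ever require
# the shared data bus for two operations in the same cycle. Writes use the
# bus in their issue cycle; reads use it `read_latency` cycles later.

def bus_conflict(commands, read_latency=2):
    used = {}                                   # bus cycle -> operation
    for t, op in commands:
        slot = t + (read_latency if op == "read" else 0)
        if slot in used:
            return True                         # two ops claim one bus cycle
        used[slot] = op
    return False
```

For example, a read issued at cycle 0 and a write issued at cycle 2 collide (both need the bus in cycle 2), whereas two back-to-back reads do not; avoiding such collisions is exactly what the DRAM controller must guarantee.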
Typical SDR SDRAM clock rates are 66, 100 and 133 MHz. Clock rates of up to 200 MHz were also available. All commands are timed relative to the rising edge of a clock signal. In addition to the clock, there are six control signals, mostly active low, which are sampled on the rising edge of the clock.
IBM Personal Computer/AT
The IBM Personal Computer AT, more commonly known as the IBM AT and sometimes called the PC AT or PC/AT, was IBM's second-generation PC, designed around the 6 MHz Intel 80286 microprocessor and released in 1984 as System Unit 5170. The name AT stood for "Advanced Technology", and was chosen because the AT offered various technologies that were then new in personal computers. IBM later released an 8 MHz version of the AT. IBM's 1984 introduction of the Personal Computer/AT was unusual for the company, which usually waited for others to release new products before producing its own versions. Unlike the PCjr and the Portable PC, the AT was technically advanced and, at $4,000 to $6,000, much less expensive than the few comparable computers then available; the announcement surprised rival executives, who admitted that matching IBM's prices would be difficult. AT bus: the AT motherboard had a 16-bit data bus and a 24-bit address bus, backward compatible with PC-style expansion cards. It provided fifteen IRQs and seven DMA channels, expanded from eight IRQs and four DMA channels on the PC.
The doubling of the IRQs was achieved by adding a second 8259A interrupt controller. IRQs 8–15 are cascaded through IRQ 2 of the first 8259A, which leaves 15 IRQs available instead of 16. The number of DMA channels was likewise increased by adding a second 8237A controller in a master-slave configuration; DMA channel 4 is reserved for cascading channels 0–3, leaving seven channels usable. Some IRQs and some DMA channels are used on the motherboard and not exposed on the expansion bus. The AT supported a 16 MB memory maximum, compared to the PC's 640 KB maximum. It included a battery-backed real-time clock on the motherboard, with 50 bytes of CMOS memory available for power-off storage of BIOS parameters. Additionally, the AT RTC had a 1024 Hz timer, a much finer resolution than the roughly 18 Hz timer tick of the IBM PC and XT; the AT timer was accessible via INT 70h. The RTC was implemented using a Motorola MC146818 integrated circuit. A disk-based BIOS setup program took the place of the DIP switches of the PC and XT; most AT clones would instead have the setup program in ROM rather than on a disk. The AT introduced the 84-key AT keyboard layout, the 84th key being <SysRq>, i.e. System Request.
The AT keyboard uses the same 5-pin DIN connector as the PC keyboard, but it uses a different, bidirectional interface and generates different keyboard scan codes. The bidirectional interface allows the computer to set the LED indicators on the keyboard, reset the keyboard, set the typematic rate, etc. Later ATs had 101-key keyboards which featured an integrated numeric keypad with a Num Lock key. The 1.2 MB 5¼-inch floppy disk drive stored over three times as much data as the 360 KB PC floppy disk; however, 1.2 MB drives had compatibility problems with 360 KB disks. Later, 3½-inch (90 mm) floppy drives became available in ATs. A 20 MB hard disk drive was offered, although the early drives manufactured by Computer Memories were unreliable; this was attributed both to a failure to automatically retract the read/write heads when the computer was powered off and to a bug in the DOS 3.0 FAT algorithm. ATs could use PGA (Professional Graphics Adapter) video cards. The 8250 UART from the XT was upgraded to the 16450, although this chip still had only a one-byte buffer, so high-speed serial communication was just as problematic as with the XT.
PC DOS 3.0 was released to support the new AT features, including preliminary kernel support for networking. The AT was equipped with a physical lock that could be used to prevent access to the computer by disabling the keyboard. Just like its IBM PC predecessor, the PC AT supported an optional math co-processor chip, the Intel 80287, for faster execution of floating-point operations. The IBM PC AT came with a 192-watt switching power supply. According to IBM's documentation, in order to function properly, the AT power supply needed a load of at least 7.0 amperes on the +5 V line and a minimum of 2.5 amperes on its +12 V line. In practice, the AT power supply would randomly fail to start unless these minimum load requirements were met. Because the AT motherboard did not provide much load on the +12 V line, entry-level IBM AT models that did not have a hard drive were shipped with a 5-ohm, 50-watt sandbar resistor connected to the +12 V line of the hard disk power connector. In normal operation this resistor drew 2.4 amperes (dissipating nearly 29 watts) and consequently ran hot.
In addition to the unreliable hard disk drive, the high-density floppy disk drives turned out to be problematic. Some ATs came with one double-density 360 kB drive instead. High-density floppy diskette media were compatible only with high-density drives. There was no way for the disk drive to detect what kind of floppy disk was inserted; the only clue the user had was the disk label and an asterisk molded into the 360 kB disk drive faceplate. If the user accidentally used a high-density diskette in the 360 kB drive, it would sometimes work for a while, but the data would soon be lost.
Double Data Rate Synchronous Dynamic Random-Access Memory, abbreviated as DDR SDRAM, is a double-data-rate class of synchronous dynamic random-access memory integrated circuits used in computers. DDR SDRAM, retroactively called DDR1 SDRAM, has been superseded by DDR2 SDRAM, DDR3 SDRAM and DDR4 SDRAM. None of its successors are forward or backward compatible with DDR1 SDRAM, meaning DDR2, DDR3 and DDR4 memory modules will not work on DDR1-equipped motherboards, and vice versa. Compared to single data rate (SDR) SDRAM, the DDR SDRAM interface makes higher transfer rates possible through stricter control of the timing of the electrical data and clock signals. Implementations have to use schemes such as phase-locked loops and self-calibration to reach the required timing accuracy. The interface uses double pumping (transferring data on both the rising and falling edges of the clock signal) to double the data bus bandwidth without a corresponding increase in clock frequency. One advantage of keeping the clock frequency down is that it reduces the signal integrity requirements on the circuit board connecting the memory to the controller.
The name "double data rate" refers to the fact that a DDR SDRAM with a certain clock frequency achieves nearly twice the bandwidth of an SDR SDRAM running at the same clock frequency, due to this double pumping. With data being transferred 64 bits at a time, DDR SDRAM gives a transfer rate (in MB/s) of (memory bus clock rate) × 2 (for dual rate) × 64 (bits transferred) / 8 (bits per byte). Thus, with a bus frequency of 100 MHz, DDR SDRAM gives a maximum transfer rate of 1600 MB/s. "Beginning in 1996 and concluding in June 2000, JEDEC developed the DDR SDRAM specification." JEDEC has set standards for data rates of DDR SDRAM, divided into two parts: the first specification is for memory chips, and the second is for memory modules. These rates are specified by JEDEC as JESD79F. RAM data rates in between or above the listed specifications are not standardized by JEDEC; they are often manufacturer optimizations using tighter-tolerance or overvolted chips. The package sizes in which DDR SDRAM is manufactured are also standardized by JEDEC. There is no architectural difference between DDR SDRAM chips designed for different clock frequencies, for example PC-1600, designed to run at 100 MHz, and PC-2100, designed to run at 133 MHz.
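The peak-rate formula above is simple enough to compute directly (the function name is illustrative):

```python
# Peak DDR transfer rate: bus clock (MHz) x 2 transfers per clock
# x bus width in bits / 8 bits per byte = MB/s.

def ddr_peak_mb_s(bus_mhz, bus_width_bits=64):
    return bus_mhz * 2 * bus_width_bits // 8
```

A 100 MHz bus gives 1600 MB/s (hence the module name PC-1600), 133 MHz gives 2128 MB/s (marketed with the rounded name PC-2100), and 200 MHz gives 3200 MB/s (PC-3200).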
The number designates the data rate at which the chip is guaranteed to perform; hence DDR SDRAM is guaranteed to run at lower clock rates, and may run at higher clock rates, than those for which it was made. DDR SDRAM modules for desktop computers, dual in-line memory modules (DIMMs), have 184 pins and can be differentiated from SDRAM DIMMs by the number of notches. DDR SDRAM modules for notebook computers, SO-DIMMs, have 200 pins, the same number of pins as DDR2 SO-DIMMs; these two specifications are notched similarly, and care must be taken during insertion if unsure of a correct match. Most DDR SDRAM operates at a voltage of 2.5 V, compared to 3.3 V for SDRAM, which can reduce power consumption. Chips and modules with the DDR-400/PC-3200 standard have a nominal voltage of 2.6 V. JEDEC Standard No. 21-C defines three possible operating voltages for 184-pin DDR, as identified by the key notch position relative to its centreline: page 4.5.10-7 defines 2.5 V, 1.8 V and TBD, while page 4.20.5-40 nominates 3.3 V for the right notch position.
The orientation of the module for determining the key notch position is with 52 contact positions to the left and 40 contact positions to the right. Increasing the operating voltage can increase maximum speed, at the cost of higher power dissipation and heating, and at the risk of malfunctioning or damage. Many new chipsets use these memory types in multi-channel configurations.

DRAM density: the size of each chip is measured in megabits. Most motherboards recognize only 1 GB modules if they contain 64M×8 chips. If 128M×4 1 GB modules are used, they most likely will not work; the JEDEC standard allows 128M×4 only for slower buffered/registered modules designed for some servers, but some generic manufacturers do not comply.
Organization: a notation like 64M×4 means that the memory matrix has 64 million 4-bit storage locations. There are ×4, ×8 and ×16 DDR chips. The ×4 chips allow the use of advanced error-correction features such as Chipkill, memory scrubbing and Intel SDDC in server environments, while the ×8 and ×16 chips are somewhat less expensive.
×8 chips are mostly used in desktops and notebooks, but are making entry into the server market. There are 4 internal banks, and only one row can be active in each bank.
Ranks: to increase memory capacity and bandwidth, chips are combined on a module. For instance, the 64-bit data bus for a DIMM requires eight 8-bit chips, addressed in parallel. Multiple chips with common address lines are called a memory rank; the term was introduced to avoid confusion with chip-internal banks. A memory module may bear more than one rank; the term "sides" would be confusing because it incorrectly suggests the physical placement of chips on the module. All ranks are connected to the same memory bus; the Chip Select signal is used to issue commands to a specific rank. Adding modules to a single memory bus creates additional electrical load on its drivers; to mitigate the resulting drop in bus signaling rate and overcome the memory bottleneck, new chipsets employ the multi-channel architecture.
Capacity:
Number of DRAM devices: the number of chips is a multiple of 8 for non-ECC modules and a multiple of 9 for ECC modules.
Chips can occupy one side or both sides of the module.
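As a worked example of how chip organization and chip count determine module capacity (the chip counts below are illustrative, and the function name is an assumption):

```python
# Module capacity from chip organization: "64Mx8" means 64 Mi storage
# locations of 8 bits each. Capacity in MiB = depth (Mi locations)
# x width (bits) x number of chips / 8 bits per byte.

def module_mib(depth_mi, width_bits, n_chips):
    return depth_mi * width_bits * n_chips // 8
```

Eight 64M×8 chips give a 512 MiB non-ECC module, and adding a ninth chip for ECC raises the stored width to 72 bits (576 MiB of raw storage). Sixteen 128M×4 chips give the 1 GB configuration discussed above, which many motherboards and the JEDEC unbuffered-module standard do not support.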
Error-correcting code (ECC) memory is a type of computer data storage that can detect and correct the most common kinds of internal data corruption. ECC memory is used in most computers where data corruption cannot be tolerated under any circumstances, such as for scientific or financial computing. Ideally, ECC memory creates a memory system immune to single-bit errors: the data read from each word is always the same as the data that had been written to it, even if one of the bits actually stored has been flipped to the wrong state. Most non-ECC memory cannot detect errors, although some non-ECC memory with parity support allows detection but not correction. Electrical or magnetic interference inside a computer system can cause a single bit of dynamic random-access memory to spontaneously flip to the opposite state. It was initially thought that this was due to alpha particles emitted by contaminants in chip packaging material, but research has shown that the majority of one-off soft errors in DRAM chips occur as a result of background radiation, chiefly neutrons from cosmic ray secondaries, which may change the contents of one or more memory cells or interfere with the circuitry used to read or write to them.
Hence, error rates increase with rising altitude, and systems operating at high altitude require special provision for reliability. As an example, the spacecraft Cassini–Huygens, launched in 1997, contained two identical flight recorders, each with 2.5 gigabits of memory in the form of arrays of commercial DRAM chips. Thanks to built-in EDAC functionality, the spacecraft's engineering telemetry reported the number of single-bit-per-word and double-bit-per-word errors. During the first 2.5 years of flight, the spacecraft reported a nearly constant single-bit error rate of about 280 errors per day. However, on November 6, 1997, during the first month in space, the number of errors increased by more than a factor of four for that single day; this was attributed to a solar particle event detected by the satellite GOES 9. There was some concern that, as DRAM density increases further and the components on chips therefore get smaller, while at the same time operating voltages continue to fall, DRAM chips will be affected by such radiation more frequently, since lower-energy particles will be able to change a memory cell's state.
On the other hand, smaller cells make smaller targets, and moves to technologies such as SOI may make individual cells less susceptible, counteracting or even reversing this trend. Recent studies show that single-event upsets due to cosmic radiation have been dropping with shrinking process geometry, and that previous concerns over increasing bit-cell error rates are unfounded. Work published between 2007 and 2009 showed widely varying error rates, spanning more than 7 orders of magnitude, from 10−10 error/bit·h to 10−17 error/bit·h. A large-scale study based on Google's large number of servers was presented at the SIGMETRICS/Performance '09 conference; the actual error rate found was several orders of magnitude higher than in previous small-scale or laboratory studies, between 25,000 and 70,000 errors per billion device hours per megabit. More than 8% of DIMM memory modules were affected by errors per year. The consequence of a memory error is system-dependent: in systems without ECC, an error can lead either to a crash or to corruption of data.
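To put the quoted range in concrete terms, a back-of-the-envelope conversion for a hypothetical 8 GiB module (the module size and the assumption of a uniform rate are illustrative; the study itself found errors heavily concentrated on a small fraction of DIMMs):

```python
def errors_per_hour(rate_per_mbit, module_gib):
    """Convert errors per billion device-hours per megabit into
    expected errors per hour for one module of the given size."""
    mbits = module_gib * 1024 * 8   # GiB -> megabits
    return rate_per_mbit * mbits / 1e9

low = errors_per_hour(25_000, 8)    # lower bound of the quoted range
high = errors_per_hour(70_000, 8)   # upper bound of the quoted range
```

Even the lower bound works out to more than one correctable error per module-hour, which illustrates why the figure was so much higher than earlier laboratory estimates.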
Memory errors can also cause security vulnerabilities. A memory error can have no consequences if it changes a bit that neither causes observable malfunctioning nor affects data used in calculations or saved to storage. A 2010 simulation study showed that, for a web browser, only a small fraction of memory errors caused data corruption; however, because many memory errors are intermittent and correlated, their effects were greater than would be expected for independent soft errors. Some tests conclude that the isolation of DRAM memory cells can be circumvented by unintended side effects of specially crafted accesses to adjacent cells: because of the high cell density in modern memory, accessing data stored in DRAM can cause memory cells to leak their charges and interact electrically, altering the content of nearby memory rows that were not addressed in the original memory access. This effect is known as row hammer, and it has been used in some privilege-escalation computer security exploits. An example of a single-bit error that would be ignored by a system with no error-checking, would halt a machine with parity checking, or would be invisibly corrected by ECC: a single bit is stuck at 1 due to a faulty chip, or becomes changed to 1 due to background or cosmic radiation.
As a result, the "8" has silently become a "9". Several approaches have been developed to deal with unwanted bit-flips, including immunity-aware programming, RAM parity memory, and ECC memory. The problem can be mitigated by using DRAM modules that include extra memory bits together with memory controllers that exploit these bits. These extra bits are used to record parity or to use an error-correcting code.
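The stuck-at-1 example above can be made concrete with ASCII: the characters "8" (0x38) and "9" (0x39) differ in exactly one bit, so a single-bit fault silently turns one into the other, and a parity bit detects the change without being able to locate or correct it. A minimal sketch:

```python
def even_parity(byte):
    """Parity bit chosen so the total number of 1-bits is even."""
    return bin(byte).count("1") & 1

stored = ord("8")             # 0b0011_1000
parity = even_parity(stored)  # recorded when the byte was written
corrupted = stored | 0x01     # lowest bit stuck at 1: '8' -> '9'
assert chr(corrupted) == "9"
# A parity mismatch detects the error, but a single parity bit
# cannot say which of the bits flipped, so no correction is possible.
error_detected = even_parity(corrupted) != parity
assert error_detected
```

A system with no checking uses the corrupted byte unnoticed; a parity-checked machine halts on the mismatch; ECC, with its multi-bit code, both detects and transparently repairs the flip.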