Semiconductor device fabrication
Semiconductor device fabrication is the process used to create the integrated circuits that are present in everyday electrical and electronic devices. It is a multiple-step sequence of photolithographic and chemical processing steps during which electronic circuits are gradually created on a wafer made of pure semiconducting material. Silicon is almost always used, but various compound semiconductors are used for specialized applications. The entire manufacturing process, from start to packaged chips ready for shipment, takes six to eight weeks and is performed in specialized facilities referred to as foundries or fabs. In more advanced semiconductor devices, such as modern 14/10/7 nm nodes, fabrication can take up to 15 weeks, with 11–13 weeks being the industry average. Production in advanced fabrication facilities is completely automated and carried out in a hermetically sealed nitrogen environment to improve yield, with FOUPs and automated material handling systems taking care of the transport of wafers from machine to machine.
By industry standard, each generation of the semiconductor manufacturing process, known as a "technology node" or "process node", is designated by the process's minimum feature size. Nodes are indicated by the size in nanometers of the process's gate length. As of 2019, 14 nanometer and 10 nanometer process chips are in mass production, with 7 nanometer process chips in mass production by TSMC and Samsung, although their 7 nanometer node definition is similar to Intel's 10 nanometer process. Semiconductor device manufacturing has spread from Texas and California in the 1960s to the rest of the world, including Europe, the Middle East, and Asia; it is a global business today. The leading semiconductor manufacturers typically have facilities all over the world. Intel, the second largest manufacturer, has facilities in Europe and Asia as well as the U.S. Samsung, the world's largest manufacturer of semiconductors, has facilities in South Korea and the US. TSMC, the world's largest pure play foundry, has facilities in Taiwan, China and the US.
Qualcomm and Broadcom are among the biggest fabless semiconductor companies, outsourcing their production to companies like TSMC; they also have facilities spread across different countries. When feature widths were far greater than about 10 micrometres, semiconductor purity was not as big an issue as it is today in device manufacturing; as devices became more integrated, cleanrooms became even cleaner. Today, fabrication plants are pressurized with filtered air to remove even the smallest particles, which could come to rest on the wafers and contribute to defects. The workers in a semiconductor fabrication facility are required to wear cleanroom suits to protect the devices from human contamination. A typical wafer is made out of extremely pure silicon, grown into mono-crystalline cylindrical ingots up to 300 mm in diameter using the Czochralski process. These ingots are then sliced into wafers about 0.75 mm thick and polished to obtain a very regular and flat surface. In semiconductor device fabrication, the various processing steps fall into four general categories: deposition, removal, patterning, and modification of electrical properties.
Deposition is any process that grows, coats, or otherwise transfers a material onto the wafer. Available technologies include physical vapor deposition, chemical vapor deposition, electrochemical deposition, molecular beam epitaxy and, more recently, atomic layer deposition, among others. Removal is any process that removes material from the wafer; examples include etch processes (either wet or dry) and chemical-mechanical planarization. Patterning is the shaping or altering of deposited materials, and is generally referred to as lithography. For example, in conventional lithography, the wafer is coated with a chemical called a photoresist; a machine called a stepper then exposes selected portions of the wafer to light through a mask, and the exposed regions are washed away by a developer solution. After etching or other processing, the remaining photoresist is removed by plasma ashing. Modification of electrical properties has historically entailed doping transistor sources and drains; these doping processes are followed by furnace annealing or, in advanced devices, by rapid thermal annealing. Modification of electrical properties now also extends to the reduction of a material's dielectric constant in low-k insulators via exposure to ultraviolet light in UV processing. Modification is sometimes achieved by oxidation, which can be carried out to create semiconductor-insulator junctions, such as in the local oxidation of silicon used to fabricate metal oxide field effect transistors.
Modern chips have up to eleven metal levels produced in over 300 sequenced processing steps. FEOL processing refers to the formation of the transistors directly in the silicon. The raw wafer is engineered by the growth of an ultrapure, virtually defect-free silicon layer through epitaxy. In the most advanced logic devices, prior to the silicon epitaxy step, tricks are performed to improve the performance of the transistors to be built. One method involves introducing a straining step wherein a silicon variant such as silicon-germanium is deposited. Once the epitaxial silicon is deposited, the crystal lattice becomes stretched somewhat, resulting in improved electronic mobility. Another method, called silicon on insulator technology, involves the insertion of an insulating layer between the raw silicon wafer and the thin layer of subsequent silicon epitaxy, resulting in transistors with reduced parasitic effects.
Instruction cycle
The instruction cycle is the cycle which the central processing unit follows from boot-up until the computer has shut down in order to process instructions. It is composed of three main stages: the fetch stage, the decode stage, and the execute stage. In simpler CPUs the instruction cycle is executed sequentially, each instruction being processed before the next one is started. In most modern CPUs the instruction cycles are instead executed concurrently, and often in parallel, through an instruction pipeline: the next instruction starts being processed before the previous instruction has finished, which is possible because the cycle is broken up into separate steps. The program counter (PC) is a special register that holds the memory address of the next instruction to be executed. During the fetch stage, the address stored in the PC is copied into the memory address register (MAR) and the PC is incremented in order to "point" to the memory address of the next instruction to be executed; the CPU then takes the instruction at the memory address described by the MAR and copies it into the memory data register (MDR).
The MDR acts as a two-way register that holds data fetched from memory or data waiting to be stored in memory. The instruction in the MDR is copied into the current instruction register (CIR), which acts as a temporary holding ground for the instruction that has just been fetched from memory. During the decode stage, the control unit (CU) decodes the instruction in the CIR and sends signals to other components within the CPU, such as the arithmetic logic unit (ALU) and the floating point unit (FPU). The ALU performs arithmetic operations such as addition and subtraction, as well as multiplication via repeated addition and division via repeated subtraction; it also performs logic operations such as AND, OR, NOT and binary shifts. The FPU is reserved for performing floating-point operations. Each computer's CPU can have different cycles based on different instruction sets, but all will be similar to the following cycle: Fetch Stage: The next instruction is fetched from the memory address currently stored in the program counter and stored into the instruction register.
At the end of the fetch operation, the PC points to the next instruction that will be read at the next cycle. Decode Stage: During this stage, the encoded instruction present in the instruction register is interpreted by the decoder. Read the effective address: In the case of a memory instruction (direct or indirect), the execution phase will be during the next clock pulse. If the instruction has an indirect address, the effective address is read from main memory, and any required data is fetched from main memory to be processed and then placed into data registers. If the instruction is direct, nothing is done during this clock pulse. If this is an I/O instruction or a register instruction, the operation is performed during this clock pulse. Execute Stage: The control unit of the CPU passes the decoded information as a sequence of control signals to the relevant function units of the CPU to perform the actions required by the instruction, such as reading values from registers, passing them to the ALU to perform mathematical or logic functions on them, and writing the result back to a register.
If the ALU is involved, it sends a condition signal back to the CU. The result generated by the operation is stored in main memory or sent to an output device. Based on feedback from the ALU, the PC may be updated to a different address from which the next instruction will be fetched. Repeat Cycle: The cycle is then repeated. The instruction cycle begins as soon as power is applied to the system, with an initial PC value predefined by the system's architecture; this address points to a set of instructions in read-only memory, which begins the process of loading (booting) the operating system. The fetch step is the same for each instruction: the CPU sends the contents of the PC to the MAR and sends a read command on the address bus; in response to the read command, the memory returns the data stored at the memory location indicated by the PC on the data bus; the CPU copies the data from the data bus into its MDR; a fraction of a second later, the CPU copies the data from the MDR to the instruction register for instruction decoding; and finally the PC is incremented so that it points to the next instruction.
This step prepares the CPU for the next cycle. The control unit fetches the instruction's address from the memory unit. The decoding process allows the CPU to determine what instruction is to be performed, so that the CPU can tell how many operands it needs to fetch in order to perform the instruction. The opcode fetched from the memory is decoded for the next steps and moved to the appropriate registers; the decoding is done by the CPU's control unit. This step evaluates which type of operation is to be performed. If it is a memory operation, the computer checks whether it's a direct or indirect memory operation: Direct memory operation - Nothing is done. Indirect memory operation - The effective address is read from memory. If it is an I/O or register instruction, the computer executes the instruction immediately. In the execute stage, the function of the instruction is performed. If the instruction involves arithmetic or logic, the ALU is utilized. This is the only stage of the instruction cycle that is useful from the perspective of the end user; everything else is overhead required to make the execute step happen.
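Since the description above walks through the register traffic step by step (PC to MAR, memory to MDR, MDR to CIR, then decode and execute), a minimal Python sketch may help make it concrete. The instruction format, opcodes, accumulator, and memory layout below are invented for illustration and do not correspond to any real instruction set.

```python
# A toy fetch-decode-execute loop. Memory holds both instructions and data,
# in the spirit of a von Neumann machine.
memory = {
    0: ("LOAD", 10),   # acc <- memory[10]
    1: ("ADD", 11),    # acc <- acc + memory[11]
    2: ("STORE", 12),  # memory[12] <- acc
    3: ("HALT", None),
    10: 5,
    11: 7,
}

pc, acc, running = 0, 0, True
while running:
    # Fetch: copy PC into MAR, read memory into MDR, copy into CIR, bump PC.
    mar = pc
    mdr = memory[mar]
    cir = mdr
    pc += 1
    # Decode: split the instruction into an opcode and an operand address.
    opcode, operand = cir
    # Execute: dispatch on the opcode.
    if opcode == "LOAD":
        acc = memory[operand]
    elif opcode == "ADD":
        acc += memory[operand]   # an ALU operation
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        running = False

print(memory[12])  # prints 12 (5 + 7)
```

Real CPUs perform these steps in hardware, and pipelined designs overlap the fetch of one instruction with the decode and execute of earlier ones; the sequential loop above corresponds to the simpler, non-pipelined case described earlier.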
See also: Time slice (unit of operating system scheduling), Classic RISC pipeline, Cycles per instruction.
Field-programmable gate array
A Field-Programmable Gate Array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence the term "field-programmable". The FPGA configuration is generally specified using a hardware description language, similar to that used for an Application-Specific Integrated Circuit (ASIC). Circuit diagrams were previously used to specify the configuration, but this is increasingly rare due to the advent of electronic design automation tools. FPGAs contain an array of programmable logic blocks and a hierarchy of "reconfigurable interconnects" that allow the blocks to be "wired together", somewhat like many logic gates that can be inter-wired in different configurations. Logic blocks can be configured to perform complex combinational functions, or merely simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory. Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software.
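As a rough illustration of how a logic block can be "configured" rather than hard-wired, here is a minimal Python sketch of a k-input lookup table (LUT), the building block mentioned later in connection with the XC2064. The class and names are invented for illustration and do not model any vendor's architecture; the point is only that the "program" is the truth table stored in configuration memory.

```python
# A software model of an FPGA-style lookup table: the input bits form an
# index into a table of configuration bits, so any k-input Boolean function
# can be realized by loading the right table.
class LUT:
    def __init__(self, num_inputs: int, truth_table: list[int]):
        assert len(truth_table) == 2 ** num_inputs
        self.num_inputs = num_inputs
        self.truth_table = truth_table  # the configuration bits

    def evaluate(self, *inputs: int) -> int:
        index = 0
        for bit in inputs:
            index = (index << 1) | (bit & 1)
        return self.truth_table[index]

# Configure a 3-input LUT as XOR of all inputs (odd parity).
xor3 = LUT(3, [0, 1, 1, 0, 1, 0, 0, 1])
assert xor3.evaluate(1, 0, 1) == 0
assert xor3.evaluate(1, 1, 1) == 1
```

Reconfiguring the device amounts to rewriting these tables (and the interconnect settings), which is why the same fabric can implement entirely different logic functions.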
Contemporary Field-Programmable Gate Arrays have large resources of logic gates and RAM blocks to implement complex digital computations. As FPGA designs employ very fast I/O rates and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time. Floor planning enables resource allocation within FPGAs to meet these time constraints. FPGAs can be used to implement any logical function that an ASIC could perform. The ability to update the functionality after shipping, partial re-configuration of a portion of the design, and the low non-recurring engineering costs relative to an ASIC design offer advantages for many applications. Some FPGAs have analog features in addition to digital functions. The most common analog feature is a programmable slew rate on each output pin, allowing the engineer to set low rates on lightly loaded pins that would otherwise ring or couple unacceptably, and to set higher rates on heavily loaded pins on high-speed channels that would otherwise run too slowly. Also common are quartz-crystal oscillators, on-chip resistance-capacitance oscillators, and phase-locked loops with embedded voltage-controlled oscillators used for clock generation and management as well as for high-speed serializer-deserializer transmit clocks and receiver clock recovery.
Also common are differential comparators on input pins designed to be connected to differential signaling channels. A few "mixed signal FPGAs" have integrated peripheral analog-to-digital converters and digital-to-analog converters with analog signal conditioning blocks, allowing them to operate as a system-on-a-chip. Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and a field-programmable analog array, which carries analog values on its internal programmable interconnect fabric. The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field. However, programmable logic was hard-wired between logic gates. Altera was founded in 1983 and delivered the industry's first reprogrammable logic device in 1984 – the EP300 – which featured a quartz window in the package that allowed users to shine an ultra-violet lamp on the die to erase the EPROM cells that held the device configuration.
In December 2015, Intel acquired Altera. Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field-programmable gate array in 1985 – the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market. The XC2064 had 64 configurable logic blocks, each with two three-input lookup tables. More than 20 years later, Freeman was entered into the National Inventors Hall of Fame for his invention. In 1987, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful, and a patent related to the system was issued in 1992. Altera and Xilinx continued unchallenged and grew rapidly from 1985 to the mid-1990s, when competitors sprouted up, eroding significant market share. By 1993, Actel was serving about 18 percent of the market. By 2013, Altera and Xilinx together represented 77 percent of the FPGA market.
The 1990s were a period of rapid growth for FPGAs, both in circuit sophistication and the volume of production. In the early 1990s, FPGAs were used in telecommunications and networking. By the end of the decade, FPGAs found their way into consumer and industrial applications. A recent trend has been to take the coarse-grained architectural approach a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete "system on a programmable chip"; this work mirrors the architecture created by Ron Perlof and Hana Potash of Burroughs Advanced Systems Group in 1982 which combined a reconfigurable CPU architecture on a single chip called the SB24. Examples of such hybrid technologies can be found in the Xilinx Zynq-7000 All Programmable SoC, which includes a 1.0 GHz dual-core ARM Cortex-A9 MPCore processor embedded within the FPGA's logic fabric or in the Altera Arria V FPGA, which includes an 800 MHz dual-core ARM Cortex-A9 MPCore.
The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture. The Mic
Computing
Computing is any activity that uses computers. It includes developing hardware and software, and using computers to manage and process information, communicate, and entertain. Computing is a critically important, integral component of modern industrial technology. Major computing disciplines include computer engineering, software engineering, computer science, information systems, and information technology. The ACM Computing Curricula 2005 defined "computing" as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; the list is endless, and the possibilities are vast." It defines five sub-disciplines of the computing field: computer science, computer engineering, information systems, information technology, and software engineering. However, Computing Curricula 2005 also recognizes that the meaning of "computing" depends on the context: Computing has other meanings that are more specific, based on the context in which the term is used.
For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult; because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The term "computing" has sometimes been narrowly defined, as in a 1989 ACM report on Computing as a Discipline: The discipline of computing is the systematic study of algorithmic processes that describe and transform information: their theory, design, efficiency and application; the fundamental question underlying all computing is "What can be automated?" The term "computing" is also synonymous with counting and calculating. In earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. The history of computing is longer than the history of computing hardware and modern computing technology, and includes the history of methods intended for pen and paper or for chalk and slate, with or without the aid of tables.
Computing is intimately tied to the representation of numbers. But long before abstractions like the number arose, there were mathematical concepts to serve the purposes of civilization; these concepts include one-to-one correspondence, comparison to a standard, and the 3-4-5 right triangle. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC. Its original style of usage was by lines drawn in sand with pebbles. Abaci of a more modern design are still used as calculation tools today; the abacus was the first known calculation aid, preceding Greek methods by 2,000 years. The first recorded idea of using digital electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program.
The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form enables a programmer to study and develop a sequence of steps known as an algorithm; because the instructions can be carried out in different types of computers, a single set of source instructions converts to machine instructions according to the central processing unit type. The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer; they trigger sequences of simple actions on the executing machine. Those actions produce effects according to the semantics of the instructions. Computer software, or just "software", is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer for some purposes. In other words, software is a set of programs, procedures and its documentation concerned with the operation of a data processing system.
Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the old term hardware; in contrast to hardware, software is intangible. Software is sometimes used in a more narrow sense, meaning application software only. Application software, also known as an "application" or an "app", is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software and media players. Many application programs deal principally with documents. Apps may be bundled with the computer and its system software, or published separately; some users need never install one. Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities but typically do not directly apply them in the performance of tasks that benefit the user.
Patch (computing)
A patch is a set of changes to a computer program or its supporting data designed to update, fix, or improve it. This includes fixing security vulnerabilities and other bugs, with such patches usually being called bugfixes or bug fixes, and improving usability or performance. Although meant to fix problems, poorly designed patches can sometimes introduce new problems. In some special cases updates may knowingly break the functionality or disable a device, for instance by removing components for which the update provider is no longer licensed. Patch management is a part of lifecycle management, and is the process of using a strategy and plan for what patches should be applied to which systems at a specified time. Patches for proprietary software are typically distributed as executable files instead of source code; this type of patch modifies the program executable—the program the user runs—either by modifying the binary file to include the fixes or by completely replacing it. On early 8-bit microcomputers, for example the Radio Shack TRS-80, the operating system included a PATCH utility which accepted patch data from a text file and applied the fixes to the target program's executable binary file.
Small in-memory patches could be manually applied with the system debug utility, such as CP/M's DDT or MS-DOS's DEBUG debugger. Programmers working in interpreted BASIC often used the POKE command to temporarily alter the functionality of a system service routine. Patches can also circulate in the form of source code modifications; in this case, the patches consist of textual differences between two source code files, called "diffs" (see the sketch below). These types of patches commonly come out of open-source software projects, where developers expect users to compile the new or changed files themselves. Because the word "patch" carries the connotation of a small fix, large fixes may use different nomenclature: bulky patches or patches that significantly change a program may circulate as "service packs" or as "software updates". Microsoft Windows NT and its successors use the "service pack" terminology. IBM used the terms "FixPaks" and "Corrective Service Diskette" to refer to these updates. Historically, software suppliers distributed patches on paper tape or on punched cards, expecting the recipient to cut out the indicated part of the original tape and patch in the replacement segment.
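A short sketch of the "diff" form of patch described above, using Python's standard difflib module. The file name and contents here are made up; real projects generate diffs with tools like diff or a version control system, but the output format is the same unified style.

```python
# Generate a unified diff between two versions of a (made-up) source file.
import difflib

old = ["def greet(name):\n", "    print('Hello ' + name)\n"]
new = ["def greet(name):\n", "    print(f'Hello {name}')\n"]

patch = difflib.unified_diff(old, new, fromfile="greet.py", tofile="greet.py")
print("".join(patch))
```

The printed output lists the changed lines prefixed with "-" and "+", with context around them; a recipient applies it to their copy of the source and recompiles, exactly the workflow described above for open-source projects.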
Later patch distributions used magnetic tape. Then, after the invention of removable disk drives, patches came from the software developer via a disk or, later, CD-ROM via mail. With widely available Internet access, downloading patches from the developer's web site or through automated software updates became available to end-users. Starting with Apple's Mac OS 9 and Microsoft's Windows ME, PC operating systems gained the ability to get automatic software updates via the Internet. Computer programs can coordinate patches to update a target program. Automation simplifies the end-user's task – they need only to execute an update program, whereupon that program makes sure that updating the target takes place completely and correctly. Service packs for Microsoft Windows NT and its successors and for many commercial software products adopt such automated strategies; some programs can update themselves via the Internet with little or no intervention on the part of users. The maintenance of server software and of operating systems often takes place in this manner.
In situations where system administrators control a number of computers, this sort of automation helps to maintain consistency. The application of security patches commonly occurs in this manner. The size of patches may vary from a few bytes to hundreds of megabytes; in particular, patches can become quite large when the changes add or replace non-program data, such as graphics and sounds files. Such situations commonly occur in the patching of computer games. Compared with the initial installation of software, patches usually do not take long to apply. In the case of operating systems and computer server software, patches have the particularly important role of fixing security holes; some critical patches involve issues with drivers. Patches may require the prior application of other patches, or may require prior or concurrent updates of several independent software components. To facilitate updates, operating systems often provide automatic or semi-automatic updating facilities. Automatic updates have not succeeded in gaining widespread popularity in corporate computing environments, partly because of the aforementioned glitches, but also because administrators fear that software companies may gain unlimited control over their computers.
Package management systems can offer various degrees of patch automation. Usage of automatic updates has become far more widespread in the consumer market, due largely to the fact that Microsoft Windows added support for them, and Service Pack 2 of Windows XP enabled them by default. Cautious users, particularly system administrators, tend to put off applying patches until they can verify the stability of the fixes; Microsoft SUS supports this. In the cases of large patches or of significant changes, distributors often limit availability of patches to qualified developers as a beta test. Applying patches to firmware poses special challenges, as it often involves the provisioning of completely new firmware images, rather than applying only the differences from the previous version; the patch usually consists of a firmware image in the form of binary data, together with a supplier-provided special program that replaces the previous version with the new version.
Parallel computing
Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but it has gained broader interest due to the physical constraints preventing frequency scaling. As power consumption by computers has become a concern in recent years, parallel computing has become the dominant paradigm in computer architecture, mainly in the form of multi-core processors. Parallel computing is closely related to concurrent computing—they are frequently used together, and often conflated, though the two are distinct: it is possible to have parallelism without concurrency, and concurrency without parallelism. In parallel computing, a computational task is typically broken down into several, often many, very similar sub-tasks that can be processed independently and whose results are combined afterwards, upon completion.
In contrast, in concurrent computing, the various processes often do not address related tasks. Parallel computers can be classified according to the level at which the hardware supports parallelism, with multi-core and multi-processor computers having multiple processing elements within a single machine, while clusters, MPPs, and grids use multiple computers to work on the same task. Specialized parallel computer architectures are sometimes used alongside traditional processors for accelerating specific tasks. In some cases parallelism is transparent to the programmer, such as in bit-level or instruction-level parallelism, but explicitly parallel algorithms, particularly those that use concurrency, are more difficult to write than sequential ones, because concurrency introduces several new classes of potential software bugs, of which race conditions are the most common (see the sketch below). Communication and synchronization between the different subtasks are typically some of the greatest obstacles to getting good parallel program performance.
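As a sketch of why race conditions are so common, the following Python fragment lets several threads increment a shared counter without synchronization. Whether lost updates actually occur depends on the interpreter and its version, so treat this as an illustration of the hazard rather than a guaranteed failure.

```python
# A classic race condition: "counter += 1" is a read-modify-write sequence,
# so two threads can read the same old value and one update gets lost.
import threading

counter = 0

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        counter += 1  # unsynchronized shared update

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # may be less than 400000 when updates interleave badly
```

Wrapping the increment in a threading.Lock makes the result deterministic at the cost of serializing that section, which is precisely the communication-and-synchronization overhead mentioned above.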
A theoretical upper bound on the speed-up of a single program as a result of parallelization is given by Amdahl's law. Traditionally, computer software has been written for serial computation. To solve a problem, an algorithm is constructed and implemented as a serial stream of instructions; these instructions are executed on a central processing unit on one computer. Only one instruction may execute at a time—after that instruction is finished, the next one is executed. Parallel computing, on the other hand, uses multiple processing elements simultaneously to solve a problem; this is accomplished by breaking the problem into independent parts so that each processing element can execute its part of the algorithm simultaneously with the others. The processing elements can be diverse and include resources such as a single computer with multiple processors, several networked computers, specialized hardware, or any combination of the above. Historically, parallel computing was used for scientific computing and the simulation of scientific problems in the natural and engineering sciences, such as meteorology.
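Amdahl's law, mentioned above, can be stated as S = 1 / ((1 − p) + p / n), where p is the fraction of the runtime that can be parallelized and n is the number of processing elements. A short worked example:

```python
# Amdahl's law: even a small serial fraction caps the achievable speed-up.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# With 95% of the work parallelizable, speed-up saturates near 20x (= 1/0.05).
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
# prints approximately: 2 1.9, 8 5.93, 64 15.42, 1024 19.64
```

The saturation at 1 / (1 − p) is the "theoretical upper bound" referred to above: the serial 5% dominates no matter how many processing elements are added.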
This led to the design of parallel software, as well as high performance computing. Frequency scaling was the dominant reason for improvements in computer performance from the mid-1980s until 2004. The runtime of a program is equal to the number of instructions multiplied by the average time per instruction. Holding everything else constant, increasing the clock frequency decreases the average time it takes to execute an instruction; an increase in frequency thus decreases runtime for all compute-bound programs. However, power consumption P by a chip is given by the equation P = C × V² × F, where C is the capacitance being switched per clock cycle, V is voltage, and F is the processor frequency (cycles per second). Increases in frequency therefore increase the amount of power used in a processor. Increasing processor power consumption led ultimately to Intel's May 8, 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm. To deal with the problem of power consumption and overheating, the major central processing unit manufacturers started to produce power-efficient processors with multiple cores.
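A back-of-the-envelope use of the power equation P = C × V² × F shows why frequency scaling hit a wall. The capacitance and voltage figures below are invented purely for illustration, since real values vary by process and design; the point is the quadratic voltage term.

```python
# Dynamic power: P = C * V^2 * F. Raising frequency usually also requires
# raising voltage, so power grows much faster than clock speed.
def dynamic_power(c_farads: float, v_volts: float, f_hertz: float) -> float:
    return c_farads * v_volts ** 2 * f_hertz

base = dynamic_power(1e-9, 1.2, 3.0e9)    # hypothetical 3 GHz part at 1.2 V
faster = dynamic_power(1e-9, 1.4, 4.5e9)  # 1.5x clock, bumped to 1.4 V

print(f"{faster / base:.2f}x the power for 1.5x the frequency")  # ~2.04x
```

Roughly double the power (and heat) for a 50% clock increase is the trade-off that pushed manufacturers toward multiple slower, more efficient cores instead.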
The core is the computing unit of the processor, and in multi-core processors each core is independent and can access the same memory concurrently. Multi-core processors have brought parallel computing to desktop computers; thus parallelisation of serial programmes has become a mainstream programming task. In 2012 quad-core processors became standard for desktop computers, while servers had 10- and 12-core processors. From Moore's law it can be predicted that the number of cores per processor will double every 18–24 months; this could mean that after 2020 a typical processor will have dozens or hundreds of cores. An operating system can ensure that different tasks and user programmes are run in parallel on the available cores. However, for a serial software programme to take full advantage of the multi-core architecture, the programmer needs to restructure and parallelise the code, as sketched below.
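A minimal sketch of what "parallelising the code" can look like in practice, using Python's standard library. The workload function here is an arbitrary CPU-bound stand-in, not any particular real computation; the structure (independent sub-tasks farmed out to one worker process per core, results combined afterwards) is the pattern described earlier.

```python
# Restructuring a serial loop so independent items run on multiple cores.
from concurrent.futures import ProcessPoolExecutor

def work(n: int) -> int:
    return sum(i * i for i in range(n))  # CPU-bound stand-in workload

if __name__ == "__main__":
    inputs = [200_000] * 8

    # Serial version: one core processes the items one after another.
    serial = [work(n) for n in inputs]

    # Parallel version: the same independent items are distributed across
    # worker processes, typically one per available core.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(work, inputs))

    assert serial == parallel
```

Because the items are independent, no synchronization is needed beyond collecting the results, which is the easy case; tasks that share mutable state reintroduce the race-condition hazards discussed earlier.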
Computer data storage
Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers. The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but larger and cheaper options farther away. Generally the fast volatile technologies are referred to as "memory", while slower persistent technologies are referred to as "storage". In the Von Neumann architecture, the CPU consists of two main parts: the control unit and the arithmetic logic unit. The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result; it would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices.
Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions. Most modern computers are von Neumann machines. A modern digital computer represents data using the binary numeral system. Text, numbers, pictures and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes with one byte per character. Data are encoded by assigning a bit pattern to each character, digit, or multimedia object.
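The arithmetic behind the Shakespeare estimate above can be checked in a few lines. The characters-per-page figure below is an assumption chosen only to make the calculation concrete; the one-byte-per-character encoding is the point being illustrated.

```python
# One byte per character: encoding ASCII text yields exactly len(text) bytes.
text = "To be, or not to be"
encoded = text.encode("ascii")
assert len(encoded) == len(text)

# ~1250 pages at an assumed ~4000 characters per page is about 5 MB.
approx_chars = 1250 * 4000
print(approx_chars / 1_000_000, "MB")  # 5.0 MB
```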
Many standards exist for encoding. By adding bits to each encoded unit, redundancy allows the computer to both detect errors in coded data and correct them based on mathematical algorithms. Errors generally occur with low probability due to random bit value flipping, or "physical bit fatigue", the loss of the physical bit in storage of its ability to maintain a distinguishable value, or due to errors in inter- or intra-computer communication. A random bit flip is typically corrected upon detection. A bit or a group of malfunctioning physical bits is automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, with the corrected bit values restored. The cyclic redundancy check method is typically used in communications and storage for error detection; a detected error is then retried. Data compression methods allow, in many cases, a string of bits to be represented by a shorter bit string, with the original string reconstructed when needed; this utilizes substantially less storage for many types of data at the cost of more computation.
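A short sketch of both ideas (error detection via a cyclic redundancy check, and compression trading computation for storage), using only Python's standard library. The payload is made up; real storage systems apply these techniques at the block or sector level in hardware or firmware.

```python
import zlib

payload = b"storage systems checksum their data" * 100

# Error detection: flipping a single bit changes the CRC, so the corruption
# is caught (CRC-32 detects all single-bit errors).
crc = zlib.crc32(payload)
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
assert zlib.crc32(corrupted) != crc

# Compression: repetitive data shrinks substantially, at the cost of extra
# computation to compress and later decompress it.
compressed = zlib.compress(payload)
assert zlib.decompress(compressed) == payload
print(len(payload), "->", len(compressed), "bytes")
```

Note that a CRC only detects errors; correcting them, as described above, requires stronger redundancy such as error-correcting codes, which store additional bits alongside the data.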
Analysis of the trade-off between storage cost savings and the costs of related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary and off-line storage is also guided by cost per bit. In contemporary usage, "memory" is usually fast but temporary semiconductor read-write random-access memory, typically DRAM. "Storage" consists of storage devices and their media not directly accessible by the CPU (hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile). Historically, memory has been called core memory, main memory, real storage or internal memory. Meanwhile, non-volatile storage devices have been referred to as secondary storage, external memory or auxiliary/peripheral storage.
Primary storage, often referred to simply as memory, is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in uniform manner. Early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive; this led to modern random-access memory (RAM).