Random-access memory (RAM) is a form of computer data storage that stores data and machine code currently being used. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of data inside the memory. In contrast, with other direct-access data storage media such as hard disks, CD-RWs, DVD-RWs and the older magnetic tapes and drum memory, the time required to read and write data items varies significantly depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement. RAM contains multiplexing and demultiplexing circuitry to connect the data lines to the addressed storage for reading or writing the entry. When more than one bit of storage is accessed by the same address, RAM devices have multiple data lines and are said to be "8-bit" or "16-bit", etc. devices. In today's technology, random-access memory takes the form of integrated circuits. RAM is normally associated with volatile types of memory, where stored information is lost if power is removed, although non-volatile RAM has also been developed.
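To make the defining property concrete, the following minimal C sketch models a hypothetical 64K-word, 16-bit-wide RAM device as an array: any address, first or last, is reached in the same constant time, which is exactly what distinguishes RAM from rotating or tape media. The device size and data width here are illustrative assumptions, not parameters of any particular chip.

```c
#include <stdint.h>
#include <stdio.h>

#define WORDS 65536u               /* a hypothetical 64K-word device */
static uint16_t ram[WORDS];        /* each address selects one 16-bit word */

/* Read or write any entry in constant time: the address directly selects
 * the storage location; no seeking or rotation is involved. */
static uint16_t ram_read(uint16_t addr) { return ram[addr]; }
static void ram_write(uint16_t addr, uint16_t value) { ram[addr] = value; }

int main(void) {
    ram_write(0x0000, 0xBEEF);     /* first location */
    ram_write(0xFFFF, 0xCAFE);     /* last location: equally fast to reach */
    printf("%04X %04X\n", ram_read(0x0000), ram_read(0xFFFF));
    return 0;
}
```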
Other types of non-volatile memories exist that allow random access for read operations, but either do not allow write operations or have other kinds of limitations on them. These include most types of ROM and a type of flash memory called NOR-Flash. Integrated-circuit RAM chips came into the market in the early 1970s, with the first commercially available DRAM chip, the Intel 1103, introduced in October 1970. Early computers used relays, mechanical counters or delay lines for main memory functions. Ultrasonic delay lines could only reproduce data in the order it was written. Drum memory could be expanded at low cost but efficient retrieval of memory items required knowledge of the physical layout of the drum to optimize speed. Latches built out of vacuum tube triodes, and later out of discrete transistors, were used for smaller and faster memories such as registers; such registers were relatively large and too costly to use for large amounts of data. The first practical form of random-access memory was the Williams tube, starting in 1947.
It stored data as electrically charged spots on the face of a cathode ray tube. Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller and more power-efficient than using individual vacuum tube latches. Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored program was implemented in the Manchester Baby computer, which first ran a program on 21 June 1948. In fact, rather than the Williams tube memory being designed for the Baby, the Baby was a testbed to demonstrate the reliability of the memory. Magnetic-core memory was developed up until the mid-1970s and became a widespread form of random-access memory. By changing the sense of each ring's magnetization, data could be stored, with one bit stored per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible.
Magnetic core memory was the standard form of memory system until displaced by solid-state memory in integrated circuits, starting in the early 1970s. Dynamic random-access memory (DRAM) allowed the replacement of a 4 or 6-transistor latch circuit by a single transistor for each memory bit, increasing memory density at the cost of volatility. Data was stored in the tiny capacitance of each transistor and had to be periodically refreshed every few milliseconds before the charge could leak away. The Toshiba Toscal BC-1411 electronic calculator, introduced in 1965, used a form of DRAM built from discrete components. DRAM was developed by Robert H. Dennard in 1968. Prior to the development of integrated read-only memory (ROM) circuits, permanent random-access memory was constructed using diode matrices driven by address decoders, or specially wound core rope memory planes. The two widely used forms of modern RAM are static RAM (SRAM) and dynamic RAM (DRAM). In SRAM, a bit of data is stored using the state of a six-transistor memory cell.
This form of RAM is more expensive to produce, but is faster and requires less dynamic power than DRAM. In modern computers, SRAM is used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a DRAM cell (a simplified model appears after this paragraph). The capacitor holds a high or low charge, and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers. Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, read-only memory stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writeable variants of ROM share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment. These persistent forms of semiconductor ROM include USB flash drives, memory cards for cameras and portable devices, and solid-state drives.
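The refresh behavior of the DRAM cell described above can be sketched in a toy simulation. Everything here is a loose illustrative model with made-up numbers (leak rate, sense threshold, refresh interval), not real device physics: the point is only that a stored charge decays and must be periodically read and rewritten before it becomes unreadable.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch of a DRAM cell: the stored charge leaks over time
 * and must be refreshed (read and rewritten) before it decays past the
 * point where a 1 can no longer be distinguished from a 0. */

typedef struct {
    double charge;   /* 1.0 = fully charged (logic 1), 0.0 = empty (logic 0) */
} dram_cell;

void cell_write(dram_cell *c, bool v) { c->charge = v ? 1.0 : 0.0; }
bool cell_read(const dram_cell *c)    { return c->charge > 0.5; } /* sense threshold */
void cell_leak(dram_cell *c)          { c->charge *= 0.9; }       /* decay per tick */
void cell_refresh(dram_cell *c)       { cell_write(c, cell_read(c)); }

int main(void) {
    dram_cell c;
    cell_write(&c, true);
    for (int t = 1; t <= 20; t++) {
        cell_leak(&c);
        if (t % 5 == 0) cell_refresh(&c);   /* periodic refresh keeps the 1 */
    }
    printf("bit still reads %d\n", cell_read(&c));   /* prints 1 */
    return 0;
}
```

Without the refresh on every fifth tick, the charge in this model would fall below the sense threshold after about eight ticks and the stored 1 would be read back as a 0.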
ECC memory includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, using parity bits or error correction codes.
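As an illustration of the kind of logic such circuitry implements, the sketch below uses a Hamming(7,4) code in C: three parity bits protect four data bits, so any single flipped bit can be located by the parity "syndrome" and corrected. Real ECC memory uses wider SECDED codes over whole words, but the principle is the same; the function names here are of course hypothetical.

```c
#include <stdint.h>
#include <stdio.h>

/* Encode a 4-bit value into a 7-bit Hamming codeword (bit i = position i+1). */
uint8_t hamming74_encode(uint8_t d) {
    uint8_t d1 = d & 1, d2 = (d >> 1) & 1, d3 = (d >> 2) & 1, d4 = (d >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;   /* parity over positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;   /* parity over positions 2,3,6,7 */
    uint8_t p3 = d2 ^ d3 ^ d4;   /* parity over positions 4,5,6,7 */
    return p1 | p2 << 1 | d1 << 2 | p3 << 3 | d2 << 4 | d3 << 5 | d4 << 6;
}

/* Decode, correcting a single-bit error if the syndrome is nonzero. */
uint8_t hamming74_decode(uint8_t c) {
    uint8_t b[8];
    for (int i = 1; i <= 7; i++) b[i] = (c >> (i - 1)) & 1;
    int syndrome = (b[1] ^ b[3] ^ b[5] ^ b[7])
                 | (b[2] ^ b[3] ^ b[6] ^ b[7]) << 1
                 | (b[4] ^ b[5] ^ b[6] ^ b[7]) << 2;
    if (syndrome) b[syndrome] ^= 1;          /* the syndrome names the bad bit */
    return b[3] | b[5] << 1 | b[6] << 2 | b[7] << 3;
}

int main(void) {
    uint8_t word = 0xB;                       /* store 1011 */
    uint8_t stored = hamming74_encode(word);
    stored ^= 1 << 4;                         /* simulate a random bit flip */
    printf("recovered %X\n", hamming74_decode(stored));   /* prints B */
    return 0;
}
```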
The Intel 80286 is a 16-bit microprocessor introduced on February 1, 1982. It was the first 8086-based CPU with separate, non-multiplexed address and data buses and the first with memory management and wide protection abilities. The 80286 used 134,000 transistors in its original nMOS incarnation and, just like the contemporary 80186, it could execute most software written for the earlier Intel 8086 and 8088 processors. The 80286 was employed for the IBM PC/AT, introduced in 1984, and was widely used in most PC/AT compatible computers until the early 1990s. Intel's first 80286 chips were specified for a maximum clock rate of 4, 6 or 8 MHz, with later releases rated for 12.5 MHz. AMD and Harris produced 16 MHz, 20 MHz and 25 MHz parts, respectively. Intersil and Fujitsu designed fully static CMOS versions of Intel's original depletion-load nMOS implementation, aimed at battery-powered devices. On average, the 80286 was measured to have a speed of about 0.21 instructions per clock on "typical" programs, although it could be faster on optimized code and in tight loops, as many instructions could execute in 2 clock cycles each.
The 6 MHz, 10 MHz and 12 MHz models were measured to operate at 0.9 MIPS, 1.5 MIPS and 2.66 MIPS respectively. The E-stepping level of the 80286 was free of the several significant errata that caused problems for programmers and operating-system writers in the earlier B-step and C-step CPUs. The 80286 was designed for multi-user systems with multitasking applications, including communications and real-time process control. It had 134,000 transistors and consisted of four independent units: the address unit, bus unit, instruction unit and execution unit, organized into a loosely coupled pipeline just as in the 8086. The increased performance over the 8086 was due to the non-multiplexed address and data buses, more address-calculation hardware and a faster multiplier. It was produced in a 68-pin package, including LCC and PGA packages. The performance increase of the 80286 over the 8086 could be more than 100% per clock cycle in many programs. This was a large increase, comparable to the speed improvements around a decade later when the i486 or the original Pentium were introduced.
This was partly due to the non-multiplexed address and data buses, but mainly to the fact that address calculations were less expensive. They were performed by a dedicated unit in the 80286, while the older 8086 had to do effective address computation using its general ALU, consuming several extra clock cycles in many cases. The 80286 was also more efficient than its predecessor in the prefetch of instructions, the execution of jumps, and complex microcoded numerical operations such as MUL/DIV. The 80286 included, in addition to all of the 8086 instructions, all of the new instructions of the 80186: ENTER, LEAVE, BOUND, INS, OUTS, PUSHA, POPA, PUSH immediate, IMUL immediate, and immediate shifts and rotates. The 80286 also added new instructions for protected mode: ARPL, CLTS, LAR, LGDT, LIDT, LLDT, LMSW, LSL, LTR, SGDT, SIDT, SLDT, SMSW, STR, VERR, VERW. Some of the instructions for protected mode can be used in real mode to set up and switch to protected mode, and a few are useful for real mode itself. The Intel 80286 had a 24-bit address bus and was able to address up to 16 MB of RAM, compared to the 1 MB addressability of its predecessor.
However, memory cost and the initial rarity of software using the memory above 1 MB meant that 80286 computers were rarely shipped with more than one megabyte of RAM. Additionally, there was a performance penalty involved in accessing extended memory from real mode, as noted below. The 286 was the first of the x86 CPU family to support a protected virtual-address mode, called "protected mode". In addition, it was the first commercially available microprocessor with on-chip MMU capabilities; this would allow IBM compatibles to have advanced multitasking OSes for the first time and compete in the Unix-dominated server/workstation market. Several additional instructions were introduced in the protected mode of the 80286, which are helpful for multitasking operating systems. Another important feature of the 80286 is the prevention of unauthorized access. This is achieved by forming different segments for data and stack, preventing them from overlapping, and by assigning privilege levels to each segment: a segment with a lower privilege level cannot access a segment with a higher privilege level.
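A rough sketch of that privilege rule in C might look as follows. This is a deliberate simplification: real 286 protected mode compares the current privilege level (CPL) and the requested privilege level against the descriptor privilege level (DPL) stored in each segment descriptor, with additional rules for code segments. Here only the basic data-segment check is modeled, and all names are hypothetical.

```c
#include <stdbool.h>
#include <stdio.h>

/* Ring 0 is the most privileged level, ring 3 the least. Code running at
 * CPL may access a data segment only if the segment's DPL is numerically
 * greater than or equal to the CPL. */
typedef struct { unsigned dpl; /* 0..3 */ } segment_descriptor;

bool can_access_data(unsigned cpl, const segment_descriptor *seg) {
    return cpl <= seg->dpl;   /* less-privileged code cannot touch ring-0 data */
}

int main(void) {
    segment_descriptor kernel_data = { 0 }, user_data = { 3 };
    printf("user code -> kernel data: %s\n",
           can_access_data(3, &kernel_data) ? "allowed" : "denied");
    printf("kernel code -> user data: %s\n",
           can_access_data(0, &user_data) ? "allowed" : "denied");
    return 0;
}
```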
In the 80286, arithmetic operations can be performed on the following different types of numbers: unsigned packed decimal, unsigned binary, unsigned unpacked decimal, signed binary, and floating-point numbers. By design, the 286 could not revert from protected mode to the basic 8086-compatible real address mode without a hardware-initiated reset. In the PC/AT introduced in 1984, IBM added external circuitry, as well as specialized code in the ROM BIOS and the 8042 peripheral microcontroller, to enable software to cause the reset, allowing real-mode reentry while retaining active memory and returning control to the program that initiated the reset. (The BIOS is involved because it obtains control directly after a CPU reset.)
Open-source software is a type of computer software in which source code is released under a license in which the copyright holder grants users the rights to study, change and distribute the software to anyone and for any purpose. Open-source software may be developed in a collaborative public manner. Open-source software is a prominent example of open collaboration. Open-source software development can generate a more diverse scope of design perspectives than any one company is capable of developing and sustaining long term. A 2008 report by the Standish Group stated that adoption of open-source software models has resulted in savings of about $60 billion per year for consumers. In the early days of computing, programmers and developers shared software in order to learn from each other and evolve the field of computing. The open-source notion eventually moved to the wayside with the commercialization of software in the years 1970-1980. However, academics still developed software collaboratively; for example, Donald Knuth in 1979 with the TeX typesetting system, and Richard Stallman in 1983 with the GNU operating system.
In 1997, Eric Raymond published The Cathedral and the Bazaar, a reflective analysis of the hacker community and free-software principles. The paper received significant attention in early 1998 and was one factor in motivating Netscape Communications Corporation to release their popular Netscape Communicator Internet suite as free software; this source code subsequently became the basis for SeaMonkey, Mozilla Firefox and KompoZer. Netscape's act prompted Raymond and others to look into how to bring the Free Software Foundation's free software ideas and perceived benefits to the commercial software industry. They concluded that FSF's social activism was not appealing to companies like Netscape, and looked for a way to rebrand the free software movement to emphasize the business potential of sharing and collaborating on software source code. The new term they chose was "open source", which was soon adopted by Bruce Perens, publisher Tim O'Reilly, Linus Torvalds and others. The Open Source Initiative was founded in February 1998 to encourage use of the new term and evangelize open-source principles.
While the Open Source Initiative sought to encourage the use of the new term and evangelize the principles it adhered to, commercial software vendors found themselves threatened by the concept of freely distributed software and universal access to an application's source code. A Microsoft executive publicly stated in 2001 that "open source is an intellectual property destroyer. I can't imagine something that could be worse than this for the software business and the intellectual-property business." However, while free and open-source software has historically played a role outside of the mainstream of private software development, companies as large as Microsoft have begun to develop official open-source presences on the Internet. IBM, Oracle and State Farm are just a few of the companies with a serious public stake in today's competitive open-source market. There has been a significant shift in the corporate philosophy concerning the development of FOSS. The free-software movement was launched in 1983. In 1998, a group of individuals advocated that the term free software should be replaced by open-source software as an expression that is less ambiguous and more comfortable for the corporate world.
Software licenses grant rights to users which would otherwise be reserved by copyright law to the copyright holder. Several open-source software licenses have qualified within the boundaries of the Open Source Definition. The most prominent and popular example is the GNU General Public License (GPL), which "allows free distribution under the condition that further developments and applications are put under the same licence", thus also free. The "open source" label came out of a strategy session held on April 7, 1998 in Palo Alto in reaction to Netscape's January 1998 announcement of a source code release for Navigator. The group of individuals at the session included Tim O'Reilly, Linus Torvalds, Tom Paquin, Jamie Zawinski, Larry Wall, Brian Behlendorf, Sameer Parekh, Eric Allman, Greg Olson, Paul Vixie, John Ousterhout, Guido van Rossum, Philip Zimmermann, John Gilmore and Eric S. Raymond. They used the opportunity before the release of Navigator's source code to clarify a potential confusion caused by the ambiguity of the word "free" in English.
Many people claimed that the birth of the Internet, since 1969, started the open-source movement, while others do not distinguish between the open-source and free software movements. The Free Software Foundation, founded in 1985, intended the word "free" to mean freedom to distribute, not freedom from cost.
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%. MacOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android accounted for up to 70% of use in 2017; according to third-quarter 2016 data, Android on smartphones was dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a yearly decrease in market share of 5.2 percent, while other operating systems amounted to just 0.3 percent.
Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or cooperative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux, as well as non-Unix-like systems, such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
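The difference can be made concrete with a toy cooperative scheduler. In the C sketch below, each "task" is just a function that performs one slice of work and then returns, which stands in for voluntarily yielding; a real system would save and restore full register and stack contexts. All names are hypothetical.

```c
#include <stdbool.h>
#include <stdio.h>

#define NTASKS 2

typedef bool (*task_fn)(void);   /* returns false when the task is finished */

/* Each task does one slice of work per call, then yields by returning. */
bool task_a(void) { static int n = 0; printf("A%d ", n); return ++n < 3; }
bool task_b(void) { static int n = 0; printf("B%d ", n); return ++n < 3; }

int main(void) {
    task_fn tasks[NTASKS] = { task_a, task_b };
    bool alive[NTASKS] = { true, true };
    int remaining = NTASKS;
    while (remaining > 0)
        for (int i = 0; i < NTASKS; i++)     /* simple round-robin order */
            if (alive[i] && !tasks[i]()) {
                alive[i] = false;
                remaining--;
            }
    printf("\n");                            /* prints: A0 B0 A1 B1 A2 B2 */
    return 0;
}
```

A task that never returned would hang the whole system, which is precisely the weakness of cooperative multitasking that preemptive designs avoid.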
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines like PDAs with less autonomy. They are able to operate with a limited number of resources, and are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
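The contrast can be sketched with a toy dispatch rule: an event-driven scheduler always picks the highest-priority ready task whenever an event changes the ready set. The C sketch below is an illustrative simplification (the task names, priorities and priority convention are all assumptions, and real RTOS schedulers also handle preemption and blocking):

```c
#include <stdio.h>

#define NTASKS 3

typedef struct {
    const char *name;
    int priority;   /* higher number = more urgent (a convention assumed here) */
    int ready;
} task;

/* Return the index of the highest-priority ready task, or -1 if none. */
int pick_next(const task *t, int n) {
    int best = -1;
    for (int i = 0; i < n; i++)
        if (t[i].ready && (best < 0 || t[i].priority > t[best].priority))
            best = i;
    return best;
}

int main(void) {
    task t[NTASKS] = {
        { "logger",        1, 1 },
        { "motor-control", 9, 0 },
        { "ui",            3, 1 },
    };
    printf("runs: %s\n", t[pick_next(t, NTASKS)].name);   /* ui */
    t[1].ready = 1;                  /* an external event arrives */
    printf("runs: %s\n", t[pick_next(t, NTASKS)].name);   /* motor-control */
    return 0;
}
```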
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries, interrupts and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards.
The 8086 is a 16-bit microprocessor chip designed by Intel between early 1976 and June 8, 1978, when it was released. The Intel 8088, released July 1, 1979, is a modified chip with an external 8-bit data bus, and is notable as the processor used in the original IBM PC design, including the widespread version called IBM PC XT. The 8086 gave rise to the x86 architecture, which became Intel's most successful line of processors. On June 5, 2018, Intel released a limited-edition CPU celebrating the 40th anniversary of the Intel 8086, called the Intel Core i7-8086K. In 1972, Intel launched the 8008, the first 8-bit microprocessor. It implemented an instruction set designed by Datapoint Corporation with programmable CRT terminals in mind, which proved to be general-purpose. The device needed several additional ICs to produce a functional computer, in part due to it being packaged in a small 18-pin "memory package", which ruled out the use of a separate address bus. Two years later, Intel launched the 8080, employing the new 40-pin DIL packages developed for calculator ICs to enable a separate address bus.
It had an extended instruction set that was source-compatible with the 8008 and included some 16-bit instructions to make programming easier. The 8080 device was eventually replaced by the depletion-load-based 8085, which sufficed with a single +5 V power supply instead of the three different operating voltages of earlier chips. Other well-known 8-bit microprocessors that emerged during these years are the Motorola 6800, General Instrument PIC16X, MOS Technology 6502, Zilog Z80 and Motorola 6809. The 8086 project started in May 1976 and was intended as a temporary substitute for the ambitious and delayed iAPX 432 project. It was an attempt to draw attention from the less-delayed 16- and 32-bit processors of other manufacturers and at the same time to counter the threat from the Zilog Z80, which became very successful. Both the architecture and the physical chip were therefore developed rather quickly by a small group of people, using the same basic microarchitecture elements and physical implementation techniques as employed for the older 8085.
Marketed as source compatible, the 8086 was designed to allow assembly language for the 8008, 8080 or 8085 to be automatically converted into equivalent 8086 source code, with little or no hand-editing. The programming model and instruction set are loosely based on the 8080 in order to make this possible. However, the 8086 design was expanded to support full 16-bit processing, instead of the limited 16-bit capabilities of the 8080 and 8085. New kinds of instructions were added as well; instructions directly supporting nested ALGOL-family languages such as Pascal and PL/M are one example. According to principal architect Stephen P. Morse, this was a result of a more software-centric approach than in the design of earlier Intel processors. Other enhancements included microcoded multiply and divide instructions and a bus structure better adapted to future coprocessors and multiprocessor systems. The first revision of the instruction set and high-level architecture was ready after about three months, and as almost no CAD tools were used, four engineers and 12 layout people worked simultaneously on the chip.
The 8086 took a little more than two years from idea to working product, which was considered rather fast for a complex design in 1976-1978. The 8086 was sequenced using a mixture of random logic and microcode and was implemented using depletion-load nMOS circuitry with about 20,000 active transistors. It was soon moved to a new refined nMOS manufacturing process called HMOS that Intel developed for the manufacturing of fast static RAM products. This was followed by HMOS-II and HMOS-III versions, and eventually a fully static CMOS version for battery-powered devices, manufactured using Intel's CHMOS processes. The original chip had a minimum feature size of 3.2 μm. The architecture was defined by Stephen P. Morse with some help and assistance by Bruce Ravenel in refining the final revisions. Logic designer Jim McKevitt and John Bayliss were the lead engineers of the hardware-level development team, and Bill Pohlman was the manager for the project. The legacy of the 8086 is enduring in the basic instruction set of today's personal computers and servers.
All internal registers, as well as internal and external data buses, are 16 bits wide, which established the "16-bit microprocessor" identity of the 8086. A 20-bit external address bus provides a 1 MB physical address space. This address space is addressed by means of internal memory "segmentation". The data bus is multiplexed with the address bus in order to fit all of the control lines into a standard 40-pin dual in-line package. The chip also provides a 16-bit I/O address bus. The maximum linear address space is limited to 64 KB, simply because internal address and index registers are only 16 bits wide.
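The segmentation scheme mentioned above is simple enough to state as arithmetic: a physical address is formed as segment * 16 + offset, truncated to the 20-bit bus. The short C sketch below works through this, including the fact that many segment:offset pairs alias the same physical address and that addresses past 1 MB wrap around on the 8086 (the example values are arbitrary):

```c
#include <stdint.h>
#include <stdio.h>

/* The 8086 forms a 20-bit physical address from two 16-bit values:
 * physical = segment * 16 + offset. The masking reflects the 20-bit
 * external address bus, which wraps past 1 MB. */
uint32_t physical_address(uint16_t segment, uint16_t offset) {
    return (((uint32_t)segment << 4) + offset) & 0xFFFFF;
}

int main(void) {
    /* Many segment:offset pairs alias the same physical address. */
    printf("%05X\n", physical_address(0xB800, 0x0000));   /* B8000 */
    printf("%05X\n", physical_address(0xB000, 0x8000));   /* B8000 again */
    /* FFFF:0010 would exceed 1 MB, so it wraps back to 00000 on the 8086. */
    printf("%05X\n", physical_address(0xFFFF, 0x0010));   /* 00000 */
    return 0;
}
```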
A computer terminal is an electronic or electromechanical hardware device used for entering data into, and displaying or printing data from, a computer or a computing system. The teletype was an example of an early-day hardcopy terminal and predated the use of a computer screen by decades. The acronym CRT (cathode-ray tube), which once referred to a computer terminal, has come to refer to a type of screen of a personal computer. Early terminals were inexpensive devices but slow compared to punched cards or paper tape for input; as the technology improved and video displays were introduced, terminals pushed these older forms of interaction from the industry. A related development was timesharing systems, which evolved in parallel and made up for any inefficiencies of the user's typing ability with the ability to support multiple users on the same machine, each at their own terminal. The function of a terminal is typically confined to the transcription and input of data; a terminal that depends on the host computer for its processing power is called a "dumb terminal" or a thin client.
A personal computer can run terminal emulator software that replicates the function of a terminal, sometimes allowing concurrent use of local programs and access to a distant terminal host system. The terminal of the first working programmable automatic digital Turing-complete computer, the Z3, had a keyboard and a row of lamps to show results. Early user terminals connected to computers were electromechanical teleprinters/teletypewriters, such as the Teletype Model 33 ASR, used for telegraphy, or the Friden Flexowriter. Keyboard/printer terminals that came later included the IBM 2741 and the DECwriter LA30. Respective top speeds of teletypes, the IBM 2741 and the LA30 were 10, 15 and 30 characters per second. Although at that time "paper was king", the speed of interaction was limited. Early video computer displays were sometimes nicknamed "Glass TTYs" or "Visual Display Units"; they used no CPU, instead relying on individual logic gates or primitive LSI chips. They became popular input/output devices on many different types of computer systems once several suppliers gravitated to a set of common standards: the ASCII character set (though early/economy models supported only capital letters); RS-232 serial ports; and 24 lines of 80 characters of text.
Other common features included two character-width settings on some models, some type of cursor that could be positioned, and the implementation of at least three control codes: Carriage Return, Line Feed and Bell, though often many more, such as escape sequences providing underlining, dim or reverse-video character highlighting, the clearing of the display, and cursor positioning. The Datapoint 3300 from Computer Terminal Corporation, announced in 1967 and shipped in 1969, was one of the earliest stand-alone display-based terminals. It solved the memory space issue mentioned above by using a digital shift-register design, and by using only 72 columns rather than the more common choice of 80. Starting with the Datapoint 3300, by the late 1970s and early 1980s there were dozens of manufacturers of terminals, including Lear-Siegler, ADDS, Data General, DEC, Hazeltine Corporation, Heath/Zenith, Hewlett-Packard, IBM, Volker-Craig and Wyse, many of which had incompatible command sequences. The great variations in the control codes between makers gave rise to software that identified and grouped terminal types so the system software would display input forms using the appropriate control codes.
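To make those control codes concrete, the C sketch below emits the three near-universal codes (Carriage Return, Line Feed, Bell) followed by a few escape sequences in the VT100/ANSI style, one of the many incompatible dialects of the era and the one that later became standard; terminals from other makers used different sequences for the same effects.

```c
#include <stdio.h>

int main(void) {
    putchar('\a');                  /* Bell (control code 7) */
    printf("line one\r\n");         /* Carriage Return + Line Feed */
    printf("\x1b[2J");              /* ANSI: clear the display */
    printf("\x1b[5;10H");           /* ANSI: move cursor to row 5, column 10 */
    printf("\x1b[4munderlined\x1b[0m\r\n");      /* ANSI: underline on/off */
    printf("\x1b[7mreverse video\x1b[0m\r\n");   /* ANSI: reverse video */
    return 0;
}
```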
The great majority of terminals were monochrome, with manufacturers variously offering green, white or amber and sometimes blue screen phosphors. Terminals with modest color capability were also available but not widely used. An "intelligent" terminal does its own processing, usually implying that a microprocessor is built in, but not all terminals with microprocessors did any real processing of input: the main computer to which a terminal was attached would have to respond quickly to each keystroke. The term "intelligent" in this context dates from 1969. Notable examples include the IBM 2250 and IBM 2260, predecessors to the IBM 3270 and introduced with System/360 in 1964. Most terminals were connected to minicomputers or mainframe computers and had a green or amber screen. Terminals communicated with the host computer, typically over a serial connection such as RS-232.
CP/M, originally standing for Control Program/Monitor and later Control Program for Microcomputers, is a mass-market operating system created in 1974 for Intel 8080/85-based microcomputers by Gary Kildall of Digital Research, Inc. Initially confined to single-tasking on 8-bit processors and no more than 64 kilobytes of memory, later versions of CP/M added multi-user variations and were migrated to 16-bit processors. The combination of CP/M and S-100 bus computers, loosely patterned on the MITS Altair, was an early standard in the microcomputer industry. This computer platform was used in business through the late 1970s and into the mid-1980s. CP/M increased the market size for both hardware and software by reducing the amount of programming required to install an application on a new manufacturer's computer. An important driver of software innovation was the advent of low-cost microcomputers running CP/M, as independent programmers and hackers bought them and shared their creations in user groups. CP/M was displaced by DOS soon after the 1981 introduction of the IBM PC.
A minimal 8-bit CP/M system would contain the following components: a computer terminal using the ASCII character set; an Intel 8080 (or later Zilog Z80) microprocessor; at least 16 kilobytes of RAM, beginning at address 0; a means to bootstrap the first sector of the diskette; and at least one floppy disk drive. (The NEC V20 and V30 processors support an 8080-emulation mode that can run 8-bit CP/M on a PC DOS/MS-DOS computer so equipped, though any PC can also run the 16-bit CP/M-86.) The only hardware system that CP/M, as sold by Digital Research, would support was the Intel 8080 Development System. Manufacturers of CP/M-compatible systems customized portions of the operating system for their own combination of installed memory, disk drives and console devices. CP/M would also run on systems based on the Zilog Z80 processor, since the Z80 was compatible with 8080 code. While the core of CP/M as distributed by Digital Research did not use any of the Z80-specific instructions, many Z80-based systems used Z80 code in the system-specific BIOS, and many applications were dedicated to Z80-based CP/M machines.
On most machines the bootstrap was a minimal bootloader in ROM combined with some means of minimal bank switching, or a means of injecting code on the bus. CP/M used the 7-bit ASCII set; the other 128 characters made possible by the 8-bit byte were not standardized. For example, one Kaypro used them for Greek characters, and Osborne machines used the 8th bit set to indicate an underlined character; WordStar used the 8th bit as an end-of-word marker (a sketch of handling this convention follows this paragraph). International CP/M systems most commonly used the ISO 646 norm for localized character sets, replacing certain ASCII characters with localized characters rather than adding them beyond the 7-bit boundary. In the 8-bit versions, while running, the CP/M operating system loaded into memory had three components: the Basic Input/Output System (BIOS), the Basic Disk Operating System (BDOS) and the Console Command Processor (CCP). The BIOS and BDOS were memory-resident, while the CCP was memory-resident unless overwritten by an application, in which case it was automatically reloaded after the application finished running.
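Here is the promised sketch of the 8th-bit convention, assuming WordStar-style text where the high bit marks the last letter of a word: masking each byte with 0x7F recovers plain 7-bit ASCII. The sample bytes below are fabricated for illustration.

```c
#include <stdio.h>

/* Since only 7-bit ASCII was standardized under CP/M, a reader of 8-bit
 * text had to strip (or interpret) the 8th bit. Masking with 0x7F keeps
 * the 7-bit character and discards the marker bit. */
void print_wordstar_text(const unsigned char *s) {
    for (; *s; s++)
        putchar(*s & 0x7F);
}

int main(void) {
    /* "to be" with the high bit set on 'o', as WordStar might store it. */
    const unsigned char sample[] = { 't', 'o' | 0x80, ' ', 'b', 'e', 0 };
    print_wordstar_text(sample);   /* prints: to be */
    putchar('\n');
    return 0;
}
```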
A number of transient commands for standard utilities were also provided. The transient commands resided in files with the extension .COM on disk. The BIOS directly controlled hardware components other than main memory; it contained functions such as character input and output and the reading and writing of disk sectors. The BDOS implemented the CP/M file system and some input/output abstractions on top of the BIOS. The CCP took user commands and either executed them directly (built-in commands such as DIR, to show a directory, or ERA, to delete a file) or loaded and started an executable file of the given name. Third-party applications for CP/M were essentially transient commands. The BDOS, CCP and standard transient commands were the same in all installations of a particular revision of CP/M, but the BIOS portion was always adapted to the particular hardware. Adding memory to a computer, for example, meant that the CP/M system had to be reinstalled with an updated BIOS capable of addressing the additional memory. A utility was provided to patch the supplied BIOS, BDOS and CCP to allow them to be run from higher memory.
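The CCP's dispatch logic described above can be sketched as follows. This is a schematic illustration in C, not Digital Research's code: the built-in command list matches CP/M's documented internal commands, but the "loading" of a transient .COM file into the transient program area at address 0100h is only simulated by a message.

```c
#include <stdio.h>
#include <string.h>

/* Commands the CCP handled itself, without loading anything from disk. */
static const char *builtins[] = { "DIR", "ERA", "REN", "TYPE", "SAVE" };

void ccp_dispatch(const char *cmd) {
    for (size_t i = 0; i < sizeof builtins / sizeof *builtins; i++)
        if (strcmp(cmd, builtins[i]) == 0) {
            printf("executing built-in %s\n", cmd);
            return;
        }
    /* Not built in: treat it as a transient command, a .COM file on disk. */
    printf("loading %s.COM into the TPA and jumping to 0100h\n", cmd);
}

int main(void) {
    ccp_dispatch("DIR");     /* built-in: handled by the CCP itself */
    ccp_dispatch("STAT");    /* transient: loaded from disk, then run */
    return 0;
}
```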
Once installed, the operating system was stored in reserved areas at the beginning of any disk which would be used to boot the system. On start-up, the bootloader would load the operating system from the disk in drive A:. By modern standards CP/M was primitive. With version 1.0 there was no provision for detecting a changed disk. If a user changed disks without manually rereading the disk directory, the system would write on the new disk using the old disk's directory information, ruining the data stored on the disk. From version 1.1 or 1.2 onwards, changing a disk and then trying to write to it before its directory was read would cause a fatal error to be signalled. This avoided overwriting the disk but required a reboot and the loss of the data that was to be stored on disk. The majority of the complexity in CP/M was isolated in the BDOS and, to a lesser extent, the CCP and transient commands. This meant that by porting the limited number of simple routines in the BIOS to a particular hardware platform, the entire OS would work.