History of the Berkeley Software Distribution
The history of the Berkeley Software Distribution begins in the 1970s. The earliest distributions of Unix from Bell Labs in the 1970s included the source code to the operating system, allowing researchers at universities to modify and extend Unix. The operating system arrived at Berkeley in 1974 at the request of computer science professor Bob Fabry, who had been on the program committee for the Symposium on Operating Systems Principles where Unix was first presented. A PDP-11/45 was bought to run the system, but for budgetary reasons the machine was shared with the mathematics and statistics groups at Berkeley, who used RSTS, so Unix ran on it for only eight hours per day. A larger PDP-11/70 was installed at Berkeley the following year, using money from the Ingres database project. In 1975, Ken Thompson took a sabbatical from Bell Labs and came to Berkeley as a visiting professor; there he began working on a Pascal implementation for the system. Graduate students Chuck Haley and Bill Joy improved Thompson's Pascal and implemented an improved text editor, ex.
Other universities became interested in the software at Berkeley, so in 1977 Joy started compiling the first Berkeley Software Distribution (1BSD), which was released on March 9, 1978. 1BSD was an add-on to Version 6 Unix rather than a complete operating system in its own right; some thirty copies were sent out. The Second Berkeley Software Distribution (2BSD), released in May 1979, included updated versions of the 1BSD software as well as two new programs by Joy that persist on Unix systems to this day: the vi text editor and the C shell. Some 75 copies of 2BSD were sent out by Bill Joy. A further feature was a networking package called Berknet, developed by Eric Schmidt as part of his master's thesis work, which could connect up to twenty-six computers and provided email and file transfer. After 3BSD had come out for the VAX line of computers, new releases of 2BSD for the PDP-11 were still issued and distributed through USENIX; 2.9BSD from 1983 included code from 4.1cBSD and was the first release that was a full OS rather than a set of applications and patches.
The most recent release, 2.11BSD, was first issued in 1992. In the 21st century, maintenance updates from volunteers continued: patch 451 was released on December 22, 2018. A VAX computer was installed at Berkeley in 1978, but the port of Unix to the VAX architecture, UNIX/32V, did not take advantage of the VAX's virtual memory capabilities. The kernel of 32V was rewritten by Berkeley students to include a virtual memory implementation, and a complete operating system including the new kernel, ports of the 2BSD utilities to the VAX, and the utilities from 32V was released as 3BSD at the end of 1979. 3BSD was alternatively called Virtual VAX/UNIX or VMUNIX, and BSD kernel images were normally called /vmunix until 4.4BSD. The success of 3BSD was a major factor in the Defense Advanced Research Projects Agency's (DARPA) decision to fund Berkeley's Computer Systems Research Group (CSRG), which would develop a standard Unix platform for future DARPA research in the VLSI Project. 4BSD offered a number of enhancements over 3BSD, notably job control in the released csh, delivermail (the antecedent of sendmail), "reliable" signals, and the Curses programming library.
In a 1985 review of BSD releases, John Quarterman et al. wrote: "4BSD was the operating system of choice for VAXs from the beginning until the release of System III. Most organizations would buy a 32V license and order 4BSD from Berkeley without bothering to get a 32V tape." Many installations inside the Bell System ran 4.1BSD, which was a response to criticisms of BSD's performance relative to the dominant VAX operating system, VMS. The 4.1BSD kernel was systematically tuned by Bill Joy until it could perform as well as VMS on several benchmarks. The release would have been called 5BSD, but the name was changed to avoid confusion with AT&T's UNIX System V. Before the official release of 4.2BSD came three intermediate versions: 4.1a incorporated a modified version of BBN's preliminary TCP/IP implementation; 4.1b added the new Berkeley Fast File System; and 4.1c was an interim release during the last few months of 4.2BSD's development. Back at Bell Labs, 4.1cBSD became the basis of the 8th Edition of Research Unix, and a commercially supported version was available from mtXinu. To guide the design of 4.2BSD, Duane Adams of DARPA formed a "steering committee" consisting of Bob Fabry, Bill Joy and Sam Leffler from UCB, Alan Nemeth and Rob Gurwitz from BBN, Dennis Ritchie from Bell Labs, Keith Lantz from Stanford, Rick Rashid from Carnegie-Mellon, Bert Halstead from MIT, Dan Lynch from ISI, and Gerald J. Popek of UCLA.
The committee met from April 1981 to June 1983. Apart from the Fast File System, several features from outside contributors were accepted, including disk quotas and job control. Sun Microsystems provided testing on its Motorola 68000 machines prior to release, ensuring portability of the system; the official 4.2BSD release came in August 1983. It was notable as the first version released after the 1982 departure of Bill Joy to co-found Sun Microsystems.
Hard disk drive
A hard disk drive (HDD), hard disk, hard drive, or fixed disk is an electromechanical data storage device that uses magnetic storage to store and retrieve digital information using one or more rigid rotating disks (platters) coated with magnetic material. The platters are paired with magnetic heads, arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order, not only sequentially. HDDs are a type of non-volatile storage, retaining stored data even when powered off. Introduced by IBM in 1956, HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDDs, though after extensive industry consolidation most units are manufactured by Seagate, Toshiba, and Western Digital. HDDs dominate the volume of storage produced for servers.
Though production is growing, sales revenues and unit shipments are declining, because solid-state drives (SSDs) have higher data-transfer rates, higher areal storage density, better reliability, and much lower latency and access times. The revenues for SSDs, most of which use NAND flash memory, exceed those for HDDs. Though SSDs have nearly ten times higher cost per bit, they are replacing HDDs in applications where speed, power consumption, small size, and durability are important. The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes (GB). Some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly by inbuilt redundancy for error correction and recovery. There is also confusion regarding storage capacity, since capacities are stated in decimal gigabytes by HDD manufacturers, whereas some operating systems report capacities in binary gibibytes, which results in a smaller number than advertised.
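The decimal-versus-binary discrepancy can be checked with a few lines of arithmetic. This is an illustrative sketch; the unit definitions are standard, and the drive size is just an example:

```python
# A "1 TB" drive as advertised: decimal units, 1 TB = 10**12 bytes.
advertised_bytes = 1 * 10**12

# Some operating systems report sizes in binary units instead:
# 1 GiB = 2**30 bytes, 1 TiB = 2**40 bytes.
gib = advertised_bytes / 2**30
tib = advertised_bytes / 2**40

print(f"{gib:.2f} GiB")   # about 931.32 GiB: the familiar "missing" space
print(f"{tib:.3f} TiB")   # about 0.909 TiB
```

The drive has lost nothing; the same number of bytes is simply being divided by a larger unit.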
Performance is specified by the time required to move the heads to a track or cylinder (average access time), plus the time it takes for the desired sector to move under the head (average latency, a function of the rotational speed), plus the speed at which the data is transmitted (data rate). The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as SATA, USB or SAS cables. The first production IBM hard disk drive, the 350 disk storage, shipped in 1957 as a component of the IBM 305 RAMAC system. It was the size of two medium-sized refrigerators and stored five million six-bit characters on a stack of 50 disks. In 1962, the IBM 350 was superseded by the IBM 1301 disk storage unit, which consisted of 50 platters, each about 1/8-inch thick and 24 inches in diameter. While the IBM 350 used only two read/write heads, the 1301 used an array of heads, one per platter, moving as a single unit. Cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches above the platter surface.
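The access-time breakdown above can be sketched with the usual back-of-the-envelope model: average seek time, plus half a revolution of rotational latency, plus transfer time. The drive figures below are illustrative assumptions, not the specifications of any particular drive:

```python
# Assumed, illustrative drive parameters.
rpm = 7200                   # spindle speed
avg_seek_ms = 9.0            # average seek time, in milliseconds
data_rate_mb_s = 150.0       # sustained media transfer rate, MB/s
block_kb = 4                 # size of the requested block, KB

rotation_ms = 60_000 / rpm            # one full revolution: ~8.33 ms at 7200 rpm
avg_latency_ms = rotation_ms / 2      # on average the sector is half a turn away
transfer_ms = block_kb / 1024 / data_rate_mb_s * 1000

total_ms = avg_seek_ms + avg_latency_ms + transfer_ms
print(f"~{total_ms:.2f} ms per random 4 KB read")
```

Note how the mechanical terms (seek and rotation) dwarf the transfer term for small random reads; this is the gap SSDs close.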
Motion of the head array depended upon a binary adder system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three home refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes. Access time was about a quarter of a second. Also in 1962, IBM introduced the model 1311 disk drive, which was about the size of a washing machine and stored two million characters on a removable disk pack. Users could interchange packs as needed, much like reels of magnetic tape. Models of removable pack drives, from IBM and others, became the norm in most computer installations and reached capacities of 300 megabytes by the early 1980s. Non-removable HDDs were called "fixed disk" drives. Some high-performance HDDs were manufactured with one head per track so that no time was lost physically moving the heads to a track. Known as fixed-head or head-per-track disk drives, they were expensive and are no longer in production. In 1973, IBM introduced a new type of HDD code-named "Winchester".
Its primary distinguishing feature was that the disk heads were not withdrawn from the stack of disk platters when the drive was powered down. Instead, the heads were allowed to "land" on a special area of the disk surface upon spin-down, "taking off" again when the disk was powered on. This reduced the cost of the head actuator mechanism, but precluded removing just the disks from the drive as was done with the disk packs of the day. Instead, the first models of "Winchester technology" drives featured a removable disk module, which included both the disk pack and the head assembly, leaving the actuator motor in the drive upon removal. Later "Winchester" drives abandoned the removable-media concept and returned to non-removable platters. Like the first removable pack drive, the first "Winchester" drives used platters 14 inches in diameter. A few years later, designers were exploring the possibility that physically smaller platters might offer advantages. Drives with non-removable eight-inch platters appeared, and then drives that used a 5 1⁄4-inch form factor.
The latter were intended for the then-fledgling personal computer market.
IBM PC compatible
IBM PC compatible computers are computers similar to the original IBM PC, XT, and AT, able to use the same software and expansion cards. Such computers used to be referred to as PC clones, or IBM clones; they duplicate almost exactly all the significant features of the PC architecture, facilitated by IBM's choice of commodity hardware components and various manufacturers' ability to reverse engineer the BIOS firmware using a "clean room design" technique. Columbia Data Products built the first clone of the IBM personal computer by a clean room implementation of its BIOS. Early IBM PC compatibles used the same computer bus as the original PC and XT models; the IBM AT-compatible bus was later named the Industry Standard Architecture (ISA) bus by manufacturers of compatible computers. The term "IBM PC compatible" is now a historical description only, since IBM has ended its personal computer sales. Descendants of the IBM PC compatibles comprise the majority of personal computers on the market today, with the dominant operating system being Microsoft Windows, although interoperability with the bus structure and peripherals of the original PC architecture may be limited or non-existent.
Some computers ran MS-DOS but had enough hardware differences that IBM-compatible software could not be used; only the Macintosh kept significant market share without compatibility with the IBM PC. IBM decided in 1980 to market a low-cost single-user computer as quickly as possible in response to Apple Computer's success in the burgeoning microcomputer market. On 12 August 1981, the first IBM PC went on sale. There were three operating systems available for it; the least expensive and most popular was PC DOS, made by Microsoft. In a crucial concession, IBM's agreement allowed Microsoft to sell its own version, MS-DOS, for non-IBM computers. The only component of the original PC architecture exclusive to IBM was the BIOS. IBM at first asked developers to avoid writing software that addressed the computer's hardware directly and to instead make standard calls to BIOS functions that carried out hardware-dependent operations; such software would run on any machine using MS-DOS or PC DOS. However, software that directly addressed the hardware instead of making standard calls was faster; this was particularly relevant to games.
Software addressing IBM PC hardware in this way would not run on MS-DOS machines with different hardware. The IBM PC was sold in high enough volumes to justify writing software specifically for it, and this encouraged other manufacturers to produce machines which could use the same programs, expansion cards, and peripherals as the PC; the 808x computer marketplace excluded all machines which were not hardware- and software-compatible with the PC. The 640 KB barrier on "conventional" system memory available to MS-DOS is a legacy of that period. Rumors of "lookalike", compatible computers, created without IBM's approval, began immediately after the IBM PC's release. InfoWorld wrote on the first anniversary of the IBM PC: "The dark side of an open system is its imitators. If the specs are clear enough for you to design peripherals, they are clear enough for you to design imitations. Apple... has patents on two important components of its systems... IBM, which has no special patents on the PC, is more vulnerable. Numerous PC-compatible machines—the grapevine says 60 or more—have begun to appear in the marketplace."
By June 1983, PC Magazine defined "PC 'clone'" as "a computer [that can] accommodate the user who takes a disk home from an IBM PC, walks across the room, and plugs it into the 'foreign' machine". Because of a shortage of IBM PCs that year, many customers purchased clones instead. Columbia Data Products produced the first computer more or less compatible with the IBM PC standard in June 1982, soon followed by Eagle Computer. Compaq announced its first IBM PC compatible in November 1982, the Compaq Portable; the Compaq was the first sewing-machine-sized portable computer that was essentially 100% PC-compatible. The company could not copy the BIOS directly as a result of the court decision in Apple v. Franklin, but it could reverse-engineer the IBM BIOS and then write its own BIOS using clean room design. At the same time, many manufacturers such as Tandy/RadioShack, Hewlett-Packard, Digital Equipment Corporation, Texas Instruments, Tulip and Olivetti introduced personal computers that supported MS-DOS but were not software- or hardware-compatible with the IBM PC.
Tandy described the Tandy 2000, for example, as having a "'next generation' true 16-bit CPU", with "More speed. More disk storage. More expansion" than the IBM PC or "other MS-DOS computers". While admitting in 1984 that many MS-DOS programs did not support the computer, the company stated that "the most popular, sophisticated software on the market" was available, either immediately or "over the next six months". Like IBM, Microsoft's intention was that application writers would write to the application programming interfaces in MS-DOS or the firmware BIOS, and that this would form what would now be termed a hardware abstraction layer. Each computer would have its own Original Equipment Manufacturer (OEM) version of MS-DOS, customized to its hardware. Any software written for MS-DOS would then operate on any MS-DOS computer, despite variations in hardware design. This expectation seemed reasonable in the computer marketplace of the time: until then, Microsoft's business had been based primarily on computer languages such as BASIC. The established small-system operating software was CP/M from Digital Research, in use both at the hobbyist level and by the more professional of those using microcomputers.
Computer

A computer is a device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs; these programs enable computers to perform a wide range of tasks. A "complete" computer, including the hardware, the operating system, and the peripheral equipment required and used for "full" operation, can be referred to as a computer system; this term may as well be used for a group of computers that are connected and work together, in particular a computer network or computer cluster. Computers are used as control systems for a wide variety of industrial and consumer devices; this includes simple special-purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer-aided design systems, and general-purpose devices like personal computers and mobile devices such as smartphones. The Internet is run on computers and it connects hundreds of millions of other computers and their users.
Early computers were only conceived as calculating devices. Since ancient times, simple manual devices like the abacus have aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century; the first digital electronic calculating machines were developed during World War II. The speed and versatility of computers have been increasing ever since. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices, output devices, and input/output devices that perform both functions. Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved.
According to the Oxford English Dictionary, the first known use of the word "computer" was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: "I haue read the truest computer of Times, the best Arithmetician that euer breathed, he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. During the latter part of this period women were hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations; the Online Etymology Dictionary gives the first attested use of "computer" in the 1640s, meaning "one who calculates". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' is from 1897."
The Online Etymology Dictionary indicates that the "modern use" of the term, to mean "programmable digital electronic computer", dates from "1945 under this name". Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. Record-keeping aids throughout the Fertile Crescent included calculi, which represented counts of items, likely livestock or grains, sealed in hollow unbaked clay containers; the use of counting rods is one example. The abacus was used for arithmetic tasks; the Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money. The Antikythera mechanism is believed to be the earliest mechanical analog "computer", according to Derek J. de Solla Price.
It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BC. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use; the planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD.
The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions such as squares and cube roots, was developed in the late 16th century.
File system

In computing, a file system or filesystem controls how data is stored and retrieved. Without a file system, information placed in a storage medium would be one large body of data with no way to tell where one piece of information stops and the next begins. By separating the data into pieces and giving each piece a name, the information is isolated and identified. Taking its name from the way paper-based information systems are named, each group of data is called a "file"; the structure and logic rules used to manage the groups of information and their names are called a "file system". There are many different kinds of file systems; each one has different structure and logic, and different properties of speed, security and more. Some file systems have been designed for specific applications; for example, the ISO 9660 file system is designed for optical discs. File systems can be used on numerous different types of storage devices that use different kinds of media. As of 2019, hard disk drives have been key storage devices and are projected to remain so for the foreseeable future.
Other kinds of media that are used include SSDs, magnetic tapes, and optical discs. In some cases, such as with tmpfs, the computer's main memory is used to create a temporary file system for short-term use. Some file systems are used on local data storage devices; others provide file access via a network protocol. Some file systems are "virtual", meaning that the supplied "files" are computed on request or are merely a mapping into a different file system used as a backing store. The file system manages access to both the content of files and the metadata about those files. It is responsible for arranging storage space. Before the advent of computers the term file system was used to describe a method of storing and retrieving paper documents. By 1961 the term was being applied to computerized filing alongside the original meaning; by 1964 it was in general use. A file system consists of three layers; sometimes the layers are explicitly separated, and sometimes the functions are combined. The logical file system is responsible for interaction with the user application. It provides the application program interface (API) for file operations — OPEN, CLOSE, READ, etc. — and passes the requested operation to the layer below it for processing.
The logical file system "manage[s] open file table entries and per-process file descriptors" and provides "file access, directory operations and protection". The second, optional layer is the virtual file system: "This interface allows support for multiple concurrent instances of physical file systems, each of which is called a file system implementation." The third layer is the physical file system. This layer is concerned with the physical operation of the storage device; it processes the physical blocks being read or written. It handles buffering and memory management and is responsible for the physical placement of blocks in specific locations on the storage medium. The physical file system interacts with the device drivers or with the channel to drive the storage device. (Note: this only applies to file systems used in storage devices.) File systems allocate space in a granular manner, usually as multiple physical units on the device. The file system is responsible for organizing files and directories, and for keeping track of which areas of the media belong to which file and which are not being used.
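The three-layer split can be illustrated with a deliberately tiny sketch. Every class and method name here is hypothetical, and a real implementation (such as the Linux VFS) is vastly more involved; this only shows how a request flows from the application-facing logical layer, through a virtual-file-system dispatch, down to block-level physical operations:

```python
class PhysicalFileSystem:
    """Bottom layer: reads and writes fixed-size blocks on a (fake) device."""
    def __init__(self, block_size=512):
        self.block_size = block_size
        self.blocks = {}                       # block number -> bytes

    def read_block(self, n):
        return self.blocks.get(n, b"\x00" * self.block_size)

    def write_block(self, n, data):
        self.blocks[n] = data[:self.block_size]


class VirtualFileSystem:
    """Middle layer: dispatches each path to one of several mounted
    file system implementations."""
    def __init__(self):
        self.mounts = {}                       # mount point -> physical fs

    def mount(self, point, fs):
        self.mounts[point] = fs

    def fs_for(self, path):
        # Longest-prefix match over mount points.
        best = max((p for p in self.mounts if path.startswith(p)), key=len)
        return self.mounts[best]


class LogicalFileSystem:
    """Top layer: the API seen by applications (read/write by file name)."""
    def __init__(self, vfs):
        self.vfs = vfs
        self.name_to_block = {}                # toy directory: one block per file

    def write(self, path, data):
        fs = self.vfs.fs_for(path)
        block = self.name_to_block.setdefault(path, len(self.name_to_block))
        fs.write_block(block, data)

    def read(self, path):
        fs = self.vfs.fs_for(path)
        return fs.read_block(self.name_to_block[path])


vfs = VirtualFileSystem()
vfs.mount("/", PhysicalFileSystem())
lfs = LogicalFileSystem(vfs)
lfs.write("/notes.txt", b"hello")
print(lfs.read("/notes.txt"))
```

The point of the middle layer is that a second `PhysicalFileSystem` could be mounted at another prefix and the application-facing API would not change.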
For example, in Apple DOS of the early 1980s, 256-byte sectors on a 140-kilobyte floppy disk used a track/sector map. Granular allocation results in unused space when a file is not an exact multiple of the allocation unit, sometimes referred to as slack space. For a 512-byte allocation unit, the average unused space is 256 bytes; for 64 KB clusters, the average unused space is 32 KB. The size of the allocation unit is chosen when the file system is created. Choosing the allocation size based on the average size of the files expected to be in the file system can minimize the amount of unusable space, and the default allocation size may provide reasonable usage. Choosing an allocation size that is too small results in excessive overhead if the file system will contain mostly very large files. File system fragmentation occurs over time as a file system is used and files are created and deleted. When a file is created, the file system allocates space for its data; some file systems permit or require specifying an initial space allocation and subsequent incremental allocations as the file grows.
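The slack-space figures quoted above follow from a simple expectation: if final-cluster occupancy is uniformly distributed, a file wastes on average half an allocation unit. A small sketch (the file count is an arbitrary assumption for illustration):

```python
def avg_slack(allocation_unit_bytes):
    # Expected wasted space in a file's last cluster, assuming file sizes
    # are uniformly distributed modulo the allocation unit.
    return allocation_unit_bytes / 2

for unit in (512, 4096, 64 * 1024):
    print(f"{unit:>6}-byte clusters: ~{avg_slack(unit):.0f} bytes of slack per file")

# Total slack scales with the number of files, not their sizes, so many small
# files on large clusters waste the most space overall.
files = 100_000                      # assumed file count, for illustration
total_mib = files * avg_slack(64 * 1024) / 2**20
print(f"{files} files on 64 KB clusters: ~{total_mib:.0f} MiB of slack")
```

This is why the text recommends matching the allocation size to the expected file-size distribution.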
As files are deleted, the space they were allocated is considered available for use by other files. This creates alternating used and unused areas of various sizes; this is free space fragmentation. When a file is created and there is not an area of contiguous space available for its initial allocation, the space must be assigned in fragments. When a file is modified such that it becomes larger, it may exceed the space allocated to it; another allocation must be assigned elsewhere and the file becomes fragmented. A filename is used to identify a storage location in the file system. Most file systems have restrictions on the length of filenames. In some file systems, filenames are not case sensitive. Most modern file systems allow filenames to contain a wide range of characters from the Unicode character set. However, they may have restrictions on the use of certain special characters within filenames.
Booting

In computing, booting is starting up a computer or computer appliance until it can be used. It can be initiated by a software command. After the power is switched on, the computer is relatively dumb and can read only part of its storage, called read-only memory (ROM). A small program stored there, called firmware, performs power-on self-tests and, most importantly, allows access to other types of memory such as a hard disk and main memory. The firmware loads bigger programs into the computer's main memory and runs them. In general-purpose computers, and additionally in smartphones and tablets, a boot manager may optionally be run; the boot manager lets a user choose which operating system to run and set more complex parameters for it. The firmware or the boot manager then loads the boot loader into memory and runs it; this piece of software is able to place an operating system kernel, such as that of Windows or Linux, into the computer's main memory and run it. Afterwards, the kernel runs so-called user space software – well known is the graphical user interface, which lets the user log in to the computer or run some other applications.
The whole process may take from tenths of a second to several seconds on modern-day general-purpose computers. Restarting a computer is called rebooting, which can be "hard", e.g. after electrical power to the CPU is switched from off to on, or "soft", where the power is not cut. On some systems, a soft boot may optionally clear RAM to zero. Both hard and soft booting can be initiated by hardware such as a button press or by a software command. Booting is complete when the operative runtime system, typically the operating system and some applications, is attained. The process of returning a computer from a state of hibernation or sleep does not involve booting. Minimally, some embedded systems do not require a noticeable boot sequence to begin functioning and when turned on may simply run operational programs that are stored in ROM. All computing systems are state machines, and a reboot may be the only method to return to a designated zero-state from an unintended, locked state. In addition to loading an operating system or stand-alone utility, the boot process can also load a storage dump program for diagnosing problems in an operating system.
Boot is short for bootstrap or bootstrap load and derives from the phrase to pull oneself up by one's bootstraps. The usage calls attention to the requirement that, if most software is loaded onto a computer by other software already running on the computer, some mechanism must exist to load the initial software onto the computer. Early computers used a variety of ad-hoc methods to get a small program into memory to solve this problem. The invention of read-only memory of various types solved this paradox by allowing computers to be shipped with a start-up program that could not be erased. Growth in the capacity of ROM has allowed ever more elaborate start-up procedures to be implemented. There are many different methods available to load a short initial program into a computer; these methods range from simple, physical input to removable media that can hold more complex programs. Early computers in the 1940s and 1950s were one-of-a-kind engineering efforts that could take weeks to program, and program loading was one of many problems that had to be solved.
An early computer, ENIAC, had no "program" stored in memory, but was set up for each problem by a configuration of interconnecting cables. Bootstrapping did not apply to ENIAC, whose hardware configuration was ready for solving problems as soon as power was applied. The EDSAC system, the second stored-program computer to be built, used stepping switches to transfer a fixed program into memory when its start button was pressed. The program stored on this device, which David Wheeler completed in late 1948, loaded further instructions from punched tape and then executed them. The first programmable computers for commercial sale, such as the UNIVAC I and the IBM 701, included features to make their operation simpler. They included instructions that performed a complete input or output operation; the same hardware logic could be used to load the contents of a punch card or other input media, such as a magnetic drum or magnetic tape, that contained a bootstrap program by pressing a single button. This booting concept went by a variety of names for IBM computers of the 1950s and early 1960s, but IBM used the term "Initial Program Load" with the IBM 7030 Stretch and later used it for their mainframe lines, starting with the System/360 in 1964.
The IBM 701 computer had a "Load" button that initiated reading of the first 36-bit word into main memory from a punched card in a card reader, a magnetic tape in a tape drive, or a magnetic drum unit, depending on the position of the Load Selector switch. The left 18-bit half-word was then executed as an instruction, which usually read additional words into memory. The loaded boot program was then executed, which, in turn, loaded a larger program from that medium into memory without further help from the human operator. The term "boot" has been used in this sense since at least 1958. Other IBM computers of that era had similar features. For example, the IBM 1401 system used a card reader to load a program from a punched card. The 80 characters stored in the punched card were read into memory locations 001 to 080, then the computer would branch to memory location 001 to read its first stored instruction. This instruction was always the same: move the information in these first 80 memory locations to an assembly area where the information in punched cards 2, 3, 4, and so on, could be combined to form the stored program.
Once this information was moved to the assembly area, the machine would branch to an instruction in location 080, and the next card would be read.
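The 1401 card-boot cycle described above can be pictured as a simple loop: each card lands in locations 001 to 080, and the bootstrap instruction moves it to an assembly area before the next card is read. A toy sketch of that cycle (the function and variable names are ours; the real machine used character-addressed core memory and machine instructions, not Python):

```python
# Toy model of the IBM 1401 card boot: each 80-column card is read
# into memory locations 001-080, then control branches to location 001,
# whose instruction moves the card image to an assembly area so that
# successive cards build up the stored program.

CARD_WIDTH = 80

def boot_from_cards(cards):
    """Simulate assembling a program from a deck of punched cards."""
    memory = [" "] * 16000              # 1401 core was character-addressed
    assembly_area = []                  # where the program accumulates
    for card in cards:
        image = list(card.ljust(CARD_WIDTH)[:CARD_WIDTH])
        memory[1:1 + CARD_WIDTH] = image    # card reader fills 001-080
        # "branch to 001": the bootstrap moves the card image onward
        assembly_area.extend(memory[1:1 + CARD_WIDTH])
    return "".join(assembly_area)

# A hypothetical three-card deck
program = boot_from_cards(["LOAD A", "ADD  B", "HALT"])
```

Each card contributes exactly 80 characters to the assembled program, mirroring how the fixed-size card image dictated the layout of the loaded code.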
BIOS is non-volatile firmware used to perform hardware initialization during the booting process and to provide runtime services for operating systems and programs. The BIOS firmware comes pre-installed on a personal computer's system board, and it is the first software to run when the machine is powered on; the name originates from the Basic Input/Output System used in the CP/M operating system in 1975. The BIOS proprietary to the IBM PC has been reverse engineered by companies looking to create compatible systems; the interface of that original system serves as a de facto standard. The BIOS in modern PCs initializes and tests the system hardware components, then loads a boot loader from a mass storage device, which in turn initializes an operating system. In the era of DOS, the BIOS provided a hardware abstraction layer for the keyboard and other input/output devices that standardized an interface to application programs and the operating system. More recent operating systems do not use the BIOS after loading; instead they access the hardware components directly.
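The hardware-abstraction role described above can be pictured as a dispatch table: a program requests a service by number, and only the handler behind that number knows about the actual device. A minimal sketch of the idea (the service numbers loosely echo the real INT 10h and INT 16h conventions, but the handlers and machine state here are purely illustrative):

```python
# Toy interrupt-style dispatch: applications ask the "BIOS" for a
# service by number; only the handlers touch the (simulated) hardware.

def video_write(state, char):
    state["screen"] += char          # stand-in for a video adapter
    return state

def keyboard_read(state, _):
    return state["keys"].pop(0)      # stand-in for the keyboard port

SERVICES = {
    0x10: video_write,               # loosely echoes INT 10h (video)
    0x16: keyboard_read,             # loosely echoes INT 16h (keyboard)
}

def bios_call(state, service, arg=None):
    """Route a service request to the device-specific handler."""
    return SERVICES[service](state, arg)

machine = {"screen": "", "keys": ["A", "B"]}
bios_call(machine, 0x10, "H")
bios_call(machine, 0x10, "i")
key = bios_call(machine, 0x16)
```

The point of the indirection is that an application written against the service numbers keeps working when the underlying hardware changes, which is exactly what the DOS-era BIOS interface provided.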
Most BIOS implementations are designed to work with a particular computer or motherboard model by interfacing with the various devices that make up the complementary system chipset. Originally, BIOS firmware was stored in a ROM chip on the PC motherboard. In modern computer systems, the BIOS contents are stored on flash memory so they can be rewritten without removing the chip from the motherboard. This allows easy, end-user updates to the BIOS firmware so new features can be added or bugs can be fixed, but it also creates the possibility for the computer to become infected with BIOS rootkits. Furthermore, a BIOS upgrade that fails can brick the motherboard permanently, unless the system includes some form of backup for this case. The Unified Extensible Firmware Interface (UEFI) is a successor to the legacy PC BIOS, aiming to address its technical shortcomings. The term BIOS was created by Gary Kildall and first appeared in the CP/M operating system in 1975, describing the machine-specific part of CP/M loaded during boot time that interfaces directly with the hardware.
Versions of MS-DOS, PC DOS or DR-DOS contain a file called variously "IO.SYS", "IBMBIO.COM", "IBMBIO.SYS", or "DRBIOS.SYS". Together with the underlying hardware-specific but operating-system-independent "System BIOS", which resides in ROM, it represents the analogue of the "CP/M BIOS". With the introduction of PS/2 machines, IBM divided the System BIOS into real- and protected-mode portions. The real-mode portion was meant to provide backward compatibility with existing operating systems such as DOS, and was therefore named "CBIOS", whereas the "ABIOS" provided new interfaces suited to multitasking operating systems such as OS/2. The BIOS of the original IBM PC and XT had no interactive user interface: error codes or messages were displayed on the screen, or coded series of sounds were generated to signal errors when the power-on self-test had not proceeded to the point of initializing a video display adapter. Options on the IBM PC and XT were set by switches and jumpers on the main board and on expansion cards.
Starting around the mid-1990s, it became typical for the BIOS ROM to include a "BIOS configuration utility" or "BIOS setup utility", accessed at system power-up by a particular key sequence. This program allowed the user to set system configuration options, of the type formerly set using DIP switches, through an interactive menu system controlled through the keyboard. In the interim period, IBM-compatible PCs, including the IBM AT, held configuration settings in battery-backed RAM and used a bootable configuration program on disk, not in the ROM, to set the configuration options contained in this memory. The disk was supplied with the computer, and if it was lost the system settings could not be changed. The same applied in general to computers with an EISA bus, for which the configuration program was called an EISA Configuration Utility. A modern Wintel-compatible computer provides a setup routine essentially unchanged in nature from the ROM-resident BIOS setup utilities of the late 1990s. When errors occur at boot time, a modern BIOS displays user-friendly error messages, often presented as pop-up boxes in a TUI style, and offers to enter the BIOS setup utility or to ignore the error and proceed if possible.
Instead of battery-backed RAM, the modern Wintel machine may store the BIOS configuration settings in flash ROM, possibly the same flash ROM that holds the BIOS itself. Early Intel processors started execution at physical address 000FFFF0h. Systems with later processors provide logic to start running the BIOS from the system ROM. If the system has just been powered up or the reset button was pressed, the full power-on self-test (POST) is run. If Ctrl+Alt+Delete was pressed, a special flag value stored in nonvolatile BIOS memory and tested by the BIOS allows bypass of the lengthy POST and memory detection. The POST identifies and initializes system devices such as the CPU, RAM, interrupt and DMA controllers and other parts of the chipset, the video display card, the hard disk drive, the optical disc drive and other basic hardware. Early IBM PCs had a routine in the POST that would download a program into RAM through the keyboard port and run it; this feature was intended for factory test or diagnostic purposes.
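The reset address mentioned above follows from real-mode segment arithmetic: a physical address is the segment value times 16 plus the offset, so the 8086 reset vector F000:FFF0 lands at 0FFFF0h, sixteen bytes below the 1 MiB boundary. A small check of that arithmetic, together with the warm-boot flag convention (the value 1234h in the BIOS data area signals a Ctrl+Alt+Delete restart; the helper names here are ours):

```python
def real_mode_phys(segment, offset):
    """Real-mode address translation: segments are 16-byte paragraphs."""
    return (segment << 4) + offset

# 8086/8088 reset vector: CS:IP = F000:FFF0
reset = real_mode_phys(0xF000, 0xFFF0)
below_1mib = (1 << 20) - reset      # bytes between reset vector and 1 MiB

# Warm-boot flag: the BIOS stores 1234h in the BIOS data area (word at
# 0040:0072) so that a Ctrl+Alt+Delete restart can skip the memory test.
WARM_BOOT_FLAG = 0x1234

def should_skip_post(flag_word):
    """Return True when the stored flag indicates a warm boot."""
    return flag_word == WARM_BOOT_FLAG
```

Placing the reset vector just under the top of the 1 MiB real-mode address space left room for only a jump instruction there, which is why execution immediately transfers into the body of the system ROM.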