In system programming, an interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing; the processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler to deal with the event. This interruption is temporary: after the interrupt handler finishes, the processor resumes normal activities. There are two types of interrupts: hardware interrupts and software interrupts. Hardware interrupts are used by devices to communicate that they require attention from the operating system. Internally, hardware interrupts are implemented using electronic alerting signals that are sent to the processor from an external device, either a part of the computer itself, such as a disk controller, or an external peripheral. For example, pressing a key on the keyboard or moving the mouse triggers hardware interrupts that cause the processor to read the keystroke or mouse position.
Unlike the software type, hardware interrupts are asynchronous and can occur in the middle of instruction execution, requiring additional care in programming. The act of initiating a hardware interrupt is referred to as an interrupt request. A software interrupt is caused either by an exceptional condition in the processor itself, or by a special instruction in the instruction set which causes an interrupt when it is executed. The former is called a trap or exception and is used for errors or events occurring during program execution that are exceptional enough that they cannot be handled within the program itself. For example, a divide-by-zero exception will be raised if the processor's arithmetic logic unit is commanded to divide a number by zero, as such a division is impossible; the operating system will catch this exception and decide what to do about it, for example aborting the process and displaying an error message. Software interrupt instructions function similarly to subroutine calls and are used for a variety of purposes, such as requesting services from device drivers, like interrupts sent to and from a disk controller to request reading or writing of data to and from the disk.
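The trap path described above surfaces in user programs, via the operating system and language runtime, as a catchable error. As a rough analogy in Python (the interpreter checks for a zero divisor itself rather than relying on the CPU fault, so this illustrates the handling pattern, not the hardware mechanism):

```python
# Illustrative sketch: a divide-by-zero condition can be caught and handled
# instead of aborting the program, much as an OS catches the CPU exception.
def safe_divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        # The "handler": decide what to do instead of crashing.
        return None

print(safe_divide(10, 2))  # 5.0
print(safe_divide(10, 0))  # None
```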
Each interrupt has its own interrupt handler. The number of hardware interrupts is limited by the number of interrupt request lines to the processor, but there may be hundreds of different software interrupts. Interrupts are a commonly used technique for computer multitasking, especially in real-time computing; such a system is said to be interrupt-driven. Interrupts are similar to signals, the difference being that signals are used for inter-process communication, mediated by the kernel and handled by processes, while interrupts are mediated by the processor and handled by the kernel; the kernel may pass an interrupt on as a signal to a process. Hardware interrupts were introduced as an optimization, eliminating unproductive waiting time in polling loops while waiting for external events; the first system to use this approach was the DYSEAC, completed in 1954, although earlier systems provided error trap functions. Interrupts may be implemented in hardware as a distinct system with control lines, or they may be integrated into the memory subsystem.
If implemented in hardware, an interrupt controller circuit such as the IBM PC's Programmable Interrupt Controller (PIC) may be connected between the interrupting device and the processor's interrupt pin to multiplex several sources of interrupt onto the one or two CPU lines available. If implemented as part of the memory controller, interrupts are mapped into the system's memory address space. Interrupts can be categorized into these different types. Maskable interrupt: a hardware interrupt that may be ignored by setting a bit in an interrupt mask register's bit-mask. Non-maskable interrupt (NMI): a hardware interrupt that lacks an associated bit-mask, so that it can never be ignored. NMIs are used for the highest-priority tasks such as timers, especially watchdog timers. Inter-processor interrupt: a special case of interrupt, generated by one processor to interrupt another processor in a multiprocessor system. Software interrupt: an interrupt generated within a processor by executing an instruction. Software interrupts are often used to implement system calls because they result in a subroutine call with a CPU ring level change.
Spurious interrupt: an unwanted hardware interrupt. Spurious interrupts are generated by system conditions such as electrical interference on an interrupt line or through incorrectly designed hardware. Processors typically have an internal interrupt mask which allows software to ignore all external hardware interrupts while it is set. Setting or clearing this mask may be faster than accessing an interrupt mask register in a PIC or disabling interrupts in the device itself. In some cases, such as the x86 architecture, disabling and enabling interrupts on the processor itself acts as a memory barrier. An interrupt that leaves the machine in a well-defined state is called a precise interrupt; such an interrupt has four properties: the program counter (PC) is saved in a known place; all instructions before the one pointed to by the PC have fully executed; no instruction beyond the one pointed to by the PC has been executed, or any such instructions are undone before handling the interrupt; and the execution state of the instruction pointed to by the PC is known.
An interrupt that does not meet these requirements is called an imprecise interrupt.
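The maskable-interrupt behaviour described above amounts to bit manipulation on a mask register: masking an interrupt sets its bit so the line is ignored, and unmasking clears it. The following is a toy software model for illustration only; the line numbers and helper names are invented and do not correspond to any real controller's interface:

```python
# Toy model of a maskable-interrupt controller.  `mask` has one bit per
# interrupt line; a set bit means the corresponding interrupt is ignored.
IRQ_TIMER, IRQ_KEYBOARD, IRQ_DISK = 0, 1, 6   # illustrative line numbers

mask = 0

def mask_irq(line):
    global mask
    mask |= (1 << line)       # set the bit: this interrupt is now ignored

def unmask_irq(line):
    global mask
    mask &= ~(1 << line)      # clear the bit: delivered again

def is_delivered(line):
    return not (mask & (1 << line))

mask_irq(IRQ_KEYBOARD)
print(is_delivered(IRQ_KEYBOARD))  # False: masked
print(is_delivered(IRQ_TIMER))     # True: still delivered
unmask_irq(IRQ_KEYBOARD)
print(is_delivered(IRQ_KEYBOARD))  # True again
```

A non-maskable interrupt, by contrast, simply has no bit in such a register, which is why it can never be ignored.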
IBM Personal Computer XT
The IBM Personal Computer XT, shortened to the IBM XT, PC XT, or simply XT, is a version of the IBM PC with a built-in hard drive. It was released as IBM Machine Type number 5160 on March 8, 1983. Apart from the hard drive, it was the same as the original PC, with only minor improvements. The XT was intended as an enhanced IBM PC for business users; floppy-only models would later replace the original model 5150 PC. A corresponding 3270 PC featuring 3270 terminal emulation was released in October 1983. XT stands for eXtended Technology. The IBM Personal Computer XT came with 128 KB of RAM, a 360 KB double-sided 5¼-inch floppy disk drive, a 10 MB Seagate ST-412 hard drive with a Xebec 1210 Modified Frequency Modulation controller, an Asynchronous Adapter, and a 130-watt power supply. The motherboard had an Intel 8088 microprocessor running at 4.77 MHz, with a socket for an optional 8087 math coprocessor. IBM recognized soon after the IBM PC's release in 1981 that its five 8-bit "I/O channel" expansion slots were insufficient.
An internal IBM publication stated in October 1981 of the number of slots that "In my opinion, it could be a problem", reporting that others within IBM advised swapping cards if necessary. Every PC required at least a display adapter card and a floppy disk controller card, leaving only three slots available for a parallel printer port card, a serial port card, memory expansion boards, a third-party hard disk controller card, a second display adapter card, or other special adapter cards. When IBM announced a successor product to the PC in early 1983, initial speculation was that it would be a next-generation machine based on the Intel 8086 or include other advanced features. When the XT was unveiled, however, there was mild disappointment that the new machine was an incremental improvement of the PC based on the same 8088 CPU and would in fact not replace it at all. A BYTE Magazine article commented that "DOS 2.0 is more revolutionary and advanced than the computer itself." A Seagate ST-412 hard disk was standard equipment, and the XT was not offered in a floppy-only model for its first two years on the market, although the standard ribbon cable with two floppy connectors was still included.
The only way to purchase an XT with factory-installed dual floppy drives was to buy the optional 5161 expansion chassis and place the hard disk in that, which in effect amounted to purchasing two hard disks, as the 5161 came with one as standard. Unlike many hard disk systems on microcomputers at the time, the XT was able to boot directly off the drive and did not require a boot floppy. Aside from the hard disk, a serial port card was standard equipment on the XT, all other cards being optional. By the end of 1983, the XT was neck-and-neck with the original PC for sales, and IBM was selling every one that it made. The XT had eight slots. Two were behind the floppy drive and shorter than the original PC's slots; the other six fit into the same space as the original PC's five slots. Most PC cards would not fit into the two short slots, and some cards with double boards on them would not fit into the six standard-length but narrower slots. The floppy and hard drive adapters, the serial port card, and nearly always a display adapter board already occupied several of the slots.
The basic specification was soon upgraded to have 256 KB of RAM as standard. Expansion slots could be used for memory expansion. Available video cards were the Monochrome Display Adapter and Color Graphics Adapter, with the Enhanced Graphics Adapter and Professional Graphics Controller becoming available in 1984. The XT had a desktop case similar to that of the IBM PC. It weighed 32 pounds and was 19.5 inches wide by 16 inches deep by 5.5 inches high. The power supply of the original XT sold in the US was configured for 120 V AC only and could not be used with 240 V mains supplies. XTs with 240 V-compatible power supplies were sold in international markets. Both were rated at 130 watts. The operating system sold with the XT was PC DOS 2.0 or, by the time the XT was discontinued in early 1987, DOS 3.2. Like the original PC, the XT came with IBM BASIC in its ROM. Despite the lack of a cassette port on XTs, IBM's licensing agreement with Microsoft forced them to include BASIC on all their PCs; the BASICA program included with DOS depended on the BASIC ROM.
The XT BIOS displayed a memory count during the POST, unlike the PC's. The XT was discontinued in the spring of 1987, replaced by the PS/2 Model 30. XT motherboards came in two different versions. The original had 64 KB of 4164 RAM socketed on it, with further sockets to support up to 256 KB; any more RAM had to be put on an expansion card, of which the AST Research Six Pak was the most widespread and popular. XTs produced in 1983–84 shipped with 128 KB standard; this rose to 256 KB in 1985. The second version of the motherboard had 256 KB socketed on it and could accommodate the entire 640 KB. These XTs used 4164 DRAMs only for the first 256 KB, and the remainder of system memory consisted of larger 41256 DRAMs; as a result, it took only 44 RAM chips to reach 640 KB versus the 80 chips needed on the original model XT. There were two or three revisions of the motherboard with minor differences between them; the first version incorporates a 470-ohm resistor to fix a race condition between the CPU and DMA controller which created the possibility of the system locking up.
x86 is a family of instruction set architectures based on the Intel 8086 microprocessor and its 8088 variant. The 8086 was introduced in 1978 as a 16-bit extension of Intel's 8-bit 8080 microprocessor, with memory segmentation as a solution for addressing more memory than can be covered by a plain 16-bit address. The term "x86" came into being because the names of several successors to Intel's 8086 processor end in "86", including the 80186, 80286, 80386 and 80486 processors. Many additions and extensions have been added to the x86 instruction set over the years, consistently with full backward compatibility. The architecture has been implemented in processors from Intel, Cyrix, AMD, VIA and many other companies. Of those, only Intel, AMD and VIA hold x86 architectural licenses and produce modern 64-bit designs. The term is not synonymous with IBM PC compatibility, as this implies a multitude of other computer hardware. As of 2018, the majority of personal computers and laptops sold are based on the x86 architecture, while other categories—especially high-volume mobile categories such as smartphones or tablets—are dominated by ARM.
In the 1980s and early 1990s, when the 8088 and 80286 were still in common use, the term x86 represented any 8086-compatible CPU. Today, however, x86 implies binary compatibility with the 32-bit instruction set of the 80386; this is because this instruction set has become something of a lowest common denominator for many modern operating systems, and also because the term became common after the introduction of the 80386 in 1985. A few years after the introduction of the 8086 and 8088, Intel added some complexity to its naming scheme and terminology as the "iAPX" prefix of the ambitious but ill-fated Intel iAPX 432 processor was tried on the more successful 8086 family of chips, applied as a kind of system-level prefix. An 8086 system, including coprocessors such as the 8087 and 8089, as well as simpler Intel-specific system chips, was thereby described as an iAPX 86 system. There were also the terms iRMX, iSBC, and iSBX – all together under the heading Microsystem 80. However, this naming scheme was quite temporary.
Although the 8086 was developed for embedded systems and small multi-user or single-user computers, largely as a response to the successful 8080-compatible Zilog Z80, the x86 line soon grew in features and processing power. Today, x86 is ubiquitous in both stationary and portable personal computers, and is also used in midrange computers, workstations and most new supercomputer clusters of the TOP500 list. A large amount of software, including a large list of x86 operating systems, uses x86-based hardware. Modern x86 is relatively uncommon in embedded systems, however, and small low-power applications as well as low-cost microprocessor markets, such as home appliances and toys, lack any significant x86 presence. Simple 8-bit and 16-bit architectures are common here, although the x86-compatible VIA C7, VIA Nano, AMD's Geode, Athlon Neo and Intel Atom are examples of 32- and 64-bit designs used in some low-power and low-cost segments. There have been several attempts, including by Intel itself, to end the market dominance of the "inelegant" x86 architecture, designed directly from the first simple 8-bit microprocessors.
Examples of this are the iAPX 432, the Intel i960, the Intel i860 and the Intel/Hewlett-Packard Itanium architecture. However, the continuous refinement of x86 microarchitectures and semiconductor manufacturing has made it hard to replace x86 in many segments. AMD's 64-bit extension of x86 and the scalability of x86 chips such as the eight-core Intel Xeon and 12-core AMD Opteron underline x86 as an example of how continuous refinement of established industry standards can resist competition from new architectures. The table below lists processor models and model series implementing variations of the x86 instruction set, in chronological order. Each line item is characterized by significantly improved or commercially successful processor microarchitecture designs. At various times, companies such as IBM, NEC, AMD, TI, STM, Fujitsu, OKI, Cyrix, Intersil, C&T, NexGen, UMC and DM&P started to design or manufacture x86 processors intended for personal computers as well as embedded systems. Such x86 implementations are seldom simple copies but often employ different internal microarchitectures as well as different solutions at the electronic and physical levels.
Quite early, compatible microprocessors were 16-bit, while 32-bit designs were developed much later. For the personal computer market, real quantities started to appear around 1990 with i386- and i486-compatible processors, named similarly to Intel's original chips. Other companies which designed or manufactured x86 or x87 processors include ITT Corporation, National Semiconductor, ULSI System Technology and Weitek. Following the pipelined i486, Intel introduced the Pentium brand name for their new set of superscalar x86 designs.
Linux is a family of free and open-source operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. Linux is typically packaged in a Linux distribution. Distributions include the Linux kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name GNU/Linux to emphasize the importance of GNU software, causing some controversy. Popular Linux distributions include Debian and Ubuntu. Commercial distributions include SUSE Linux Enterprise Server. Desktop Linux distributions include a windowing system such as X11 or Wayland and a desktop environment such as GNOME or KDE Plasma. Distributions intended for servers may omit graphics altogether or include a solution stack such as LAMP. Because Linux is freely redistributable, anyone may create a distribution for any purpose. Linux was originally developed for personal computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system.
Linux is the leading operating system on servers and other big-iron systems such as mainframe computers, and the only OS used on TOP500 supercomputers. It is used by around 2.3 percent of desktop computers. The Chromebook, which runs the Linux kernel-based Chrome OS, dominates the US K–12 education market and represents nearly 20 percent of sub-$300 notebook sales in the US. Linux also runs on embedded systems, i.e. devices whose operating system is built into the firmware and is tailored to the system. This includes routers, automation controls, digital video recorders, video game consoles and smartwatches. Many smartphones and tablet computers run Android and other Linux derivatives; because of the dominance of Android on smartphones, Linux has the largest installed base of all general-purpose operating systems. Linux is one of the most prominent examples of open-source software collaboration; the source code may be used and distributed—commercially or non-commercially—by anyone under the terms of its respective licenses, such as the GNU General Public License.
The Unix operating system was conceived and implemented in 1969, at AT&T's Bell Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, Joe Ossanna. First released in 1971, Unix was written in assembly language, as was common practice at the time. In a key pioneering approach in 1973, it was rewritten in the C programming language by Dennis Ritchie; the availability of a high-level language implementation of Unix made its porting to different computer platforms easier. Due to an earlier antitrust case forbidding it from entering the computer business, AT&T was required to license the operating system's source code to anyone who asked; as a result, Unix grew and became adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs; the GNU Project, started in 1983 by Richard Stallman, had the goal of creating a "complete Unix-compatible software system" composed of free software. Work began in 1984. In 1985, Stallman started the Free Software Foundation and wrote the GNU General Public License in 1989.
By the early 1990s, many of the programs required in an operating system were completed, although low-level elements such as device drivers and the kernel, called GNU Hurd, were stalled and incomplete. Linus Torvalds has stated that if the GNU kernel had been available at the time, he would not have decided to write his own. Although not released until 1992, due to legal complications, development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux. Torvalds has also stated that if 386BSD had been available at the time, he would not have created Linux. MINIX was created by Andrew S. Tanenbaum, a computer science professor, and released in 1987 as a minimal Unix-like operating system targeted at students and others who wanted to learn operating system principles. Although the complete source code of MINIX was available, the licensing terms prevented it from being free software until the licensing changed in April 2000. In 1991, while attending the University of Helsinki, Torvalds became curious about operating systems.
Frustrated by the licensing of MINIX, which at the time limited it to educational use only, he began to work on his own operating system kernel, which became the Linux kernel. Torvalds began the development of the Linux kernel on MINIX, and applications written for MINIX were used on Linux. Later, Linux matured and further Linux kernel development took place on Linux systems. GNU applications replaced all MINIX components, because it was advantageous to use the freely available code from the GNU Project with the fledgling operating system. Torvalds initiated a switch from his original license, which prohibited commercial redistribution, to the GNU GPL. Developers worked to integrate GNU components with the Linux kernel, making a functional and free operating system. Linus Torvalds had wanted to call his invention "Freax", a portmanteau of "free", "freak", and "x" (as an allusion to Unix).
Hard disk drive
A hard disk drive, hard disk, hard drive, or fixed disk is an electromechanical data storage device that uses magnetic storage to store and retrieve digital information using one or more rigid rotating disks coated with magnetic material. The platters are paired with magnetic heads arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order and not only sequentially. HDDs are a type of non-volatile storage, retaining stored data when powered off. Introduced by IBM in 1956, HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDDs, though after extensive industry consolidation most units are manufactured by Seagate and Western Digital. HDDs dominate the volume of storage produced for servers.
Though production is growing, sales revenues and unit shipments are declining, because solid-state drives have higher data-transfer rates, higher areal storage density, better reliability, and much lower latency and access times. The revenues for SSDs, most of which use NAND flash memory, exceed those for HDDs. Though SSDs have nearly 10 times higher cost per bit, they are replacing HDDs in applications where speed, power consumption, small size and durability are important. The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte drive has a capacity of 1,000 gigabytes. Some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and for inbuilt redundancy for error correction and recovery. There is confusion regarding storage capacity, since capacities are stated in decimal gigabytes by HDD manufacturers, whereas some operating systems report capacities in binary gibibytes, which results in a smaller number than advertised.
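The decimal-versus-binary discrepancy is simple arithmetic. For example, a drive marketed as 1 TB (10^12 bytes) is reported as roughly 931 "GB" by an operating system that counts in binary gibibytes (2^30 bytes each):

```python
# Why a "1 TB" drive shows up as about 931 GB in some operating systems.
advertised_bytes = 1_000_000_000_000      # 1 TB as marketed: 10**12 bytes
gib = advertised_bytes / 2**30            # the same bytes, in binary gibibytes
print(round(gib, 1))                      # 931.3
```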
Performance is specified by the time required to move the heads to a track or cylinder (average seek time), plus the time it takes for the desired sector to move under the head (rotational latency), plus the speed at which the data is transmitted (data rate). The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, for laptops. HDDs are connected to systems by standard interface cables such as SATA, USB or SAS cables. The first production IBM hard disk drive, the 350 disk storage, shipped in 1957 as a component of the IBM 305 RAMAC system. It was the size of two medium-sized refrigerators and stored five million six-bit characters on a stack of 50 disks. In 1962, the IBM 350 was superseded by the IBM 1301 disk storage unit, which consisted of 50 platters, each about 1/8-inch thick and 24 inches in diameter. While the IBM 350 used only two read/write heads, the 1301 used an array of heads, one per platter, moving as a single unit. Cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches above the platter surface.
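The access-time figure can be estimated as average seek time, plus average rotational latency (half a revolution), plus transfer time for the requested data. A sketch with invented but plausible numbers for a desktop drive (8.5 ms seek, 7200 RPM, 150 MB/s sustained rate — values assumed for illustration only):

```python
def access_time_ms(seek_ms, rpm, transfer_mb_s, request_kb):
    # Average rotational latency: on average the sector is half a turn away.
    rotational_latency_ms = (60_000 / rpm) / 2
    # Time to actually transfer the requested data once under the head.
    transfer_ms = request_kb / 1024 / transfer_mb_s * 1000
    return seek_ms + rotational_latency_ms + transfer_ms

# e.g. a 4 KB read on a hypothetical 7200 RPM drive
t = access_time_ms(seek_ms=8.5, rpm=7200, transfer_mb_s=150, request_kb=4)
print(round(t, 2))  # 12.69 -- dominated by seek time plus rotation
```

The example shows why small random reads are so much slower on an HDD than an SSD: nearly all the time goes to mechanical positioning, not data transfer.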
Motion of the head array depended upon a binary adder system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three home refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes. Access time was about a quarter of a second. Also in 1962, IBM introduced the model 1311 disk drive, which was about the size of a washing machine and stored two million characters on a removable disk pack. Users could interchange disk packs as needed, much like reels of magnetic tape. Models of removable pack drives, from IBM and others, became the norm in most computer installations and reached capacities of 300 megabytes by the early 1980s. Non-removable HDDs were called "fixed disk" drives. Some high-performance HDDs were manufactured with one head per track so that no time was lost physically moving the heads to a track. Known as fixed-head or head-per-track disk drives, they were expensive and are no longer in production. In 1973, IBM introduced a new type of HDD code-named "Winchester".
Its primary distinguishing feature was that the disk heads were not withdrawn from the stack of disk platters when the drive was powered down. Instead, the heads were allowed to "land" on a special area of the disk surface upon spin-down, "taking off" again when the disk was powered on; this reduced the cost of the head actuator mechanism but precluded removing just the disks from the drive as was done with the disk packs of the day. Instead, the first models of "Winchester technology" drives featured a removable disk module, which included both the disk pack and the head assembly, leaving the actuator motor in the drive upon removal. Later "Winchester" drives abandoned the removable media concept and returned to non-removable platters. Like the first removable pack drive, the first "Winchester" drives used platters 14 inches in diameter. A few years later, designers were exploring the possibility that physically smaller platters might offer advantages. Drives with non-removable eight-inch platters appeared, followed by drives that used a 5 1⁄4-inch form factor.
The latter were intended for the then-fledgling personal computer market.
A floppy disk, also known as a floppy, diskette, or simply disk, is a type of disk storage composed of a disk of thin and flexible magnetic storage medium, sealed in a rectangular plastic enclosure lined with fabric that removes dust particles. Floppy disks are read and written by a floppy disk drive. Floppy disks, first as 8-inch media and later in 5 1⁄4-inch and 3 1⁄2-inch sizes, were a ubiquitous form of data storage and exchange from the mid-1970s into the first years of the 21st century. By 2006, computers were rarely manufactured with installed floppy disk drives, and the formats are now handled mainly by older equipment. The prevalence of floppy disks in late-twentieth-century culture was such that many software programs still use an image of a floppy disk as the save icon. While floppy disk drives still have some limited uses with legacy industrial computer equipment, they have been superseded by data storage methods with much greater capacity, such as USB flash drives, flash storage cards, portable external hard disk drives, optical discs, cloud storage and storage available through computer networks.
The first commercial floppy disks, developed in the late 1960s, were 8 inches in diameter. These disks and associated drives were produced and improved upon by IBM and other companies such as Memorex, Shugart Associates and Burroughs Corporation. The term "floppy disk" appeared in print as early as 1970, and although IBM announced its first media as the "Type 1 Diskette" in 1973, the industry continued to use the terms "floppy disk" or "floppy". In 1976, Shugart Associates introduced the 5 1⁄4-inch FDD. By 1978 there were more than 10 manufacturers producing such FDDs. There were competing floppy disk formats, with hard- and soft-sector versions and encoding schemes such as FM, MFM, M2FM and GCR. The 5 1⁄4-inch format displaced the 8-inch one for most applications, and the hard-sectored disk format disappeared. The most common capacity of the 5 1⁄4-inch format in DOS-based PCs was 360 KB, for the DSDD format using MFM encoding. In 1984 IBM introduced with its PC-AT model the 1.2 MB dual-sided 5 1⁄4-inch floppy disk, but it never became popular.
IBM started using the 720 KB double-density 3 1⁄2-inch microfloppy disk on its Convertible laptop computer in 1986 and the 1.44 MB high-density version with the PS/2 line in 1987. These disk drives could be added to older PC models. In 1988 IBM introduced a drive for 2.88 MB "DSED" diskettes in its top-of-the-line PS/2 models, but this was a commercial failure. Throughout the early 1980s, limitations of the 5 1⁄4-inch format became clear. Designed to be more practical than the 8-inch format, it was itself too large. A number of solutions were developed, with drives at 2-, 2 1⁄2-, 3-, 3 1⁄4-, 3 1⁄2- and 4-inch sizes offered by various companies. They all shared a number of advantages over the old format, including a rigid case with a sliding metal shutter over the head slot, which helped protect the delicate magnetic medium from dust and damage, and a sliding write-protection tab, far more convenient than the adhesive tabs used with earlier disks. The large market share of the well-established 5 1⁄4-inch format made it difficult for these diverse, mutually incompatible new formats to gain significant market share.
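The capacities quoted above follow directly from the formatted disk geometry. A short sketch using the standard PC geometries (40 tracks per side, 9 sectors per track for 5 1⁄4-inch DSDD; 80 tracks per side, 18 sectors per track for 3 1⁄2-inch high density; 512-byte sectors, two sides in both cases):

```python
# Formatted floppy capacity is just tracks x sectors x sides x sector size.
def capacity_bytes(tracks_per_side, sectors_per_track, sides, bytes_per_sector=512):
    return tracks_per_side * sectors_per_track * sides * bytes_per_sector

dsdd_525 = capacity_bytes(40, 9, 2)    # 5 1/4-inch DSDD
hd_35    = capacity_bytes(80, 18, 2)   # 3 1/2-inch high density

print(dsdd_525 // 1024)  # 360  -> the "360 KB" format
print(hd_35 // 1024)     # 1440 -> marketed as "1.44 MB" (1440 x 1024 bytes)
```

Note the mixed units in "1.44 MB": the figure is 1440 binary kilobytes divided by a decimal 1000, neither a true megabyte nor a true mebibyte.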
A variant on the Sony design, introduced in 1982 by a large number of manufacturers, was then rapidly adopted. The term floppy disk persisted, even though this newer style of floppy disk has a rigid case around the internal flexible medium. By the end of the 1980s, 5 1⁄4-inch disks had been superseded by 3 1⁄2-inch disks. During this time, PCs frequently came equipped with drives of both sizes. By the mid-1990s, 5 1⁄4-inch drives had virtually disappeared, as the 3 1⁄2-inch disk became the predominant floppy disk. The advantages of the 3 1⁄2-inch disk were its higher capacity, its smaller size, and its rigid case, which provided better protection from dirt and other environmental risks. If a person touches the exposed disk surface of a 5 1⁄4-inch disk through the drive hole, fingerprints may foul the disk (and the disk drive head, if the disk is subsequently loaded into a drive), and it is easily possible to damage a disk of this type by folding or creasing it, rendering it at least partially unreadable. However, due to its simpler construction, the unit price of the 5 1⁄4-inch disk was lower throughout its history, in the range of a third to a half that of a 3 1⁄2-inch disk.
Floppy disks became commonplace during the 1980s and 1990s in their use with personal computers to distribute software, transfer data and create backups. Before hard disks became affordable to the general population, floppy disks were often used to store a computer's operating system. Most home computers from that period have an elementary OS and BASIC stored in ROM, with the option of loading a more advanced operating system from a floppy disk. By the early 1990s, increasing software size meant large packages like Windows or Adobe Photoshop required a dozen disks or more. In 1996, there were an estimated five billion standard floppy disks in use. Distribution of larger packages was gradually replaced by CD-ROMs, DVDs and online distribution.
MS-DOS is an operating system for x86-based personal computers developed by Microsoft. Collectively, MS-DOS, its rebranding as IBM PC DOS, and some operating systems attempting to be compatible with MS-DOS are sometimes referred to as "DOS". MS-DOS was the main operating system for IBM PC compatible personal computers during the 1980s and the early 1990s, when it was gradually superseded by operating systems offering a graphical user interface, in particular by various generations of the graphical Microsoft Windows operating system. MS-DOS grew out of 86-DOS, an existing operating system to which Microsoft acquired the rights and which it adapted to meet IBM's specifications; IBM re-released it on August 12, 1981 as PC DOS 1.0 for use in its PCs. Although MS-DOS and PC DOS were developed in parallel by Microsoft and IBM, the two products diverged after twelve years, in 1993, with recognizable differences in compatibility and capabilities. During its lifetime, several competing products were released for the x86 platform, and MS-DOS went through eight versions, until development ceased in 2000.
MS-DOS was targeted at Intel 8086 processors running on computer hardware using floppy disks to store and access not only the operating system but also application software and user data. Progressive version releases delivered support for other mass storage media in greater sizes and formats, along with added feature support for newer processors and evolving computer architectures. It was the key product in Microsoft's growth from a programming-language company to a diverse software development firm, providing the company with essential revenue and marketing resources. It was also the underlying basic operating system on which early versions of Windows ran as a GUI; it is a flexible operating system and consumes negligible installation space. MS-DOS was a renamed form of 86-DOS, owned by Seattle Computer Products and written by Tim Paterson. Development of 86-DOS took only six weeks, as it was essentially a clone of Digital Research's CP/M, ported to run on 8086 processors and with two notable differences compared to CP/M.
This first version was shipped in August 1980. Microsoft, which needed an operating system for the IBM Personal Computer, hired Tim Paterson in May 1981 and bought 86-DOS 1.10 for $75,000 in July of the same year. Microsoft kept the version number but renamed it MS-DOS, and licensed MS-DOS 1.10/1.14 to IBM, which, in August 1981, offered it as PC DOS 1.0 as one of three operating systems for the IBM 5150, the original IBM PC. Within a year Microsoft licensed MS-DOS to over 70 other companies; it was designed to be an OS that could run on any 8086-family computer. Each computer would have its own distinct hardware and its own version of MS-DOS, similar to the situation that existed for CP/M, with MS-DOS adopting the same solution as CP/M to adapt to different hardware platforms. To this end, MS-DOS was designed with a modular structure: internal device drivers (minimally, for the primary disk drives and the console) integrated with the kernel and loaded by the boot loader, and installable device drivers for other devices loaded and integrated at boot time.
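This split between kernel-integrated and installable drivers was visible to users through the CONFIG.SYS file, which MS-DOS processes at boot time. A minimal illustrative sketch follows; the specific driver file names and paths are typical examples from later MS-DOS versions, not taken from this text:

```
REM Drivers for the primary disk drives and the console are built
REM into the kernel and need no entry here. Installable drivers
REM for other devices are loaded with DEVICE= directives at boot.
DEVICE=C:\DOS\HIMEM.SYS
DEVICE=C:\DOS\ANSI.SYS
REM Kernel tunables are set in the same file:
FILES=30
BUFFERS=20
```

Because the drivers named in CONFIG.SYS are loaded and linked into the kernel's driver chain before COMMAND.COM starts, applications could use the same DOS services regardless of which OEM's hardware sat underneath.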
The OEM would use a development kit provided by Microsoft to build a version of MS-DOS with their basic I/O drivers and a standard Microsoft kernel, which they would supply on disk to end users along with the hardware. Thus, there were many different versions of "MS-DOS" for different hardware, and there was a major distinction between an IBM-compatible machine and an MS-DOS machine; some machines, like the Tandy 2000, were MS-DOS compatible but not IBM-compatible, so they could run software written for MS-DOS without dependence on the peripheral hardware of the IBM PC architecture. This design would have worked well for compatibility if application programs had only used MS-DOS services to perform device I/O; indeed, the same design philosophy is embodied in Windows NT. However, in MS-DOS's early days, the greater speed attainable by programs through direct control of hardware was of particular importance, especially for games, which pushed the limits of their contemporary hardware. Soon an IBM-compatible architecture became the goal, and before long all 8086-family computers closely emulated IBM's hardware, so that only a single version of MS-DOS for a fixed hardware platform was needed for the market.
This version is the version of MS-DOS discussed here, as the dozens of other OEM versions of "MS-DOS" were only relevant to the systems they were designed for, and in any case were similar in function and capability to some standard version for the IBM PC (often the same-numbered version, but not always, since some OEMs used their own proprietary version-numbering schemes), with a few notable exceptions. Microsoft omitted multi-user support from MS-DOS because its Unix-based operating system, Xenix, was fully multi-user; the company planned, over time, to improve MS-DOS so it would be almost indistinguishable from single-user Xenix, or XEDOS, which would also run on the Motorola 68000, the Zilog Z8000, and the LSI-11. Microsoft advertised MS-DOS and Xenix together, listing the shared features of its "single-user OS" and "the multi-user, multi-tasking, UNIX-derived operating system", promising easy porting between them.