DOS is a family of disk operating systems, hence the name. DOS consists primarily of MS-DOS and a rebranded version sold as IBM PC DOS, both introduced in 1981. Compatible systems from other manufacturers include DR-DOS, ROM-DOS, PTS-DOS, and FreeDOS. MS-DOS dominated the x86-based IBM PC compatible market between 1981 and 1995. Dozens of other operating systems also use the acronym "DOS", including the mainframe DOS/360 from 1966; others are Apple DOS, Apple ProDOS, Atari DOS, Commodore DOS, TRSDOS, and AmigaDOS. Fictional operating systems have used the acronym as well, such as GLaDOS from the video game Portal. IBM PC DOS and its predecessor, 86-DOS, resembled Digital Research's CP/M, the dominant disk operating system for 8-bit Intel 8080 and Zilog Z80 microcomputers, but ran on Intel 8086 16-bit processors instead. When IBM introduced the IBM PC, built around the Intel 8088 microprocessor, it needed an operating system. Seeking an 8088-compatible build of CP/M, IBM approached Microsoft CEO Bill Gates.
Gates referred IBM to Digital Research, and a meeting was set up, but the initial negotiations for the use of CP/M broke down: Digital Research founder Gary Kildall refused IBM's terms, and IBM withdrew. IBM again approached Bill Gates, and Gates in turn approached Seattle Computer Products (SCP). There, programmer Tim Paterson had developed a variant of CP/M-80, intended as an internal product for testing SCP's new 16-bit Intel 8086 CPU card for the S-100 bus. The system was initially named QDOS (Quick and Dirty Operating System) before being made commercially available as 86-DOS. Microsoft purchased 86-DOS for $50,000, and it became the Microsoft Disk Operating System, MS-DOS, introduced in 1981. Within a year Microsoft licensed MS-DOS to over 70 other companies, which supplied the operating system for their own hardware, sometimes under their own names. Microsoft later required the use of the MS-DOS name, with the exception of the IBM variant; IBM continued to develop its version, PC DOS, for the IBM PC. When Digital Research became aware that an operating system similar to CP/M was being sold by IBM, it threatened legal action.
IBM responded by offering an agreement: it would give PC consumers a choice of PC DOS or CP/M-86, Kildall's 8086 version. Sold side by side, CP/M cost $200 more than PC DOS, and sales were low. CP/M faded, with MS-DOS and PC DOS becoming the marketed operating system for PCs and PC compatibles. At first, Microsoft sold MS-DOS only to original equipment manufacturers (OEMs). One major reason for this was that DOS was structured with a separation between the system-specific device driver code and the DOS kernel; Microsoft provided an OEM Adaptation Kit which allowed OEMs to customize the device driver code for their particular systems. By the early 1990s, most PCs adhered to IBM PC standards, so Microsoft began selling MS-DOS at retail with MS-DOS 5.0. In the mid-1980s Microsoft also developed a multitasking version of DOS, referred to as "European MS-DOS 4" because it was developed for ICL and licensed to several European companies. This version supports preemptive multitasking, shared memory, device helper services, and New Executable format executables.
None of these features were used in later versions of DOS, but they formed the basis of the OS/2 1.0 kernel. This version of DOS is distinct from the widely released PC DOS 4.0, which was developed by IBM and based upon DOS 3.3. Digital Research attempted to regain the market it had lost with CP/M-86, releasing Concurrent DOS, FlexOS and DOS Plus, and later Multiuser DOS and DR DOS. After Digital Research was bought by Novell, DR DOS became Novell DOS 7. Gordon Letwin wrote in 1995 that "DOS was, when we first wrote it, a one-time throw-away product intended to keep IBM happy so that they'd buy our languages". Microsoft planned to improve MS-DOS over time so that it would be almost indistinguishable from single-user Xenix, or XEDOS, which would also run on the Motorola 68000, Zilog Z-8000, and LSI-11. IBM, however, did not want to replace DOS. After AT&T began selling Unix, Microsoft and IBM began developing OS/2 as an alternative, but the two companies had a series of disagreements over the two successor operating systems to DOS, OS/2 and Windows.
They split development of their DOS systems as a result. The last retail version of MS-DOS was MS-DOS 6.22; the last retail version of PC DOS was PC DOS 2000, though IBM later developed PC DOS 7.10 for OEMs and internal use. The FreeDOS project began on 26 June 1994, when Microsoft announced it would no longer sell or support MS-DOS. Jim Hall posted a manifesto proposing the development of an open-source replacement. Within a few weeks, other programmers, including Pat Villani and Tim Norman, joined the project. A kernel, the COMMAND.COM command-line interpreter, and core utilities were created by pooling code the contributors had written.
A floppy disk, also known as a floppy, diskette, or simply disk, is a type of disk storage composed of a thin and flexible disk of magnetic storage medium, sealed in a rectangular plastic enclosure lined with fabric that removes dust particles. Floppy disks are read and written by a floppy disk drive (FDD). Floppy disks, initially as 8-inch media and later in 5 1⁄4-inch and 3 1⁄2-inch sizes, were a ubiquitous form of data storage and exchange from the mid-1970s into the first years of the 21st century. By 2006, computers were rarely manufactured with installed floppy disk drives, and these formats are now handled mostly by older equipment. The prevalence of floppy disks in late-twentieth-century culture was such that many electronic devices and software programs still use an image of a floppy disk as a save icon. While floppy disk drives still have some limited uses, particularly with legacy industrial computer equipment, they have been superseded by data storage methods with much greater capacity, such as USB flash drives, flash storage cards, portable external hard disk drives, optical discs, cloud storage, and storage available through computer networks.
The first commercial floppy disks, developed in the late 1960s, were 8 inches in diameter. These disks and their associated drives were produced and improved upon by IBM and other companies such as Memorex, Shugart Associates, and Burroughs Corporation. The term "floppy disk" appeared in print as early as 1970, and although IBM announced its first media as the "Type 1 Diskette" in 1973, the industry continued to use the terms "floppy disk" or "floppy". In 1976, Shugart Associates introduced the 5 1⁄4-inch FDD, and by 1978 there were more than 10 manufacturers producing such drives. There were competing floppy disk formats, with hard- and soft-sector versions and encoding schemes such as FM, MFM, M2FM and GCR. The 5 1⁄4-inch format eventually displaced the 8-inch one for most applications, and the hard-sectored disk format disappeared. The most common capacity of the 5 1⁄4-inch format in DOS-based PCs was 360 KB, for the double-sided double-density (DSDD) format using MFM encoding. In 1984 IBM introduced the 1.2 MB dual-sided 5 1⁄4-inch floppy disk with its PC-AT model, but it never became popular.
IBM started using the 720 KB double-density 3 1⁄2-inch microfloppy disk on its Convertible laptop computer in 1986 and the 1.44 MB high-density version with the PS/2 line in 1987. These disk drives could be added to older PC models. In 1988 IBM introduced a drive for 2.88 MB "DSED" diskettes in its top-of-the-line PS/2 models, but this was a commercial failure. Throughout the early 1980s, limitations of the 5 1⁄4-inch format became clear: designed to be more practical than the 8-inch format, it was itself too large. A number of solutions were developed, with drives at 2-, 2 1⁄2-, 3-, 3 1⁄4-, 3 1⁄2- and 4-inches offered by various companies. They all shared a number of advantages over the old format, including a rigid case with a sliding metal shutter over the head slot, which helped protect the delicate magnetic medium from dust and damage, and a sliding write-protection tab, far more convenient than the adhesive tabs used with earlier disks. The large market share of the well-established 5 1⁄4-inch format made it difficult for these diverse, mutually incompatible new formats to gain significant market share.
A variant on the Sony design, introduced in 1982 by a large number of manufacturers, was then rapidly adopted. The term floppy disk persisted even though the newer-style disks have a rigid case around the internal flexible disk. By the end of the 1980s, 5 1⁄4-inch disks had been superseded by 3 1⁄2-inch disks; during the transition, PCs came equipped with drives of both sizes. By the mid-1990s, 5 1⁄4-inch drives had virtually disappeared, as the 3 1⁄2-inch disk became the predominant floppy disk. The advantages of the 3 1⁄2-inch disk were its higher capacity, its smaller size, and its rigid case, which provided better protection from dirt and other environmental risks. If a person touches the exposed disk surface of a 5 1⁄4-inch disk through the drive hole, fingerprints may foul the disk, and the disk drive head if the disk is subsequently loaded into a drive; it is also easy to damage a disk of this type by folding or creasing it, rendering it at least partially unreadable. However, due to its simpler construction, the unit price of the 5 1⁄4-inch disk was lower throughout its history, in the range of a third to a half that of a 3 1⁄2-inch disk.
Floppy disks became commonplace during the 1980s and 1990s in their use with personal computers to distribute software, transfer data, and create backups. Before hard disks became affordable to the general population, floppy disks were often used to store a computer's operating system. Most home computers from that period had an elementary OS and BASIC stored in ROM, with the option of loading a more advanced operating system from a floppy disk. By the early 1990s, increasing software size meant that large packages like Windows or Adobe Photoshop required a dozen disks or more. In 1996, there were an estimated five billion standard floppy disks in use. Distribution of larger packages was then gradually taken over by CD-ROMs, DVDs and online distribution.
Linux is a family of free and open-source operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. Linux is typically packaged in a Linux distribution. Distributions include the Linux kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name GNU/Linux to emphasize the importance of GNU software, causing some controversy. Popular Linux distributions include Debian and Ubuntu; commercial distributions include SUSE Linux Enterprise Server. Desktop Linux distributions include a windowing system such as X11 or Wayland and a desktop environment such as GNOME or KDE Plasma. Distributions intended for servers may omit graphics altogether or include a solution stack such as LAMP. Because Linux is freely redistributable, anyone may create a distribution for any purpose. Linux was originally developed for personal computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system.
Linux is the leading operating system on servers and other big iron systems such as mainframe computers, and it is the only OS used on TOP500 supercomputers. It is used by around 2.3 percent of desktop computers. The Chromebook, which runs the Linux kernel-based Chrome OS, dominates the US K–12 education market and represents nearly 20 percent of sub-$300 notebook sales in the US. Linux also runs on embedded systems, i.e. devices whose operating system is built into the firmware and is highly tailored to the system. This includes routers, automation controls, digital video recorders, video game consoles, and smartwatches. Many smartphones and tablet computers run Android and other Linux derivatives; because of the dominance of Android on smartphones, Linux has the largest installed base of all general-purpose operating systems. Linux is one of the most prominent examples of open-source software collaboration: the source code may be used and distributed, commercially or non-commercially, by anyone under the terms of its respective licenses, such as the GNU General Public License.
The Unix operating system was conceived and implemented in 1969 at AT&T's Bell Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna. First released in 1971, Unix was written in assembly language, as was common practice at the time. In a key pioneering approach, it was rewritten in 1973 in the C programming language by Dennis Ritchie; the availability of a high-level language implementation of Unix made its porting to different computer platforms easier. Due to an earlier antitrust case forbidding it from entering the computer business, AT&T was required to license the operating system's source code to anyone who asked; as a result, Unix grew and became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs. The GNU Project, started in 1983 by Richard Stallman, had the goal of creating a "complete Unix-compatible software system" composed of free software; work began in 1984. In 1985, Stallman started the Free Software Foundation, and in 1989 he wrote the GNU General Public License.
By the early 1990s, many of the programs required in an operating system were completed, although low-level elements such as device drivers and the kernel, called GNU Hurd, were stalled and incomplete. Linus Torvalds has stated that if the GNU kernel had been available at the time, he would not have decided to write his own. Although not released until 1992, due to legal complications, development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux; Torvalds has also stated that if 386BSD had been available at the time, he would not have created Linux. MINIX was created by Andrew S. Tanenbaum, a computer science professor, and released in 1987 as a minimal Unix-like operating system targeted at students and others who wanted to learn operating system principles. Although the complete source code of MINIX was available, the licensing terms prevented it from being free software until the licensing changed in April 2000. In 1991, while attending the University of Helsinki, Torvalds became curious about operating systems.
Frustrated by the licensing of MINIX, which at the time limited it to educational use only, he began to work on his own operating system kernel, which became the Linux kernel. Torvalds began developing the Linux kernel on MINIX, and applications written for MINIX were also used on Linux. As Linux matured, further Linux kernel development took place on Linux systems. GNU applications replaced all MINIX components, because it was advantageous to use the freely available code from the GNU Project with the fledgling operating system. Torvalds initiated a switch from his original license, which prohibited commercial redistribution, to the GNU GPL. Developers worked to integrate GNU components with the Linux kernel, making a fully functional and free operating system. Linus Torvalds had wanted to call his invention "Freax", a portmanteau of "free", "freak", and "x" (as an allusion to Unix).
Disk formatting is the process of preparing a data storage device such as a hard disk drive, solid-state drive, floppy disk or USB flash drive for initial use. In some cases, the formatting operation may also create one or more new file systems. The first part of the formatting process, which performs basic medium preparation, is referred to as "low-level formatting". Partitioning is the common term for the second part of the process, making the data storage device visible to an operating system. The third part of the process, termed "high-level formatting", most often refers to the process of generating a new file system. In some operating systems all or parts of these three processes can be combined or repeated at different levels, and the term "format" is understood to mean an operation in which a new disk medium is prepared to store files. As a general rule, formatting a disk leaves most if not all existing data on the disk medium; special tools can remove user data by a single overwrite of free space. A block, a contiguous number of bytes, is the minimum unit of storage that is read from and written to a disk by a disk driver.
The earliest disk drives had fixed block sizes, but starting with the IBM 1301, IBM marketed subsystems that featured variable block sizes: a particular track could have blocks of different sizes. The disk subsystems on the IBM System/360 expanded this concept in the form of Count Key Data (CKD) and Extended Count Key Data (ECKD). Modern hard disk drives, such as Serial Attached SCSI (SAS) and Serial ATA (SATA) drives, appear at their interfaces as a contiguous set of fixed-size blocks. Floppy disks used only fixed block sizes, but these sizes were a function of the host's OS and its interaction with its controller, so a particular type of media would have different block sizes depending upon the host OS and controller. Optical discs also use only fixed block sizes. Formatting a disk for use by an operating system and its applications involves three different processes. Low-level formatting marks the surfaces of the disks with markers indicating the start of a recording block and other information, such as a block CRC, to be used in normal operations by the disk controller to read or write data.
This is intended to be the permanent foundation of the disk and is often completed at the factory. Partitioning divides a disk into one or more regions, writing data structures to the disk to indicate the beginning and end of the regions; this level of formatting often includes checking for defective tracks or defective sectors. High-level formatting creates the file system format within a logical volume; this formatting includes the data structures used by the OS to identify the logical drive or partition's contents. It may occur during operating system installation. Disk and distributed file systems may specify an optional boot block and/or various volume and directory information for the operating system. The low-level format of floppy disks is performed by the disk drive's controller. Consider a standard 1.44 MB floppy disk: low-level formatting writes 18 sectors of 512 bytes to each of 160 tracks (80 on each side), providing 1,474,560 bytes of storage on the disk. Physical sectors are actually larger than 512 bytes, as in addition to the 512-byte data field they include a sector identifier field, CRC bytes, and gaps between the fields.
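The 1,474,560-byte figure is simple arithmetic over the disk geometry. A minimal sketch in Python, using the track and sector counts given above:

```python
# Geometry of a standard 1.44 MB floppy, per the figures above:
# 160 tracks (80 cylinders x 2 sides), 18 sectors per track, 512 bytes per sector.
cylinders, sides = 80, 2
sectors_per_track, bytes_per_sector = 18, 512

tracks = cylinders * sides                               # 160 tracks
capacity = tracks * sectors_per_track * bytes_per_sector
print(capacity)                                          # 1474560 bytes
print(capacity / (1024 * 1000))                          # 1.44
```

Note that the marketed "1.44 MB" comes from dividing by 1,024,000 (1000 × 1024), a mix of decimal and binary prefixes.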
These additional bytes are not included in the quoted figure for the overall storage capacity of the disk. Different low-level formats can be used on the same media. Several freeware and free software programs allowed more control over formatting, enabling the formatting of high-density 3.5" disks with capacities up to 2 MB. Techniques used include head/track sector skew, interleaving sectors, increasing the number of sectors per track, and increasing the number of tracks. Linux supports a variety of sector sizes, and DOS and Windows support a large-record-size DMF floppy format. Hard disk drives prior to the 1990s typically had a separate disk controller that defined how data was encoded on the media. With the media, the drive and/or the controller possibly procured from separate vendors, users were often able to perform low-level formatting themselves. Separate procurement carried the potential of incompatibility between the components, such that the subsystem would not reliably store data. User-instigated low-level formatting of hard disk drives was common for minicomputer and personal computer systems until the 1990s.
IBM and other mainframe system vendors typical
Computer data storage
Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers; the central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but larger and cheaper options farther away. The fast volatile technologies are referred to as "memory", while slower persistent technologies are referred to as "storage". In the von Neumann architecture, the CPU consists of two main parts: the control unit and the arithmetic logic unit. The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Without a significant amount of memory, a computer would only be able to perform fixed operations and immediately output the result; it would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices.
Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions. Most modern computers are von Neumann machines. A modern digital computer represents data using the binary numeral system. Text, pictures, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes with one byte per character. Data are encoded by assigning a bit pattern to each character, digit, or multimedia object.
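The byte-per-character encoding and the five-megabyte estimate can be checked with a short Python sketch (the ~4,000-characters-per-page figure below is an illustrative assumption, not from the text):

```python
# Any text reduces to a string of bits: here, one byte (8 bits) per character.
text = "To be, or not to be"
bits = "".join(format(b, "08b") for b in text.encode("ascii"))
print(len(text), "characters ->", len(bits), "bits")   # 19 characters -> 152 bits

# Rough check of the five-megabyte estimate: ~1250 pages at an assumed
# ~4000 characters per page, one byte per character.
pages, chars_per_page = 1250, 4000
print(pages * chars_per_page / 1_000_000, "MB")        # 5.0 MB
```

Richer encodings such as UTF-8 use more than one byte for some characters, which is why the one-byte-per-character figure is only an approximation.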
Many standards exist for encoding. By adding bits to each encoded unit, redundancy allows the computer to both detect errors in coded data and correct them based on mathematical algorithms. Errors occur with low probability due to random bit-value flipping, or "physical bit fatigue", the loss of a physical bit's ability to maintain a distinguishable value, or due to errors in inter- or intra-computer communication. A random bit flip is typically corrected upon detection. A bit, or a group of malfunctioning physical bits, is automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, where the corrected bit values are restored. The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection; a detected error is then retried. Data compression methods allow, in many cases, a string of bits to be represented by a shorter bit string from which the original string can be reconstructed when needed; this uses less storage for many types of data at the cost of more computation.
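As a concrete illustration of CRC-based error detection, here is a minimal sketch using the Python standard library's zlib.crc32; the data block and the single-bit flip are hypothetical:

```python
import zlib

data = bytearray(b"stored block of user data")
checksum = zlib.crc32(data)           # CRC recorded alongside the data

data[3] ^= 0x01                       # simulate a random single-bit flip
assert zlib.crc32(data) != checksum   # mismatch: the error is detected

data[3] ^= 0x01                       # restore the bit, e.g. after a retry
assert zlib.crc32(data) == checksum   # the block verifies intact again
```

A CRC can detect such a flip but, unlike an error-correcting code, cannot by itself say which bit to fix; storage and communication systems therefore pair it with retries or redundancy.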
Analysis of the trade-off between the storage cost saving and the costs of the related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary and off-line storage is also guided by cost per bit. In contemporary usage, "memory" is usually semiconductor read-write random-access memory, typically DRAM, or other forms of fast but temporary storage. "Storage" consists of storage devices and their media not directly accessible by the CPU (hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile). Historically, memory has been called core memory, main memory, real storage or internal memory, while non-volatile storage devices have been referred to as secondary storage, external memory or auxiliary/peripheral storage.
Primary storage, often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner. Early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive; this led to modern random-access memory (RAM).