In mathematics and computing, hexadecimal is a positional numeral system with a radix, or base, of 16. It uses sixteen distinct symbols: the symbols "0"–"9" represent values zero to nine, and "A"–"F" represent values ten to fifteen. Hexadecimal numerals are widely used by computer system designers and programmers because they provide a more human-friendly representation of binary-coded values; each hexadecimal digit represents four binary digits, known as a nibble, which is half a byte. For example, a single byte can have values ranging from 0000 0000 to 1111 1111 in binary form, which can be more conveniently represented as 00 to FF in hexadecimal. In mathematics, a subscript is typically used to specify the radix. For example, the decimal value 10,995 would be expressed in hexadecimal as 2AF3₁₆. In programming, a number of notations are used to support hexadecimal representation, usually involving a prefix or suffix; the prefix 0x is used in C and related languages, which would denote this value as 0x2AF3. Hexadecimal is used in the transfer encoding Base16, in which each byte of the plaintext is broken into two 4-bit values and represented by two hexadecimal digits.
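The byte-to-nibble relationship described above can be illustrated in a few lines of Python (the helper name byte_to_hex is ours, chosen for illustration):

```python
# Each byte splits into two 4-bit nibbles, and each nibble maps to one hex digit.
def byte_to_hex(b: int) -> str:
    """Render one byte (0-255) as two hexadecimal digits."""
    high, low = b >> 4, b & 0x0F          # the two nibbles
    digits = "0123456789ABCDEF"
    return digits[high] + digits[low]

print(byte_to_hex(0b10110101))            # -> B5
print(format(10995, "X"))                 # decimal 10,995 -> 2AF3
print(bytes([0xFF, 0x00]).hex())          # Base16 encoding of two bytes -> ff00
```

Python's built-in `format(n, "X")` and `bytes.hex()` perform the same digit-by-digit conversion, so the hand-rolled helper is only there to make the nibble arithmetic visible.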
In contexts where the base is not clear, hexadecimal numbers can be ambiguous and confused with numbers expressed in other bases. There are several conventions for expressing values unambiguously. A numerical subscript can give the base explicitly: 159₁₀ is decimal 159. Some authors prefer a text subscript, such as 159decimal and 159hex, or 159h. In linear text systems, such as those used in most computer programming environments, a variety of methods have arisen. In URIs, character codes are written as hexadecimal pairs prefixed with %: http://www.example.com/name%20with%20spaces, where %20 is the space character, ASCII code point 20 in hex, 32 in decimal. In XML and XHTML, characters can be expressed as hexadecimal numeric character references using the notation &#xcode;, thus &#x2019; represents the right single quotation mark (’). In the Unicode standard, a character value is represented with U+ followed by the hex value, e.g. U+20AC is the Euro sign. Color references in HTML, CSS and X Window can be expressed with six hexadecimal digits prefixed with #: white, for example, is represented as #FFFFFF.
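The URI percent-encoding and numeric-character-reference conventions above can be reproduced with Python's standard library (a small illustration, not part of any particular specification's reference code):

```python
from urllib.parse import quote, unquote

# Percent-encoding: each reserved or non-ASCII byte becomes % plus two hex digits.
encoded = quote("name with spaces")
print(encoded)                 # name%20with%20spaces
print(unquote(encoded))        # name with spaces

# Building an XML/XHTML numeric character reference for the Euro sign, U+20AC:
print(f"&#x{ord('€'):X};")     # &#x20AC;
```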
CSS allows 3-hexdigit abbreviations with one hexdigit per component: #FA3 abbreviates #FFAA33. Unix shells, AT&T assembly language and the C programming language use the prefix 0x for numeric constants represented in hex: 0x5A3. Character and string constants may express character codes in hexadecimal with the prefix \x followed by two hex digits: '\x1B' represents the Esc control character. To output an integer as hexadecimal with the printf function family, the format conversion code %X or %x is used. In MIME quoted-printable encoding, characters that cannot be represented as literal ASCII characters are represented by their codes as two hexadecimal digits prefixed by an equals sign =, as in Espa=F1a to send "España". In Intel-derived assembly languages and Modula-2, hexadecimal is denoted with a suffixed H or h: FFh or 05A3H. Some implementations require a leading zero when the first hexadecimal digit character is not a decimal digit, so one would write 0FFh instead of FFh. Other assembly languages, Delphi, some versions of BASIC, GameMaker Language and Forth use $ as a prefix: $5A3.
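The 0x prefix, the printf-style %X conversion and the \x escape all appear in Python as well, which makes for a compact demonstration of the notations just listed:

```python
# int() parses hex with or without the C-style 0x prefix when told the base.
print(int("0x5A3", 16))        # 1443
print(int("5A3", 16))          # 1443

# printf-style conversion codes %x / %X, as in C's printf family:
print("%X" % 1443)             # 5A3

# The \x escape embeds a character by its two-digit hex code:
print("\x1B" == chr(27))       # True: the Esc control character
```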
Some assembly languages use the notation H'ABCD'. Fortran 95 uses Z'ABCD'. Ada and VHDL enclose hexadecimal numerals in based "numeric quotes": 16#5A3#. For bit vector constants VHDL uses the notation x"5A3". Verilog represents hexadecimal constants in the form 8'hFF, where 8 is the number of bits in the value and FF is the hexadecimal constant. The Smalltalk language uses the prefix 16r: 16r5A3. PostScript and the Bourne shell and its derivatives denote hex with the prefix 16#: 16#5A3. For PostScript, binary data can be expressed as unprefixed consecutive hexadecimal pairs: AA213FD51B3801043FBC... Common Lisp uses the prefixes #x and #16r. Setting the variables *read-base* and *print-base* to 16 switches the reader and printer of a Common Lisp system to hexadecimal number representation for reading and printing numbers, so hexadecimal numbers can be represented without the #x or #16r prefix once the input or output base has been changed to 16. MSX BASIC, QuickBASIC, FreeBASIC and Visual Basic prefix hexadecimal numbers with &H: &H5A3. BBC BASIC and Locomotive BASIC use & for hex.
The TI-89 and 92 series use a 0h prefix: 0h5A3. ALGOL 68 uses the prefix 16r to denote hexadecimal numbers: 16r5a3; binary and octal numbers can be specified similarly. The most common format for hexadecimal on IBM mainframes and midrange computers running the traditional operating systems is X'5A3', used in Assembler, PL/I, COBOL, JCL, scripts and other places. This format was common on
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android accounted for up to 70% of use in 2017; according to third-quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a per-year decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to be running concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes, each of which is interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or cooperative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux, as well as non-Unix-like systems, such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to yield time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
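The cooperative model can be sketched with Python generators: each task runs until it voluntarily yields, and a simple round-robin loop plays the role of the scheduler. This is a toy illustration of the scheduling idea, not how any real operating system is implemented:

```python
from collections import deque

log = []

def task(name, steps):
    # A cooperative "task": it must yield voluntarily. If one task never
    # yields, every other task starves -- the weakness of this model.
    for i in range(steps):
        log.append(f"{name}{i}")
        yield  # hand the processor back to the scheduler

def run(tasks):
    # Round-robin scheduler: one time slice per task per pass.
    ready = deque(tasks)
    while ready:
        t = ready.popleft()
        try:
            next(t)           # give the task one slice
            ready.append(t)   # not finished: requeue it
        except StopIteration:
            pass              # finished: drop it

run([task("A", 2), task("B", 3)])
print(log)  # ['A0', 'B0', 'A1', 'B1', 'B2']
```

In preemptive multitasking the `yield` would instead be forced on the task by a timer interrupt, which is why a misbehaving program cannot monopolize the CPU there.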
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked to and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for creating multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines, such as PDAs, with less autonomy. They are able to operate with a limited number of resources, and they are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single address space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing; when personal computers became popular in the 1980s, operating systems were made for them, similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri
United States v. Microsoft Corp.
United States v. Microsoft Corporation, 253 F.3d 34, is a U.S. antitrust law case, ultimately settled by the Department of Justice, in which the technology company Microsoft was accused of holding a monopoly and engaging in anti-competitive practices contrary to sections 1 and 2 of the Sherman Antitrust Act. The plaintiffs alleged that Microsoft had abused monopoly power on Intel-based personal computers in its handling of operating system and web browser integration. The issue central to the case was whether Microsoft was allowed to bundle its flagship Internet Explorer web browser software with its Windows operating system. Bundling them was alleged to have been responsible for Microsoft's victory in the browser wars, as every Windows user had a copy of IE; it was further alleged that this restricted the market for competing web browsers, since it took a while to download or purchase such software at a store. Underlying these disputes were questions over whether Microsoft had manipulated its application programming interfaces to favor IE over third-party web browsers, Microsoft's conduct in forming restrictive licensing agreements with original equipment manufacturers, and Microsoft's intent in its course of conduct.
Microsoft stated that the merging of Windows and IE was the result of innovation and competition, that the two were now the same product and inextricably linked, and that consumers were receiving the benefits of IE free of charge. Opponents countered that IE was still a separate product which did not need to be tied to Windows, since a separate version of IE was available for Mac OS; they asserted that IE was not free, because its development and marketing costs may have inflated the price of Windows. The case was tried before Judge Thomas Penfield Jackson in the United States District Court for the District of Columbia; the DOJ was represented by David Boies. Compared to the European Decision against Microsoft, the DOJ case focused less on interoperability and more on predatory strategies and market barriers to entry. By 1984 Microsoft was one of the most successful software companies, with $55 million in 1983 sales. InfoWorld wrote that it was recognized as the most influential company in the microcomputer-software industry.
Claiming more than a million installed MS-DOS machines, chairman Bill Gates had decided to certify Microsoft's jump on the rest of the industry by dominating applications, operating systems, peripherals and book publishing. Although Gates said that he wasn't trying to dominate the industry with sheer numbers, his strategy for dominance involved Microsoft's new Windows operating system... "Our strategies and energies as a company are committed to Windows, in the same way that we're committed to operating-system kernels like MS-DOS and Xenix," said Gates. "We're saying that only applications that take advantage of Windows will be competitive in the long run." Gates claimed that Microsoft's entrance into the application market with such products as Multiplan and the new Chart product was not a big-time operation. The U.S. government's interest in Microsoft began in 1992 with an inquiry by the Federal Trade Commission over whether Microsoft was abusing its monopoly on the PC operating system market.
The commissioners deadlocked with a 2–2 vote in 1993 and closed the investigation, but the Department of Justice, led by Janet Reno, opened its own investigation on August 21 of that year, resulting in a settlement on July 15, 1994 in which Microsoft consented not to tie other Microsoft products to the sale of Windows but remained free to integrate additional features into the operating system. In the years that followed, Microsoft insisted that Internet Explorer was not a product but a feature which it was allowed to add to Windows, although the DOJ did not agree with this definition. In its 2008 Annual Report, Microsoft stated: Lawsuits brought by the U.S. Department of Justice, 18 states, and the District of Columbia in two separate actions were resolved through a Consent Decree that took effect in 2001 and a Final Judgment entered in 2002. These proceedings imposed various constraints on our Windows operating system businesses. These constraints include limits on certain contracting practices, mandated disclosure of certain software program interfaces and protocols, and rights for computer manufacturers to limit the visibility of certain Windows features in new PCs.
We believe we are in compliance with these rules. However, if we fail to comply with them, additional restrictions could be imposed on us that would adversely affect our business. The suit began on May 18, 1998, with the U.S. Department of Justice and the Attorneys General of twenty U.S. states suing Microsoft for illegally thwarting competition in order to protect and extend its software monopoly. In October 1998, the U.S. Department of Justice sued Microsoft for violating a 1994 consent decree by forcing computer makers to include its Internet browser as a part of the installation of Windows software. While the DOJ was represented by David Boies, the States were separately represented by New York Attorneys General Alan Kusinitz, Gail Cleary and Steve Houck. Bill Gates was called "nonresponsive" by a source present at his deposition; he argued over the definitions of words such as "compete", "concerned", "ask" and "we". Businessweek reported that "early rounds of his deposition show him offering obfuscatory answers and saying 'I don't recall' so many times that the presiding judge had to chuckle".
Many of the technology chief's
In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, automated reasoning and other tasks. As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input, the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. The concept of algorithm has existed for centuries. Greek mathematicians used algorithms in the sieve of Eratosthenes for finding prime numbers and the Euclidean algorithm for finding the greatest common divisor of two numbers. The word algorithm itself is derived from the name of the 9th-century mathematician Muḥammad ibn Mūsā al-Khwārizmī, Latinized as Algoritmi.
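Both classical algorithms named above can each be stated in a few lines. The following Python sketch is an illustration written for this article, not code drawn from any particular source:

```python
def sieve(n):
    # Sieve of Eratosthenes: cross off the multiples of each prime up to n;
    # whatever survives is prime.
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, prime in enumerate(is_prime) if prime]

def gcd(a, b):
    # Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)
    # until the remainder is zero; the last nonzero value is the GCD.
    while b:
        a, b = b, a % b
    return a

print(sieve(30))        # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(gcd(1071, 462))   # 21
```

Both functions illustrate the defining properties discussed below: finite description, well-defined successive states, and guaranteed termination.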
A partial formalization of what would become the modern concept of algorithm began with attempts to solve the Entscheidungsproblem posed by David Hilbert in 1928. Formalizations were framed as attempts to define "effective calculability" or "effective method"; those formalizations included the Gödel–Herbrand–Kleene recursive functions of 1930, 1934 and 1935, Alonzo Church's lambda calculus of 1936, Emil Post's Formulation 1 of 1936, and Alan Turing's Turing machines of 1936–37 and 1939. The word 'algorithm' has its roots in the Latinization of the name of Muhammad ibn Musa al-Khwarizmi in a first step to algorismus. Al-Khwārizmī was a Persian mathematician, astronomer and scholar in the House of Wisdom in Baghdad, whose name means 'the native of Khwarazm', a region that was part of Greater Iran and is now in Uzbekistan. About 825, al-Khwarizmi wrote an Arabic-language treatise on the Hindu–Arabic numeral system, which was translated into Latin during the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name.
Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, through another of his books, the Algebra. In late medieval Latin, algorismus, English 'algorism', the corruption of his name, meant the "decimal number system". In the 15th century, under the influence of the Greek word ἀριθμός ('number'), the Latin word was altered to algorithmus, and the corresponding English term 'algorithm' is first attested in the 17th century. In English, algorism was first used in about 1230 and by Chaucer in 1391; English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris. Which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals.
An informal definition could be "a set of rules that defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. A program is only an algorithm if it stops eventually. A prototypical example of an algorithm is the Euclidean algorithm to determine the maximum common divisor of two integers. Boolos and Jeffrey (1974, 1999) offer an informal meaning of the word in the following quotation: No human being can write fast enough, or long enough, or small enough to list all members of an enumerably infinite set by writing out their names, one after another, in some notation, but humans can do something useful, in the case of certain enumerably infinite sets: They can give explicit instructions for determining the nth member of the set, for arbitrary finite n. Such instructions are to be given quite explicitly, in a form in which they could be followed by a computing machine, or by a human, capable of carrying out only elementary operations on symbols.
An "enumerably infinite set" is one whose elements can be put into one-to-one correspondence with the integers. Thus Boolos and Jeffrey are saying that an algorithm implies instructions for a process that "creates" output integers from an arbitrary "input" integer or integers that, in theory, can be arbitrarily large. Thus an algorithm can be an algebraic equation such as y = m + n, with two arbitrary "input variables" m and n that produce an output y. But various authors' attempts to define the notion indicate that the word implies much more than this, something on the order of: precise instructions for a fast, efficient, "good" process that specifies the "moves" of "the computer" to find and process arbitrary input integers/symbols m and n, symbols + and =, and "effectively" produce, in a "reasonable" time, output integer y at a specified place and in a specified format.
Internationalization and localization
In computing, internationalization and localization are means of adapting computer software to different languages, regional peculiarities and technical requirements of a target locale. Internationalization is the process of designing a software application so that it can be adapted to various languages and regions without engineering changes. Localization is the process of adapting internationalized software for a specific region or language by translating text and adding locale-specific components. Localization uses the flexibility provided by internationalization. The terms are abbreviated to the numeronyms i18n for internationalization and L10n for localization, due to the length of the words. Some companies, like IBM and Sun Microsystems, use the term globalization, g11n, for the combination of internationalization and localization, also known as "glocalization". Microsoft defines internationalization as a combination of world-readiness and localization. World-readiness is a developer task, which enables a product to be used with multiple scripts and cultures and separates user interface resources into a localizable format.
Hewlett-Packard and HP-UX created a system called "National Language Support" or "Native Language Support" to produce localizable software. According to Software without frontiers, the design aspects to consider when internationalizing a product are "data encoding and documentation, software construction, hardware device support, user interaction". Translation is the most time-consuming component of language localization. This may involve:
- for film and audio, translation of spoken words or music lyrics using either dubbing or subtitles
- text translation for printed materials and digital media
- potentially altering images and logos containing text to contain translations or generic icons
- accounting for different translation lengths and differences in character sizes, which can cause layouts that work well in one language to work poorly in others
- consideration of differences in dialect, register or variety
- writing conventions, such as the formatting of numbers and of dates and times, including the use of different calendars
Computer software can encounter differences above and beyond straightforward translation of words and phrases, because computer programs can generate content dynamically.
These differences may need to be taken into account by the internationalization process in preparation for translation. Many of these differences are so regular that a conversion between languages can be automated. The Common Locale Data Repository by Unicode provides a collection of such differences; its data is used by major operating systems, including Microsoft Windows, macOS and Debian, and by major Internet companies or projects such as Google and the Wikimedia Foundation. Examples of such differences include:
- Different "scripts" in different writing systems use different characters: a different set of letters, logograms, or symbols. Modern systems use the Unicode standard to represent many different languages with a single character encoding.
- Writing direction is left to right in most European languages, right-to-left in Hebrew and Arabic, both in boustrophedon scripts, and optionally vertical in some Asian languages.
- Complex text layout is needed for languages where characters change shape depending on context.
- Capitalization exists in some scripts and not in others.
- Different languages and writing systems have different text sorting rules.
- Different languages have different numeral systems, which might need to be supported if Western Arabic numerals are not used.
- Different languages have different pluralization rules, which can complicate programs that dynamically display numerical content.
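Pluralization is a good example of why a naive "add an s" approach fails across locales. The sketch below imitates CLDR-style plural-category selection; the rules and message tables are simplified illustrations written for this article, not the full CLDR data, and all names (PLURAL_RULES, MESSAGES, localize) are ours:

```python
# Map a language and a count to a plural category, CLDR-style.
PLURAL_RULES = {
    # English: "one" for exactly 1, otherwise "other".
    "en": lambda n: "one" if n == 1 else "other",
    # Polish: "one" / "few" / "many" (simplified from the real CLDR rule).
    "pl": lambda n: ("one" if n == 1
                     else "few" if n % 10 in (2, 3, 4) and n % 100 not in (12, 13, 14)
                     else "many"),
}

# One translated message per plural category, per language.
MESSAGES = {
    "en": {"one": "{n} file", "other": "{n} files"},
    "pl": {"one": "{n} plik", "few": "{n} pliki", "many": "{n} plików"},
}

def localize(lang, n):
    category = PLURAL_RULES[lang](n)
    return MESSAGES[lang][category].format(n=n)

print(localize("en", 1))   # 1 file
print(localize("pl", 2))   # 2 pliki
print(localize("pl", 5))   # 5 plików
```

English needs two message forms while Polish needs at least three, which is why internationalized software selects a plural category through locale data rather than hard-coding a singular/plural pair.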
Other grammar rules might vary as well, e.g. the genitive. Different languages use different punctuation. Keyboard shortcuts can only make use of buttons that exist on the keyboard layout being localized for; if a shortcut corresponds to a word in a particular language, it may need to be changed. Different countries have different economic conventions, including variations in:
- paper sizes
- broadcast television systems and popular storage media
- telephone number formats
- postal address formats, postal codes, and choice of delivery services
- currency (ISO 4217 codes are used for internationalization)
- systems of measurement
- battery sizes
- voltage and current standards
In particular, the United States and Europe differ in most of these cases; other areas follow one of these. Specific third-party services, such as online maps, weather reports, or payment service providers, might not be available worldwide from the same carriers, or at all. Time zones vary across the world, and this must be taken into account even if a product originally interacted only with people in a single time zone.
For internationalization, UTC is used internally and then converted to a local time zone for display purposes.
DOS is a family of disk operating systems, hence the name. DOS consists of MS-DOS and a rebranded version under the name IBM PC DOS, both of which were introduced in 1981. Other compatible systems from other manufacturers include DR-DOS, ROM-DOS, PTS-DOS, FreeDOS. MS-DOS dominated the x86-based IBM PC compatible market between 1981 and 1995. Dozens of other operating systems use the acronym "DOS", including the mainframe DOS/360 from 1966. Others are Apple DOS, Apple ProDOS, Atari DOS, Commodore DOS, TRSDOS, AmigaDOS. Fictional operating systems have used this acronym as well, such as GLaDOS from the video game Portal. IBM PC DOS and its predecessor, 86-DOS, resembled Digital Research's CP/M—the dominant disk operating system for 8-bit Intel 8080 and Zilog Z80 microcomputers—but instead ran on Intel 8086 16-bit processors; when IBM introduced the IBM PC, built with the Intel 8088 microprocessor, they needed an operating system. Seeking an 8088-compatible build of CP/M, IBM approached Microsoft CEO Bill Gates.
IBM was sent to Digital Research, and a meeting was set up. However, the initial negotiations for the use of CP/M broke down: Digital Research founder Gary Kildall refused, and IBM withdrew. IBM again approached Bill Gates, and Gates in turn approached Seattle Computer Products. There, programmer Tim Paterson had developed a variant of CP/M-80, intended as an internal product for testing SCP's new 16-bit Intel 8086 CPU card for the S-100 bus. The system was named QDOS before being made commercially available as 86-DOS. Microsoft purchased 86-DOS for $50,000; this became the Microsoft Disk Operating System, MS-DOS, introduced in 1981. Within a year Microsoft licensed MS-DOS to over 70 other companies, which supplied the operating system for their own hardware, sometimes under their own names. Microsoft required the use of the MS-DOS name, with the exception of the IBM variant. IBM continued to develop their version, PC DOS, for the IBM PC. Digital Research became aware that an operating system similar to CP/M was being sold by IBM and threatened legal action.
IBM responded by offering an agreement: they would give PC consumers a choice of PC DOS or CP/M-86, Kildall's 8086 version. Side by side, CP/M cost $200 more than PC DOS, and sales were low. CP/M faded, with MS-DOS and PC DOS becoming the marketed operating system for PCs and PC compatibles. Microsoft sold MS-DOS only to original equipment manufacturers. One major reason for this was that not all early PCs were 100 percent IBM PC compatible. DOS was structured such that there was a separation between the system-specific device driver code and the DOS kernel, and Microsoft provided an OEM Adaptation Kit which allowed OEMs to customize the device driver code to their particular system. By the early 1990s, most PCs adhered to IBM PC standards, so Microsoft began selling MS-DOS at retail with MS-DOS 5.0. In the mid-1980s Microsoft developed a multitasking version of DOS; this version of DOS is referred to as "European MS-DOS 4" because it was developed for ICL and licensed to several European companies. This version of DOS supports preemptive multitasking, shared memory, device helper services and New Executable format executables.
None of these features were used in later versions of DOS, but they were used to form the basis of the OS/2 1.0 kernel. This version of DOS is distinct from the widely released PC DOS 4.0, which was developed by IBM and based upon DOS 3.3. Digital Research attempted to regain the market lost from CP/M-86, first with Concurrent DOS, FlexOS and DOS Plus, and later with Multiuser DOS and DR DOS. Digital Research was bought by Novell, and DR DOS became Novell DOS 7. Gordon Letwin wrote in 1995 that "DOS was, when we first wrote it, a one-time throw-away product intended to keep IBM happy so that they'd buy our languages". Microsoft expected it to be an interim solution; the company planned to improve MS-DOS over time so it would be indistinguishable from single-user Xenix, or XEDOS, which would run on the Motorola 68000, Zilog Z-8000 and LSI-11. IBM, however, did not want to replace DOS. After AT&T began selling Unix, Microsoft and IBM began developing OS/2 as an alternative. The two companies had a series of disagreements over two successor operating systems to DOS, OS/2 and Windows.
They split development of their DOS systems as a result. The last retail version of MS-DOS was MS-DOS 6.22. The last retail version of PC DOS was PC DOS 2000, though IBM did develop PC DOS 7.10 for OEMs and internal use. The FreeDOS project began on 26 June 1994, when Microsoft announced it would no longer sell or support MS-DOS. Jim Hall posted a manifesto proposing the development of an open-source replacement. Within a few weeks, other programmers, including Pat Villani and Tim Norman, joined the project. A kernel, the COMMAND.COM command-line interpreter and core utilities were created by pooling code they had written
A floppy disk, also known as a floppy, diskette, or disk, is a type of disk storage composed of a disk of thin and flexible magnetic storage medium, sealed in a rectangular plastic enclosure lined with fabric that removes dust particles. Floppy disks are read and written by a floppy disk drive. Floppy disks, first as 8-inch media and later in 5 1⁄4-inch and 3 1⁄2-inch sizes, were a ubiquitous form of data storage and exchange from the mid-1970s into the first years of the 21st century. By 2006 computers were rarely manufactured with installed floppy disk drives; these formats are handled by older equipment. The prevalence of floppy disks in late-twentieth-century culture was such that many electronic and software programs still use the floppy disk as a save icon. While floppy disk drives still have some limited uses with legacy industrial computer equipment, they have been superseded by data storage methods with much greater capacity, such as USB flash drives, flash storage cards, portable external hard disk drives, optical discs, cloud storage and storage available through computer networks.
The first commercial floppy disks, developed in the late 1960s, were 8 inches in diameter. These disks and their associated drives were produced and improved upon by IBM and other companies such as Memorex, Shugart Associates and Burroughs Corporation. The term "floppy disk" appeared in print as early as 1970, and although IBM announced its first media as the "Type 1 Diskette" in 1973, the industry continued to use the terms "floppy disk" or "floppy". In 1976, Shugart Associates introduced the 5 1⁄4-inch FDD, and by 1978 there were more than 10 manufacturers producing such drives. There were competing floppy disk formats, with hard- and soft-sectored versions and encoding schemes such as FM, MFM, M2FM and GCR. The 5 1⁄4-inch format displaced the 8-inch one for most applications, and the hard-sectored disk format disappeared. The most common capacity of the 5 1⁄4-inch format in DOS-based PCs was 360 KB, using the double-sided double-density (DSDD) format with MFM encoding. In 1984, IBM introduced the 1.2 MB dual-sided 5 1⁄4-inch floppy disk with its PC-AT model, but it never became popular.
IBM started using the 720 KB double-density 3 1⁄2-inch microfloppy disk on its Convertible laptop computer in 1986, and the 1.44 MB high-density version with the PS/2 line in 1987. These disk drives could be added to older PC models. In 1988, IBM introduced a drive for 2.88 MB "DSED" diskettes in its top-of-the-line PS/2 models, but this was a commercial failure. Throughout the early 1980s, limitations of the 5 1⁄4-inch format had become clear. Designed to be more practical than the 8-inch format, it was itself too large, and a number of solutions were developed, with drives of 2, 2 1⁄2, 3, 3 1⁄4, 3 1⁄2 and 4 inches offered by various companies. They all shared a number of advantages over the old format, including a rigid case with a sliding metal shutter over the head slot, which helped protect the delicate magnetic medium from dust and damage, and a sliding write-protection tab, far more convenient than the adhesive tabs used with earlier disks. The large market share of the well-established 5 1⁄4-inch format, however, made it difficult for these diverse, mutually incompatible new formats to gain significant market share.
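The capacities quoted above follow directly from the disk geometry: sides times tracks per side times sectors per track times bytes per sector. As a rough sketch, the following uses the standard PC geometry figures for each format (these parameters are common knowledge about the formats, not taken from this article); note that "1.44 MB" is a mixed unit, actually 1440 KiB.

```python
# Sketch: deriving the common floppy capacities from disk geometry.
# Geometry figures are the standard PC format parameters (assumed, not
# taken from the text above).

def capacity_bytes(sides: int, tracks: int, sectors: int, sector_size: int = 512) -> int:
    """Raw formatted capacity = sides x tracks per side x sectors per track x bytes per sector."""
    return sides * tracks * sectors * sector_size

# (sides, tracks per side, sectors per track) for the standard PC formats
formats = {
    '5.25-inch DSDD (360 KB)': (2, 40, 9),
    '5.25-inch HD   (1.2 MB)': (2, 80, 15),
    '3.5-inch  DD   (720 KB)': (2, 80, 9),
    '3.5-inch  HD  (1.44 MB)': (2, 80, 18),
    '3.5-inch  ED  (2.88 MB)': (2, 80, 36),
}

for name, (sides, tracks, sectors) in formats.items():
    b = capacity_bytes(sides, tracks, sectors)
    print(f"{name}: {b} bytes = {b // 1024} KiB")
```

For example, the high-density 3 1⁄2-inch format works out to 2 × 80 × 18 × 512 = 1,474,560 bytes, i.e. 1440 KiB, marketed as "1.44 MB".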
A variant on the Sony design, introduced in 1982 by a large number of manufacturers, was rapidly adopted. The term floppy disk persisted even though the newer-style disks have a rigid case around the internal flexible disk. By the end of the 1980s, 5 1⁄4-inch disks had been superseded by 3 1⁄2-inch disks; during this transition, many PCs came equipped with drives of both sizes. By the mid-1990s, 5 1⁄4-inch drives had disappeared, as the 3 1⁄2-inch disk became the predominant floppy disk. The advantages of the 3 1⁄2-inch disk were its higher capacity, its smaller size, and its rigid case, which provided better protection from dirt and other environmental risks. If a person touches the exposed disk surface of a 5 1⁄4-inch disk through the drive hole, fingerprints may foul the disk, and subsequently the drive head if the disk is loaded into a drive; it is also easy to damage a disk of this type by folding or creasing it, rendering it at least partially unreadable. However, due to its simpler construction, the unit price of the 5 1⁄4-inch disk was lower throughout its history, in the range of a third to a half that of a 3 1⁄2-inch disk.
Floppy disks became commonplace during the 1980s and 1990s in their use with personal computers to distribute software, transfer data and create backups. Before hard disks became affordable to the general population, floppy disks were often used to store a computer's operating system. Most home computers from that period had an elementary OS and BASIC stored in ROM, with the option of loading a more advanced operating system from a floppy disk. By the early 1990s, increasing software size meant that large packages like Windows or Adobe Photoshop required a dozen disks or more. In 1996, there were an estimated five billion standard floppy disks in use. Distribution of larger packages was later replaced by CD-ROMs, DVDs and online distribution.