The compact disc (CD) is a digital optical disc data storage format, co-developed by Philips and Sony and released in 1982. The format was originally developed to store and play only sound recordings but was later adapted for the storage of data. Several other formats were derived from it, including write-once audio and data storage (CD-R), rewritable media (CD-RW), Video CD, Super Video CD, Photo CD, Picture CD, CD-i, and Enhanced Music CD. The first commercially available audio CD player, the Sony CDP-101, was released in October 1982 in Japan. Standard CDs have a diameter of 120 millimetres and can hold up to about 80 minutes of uncompressed audio or about 700 MiB of data; the Mini CD has various diameters ranging from 60 to 80 millimetres. At the time of the technology's introduction in 1982, a CD could store much more data than a personal computer hard drive, which would typically hold about 10 MB. By 2010, hard drives offered as much storage space as a thousand CDs, while their prices had plummeted to commodity level. In 2004, worldwide sales of audio CDs, CD-ROMs, and CD-Rs reached about 30 billion discs.
By 2007, 200 billion CDs had been sold worldwide. From the early 2000s, CDs were increasingly replaced by other forms of digital storage and distribution, with the result that by 2010 the number of audio CDs being sold in the U.S. had dropped about 50% from their peak. In 2014, revenues from digital music services matched those from physical format sales for the first time. American inventor James T. Russell has been credited with inventing the first system to record digital information on an optical transparent foil, lit from behind by a high-power halogen lamp. Russell's patent application was filed in 1966, and he was granted a patent in 1970. Following litigation, Sony and Philips licensed Russell's patents in the 1980s. The compact disc is an evolution of LaserDisc technology, where a focused laser beam is used that enables the high information density required for high-quality digital audio signals. Prototypes were developed by Philips and Sony independently in the late 1970s. Although dismissed by Philips Research management as a trivial pursuit, the CD became the primary focus for Philips as the LaserDisc format struggled.
In 1979, Sony and Philips set up a joint task force of engineers to design a new digital audio disc. After a year of experimentation and discussion, the Red Book CD-DA standard was published in 1980. After their commercial release in 1982, compact discs and their players were popular. Despite costing up to $1,000, over 400,000 CD players were sold in the United States between 1983 and 1984. By 1988, CD sales in the United States surpassed those of vinyl LPs, and by 1992 CD sales surpassed those of prerecorded music cassette tapes. The success of the compact disc has been credited to the cooperation between Philips and Sony, which together agreed upon and developed compatible hardware. The unified design of the compact disc allowed consumers to purchase any disc or player from any company, and allowed the CD to dominate the at-home music market unchallenged. In 1974, Lou Ottens, director of the audio division of Philips, started a small group with the aim of developing an analog optical audio disc with a diameter of 20 cm and a sound quality superior to that of the vinyl record.
However, due to the unsatisfactory performance of the analog format, two Philips research engineers recommended a digital format in March 1974. In 1977, Philips established a laboratory with the mission of creating a digital audio disc; the diameter of Philips's prototype compact disc was set at 11.5 cm, the diagonal of an audio cassette. Heitaro Nakajima, who developed an early digital audio recorder within Japan's national public broadcasting organization NHK in 1970, became general manager of Sony's audio department in 1971. His team developed a digital PCM adaptor audio tape recorder using a Betamax video recorder in 1973. After this, the leap to storing digital audio on an optical disc was made in 1974. Sony first publicly demonstrated an optical digital audio disc in September 1976. A year later, in September 1977, Sony showed the press a 30 cm disc that could play 60 minutes of digital audio using MFM modulation. In September 1978, the company demonstrated an optical digital audio disc with a 150-minute playing time, a 44,056 Hz sampling rate, 16-bit linear resolution, and cross-interleaved error correction code: specifications similar to those settled upon for the standard compact disc format in 1980.
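The storage demands implied by these PCM parameters can be checked with a short calculation; a minimal sketch, using the later Red Book figures (44,100 Hz, 16-bit, stereo, 80 minutes) and an illustrative function name:

```python
def pcm_bytes_per_second(sample_rate_hz, bits_per_sample, channels):
    """Data rate of uncompressed PCM audio in bytes per second."""
    return sample_rate_hz * (bits_per_sample // 8) * channels

# Red Book CD-DA parameters: 44,100 Hz sampling, 16-bit samples, stereo.
rate = pcm_bytes_per_second(44_100, 16, 2)
print(rate)                   # 176400 bytes/s (~1.4 Mbit/s)

# 80 minutes of audio, before any error-correction overhead:
total = rate * 80 * 60
print(total)                  # 846720000 bytes (~846.7 MB)
```

The raw audio payload exceeds the ~700 MiB quoted for data discs because CD-ROM spends part of each sector on extra error correction that audio playback does not need.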
Technical details of Sony's digital audio disc were presented during the 62nd AES Convention, held on 13–16 March 1979 in Brussels. Sony's AES technical paper was published on 1 March 1979. A week later, on 8 March, Philips publicly demonstrated a prototype of an optical digital audio disc at a press conference called "Philips Introduce Compact Disc" in Eindhoven, Netherlands. Sony executive Norio Ohga, later CEO and chairman of Sony, and Heitaro Nakajima were convinced of the format's commercial potential and pushed further development despite widespread skepticism. As a result, in 1979, Sony and Philips set up a joint task force of engineers to design a new digital audio disc. Led by engineers Kees Schouhamer Immink and Toshitada Doi, the research pushed forward laser and optical disc technology. After a year of experimentation and discussion, the task force produced the Red Book CD-DA standard, first published in 1980.
Booting

In computing, booting is the process of starting up a computer or computer appliance until it can be used. It can be initiated by software command. After the power is switched on, the computer initially has no software loaded and can read only a part of its storage called read-only memory (ROM). There, a small program called firmware is stored; it performs power-on self-tests and, in most cases, provides access to other types of memory such as a hard disk and main memory. The firmware loads larger programs into main memory and runs them. In general-purpose computers, and additionally in smartphones and tablets, a boot manager may optionally be run; the boot manager lets a user choose which operating system to run and set more complex parameters for it. The firmware or the boot manager loads the boot loader into memory and runs it; this piece of software is able to place an operating system kernel, such as that of Windows or Linux, into the computer's main memory and run it. Afterwards, the kernel starts so-called user-space software; a well-known example is the graphical user interface, which lets the user log in to the computer or run other applications.
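The chain of stages described above (firmware, boot loader, kernel, user space) can be sketched as a toy hand-off in which each stage does its work and returns the next stage to run; all names here are illustrative, not real firmware interfaces:

```python
# Toy model of the boot chain: each stage runs, then hands
# control to the next stage by returning it.
def power_on_self_test():
    # Real firmware would check RAM, devices, etc.; stubbed here.
    return True

def firmware():
    # Runs from ROM: self-test, then locate the next stage.
    assert power_on_self_test()
    return boot_loader

def boot_loader():
    # Places the kernel in main memory, then jumps to it.
    return kernel

def kernel():
    # Starts user-space software such as a login GUI.
    return "user space running"

stage = firmware()
while callable(stage):
    stage = stage()
print(stage)  # -> user space running
```

A boot manager, when present, would simply be one more callable inserted between `firmware` and `boot_loader` that asks the user which loader to return.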
The whole process may take from seconds to tens of seconds on modern general-purpose computers. Restarting a computer is called a reboot, which can be "hard", e.g. after electrical power to the CPU is switched from off to on, or "soft", where the power is not cut. On some systems, a soft boot may optionally clear RAM to zero. Both hard and soft booting can be initiated by hardware, such as a button press, or by software command. Booting is complete when the operative runtime system, typically the operating system and some applications, is attained. The process of returning a computer from a state of hibernation or sleep does not involve booting. Some embedded systems do not require a noticeable boot sequence to begin functioning and, when turned on, may simply run operational programs that are stored in ROM. All computing systems are state machines, and a reboot may be the only method to return to a designated zero-state from an unintended, locked state. In addition to loading an operating system or stand-alone utility, the boot process can load a storage dump program for diagnosing problems in an operating system.
Boot is short for bootstrap or bootstrap load and derives from the phrase to pull oneself up by one's bootstraps. The usage calls attention to the requirement that, if most software is loaded onto a computer by other software already running on the computer, some mechanism must exist to load the initial software onto the computer. Early computers used a variety of ad-hoc methods to get a small program into memory to solve this problem. The invention of read-only memory of various types solved this paradox by allowing computers to be shipped with a start-up program that could not be erased. Growth in the capacity of ROM has allowed ever more elaborate start-up procedures to be implemented. There are many different methods available to load a short initial program into a computer; these methods range from simple, physical input to removable media that can hold more complex programs. Early computers in the 1940s and 1950s were one-of-a-kind engineering efforts that could take weeks to program, and program loading was one of many problems that had to be solved.
An early computer, ENIAC, had no "program" stored in memory; it was set up for each problem by a configuration of interconnecting cables. Bootstrapping did not apply to ENIAC, whose hardware configuration was ready for solving problems as soon as power was applied. The EDSAC system, the second stored-program computer to be built, used stepping switches to transfer a fixed program into memory when its start button was pressed. The program stored on this device, which David Wheeler completed in late 1948, loaded further instructions from punched tape and executed them. The first programmable computers for commercial sale, such as the UNIVAC I and the IBM 701, included features to make their operation simpler. They included instructions that performed a complete input or output operation; the same hardware logic could be used to load, at the press of a single button, the contents of a punched card or other input medium, such as a magnetic drum or magnetic tape, that contained a bootstrap program. This booting concept went by a variety of names for IBM computers of the 1950s and early 1960s, but IBM used the term "Initial Program Load" with the IBM 7030 Stretch and later used it for their mainframe lines, starting with the System/360 in 1964.
The IBM 701 computer had a "Load" button that initiated reading of the first 36-bit word into main memory from a punched card in a card reader, a magnetic tape in a tape drive, or a magnetic drum unit, depending on the position of the Load Selector switch. The left 18-bit half-word was then executed as an instruction, which read additional words into memory; the loaded boot program was then executed, which, in turn, loaded a larger program from that medium into memory without further help from the human operator. The term "boot" has been used in this sense since at least 1958. Other IBM computers of that era had similar features. For example, the IBM 1401 system used a card reader to load a program from a punched card. The 80 characters stored in the punched card were read into memory locations 001 to 080, then the computer would branch to memory location 001 to read its first stored instruction. This instruction was always the same: move the information in these first 80 memory locations to an assembly area where the information in punched cards 2, 3, 4, and so on, could be combined to form the stored program.
Once this information was moved to the assembly area, the machine would branch to an instruction in location 080 and the next card
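The 1401-style load sequence described above can be sketched as a toy simulation; the memory size, assembly-area address, and card contents below are simplified assumptions for illustration, not authentic 1401 details:

```python
# Toy simulation of card-based bootstrapping: card 1 lands in
# locations 001-080, and the "instruction" there moves those
# characters to an assembly area where later cards are combined.
MEMORY_SIZE = 16_000           # illustrative, not a real 1401 size
ASSEMBLY_AREA = 1_000          # illustrative address
memory = [" "] * MEMORY_SIZE

def read_card(card_text, dest):
    """Read one 80-column punched card into memory starting at dest."""
    for i, ch in enumerate(card_text.ljust(80)[:80]):
        memory[dest + i] = ch

read_card("LOADER", dest=1)    # card 1 -> locations 001-080
# "Branch to 001": execute the move instruction stored there by
# copying the first 80 locations into the assembly area.
memory[ASSEMBLY_AREA:ASSEMBLY_AREA + 80] = memory[1:81]
print("".join(memory[ASSEMBLY_AREA:ASSEMBLY_AREA + 6]))  # -> LOADER
```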
USB flash drive
A USB flash drive, also known as a thumb drive, pen drive, gig stick, flash stick, jump drive, disk key, disk-on-key, flash drive, memory stick, USB key, USB stick or USB memory, is a data storage device that includes flash memory with an integrated USB interface. It is removable and much smaller than an optical disc; most weigh less than 1 oz. Since first appearing on the market in late 2000, as with all other computer memory devices, storage capacities have risen while prices have dropped. As of March 2016, flash drives with anywhere from 8 to 256 GB were commonly sold, while 512 GB and 1 TB units were less frequent. As of 2018, 2 TB flash drives were the largest available in terms of storage capacity. Some allow up to 100,000 write/erase cycles, depending on the exact type of memory chip used, and are thought to last between 10 and 100 years under normal circumstances. USB flash drives are used for storage, data back-up, and the transfer of computer files. Compared with floppy disks or CDs, they are smaller, have more capacity, and are more durable due to a lack of moving parts.
Additionally, they are immune to electromagnetic interference and are unharmed by surface scratches. Until about 2005, most desktop and laptop computers were supplied with floppy disk drives in addition to USB ports, but floppy disk drives became obsolete after the widespread adoption of USB ports and the larger capacity of USB drives compared to the 1.44 MB 3.5-inch floppy disk. USB flash drives use the USB mass storage device class standard, supported natively by modern operating systems such as Windows, macOS, Linux, and other Unix-like systems, as well as many BIOS boot ROMs. USB drives with USB 2.0 support can store more data and transfer it faster than much larger optical disc drives like CD-RW or DVD-RW drives, and can be read by many other systems such as the Xbox One, PlayStation 4, DVD players, automobile entertainment systems, and a number of handheld devices such as smartphones and tablet computers, though the electronically similar SD card is better suited for those devices. A flash drive consists of a small printed circuit board carrying the circuit elements and a USB connector, insulated electrically and protected inside a plastic, metal, or rubberized case, which can be carried in a pocket or on a key chain, for example.
The USB connector may be protected by a removable cap or by retracting into the body of the drive, although it is not likely to be damaged if unprotected. Most flash drives use a standard type-A USB connection allowing connection with a port on a personal computer, but drives for other interfaces also exist. USB flash drives draw power from the computer via the USB connection. Some devices combine the functionality of a portable media player with USB flash storage. M-Systems, an Israeli company, was granted a US patent on November 14, 2000, titled "Architecture for a -based Flash Disk", crediting the invention to Amir Ban, Dov Moran and Oron Ogdan, all M-Systems employees at the time; the patent application had been filed by M-Systems in April 1999. Also in 1999, IBM filed an invention disclosure by one of its employees. Flash drives were first sold by Trek 2000 International, a company in Singapore, which began selling in early 2000. IBM became the first to sell USB flash drives in the United States in 2000; the initial storage capacity of a flash drive was 8 MB.
Another version of the flash drive, described as a pen drive, was also developed. Pua Khein-Seng from Malaysia has been credited with this invention. Patent disputes have arisen over the years, with competing companies, including Singaporean company Trek Technology and Chinese company Netac Technology, attempting to enforce their patents. Trek has lost such battles in other countries. Netac Technology has brought lawsuits against PNY Technologies, aigo, and Taiwan's Acer and Tai Guen Enterprise Co. Flash drives are measured by the rate at which they transfer data. Transfer rates may be given in megabytes per second, megabits per second, or in optical drive multipliers such as "180X". File transfer rates vary considerably among devices. Second-generation flash drives have claimed to read at up to 30 MB/s and write at about half that rate, about 20 times faster than the theoretical maximum transfer rate of the previous standard, USB 1.1, which is limited to 12 Mbit/s (1.5 MB/s) before accounting for protocol overhead. The effective transfer rate of a device is significantly affected by the data access pattern.
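The "about 20 times faster" comparison follows directly from unit conversion, since USB 1.1's ceiling is quoted in megabits while drive speeds are quoted in megabytes; a quick check:

```python
def mbit_to_mbyte(mbit_per_s):
    """Convert megabits per second to megabytes per second (8 bits per byte)."""
    return mbit_per_s / 8

usb11_ceiling = mbit_to_mbyte(12)    # USB 1.1: 12 Mbit/s -> 1.5 MB/s
claimed_read = 30                    # second-generation drive read speed, MB/s
print(claimed_read / usb11_ceiling)  # -> 20.0
```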
By 2002, USB flash drives had USB 2.0 connectivity, which has 480 Mbit/s as its transfer rate upper bound. That same year, Intel sparked widespread use of second-generation USB by including it in its laptops. Third-generation USB flash drives were announced in late 2008 and became available in 2010. Like USB 2.0 before it, USB 3.0 improved data transfer rates compared to its predecessor: the USB 3.0 interface specified transfer rates of up to 5 Gbit/s, compared to USB 2.0's 480 Mbit/s. By 2010 the maximum available storage capacity for the devices had reached upwards of 128 GB. USB 3.0 was slow to appear in laptops; as of 2010, the majority of laptop models still contained only USB 2.0 ports. In January 2013, the tech company Kingston released a flash drive with 1 TB of storage. The first USB 3.1 type-C flash drives, with read/write speeds of around 530 MB/s, were announced in March 2015. As of July 2016, flash drives within the 8 to 256 GB
Preboot Execution Environment
In computing, the Preboot eXecution Environment (PXE) specification describes a standardized client-server environment that boots a software assembly, retrieved from a network, on PXE-enabled clients. On the client side it requires only a PXE-capable network interface controller (NIC), and uses a small set of industry-standard network protocols such as DHCP and TFTP. The concept behind PXE originated in the early days of protocols like BOOTP/DHCP/TFTP, and as of 2015 it forms part of the Unified Extensible Firmware Interface (UEFI) standard. In modern data centers, PXE is the most frequent choice for operating system booting and deployment. Since the beginning of computer networks, there has been a persistent need for client systems which can boot appropriate software images, with appropriate configuration parameters, both retrieved at boot time from one or more network servers. This goal requires a client to use a set of pre-boot services based on industry-standard network protocols. Additionally, the Network Bootstrap Program (NBP) that is downloaded and run must be built using a client firmware layer providing a hardware-independent, standardized way to interact with the surrounding network booting environment.
In this context, the availability of and subjection to standards are key factors required to guarantee the interoperability of the network boot process. One of the first attempts in this regard was the Bootstrap Loading using TFTP standard (RFC 906), published in 1984, which established the Trivial File Transfer Protocol standard (RFC 783, published in 1981) as the standard file transfer protocol for bootstrap loading. It was followed shortly after by the Bootstrap Protocol standard (RFC 951), published in 1985, which allowed a disk-less client machine to discover its own IP address, the address of a TFTP server, and the name of an NBP to be loaded into memory and executed. BOOTP implementation difficulties, among other reasons, eventually led to the development of the Dynamic Host Configuration Protocol standard (RFC 2131), published in 1997. The pioneering TFTP/BOOTP/DHCP approach fell short, as at the time it did not define the required standardized client side of the provisioning environment. The Preboot Execution Environment was introduced as part of the Wired for Management framework by Intel and is described in the specification published by Intel and SystemSoft.
PXE version 2.0 was released in December 1998, and update 2.1 was made public in September 1999. The PXE environment makes use of several standard client-server protocols such as DHCP and TFTP. Within the PXE schema, the client side of the provisioning equation is an integral part of the PXE standard, implemented either as a Network Interface Card BIOS extension or, in modern devices, as UEFI code. This distinctive firmware layer makes available at the client the functions of a basic Universal Network Device Interface, a minimalistic UDP/IP stack, a Preboot client module and a TFTP client module, which together form the PXE application programming interfaces used by the NBP when it needs to interact with the services offered by the server counterpart of the PXE environment. TFTP's low throughput, especially when used over high-latency links, has been mitigated by the TFTP Blocksize Option (RFC 2348, published in May 1998) and by the TFTP Windowsize Option (RFC 7440, published in January 2015), allowing larger payload deliveries and thus improving throughput.
The PXE environment relies on a combination of industry-standard Internet protocols, namely UDP/IP, DHCP and TFTP. These protocols were selected because they are easily implemented in the client's NIC firmware, resulting in standardized small-footprint PXE ROMs. Standardization, the small size of PXE firmware images and their low use of resources are some of the primary design goals, allowing the client side of the PXE standard to be identically implemented on a wide variety of systems, ranging from powerful client computers to resource-limited single-board computers and system-on-a-chip computers. DHCP is used to provide the appropriate client network parameters and the location of the TFTP server that hosts, ready for download, the initial bootstrap program and complementary files. To initiate a PXE bootstrap session, the DHCP component of the client's PXE firmware broadcasts a DHCPDISCOVER packet containing PXE-specific options to port 67/UDP; the PXE-specific options identify the initiated DHCP transaction as a PXE transaction.
Standard DHCP servers will be able to answer with a regular DHCPOFFER carrying networking information but not the PXE-specific parameters, so a PXE client will not be able to boot if it only receives an answer from a non-PXE-enabled DHCP server. After parsing a DHCPOFFER from a PXE-enabled DHCP server, the client will be able to set its own network IP address, IP mask, etc., and to point to the network-located booting resources, based on the received TFTP server IP address and the name of the NBP. The client next transfers the NBP into its own random-access memory using TFTP, verifies it, and finally boots from it. NBPs are just the first link in the boot chain process, and they generally request via TFTP a small set of complementary files in order to get running a minimalistic OS executive, including its TCP/IP stack. At this point, the remaining instructions required to boot or install a full OS are provided
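After the DHCP exchange, the NBP download begins with a TFTP read request. Per RFC 1350, an RRQ packet is a 2-byte opcode (1), followed by the NUL-terminated filename and transfer mode; a minimal sketch, where the filename is only an illustrative example of a common NBP:

```python
import struct

def tftp_rrq(filename, mode="octet"):
    """Build a TFTP read-request (RRQ) packet per RFC 1350."""
    return (struct.pack("!H", 1)                    # opcode 1 = RRQ, big-endian
            + filename.encode("ascii") + b"\x00"    # NUL-terminated filename
            + mode.encode("ascii") + b"\x00")       # NUL-terminated mode

# A PXE client would send this datagram to port 69/UDP of the TFTP
# server named in the DHCP reply.
print(tftp_rrq("pxelinux.0"))  # b'\x00\x01pxelinux.0\x00octet\x00'
```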
Command-line interface

A command-line interface (CLI) or command language interpreter, also known as a command-line user interface, console user interface, or character user interface, is a means of interacting with a computer program where the user issues commands to the program in the form of successive lines of text. A program which handles the interface is called a command-line interpreter or shell. The CLI was the primary means of interaction with most computer systems on computer terminals in the mid-1960s, and continued to be used throughout the 1970s and 1980s on OpenVMS, Unix systems and personal computer systems including MS-DOS, CP/M and Apple DOS. The interface is implemented with a command-line shell, a program that accepts commands as text input and converts commands into appropriate operating system functions. Today, many end users rarely, if ever, use command-line interfaces and instead rely upon graphical user interfaces and menu-driven interactions. However, many software developers, system administrators and advanced users still rely on command-line interfaces to perform tasks more efficiently, configure their machine, or access programs and program features that are not available through a graphical interface.
Alternatives to the command line include, but are not limited to, text-based user interface menus, keyboard shortcuts, and various other desktop metaphors centered on the pointer. Examples of these include Microsoft Windows versions 1, 2, 3, 3.1, and 3.11, DosShell, and Mouse Systems PowerPanel. Programs with command-line interfaces are generally easier to automate via scripting. Command-line interfaces for software other than operating systems include a number of programming languages such as Tcl/Tk, PHP and others, as well as utilities such as the compression utility WinZip, and some FTP and SSH/Telnet clients. Compared with a graphical user interface, a command line requires fewer system resources to implement. Since options to commands are given in a few characters in each command line, an experienced user finds the options easier to access. Automation of repetitive tasks is simplified: most operating systems using a command-line interface support some mechanism for storing frequently used sequences of commands in a disk file, for re-use. A command-line history can be kept, allowing repetition of commands.
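The points above about terse options and scriptability can be illustrated with a toy command parser; the command names and flags here are invented for the example, not any real shell's syntax:

```python
def run_command(line):
    """Parse a toy command line: a command name followed by short options."""
    parts = line.split()
    command = parts[0]
    options = [p for p in parts[1:] if p.startswith("-")]
    return command, options

# Short options pack meaning into a few characters per command line:
print(run_command("list -l -a"))   # -> ('list', ['-l', '-a'])

# Because commands are plain text, a stored sequence of them (a script
# or batch file) automates a repetitive task with no extra machinery.
script = ["list -l", "copy -v", "list -l"]
results = [run_command(line) for line in script]
```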
A command-line system may require paper or online manuals for the user's reference, although often a "help" option provides a concise review of the options of a command. The command-line environment may not provide graphical enhancements such as different fonts or extended edit windows found in a GUI, and it may be difficult for a new user to become familiar with all the commands and options available, compared with the drop-down menus of a graphical user interface, without repeated reference to manuals. Operating system command-line interfaces are usually distinct programs supplied with the operating system. A program that implements such a text interface is called a command-line interpreter, command processor or shell. Examples of command-line interpreters include DEC's DIGITAL Command Language in OpenVMS and RSX-11, the various Unix shells, CP/M's CCP, DOS's COMMAND.COM, as well as the OS/2 and Windows CMD.EXE programs, the latter groups being based on DEC's RSX-11 and RSTS CLIs. Under most operating systems, it is possible to replace the default shell program with alternatives.
Although the term 'shell' is often used to describe a command-line interpreter, strictly speaking a 'shell' can be any program that constitutes the user interface, including graphically oriented ones. For example, the default Windows GUI is a shell program named EXPLORER.EXE, as defined in the SHELL=EXPLORER.EXE line in the WIN.INI configuration file; these programs are shells, but not CLIs. Application programs may also have command-line interfaces. An application program may support none, any, or all of these three major types of command-line interface mechanisms: Parameters: Most operating systems support a means to pass additional information to a program when it is launched; when a program is launched from an OS command-line shell, additional text provided along with the program name is passed to the launched program. Interactive command-line sessions: After launch, a program may provide an operator with an independent means to enter commands in the form of text. OS inter-process communication: Most operating systems support means of inter-process communication.
Command lines from client processes may be redirected to a CLI program by one of these methods. Some applications support only a CLI, presenting a CLI prompt to the user and acting upon command lines as they are entered. Other programs support both a CLI and a GUI. In some cases, a GUI is simply a wrapper around a separate CLI executable file. In other cases, a program may provide a CLI as an optional alternative to its GUI. CLIs and GUIs often support different functionality. For example, all features of MATLAB, a numerical analysis computer program, are available via the CLI, whereas the MATLAB GUI exposes only a subset of features. The early Sierra games, such as the first three King's Quest games, used commands from an internal command line to move the character around in the graphic window. The command-line interface evolved from a form of dialog once conducted by humans over teleprinter machines, in which human operators remotely exchanged inf
Hard disk drive
A hard disk drive (HDD), hard disk, hard drive, or fixed disk is an electromechanical data storage device that uses magnetic storage to store and retrieve digital information using one or more rigid rotating disks (platters) coated with magnetic material. The platters are paired with magnetic heads arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order and not only sequentially. HDDs are a type of non-volatile storage, retaining stored data even when powered off. Introduced by IBM in 1956, HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDDs historically, though after extensive industry consolidation most units are manufactured by Seagate, Toshiba, and Western Digital. HDDs dominate the volume of storage produced for servers.
Though production is growing, sales revenues and unit shipments are declining because solid-state drives (SSDs) have higher data-transfer rates, higher areal storage density, better reliability, and much lower latency and access times. The revenues for SSDs, most of which use NAND flash memory, now exceed those for HDDs. Though SSDs have a nearly 10 times higher cost per bit, they are replacing HDDs in applications where speed, power consumption, small size, and durability are important. The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte drive has a capacity of 1,000 gigabytes. Some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. There is also confusion regarding storage capacity, since capacities are stated in decimal gigabytes by HDD manufacturers, whereas some operating systems report capacities in binary gibibytes, which results in a smaller number than advertised.
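The decimal-versus-binary discrepancy is easy to quantify; a short sketch (the function name is illustrative):

```python
def advertised_tb_to_gib(tb):
    """Convert a capacity advertised in decimal terabytes (10**12 bytes)
    to binary gibibytes (2**30 bytes), as some operating systems report it."""
    return tb * 10**12 / 2**30

# A drive sold as "1 TB" shows up as roughly 931 "GB" in such an OS:
print(round(advertised_tb_to_gib(1), 1))  # -> 931.3
```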
Performance is specified by the time required to move the heads to a track or cylinder (average access time), plus the time it takes for the desired sector to move under the head (average latency), and the speed at which the data is transmitted (data rate). The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as SATA, USB or SAS cables. The first production IBM hard disk drive, the 350 disk storage, shipped in 1957 as a component of the IBM 305 RAMAC system. It was the size of two medium-sized refrigerators and stored five million six-bit characters on a stack of 50 disks. In 1962, the IBM 350 was superseded by the IBM 1301 disk storage unit, which consisted of 50 platters, each about 1/8-inch thick and 24 inches in diameter. While the IBM 350 used only two read/write heads, the 1301 used an array of heads, one per platter, moving as a single unit. Cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches above the platter surface.
Motion of the head array depended upon a binary adder system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three home refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes. Access time was about a quarter of a second. Also in 1962, IBM introduced the model 1311 disk drive, which was about the size of a washing machine and stored two million characters on a removable disk pack. Users could interchange packs as needed, much like reels of magnetic tape. Later models of removable pack drives, from IBM and others, became the norm in most computer installations and reached capacities of 300 megabytes by the early 1980s. Non-removable HDDs were called "fixed disk" drives. Some high-performance HDDs were manufactured with one head per track so that no time was lost physically moving the heads to a track. Known as fixed-head or head-per-track disk drives, they were very expensive and are no longer in production. In 1973, IBM introduced a new type of HDD code-named "Winchester".
Its primary distinguishing feature was that the disk heads were not withdrawn from the stack of disk platters when the drive was powered down. Instead, the heads were allowed to "land" on a special area of the disk surface upon spin-down, "taking off" again when the disk was later powered on. This reduced the cost of the head actuator mechanism, but precluded removing just the disks from the drive as was done with the disk packs of the day. Instead, the first models of "Winchester technology" drives featured a removable disk module, which included both the disk pack and the head assembly, leaving the actuator motor in the drive upon removal. Later "Winchester" drives abandoned the removable media concept and returned to non-removable platters. Like the first removable pack drive, the first "Winchester" drives used platters 14 inches in diameter. A few years later, designers were exploring the possibility that physically smaller platters might offer advantages. Drives with non-removable eight-inch platters appeared, and then drives that used a 5 1⁄4 in form factor.
The latter were intended for the then-fledgling personal computer market.
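The binary-adder actuator scheme described earlier for the IBM 1301's head array can be sketched in code. This is a toy illustration under an assumed model, not IBM's actual engineering: each hydraulic actuator is treated as contributing a stroke of 2^k units when engaged, so every target track corresponds to a unique combination of engaged actuators, which is what makes the positioning repeatable.

```python
def head_position(track: int, n_actuators: int = 8) -> int:
    """Toy model of binary-adder positioning: the head position is the
    sum of binary-weighted actuator strokes selected by the bits of
    the target track number."""
    if not 0 <= track < 2 ** n_actuators:
        raise ValueError("track out of range")
    strokes = [2 ** k for k in range(n_actuators)]            # stroke length of actuator k
    engaged = [(track >> k) & 1 for k in range(n_actuators)]  # binary code of the track
    return sum(s * e for s, e in zip(strokes, engaged))

# Positioning is repeatable: the same track number always engages the
# same actuators and therefore always yields the same head position.
assert all(head_position(t) == t for t in range(256))
```

Because each track maps to exactly one binary code, no feedback measurement is needed to return to a previously used position, which is the property the 1301 design relied on.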
Microsoft Windows is a group of several graphical operating system families, all of which are developed and sold by Microsoft. Each family caters to a certain sector of the computing industry. Active Windows families include Windows NT and Windows Embedded; defunct Windows families include Windows Mobile and Windows Phone. Microsoft introduced an operating environment named Windows on November 20, 1985, as a graphical operating system shell for MS-DOS, in response to the growing interest in graphical user interfaces. Microsoft Windows came to dominate the world's personal computer market with over 90% market share, overtaking Mac OS, which had been introduced in 1984. Apple came to see Windows as an unfair encroachment on its innovation in GUI development as implemented on products such as the Lisa and Macintosh. On PCs, Windows is still the most popular operating system. However, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android, owing to the massive growth in sales of Android smartphones.
In 2014, the number of Windows devices sold was less than 25% of the number of Android devices sold. This comparison may not be relevant, however, as the two operating systems traditionally target different platforms. Still, numbers for server use of Windows show about a one-third market share, similar to that for end-user use. As of October 2018, the most recent version of Windows for PCs, tablets and embedded devices is Windows 10, and the most recent version for server computers is Windows Server 2019. A specialized version of Windows runs on the Xbox One video game console. Microsoft, the developer of Windows, has registered several trademarks, each of which denotes a family of Windows operating systems that target a specific sector of the computing industry. As of 2014, the following Windows families are being developed:

Windows NT: Started as a family of operating systems with Windows NT 3.1, an operating system for server computers and workstations. It now consists of three operating system subfamilies that are released at the same time and share the same kernel:

Windows: The operating system for mainstream personal computers and smartphones.
The latest version is Windows 10. The main competitors of this family are macOS by Apple for personal computers and Android for mobile devices.

Windows Server: The operating system for server computers. The latest version is Windows Server 2019. Unlike its client sibling, it has adopted a strong naming scheme. The main competitor of this family is Linux.

Windows PE: A lightweight version of its Windows sibling, meant to operate as a live operating system, used for installing Windows on bare-metal computers and for recovery or troubleshooting purposes. The latest version is Windows PE 10.

Windows IoT: Initially, Microsoft developed Windows CE as a general-purpose operating system for devices too resource-limited to be called full-fledged computers. Windows CE was later renamed Windows Embedded Compact and folded under the Windows Embedded trademark, which also comprised Windows Embedded Industry, Windows Embedded Professional, Windows Embedded Standard, Windows Embedded Handheld and Windows Embedded Automotive.
The following Windows families are no longer being developed:

Windows 9x: An operating system that targeted the consumer market. It was discontinued because of suboptimal performance; Microsoft now caters to the consumer market with Windows NT.

Windows Mobile: The predecessor to Windows Phone, a mobile phone operating system. The first version was called Pocket PC 2000; the last version is Windows Mobile 6.5.

Windows Phone: An operating system sold only to manufacturers of smartphones. The first version was Windows Phone 7, followed by Windows Phone 8 and the last version, Windows Phone 8.1. It was succeeded by Windows 10 Mobile.

The term Windows collectively describes any or all of several generations of Microsoft operating system products. The history of Windows dates back to 1981, when Microsoft started work on a program called "Interface Manager". It was announced in November 1983 under the name "Windows", but Windows 1.0 was not released until November 1985.
Windows 1.0 achieved little popularity. It is not a complete operating system; its shell is a program known as the MS-DOS Executive. Components included Calculator, Cardfile, Clipboard Viewer, Control Panel, Paint, Reversi and Write. Windows 1.0 does not allow overlapping windows; instead, all windows are tiled, and only modal dialog boxes may appear over other windows. Microsoft sold Windows development libraries with the C development environment, which included numerous Windows samples. Windows 2.0, released in December 1987, was more popular than its predecessor. It features several improvements to the user interface and memory management. Windows 2.03 changed the OS from tiled windows to overlapping windows; this change led Apple Computer to file a suit against Microsoft alleging infringement of Apple's copyrights. Windows 2.0