In computing, a computer keyboard is a typewriter-style device that uses an arrangement of buttons or keys to act as mechanical levers or electronic switches. Following the decline of punch cards and paper tape, interaction via teleprinter-style keyboards became the main input method for computers. Keyboard keys have characters engraved or printed on them, and each press of a key typically corresponds to a single written symbol. However, producing some symbols may require pressing and holding several keys simultaneously or in sequence. While most keyboard keys produce letters, numbers or signs, other keys or simultaneous key presses can produce actions or execute computer commands. In normal usage, the keyboard is used as a text entry interface for typing text and numbers into a word processor, text editor or any other program. In a modern computer, the interpretation of key presses is left to the software: a computer keyboard distinguishes each physical key from every other key and reports all key presses to the controlling software.
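To illustrate how the interpretation of key presses is left to software, the sketch below translates a stream of raw key events into characters and commands. The scancode values, the layout table and the Ctrl+Alt "command" rule are hypothetical examples, not the behavior of any real keyboard controller or driver.

```python
# Minimal sketch: how software might map raw key events to characters.
# Scancodes and the layout table are invented for illustration only.

# A tiny "layout": scancode -> (unshifted, shifted) character.
LAYOUT = {
    0x1E: ("a", "A"),
    0x1F: ("s", "S"),
    0x02: ("1", "!"),
}

MODIFIERS = {0x2A: "shift", 0x1D: "ctrl", 0x38: "alt"}

def interpret(events):
    """Translate a stream of (scancode, pressed) events into text/commands."""
    held = set()                      # currently held modifier names
    output = []
    for scancode, pressed in events:
        if scancode in MODIFIERS:
            name = MODIFIERS[scancode]
            if pressed:
                held.add(name)
            else:
                held.discard(name)
            continue
        if not pressed:
            continue                  # ignore releases of ordinary keys
        if {"ctrl", "alt"} <= held:
            # A modifier combination produces an action rather than a symbol.
            output.append(f"<command:{scancode:#04x}>")
        elif scancode in LAYOUT:
            unshifted, shifted = LAYOUT[scancode]
            output.append(shifted if "shift" in held else unshifted)
    return "".join(output)

# Example: Shift down, 'a' pressed and released, Shift up, 'a' pressed -> "Aa"
print(interpret([(0x2A, True), (0x1E, True), (0x1E, False),
                 (0x2A, False), (0x1E, True)]))
```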
Keyboards are also used for computer gaming, either regular keyboards or keyboards with special gaming features, which can expedite frequently used keystroke combinations. A keyboard is used to give commands to the operating system of a computer, such as Windows' Control-Alt-Delete combination; although on pre-Windows 95 Microsoft operating systems this forced a re-boot, it now brings up a system security options screen. A command-line interface is a type of user interface navigated using a keyboard, or some other similar device that does the job of one. While typewriters are the definitive ancestor of all key-based text entry devices, the computer keyboard as a device for electromechanical data entry and communication derives from the utility of two devices: teleprinters and keypunches; it was through such devices that modern computer keyboards inherited their layouts. As early as the 1870s, teleprinter-like devices were used to type and transmit stock market text data from the keyboard across telegraph lines to stock ticker machines, to be copied and displayed onto ticker tape.
The teleprinter, in its more contemporary form, was developed from 1907 to 1910 by American mechanical engineer Charles Krum and his son Howard, with early contributions by electrical engineer Frank Pearne. Earlier models had been developed separately by individuals such as Royal Earl House and Frederick G. Creed. Still earlier, Herman Hollerith developed the first keypunch devices, which by the 1930s had evolved to include keys for text and number entry akin to normal typewriters. The keyboard on the teleprinter played a strong role in point-to-point and point-to-multipoint communication for most of the 20th century, while the keyboard on the keypunch device played a strong role in data entry and storage for just as long. The earliest computers incorporated electric typewriter keyboards: the ENIAC used a keypunch device as both its input and paper-based output device, while the BINAC used an electromechanically controlled typewriter both for data entry onto magnetic tape and for data output.
The keyboard remained the primary, most integrated computer peripheral well into the era of personal computing, until the introduction of the mouse as a consumer device in 1984. By this time, text-only user interfaces with sparse graphics gave way to comparatively graphics-rich icons on screen. However, keyboards remain central to human-computer interaction to the present day, as mobile personal computing devices such as smartphones and tablets adapt the keyboard as an optional virtual, touchscreen-based means of data entry. One factor determining the size of a keyboard is the presence of duplicate keys, such as a separate numeric keypad or two each of the Shift, Alt and Ctrl keys for convenience. Further, keyboard size depends on the extent to which a system is used where a single action is produced by a combination of subsequent or simultaneous keystrokes, or by multiple presses of a single key. A keyboard with few keys is called a keypad. Another factor determining the size of a keyboard is the spacing of the keys.
Reduction is limited by the practical consideration that the keys must be large enough to be pressed by fingers; alternatively, a tool is used for pressing small keys. Standard alphanumeric keyboards have keys on three-quarter-inch centers and a key travel of at least 0.150 inches. Desktop computer keyboards, such as the 101-key US traditional keyboards or the 104-key Windows keyboards, include alphabetic characters, punctuation symbols, numbers and a variety of function keys. The internationally common 102/104-key keyboards have a smaller left Shift key and an additional key with some more symbols between that and the letter to its right, and the Enter key is shaped differently. Computer keyboards are similar to electric-typewriter keyboards but contain additional keys, such as the Command or Windows keys. There is no standard computer keyboard, though there are three common PC keyboards: the original PC keyboard with 84 keys, the AT keyboard with 84 keys and the enhanced keyboard with 101 keys.
The three differ somewhat in the placement of the function keys, the control keys, the Return key and the Shift keys. Keyboards on laptops and notebook computers have a shorter travel distance for the keystroke, a shorter over-travel distance and a reduced set of keys; they may not have a numeric keypad, and the function keys may be placed in locations that differ from their placement on a standard, full-sized keyboard.
Central processing unit
A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic and input/output operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit, distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. The form and implementation of CPUs have changed over the course of their history, but their fundamental operation remains unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching and execution of instructions by directing the coordinated operations of the ALU, registers and other components.
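The fetch-decode-execute cycle coordinated by the control unit, together with registers and an ALU, can be sketched with a toy machine. The instruction set, encoding and register count below are invented for illustration and do not correspond to any real CPU.

```python
# Toy sketch of a fetch-decode-execute loop (hypothetical instruction set).

def run(program):
    registers = [0, 0, 0, 0]     # processor registers holding operands/results
    pc = 0                       # program counter, part of the control unit's state
    while pc < len(program):
        op, a, b, dst = program[pc]          # fetch and decode one instruction
        pc += 1
        if op == "load":                     # load an immediate value into a register
            registers[dst] = a
        elif op == "add":                    # ALU operation: add two registers
            registers[dst] = registers[a] + registers[b]
        elif op == "jnz":                    # control flow: jump if register a is non-zero
            if registers[a] != 0:
                pc = b
    return registers

# Compute 2 + 3 on the toy machine.
print(run([
    ("load", 2, None, 0),      # r0 = 2
    ("load", 3, None, 1),      # r1 = 3
    ("add", 0, 1, 2),          # r2 = r0 + r1
]))                            # -> [2, 3, 5, 0]
```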
Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces and other components of a computer. Some computers employ a multi-core processor, a single chip containing two or more CPUs called "cores". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". Since the term "CPU" is defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer had been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner.
On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types; the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit.
The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design, using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both.
Most modern CPUs are von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications. Relays and vacuum tubes were commonly used as switching elements, and the overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were common at this time, limited largely by the speed of the switching devices they were built with.
Hard disk drive
A hard disk drive (HDD), also called a hard disk, hard drive, or fixed disk, is an electromechanical data storage device that uses magnetic storage to store and retrieve digital information using one or more rigid rotating disks (platters) coated with magnetic material. The platters are paired with magnetic heads, arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order and not only sequentially. HDDs are a type of non-volatile storage, retaining stored data even when powered off. Introduced by IBM in 1956, HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDDs, though after extensive industry consolidation most units are manufactured by Seagate and Western Digital. HDDs dominate the volume of storage produced for servers.
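A small sketch of block-level random access: software can ask for any block directly rather than reading through all preceding data. The file name and the 4096-byte block size below are assumptions for illustration.

```python
# Sketch of random access: read one fixed-size block at an arbitrary
# block number from a disk image file.
BLOCK_SIZE = 4096  # assumed block size in bytes

def read_block(path, block_number):
    """Return the bytes of the given block, seeking directly to it."""
    with open(path, "rb") as f:
        f.seek(block_number * BLOCK_SIZE)   # jump straight to the block
        return f.read(BLOCK_SIZE)

# Blocks can be read in any order, not just sequentially, e.g.:
# data = read_block("disk.img", 1_000_000)
```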
Though production is growing, sales revenues and unit shipments are declining because solid-state drives (SSDs) have higher data-transfer rates, higher areal storage density, better reliability, and much lower latency and access times. The revenues for SSDs, most of which use NAND flash memory, exceed those for HDDs. Though SSDs have nearly 10 times higher cost per bit, they are replacing HDDs in applications where speed, power consumption, small size and durability are important. The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte drive has a capacity of 1,000 gigabytes. Some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly by inbuilt redundancy for error correction and recovery. There is also confusion regarding storage capacity, since capacities are stated in decimal gigabytes by HDD manufacturers, whereas some operating systems report capacities in binary gibibytes, which results in a smaller number than advertised.
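To make the decimal-versus-binary discrepancy concrete, here is a short worked example assuming a drive advertised as 1 TB (decimal):

```python
# Worked example: a drive sold as 1 TB (decimal) looks smaller in binary units.
advertised_bytes = 1 * 1000**4            # 1 TB as sold: 1,000,000,000,000 bytes

tib = advertised_bytes / 1024**4          # same number of bytes expressed in TiB
gib = advertised_bytes / 1024**3          # ... and in GiB

print(f"{tib:.3f} TiB")                   # ~0.909 TiB
print(f"{gib:.1f} GiB")                   # ~931.3 GiB
```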
Performance is specified by the time required to move the heads to a track or cylinder (average access time), plus the time it takes for the desired sector to move under the head (average latency), and by the speed at which the data is transmitted (data rate). The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as SATA, USB or SAS cables. The first production IBM hard disk drive, the 350 disk storage, shipped in 1957 as a component of the IBM 305 RAMAC system. It was the size of two medium-sized refrigerators and stored five million six-bit characters on a stack of 50 disks. In 1962, the IBM 350 was superseded by the IBM 1301 disk storage unit, which consisted of 50 platters, each about 1/8-inch thick and 24 inches in diameter. While the IBM 350 used only two read/write heads, the 1301 used an array of heads, one per platter, moving as a single unit. Cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches above the platter surface.
Motion of the head array depended upon a binary adder system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three home refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes; access time was about a quarter of a second. Also in 1962, IBM introduced the model 1311 disk drive, which was about the size of a washing machine and stored two million characters on a removable disk pack. Users could interchange packs as needed, much like reels of magnetic tape. Models of removable-pack drives, from IBM and others, became the norm in most computer installations and reached capacities of 300 megabytes by the early 1980s. Non-removable HDDs were called "fixed disk" drives. Some high-performance HDDs were manufactured with one head per track so that no time was lost physically moving the heads to a track; known as fixed-head or head-per-track disk drives, they were expensive and are no longer in production. In 1973, IBM introduced a new type of HDD code-named "Winchester".
Its primary distinguishing feature was that the disk heads were not withdrawn from the stack of disk platters when the drive was powered down. Instead, the heads were allowed to "land" on a special area of the disk surface upon spin-down, "taking off" again when the disk was later powered on. This reduced the cost of the head actuator mechanism, but precluded removing just the disks from the drive as was done with the disk packs of the day. Instead, the first models of "Winchester technology" drives featured a removable disk module, which included both the disk pack and the head assembly, leaving the actuator motor in the drive upon removal. Later "Winchester" drives abandoned the removable-media concept and returned to non-removable platters. Like the first removable-pack drive, the first "Winchester" drives used platters 14 inches in diameter. A few years later, designers were exploring the possibility that physically smaller platters might offer advantages. Drives with non-removable eight-inch platters appeared, and then drives that used a 5 1⁄4 in form factor.
The latter were intended for the then-fledgling personal computer market.
USB is an industry standard that establishes specifications for cables and protocols for connection and power supply between personal computers and their peripheral devices. Released in 1996, the USB standard is maintained by the USB Implementers Forum. There have been three generations of USB specifications: USB 1.x, USB 2.0 and USB 3.x. USB was designed to standardize the connection of peripherals such as keyboards, pointing devices, digital still and video cameras, portable media players, disk drives and network adapters to personal computers, both to communicate with them and to supply electric power. It has largely replaced interfaces such as serial ports and parallel ports, and has become commonplace on a wide range of devices. USB connectors have increasingly been replacing other types as battery chargers of portable devices. The Universal Serial Bus was developed to simplify and improve the interface between personal computers and peripheral devices, compared with existing standard or ad-hoc proprietary interfaces.
From the computer user's perspective, the USB interface improved ease of use in several ways. The USB interface is self-configuring, so the user need not adjust settings on the device and interface for speed or data format, or configure interrupts, input/output addresses, or direct memory access channels. USB connectors are standardized at the host, so any peripheral can use any available receptacle. USB takes full advantage of the additional processing power that can be economically put into peripheral devices so that they can manage themselves. The USB interface is "hot pluggable", meaning devices can be exchanged without rebooting the host computer. Small devices can be powered directly from the USB interface, displacing extra power supply cables. Because use of the USB logos is only permitted after compliance testing, the user can have confidence that a USB device will work as expected without extensive interaction with settings and configuration. Installation of a device relying on the USB standard requires minimal operator action.
When a device is plugged into a port on a running personal computer system, it is either automatically configured using existing device drivers, or the system prompts the user to locate a driver, which is then installed and configured automatically. For hardware manufacturers and software developers, the USB standard eliminates the requirement to develop proprietary interfaces to new peripherals. The wide range of transfer speeds available from a USB interface suits devices ranging from keyboards and mice up to streaming video interfaces. A USB interface can be designed to provide the best available latency for time-critical functions, or can be set up to do background transfers of bulk data with little impact on system resources. The USB interface is generalized, with no signal lines dedicated to only one function of one device. USB cables are limited in length, as the standard was meant to connect to peripherals on the same table-top, not between rooms or between buildings; however, a USB port can be connected to a gateway.
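As a rough illustration of how a host can identify an attached device before binding a driver, the sketch below lists vendor and product IDs from Linux's sysfs. The /sys/bus/usb/devices layout is Linux-specific, and this is only a sketch of device enumeration, not of the USB protocol itself.

```python
# Sketch: enumerate attached USB devices on a Linux host by reading sysfs.
# Drivers are matched against these vendor/product IDs when a device appears.
import glob
import os

def list_usb_devices():
    devices = []
    for dev in glob.glob("/sys/bus/usb/devices/*"):
        vid_path = os.path.join(dev, "idVendor")
        pid_path = os.path.join(dev, "idProduct")
        if os.path.exists(vid_path) and os.path.exists(pid_path):
            with open(vid_path) as v, open(pid_path) as p:
                devices.append((os.path.basename(dev),
                                v.read().strip(), p.read().strip()))
    return devices

for name, vid, pid in list_usb_devices():
    print(f"{name}: vendor {vid}, product {pid}")
```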
USB has a "master-slave" protocol for addressing peripheral devices; some extension to this limitation is possible through USB On-The-Go. A host cannot "broadcast" signals to all peripherals at once; each must be addressed individually. Some high-speed peripheral devices require sustained speeds not available in the USB standard. While converters exist between certain "legacy" interfaces and USB, they may not provide a full implementation of the legacy hardware. For a product developer, use of USB requires the implementation of a complex protocol and implies an "intelligent" controller in the peripheral device. Developers of USB devices intended for public sale must obtain a USB ID, which requires a fee paid to the USB Implementers Forum. Developers of products that use the USB specification must sign an agreement with the Implementers Forum, and use of the USB logos on a product requires annual fees and membership in the organization. A group of seven companies began the development of USB in 1994: Compaq, DEC, IBM, Intel, Microsoft, NEC and Nortel.
The goal was to make it fundamentally easier to connect external devices to PCs by replacing the multitude of connectors at the back of PCs, addressing the usability issues of existing interfaces, and simplifying the software configuration of all devices connected to USB, as well as permitting greater data rates for external devices. Ajay Bhatt and his team worked on the standard at Intel. The original USB 1.0 specification, introduced in January 1996, defined data transfer rates of 1.5 Mbit/s Low Speed and 12 Mbit/s Full Speed. Microsoft Windows 95 OSR 2.1 provided OEM support for the devices. The first widely used version of USB was 1.1, released in September 1998. The 12 Mbit/s data rate was intended for higher-speed devices such as disk drives, and the lower 1.5 Mbit/s rate for low-data-rate devices.
The BeBox is a dual-CPU personal computer sold by Be Inc. to run the company's own operating system, BeOS. Notable aspects of the system include its dual-CPU configuration, its I/O board with the "GeekPort", and the "Blinkenlights" on the front bezel. The BeBox made its debut in October 1995. The processors were upgraded to 133 MHz in August 1996. Production was halted in January 1997, following the port of BeOS to the Macintosh, in order for the company to concentrate on software. Be sold around a thousand 66 MHz BeBoxes and 800 133 MHz BeBoxes. BeBox creator Jean-Louis Gassée did not see the BeBox as a general consumer device, warning that "Before we let you use the BeBox, we believe you must have some aptitude toward programming – the standard language is C++." Initial prototypes were equipped with three AT&T 9308S DSPs. Production models used two PowerPC 603 processors to power the BeBox, initially at 66 MHz and later at 133 MHz. Prototypes with dual 200 MHz CPUs or four CPUs also exist. The I/O hardware includes:

  Four serial ports
  One mouse port, PS/2-type
  Two joystick ports
  Two MIDI out ports
  Two MIDI in ports
  Three infrared I/O ports
  One internal CD audio line-level input
  One internal microphone audio input
  One internal headphone audio output
  Two line-level RCA inputs
  Two line-level outputs
  One microphone input, 3.5 mm stereo phono jack
  One headphone output, 3.5 mm stereo phono jack
  A 16-bit stereo sound system capable of 48 kHz and 44.1 kHz
  One "GeekPort", an experimental-electronic-development oriented port, backed by three fuses on the mainboard:
    Digital and analog I/O and DC power connector, 37-pin connector on the ISA bus
    Two independent, bidirectional 8-bit ports
    Four A/D pins routing to a 12-bit A/D converter
    Four D/A pins connected to an independent 8-bit D/A converter
    Two signal ground reference pins
    Eleven power and ground pins: two at +5 V, one at +12 V, one at -12 V, seven ground pins

Two yellow/green vertical LED arrays, dubbed the "blinkenlights", are built into the front bezel to indicate the CPU load.
The bottommost LED on the right side indicates hard disk activity.
Geode is a series of x86-compatible system-on-a-chip microprocessors and I/O companions produced by AMD, targeted at the embedded computing market. The series was launched by National Semiconductor as the Geode family in 1999; the original Geode processor core itself is derived from the Cyrix MediaGX platform, which was acquired in National's merger with Cyrix in 1997. AMD bought the Geode business from National in August 2003 to augment its existing line of embedded x86 processor products. AMD expanded the Geode series to two classes of processor: the MediaGX-derived Geode GX and LX, and the more modern Athlon-derived Geode NX. Geode processors are optimized for low power consumption and low cost while still remaining compatible with software written for the x86 platform. The MediaGX-derived processors lack modern features such as SSE and a large on-die L1 cache, but these are offered on the more recent Athlon-derived Geode NX. Geode processors integrate some of the functions normally provided by a separate chipset, such as the northbridge.
Whilst the processor family is best suited for thin-client, set-top box and embedded computing applications, it can be found in unusual applications such as the Nao robot and the Win Enterprises IP-PBX. The One Laptop per Child project used the GX series Geode processor in OLPC XO-1 prototypes, but moved to the Geode LX for production. The Linutop is based on the Geode LX, and the 3Com Audrey was powered by a 200 MHz Geode GX1. The SCxxxx range of Geode devices are single-chip versions, comparable to the SiS 552, VIA CoreFusion or Intel's Tolapai, which integrate the CPU, memory controller, graphics and I/O devices into one package. Single-processor boards based on these processors are manufactured by Artec Group, PC Engines and Win Enterprises.

Cyrix MediaGXm clone. Returns "CyrixInstead" on CPUID.
  MediaGX-derived core
  0.35 µm, four-layer metal CMOS
  MMX instructions
  3.3 V I/O, 2.9 V core
  12 KB direct-mapped write-through unified L1 cache, 4 KB I/O scratchpad SRAM
  25–33 MHz 32-bit 486 bus
  PCI controller
  32-bit EDO DRAM memory interface
  CS5530 companion chip
  VSA architecture
  1280×1024×8 or 1024×768×16 display

  MediaGX-derived core
  0.25 µm, four-layer metal CMOS
  3.3 V I/O; 2.2, 2.5 or 2.9 V core
  12 KB direct-mapped write-through unified L1 cache, 4 KB I/O scratchpad SRAM
  Fully static design
  1.0 W at 2.2 V/166 MHz, 2.5 W at 2.9 V/266 MHz

  MediaGX-derived core
  0.18 µm CMOS
  200–333 MHz
  1.6–2.2 V core
  16 KB four-way set-associative write-back unified L1 cache, 2 or 4 KB of which can be reserved as I/O scratchpad RAM for use by the integrated graphics core
  33 MHz PCI bus interconnect with CPU bus
  Integrated northbridge and memory controller
  0.8–1.2 W typical
  16–64-bit SDRAM memory, 111 MHz
  CS5530A companion chip
  60 Hz VGA refresh rate

The National Semiconductor/AMD SC1100 is based on the Cyrix GX1 core and the CS5530 support chip.
This processor was announced by National Semiconductor Corporation in October 2001 at the Microprocessor Forum and first demonstrated at COMPUTEX Taiwan in June 2002. Its features include:
  0.15 µm process technology
  MMX and 3DNow! instructions
  16 KB instruction and 16 KB data caches
  GeodeLink architecture, 6 GB/s on-chip bandwidth, up to 2 GB/s memory bandwidth
  Integrated 64-bit PC133 SDRAM and DDR266 controller
  Clock rate: 266, 333 or 400 MHz
  33 MHz PCI bus interconnect with CPU bus, 3 PCI masters supported
  1600×1200 24-bit display with video scaling, CRT DACs and a UMA DSTN/TFT controller
  Geode CS5535 or CS5536 companion chip

It was developed by National in Tel Aviv, based on IP from Longmont and other sources. Applications: the SC3200 was used in the Tatung TWN-5213 CU. In 2002, AMD introduced the Geode GX series, a re-branding of the National Semiconductor GX2; this was followed by the Geode LX, running at up to 667 MHz. The LX brought many improvements, such as higher-speed DDR, a re-designed instruction pipe and a more powerful display controller. The upgrade from the CS5535 I/O companion to the CS5536 brought higher-speed USB.
Geode GX and LX processors are found in devices such as thin clients and industrial control systems. However, they have come under competitive pressure from VIA on the x86 side and from ARM processors from various vendors taking much of the low-end business. Because of the relatively low performance, albeit higher performance per watt, of the GX and LX core design, AMD introduced the Geode NX, an embedded version of the Athlon (K7) processor. The Geode NX is quite similar to the Athlon XP-M processors that use this core. The Geode NX includes 256 KB of level 2 cache and runs fanless at up to 1 GHz in the NX1500@6W version. The NX2001 part runs at 1.8 GHz, the NX1750 part at 1.4 GHz and the NX1250 at 667 MHz. The Geode NX, with its strong FPU, is suited for embedded devices with graphical performance requirements, such as information kiosks and casino gaming machines, such as video slots. However, it was reported that the design team for Geode processors in Longmont, Colorado, has been closed, and 75 employees were relocated to a new development facility in Fort Collins, Colorado.
It is expected that the Geode line of processors will be updated less frequently due to the closure of the Geode design center. In 2009, comments by AMD indicated that there were no plans for any future microarchitecture upgrades to the processor and that there would be no successor. In 2016, AMD updated the product roadmap, announcing an extension of last-time-buy and shipment dates for the LX series to 2019. In early 2018, hardware manufacturer congatec announced an agreement with AMD for a further extension of the availability of congatec's Geode-based platforms. Features include low power consumption.
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android accounted for up to 70% of use in 2017; according to third-quarter 2016 data, Android on smartphones was dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a yearly decrease in market share of 5.2 percent, while other operating systems amounted to just 0.3 percent.
Linux distributions are dominant in supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux, as well as non-Unix-like ones, such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner; 16-bit versions of Microsoft Windows used cooperative multi-tasking.
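A minimal sketch of the cooperative model just described, using Python generators as stand-in tasks: each task runs until it voluntarily yields control back to a simple round-robin scheduler. The task names and step counts are invented for illustration; a real OS scheduler is far more involved.

```python
# Cooperative multitasking sketch: each task must yield control voluntarily;
# a task that never yields would monopolize the (single) CPU.
from collections import deque

def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # cooperative yield back to the scheduler

def run(tasks):
    ready = deque(tasks)           # simple round-robin ready queue
    while ready:
        current = ready.popleft()
        try:
            next(current)          # let the task run until its next yield
            ready.append(current)  # re-queue it behind the others
        except StopIteration:
            pass                   # task finished

run([task("A", 2), task("B", 3)])  # interleaves: A0, B0, A1, B1, B2
```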
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines, like PDAs, with less autonomy. They are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and Minix 3 are examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks.