A Linux distribution is an operating system made from a software collection, based upon the Linux kernel and a package management system. Linux users obtain their operating system by downloading one of the Linux distributions, which are available for a wide variety of systems ranging from embedded devices and personal computers to powerful supercomputers. A typical Linux distribution comprises a Linux kernel, GNU tools and libraries, additional software, documentation, a window system, a window manager and a desktop environment. Most of the included software is free and open-source software made available both as compiled binaries and in source code form, allowing modifications to the original software. Linux distributions may optionally include some proprietary software that is not available in source code form, such as binary blobs required for some device drivers. A Linux distribution may be described as a particular assortment of application and utility software, packaged together with the Linux kernel in such a way that its capabilities meet the needs of many users.
The software is adapted to the distribution and packaged into software packages by the distribution's maintainers. The software packages are available online in so-called repositories, which are storage locations distributed around the world. Beside glue components, such as the distribution installers or the package management systems, only a few packages are written from the ground up by the maintainers of a Linux distribution. Around six hundred Linux distributions exist, with close to five hundred of those in active development. Because of the huge availability of software, distributions have taken a wide variety of forms, including those suitable for use on desktops, laptops, mobile phones and tablets, as well as minimal environments for use in embedded systems. There are commercially backed distributions, such as Fedora, openSUSE and Ubuntu, and community-driven distributions, such as Debian, Slackware and Arch Linux. Most distributions come ready to use and pre-compiled for a specific instruction set, while some distributions are distributed in source code form and compiled locally during installation.
Linus Torvalds developed the Linux kernel and distributed its first version, 0.01, in 1991. Linux was initially distributed as source code only, and later as a pair of downloadable floppy disk images: one bootable and containing the Linux kernel itself, the other with a set of GNU utilities and tools for setting up a file system. Since the installation procedure was complicated and the amount of available software kept growing, distributions sprang up to simplify this. Early distributions included H. J. Lu's "Boot-root", the aforementioned disk image pair with the kernel and the absolute minimal tools to get started, in late 1991; MCC Interim Linux, made available to the public for download in February 1992; Softlanding Linux System (SLS), released in 1992, which was for a short time the most comprehensive distribution, including the X Window System; and Yggdrasil Linux/GNU/X, a commercial distribution first released in December 1992. The two oldest and still active distribution projects started in 1993: the SLS distribution was not well maintained, so in July 1993 a new distribution, called Slackware and based on SLS, was released by Patrick Volkerding.
Dissatisfied with SLS, Ian Murdock set out to create a free distribution by founding Debian, which had its first release in December 1993. Users were attracted to Linux distributions as alternatives to the DOS and Microsoft Windows operating systems on IBM PC compatible computers, Mac OS on the Apple Macintosh, and proprietary versions of Unix. Most early adopters were familiar with Unix from school, and they embraced Linux distributions for their low cost and for the availability of the source code for most or all of the included software. The distributions were originally a convenience, offering a free alternative to proprietary versions of Unix, but they became the usual choice even for Unix and Linux experts. To date, Linux has become more popular in the server and embedded devices markets than in the desktop market; for example, Linux is used on over 50% of web servers, whereas its desktop market share is about 3.7%. Many Linux distributions provide an installation system akin to that provided with other modern operating systems. On the other hand, some distributions, including Gentoo Linux, provide only the binaries of a basic kernel, compilation tools and an installer, with the rest of the system compiled locally.
Distributions are segmented into packages, each of which contains a specific application or service. Examples of packages are a library for handling the PNG image format, a collection of fonts or a web browser. The package is typically provided as compiled code, with installation and removal of packages handled by a package management system rather than a simple file archiver. Each package intended for such a package management system contains meta-information such as a package description and its "dependencies"; the package management system can evaluate this meta-information to allow package searches, to perform an automatic upgrade to a newer version, to check that all dependencies of a package are fulfilled, and/or to fulfill them automatically.
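As a rough illustration of the dependency checking described above, the following minimal Python sketch walks a package's declared dependencies and reports which ones are not yet installed. The package names and the simplified metadata model are hypothetical; real package managers such as APT also track versions, conflicts and virtual packages.

    # Hypothetical, simplified package meta-information: each entry lists
    # only the names of the packages it depends on.
    metadata = {
        "web-browser": {"depends": ["libpng", "fonts-base"]},
        "libpng": {"depends": []},
        "fonts-base": {"depends": []},
    }

    def missing_dependencies(package, installed, seen=None):
        """Return the (transitive) dependencies of `package` that are not installed."""
        if seen is None:
            seen = set()
        missing = set()
        for dep in metadata.get(package, {}).get("depends", []):
            if dep in seen:
                continue  # already examined; avoids loops in circular dependencies
            seen.add(dep)
            if dep not in installed:
                missing.add(dep)
            missing |= missing_dependencies(dep, installed, seen)  # check transitive deps
        return missing

    # With libpng already present, only the font package still needs installing.
    print(missing_dependencies("web-browser", installed={"libpng"}))  # {'fonts-base'}

A real package manager would then resolve the missing set, download the packages from a repository and install them in dependency order.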
Ubuntu is a free and open-source Linux distribution based on Debian. Ubuntu is released in three editions: Desktop, Server and Core. Ubuntu is a popular operating system for cloud computing, with support for OpenStack. Ubuntu is released every six months, with long-term support releases every two years; the latest release is 18.10, and the most recent long-term support release is 18.04 LTS, supported until 2028. Ubuntu is developed by the community under a meritocratic governance model. Canonical provides security updates and support for each Ubuntu release, starting from the release date and until the release reaches its designated end-of-life date. Canonical generates revenue through the sale of premium services related to Ubuntu. Ubuntu is named after the African philosophy of ubuntu, which Canonical translates as "humanity to others" or "I am what I am because of who we all are". Ubuntu is built on Debian's architecture and infrastructure, and comprises Linux desktop, server and discontinued phone and tablet operating system versions.
Ubuntu releases updated versions predictably every six months, and each release receives free support for nine months with security fixes, high-impact bug fixes and conservative, low-risk beneficial bug fixes. The first release was in October 2004. Current long-term support (LTS) releases are supported for five years and are released every two years. LTS releases get regular point releases with support for new hardware and integration of all the updates published in that series to date. Ubuntu packages are based on packages from Debian's unstable branch, and both distributions use Debian's package management tools. Debian and Ubuntu packages are not necessarily binary compatible with each other, however, so packages may need to be rebuilt from source to be used in Ubuntu. Many Ubuntu developers are also maintainers of key packages within Debian. Ubuntu cooperates with Debian by pushing changes back to Debian, although there has been criticism that this does not happen often enough. Ian Murdock, the founder of Debian, had expressed concern about Ubuntu packages diverging too far from Debian to remain compatible.
Before release, packages are imported from Debian unstable continuously and merged with Ubuntu-specific modifications. One month before release, imports are frozen, and packagers then work to ensure that the frozen features interoperate well together. Ubuntu is funded by Canonical Ltd. On 8 July 2005, Mark Shuttleworth and Canonical announced the creation of the Ubuntu Foundation and provided an initial funding of US$10 million; the purpose of the foundation, as described by Shuttleworth, is to ensure the support and development of all future versions of Ubuntu. On 12 March 2009, Ubuntu announced developer support for third-party cloud management platforms, such as those used at Amazon EC2. GNOME 3 has been the default GUI for Ubuntu Desktop since Ubuntu 17.10, while Unity is still the default in older versions, including all current LTS versions except 18.04 LTS. However, a community-driven fork of Unity 8, called Yunit, has been created to continue the development of Unity. Shuttleworth wrote on 8 April 2017, "We will invest in Ubuntu GNOME with the intent of delivering a fantastic all-GNOME desktop.
We're helping the Ubuntu GNOME team, not creating something different or competitive with that effort. While I am passionate about the design ideas in Unity, and hope GNOME may be more open to them now, I think we should respect the GNOME design leadership by delivering GNOME the way GNOME wants it delivered. Our role in that, as usual, will be to make sure that upgrades, security and the full experience are fantastic." Shuttleworth also mentioned that Canonical would cease development of Ubuntu Phone and convergence. 32-bit i386 processors have been supported up to Ubuntu 18.04, but users "will not be allowed to upgrade to Ubuntu 18.10 as dropping support for that architecture is being evaluated". A default installation of Ubuntu contains a wide range of software that includes LibreOffice, Thunderbird and several lightweight games such as Sudoku and chess. Many additional software packages are accessible from the built-in Ubuntu Software, as well as from any other APT-based package management tool. Packages that are no longer installed by default, such as Evolution, GIMP and Synaptic, remain accessible in the repositories and installable by Ubuntu Software or any other APT-based package management tool.
Cross-distribution snap packages and flatpaks are also available; both allow installing software, such as some of Microsoft's software, on most of the major Linux operating systems. The default file manager is GNOME Files, formerly called Nautilus. Ubuntu operates under the GNU General Public License, and all of the application software installed by default is free software. In addition, Ubuntu installs some hardware drivers that are available only in binary format, but such packages are marked in the restricted component. Ubuntu aims to be secure by default: user programs run with low privileges and cannot corrupt the operating system or other users' files. For increased security, the sudo tool is used to assign temporary privileges for performing administrative tasks, which allows the root account to remain locked and helps prevent inexperienced users from inadvertently making catastrophic system changes or opening security holes.
GNU General Public License
The GNU General Public License (GNU GPL or GPL) is a widely used free software license, which guarantees end users the freedom to run, study and modify the software. The license was written by Richard Stallman of the Free Software Foundation for the GNU Project, and grants the recipients of a computer program the rights of the Free Software Definition. The GPL is a copyleft license, which means that derivative work can only be distributed under the same license terms. This is in distinction to permissive free software licenses, of which the BSD licenses and the MIT License are widely used examples. The GPL was the first copyleft license for general use, and the GPL license family has been one of the most popular software licenses in the free and open-source software domain. Prominent free-software programs licensed under the GPL include the Linux kernel and the GNU Compiler Collection. David A. Wheeler argues that the copyleft provided by the GPL was crucial to the success of Linux-based systems, giving the programmers who contributed to the kernel the assurance that their work would benefit the whole world and remain free, rather than being exploited by software companies that would not have to give anything back to the community.
In 2007, the third version of the license (GPLv3) was released to address some perceived problems with the second version (GPLv2) that were discovered during its long-time usage. To keep the license up to date, the GPL includes an optional "any later version" clause, allowing users to choose between the original terms or the terms in new versions as updated by the FSF; developers can omit the clause when licensing their software. The GPL was written by Richard Stallman in 1989, for use with programs released as part of the GNU Project. The original GPL was based on a unification of similar licenses used for early versions of GNU Emacs, the GNU Debugger and the GNU C Compiler; these licenses contained similar provisions to the modern GPL, but were specific to each program, rendering them incompatible, despite being the same license. Stallman's goal was to produce one license that could be used for any project, thus making it possible for many projects to share code. The second version of the license, version 2, was released in 1991. Over the following 15 years, members of the free software community became concerned over problems in the GPLv2 license that could let someone exploit GPL-licensed software in ways contrary to the license's intent.
These problems included tivoization, compatibility issues similar to those of the Affero General Public License, and patent deals between Microsoft and distributors of free and open-source software, which some viewed as an attempt to use patents as a weapon against the free software community. Version 3 was developed to attempt to address these concerns and was released on 29 June 2007. Version 1 of the GNU GPL, released on 25 February 1989, prevented what were then the two main ways that software distributors restricted the freedoms that define free software. The first problem was that distributors might publish binary files only: executable, but not readable or modifiable by humans. To prevent this, GPLv1 stated that anyone copying and distributing copies of the program, or any portion of it, must also make the human-readable source code available under the same licensing terms. The second problem was that distributors might add restrictions, either to the license, or by combining the software with other software that had other restrictions on distribution.
The union of two sets of restrictions would apply to the combined work, thus adding unacceptable restrictions. To prevent this, GPLv1 stated that modified versions, as a whole, had to be distributed under the terms in GPLv1. Therefore, software distributed under the terms of GPLv1 could be combined with software under more permissive terms, as this would not change the terms under which the whole could be distributed. However, software distributed under GPLv1 could not be combined with software distributed under a more restrictive license, as this would conflict with the requirement that the whole be distributable under the terms of GPLv1. According to Richard Stallman, the major change in GPLv2 was the "Liberty or Death" clause, as he calls it – Section 7; the section says that licensees may distribute a GPL-covered work only if they can satisfy all of the license's obligations, despite any other legal obligations they might have. In other words, the obligations of the license may not be severed due to conflicting obligations.
This provision is intended to discourage any party from using a patent infringement claim or other litigation to impair users' freedom under the license. By 1990, it was becoming apparent that a less restrictive license would be strategically useful for the C library and for software libraries that did the job of existing proprietary ones; when version 2 of the GPL was released in 1991, a second license, the GNU Library General Public License, was introduced at the same time and numbered with version 2 to show that the two were complementary. The version numbers diverged in 1999 when version 2.1 of the LGPL was released, which renamed it the GNU Lesser General Public License to reflect its place in the GNU philosophy. Many users of the license state "GPLv2 or any later version" to allow upgrading to GPLv3. In late 2005, the Free Software Foundation announced work on version 3 of the GPL. On 16 January 2006, the first "discussion draft" of GPLv3 was published, and the public consultation began; the public consultation was originally planned for nine to fifteen months.
A Unix shell is a command-line interpreter or shell that provides a command-line user interface for Unix-like operating systems. The shell is both an interactive command language and a scripting language, and is used by the operating system to control the execution of the system using shell scripts. Users typically interact with a Unix shell using a terminal emulator. All Unix shells provide filename wildcarding, piping, here documents, command substitution, variables and control structures for condition-testing and iteration. In its most generic sense, the term shell means any program that users employ to type commands. A shell hides the details of the underlying operating system and manages the technical details of the operating system kernel interface, the lowest-level, or "inner-most", component of most operating systems. In Unix-like operating systems, users have many choices of command-line interpreters for interactive sessions; when a user logs into the system interactively, a shell program is automatically executed for the duration of the session. The type of shell, which may be customized for each user, is stored in the user's profile, for example in the local passwd file or in a distributed configuration system such as NIS or LDAP.
On hosts with a windowing system, like macOS, some users may never use the shell directly. On Unix systems, the shell has historically been the implementation language of system startup scripts, including the programs that start a windowing system, configure networking, and many other essential functions. However, some system vendors have replaced the traditional shell-based startup system with different approaches, such as systemd. The first Unix shell was the Thompson shell, sh, written by Ken Thompson at Bell Labs and distributed with Versions 1 through 6 of Unix, from 1971 to 1975. Though rudimentary by modern standards, it introduced many of the basic features common to all Unix shells, including piping, simple control structures using if and goto, and filename wildcarding. Though not in current use, it is still available as part of some Ancient UNIX systems. It was modeled after the Multics shell, developed in 1965 by American software engineer Glenda Schroeder. Schroeder's Multics shell was itself modeled after the RUNCOM program Louis Pouzin showed to the Multics team.
The "rc" suffix on some Unix configuration files, is a remnant of the RUNCOM ancestry of Unix shells. The PWB shell or Mashey shell, sh, was an upward-compatible version of the Thompson shell, augmented by John Mashey and others and distributed with the Programmer's Workbench UNIX, circa 1975-1977, it focused on making shell programming practical in large shared computing centers. It added shell variables, user-executable shell scripts, interrupt-handling. Control structures were extended from if/goto to if/then/else/endif, switch/breaksw/endsw, while/end/break/continue; as shell programming became widespread, these external commands were incorporated into the shell itself for performance. But the most distributed and influential of the early Unix shells were the Bourne shell and the C shell. Both shells have been used as the coding base and model for many derivative and work-alike shells with extended feature sets; the Bourne shell, sh, was a new Unix shell by Stephen Bourne at Bell Labs. Distributed as the shell for UNIX Version 7 in 1979, it introduced the rest of the basic features considered common to all the Unix shells, including here documents, command substitution, more generic variables and more extensive builtin control structures.
The language, including the use of a reversed keyword to mark the end of a block, was influenced by ALGOL 68. Traditionally, the Bourne shell program name is sh and its path in the Unix file system hierarchy is /bin/sh, but a number of compatible work-alikes are available with various improvements and additional features. On many systems, sh may be a symbolic link or hard link to one of these alternatives:
Almquist shell (ash): written as a BSD-licensed replacement for the Bourne shell; the sh of FreeBSD and NetBSD are based on ash, enhanced to be POSIX conformant for the occasion.
Bourne-Again shell (bash): written as part of the GNU Project to provide a superset of Bourne shell functionality; this shell can be found installed and is the default interactive shell for users on most Linux and macOS systems.
Debian Almquist shell (dash): a modern replacement for ash in Debian and Ubuntu.
Korn shell (ksh): written by David Korn based on the Bourne shell sources while working at Bell Labs.
Public domain Korn shell (pdksh).
MirBSD Korn shell (mksh): a descendant of the OpenBSD /bin/ksh and pdksh, developed as part of MirOS BSD.
Z shell (zsh): a modern shell, largely backward compatible with bash.
BusyBox: a set of Unix utilities for small and embedded systems, which includes two shells: ash, a derivative of the Almquist shell, and hush, an independent implementation of a Bourne shell.
The POSIX standard specifies its standard shell as a strict subset of the Korn shell, an enhanced version of the Bourne shell. From a user's perspective, the Bourne shell was recognized when active by its characteristic default command-line prompt character, the dollar sign. The C shell, csh, was modeled on the C programming language, including the control structures and the expression grammar. It was written by Bill Joy as a graduate student at the University of California, Berkeley, and was widely distributed with BSD Unix.
In computing, a desktop environment is an implementation of the desktop metaphor made of a bundle of programs running on top of a computer operating system, which share a common graphical user interface (GUI), sometimes described as a graphical shell. The desktop environment was seen mostly on personal computers until the rise of mobile computing. Desktop GUIs help the user to access and edit files, while they usually do not provide access to all of the features found in the underlying operating system. Instead, the traditional command-line interface is still used when full control over the operating system is required. A desktop environment typically consists of icons, toolbars, folders and desktop widgets. A GUI might also provide drag-and-drop functionality and other features that make the desktop metaphor more complete. A desktop environment aims to be an intuitive way for the user to interact with the computer using concepts which are similar to those used when interacting with the physical world, such as buttons and windows.
While the term desktop environment originally described a style of user interfaces following the desktop metaphor, it has come to describe the programs that realize the metaphor itself. This usage has been popularized by projects such as the Common Desktop Environment, K Desktop Environment and GNOME. On a system that offers a desktop environment, a window manager in conjunction with applications written using a widget toolkit are responsible for most of what the user sees. The window manager supports the user interactions with the environment, while the toolkit provides developers a software library for applications with a unified look and behavior. A windowing system of some sort interfaces directly with the underlying operating system and libraries; this provides support for graphical hardware, pointing devices and keyboards. The window manager runs on top of this windowing system. While the windowing system may provide some window management functionality, this functionality is still considered to be part of the window manager, which simply happens to have been provided by the windowing system.
Applications that are created with a particular window manager in mind usually make use of a windowing toolkit provided with the operating system or window manager. A windowing toolkit gives applications access to widgets that allow the user to interact graphically with the application in a consistent way. The first desktop environment was sold with the Xerox Alto in the 1970s; the Alto was considered by Xerox to be a personal office computer. With the Lisa, Apple introduced a desktop environment on an affordable personal computer, which nevertheless failed in the market. The desktop metaphor was popularized on commercial personal computers by the original Macintosh from Apple in 1984, and was popularized further by Windows from Microsoft since the 1990s. As of 2014, the most popular desktop environments are descendants of these earlier environments, including the Aero environment used in Windows Vista and Windows 7 and the Aqua environment used in macOS. When compared with the X-based desktop environments available for Unix-like operating systems such as Linux and FreeBSD, the proprietary desktop environments included with Windows and macOS have relatively fixed layouts and static features, with integrated "seamless" designs that aim to provide consistent customer experiences across installations.
Microsoft Windows dominates in market share among personal computers with a desktop environment. Computers using Unix-like operating systems such as macOS, Chrome OS, Linux, BSD or Solaris are much less common; among the more popular of these are Google's Chromebooks and Chromeboxes, Intel's NUC, the Raspberry Pi and similar devices. On tablets and smartphones the situation is the opposite, with Unix-like operating systems dominating the market, including iOS, Tizen and Ubuntu. Microsoft's Windows Phone, Windows RT and Windows 10 are used on a much smaller number of tablets and smartphones. However, the majority of Unix-like operating systems dominant on handheld devices do not use the X11 desktop environments used by other Unix-like operating systems, relying instead on interfaces based on other technologies. On systems running the X Window System, desktop environments are much more dynamic and customizable to meet user needs. In this context, a desktop environment typically consists of several separate components, including a window manager, a file manager and a set of graphical themes, together with toolkits and libraries for managing the desktop.
All these individual modules can be exchanged and independently configured to suit users, but most desktop environments provide a default configuration that works with minimal user setup. Some window managers, such as IceWM, Openbox, ROX Desktop and Window Maker, contain sparse desktop environment elements, such as an integrated spatial file manager, while others like evilwm and wmii do not provide such elements. Not all of the program code that is part of a desktop environment has effects which are directly visible to the user; some of it may be low-level code. KDE, for example, provides so-called KIO slaves which give the user access to a wide range of virtual devices; these I/O slaves are not available to applications outside of KDE.
In computer graphics, alpha compositing is the process of combining an image with a background to create the appearance of partial or full transparency. It is often useful to render image elements in separate passes and then combine the resulting multiple 2D images into a single, final image called the composite. For example, compositing is used extensively when combining computer-rendered image elements with live footage. In order to combine these image elements correctly, it is necessary to keep an associated matte for each element; this matte contains the coverage information (the shape of the geometry being drawn), making it possible to distinguish between parts of the image where the geometry was actually drawn and other parts of the image that are empty. To store matte information, the concept of an alpha channel was introduced by Alvy Ray Smith in the late 1970s and further developed in a 1984 paper by Thomas Porter and Tom Duff. In a 2D image element which stores a color for each pixel, additional data is stored in the alpha channel with a value between 0 and 1.
A value of 0 means that the pixel is fully transparent because the geometry did not cover the pixel, while a value of 1 means that the pixel is fully opaque because the geometry completely overlapped the pixel. If an alpha channel is used in an image, there are two common representations: straight (unassociated) alpha and premultiplied (associated) alpha. With straight alpha, the RGB components represent the color of the object or pixel, disregarding its opacity. With premultiplied alpha, the RGB components represent the emission of the object or pixel, and the alpha represents the occlusion. A more obvious advantage of this is that, in certain situations, it can save a subsequent multiplication. However, the most significant advantages of using premultiplied alpha are for correctness and simplicity rather than performance: premultiplied alpha allows correct filtering and blending. In addition, premultiplied alpha allows regions of regular alpha blending and regions with additive blending mode to be encoded within the same image, because the channel values are stored in a fixed-point format which bounds the values to be between 0 and 1.
Assuming that the pixel color is expressed using straight (non-premultiplied) RGBA tuples, a pixel value of (0, 0.7, 0, 0.5) implies a pixel that has 70% of the maximum green intensity and 50% opacity. If the color were fully green, its RGBA would be (0, 1, 0, 0.5). However, if this pixel uses premultiplied alpha, all of the RGB values are scaled for occlusion by the alpha of 0.5, and the alpha is appended to the end, to yield (0, 0.35, 0, 0.5). In this case, the 0.35 value for the G channel indicates 70% green emission intensity combined with 50% occlusion; a pure (100%) green emission at the same 50% occlusion would have a G value of 0.5. For this reason, knowing whether a file uses straight or premultiplied alpha is essential in order to process or composite it correctly, as a different formula is used for each. With premultiplied alpha it is also entirely acceptable to have an RGBA value express emission with no occlusion, that is, nonzero color channels together with an alpha of 0; fires and flames, glows and other such phenomena can only be represented using associated / premultiplied alpha. The only important difference is in the dynamic range of the colour representation in finite-precision numerical calculations: premultiplied alpha has a unique representation for transparent pixels, avoiding the need to choose a "clear color" or resultant artefacts such as edge fringes.
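To make the two representations concrete, here is a minimal Python sketch converting a single pixel between straight and premultiplied alpha; the function names are illustrative and the channels are assumed to be normalized floats between 0 and 1.

    def straight_to_premultiplied(r, g, b, a):
        """Scale the color channels by alpha, so RGB stores emission and alpha stores occlusion."""
        return (r * a, g * a, b * a, a)

    def premultiplied_to_straight(r, g, b, a):
        """Undo the multiplication; a fully transparent pixel has no recoverable straight color."""
        if a == 0:
            return (0.0, 0.0, 0.0, 0.0)
        return (r / a, g / a, b / a, a)

    # 70% green at 50% opacity, expressed in straight alpha:
    print(straight_to_premultiplied(0.0, 0.7, 0.0, 0.5))  # (0.0, 0.35, 0.0, 0.5)

Converting back divides by alpha, which is exactly why a premultiplied pixel with an alpha of 0 has no meaningful straight-alpha equivalent.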
In an associated / premultiplied alpha image, the RGB channels represent the amount of emission, while the alpha represents occlusion. Premultiplied alpha has some practical advantages over normal (straight) alpha blending because interpolation and filtering give correct results. Ordinary interpolation without premultiplied alpha leads to RGB information leaking out of transparent regions, even though this RGB information is ideally invisible; when interpolating or filtering images with abrupt borders between transparent and opaque regions, this can result in borders of colors that were not visible in the original image. Errors also occur in areas of semitransparency because the RGB components are not weighted by alpha, giving incorrectly high weighting to the color of the more transparent pixels. Premultiplication can reduce the available relative precision in the RGB values when using an integer or fixed-point representation for the color components, which may cause a noticeable loss of quality if the color information is later brightened or if the alpha channel is removed.
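As a rough numerical illustration of the leakage described above, consider naively averaging an opaque green pixel with a fully transparent neighbour whose hidden color happens to be red; the sketch below assumes normalized float channels and is not tied to any particular library.

    def lerp(p, q, t):
        """Naive channel-wise linear interpolation between two RGBA tuples."""
        return tuple((1 - t) * x + t * y for x, y in zip(p, q))

    opaque_green = (0.0, 1.0, 0.0, 1.0)
    transparent_straight = (1.0, 0.0, 0.0, 0.0)   # straight alpha: the hidden red is still stored
    transparent_premult = (0.0, 0.0, 0.0, 0.0)    # premultiplied: RGB already scaled by alpha

    # Straight alpha: the invisible red leaks in, producing a yellowish fringe color.
    print(lerp(opaque_green, transparent_straight, 0.5))  # (0.5, 0.5, 0.0, 0.5)
    # Premultiplied alpha: the result is simply half-covered green.
    print(lerp(opaque_green, transparent_premult, 0.5))   # (0.0, 0.5, 0.0, 0.5)

With straight alpha, correct interpolation would have to weight each color by its alpha first, which is effectively what premultiplication does once, up front.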
In practice, this is not noticeable, because during typical composition operations, such as OVER, the influence of the low-precision colour information in low-alpha areas on the final output image is correspondingly reduced. This loss of precision also makes premultiplied images easier to compress using certain compression schemes, as they do not record the color variations hidden inside transparent regions and can allocate fewer bits to encode low-alpha areas. With the existence of an alpha channel, it is possible to express compositing image operations using a compositing algebra. For example, given two image elements A and B, the most common compositing operation is to combine the images such that A appears in the foreground and B appears in the background; this can be expressed as A over B. In addition to over, Porter and Duff defined the compositing operators in, held out by, and xor, from a consideration of the choices in blending the colors of two pixels when their coverage overlaps.
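For premultiplied pixels, A over B has a simple closed form, out = A + B * (1 - alpha_A), applied to every channel. The following Python sketch shows that formula; the function and variable names are illustrative, not a specific library's API.

    def over_premultiplied(a, b):
        """Composite premultiplied RGBA pixel a over premultiplied RGBA pixel b."""
        ar, ag, ab_, aa = a
        br, bg, bb, ba = b
        k = 1.0 - aa  # fraction of B that still shows through A
        return (ar + br * k, ag + bg * k, ab_ + bb * k, aa + ba * k)

    # Half-transparent green over opaque red (both premultiplied):
    print(over_premultiplied((0.0, 0.5, 0.0, 0.5), (1.0, 0.0, 0.0, 1.0)))
    # (0.5, 0.5, 0.0, 1.0)

The other Porter-Duff operators differ only in which fractions of A and B are kept, so they can be written in the same style.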
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%. macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android had a usage share of up to 70% in 2017; according to third quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a decrease in market share of 5.2 percent per year, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to be running concurrently; this is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized as preemptive or co-operative. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux, as well as non-Unix-like systems, such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner; 16-bit versions of Microsoft Windows used cooperative multi-tasking.
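As a toy, user-space analogy of cooperative multitasking (not how an operating system kernel actually implements it), the following Python sketch runs generator-based "tasks" that voluntarily hand control back to a simple round-robin scheduler; a task that never yielded would starve all the others, which is exactly the weakness of the cooperative model.

    from collections import deque

    def task(name, steps):
        for i in range(steps):
            print(f"{name}: step {i}")
            yield  # voluntarily give control back to the scheduler

    def run_cooperatively(tasks):
        """Round-robin over generator-based tasks until all of them have finished."""
        queue = deque(tasks)
        while queue:
            current = queue.popleft()
            try:
                next(current)           # let the task run until its next yield
                queue.append(current)   # it cooperated, so it rejoins the queue
            except StopIteration:
                pass                    # task finished; drop it

    run_cooperatively([task("A", 2), task("B", 3)])  # output interleaves A and B

In preemptive multitasking, by contrast, the scheduler interrupts tasks on a timer whether or not they cooperate.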
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system and then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines like PDAs with less autonomy. They are able to operate with a limited number of resources, and are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. Such an event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single address space, machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing; when personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards.