The Linux kernel is a free and open-source, Unix-like operating system kernel. The Linux family of operating systems is based on this kernel and deployed both on traditional computer systems, such as personal computers and servers, in the form of Linux distributions, and on various embedded devices such as routers, wireless access points, PBXes, set-top boxes, FTA receivers, smart TVs, PVRs, and NAS appliances. While adoption of the Linux kernel in desktop operating systems is low, Linux-based operating systems dominate nearly every other segment of computing, from mobile devices to mainframes; as of November 2017, all of the world's 500 most powerful supercomputers run Linux. The Android operating system for smartphones, tablet computers, and smartwatches also uses the Linux kernel. The kernel was conceived and created in 1991 by Linus Torvalds for his personal computer, with no cross-platform intentions, but has since expanded to support a huge array of computer architectures, many more than other operating systems or kernels.
Linux attracted developers and users who adopted it as the kernel for other free software projects, notably the GNU operating system, which was created as a free, non-proprietary operating system based on UNIX in the aftermath of the Unix wars. The Linux kernel API, the application programming interface through which user programs interact with the kernel, is meant to be stable and to not break userspace programs; as part of the kernel's functionality, device drivers control the hardware. However, the interface between the kernel and loadable kernel modules, unlike in many other kernels and operating systems, is not meant to be stable by design. The Linux kernel, developed by contributors worldwide, is a prominent example of free and open source software, and day-to-day development discussions take place on the Linux kernel mailing list. The kernel is released under the GNU General Public License version 2, with some firmware images released under various non-free licenses. In April 1991, Linus Torvalds, at the time a 21-year-old computer science student at the University of Helsinki, started working on some simple ideas for an operating system.
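The stability of the system-call boundary described above can be observed directly from user space. The sketch below is an illustrative example, assuming a Linux system where glibc is available as libc.so.6: it reaches the kernel's getpid entry point through two independent wrappers (raw ctypes and Python's os module) and gets the same answer, precisely because the kernel interface does not change underneath user programs.

```python
import ctypes
import os

# Load the C library directly (Linux/glibc path; an assumption for this sketch).
libc = ctypes.CDLL("libc.so.6", use_errno=True)

# libc's getpid() is a thin wrapper around the getpid system call; because
# the kernel's system-call interface is kept stable, this call behaves the
# same across kernel versions.
pid_via_syscall = libc.getpid()

# Python's os.getpid() ultimately reaches the same kernel entry point,
# so both paths agree.
assert pid_via_syscall == os.getpid()
print(pid_via_syscall)
```

Contrast this with in-kernel module interfaces, which, as noted above, are deliberately not stable and may change between kernel releases.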
He started with a task switcher in a terminal driver. On 25 August 1991, Torvalds posted the following to comp.os.minix, a newsgroup on Usenet: "I'm doing a (free) operating system for 386 AT clones. This has been brewing since April, and is starting to get ready. I'd like any feedback on things people like/dislike in minix. I've ported bash and gcc, and things seem to work; this implies that I'll get something practical within a few months. Yes - it's free of any minix code, and it has a multi-threaded fs. It is NOT portable, and it never will support anything other than AT-harddisks, as that's all I have :-(. It's in C, but most people wouldn't call what I write C. It uses every conceivable feature of the 386 I could find, as it was a project to teach me about the 386. As mentioned, it uses a MMU, for both paging and segmentation. It's the segmentation that makes it 386-dependent; some of my 'C'-files are as much assembler as C. Unlike minix, I happen to LIKE interrupts, so interrupts are handled without trying to hide the reason behind them." After that, many people contributed code to the project.
Early on, the MINIX community contributed code and ideas to the Linux kernel. At the time, the GNU Project had created many of the components required for a free operating system, but its own kernel, GNU Hurd, was incomplete and unavailable; the Berkeley Software Distribution had not yet freed itself from legal encumbrances. Despite the limited functionality of the early versions, Linux rapidly gained developers and users. In September 1991, Torvalds released version 0.01 of the Linux kernel on the FTP server of the Finnish University and Research Network. It had 10,239 lines of code. On 5 October 1991, version 0.02 of the Linux kernel was released. Torvalds assigned version 0 to the kernel to indicate that it was for testing and not intended for production use. In December 1991, Linux kernel 0.11 was released. This version was the first to be self-hosting: Linux kernel 0.11 could be compiled on a computer running the same kernel version. When Torvalds released version 0.12 in February 1992, he adopted the GNU General Public License version 2 over his previous self-drafted license, which had not permitted commercial redistribution.
On 19 January 1992, the first post to the new newsgroup alt.os.linux was submitted. On 31 March 1992, the newsgroup was renamed comp.os.linux. The fact that Linux is a monolithic kernel rather than a microkernel was the topic of a debate between Andrew S. Tanenbaum, the creator of MINIX, and Torvalds. This discussion, known as the Tanenbaum–Torvalds debate, started in 1992 on the Usenet discussion group comp.os.minix as a general debate about Linux and kernel architecture. Tanenbaum argued that microkernels were superior to monolithic kernels and that therefore Linux was obsolete. Unlike traditional monolithic kernels, device drivers in Linux are configured as loadable kernel modules and can be loaded or unloaded while the system is running.
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage, and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android reached a share of up to 70% in 2017; according to third quarter 2016 data, Android on smartphones was dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a yearly decrease in market share of 5.2 percent, while other operating systems amounted to just 0.3 percent.
Linux distributions are dominant in supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently; this is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and cooperative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
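Cooperative multitasking, as described above, can be sketched with Python generators: each task runs until it voluntarily yields control back to a round-robin scheduler, and nothing can preempt a task that refuses to cooperate. This is an illustrative toy, not any particular OS scheduler; the task and run names are invented for the example.

```python
from collections import deque

def task(name, steps, log):
    # A cooperative task: it runs one step, then voluntarily yields control.
    for i in range(steps):
        log.append(f"{name}{i}")
        yield  # hand the processor back to the scheduler

def run(tasks):
    # Round-robin scheduler: advance each task to its next yield, then
    # requeue it. A task that never yields would starve all the others,
    # which is exactly the weakness of cooperative multitasking.
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)
            queue.append(t)  # not finished: take another turn later
        except StopIteration:
            pass  # task completed, drop it

log = []
run([task("A", 2, log), task("B", 2, log)])
print(log)  # interleaved: ['A0', 'B0', 'A1', 'B1']
```

A preemptive scheduler differs in that the interleaving is forced by timer interrupts rather than by each task's own `yield`.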
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage, and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed, and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines, like PDAs, with less autonomy. They are able to operate with a limited number of resources, and they are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single address space, machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing; when personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks.
In computing, a file system or filesystem controls how data is stored and retrieved. Without a file system, information placed in a storage medium would be one large body of data with no way to tell where one piece of information stops and the next begins. By separating the data into pieces and giving each piece a name, the information is isolated and identified. Taking its name from the way paper-based information systems are named, each group of data is called a "file"; the structure and logic rules used to manage the groups of information and their names are called a "file system". There are many different kinds of file systems; each one has a different structure and logic, and different properties of speed, security, and more. Some file systems have been designed to be used for specific applications. For example, the ISO 9660 file system is designed for optical discs. File systems can be used on numerous different types of storage devices that use different kinds of media; as of 2019, hard disk drives have been key storage devices and are projected to remain so for the foreseeable future.
Other kinds of media that are used include SSDs, magnetic tapes, and optical discs. In some cases, such as with tmpfs, the computer's main memory is used to create a temporary file system for short-term use. Some file systems are used on local data storage devices; some file systems are "virtual", meaning that the supplied "files" are computed on request or are a mapping into a different file system used as a backing store. The file system manages access to both the content of files and the metadata about those files, and is responsible for arranging storage space. Before the advent of computers, the term file system was used to describe a method of storing and retrieving paper documents. By 1961 the term was being applied to computerized filing alongside the original meaning, and by 1964 it was in general use. A file system consists of three layers; sometimes the layers are explicitly separated, and sometimes the functions are combined. The logical file system is responsible for interaction with the user application. It provides the application program interface for file operations — OPEN, CLOSE, READ, etc. — and passes the requested operation to the layer below it for processing.
The logical file system "manages open file table entries and per-process file descriptors" and provides "file access, directory operations and protection." The second, optional layer is the virtual file system: "This interface allows support for multiple concurrent instances of physical file systems, each of which is called a file system implementation." The third layer is the physical file system. This layer is concerned with the physical operation of the storage device; it processes the physical blocks being read and written. It handles buffering and memory management and is responsible for the physical placement of blocks in specific locations on the storage medium. The physical file system interacts with the device drivers or with the channel to drive the storage device. Note: this only applies to file systems used in storage devices. File systems allocate space in a granular manner, usually multiple physical units on the device. The file system is responsible for organizing files and directories, and for keeping track of which areas of the media belong to which file and which are not being used.
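The logical layer's OPEN/CLOSE/READ operations map closely onto the low-level file APIs most languages expose. The minimal Python sketch below (the file name and contents are arbitrary) walks through a write-then-read cycle using the os module's thin wrappers over those operations:

```python
import os
import tempfile

# A scratch location for the demonstration file.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)  # OPEN (create for writing)
os.write(fd, b"hello file system")            # WRITE
os.close(fd)                                  # CLOSE

fd = os.open(path, os.O_RDONLY)               # OPEN (read-only)
data = os.read(fd, 1024)                      # READ
os.close(fd)                                  # CLOSE

print(data)  # b'hello file system'
```

Each of these calls passes through the logical file system, which resolves the name and file descriptor, before the lower layers translate the request into physical block operations.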
For example, in Apple DOS of the early 1980s, 256-byte sectors on a 140-kilobyte floppy disk used a track/sector map. This results in unused space when a file is not an exact multiple of the allocation unit, sometimes referred to as slack space. For a 512-byte allocation, the average unused space is 256 bytes. For 64 KB clusters, the average unused space is 32 KB. The size of the allocation unit is chosen when the file system is created. Choosing the allocation size based on the average size of the files expected to be in the file system can minimize the amount of unusable space, and the default allocation may provide reasonable usage. Choosing an allocation size that is too small results in excessive overhead if the file system will contain very large files. File system fragmentation occurs as a file system is used and files are created and deleted. When a file is created, the file system allocates space for the data; some file systems permit or require specifying an initial space allocation and subsequent incremental allocations as the file grows.
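The averages quoted above follow from a simple model: if a file's final partial block is uniformly distributed over the allocation unit, the expected slack is half a unit. The small sketch below (function names invented for the example) computes both the expected slack and the exact slack for a concrete set of file sizes:

```python
def expected_slack(allocation_unit):
    # Under the uniform-tail assumption, expected unused ("slack") space
    # per file is half an allocation unit.
    return allocation_unit // 2

def total_slack(file_sizes, allocation_unit):
    # Exact slack for concrete files: space allocated minus space used.
    slack = 0
    for size in file_sizes:
        remainder = size % allocation_unit
        if remainder:
            slack += allocation_unit - remainder
    return slack

print(expected_slack(512))        # 256 bytes, matching the text
print(expected_slack(64 * 1024))  # 32768 bytes = 32 KB
print(total_slack([100, 512, 1000], 512))  # 412 + 0 + 24 = 436 bytes
```

This is why large clusters waste space on file systems full of small files, while tiny clusters inflate bookkeeping overhead for very large files.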
As files are deleted, the space they were allocated is considered available for use by other files. This creates alternating used and unused areas of various sizes; this is free space fragmentation. When a file is created and there is not an area of contiguous space available for its initial allocation, the space must be assigned in fragments. When a file is modified such that it becomes larger, it may exceed the space allocated to it; another allocation must be assigned elsewhere and the file becomes fragmented. A filename is used to identify a storage location in the file system. Most file systems have restrictions on the length of filenames. In some file systems, filenames are not case sensitive. Most modern file systems allow filenames to contain a wide range of characters from the Unicode character set. However, they may have restrictions on the use of certain special characters.
LWN.net is a computing webzine with an emphasis on free software and software for Linux and other Unix-like operating systems. It consists of a weekly issue, separate stories which are published most days, and threaded discussion attached to every story. Most news items published daily are short summaries of articles published elsewhere and are free to all viewers. Original articles are published weekly on Thursdays and are available only to subscribers for one week, after which they become free as well. LWN.net is published by Eklektix, Inc. LWN caters to a more technical audience than other Linux/free software publications, and it is praised for its in-depth coverage of Linux kernel internals and Linux kernel mailing list discussions. The acronym "LWN" stood for Linux Weekly News. Founded by Jonathan Corbet and Elizabeth Coolbaugh and published since January 1998, LWN was a free site devoted to collecting Linux news, published weekly. At the end of May 2002, LWN announced a redesigned site. Among the changes was a facility for readers to post comments about stories.
On July 25, 2002, LWN announced that due to its inability to raise enough funds through donations, the following issue would be its last. Following an outpouring of support from readers, the editors of LWN decided to continue publishing, albeit with a subscription model. New weekly editions of LWN are only available to readers who subscribe at one of three levels. After a one-week delay, each issue becomes available to readers who are unable or unwilling to pay. LWN.net's staff consists of Jonathan Corbet, who oversees the front and kernel pages as well as overall "executive editor" functions; LWN.net also purchases a number of articles from freelance authors.
In Unix and operating systems inspired by it, the file system is considered a central component of the operating system. It was one of the first parts of the system to be designed and implemented by Ken Thompson in the first experimental version of Unix, dated 1969. As in other operating systems, the filesystem provides information storage and retrieval, as well as one of several forms of interprocess communication, in that the many small programs that traditionally form a Unix system can store information in files so that other programs can read them, although pipes complemented it in this role starting with the Third Edition. The filesystem also provides access to other resources through so-called device files that are entry points to terminals and mice. The rest of this article uses Unix as a generic name to refer to both the original Unix operating system and its many workalikes. The filesystem appears as one rooted tree of directories. Instead of addressing separate volumes such as disk partitions, removable media, and network shares as separate trees, such volumes can be mounted on a directory, causing the volume's file system tree to appear as that directory in the larger tree.
The root of the entire tree is denoted /. In the original Bell Labs Unix, a two-disk setup was customary, where the first disk contained startup programs, while the second contained users' files and programs; this second disk was mounted at the empty directory named usr on the first disk, causing the two disks to appear as one filesystem, with the second disk's contents viewable at /usr. Unix directories do not contain files. Instead, they contain the names of files paired with references to so-called inodes, which in turn contain both the file and its metadata. Multiple names in the file system may refer to the same file, a feature termed a hard link. The mathematical traits of hard links make the file system a limited type of directed acyclic graph, although the directories still form a tree, as they may not be hard-linked. The original Unix file system supported three types of files: ordinary files, directories, and "special files", also termed device files. The Berkeley Software Distribution and System V each added a file type to be used for interprocess communication: BSD added sockets, while System V added FIFO files.
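The hard-link behavior described above can be demonstrated on any POSIX filesystem: after creating a second name with os.link, both names resolve to the same inode and the file's link count rises to 2. A minimal sketch, assuming a Unix-like system (the paths are temporary and arbitrary):

```python
import os
import tempfile

# Create a file under a scratch directory.
d = tempfile.mkdtemp()
original = os.path.join(d, "original")
alias = os.path.join(d, "alias")

with open(original, "w") as f:
    f.write("same underlying file")

os.link(original, alias)  # create a second directory entry (hard link)

# Both names are paired with the same inode, and the link count is now 2.
assert os.stat(original).st_ino == os.stat(alias).st_ino
assert os.stat(original).st_nlink == 2

# Data written through one name is visible through the other.
print(open(alias).read())
```

Deleting one name only decrements the link count; the inode (and the data) survives until the last name is removed.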
BSD also added symbolic links to the range of file types: files that refer to other files, complementing hard links. Symlinks were modeled after a similar feature in Multics, and differ from hard links in that they may span filesystems and that their existence is independent of the target object. Other Unix systems may support additional types of files. Certain conventions exist for locating some kinds of files, such as programs, system configuration files, and users' home directories; these were first documented in the hier man page in Version 7 Unix. The details of the directory layout have varied over time. Although the file system layout is not part of the Single UNIX Specification, several attempts exist to standardize it, such as the System V Application Binary Interface, the Intel Binary Compatibility Standard, the Common Operating System Environment, and the Linux Foundation's Filesystem Hierarchy Standard. Here is a generalized overview of common locations of files on a Unix operating system: This article incorporates material from the Citizendium article "Unix filesystem", licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL.
OpenVMS is a closed-source, proprietary computer operating system for use in general-purpose computing. It is the successor to the VAX/VMS operating system, produced by Digital Equipment Corporation and first released in 1977 for its VAX-11 series of minicomputers; the 11/780 was introduced at DEC's October 1977 annual shareholders' meeting. In the 1990s, it was used for the successor series of DEC Alpha systems. OpenVMS also runs on the HP Itanium-based families of computers; as of 2019, a port to the x86-64 architecture is underway. The name VMS is derived from virtual memory system, after one of its principal architectural features. OpenVMS is a multi-user, multiprocessing, virtual memory-based operating system designed for use in time-sharing, batch processing, and transaction processing; when process priorities are suitably adjusted, it may approach real-time operating system characteristics. The system offers high availability through clustering and the ability to distribute the system over multiple physical machines.
This allows the system to be tolerant against disasters that may disable individual data-processing facilities. OpenVMS contains a graphical user interface, a feature not available on the original VAX-11/VMS system. Prior to the introduction of DEC VAXstation systems in the 1980s, the operating system was used and managed from text-based terminals, such as the VT100, which provide serial data communications and screen-oriented display features. Versions of VMS running on DEC Alpha workstations in the 1990s supported OpenGL and Accelerated Graphics Port graphics adapters. Enterprise-class environments select and use OpenVMS for various purposes including mail servers, network services, manufacturing or transportation control and monitoring, critical applications and databases, and environments where system uptime and data access are critical. System up-times of more than 10 years have been reported, and features such as rolling upgrades and clustering allow clustered applications and data to remain continuously accessible while operating system software and hardware maintenance and upgrades are performed, or when a whole data center is destroyed.
Customers using OpenVMS include banks and financial services, healthcare, network information services, and large-scale industrial manufacturers of various products. In mid-2014, Hewlett-Packard licensed the development of OpenVMS to VMS Software Inc. VMS Software will be responsible for developing OpenVMS, supporting existing hardware, and providing a roadmap to clients; the company has a team of veteran developers who worked on the software during DEC's ownership. In April 1975, Digital Equipment Corporation embarked on a hardware project, code-named Star, to design a 32-bit virtual address extension to its PDP-11 computer line. A companion software project, code-named Starlet, was started in June 1975 to develop a new operating system, based on RSX-11M, for the Star family of processors; these two projects were integrated from the beginning. Gordon Bell was the VP lead on its architecture. Roger Gourd was the project lead for the Starlet program, with software engineers Dave Cutler, Dick Hustvedt, and Peter Lipman acting as the technical project leaders, each having responsibility for a different area of the operating system.
The Star and Starlet projects culminated in the VAX-11/780 computer and the VAX-11/VMS operating system. The Starlet name survived in VMS as the name of several of the main system libraries, including STARLET.OLB and STARLET.MLB. Over the years the name of the product has changed. In 1980 it was renamed, with the version 2.0 release, to VAX/VMS. With the introduction of the MicroVAX range, such as the MicroVAX I, MicroVAX II and MicroVAX 2000 in the mid-to-late 1980s, DIGITAL released MicroVMS versions targeted at these platforms, which had much more limited memory and disk capacity. MicroVMS kits were released for VAX/VMS 4.4 to 4.7 on TK50 tapes and RX50 floppy disks, but were discontinued with VAX/VMS 5.0. In 1991, VMS was renamed to OpenVMS as an indication of its support for "open systems" industry standards such as POSIX and Unix compatibility, and to drop the hardware connection as the port to DIGITAL's 64-bit Alpha RISC processor was in process; the OpenVMS name first appeared after the version 5.4-2 release.
The VMS port to Alpha resulted in two separate source code libraries: the original source code library for the 32-bit VAX architecture and a second, new source code library for the 64-bit Alpha architecture. 1992 saw the release of the first version of OpenVMS for Alpha AXP systems, designated OpenVMS AXP V1.0. The decision to use the 1.x version numbering stream for the pre-production quality releases of OpenVMS AXP caused confusion for some customers and was not repeated in the next platform port, to the Itanium. In 1994, with the release of OpenVMS version 6.1, feature parity between the VAX and Alpha variants was achieved; this was the so-called Functional Equivalence release in the marketing materials of the time. Some features were missing however, e.g. based shareable images, which were implemented in later versions. Subsequent version numberings for the VAX and Alpha variants of the product have remained consistent.
A computer cluster is a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The components of a cluster are connected to each other through fast local area networks, with each node running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware and the same operating system, although in some setups different operating systems can be used on each computer, or different hardware. Clusters are deployed to improve performance and availability over that of a single computer, while being much more cost-effective than single computers of comparable speed or availability. Computer clusters emerged as a result of the convergence of a number of computing trends including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing.
They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world, such as IBM's Sequoia. Prior to the advent of clusters, single-unit fault-tolerant mainframes with modular redundancy were employed. In contrast to high-reliability mainframes, clusters are cheaper to scale out, but have increased complexity in error handling, as in clusters error modes are not opaque to running programs. The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations. The computer clustering approach connects a number of readily available computing nodes via a fast local area network. The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via a single system image concept.
Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as peer to peer or grid computing which use many nodes, but with a far more distributed nature. A computer cluster may be a simple two-node system which just connects two personal computers, or may be a fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster which may be built with a few personal computers to produce a cost-effective alternative to traditional high performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer; the developers used Linux, the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a low cost. Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may be used to achieve high levels of performance.
The TOP500 organization's semiannual list of the 500 fastest supercomputers includes many clusters; e.g. the world's fastest machine in 2011 was the K computer, which has a distributed-memory, cluster architecture. Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer, or needed a backup. Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law. The history of early computer clusters is more or less directly tied into the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster. The first production system designed as a cluster was the Burroughs B5700 in the mid-1960s.
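Amdahl's Law, mentioned above, is easy to state in code: if a fraction p of a job can be parallelized across n processors, the overall speedup is 1 / ((1 - p) + p/n), so the serial fraction bounds what any cluster, however large, can achieve. A small sketch (the function name is invented for the example):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    # Amdahl's law: total speedup is limited by the serial fraction of the
    # work, since that part cannot be spread across cluster nodes.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# With 95% of the work parallelizable, the speedup can never exceed
# 1 / 0.05 = 20x, no matter how many nodes the cluster has.
print(round(amdahl_speedup(0.95, 8), 2))     # 5.93
print(round(amdahl_speedup(0.95, 1024), 2))  # 19.64
```

This diminishing return is why cluster designers care as much about reducing serial bottlenecks (I/O, coordination, middleware overhead) as about adding nodes.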
This allowed up to four computers, each with either one or two processors, to be coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation. The first commercial loosely coupled clustering product was Datapoint Corporation's "Attached Resource Computer" system, developed in 1977, using ARCnet as the cluster interface. Clustering per se did not take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VAX/VMS operating system. The ARC and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the Tandem Himalayan and the IBM S/390 Parallel Sysplex. Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to use it within the same computer.
Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976 and introduced internal parallelism via vector processing. While early supercomputers excluded clusters and relied on shared memory, in time some of the fastest supercomputers (e.g. the K computer) relied on cluster architectures.