Linux is a family of free and open-source operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds. Linux is typically packaged in a Linux distribution. Distributions include the Linux kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name GNU/Linux to emphasize the importance of GNU software, causing some controversy. Popular Linux distributions include Debian and Ubuntu. Commercial distributions include SUSE Linux Enterprise Server. Desktop Linux distributions include a windowing system such as X11 or Wayland and a desktop environment such as GNOME or KDE Plasma. Distributions intended for servers may omit graphics altogether or include a solution stack such as LAMP; because Linux is freely redistributable, anyone may create a distribution for any purpose. Linux was originally developed for personal computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system.
Linux is the leading operating system on servers and other big iron systems such as mainframe computers, and it is the only operating system used on TOP500 supercomputers. It is used by around 2.3 percent of desktop computers. The Chromebook, which runs the Linux kernel-based Chrome OS, dominates the US K–12 education market and represents nearly 20 percent of sub-$300 notebook sales in the US. Linux also runs on embedded systems, i.e. devices whose operating system is built into the firmware and is tailored to the system. This includes routers, automation controls, digital video recorders, video game consoles, and smartwatches. Many smartphones and tablet computers run Android and other Linux derivatives; because of the dominance of Android on smartphones, Linux has the largest installed base of all general-purpose operating systems. Linux is one of the most prominent examples of open-source software collaboration; the source code may be used and distributed, commercially or non-commercially, by anyone under the terms of its respective licenses, such as the GNU General Public License.
The Unix operating system was conceived and implemented in 1969 at AT&T's Bell Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna. First released in 1971, Unix was written in assembly language, as was common practice at the time. In a key pioneering step, it was rewritten in the C programming language in 1973 by Dennis Ritchie; the availability of a high-level language implementation of Unix made its porting to different computer platforms easier. Due to an earlier antitrust case forbidding it from entering the computer business, AT&T was required to license the operating system's source code to anyone who asked; as a result, Unix grew and was adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs. The GNU Project, started in 1983 by Richard Stallman, had the goal of creating a "complete Unix-compatible software system" composed of free software; work began in 1984. In 1985, Stallman started the Free Software Foundation, and in 1989 he wrote the GNU General Public License.
By the early 1990s, many of the programs required in an operating system were completed, although low-level elements such as device drivers and the kernel, called GNU Hurd, were stalled and incomplete. Linus Torvalds has stated that if the GNU kernel had been available at the time, he would not have decided to write his own. Although not released until 1992, due to legal complications, development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux. Torvalds has also stated that if 386BSD had been available at the time, he would not have created Linux. MINIX was created by Andrew S. Tanenbaum, a computer science professor, and released in 1987 as a minimal Unix-like operating system targeted at students and others who wanted to learn operating system principles. Although the complete source code of MINIX was available, its licensing terms prevented it from being free software until the licensing changed in April 2000. In 1991, while attending the University of Helsinki, Torvalds became curious about operating systems.
Frustrated by the licensing of MINIX, which at the time limited it to educational use only, he began to work on his own operating system kernel, which became the Linux kernel. Torvalds began the development of the Linux kernel on MINIX, and applications written for MINIX were also used on Linux. Later, Linux matured and further Linux kernel development took place on Linux systems. GNU applications replaced all MINIX components, because it was advantageous to use the freely available code from the GNU Project with the fledgling operating system. Torvalds initiated a switch from his original license, which prohibited commercial redistribution, to the GNU GPL. Developers worked to integrate GNU components with the Linux kernel, making a functional and free operating system. Linus Torvalds had wanted to call his invention "Freax", a portmanteau of "free", "freak", and "x" (as an allusion to Unix).
Computer hardware includes the physical, tangible parts or components of a computer, such as the cabinet, central processing unit, keyboard, computer data storage, graphics card, sound card and motherboard. By contrast, software is the set of instructions that can be stored and run by hardware. Hardware is so termed because it is rigid with respect to changes or modifications. Intermediate between software and hardware is "firmware", which is software closely tied to the particular hardware of a computer system and thus the most difficult to change, but also among the most stable with respect to consistency of interface; the progression from levels of "hardness" to "softness" in computer systems parallels a progression of layers of abstraction in computing. Hardware is directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware components; the template for all modern computers is the Von Neumann architecture, detailed in a 1945 paper by Hungarian mathematician John von Neumann.
This describes a design architecture for an electronic digital computer with subdivisions of a processing unit consisting of an arithmetic logic unit and processor registers, a control unit containing an instruction register and program counter, a memory to store both data and instructions, external mass storage, and input and output mechanisms. The meaning of the term has evolved to mean a stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus; this is referred to as the Von Neumann bottleneck and limits the performance of the system. The personal computer, also known as the PC, is one of the most common types of computer due to its versatility and low price. Laptops are very similar, although they may use lower-power or reduced-size components, with correspondingly lower performance. The computer case encloses most of the components of the system. It provides mechanical support and protection for internal elements such as the motherboard, disk drives and power supplies, and controls and directs the flow of cooling air over internal components.
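The shared instruction/data memory at the heart of this design, and the bottleneck it creates, can be illustrated with a toy fetch-decode-execute loop. The following C sketch is purely illustrative; the tiny instruction set and opcode values are invented for this example, and the point is simply that instruction fetches and data accesses go through the same memory.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy stored-program machine: a single memory array holds both the
 * instructions and the data, so every instruction fetch and every data
 * access travel over the same path: the Von Neumann bottleneck. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

int main(void) {
    uint8_t mem[32] = {
        /* program: acc = mem[16]; acc += mem[17]; mem[18] = acc; halt */
        OP_LOAD, 16, OP_ADD, 17, OP_STORE, 18, OP_HALT, 0,
    };
    mem[16] = 7;                      /* data lives in the same memory ... */
    mem[17] = 35;                     /* ... as the program above */

    uint8_t pc = 0, acc = 0;
    for (;;) {
        uint8_t op  = mem[pc];        /* instruction fetch */
        uint8_t arg = mem[pc + 1];
        pc += 2;
        if (op == OP_HALT) break;
        if (op == OP_LOAD)  acc  = mem[arg];   /* data access, same memory */
        if (op == OP_ADD)   acc += mem[arg];
        if (op == OP_STORE) mem[arg] = acc;
    }
    printf("mem[18] = %d\n", mem[18]);         /* prints 42 */
    return 0;
}
```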
The case is also part of the system that controls electromagnetic interference radiated by the computer and protects internal parts from electrostatic discharge. Large tower cases provide extra internal space for multiple disk drives or other peripherals and stand on the floor, while desktop cases provide less expansion room. All-in-one style designs include a video display built into the same case. Portable and laptop computers require cases. A current development in laptop computers is a detachable keyboard, which allows the system to be configured as a touch-screen tablet. Hobbyists may decorate the cases with colored lights, paint, or other features, in an activity called case modding. A power supply unit converts alternating current electric power to low-voltage DC power for the internal components of the computer. Laptops are capable of running from a built-in battery for a period of hours. The motherboard is the main component of a computer. It is a board with integrated circuitry that connects the other parts of the computer, including the CPU, the RAM and the disk drives, as well as any peripherals connected via the ports or the expansion slots.
Components directly attached to or part of the motherboard include the following. The CPU, which performs most of the calculations that enable a computer to function, is sometimes referred to as the brain of the computer. It is cooled by a heatsink and fan, or by a water-cooling system. Most newer CPUs include an on-die graphics processing unit. The clock speed of a CPU governs how fast it executes instructions and is measured in GHz. Many modern computers have the option to overclock the CPU, which enhances performance at the expense of greater thermal output and thus a need for improved cooling. The chipset, which includes the north bridge, mediates communication between the CPU and the other components of the system, including main memory. Random-access memory stores the code and data that are being actively accessed by the CPU; for example, when a web browser is opened on the computer it takes up memory. RAM comes on DIMMs in sizes such as 2 GB, 4 GB and 8 GB, but can be much larger. Read-only memory stores the BIOS that runs when the computer is powered on or otherwise begins execution, a process known as bootstrapping, or "booting" or "booting up".
The BIOS includes power management firmware. Newer motherboards use the Unified Extensible Firmware Interface instead of the BIOS. Buses connect the CPU to various internal components and to expansion cards for graphics and sound. The CMOS battery, which powers the CMOS memory for date and time in the BIOS chip, is generally a watch battery. The video card processes computer graphics; more powerful graphics cards are better suited to handle strenuous tasks, such as playing intensive video games. An expansion card in computing is a printed circuit board that can be inserted into an expansion slot of a computer motherboard or backplane to add functionality to a computer system.
Exokernel is an operating system kernel developed by the MIT Parallel and Distributed Operating Systems group, and also the name of a class of similar operating systems. Operating systems generally present hardware resources to applications through high-level abstractions such as file systems; the idea behind exokernels is to force as few abstractions as possible on application developers, enabling them to make as many decisions as possible about hardware abstractions. Exokernels are tiny, since functionality is limited to ensuring protection and multiplexing of resources, which is considerably simpler than conventional microkernels' implementation of message passing and monolithic kernels' implementation of high-level abstractions. Implemented abstractions are called library operating systems; the kernel only ensures that the requested resource is free and that the application is allowed to access it. This low-level hardware access allows the programmer to implement custom abstractions and omit unnecessary ones, most commonly to improve a program's performance. It also allows programmers to choose what level of abstraction they want, high or low.
Exokernels can be seen as an application of the end-to-end principle to operating systems, in that they do not force an application program to layer its abstractions on top of other abstractions that were designed with different requirements in mind. For example, in the MIT Exokernel project, the Cheetah web server stores preformatted Internet Protocol packets on the disk; the kernel provides safe access to the disk by preventing unauthorized reading and writing, but how the disk is abstracted is up to the application or the libraries the application uses. Traditionally, kernel designers have sought to make individual hardware resources invisible to application programs by requiring the programs to interact with the hardware via some abstraction model; these models include file systems for disk storage, virtual address spaces for memory, schedulers for task management, and sockets for network communication. These abstractions of the hardware make it easier to write programs in general, but they limit performance and stifle experimentation in new abstractions.
A security-oriented application might need a file system that does not leave old data on the disk, while a reliability-oriented application might need a file system that keeps such data for failure recovery. One option is to remove the kernel entirely and program directly to the hardware, but then the entire machine would be dedicated to the application being written. The exokernel concept is a compromise: let the kernel allocate the basic physical resources of the machine to multiple application programs, and let each program decide what to do with these resources. The program can link to a support library that implements the abstractions it needs. MIT developed two exokernel-based operating systems, using two kernels: Aegis, a proof of concept with limited support for storage, and XOK, which applied the exokernel concept more thoroughly. An essential idea of the MIT exokernel system is that the operating system should act as an executive for small programs provided by the application software, which are constrained only by the requirement that the exokernel must be able to guarantee that they use the hardware safely.
The MIT exokernel manages hardware resources as follows. Processor: The kernel represents the processor resources as a timeline from which programs can allocate intervals of time. A program can yield the rest of its time slice to another designated program; the kernel notifies programs of processor events, such as interrupts, hardware exceptions, and the beginning or end of a time slice. If a program takes a long time to handle an event, the kernel will penalize it on subsequent time slice allocations. Memory: The kernel allocates physical memory pages to programs and controls the translation lookaside buffer. A program can share a page with another program by sending it a capability to access that page; the kernel ensures that programs access only pages for which they hold such a capability. Disk storage: The kernel identifies disk blocks to the application program by their physical block address, allowing the application to optimize data placement; when the program initializes its use of the disk, it provides the kernel with a function that the kernel can use to determine which blocks the program controls.
The kernel uses this callback to verify that, when it allocates a new block, the program claims only the block just allocated in addition to those it already controlled. Networking: The kernel implements a programmable packet filter, which executes programs in a byte code language designed for easy security-checking by the kernel. The available library operating systems for the exokernel include the custom ExOS system and an emulator for BSD. In addition to these, the exokernel team created the Cheetah web server, which uses the kernel directly. The exokernel concept has been around since at least 1994, but as of 2010 exokernels are still a research effort and have not been used in any major commercial operating systems. A concept operating system using the exokernel approach is Nemesis, written by the University of Cambridge, the University of Glasgow, Citrix Systems, and the Swedish Institute of Computer Science. MIT has built several exokernel-based systems, including ExOS.
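As a concrete illustration of the disk interface described above, the sketch below mimics the block-ownership callback in C. The function and type names here are hypothetical, invented for this example rather than taken from the actual XOK interface; the point is only the division of labor: the library operating system chooses block placement and reports what it owns, while the kernel checks that blocks are globally free before granting them.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical sketch of the exokernel block-ownership callback (not the
 * real XOK API): the library OS supplies a predicate the kernel can call
 * to ask "does this program claim block b?", and the kernel consults it
 * when granting new blocks. */

#define NBLOCKS 64

typedef bool (*owns_block_fn)(unsigned b);

/* "Kernel" side: a global free map of physical disk blocks. */
static bool block_taken[NBLOCKS];

static bool exo_alloc_block(owns_block_fn owns, unsigned b) {
    if (b >= NBLOCKS || block_taken[b])
        return false;              /* invalid, or already owned by someone */
    block_taken[b] = true;         /* grant the physical block ... */
    return owns(b);                /* ... and check the program now claims it */
}

/* "Library OS" side: the application's own record of the blocks it uses;
 * how those blocks are organized into a file system is entirely up to it. */
static bool my_blocks[NBLOCKS];
static bool my_owns(unsigned b) { return my_blocks[b]; }

int main(void) {
    my_blocks[7] = true;   /* the library OS decides to place data in block 7 */
    printf("alloc block 7: %s\n", exo_alloc_block(my_owns, 7) ? "ok" : "denied");
    printf("alloc block 7 again: %s\n", exo_alloc_block(my_owns, 7) ? "ok" : "denied");
    return 0;
}
```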
The NetBSD rump kernel is the first implementation of the "anykernel" concept, where drivers can either be compiled into and run in the monolithic kernel or be run in user space on top of a lightweight rump kernel. The NetBSD drivers can be used on top of the rump kernel on a wide range of POSIX operating systems, such as the Hurd, NetBSD, DragonFly BSD, Solaris and Cygwin, along with the file system utilities built with the rump libraries. The rump kernels can also run, without POSIX, directly on top of the Xen hypervisor, on an L4 microkernel using the Genode OS Framework, or on "OS-less" bare metal. An anykernel is different in concept from microkernels, partitioned kernels or hybrid kernels in that it tries to preserve the advantages of a monolithic kernel while still enabling the faster driver development and added security of user space. The "anykernel" concept refers to an architecture-agnostic approach to drivers, where drivers can either be compiled into the monolithic kernel or be run as a userspace process, microkernel-style, without code changes.
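A short sketch of what running kernel components in user space on top of a rump kernel looks like from a client program is given below. It assumes the rump_init()/rump_sys_*() client interface described in the rump kernel documentation; header names, flag constants and the libraries to link against vary between NetBSD versions and host platforms, so treat the details as approximate rather than definitive.

```c
/* Minimal sketch of a rump kernel client: unmodified NetBSD kernel
 * components (VFS, a file system, etc.) run inside this ordinary
 * user-space process and service the rump_sys_* calls, while the host
 * kernel is not involved. Assumes the rump_init()/rump_sys_*() interface;
 * check headers, flags and link libraries against the rump kernel
 * documentation for the NetBSD version in use. */
#include <rump/rump.h>
#include <rump/rump_syscalls.h>
#include <fcntl.h>
#include <stdio.h>

int main(void) {
    if (rump_init() != 0) {             /* bootstrap the in-process kernel */
        fprintf(stderr, "rump_init failed\n");
        return 1;
    }
    const char msg[] = "hello from a rump kernel\n";
    int fd = rump_sys_open("/hello.txt", O_CREAT | O_RDWR, 0644);
    if (fd >= 0) {
        rump_sys_write(fd, msg, sizeof(msg) - 1);
        rump_sys_close(fd);
    }
    return 0;
}
```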
"Drivers" is meant in a wide sense here: not only device drivers are included, but also file systems and the networking stack. The File System Access Utilities (fs-utils) is a subproject built with the rump libraries; it aims to provide a set of utilities to access and modify a file system image without having to mount it. fs-utils does not require a superuser account to access the device. The advantage of fs-utils over similar projects such as mtools is that it supports familiar Unix file system commands for the large number of file systems supported by NetBSD.
Fortran is a general-purpose, compiled imperative programming language, especially suited to numeric computation and scientific computing. Originally developed by IBM in the 1950s for scientific and engineering applications, FORTRAN came to dominate this area of programming early on and has been in continuous use for over half a century in computationally intensive areas such as numerical weather prediction, finite element analysis, computational fluid dynamics, computational physics and computational chemistry. It remains a popular language for high-performance computing and is used for programs that benchmark and rank the world's fastest supercomputers. Fortran encompasses a lineage of versions, each of which evolved to add extensions to the language while retaining compatibility with prior versions. Successive versions have added support for structured programming and processing of character-based data, array programming, modular programming and generic programming, high-performance Fortran, object-oriented programming and concurrent programming.
Fortran's design was the basis for many other programming languages. Among the better known is BASIC, which is based on FORTRAN II with a number of syntax cleanups, notably better logical structures, and other changes to work more easily in an interactive environment. The names of earlier versions of the language through FORTRAN 77 were conventionally spelled in all capitals. The capitalization has been dropped in referring to newer versions beginning with Fortran 90; the official language standards now refer to the language as "Fortran" rather than all-caps "FORTRAN". In late 1953, John W. Backus submitted a proposal to his superiors at IBM to develop a more practical alternative to assembly language for programming their IBM 704 mainframe computer. Backus' historic FORTRAN team consisted of programmers Richard Goldberg, Sheldon F. Best, Harlan Herrick, Peter Sheridan, Roy Nutt, Robert Nelson, Irving Ziller, Lois Haibt and David Sayre. Its concepts included easier entry of equations into a computer, an idea developed by J. Halcombe Laning and demonstrated in the Laning and Zierler system of 1952.
A draft specification for The IBM Mathematical Formula Translating System was completed by November 1954. The first manual for FORTRAN appeared in October 1956, with the first FORTRAN compiler delivered in April 1957; this was the first optimizing compiler, because customers were reluctant to use a high-level programming language unless its compiler could generate code with performance comparable to that of hand-coded assembly language. While the community was skeptical that this new method could outperform hand-coding, it reduced the number of programming statements necessary to operate a machine by a factor of 20, and it gained acceptance. John Backus said during a 1979 interview with Think, the IBM employee magazine, "Much of my work has come from being lazy. I didn't like writing programs, so, when I was working on the IBM 701, writing programs for computing missile trajectories, I started work on a programming system to make it easier to write programs." The language was adopted by scientists for writing numerically intensive programs, which encouraged compiler writers to produce compilers that could generate faster and more efficient code.
The inclusion of a complex number data type in the language made Fortran especially suited to technical applications such as electrical engineering. By 1960, versions of FORTRAN were available for the IBM 709, 650, 1620 and 7090 computers. The increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed. For these reasons, FORTRAN is considered to be the first widely used programming language supported across a variety of computer architectures. The development of Fortran paralleled the early evolution of compiler technology, and many advances in the theory and design of compilers were motivated by the need to generate efficient code for Fortran programs. The initial release of FORTRAN for the IBM 704 contained 32 statements, including: DIMENSION and EQUIVALENCE statements; assignment statements; the three-way arithmetic IF statement, which passed control to one of three locations in the program depending on whether the result of the arithmetic expression was negative, zero, or positive; and IF statements for checking exceptions.
The arithmetic IF statement was reminiscent of a three-way comparison instruction available on the 704. The statement provided the only way to compare numbers: by testing their difference, with an attendant risk of overflow. This deficiency was later overcome by the "logical" facilities introduced in FORTRAN IV. The FREQUENCY statement was used to give branch probabilities for the three branch cases of the arithmetic IF statement; the first FORTRAN compiler used this weighting to perform at compile time a Monte Carlo simulation of the generated code, the results of which were used to optimize the placement of basic blocks in memory.
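In modern terms, the arithmetic IF was a three-way branch on the sign of an expression. The fragment below (written in C rather than FORTRAN, to match the other examples in this document) sketches the equivalent control flow for a statement such as IF (I - J) 10, 20, 30, including the overflow hazard of comparing two numbers by testing their difference.

```c
#include <stdio.h>

/* Equivalent control flow for the FORTRAN arithmetic IF
 *     IF (I - J) 10, 20, 30
 * which branches to label 10, 20 or 30 when the expression is
 * negative, zero or positive respectively. Comparing I and J by
 * testing their difference can overflow for operands of opposite
 * sign and large magnitude, which is the deficiency noted above. */
static void arithmetic_if(int i, int j) {
    int diff = i - j;                 /* the subtraction that may overflow */
    if (diff < 0)
        printf("label 10 (negative)\n");
    else if (diff == 0)
        printf("label 20 (zero)\n");
    else
        printf("label 30 (positive)\n");
}

int main(void) {
    arithmetic_if(1, 2);              /* -> label 10 */
    arithmetic_if(5, 5);              /* -> label 20 */
    arithmetic_if(9, 2);              /* -> label 30 */
    return 0;
}
```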
History of operating systems
Computer operating systems provide a set of functions needed and used by most application programs on a computer, as well as the links needed to control and synchronize computer hardware. On the first computers, with no operating system, every program needed the full hardware specification to run and perform standard tasks, and its own drivers for peripheral devices like printers and punched paper card readers. The growing complexity of hardware and application programs eventually made operating systems a necessity for everyday use. The earliest computers were mainframes; each user had sole use of the machine for a scheduled period of time and would arrive at the computer with program and data on punched paper cards and magnetic or paper tape. The program would be loaded into the machine, and the machine would be set to work until the program completed or crashed. Programs could be debugged via a control panel using dials, toggle switches and panel lights. Symbolic languages and compilers were developed for programmers to translate symbolic program code into machine code that previously would have been hand-encoded.
Machines came with libraries of support code on punched cards or magnetic tape, which would be linked to the user's program to assist in operations such as input and output. This was the genesis of the modern-day operating system. At Cambridge University in England, the job queue was at one time a washing line from which tapes were hung with different colored clothes-pegs to indicate job priority. As machines became more powerful, the time to run programs diminished, and the time to hand off the equipment to the next user became large by comparison. Accounting for and paying for machine usage moved on from checking the wall clock to automatic logging by the computer. Run queues evolved from a literal queue of people at the door, to a heap of media on a jobs-waiting table, or batches of punch-cards stacked one on top of the other in the reader, until the machine itself was able to select and sequence which magnetic tape drives processed which tapes. Where program developers had originally had access to run their own jobs on the machine, they were supplanted by dedicated machine operators who looked after the machine and were less and less concerned with implementing tasks manually.
When commercially available computer centers were faced with the implications of data lost through tampering or operational errors, equipment vendors were put under pressure to enhance the runtime libraries to prevent misuse of system resources. Automated monitoring was needed not just for CPU usage but for counting pages printed, cards punched, cards read and disk storage used, and for signaling when operator intervention was required by jobs such as changing magnetic tapes and paper forms. Security features were added to operating systems to record audit trails of which programs were accessing which files and to prevent access to a production payroll file by an engineering program, for example. All these features were building up towards the repertoire of a capable operating system. Ultimately, the runtime libraries became an amalgamated program that was started before the first customer job and could read in the customer job, control its execution, record its usage, reassign hardware resources after the job ended, and immediately go on to process the next job.
These resident background programs, capable of managing multistep processes, were called monitors or monitor-programs before the term "operating system" established itself. An underlying program offering basic hardware management, software scheduling and resource monitoring may seem a remote ancestor to the user-oriented operating systems of the personal computing era, but there has been a shift in the meaning of OS. Just as early automobiles lacked speedometers and air conditioners which later became standard, more and more optional software features became standard features in every OS package, although some applications such as database management systems and spreadsheets remain optional and separately priced. This has led to the perception of an OS as a complete user system with an integrated graphical user interface, some applications such as text editors and file managers, and configuration tools. The true descendant of the early operating systems is what is now called the "kernel". In technical and development circles the old restricted sense of an OS persists because of the continued active development of embedded operating systems for all kinds of devices with a data-processing component, from hand-held gadgets up to industrial robots and real-time control systems, which do not run user applications at the front end.
An embedded OS in a device today is not so far removed as one might think from its ancestor of the 1950s. The broader categories of systems and application software are discussed in the computer software article. The first operating system used for real work was GM-NAA I/O, produced in 1956 by General Motors' Research division for its IBM 704. Most other early operating systems for IBM mainframes were also produced by customers. Early operating systems were very diverse, with each vendor or customer producing one or more operating systems specific to their particular mainframe computer; every operating system, even from the same vendor, could have radically different models of commands, operating procedures, and such facilities as debugging aids. Each time the manufacturer brought out a new machine, there would be a new operating system, and most applications would have to be manually adjusted and retested. This state of affairs continued until the 1960s, when IBM, already a leading hardware vendor, stopped work on existing systems and put all its effort into developing the System/360 series of machines, all of which used the same instruction and input/output architecture.
ARM (previously Advanced RISC Machine, originally Acorn RISC Machine) is a family of reduced instruction set computing (RISC) architectures for computer processors, configured for various environments. Arm Holdings develops the architecture and licenses it to other companies, who design their own products that implement one of those architectures, including systems-on-chips and systems-on-modules that incorporate memory, radios, etc. It also designs cores that implement this instruction set and licenses these designs to a number of companies that incorporate those core designs into their own products. Processors that have a RISC architecture typically require fewer transistors than those with a complex instruction set computing (CISC) architecture, which reduces cost, power consumption and heat dissipation. These characteristics are desirable for light, battery-powered devices, including smartphones, tablet computers and other embedded systems. Even for supercomputers, which consume large amounts of electricity, ARM could be a power-efficient solution.
Arm Holdings periodically releases updates to the architecture. Architecture versions ARMv3 to ARMv7 support 32-bit address space and 32-bit arithmetic; the Thumb version supports a variable-length instruction set that provides both 32- and 16-bit instructions for improved code density. Some older cores can also provide hardware execution of Java bytecodes. Released in 2011, the ARMv8-A architecture added support for a 64-bit address space and 64-bit arithmetic with its new 32-bit fixed-length instruction set. With over 100 billion ARM processors produced as of 2017, ARM is the most widely used instruction set architecture and the instruction set architecture produced in the largest quantity. The widely used Cortex cores, older "classic" cores, and specialized SecurCore core variants are available for each of these, to include or exclude optional capabilities. The British computer manufacturer Acorn Computers first developed the Acorn RISC Machine architecture in the 1980s to use in its personal computers; its first ARM-based products were coprocessor modules for the BBC Micro series of computers.
After the successful BBC Micro computer, Acorn Computers considered how to move on from the relatively simple MOS Technology 6502 processor to address business markets like the one soon dominated by the IBM PC, launched in 1981. The Acorn Business Computer plan required that a number of second processors be made to work with the BBC Micro platform, but processors such as the Motorola 68000 and National Semiconductor 32016 were considered unsuitable, and the 6502 was not powerful enough for a graphics-based user interface. According to Sophie Wilson, all the processors tested at that time performed about the same, with about a 4 Mbit/second bandwidth. After testing all available processors and finding them lacking, Acorn decided it needed a new architecture. Inspired by papers from the Berkeley RISC project, Acorn considered designing its own processor. A visit to the Western Design Center in Phoenix, where the 6502 was being updated by what was effectively a single-person company, showed Acorn engineers Steve Furber and Sophie Wilson that they did not need massive resources and state-of-the-art research and development facilities.
Wilson developed the instruction set, writing a simulation of the processor in BBC BASIC that ran on a BBC Micro with a 6502 second processor. This convinced the Acorn engineers that they were on the right track. Wilson approached Acorn's CEO, Hermann Hauser, and requested more resources. Hauser assembled a small team to implement Wilson's model in hardware. The official Acorn RISC Machine project started in October 1983. Acorn chose VLSI Technology as the silicon partner, as it was already a source of ROMs and custom chips for Acorn. Wilson and Furber led the design; they implemented it with an efficiency ethos similar to that of the 6502. A key design goal was achieving low-latency input/output handling like the 6502's; the 6502's memory access architecture had let developers produce fast machines without costly direct memory access hardware. The first samples of ARM silicon worked properly when first received and tested on 26 April 1985. The first ARM application was as a second processor for the BBC Micro, where it helped in developing simulation software to finish development of the support chips and sped up the CAD software used in ARM2 development.
Wilson subsequently rewrote BBC BASIC in ARM assembly language. The in-depth knowledge gained from designing the instruction set enabled the code to be very dense, making ARM BBC BASIC a good test for any ARM emulator. The original aim of a principally ARM-based computer was achieved in 1987 with the release of the Acorn Archimedes. In 1992, Acorn once more won the Queen's Award for Technology for the ARM. The ARM2 featured a 26-bit address space and 27 32-bit registers. Eight bits from the program counter register were available for other purposes; the address bus was extended to 32 bits in the ARM6, but program code still had to lie within the first 64 MB of memory in 26-bit compatibility mode, due to the reserved bits for the status flags. The ARM2 had a transistor count of just 30,000, compared to Motorola's six-year-older 68000 model with around 40,000. Much of this simplicity came from the lack of microcode and, like most CPUs of the day, the absence of a cache.
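The combined register can be pictured concretely: in 26-bit ARM, R15 held the word-aligned program counter in bits 2-25, the processor mode in bits 0-1, and the condition flags plus interrupt-disable bits in bits 26-31, which accounts for the eight bits "available for other purposes". The C sketch below simply unpacks such a value; the field layout is as commonly documented for the ARM2, but the exact positions should be verified against the processor data sheet rather than taken from this example.

```c
#include <stdint.h>
#include <stdio.h>

/* Unpacking the combined PC/PSR register (R15) of a 26-bit ARM such as
 * the ARM2, using the commonly documented layout (verify against the
 * data sheet before relying on it):
 *   bits 31-28 : N, Z, C, V condition flags
 *   bits 27-26 : I (IRQ disable), F (FIQ disable)
 *   bits 25-2  : program counter (word-aligned, 26-bit addresses)
 *   bits  1-0  : processor mode */
static void decode_r15(uint32_t r15) {
    uint32_t pc    = r15 & 0x03FFFFFCu;      /* bits 25..2, already a byte address */
    unsigned flags = (r15 >> 26) & 0x3Fu;    /* N Z C V I F */
    unsigned mode  = r15 & 0x3u;
    printf("pc=0x%07lX flags=0x%02X mode=%u\n",
           (unsigned long)pc, flags, mode);
}

int main(void) {
    /* All four condition flags set, PC = 0x8000, supervisor mode (0b11). */
    decode_r15(0xF0000000u | 0x8000u | 0x3u);
    return 0;
}
```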