Linux is a family of free and open-source operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds. Linux is typically packaged in a Linux distribution. Distributions include the Linux kernel and supporting system software and libraries, many of which are provided by the GNU Project. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses the name GNU/Linux to emphasize the importance of GNU software, causing some controversy. Popular Linux distributions include Debian and Ubuntu; commercial distributions include SUSE Linux Enterprise Server. Desktop Linux distributions include a windowing system such as X11 or Wayland and a desktop environment such as GNOME or KDE Plasma. Distributions intended for servers may omit graphics altogether or include a solution stack such as LAMP. Because Linux is freely redistributable, anyone may create a distribution for any purpose. Linux was originally developed for personal computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system.
Linux is the leading operating system on servers and other big iron systems such as mainframe computers, and the only OS used on TOP500 supercomputers. It is used by around 2.3 percent of desktop computers. The Chromebook, which runs the Linux kernel-based Chrome OS, dominates the US K–12 education market and represents nearly 20 percent of sub-$300 notebook sales in the US. Linux also runs on embedded systems, i.e. devices whose operating system is built into the firmware and is tailored to the system; this includes routers, automation controls, digital video recorders, video game consoles, and smartwatches. Many smartphones and tablet computers run Android and other Linux derivatives; because of the dominance of Android on smartphones, Linux has the largest installed base of all general-purpose operating systems. Linux is one of the most prominent examples of open-source software collaboration: the source code may be used and distributed, commercially or non-commercially, by anyone under the terms of its respective licenses, such as the GNU General Public License.
The Unix operating system was conceived and implemented in 1969, at AT&T's Bell Laboratories in the United States, by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna. First released in 1971, Unix was written in assembly language, as was common practice at the time. In a key pioneering step, it was rewritten in 1973 in the C programming language by Dennis Ritchie; the availability of a high-level language implementation of Unix made porting it to different computer platforms easier. Due to an earlier antitrust case forbidding it from entering the computer business, AT&T was required to license the operating system's source code to anyone who asked; as a result, Unix grew and became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs. The GNU Project, started in 1983 by Richard Stallman, had the goal of creating a "complete Unix-compatible software system" composed entirely of free software; work began in 1984. In 1985, Stallman started the Free Software Foundation, and in 1989 he wrote the GNU General Public License.
By the early 1990s, many of the programs required in an operating system were completed, although low-level elements such as device drivers and the kernel, called GNU Hurd, were stalled and incomplete. Linus Torvalds has stated that if the GNU kernel had been available at the time, he would not have decided to write his own. Although not released until 1992, due to legal complications, development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux. Torvalds has also stated that if 386BSD had been available at the time, he would not have created Linux. MINIX was created by Andrew S. Tanenbaum, a computer science professor, and released in 1987 as a minimal Unix-like operating system targeted at students and others who wanted to learn operating system principles. Although the complete source code of MINIX was available, its licensing terms prevented it from being free software until the licensing changed in April 2000. In 1991, while attending the University of Helsinki, Torvalds became curious about operating systems.
Frustrated by the licensing of MINIX, which at the time limited it to educational use only, he began to work on his own operating system kernel, which became the Linux kernel. Torvalds began the development of the Linux kernel on MINIX, and applications written for MINIX were also used on Linux. As Linux matured, further Linux kernel development took place on Linux systems. GNU applications replaced all MINIX components, because it was advantageous to use the freely available code from the GNU Project with the fledgling operating system. Torvalds initiated a switch from his original license, which prohibited commercial redistribution, to the GNU GPL. Developers worked to integrate GNU components with the Linux kernel, making a functional and free operating system. Linus Torvalds had wanted to call his invention "Freax", a portmanteau of "free", "freak", and "x" (as an allusion to Unix).
The Teletype Corporation, a part of American Telephone and Telegraph Company's Western Electric manufacturing arm since 1930, came into being in 1928 when the Morkrum-Kleinschmidt Company changed its name to that of its trademark equipment. Teletype Corporation, of Skokie, Illinois, was responsible for the research and manufacture of data and record communications equipment, but it is remembered chiefly for the manufacture of electromechanical teleprinters. Because of the nature of its business, as stated in the corporate charter, Teletype Corporation was allowed a unique mode of operation within Western Electric: it was organized as a separate entity and contained all the elements necessary for a separate corporation. Teletype's charter permitted the sale of equipment to customers outside the AT&T Bell System, which explained its need for a separate sales force; the primary customer outside of the Bell System was the United States Government. The Teletype Corporation continued in this manner until January 8, 1982, the date of settlement of United States v. AT&T, a 1974 United States Department of Justice antitrust suit against AT&T.
At that time, Western Electric was absorbed into AT&T as AT&T Technologies, and the Teletype Corporation became AT&T Teletype. The last vestiges of what had been the Teletype Corporation ceased in 1990, bringing to a close the dedicated teleprinter business. One of the three Teletype manufacturing buildings in Skokie remains in use as a parking garage for a shopping center; every other floor of the building has been removed. The other two buildings were demolished. The Teletype Corporation had its roots in the Morkrum Company. In 1902, electrical engineer Frank Pearne approached Joy Morton, head of Morton Salt, seeking a sponsor for his research into the practicalities of developing a printing telegraph system. Morton needed to determine whether this was worthwhile, and so consulted mechanical engineer Charles Krum, vice president of the Western Cold Storage Company, which was run by Morton's brother Mark Morton. Krum was interested in helping Pearne, so space was set up in a laboratory in the attic of Western Cold Storage.
Frank Pearne lost interest in the project after a year and left to become a teacher. Krum was prepared to continue Pearne's work, and in August 1903 a patent was filed for a "typebar page printer". In 1904, Krum filed a patent for a "type wheel printing telegraph machine", which was issued in August 1907. In 1906, the Morkrum Company was formed, the company name combining the Morton and Krum names and reflecting the financial assistance provided by Joy Morton; around this time, Charles Krum's son Howard joined his father in this work. It was Howard who developed and patented the start-stop synchronizing method for code telegraph systems, which made the practical teleprinter possible. In 1908, a working teleprinter called the Morkrum Printing Telegraph was produced and field-tested with the Alton Railroad. In 1910, the Morkrum Company designed and installed the first commercial teletypewriter system on Postal Telegraph Company lines between Boston and New York City, using the "Blue Code Version" of the Morkrum Printing Telegraph.
In 1925, the Morkrum Company and the Kleinschmidt Electric Company merged to form the Morkrum-Kleinschmidt Company. In December 1928, the company changed its name to the less cumbersome "Teletype Corporation". In 1930, the Teletype Corporation was purchased by the American Telephone and Telegraph Company for $30,000,000 in stock and became a subsidiary of the Western Electric Company. While the other principals in the Teletype Corporation retired, Howard Krum stayed on as a consultant. Morkrum Printing Telegraph – This was the first mechanically successful teleprinter, used in 1908 for the Alton Railroad trials. A "Blue Code Version" was used in 1910 as part of the first commercial teleprinter circuit, which ran on Postal Telegraph Company lines between Boston and New York City. In 1914, a "Green Code Version" was installed using Western Union Telegraph Company lines for the Associated Press and was used to distribute news to competing newspapers in New York City. Morkrum Model 11 Tape Printer – The Model 11 Typewheel Tape Printer, at about 45 words per minute, was a bit faster than the Morkrum Printing Telegraph Blue- and Green-Code printers, and was modeled after the European Baudot Telegraph System printer.
The Model 11 was a tape printer which used gummed paper tape that could be pasted onto a telegram blank. This was the first teleprinter to operate from an airplane. Morkrum Model GPE Perforator – The Morkrum Company Model GPE "Green Code" Perforator was designed about 1913 and a US patent was filed in 1914; this equipment continued to be produced for the next 50 years. Morkrum Model 12 Typebar Page Printer – This equipment, known as the Model 12 Page Printer and based on an Underwood typewriter mechanism, was the first commercially viable machine; this printer was produced from 1922 to 1925 under the Morkrum Company name, from 1925 to 1929 under the Morkrum-Kleinschmidt name, and from 1929 to 1943 under Teletype Corp. In 1916, Kleinschmidt filed a patent application for a type-bar page printer. This printer utilized Baudot code but did not utilize the start-stop synchronization technology that Howard Krum had patented; the type-bar printer was intended for use on multiplex circuits, and its printing was controlled from a local segment on a receiving distributor of the sunflower type.
In 1919, Kleinschmidt appeared to be concerned chiefly with development of multiplex transmitters for use with this printer. 10-A Printing Telegraph – The Western Electric Company made a line of printing telegraph equipment prior to acquiring the Teletype Corporation in 1930. The design for this equipment was provided by the Bell Telephone Laboratories.
RAID is a data storage virtualization technology that combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both. This was in contrast to the previous concept of reliable mainframe disk drives referred to as "single large expensive disk". Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance; the different schemes, or data distribution layouts, are named by the word "RAID" followed by a number, for example RAID 0 or RAID 1. Each scheme, or RAID level, provides a different balance among the key goals: reliability, availability and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors, as well as against failures of whole physical drives. The term "RAID" was coined by David Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987.
In their June 1988 paper "A Case for Redundant Arrays of Inexpensive Disks", presented at the SIGMOD conference, they argued that the top-performing mainframe disk drives of the time could be beaten on performance by an array of the inexpensive drives that had been developed for the growing personal computer market. Although failures would rise in proportion to the number of drives, by configuring for redundancy the reliability of an array could far exceed that of any large single drive. Although not yet using that terminology, the technologies of the five levels of RAID named in the June 1988 paper were used in various products prior to the paper's publication, including the following: mirroring was well established in the 1970s, including, for example, in Tandem NonStop Systems. In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was subsequently named RAID 4. Around 1983, DEC began shipping mirrored disk drives, an arrangement later known as RAID 1, as part of its HSC50 subsystem. In 1986, Clark et al. at IBM filed a patent disclosing what was subsequently named RAID 5. Around 1988, Thinking Machines' DataVault used error correction codes in an array of disk drives.
A similar approach was used in the early 1960s on the IBM 353. Industry manufacturers later redefined the RAID acronym to stand for "Redundant Array of Independent Disks". Many RAID levels employ an error protection scheme called "parity", a widely used method in information technology to provide fault tolerance in a given set of data. Most use simple XOR, but RAID 6 uses two separate parities based on addition and multiplication in a particular Galois field or Reed–Solomon error correction. RAID can also provide data security with solid-state drives without the expense of an all-SSD system: for example, a fast SSD can be mirrored with a mechanical drive. For this configuration to provide a significant speed advantage, an appropriate controller is needed that uses the fast SSD for all read operations. Adaptec calls this "hybrid RAID". A number of standard schemes have evolved; these are called levels. There were originally five RAID levels, but many variations have evolved, notably several nested levels and many non-standard levels.
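The XOR parity scheme mentioned above can be sketched in a few lines of Python. This is an illustrative toy, not a real RAID implementation: the three "drives" are just byte strings of equal length, and the values are arbitrary.

```python
# Toy illustration of XOR parity as used by RAID 4/5 (not a real RAID
# implementation; the "drives" here are just equal-length byte strings).
d0 = bytes([0x12, 0x34, 0x56])
d1 = bytes([0xAB, 0xCD, 0xEF])
d2 = bytes([0x00, 0xFF, 0x10])

# The parity block is the bytewise XOR of all data blocks.
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# If any one drive fails (say d1), XOR-ing the survivors with the parity
# block reconstructs the lost data, because x ^ x = 0.
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(d0, d2, parity))
assert rebuilt == d1
```

RAID 6's second parity replaces plain XOR with arithmetic in a Galois field, which is what allows it to survive two simultaneous drive failures.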
RAID levels and their associated data formats are standardized by the Storage Networking Industry Association in the Common RAID Disk Drive Format standard. RAID 0 consists of striping, with no mirroring or parity. Compared to a spanned volume, the capacity of a RAID 0 volume is the same, but because striping distributes the contents of each file among all disks in the set, the failure of any disk causes all files, and thus the entire RAID 0 volume, to be lost. A broken spanned volume at least preserves the files on the unfailing disks. The benefit of RAID 0 is that the throughput of read and write operations to any file is multiplied by the number of disks because, unlike with spanned volumes, reads and writes are done concurrently; the cost is complete vulnerability to drive failures. RAID 1 consists of data mirroring, without striping. Data is written identically to two drives. Thus, any read request can be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that accesses the data first, improving performance.
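How RAID 0 striping distributes data can be illustrated with a short Python sketch. The chunk size and drive count used here are arbitrary illustrative choices, not part of any standard.

```python
# Toy sketch of RAID 0 striping: consecutive fixed-size chunks are dealt
# round-robin across the member drives. Chunk size and drive count are
# arbitrary illustrative choices.
def stripe(data: bytes, n_drives: int, chunk: int = 4) -> list[bytes]:
    drives = [bytearray() for _ in range(n_drives)]
    for i in range(0, len(data), chunk):
        drives[(i // chunk) % n_drives] += data[i:i + chunk]
    return [bytes(d) for d in drives]

striped = stripe(b"ABCDEFGHIJKLMNOP", 2)
print(striped)  # drive 0 holds chunks 0 and 2, drive 1 holds chunks 1 and 3
```

Because every file's chunks are spread across all drives, losing any single "drive" in this sketch destroys part of every file, which is exactly the vulnerability described above.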
Sustained read throughput, if the controller or software is optimized for it, approaches the sum of the throughputs of every drive in the set, just as for RAID 0; actual read throughput of most RAID 1 implementations, however, is slower than that of the fastest drive. Write throughput is always slower because every drive must be updated, and the slowest drive limits the write performance. The array continues to operate as long as at least one drive is functioning. RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation is synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code parity is stored on at least one parity drive; this level is of historical significance only. RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential byte is on a different drive. Parity is stored on a dedicated parity drive. Although implementations exist, RAID 3 is not commonly used in practice.
A system monitor is a hardware or software component used to monitor system resources and performance in a computer system. A hardware monitor is a common component of modern motherboards, which can either come as a separate chip, interfaced through I²C or SMBus, or as part of a Super I/O solution interfaced through Low Pin Count; these devices make it possible to monitor the temperature in the chassis, the voltage supplied to the motherboard by the power supply unit, and the speed of the computer fans that are connected directly to one of the fan headers on the motherboard. Many of these hardware monitors also have fan-controlling capabilities. System monitoring software such as SpeedFan on Windows, lm_sensors on GNU/Linux, envstat on NetBSD, and sysctl hw.sensors on OpenBSD and DragonFly can interface with these chips to relay this environmental sensor information to the user. Software monitors are more common, sometimes appearing as part of a widget engine; these monitoring systems are used to keep track of system resources, such as CPU usage and frequency or the amount of free RAM.
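On Linux, the sensor data that tools like lm_sensors relay is exposed by the kernel's hwmon subsystem as plain sysfs files, with temperatures reported in millidegrees Celsius. A minimal Python sketch (the paths are assumptions about a typical Linux host; the result may be empty on a machine or VM without exposed sensors):

```python
import glob
from pathlib import Path

# Read every hwmon temperature sensor exposed under sysfs. Each temp*_input
# file holds an integer in millidegrees Celsius; convert to degrees.
def read_temps():
    temps = {}
    for path in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        try:
            temps[path] = int(Path(path).read_text().strip()) / 1000.0
        except (OSError, ValueError):
            pass  # sensor missing or momentarily unreadable; skip it
    return temps

print(read_temps())  # may print {} on hosts with no hwmon sensors
```

Fan speeds (`fan*_input`) and voltages (`in*_input`) live in the same directories and can be read the same way.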
They are also used to display items such as free space on one or more hard drives, the temperature of the CPU and other important components, and networking information including the system IP address and current rates of upload and download. Other possible displays may include the date and time, system uptime, computer name, hard drive S.M.A.R.T. data, fan speeds, and the voltages being provided by the power supply. Less common are hardware-based systems monitoring similar information. Customarily these occupy one or more drive bays on the front of the computer case and either interface directly with the system hardware or connect to a software data-collection system via USB. With either approach to gathering data, the monitoring system displays information on a small LCD panel or on a series of small analog or LED numeric displays. Some hardware-based system monitors also allow direct control of fan speeds, allowing the user to customize the cooling in the system. A few high-end models of hardware system monitor are designed to interface with only a specific model of motherboard.
These systems directly utilize the sensors built into the system, providing more detailed and accurate information than less-expensive monitoring systems customarily provide.
USB is an industry standard that establishes specifications for cables and protocols for connection and power supply between personal computers and their peripheral devices. Released in 1996, the USB standard is maintained by the USB Implementers Forum. There have been three generations of USB specifications: USB 1.x, USB 2.0 and USB 3.x. USB was designed to standardize the connection of peripherals such as keyboards, pointing devices, digital still and video cameras, portable media players, disk drives and network adapters to personal computers, both to communicate and to supply electric power. It has replaced interfaces such as serial ports and parallel ports, and has become commonplace on a wide range of devices. USB connectors have also been replacing other types for battery chargers of portable devices. The Universal Serial Bus was developed to simplify and improve the interface between personal computers and peripheral devices, compared with previously existing standard or ad-hoc proprietary interfaces.
From the computer user's perspective, the USB interface improved ease of use in several ways. The USB interface is self-configuring, so the user need not adjust settings on the device and interface for speed or data format, or configure interrupts, input/output addresses, or direct memory access channels. USB connectors are standardized at the host, so any peripheral can use any available receptacle. USB takes full advantage of the additional processing power that can be economically put into peripheral devices so that they can manage themselves; the USB interface is "hot pluggable", meaning devices can be exchanged without rebooting the host computer. Small devices can be powered directly from the USB interface, displacing extra power supply cables. Because use of the USB logos is only permitted after compliance testing, the user can have confidence that a USB device will work as expected without extensive interaction with settings and configuration. Installation of a device relying on the USB standard requires minimal operator action.
When a device is plugged into a port on a running personal computer system, it is either automatically configured using existing device drivers, or the system prompts the user to locate a driver, which it then installs and configures automatically. For hardware manufacturers and software developers, the USB standard eliminates the requirement to develop proprietary interfaces to new peripherals; the wide range of transfer speeds available from a USB interface suits devices ranging from keyboards and mice up to streaming video interfaces. A USB interface can be designed to provide the best available latency for time-critical functions, or can be set up to do background transfers of bulk data with little impact on system resources. The USB interface is generalized, with no signal lines dedicated to only one function of one device. USB cables are limited in length, as the standard was meant to connect to peripherals on the same table-top, not between rooms or between buildings; however, a USB port can be connected to a gateway that accesses distant devices.
USB has a "master-slave" protocol for addressing peripheral devices; some extension to this limitation is possible through USB On-The-Go. A host cannot "broadcast" signals to all peripherals at once; each must be addressed individually. Some high-speed peripheral devices require sustained speeds not available in the USB standard. While converters exist between certain "legacy" interfaces and USB, they may not provide a full implementation of the legacy hardware. For a product developer, use of USB requires implementation of a complex protocol and implies an "intelligent" controller in the peripheral device. Developers of USB devices intended for public sale must obtain a USB ID, which requires a fee paid to the USB Implementers Forum. Developers of products that use the USB specification must sign an agreement with the Implementers Forum, and use of the USB logos on a product requires annual fees and membership in the organization. A group of seven companies began the development of USB in 1994: Compaq, DEC, IBM, Intel, Microsoft, NEC and Nortel.
The goal was to make it fundamentally easier to connect external devices to PCs by replacing the multitude of connectors at the back of PCs, addressing the usability issues of existing interfaces, and simplifying the software configuration of all devices connected to USB, as well as permitting greater data rates for external devices. Ajay Bhatt and his team worked on the standard at Intel. The original USB 1.0 specification, introduced in January 1996, defined data transfer rates of 1.5 Mbit/s Low Speed and 12 Mbit/s Full Speed. Microsoft Windows 95 OSR 2.1 provided OEM support for the devices. The first widely used version of USB was 1.1, released in September 1998. The 12 Mbit/s data rate was intended for higher-speed devices such as disk drives, and the lower 1.5 Mbit/s rate for low-data-rate devices such as keyboards and mice.
Unix is a family of multitasking, multiuser computer operating systems that derive from the original AT&T Unix, whose development started in the 1970s at the Bell Labs research center by Ken Thompson, Dennis Ritchie, and others. Initially intended for use inside the Bell System, Unix was licensed by AT&T to outside parties in the late 1970s, leading to a variety of both academic and commercial Unix variants from vendors including the University of California, Microsoft, IBM, and Sun Microsystems. In the early 1990s, AT&T sold its rights in Unix to Novell, which sold its Unix business to the Santa Cruz Operation in 1995; the UNIX trademark passed to The Open Group, a neutral industry consortium, which allows the use of the mark for certified operating systems that comply with the Single UNIX Specification. As of 2014, the Unix version with the largest installed base is Apple's macOS. Unix systems are characterized by a modular design, sometimes called the "Unix philosophy"; this concept entails that the operating system provides a set of simple tools that each perform a limited, well-defined function, with a unified filesystem as the main means of communication and a shell scripting and command language to combine the tools to perform complex workflows.
Unix distinguishes itself from its predecessors as the first portable operating system: the entire operating system is written in the C programming language, allowing Unix to reach numerous platforms. Unix was meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmers; the system grew larger as the operating system started spreading in academic circles and as users added their own tools to the system and shared them with colleagues. At first, Unix was not designed to be multi-tasking; it gained portability, multi-tasking and multi-user capabilities in a time-sharing configuration over time. Unix systems are characterized by various concepts, such as the use of plain text for storing data; these concepts are collectively known as the "Unix philosophy". Brian Kernighan and Rob Pike summarize this in The Unix Programming Environment as "the idea that the power of a system comes more from the relationships among programs than from the programs themselves".
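That idea of power coming from the relationships among programs is easiest to see in a pipeline. The following Python sketch wires three standard utilities (tr, sort, uniq) together with pipes to count word frequencies, mirroring the shell command `tr ' ' '\n' | sort | uniq -c`; the sample text is arbitrary and standard POSIX utilities are assumed to be installed.

```python
import subprocess

# The classic Unix approach: small single-purpose tools composed via pipes.
# Each process does one job; the pipeline does the real work.
text = "the quick fox the lazy fox"

tr = subprocess.Popen(["tr", " ", "\n"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
srt = subprocess.Popen(["sort"], stdin=tr.stdout, stdout=subprocess.PIPE)
uniq = subprocess.Popen(["uniq", "-c"], stdin=srt.stdout, stdout=subprocess.PIPE)

tr.stdin.write(text.encode())
tr.stdin.close()   # signal end-of-input so the pipeline can drain
tr.stdout.close()  # let the downstream processes own these pipe ends
srt.stdout.close()

out = uniq.communicate()[0].decode()
print(out)  # one count-and-word pair per line
```

None of the three programs knows anything about the others; the byte stream flowing between them is the only interface, which is exactly the "relationships among programs" Kernighan and Pike describe.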
In an era when a standard computer consisted of a hard disk for storage and a data terminal for input and output, the Unix file model worked quite well, as I/O was linear. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues and semaphores, and network sockets were added to support communication with other hosts. As graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. By the early 1980s, users began seeing Unix as a potential universal operating system, suitable for computers of all sizes; the Unix environment and the client–server program model were essential elements in the development of the Internet and the reshaping of computing as centered in networks rather than in individual computers. Both Unix and the C programming language were developed by AT&T and distributed to government and academic institutions, which led to both being ported to a wider variety of machine families than any other operating system.
Under Unix, the operating system consists of many libraries and utilities along with the master control program, the kernel. The kernel provides services to start and stop programs, handles the file system and other common "low-level" tasks that most programs share, and schedules access to avoid conflicts when programs try to access the same resource or device simultaneously. To mediate such access, the kernel has special rights, reflected in the division between user space and kernel space, although in microkernel implementations, such as MINIX or Redox, functions such as network protocols may run in user space. The origins of Unix date back to the mid-1960s, when the Massachusetts Institute of Technology, Bell Labs, and General Electric were developing Multics, a time-sharing operating system for the GE-645 mainframe computer. Multics featured several innovations but presented severe problems. Frustrated by the size and complexity of Multics, but not by its goals, individual researchers at Bell Labs started withdrawing from the project.
The last to leave were Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna, who decided to reimplement their experiences in a new project of smaller scale. This new operating system was initially without organizational backing and without a name; it was a single-tasking system. In 1970, the group coined the name Unics, for Uniplexed Information and Computing Service, as a pun on Multics, which stood for Multiplexed Information and Computer Services. Brian Kernighan takes credit for the idea, but adds that "no one can remember" the origin of the final spelling Unix; Dennis Ritchie, Doug McIlroy and Peter G. Neumann also credit Kernighan. The operating system was originally written in assembly language, but in 1973, Version 4 Unix was rewritten in C. Version 4 Unix, however, still had much PDP-11-dependent code and was not suitable for porting; the first port to another platform was made five years later.
Version 7 Unix
Seventh Edition Unix, also called Version 7 Unix, Version 7 or just V7, was an important early release of the Unix operating system. V7, released in 1979, was the last Bell Laboratories release to see widespread distribution before the commercialization of Unix by AT&T Corporation in the early 1980s. V7 was developed for Digital Equipment Corporation's PDP-11 minicomputers and was later ported to other platforms. Unix versions from Bell Labs were designated by the edition of the user's manual with which they were accompanied. Released in 1979, the Seventh Edition was preceded by the Sixth Edition, which was the first version licensed to commercial users. Development of the Research Unix line continued with the Eighth Edition, which incorporated development from 4.1BSD, through the Tenth Edition, after which the Bell Labs researchers concentrated on developing Plan 9. V7 was the first readily portable version of Unix; as this was the era of minicomputers, with their many architectural variations, and the beginning of the market for 16-bit microprocessors, many ports were completed within the first few years of its release.
The first Sun workstations ran a V7 port by UniSoft. The VAX port of V7, called UNIX/32V, was the direct ancestor of the popular 4BSD family of Unix systems. The group at the University of Wollongong that had ported V6 to the Interdata 7/32 ported V7 to that machine as well, and Interdata sold the port as Edition VII. DEC distributed its own PDP-11 version of V7, called V7M. V7M, developed by DEC's original Unix Engineering Group (UEG), contained many enhancements to the kernel for the PDP-11 line of computers, including improved hardware error recovery and many additional device drivers. UEG evolved into the group that developed Ultrix. Due to its combination of power and elegant simplicity, many old-time Unix users remember V7 as the pinnacle of Unix development and have dubbed it "the last true Unix", an improvement over all preceding and following Unices. At the time of its release, however, its extended feature set came at the expense of a decrease in performance compared to V6, which came to be corrected by the user community. The number of system calls in Version 7 was only around 50, while Unix and Unix-like systems continued to add many more: 4.4BSD provided about 110 system calls, and SVR4 had around 120.
The exact number of system calls varies depending on the operating system version, and more recent systems have seen enormous growth in the number of supported system calls: Linux 3.2.0 has 380 system calls, and FreeBSD 8.0 has over 450. In 2002, Caldera International released V7 as FOSS under a permissive BSD-like software license. Bootable images for V7 can still be downloaded today and can be run on modern hosts using PDP-11 emulators such as SIMH. An x86 port has been developed by Nordier & Associates. Paul Allen maintains several publicly accessible historic computer systems, including a PDP-11/70 running Unix Version 7; anyone can request a login from Living Computers: Museum + Labs and try running Version 7 Unix on the original equipment. Many new features were introduced in Version 7, including the programming tools lex and make; the Portable C Compiler was also provided, along with Ritchie's earlier C compiler. These first appeared in the Research Unix lineage in Version 7, although early versions of some of them had been picked up by PWB/UNIX.
Other additions included: new commands, among them the Bourne shell, at, calendar, f77, tar and touch; networking support in the form of uucp and Datakit; new system calls access, alarm, exece, lseek and utime; new library calls, including the new stdio routines, getenv and popen/system; environment variables; and a maximum file size of just over one gigabyte, through a system of indirect addressing. A feature that did not survive long was a second way to do inter-process communication: multiplexed files. A process could create a special type of file with the mpx system call. Mpx files were considered experimental, were not enabled in the default kernel, and disappeared from later versions, which offered sockets or CB UNIX's IPC facilities instead.