PowerPC is a reduced instruction set computing (RISC) instruction set architecture created by the 1991 Apple–IBM–Motorola alliance, known as AIM. As an evolving instruction set, PowerPC has since 2006 been named Power ISA, while the old name lives on as a trademark for some implementations of Power Architecture-based processors. PowerPC was the cornerstone of AIM's PReP and Common Hardware Reference Platform initiatives in the 1990s. Intended for personal computers, the architecture is well known for being used in Apple's Power Macintosh, PowerBook, iMac, iBook, and Xserve lines from 1994 until 2006, when Apple migrated to Intel's x86. It has since become a niche in personal computers but remains popular for embedded and high-performance processors; its use in seventh-generation video game consoles and in embedded applications gave it a wide range of uses. In addition, PowerPC CPUs are still used in third-party AmigaOS 4 personal computers. PowerPC is based on IBM's earlier POWER instruction set architecture and retains a high level of compatibility with it.
The history of RISC began with IBM's 801 research project, on which John Cocke was the lead developer and where he developed the concepts of RISC in 1975–78. 801-based microprocessors were used in a number of IBM embedded products, eventually becoming the 16-register IBM ROMP processor used in the IBM RT PC. The RT PC was a rapid design implementing the RISC architecture. Between 1982 and 1984, IBM started a project to build the fastest microprocessor on the market; the result was the POWER instruction set architecture, introduced with the RISC System/6000 in early 1990. The original POWER microprocessor, one of the first superscalar RISC implementations, was a high-performance, multi-chip design. IBM soon realized that a single-chip microprocessor was needed in order to scale its RS/6000 line from lower-end to high-end machines. Work began on a one-chip POWER microprocessor, designated the RSC (RISC Single Chip). In early 1991, IBM realized its design could become a high-volume microprocessor used across the industry. Apple, meanwhile, had realized the limitations and risks of its dependency upon a single CPU vendor at a time when Motorola was falling behind on delivering the 68040 CPU.
Furthermore, Apple had conducted its own research, including an experimental quad-core CPU design called Aquarius, which convinced the company's technology leadership that the future of computing lay in the RISC methodology. IBM approached Apple with the goal of collaborating on the development of a family of single-chip microprocessors based on the POWER architecture. Soon after, Apple, as one of Motorola's largest customers of desktop-class microprocessors, asked Motorola to join the discussions: the two companies had a long relationship, Motorola had far more experience than IBM with manufacturing high-volume microprocessors, and Motorola could serve as a second source for the microprocessors. This three-way collaboration among Apple, IBM, and Motorola became known as the AIM alliance. In 1991, the PowerPC was just one facet of a larger alliance among these three companies. At the time, most of the personal computer industry was shipping systems based on the Intel 80386 and 80486 chips, which have a complex instruction set computer (CISC) architecture, and development of the Pentium processor was well underway.
The PowerPC chip was one of several joint ventures involving the three alliance members in their efforts to counter the growing Microsoft–Intel dominance of personal computing. For Motorola, POWER looked like an unbelievable deal: it allowed the company to sell a tested and powerful RISC CPU for little design cash of its own, it maintained ties with an important customer, and it seemed to offer the possibility of adding IBM as a customer too, since IBM might buy smaller versions from Motorola instead of making its own. At this point Motorola had its own RISC design in the form of the 88000, which was doing poorly in the market. Motorola was doing well with its 68000 family, and the majority of its funding was focused on that line; the 88000 effort was somewhat starved for resources. The 88000 was in production, however, and had achieved a number of embedded design wins in telecom applications. If the new one-chip POWER version could be made bus-compatible at a hardware level with the 88000, that would allow both Apple and Motorola to bring machines to market far faster, since they would not have to redesign their board architectures.
The result of these various requirements was the PowerPC specification. The differences between the earlier POWER instruction set and that of PowerPC are outlined in Appendix E of the manual for PowerPC ISA v.2.02. Since 1991, IBM had a long-standing desire for a unifying operating system that would host all existing operating systems as personalities upon one microkernel. From 1991 to 1995, the company designed and aggressively evangelized what would become Workplace OS, targeting PowerPC. When the first PowerPC products reached the market, they were met with enthusiasm. In addition to Apple, both IBM and the Motorola Computer Group offered systems built around the processors. Microsoft released Windows NT 3.51 for the architecture, used in Motorola's
Computing is any activity that uses computers. It includes developing hardware and software, and using computers to manage and process information and to entertain. Computing is a critically important, integral component of modern industrial technology. Major computing disciplines include computer engineering, software engineering, computer science, information systems, and information technology. The ACM Computing Curricula 2005 defined "computing" as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; the list is endless, the possibilities are vast." It defines five sub-disciplines of the computing field: computer science, computer engineering, information systems, information technology, and software engineering. However, Computing Curricula 2005 also recognizes that the meaning of "computing" depends on the context: computing has other, more specific meanings based on the context in which the term is used.
For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult; because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The term "computing" has sometimes been narrowly defined, as in a 1989 ACM report on Computing as a Discipline: the discipline of computing is the systematic study of algorithmic processes that describe and transform information: their theory, design, efficiency, and application; the fundamental question underlying all computing is "What can be automated?" The term "computing" is also synonymous with counting and calculating. In earlier times it was used in reference to the action performed by mechanical computing machines and, before that, to human computers. The history of computing is longer than the history of computing hardware and modern computing technology, and includes the history of methods intended for pen and paper or for chalk and slate, with or without the aid of tables.
Computing is intimately tied to the representation of numbers. But long before abstractions like number arose, there were mathematical concepts serving the purposes of civilization; these concepts include one-to-one correspondence, comparison to a standard, and the 3-4-5 right triangle. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC; its original style of usage was by lines drawn in sand with pebbles. Abaci of a more modern design are still used as calculation tools today; this was the first known calculation aid, preceding Greek methods by 2,000 years. The first recorded idea of using digital electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program.
The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form enables a programmer to study and develop a sequence of steps known as an algorithm. Because the instructions can be carried out on different types of computers, a single set of source instructions converts to machine instructions according to the central processing unit type. The execution process carries out the instructions in a computer program. Instructions, when executed, trigger sequences of simple actions on the executing machine, and those actions produce effects according to the semantics of the instructions. Computer software, or just "software", is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer for some purpose. In other words, software is a set of programs, procedures, and documentation concerned with the operation of a data processing system.
Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the older term hardware; in contrast to hardware, software is intangible. Software is sometimes used in a narrower sense, meaning application software only. Application software, also known as an "application" or an "app", is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software, and media players. Many application programs deal principally with documents. Apps may be published separately; some users need never install any. Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities but
x86 is a family of instruction set architectures based on the Intel 8086 microprocessor and its 8088 variant. The 8086 was introduced in 1978 as a 16-bit extension of Intel's 8-bit 8080 microprocessor, with memory segmentation as a solution for addressing more memory than can be covered by a plain 16-bit address. The term "x86" came into being because the names of several successors to Intel's 8086 processor end in "86", including the 80186, 80286, 80386, and 80486 processors. Many additions and extensions have been added to the x86 instruction set over the years while maintaining full backward compatibility. The architecture has been implemented in processors from Intel, Cyrix, AMD, VIA, and many other companies; of those, only Intel, AMD, and VIA hold x86 architectural licenses and produce modern 64-bit designs. The term is not synonymous with IBM PC compatibility, as that implies a multitude of other computer hardware. As of 2018, the majority of personal computers and laptops sold are based on the x86 architecture, while other categories, especially high-volume mobile categories such as smartphones and tablets, are dominated by ARM.
In the 1980s and early 1990s, when the 8088 and 80286 were still in common use, the term x86 represented any 8086-compatible CPU. Today, however, x86 implies binary compatibility with the 32-bit instruction set of the 80386; this is because that instruction set has become something of a lowest common denominator for many modern operating systems, and also because the term became common after the introduction of the 80386 in 1985. A few years after the introduction of the 8086 and 8088, Intel added some complexity to its naming scheme and terminology as the "iAPX" prefix of the ambitious but ill-fated Intel iAPX 432 processor was tried on the more successful 8086 family of chips, applied as a kind of system-level prefix. An 8086 system, including coprocessors such as the 8087 and 8089 as well as simpler Intel-specific system chips, was thereby described as an iAPX 86 system. There were also the terms iRMX, iSBC, and iSBX, all together under the heading Microsystem 80. However, this naming scheme was quite temporary.
Although the 8086 was developed for embedded systems and small multi-user or single-user computers, largely as a response to the successful 8080-compatible Zilog Z80, the x86 line soon grew in features and processing power. Today, x86 is ubiquitous in both stationary and portable personal computers, and is used in midrange computers, workstations, and most new supercomputer clusters of the TOP500 list. A large amount of software, including a long list of x86 operating systems, uses x86-based hardware. Modern x86 is relatively uncommon in embedded systems, however, and small low-power applications as well as low-cost microprocessor markets, such as home appliances and toys, lack any significant x86 presence. Simple 8-bit and 16-bit architectures are common here, although the x86-compatible VIA C7, VIA Nano, AMD's Geode, Athlon Neo, and Intel Atom are examples of 32- and 64-bit designs used in some low-power and low-cost segments. There have been several attempts, including by Intel itself, to end the market dominance of the "inelegant" x86 architecture, descended directly from the first simple 8-bit microprocessors.
Examples of this are the iAPX 432, the Intel i960, the Intel i860, and the Intel/Hewlett-Packard Itanium architecture. However, the continuous refinement of x86 microarchitectures and semiconductor manufacturing has made it hard to replace x86 in many segments. AMD's 64-bit extension of x86 and the scalability of x86 chips such as the eight-core Intel Xeon and 12-core AMD Opteron underline x86 as an example of how continuous refinement of established industry standards can resist competition from new architectures. The table below lists processor models and model series implementing variations of the x86 instruction set, in chronological order; each line item is characterized by improved or commercially successful processor microarchitecture designs. At various times, companies such as IBM, NEC, AMD, TI, STM, Fujitsu, OKI, Cyrix, Intersil, C&T, NexGen, UMC, and DM&P started to design or manufacture x86 processors intended for personal computers as well as embedded systems; such x86 implementations are seldom simple copies, often employing different internal microarchitectures as well as different solutions at the electronic and physical levels.
Quite early compatible microprocessors were 16-bit, while 32-bit designs were developed much later. For the personal computer market, real quantities started to appear around 1990 with i386- and i486-compatible processors, often named similarly to Intel's original chips. Other companies that designed or manufactured x86 or x87 processors include ITT Corporation, National Semiconductor, ULSI System Technology, and Weitek. Following the pipelined i486, Intel introduced the Pentium brand name for its new set of superscalar x86 designs.
SPARC is a reduced instruction set computing (RISC) instruction set architecture developed by Sun Microsystems. Its design was influenced by the experimental Berkeley RISC system developed in the early 1980s. First released in 1987, SPARC was one of the most successful early commercial RISC systems, and its success led to the introduction of similar RISC designs from a number of vendors through the 1980s and 90s. The first implementation of the original 32-bit architecture was used in Sun's Sun-4 workstation and server systems, replacing their earlier Sun-3 systems based on the Motorola 68000 series of processors. SPARC V8 added a number of improvements that were part of the SuperSPARC series of processors released in 1992. SPARC V9, released in 1993, introduced a 64-bit architecture and was first released in Sun's UltraSPARC processors in 1995. SPARC processors were used in symmetric multiprocessing and non-uniform memory access (NUMA) servers produced by Sun and Fujitsu, among others. The design was turned over to the SPARC International trade group in 1989, and since then its architecture has been developed by its members.
SPARC International is responsible for licensing and promoting the SPARC architecture, managing SPARC trademarks, and providing conformance testing. SPARC International was intended to grow the SPARC architecture to create a larger ecosystem; thanks to it, SPARC is open, non-proprietary, and royalty-free. As of September 2017, the latest commercial high-end SPARC processors are Fujitsu's SPARC64 XII and SPARC64 XIfx. On Friday, September 1, 2017, after a round of layoffs that started in Oracle Labs in November 2016, Oracle terminated SPARC design after the completion of the M8. Much of the processor core development group in Austin was dismissed, as were the teams in Santa Clara and Burlington, Massachusetts. SPARC development continues with Fujitsu returning to the role of leading provider of SPARC servers, with a new CPU due in the 2020 time frame. The SPARC architecture was influenced by earlier RISC designs, including RISC I and II from the University of California, Berkeley, and the IBM 801.
These original RISC designs were minimalist, including as few features or opcodes as possible and aiming to execute instructions at a rate of one instruction per clock cycle. This made them similar to the MIPS architecture in many ways, including the lack of instructions such as multiply or divide. Another feature of SPARC influenced by this early RISC movement is the branch delay slot. The SPARC processor contains as many as 160 general-purpose registers; according to the "Oracle SPARC Architecture 2015" specification, an "implementation may contain from 72 to 640 general-purpose 64-bit" registers. At any point, only 32 of them are visible to software: 8 are a set of global registers, and the other 24 come from a stack of registers. These 24 registers form what is called a register window, and at function call/return this window is moved up and down the register stack. Each window has 8 local registers and shares 8 registers with each of the adjacent windows. The shared registers are used for passing function parameters and returning values, while the local registers are used for retaining local values across function calls.
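The overlap between adjacent windows can be sketched with a toy model. This is an illustration only, not a cycle-accurate simulator: the window count, helper names, and register layout below are assumptions chosen for the sketch, while the 8/8/8 in/local/out split and the sharing of a caller's outs with the callee's ins reflect the text above.

```python
# Toy model of SPARC register windows: window w's "in" slots physically
# alias window (w+1)'s "out" slots in one circular register file.
NWINDOWS = 8                      # implementation-defined; real designs vary
FILE = [0] * (16 * NWINDOWS)      # overlapping in/local/out slots
cwp = NWINDOWS - 1                # current window pointer

def slot(w, kind, n):
    """Physical index of register n of the given kind in window w."""
    base = (w * 16) % len(FILE)
    offset = {"out": 0, "local": 8, "in": 16}[kind]
    return (base + offset + n) % len(FILE)

def write(kind, n, value):
    FILE[slot(cwp, kind, n)] = value

def read(kind, n):
    return FILE[slot(cwp, kind, n)]

# A caller places an argument in its out register %o0 ...
write("out", 0, 42)
cwp -= 1          # 'save': slide the window down one frame for the callee
# ... and the callee sees the same value in its in register %i0, unmoved.
assert read("in", 0) == 42
```

Because the caller's outs and the callee's ins are the same physical slots, parameter passing costs no register copies; only when the window stack over- or underflows must registers spill to memory.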
The "Scalable" in SPARC comes from the fact that the SPARC specification allows implementations to scale from embedded processors up through large server processors, all sharing the same core instruction set. One of the architectural parameters that can scale is the number of implemented register windows. Other architectures that include similar register file features include the Intel i960, IA-64, and AMD 29000. The architecture has gone through several revisions: it gained hardware multiply and divide functionality in Version 8, and 64-bit addressing and data were added to the Version 9 SPARC specification published in 1994. In SPARC Version 8, the floating-point register file has 16 double-precision registers. Each of them can be used as two single-precision registers, providing a total of 32 single-precision registers. An odd-even number pair of double-precision registers can be used as a quad-precision register, allowing 8 quad-precision registers. SPARC Version 9 added 16 more double-precision registers, but these additional registers cannot be accessed as single-precision registers.
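The register-aliasing arithmetic above can be checked directly; a minimal sketch of the counts involved:

```python
# SPARC V8 floating-point file: 32 single-precision registers, where
# adjacent single pairs alias doubles and even-odd double pairs alias quads.
singles = 32
doubles = singles // 2   # f0:f1 -> d0, f2:f3 -> d1, ... : 16 doubles
quads = doubles // 2     # d0:d1 -> q0, d2:d3 -> q1, ... : 8 quads
assert (doubles, quads) == (16, 8)

# V9 adds 16 more doubles that have no single-precision aliases,
# so the double count grows to 32 while the single count stays at 32.
v9_doubles = doubles + 16
v9_singles = singles
assert (v9_doubles, v9_singles) == (32, 32)
```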
As of 2004, no SPARC CPU implements quad-precision operations in hardware. Tagged add and subtract instructions perform adds and subtracts on values while checking that the bottom two bits of both operands are 0, reporting overflow if they are not. This can be useful in implementing the run time for ML and similar languages that might use a tagged integer format. The endianness of the 32-bit SPARC V8 architecture is purely big-endian; the 64-bit SPARC V9 architecture uses big-endian instructions, but can access data in either big-endian or little-endian byte order.
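The tag check behind those instructions can be illustrated in a few lines. This is a behavioral sketch of the idea, not of the exact TADDcc condition-code semantics; the function name and encoding are assumptions for the example:

```python
TAG_MASK = 0b11  # the two low bits reserved for the type tag

def tagged_add(a, b):
    """Sketch of a SPARC-style tagged add: integers carry a 00 tag in
    their two low bits, so any nonzero tag flags a non-integer operand."""
    overflow = (a & TAG_MASK) != 0 or (b & TAG_MASK) != 0
    return (a + b) & 0xFFFFFFFF, overflow

# 5 and 3 as tagged integers: the value is shifted left two bits, tag 00.
x, y = 5 << 2, 3 << 2
total, ovf = tagged_add(x, y)
assert not ovf and total >> 2 == 8   # 5 + 3 = 8, still correctly tagged

# An operand with a nonzero tag (e.g. a pointer) trips the check.
_, ovf = tagged_add(x, y | 0b01)
assert ovf
```

The payoff for a language runtime is that the common integer-plus-integer case runs as one instruction, with the overflow condition diverting only the rare non-integer case to a slow path.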
Burroughs large systems
In the 1970s, Burroughs Corporation was organized into three divisions with different product line architectures for high-end, mid-range, and entry-level business computer systems. Each division's product line grew from a different concept for how to optimize a computer's instruction set for particular programming languages. The Burroughs Large Systems Group designed large mainframes using stack machine instruction sets with dense syllables and 48-bit data words. The first such design was the B5000 in 1961; it was optimized for running ALGOL 60 well, using simple compilers. It evolved into the B5500. Subsequent major redesigns include the B6500/B6700 line and its successors, and the separate B8500 line. "Burroughs Large Systems" referred to all of these product lines together, in contrast to the COBOL-optimized Medium Systems or the flexible-architecture Small Systems. Founded in the 1880s, Burroughs was the oldest continuously operating entity in computing, but by the late 1950s its computing equipment was still limited to electromechanical accounting machines such as the Sensimatic.
While in 1956 it branded as the B205 a machine produced by a company it had bought, its first internally developed machine, the B5000, was designed in 1961, and Burroughs sought to address its late entry in the market with a strategy of a different design based on the most advanced computing ideas available at the time. While the B5000 architecture is dead, it inspired the B6500, and computers using that architecture are still in production as the Unisys ClearPath Libra servers, which run an evolved but compatible version of the MCP operating system first introduced with the B6700. The third and largest line, the B8500, had no commercial success. In addition to a proprietary CMOS processor design, Unisys uses Intel Xeon processors and runs MCP, Microsoft Windows, and Linux operating systems on its Libra servers. The first member of the first series, the B5000, was designed beginning in 1961 by a team under the leadership of Robert Barton. It was a unique machine, well ahead of its time, and has been listed by the influential computing scientist John Mashey as one of the architectures that he admires the most.
"I always thought it was one of the most innovative examples of combined hardware/software design I've seen, far ahead of its time." The B5000 was succeeded by the B5500 and the B5700. While there was no successor to the B5700, the B5000 line influenced the design of the B6500, and Burroughs ported the Master Control Program to that machine. All code is automatically reentrant: to have any code in any language spread across processors, programmers need do nothing more than use the two simple primitives shown. This is the canonical but by no means the only benefit of the major distinguishing features of this architecture:
- Partially data-driven tagged and descriptor-based design
- Hardware designed to support software requirements
- Hardware designed to support high-level programming languages
- No assembly language or assembler; however, ESPOL had statements for each of the syllables in the architecture
- Few programmer-accessible registers
- Simplified instruction set
- Stack architecture
- Support for a high-level operating system
- Support for asymmetric multiprocessing
- Support for other languages such as COBOL
- Powerful string manipulation
- An attempt at a secure architecture prohibiting unauthorized access of data or disruptions to operations
- Early error detection supporting development and testing of software
- First commercial implementation of virtual memory
- Successors still exist in the Unisys ClearPath/MCP machines
- Influence on many of today's computing techniques
The B5000 was revolutionary at the time in that the architecture and instruction set were designed with the needs of software taken into consideration.
This was a large departure from the computer system design of the time, where a processor and its instruction set would be designed and then handed over to the software people, and it still is. That is, most other instruction sets, such as the IBM System/360 instruction set of that era, and later instruction set designs such as the x86, PPC, and ARM instruction set architectures, are traditional instruction-set-based architectures rather than holistic designs like the original Burroughs systems. The B5000, B5500, and B5700 in Word Mode have two different addressing modes, depending on whether they are executing a main program or a subroutine. For a main program, the T field of an Operand Call or Descriptor Call syllable is relative to the Program Reference Table. For subroutines, the type of addressing depends on the high three bits of T and on the Mark Stack FlipFlop, as shown in B5x00 Relative Addressing. The B5000 was designed to support high-level languages, at a time when such languages were just coming to prominence with FORTRAN and COBOL.
FORTRAN and COBOL were considered by some to be weaker languages when it comes to modern software techniques, so a newer, untried language was adopted: ALGOL 60. The ALGOL dialect chosen for the B5000 was Elliott ALGOL, first designed and implemented
The IBM System/360 is a family of mainframe computer systems announced by IBM on April 7, 1964, and delivered between 1965 and 1978. It was the first family of computers designed to cover the complete range of applications, from small to large, both commercial and scientific. The design made a clear distinction between architecture and implementation, allowing IBM to release a suite of compatible designs at different prices. All but the incompatible Model 44 and the most expensive systems used microcode to implement the instruction set, which featured 8-bit byte addressing and binary and floating-point calculations. The launch of the System/360 family introduced IBM's Solid Logic Technology, a new technology that was the start of more powerful but smaller computers. The slowest System/360 model announced in 1964, the Model 30, could perform up to 34,500 instructions per second, with memory from 8 to 64 KB. High-performance models came later; the 1967 IBM System/360 Model 91 could execute up to 16.6 million instructions per second.
The larger 360 models could have up to 8 MB of main memory, though main memory that large was unusual; a large installation might have as little as 256 KB of main storage, but 512 KB, 768 KB, or 1024 KB was more common. Up to 8 megabytes of slower Large Capacity Storage was also available. The IBM 360 was successful in the market, allowing customers to purchase a smaller system with the knowledge that they would always be able to migrate upward if their needs grew, without reprogramming application software or replacing peripheral devices. Many consider the design one of the most successful computers in history, influencing computer design for years to come. The chief architect of System/360 was Gene Amdahl, and the project was managed by Fred Brooks, responsible to Chairman Thomas J. Watson Jr. The commercial release was piloted by another of Watson's lieutenants, John R. Opel, who managed the launch of the System/360 mainframe family in 1964. Application-level compatibility for System/360 software is maintained to the present day with the System z mainframe servers.
Contrasting with then-normal industry practice, IBM created an entire new series of computers, from small to large, low- to high-performance, all using the same instruction set. This feat allowed customers to use a cheaper model and then upgrade to larger systems as their needs increased, without the time and expense of rewriting software. Before the introduction of System/360, business and scientific applications used different computers with different instruction sets and operating systems, and different-sized computers also had their own instruction sets. IBM was the first manufacturer to exploit microcode technology to implement a compatible range of computers of differing performance, although the largest models had hard-wired logic instead. This flexibility lowered barriers to entry: with most other vendors, customers had to choose between machines they could outgrow and machines that were too powerful and thus too costly, which meant that many companies simply did not buy computers. IBM announced a series of six computers and forty common peripherals.
IBM delivered fourteen models, including rare one-off models for NASA. The least expensive model was the Model 20, with as little as 4096 bytes of core memory, eight 16-bit registers instead of the sixteen 32-bit registers of other System/360 models, and an instruction set that was a subset of the one used by the rest of the range. The initial announcement in 1964 included Models 30, 40, 50, 60, 62, and 70. The first three were low- to middle-range systems aimed at the IBM 1400 series market, and all three first shipped in mid-1965. The last three, intended to replace the 7000 series machines, never shipped and were replaced with the 65 and 75, first delivered in November 1965 and January 1966, respectively. Additions to the low end included Models 20, 22, and 25; the Model 20 had several sub-models. The Model 22 was a recycled Model 30 with minor limitations: a smaller maximum memory configuration and slower I/O channels, which limited it to slower and lower-capacity disk and tape devices than on the 30. The Model 44 was a specialized model, designed for scientific and real-time computing and process control, featuring some additional instructions but with all storage-to-storage instructions and five other complex instructions eliminated.
A succession of high-end machines included the Models 67, 85, 91, 95, and 195. The 85 design was intermediate between the System/360 line and the follow-on System/370, and was the basis for the 370/165. There was also a System/370 version of the 195. The implementations differed, using different native data path widths and the presence or absence of microcode, yet were compatible: except where documented, the models were architecturally compatible. The 91, for example, was designed for scientific computing and provided out-of-order instruction execution, but lacked the decimal instruction set used in commercial applications. New features could be added without violating architectural definitions: the 65 had a dual-processor version with extensions for inter-CPU signalling. Models 44, 75, 91, 95, and 195 were implemented with hardwired logic, rather than microcoded as
Multics is an influential early time-sharing operating system based on the concept of a single-level memory. All modern operating systems were influenced by Multics, either directly or indirectly, notably through Unix, which was created by some of the people who had worked on Multics. Initial planning and development for Multics started in Cambridge, Massachusetts, as a cooperative project led by MIT along with General Electric and Bell Labs. It was developed on the GE 645 computer, which was specially designed for it. Multics was conceived as a commercial product for General Electric and became one for Honeywell, albeit not successfully. Due to its many novel and valuable ideas, Multics had a significant impact on computer science despite its faults. Multics had numerous features intended to ensure high availability, so that it would support a computing utility similar to the telephone and electricity utilities. Modular hardware structure and software architecture were used to achieve this: the system could grow in size by adding more of the appropriate resource, be it computing power, main memory, or disk storage.
Separate access control lists on every file provided flexible information sharing, but complete privacy when needed. Multics had a number of standard mechanisms to allow engineers to analyze the performance of the system, as well as a number of adaptive performance optimization mechanisms. Multics implemented a single-level store for data access, discarding the clear distinction between files and process memory. The memory of a process consisted of segments that were mapped into its address space; to read or write to them, the process used normal central processing unit instructions, and the operating system took care of making sure that all the modifications were saved to disk. In POSIX terminology, it was as if every file were mmap()ed. All memory in the system was part of some segment. One disadvantage of this was that the size of segments was limited to 256 kilowords, just over 1 MiB; this was due to the particular hardware architecture of the machines on which Multics ran, which had a 36-bit word size and index registers of half that size (18 bits).
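The "just over 1 MiB" figure follows directly from the word and index-register sizes above; a quick check of the arithmetic:

```python
# An 18-bit index register addresses 2**18 = 256 Ki words within a segment,
# and each Multics word is 36 bits wide.
words = 2 ** 18                 # 262,144 words per segment
bits = words * 36               # total segment size in bits
mib = bits / 8 / (1024 ** 2)    # convert bits -> bytes -> MiB
assert mib == 1.125             # 1.125 MiB: "just over 1 MiB"
```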
Extra code had to be used to work on files larger than this, called multisegment files. In the days when one megabyte of memory was prohibitively expensive, and before large databases and huge bitmap graphics, this limit was rarely encountered. Another major new idea of Multics was dynamic linking, in which a running process could request that other segments be added to its address space, segments which could contain code that it could then execute. This allowed applications to automatically use the latest version of any external routine they called, since those routines were kept in other segments, which were dynamically linked only when a process first tried to begin execution in them. Since different processes could use different search rules, different users could end up using different versions of external routines automatically. With the appropriate settings in the Multics security facilities, the code in the other segment could gain access to data structures maintained in a different process. Thus, to interact with an application running in part as a daemon, a user's process performed a normal procedure-call instruction to a code segment to which it had dynamically linked.
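Ordinary dynamic linking in modern systems is a much weaker relative of this mechanism. As a loose analogue, the sketch below resolves a routine by name only when it is first called, so the version found depends on the search rules in effect at call time; the helper name is an assumption, and the standard importlib module stands in for the Multics linker:

```python
import importlib

def dynamic_call(module_name, func_name, *args):
    """Resolve a routine by name at the moment of the call -- a loose
    analogue of Multics linking a segment into the address space on
    first use, rather than binding everything at build time."""
    module = importlib.import_module(module_name)  # bound only when needed
    return getattr(module, func_name)(*args)

# The caller names the routine; whichever version the search rules find
# at call time is the one that runs.
assert dynamic_call("math", "gcd", 12, 18) == 6
```

As in Multics, the practical consequence is that callers pick up the current version of a routine without relinking, and different search paths can resolve the same name to different implementations.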
The code in that segment could then modify data maintained and used in the daemon. When the action necessary to commence the request was completed, a simple procedure-return instruction returned control of the user's process to the user's code. The single-level store and dynamic linking are still not available to their full power in other widely used operating systems, despite the rapid and enormous advance in the computer field since the 1960s, though they are becoming more accepted and available in more limited forms, for example dynamic linking. Multics also supported aggressive on-line reconfiguration: central processing units, memory banks, disk drives, and so on could be added and removed while the system continued operating. At the MIT system, where most early software development was done, it was common practice to split the multiprocessor system into two separate systems during off-hours by incrementally removing enough components to form a second working system, leaving the rest still running for the original logged-in users.
System software development and testing could be done on the second system, and then the components of the second system were added back to the main user system without ever having to shut it down. Multics supported multiple CPUs; it was one of the earliest multiprocessor systems. Multics was the first major operating system to be designed as a secure system from the outset. Despite this, early versions of Multics were broken into repeatedly; this led to further work that made the system much more secure and prefigured modern security engineering techniques. Break-ins became rare once the second-generation hardware base was adopted. Multics was also the first operating system to provide a hierarchical file system, and file names could be of almost