VAX is a discontinued instruction set architecture (ISA) developed by Digital Equipment Corporation (DEC) in the mid-1970s. The VAX-11/780, introduced on October 25, 1977, was the first of a range of popular and influential computers implementing that architecture. A 32-bit system with a complex instruction set computer (CISC) architecture based on DEC's earlier PDP-11, VAX was designed to extend or replace DEC's various Programmed Data Processor ISAs; the architecture's primary features were virtual addressing and its orthogonal instruction set. VAX was succeeded by the DEC Alpha instruction set architecture. VAX has been perceived as the quintessential CISC ISA, with its large number of assembly-language-programmer-friendly addressing modes and machine instructions, highly orthogonal architecture, and instructions for complex operations such as queue insertion or deletion and polynomial evaluation. The name "VAX" originated as an acronym for virtual address extension, both because the VAX was seen as a 32-bit extension of the older 16-bit PDP-11 and because it was an early adopter of virtual memory to manage this larger address space.
Early versions of the VAX processor implemented a "compatibility mode" that emulated many of the PDP-11's instructions, and were in fact called VAX-11 to highlight this compatibility and the fact that the VAX-11 was an outgrowth of the PDP-11 family. Later versions offloaded the compatibility mode and some of the less-used CISC instructions to emulation in the operating system software. The VAX instruction set was designed to be orthogonal. When it was introduced, many programs were written in assembly language, so having a "programmer-friendly" instruction set was important. In time, as more programs were written in higher-level languages, the instruction set became less visible, and the only ones much concerned about it were compiler writers. One unusual aspect of the VAX instruction set is the presence of register masks at the start of each subprogram; these are arbitrary bit patterns that specify, when control is passed to the subprogram, which registers are to be preserved. Since register masks are a form of data embedded within the executable code, they can make linear parsing of the machine code difficult, which in turn complicates some optimization techniques.
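As a rough illustration of how such an entry mask might be decoded, the following C sketch assumes the commonly described layout in which bits 0 through 11 select registers R0 through R11 for saving and the two top bits enable overflow traps; the function name and the sample mask value are purely illustrative.

```c
#include <stdio.h>
#include <stdint.h>

/* Decode a VAX-style procedure entry (register save) mask.
 * Assumed layout for illustration: bits 0-11 select R0-R11 to be
 * preserved; bit 14 enables integer-overflow traps (IV) and bit 15
 * enables decimal-overflow traps (DV). */
static void decode_entry_mask(uint16_t mask)
{
    printf("mask 0x%04x saves:", mask);
    for (int r = 0; r <= 11; r++)
        if (mask & (1u << r))
            printf(" R%d", r);
    if (mask & (1u << 14)) printf(" [IV enabled]");
    if (mask & (1u << 15)) printf(" [DV enabled]");
    putchar('\n');
}

int main(void)
{
    decode_entry_mask(0x0804);  /* hypothetical mask saving R2 and R11 */
    return 0;
}
```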
The "native" VAX operating system is Digital's VAX/VMS, later renamed OpenVMS; the VAX architecture and the VMS operating system were "engineered concurrently" to take maximum advantage of each other, as was the initial implementation of the VAXcluster facility. Other VAX operating systems have included various releases of BSD Unix up to 4.3BSD, Ultrix-32, VAXELN, and Xinu. More recently, NetBSD and OpenBSD have supported various VAX models, and some work has been done on porting Linux to the VAX architecture; OpenBSD discontinued support for the architecture in September 2016. The first VAX model sold was the VAX-11/780, introduced on October 25, 1977 at the Digital Equipment Corporation's Annual Meeting of Shareholders. Bill Strecker, C. Gordon Bell's doctoral student at Carnegie Mellon University, was responsible for the architecture. Many different models with different prices, performance levels, and capacities were subsequently created. VAX superminicomputers were popular in the early 1980s.
For a while the VAX-11/780 was used as a standard in CPU benchmarks. It was described as a one-MIPS machine because its performance was equivalent to an IBM System/360 that ran at one MIPS, and the System/360 implementations had been de facto performance standards. The actual number of instructions executed in one second was about 500,000, which led to complaints of marketing exaggeration. The result was the definition of a "VAX MIPS," the speed of a VAX-11/780. Within the Digital community the term VUP (VAX Unit of Performance) was the more common term, because MIPS figures do not compare well across different architectures; the related term "cluster VUPs" was informally used to describe the aggregate performance of a VAXcluster. The VAX-11/780 included a subordinate stand-alone LSI-11 computer that performed microcode load and diagnostic functions for the parent computer; this was dropped from subsequent VAX models. Enterprising VAX-11/780 users could therefore run three different Digital Equipment Corporation operating systems: VMS on the VAX processor, and either RSX-11M or RT-11 on the LSI-11.
The VAX went through many different implementations. The original VAX-11/780 was implemented in TTL and filled a four-by-five-foot cabinet with a single CPU. CPU implementations that consisted of multiple ECL gate array or macrocell array chips included the VAX 8600 and 8800 superminis and the VAX 9000 mainframe-class machines. CPU implementations that consisted of multiple MOSFET custom chips included the 8100 and 8200 class machines. The low-end VAX-11/730 and VAX-11/725 machines were built using bit-slice components. The MicroVAX I represented a major transition within the VAX family. At the time of its design, it was not yet possible to implement the full VAX architecture as a single VLSI chip. Instead, the MicroVAX I was the first VAX implementation to move some of the more complex VAX instructions (such as the packed decimal instructions) into emulation software.
Ethernet is a family of computer networking technologies used in local area networks (LANs), metropolitan area networks and wide area networks. It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3, and has since retained a good deal of backward compatibility while being refined to support higher bit rates and longer link distances. Over time, Ethernet has replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET. The original 10BASE5 Ethernet uses coaxial cable as a shared medium, while the newer Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 megabits per second to the latest 400 gigabits per second. The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames; each frame contains source and destination addresses, as well as error-checking data so that damaged frames can be detected and discarded.
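A minimal C sketch of that frame layout follows; the buffer contents and the source address are made up for illustration, and the 32-bit frame check sequence that would trail the payload is only noted in a comment.

```c
#include <stdio.h>
#include <stdint.h>

/* Sketch of an Ethernet II frame header: 48-bit destination and source
 * MAC addresses followed by a 16-bit EtherType, big-endian on the wire.
 * The payload comes next, and a 32-bit frame check sequence (CRC-32)
 * trails it so damaged frames can be detected and discarded. */
int main(void)
{
    uint8_t frame[14] = {
        0xff, 0xff, 0xff, 0xff, 0xff, 0xff,  /* destination: broadcast */
        0x02, 0x00, 0x00, 0x00, 0x00, 0x01,  /* source (made-up address) */
        0x08, 0x00                           /* EtherType 0x0800 = IPv4 */
    };
    uint16_t ethertype = (uint16_t)((frame[12] << 8) | frame[13]);
    printf("EtherType: 0x%04x\n", ethertype);
    return 0;
}
```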
As per the OSI model, Ethernet provides services up to and including the data link layer. Features such as the 48-bit MAC address and the Ethernet frame format have influenced other networking protocols, including the Wi-Fi wireless networking technology. Ethernet is used in both homes and industry; the Internet Protocol is commonly carried over Ethernet, and so it is considered one of the key technologies that make up the Internet. Ethernet was developed at Xerox PARC between 1973 and 1974, inspired by ALOHAnet. The idea was first documented in a memo that Robert Metcalfe wrote on May 22, 1973, in which he named it after the luminiferous aether once postulated to exist as an "omnipresent, completely-passive medium for the propagation of electromagnetic waves." In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker, and Butler Lampson as inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper; that same year, Ron Crane, Bob Garner, and Roy Ogus facilitated the upgrade from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was released to the market in 1980.
Metcalfe left Xerox in June 1979 to form 3Com. He convinced Digital Equipment Corporation, Intel, and Xerox to work together to promote Ethernet as a standard; as part of that process Xerox agreed to relinquish its 'Ethernet' trademark. The first standard was published in September 1980 as "The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications". This so-called DIX standard (for Digital, Intel, Xerox) specified 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit EtherType field. Version 2 was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983. Ethernet initially competed with Token Ring and other proprietary protocols, but was able to adapt to market realities and shift to inexpensive thin coaxial cable and then ubiquitous twisted pair wiring. By the end of the 1980s, Ethernet was the dominant network technology. In the process, 3Com became a major company.
3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as Multibus-based Intel and Sun Microsystems computers. This was followed by DEC's Unibus-to-Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and, by 1985, 3Com had sold 100,000. Parallel-port-based Ethernet adapters were also produced, with drivers for DOS and Windows. By the early 1990s, Ethernet had become so prevalent that it was a must-have feature for modern computers, and Ethernet ports began to appear on some PCs and most workstations. This process was sped up with the introduction of 10BASE-T and its small modular connector, at which point Ethernet ports appeared even on low-end motherboards. Since then, Ethernet technology has evolved to meet new bandwidth and market requirements.
In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet it is used in industrial applications, and it is replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year. In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks. The "DIX group", with Gary Robinson, Phil Arst, and Bob Printis, submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring and Token Bus were also considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal. Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products.
With such business implications in mind, David Liddle an
The Internet protocol suite is the conceptual model and set of communications protocols used in the Internet and similar computer networks. It is commonly known as TCP/IP because the foundational protocols in the suite are the Transmission Control Protocol (TCP) and the Internet Protocol (IP); it is also known as the Department of Defense model because the development of the networking method was funded by the United States Department of Defense through DARPA. The Internet protocol suite provides end-to-end data communication, specifying how data should be packetized, addressed, transmitted, routed and received. This functionality is organized into four abstraction layers, which classify all related protocols according to the scope of networking involved. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment; the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications. The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force (IETF).
The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems. The Internet protocol suite resulted from research and development conducted by the Defense Advanced Research Projects Agency (DARPA) in the late 1960s. After initiating the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf, who had helped develop the existing ARPANET Network Control Program protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET. By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation in which the differences between local network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the ARPANET, this function was delegated to the hosts.
Cerf credits Hubert Zimmermann and Louis Pouzin, designer of the CYCLADES network, with important influences on this design. The protocol was implemented as the Transmission Control Program, first published in 1974. Initially, the TCP managed both datagram transmissions and routing, but as the protocol grew, other researchers recommended a division of functionality into protocol layers. Advocates included Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments (RFCs), the technical and strategic document series that has both documented and catalyzed Internet development. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers; a monolithic design would lead to scalability issues. The Transmission Control Program was accordingly split into two distinct protocols, the Transmission Control Protocol and the Internet Protocol.
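The layering argument is easiest to see in code. The C sketch below shows the resulting encapsulation discipline in miniature: each layer prepends its own header and treats everything above it as opaque payload. The struct fields are simplified placeholders, not the real TCP or IP wire formats.

```c
#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Simplified stand-ins for protocol headers; real TCP/IP headers have
 * many more fields and a fixed wire layout. */
struct tcp_hdr { uint16_t src_port, dst_port; uint32_t seq; };
struct ip_hdr  { uint8_t  version, ttl; uint32_t src_addr, dst_addr; };

int main(void)
{
    const char data[] = "application bytes";
    uint8_t packet[128];
    size_t off = 0;

    struct ip_hdr  ip  = { 4, 64, 0x0A000001u, 0x0A000002u };
    struct tcp_hdr tcp = { 49152, 80, 1 };

    /* IP wraps the TCP segment; TCP wraps the application data. */
    memcpy(packet + off, &ip,  sizeof ip);   off += sizeof ip;
    memcpy(packet + off, &tcp, sizeof tcp);  off += sizeof tcp;
    memcpy(packet + off, data, sizeof data); off += sizeof data;

    printf("packet: %zu bytes (IP %zu + TCP %zu + payload %zu)\n",
           off, sizeof ip, sizeof tcp, sizeof data);
    return 0;
}
```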
The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This design is known as the end-to-end principle. Using this design, it became possible to connect any network to the ARPANET, irrespective of the local characteristics, thereby solving Kahn's initial internetworking problem. One popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, can run over "two tin cans and a string." Years later, as a joke, the IP over Avian Carriers formal protocol specification was created and successfully tested. A computer called a router is provided with an interface to each network; it forwards network packets back and forth between them. Originally a router was called a gateway, but the term was changed to avoid confusion with other types of gateways. From 1973 to 1974, Cerf's networking research group at Stanford worked out details of the idea, resulting in the first TCP specification.
A significant technical influence was the early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which existed around that time. DARPA contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, a split into TCP v3 and IP v3, and TCP/IP v4; the last protocol is still in use today. In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London. In November 1977, a three-network TCP/IP test was conducted between sites in the US, the UK, and Norway. Several other TCP/IP prototypes were developed at multiple research centers between 1978 and 1983. In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In the same year, Peter T. Kirstein's research group at University College London adopted the protocol. The migration of the ARPANET to TCP/IP was completed on flag day, January 1, 1983, when the new protocols were permanently activated.
In 1985, the Internet Advisory Board held a three-day TCP/IP workshop for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use.
A minicomputer, or colloquially mini, is a class of smaller computers that was developed in the mid-1960s and sold for much less than mainframe and mid-size computers from IBM and its direct competitors. In a 1970 survey, The New York Times suggested a consensus definition of a minicomputer as a machine costing less than US$25,000, with an input-output device such as a teleprinter and at least four thousand words of memory, capable of running programs in a higher-level language such as Fortran or BASIC. The class formed a distinct group with its own software architectures and operating systems. Minis were designed for control, human interaction, and communication switching, as distinct from calculation and record keeping. Many were sold indirectly to original equipment manufacturers (OEMs) for final end-use applications. During the two-decade lifetime of the minicomputer class, almost 100 companies formed and only a half dozen remained. When single-chip CPU microprocessors appeared, beginning with the Intel 4004 in 1971, the term "minicomputer" came to mean a machine that lies in the middle range of the computing spectrum, between the smallest mainframe computers and the microcomputers.
The term "minicomputer" is little used today. The term "minicomputer" developed in the 1960s to describe the smaller computers that became possible with the use of transistors and core memory technologies, minimal instructions sets and less expensive peripherals such as the ubiquitous Teletype Model 33 ASR, they took up one or a few 19-inch rack cabinets, compared with the large mainframes that could fill a room. The definition of minicomputer is vague with the consequence that there are a number of candidates for the first minicomputer. An early and successful minicomputer was Digital Equipment Corporation's 12-bit PDP-8, built using discrete transistors and cost from US$16,000 upwards when launched in 1964. Versions of the PDP-8 took advantage of small-scale integrated circuits; the important precursors of the PDP-8 include the PDP-5, LINC, the TX-0, the TX-2, the PDP-1. DEC gave rise to a number of minicomputer companies along Massachusetts Route 128, including Data General, Wang Laboratories, Apollo Computer, Prime Computer.
Minicomputers were also known as midrange computers. They grew to have relatively high processing power and capacity, and were used in manufacturing process control, telephone switching and to control laboratory equipment. In the 1970s, they were the hardware used to launch the computer-aided design (CAD) industry and other similar industries where a smaller dedicated system was needed. The 7400 series of TTL integrated circuits started appearing in minicomputers in the late 1960s. The 74181 arithmetic logic unit (ALU) was commonly used in the CPU data paths; each 74181 had a bus width of four bits, hence the popularity of bit-slice architecture. Some scientific computers, such as the Nicolet 1080, would use the 7400 series in groups of five ICs for their uncommon twenty-bit architecture. The 7400 series offered data-selectors, three-state buffers, etc. in dual in-line packages with one-tenth inch spacing, making major system components and architecture evident to the naked eye. Starting in the 1980s, many minicomputers used VLSI circuits.
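To make the bit-slice idea concrete, here is a small C sketch that chains five 4-bit adder "slices" by their carries to form the kind of twenty-bit datapath mentioned above. It models only addition, one of the many functions a real 74181 provides, and all names and operand values are illustrative.

```c
#include <stdio.h>
#include <stdint.h>

/* One 4-bit "slice": add two nibbles plus carry-in, produce carry-out.
 * A real 74181 selects among many logic and arithmetic functions; only
 * addition is modeled here. */
static unsigned slice_add(unsigned a, unsigned b, unsigned cin, unsigned *cout)
{
    unsigned sum = (a & 0xF) + (b & 0xF) + cin;
    *cout = sum >> 4;
    return sum & 0xF;
}

int main(void)
{
    uint32_t a = 0xABCDE, b = 0x12345;   /* 20-bit operands */
    uint32_t result = 0;
    unsigned carry = 0;

    for (int s = 0; s < 5; s++) {        /* five slices, ripple carry */
        unsigned nib = slice_add((a >> (4 * s)) & 0xF,
                                 (b >> (4 * s)) & 0xF, carry, &carry);
        result |= (uint32_t)nib << (4 * s);
    }
    printf("0x%05X + 0x%05X = 0x%05X (carry out %u)\n", a, b, result, carry);
    return 0;
}
```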
At the launch of the MITS Altair 8800 in 1975, Radio-Electronics magazine referred to the system as a "minicomputer", although the term microcomputer soon became usual for personal computers based on single-chip microprocessors. At the time, microcomputers were 8-bit, single-user, relatively simple machines running simple program-launcher operating systems like CP/M or MS-DOS, while minis were much more powerful systems that ran full multi-user, multitasking operating systems such as VMS and Unix. The classical mini was a 16-bit computer, while the emerging higher-performance superminis were 32-bit. The decline of the minis was driven by the lower cost of microprocessor-based hardware, the emergence of inexpensive and easily deployable local area network systems, the emergence of the 68020, 80286 and 80386 microprocessors, and the desire of end-users to be less reliant on inflexible minicomputer manufacturers and IT departments or "data centers". The result was that minicomputers and computer terminals were replaced by networked workstations, file servers and PCs in some installations, beginning in the latter half of the 1980s.
During the 1990s, the change from minicomputers to inexpensive PC networks was cemented by the development of several versions of Unix and Unix-like systems that ran on the Intel x86 microprocessor architecture, including Solaris, FreeBSD, NetBSD and OpenBSD. The Microsoft Windows series of operating systems, beginning with Windows NT, now included server versions that supported preemptive multitasking and other features required for servers. As microprocessors have become more powerful, the CPUs built up from multiple components – once the distinguishing feature differentiating mainframes and midrange systems from microcomputers – have become obsolete even in the largest mainframe computers. Digital Equipment Corporation was once the leading minicomputer manufacturer, at one time the second-largest computer company after IBM, but as the minicomputer declined in the face of generic Unix servers and Intel-based PCs, not only DEC but every other minicomputer company, including Data General, Computervision and Wang Laboratories, many of them based in New England, collapsed or merged.
The PDP-11 is a series of 16-bit minicomputers sold by Digital Equipment Corporation from 1970 into the 1990s, one of a succession of products in the PDP series. In total, around 600,000 PDP-11s of all models were sold, making it one of DEC's most successful product lines; the PDP-11 is considered by some experts to be the most popular minicomputer ever. The PDP-11 included a number of innovative features in its instruction set, and its additional general-purpose registers made it much easier to program than earlier models in the series. Additionally, the innovative Unibus system allowed external devices to be interfaced to the system using direct memory access, opening the system to a wide variety of peripherals. The PDP-11 replaced the PDP-8 in many real-time applications, although both product lines lived in parallel for more than 10 years. The ease of programming of the PDP-11 made it popular for general-purpose computing uses as well. The design of the PDP-11 inspired the design of late-1970s microprocessors including the Intel x86 and the Motorola 68000.
Design features of PDP-11 operating systems, as well as other operating systems from Digital Equipment, influenced the design of other operating systems such as CP/M and hence also MS-DOS. The first officially named version of Unix ran on the PDP-11/20 in 1970, and it is often stated that the C programming language took advantage of several low-level PDP-11–dependent programming features, albeit not by design. An effort to expand the PDP-11 from 16- to 32-bit addressing led to the VAX-11 design, which took part of its name from the PDP-11. In 1963, DEC introduced what is considered to be the first commercial minicomputer in the form of the PDP-5. This was a 12-bit design adapted from the 1962 LINC machine, which was intended to be used in a lab setting. DEC simplified the LINC system and instruction set, aiming the PDP-5 at smaller settings that did not need the power of their larger 18-bit PDP-4. The PDP-5 was a success, selling about 50,000 examples. During this period, the computer market was moving from computer word lengths based on units of 6 bits to units of 8 bits, following the introduction of the 7-bit ASCII standard.
In 1967–68, DEC engineers designed a 16-bit machine, the PDP-X, but management cancelled the project. Several of the engineers from the PDP-X effort left to form Data General, and the next year they introduced the 16-bit Data General Nova. The Nova was a major success, selling tens of thousands of units and launching what would become one of DEC's major competitors through the 1970s and 80s. A subsequent effort, code-named "Desk Calculator", looked at a variety of options before choosing what became the 16-bit PDP-11. DEC sold over 170,000 PDP-11s in the 1970s. Initially manufactured from small-scale transistor–transistor logic, a single-board large-scale integration version of the processor was developed in 1975, and a two-or-three-chip processor, the J-11, was developed in 1979. The last models of the PDP-11 line were the PDP-11/94 and -11/93, introduced in 1990. The PDP-11 processor architecture has a mostly orthogonal instruction set. For example, instead of instructions such as load and store, the PDP-11 has a move instruction for which either operand can be memory or register.
There are no dedicated input/output instructions; more complex instructions such as add can likewise have memory, a register, or an I/O device as source or destination. Most operands can apply any of eight addressing modes to eight registers. The addressing modes provide register, absolute, relative and indexed addressing, and can specify autoincrementation and autodecrementation of a register by one or two. Use of relative addressing lets a machine-language program be position-independent. Early models of the PDP-11 had no dedicated bus for input/output, but only a system bus called the Unibus, as input and output devices were mapped to memory addresses. An input/output device determined the memory addresses to which it would respond, and specified its own interrupt vector and interrupt priority. This flexible framework provided by the processor architecture made it unusually easy to invent new bus devices, including devices to control hardware that had not been contemplated when the processor was designed. DEC published the basic Unibus specifications and offered prototyping bus interface circuit boards, encouraging customers to develop their own Unibus-compatible hardware.
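The memory-mapped I/O model described above is easy to sketch in C. In the illustrative example below, a small array stands in for a hypothetical device's status and data registers so the program can run anywhere; on a real PDP-11 these would be fixed addresses in the Unibus I/O page, and the ready bit chosen here is made up.

```c
#include <stdio.h>
#include <stdint.h>

/* A two-word array stands in for a device's registers:
 * [0] = status register, [1] = data register. On real hardware these
 * would be volatile pointers to fixed bus addresses, not an array. */
static volatile uint16_t device_regs[2];
#define READY_BIT 0200u   /* illustrative "ready" bit (octal), bit 7 */

static void device_putc(char c)
{
    while ((device_regs[0] & READY_BIT) == 0)
        ;                             /* poll until the device is ready */
    device_regs[1] = (uint16_t)c;     /* a plain store starts the transfer */
}

int main(void)
{
    device_regs[0] = READY_BIT;       /* pretend the device is idle */
    device_putc('A');
    printf("data register now holds 0x%02x\n", (unsigned)device_regs[1]);
    return 0;
}
```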
The Unibus also made the PDP-11 suitable for custom peripherals. One of the predecessors of Alcatel-Lucent, the Bell Telephone Manufacturing Company, developed the BTMC DPS-1500 packet-switching network and used PDP-11s in the regional and national network management system, with the Unibus directly connected to the DPS-1500 hardware. Higher-performance members of the PDP-11 family, starting with the PDP-11/45 Unibus and 11/83 Q-bus systems, departed from the single-bus approach. Instead, memory was interfaced by dedicated circuitry and space in the CPU cabinet, while the Unibus continued to be used for I/O only. In the PDP-11/70, this was taken a step further, with the addition of a dedicated interface between disks and tapes and memory, via the Massbus. Although input/output devices continued to be mapped into memory addresses, some additional programming was necessary to set up the added bus interfaces. The PDP-11 supports hardware interrupts at four priority levels. Interrupts are serviced by software service routines, which could specify whether they themselves could be interrupted (achieving interrupt nesting).
Token Ring is a communications protocol for local area networks. It uses a special three-byte frame called a "token" that travels around a logical "ring" of workstations or servers. This token passing is a channel access method providing fair access for all stations, eliminating the collisions of contention-based access methods. Introduced by IBM in 1984, it was standardized as IEEE 802.5 and was successful in corporate environments, but was eventually eclipsed by the later versions of Ethernet. A wide range of different local area network technologies were developed in the early 1970s, of which one, the Cambridge Ring, had demonstrated the potential of a token-passing ring topology, and many teams worldwide began working on their own implementations. At the IBM Zurich Research Laboratory, Werner Bux and Hans Müller in particular worked on the design and development of IBM's Token Ring technology, while early work at MIT led to the Proteon 10 Mbit/s ProNet-10 Token Ring network in 1981 – the same year that workstation vendor Apollo Computer introduced their proprietary 12 Mbit/s Apollo Token Ring network running over 75-ohm RG-6U coaxial cabling.
Proteon later evolved a 16 Mbit/s version that ran on unshielded twisted pair cable. IBM launched their own proprietary Token Ring product on October 15, 1985. It ran at 4 Mbit/s, and attachment was possible from IBM PCs, midrange computers and mainframes. It used a convenient star-wired physical topology, ran over shielded twisted-pair cabling, and shortly thereafter became the basis for the IEEE 802.5 standard. During this time, IBM argued that Token Ring LANs were superior to Ethernet, especially under load, but these claims were fiercely debated. In 1988 the faster 16 Mbit/s Token Ring was standardized by the 802.5 working group, and an increase to 100 Mbit/s was standardized and marketed during the wane of Token Ring's existence; however, it was never widely used. While a 1000 Mbit/s standard was approved in 2001, no products were brought to market, and standards activity came to a standstill as Fast Ethernet and Gigabit Ethernet dominated the local area networking market. Ethernet and Token Ring have some notable differences. Token Ring access is more deterministic, compared to Ethernet's contention-based CSMA/CD. Ethernet supports a direct cable connection between two network interface cards by the use of a crossover cable or through auto-sensing if supported; Token Ring does not inherently support this feature and requires additional software and hardware to operate on a direct cable connection setup. Token Ring eliminates collisions by the use of a single-use token and early token release to alleviate the down time, while Ethernet alleviates collisions by carrier-sense multiple access and by the use of an intelligent switch. Token Ring network interface cards contain all of the intelligence required for speed autodetection and can drive themselves on many Multistation Access Units (MAUs) that operate without power; Ethernet network interface cards can theoretically operate on a passive hub to a degree, but not as a large LAN, and the issue of collisions is still present. Token Ring employs 'access priority', whereas unswitched Ethernet has no provision for an access priority system, as all nodes contend equally for traffic. Multiple identical MAC addresses are supported on Token Ring, while switched Ethernet cannot support duplicate MAC addresses without problems.
Token Ring was more complex than Ethernet, requiring a specialized processor and licensed MAC/LLC firmware for each interface; by contrast, Ethernet included both of these functions in a lower-cost MAC chip. The cost of a Token Ring interface using the Texas Instruments TMS380C16 MAC and PHY was roughly three times that of an Ethernet interface using the Intel 82586 MAC and PHY. Both networks initially used expensive cable, but once Ethernet was standardized for unshielded twisted pair with 10BASE-T and 100BASE-TX, it had a distinct advantage, and sales of it increased markedly. Even more significant when comparing overall system costs was the much-higher cost of router ports and network cards for Token Ring versus Ethernet; the emergence of Ethernet switches may have been the final straw. Stations on a Token Ring LAN are logically organized in a ring topology, with data being transmitted sequentially from one ring station to the next and a control token circulating around the ring controlling access. Similar token-passing mechanisms are used by ARCNET, token bus, 100VG-AnyLAN and FDDI, and they have theoretical advantages over the CSMA/CD of early Ethernet.
A Token Ring network can be modeled as a polling system where a single server provides service to queues in a cyclic order. The data transmission process goes as follows. Empty information frames are continuously circulated on the ring; when a computer has a message to send, it seizes the token, which allows it to send a frame. The frame is then examined by each successive workstation. The workstation that identifies itself as the destination for the message copies it from the frame and changes the token back to 0. When the frame gets back to the originator, it sees that the message has been copied and received, and it removes the message from the frame.
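A toy C simulation of that discipline follows, under the simplifying assumptions that there is a single token, a single waiting sender, and no priority scheme; the station numbers and ring size are arbitrary.

```c
#include <stdio.h>

/* Toy simulation of token passing: a free token circulates around N
 * stations; a station holding data seizes it, the frame visits each
 * station in turn, the destination copies it, and the originator removes
 * it when it comes back around. Purely illustrative. */
#define N 4

int main(void)
{
    int src = 1, dst = 3;    /* station 1 wants to send to station 3 */
    int station = 0;         /* where the free token currently is */

    /* circulate the free token until it reaches the sender */
    while (station != src) {
        printf("station %d passes the free token\n", station);
        station = (station + 1) % N;
    }
    printf("station %d seizes the token and transmits a frame\n", src);

    /* the frame travels around the ring back to the originator */
    for (int hop = (src + 1) % N; hop != src; hop = (hop + 1) % N) {
        if (hop == dst)
            printf("station %d copies the frame (destination match)\n", hop);
        else
            printf("station %d repeats the frame downstream\n", hop);
    }
    printf("station %d removes its frame and releases a free token\n", src);
    return 0;
}
```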
The Macintosh is a family of personal computers designed and sold by Apple Inc. since January 1984. The original Macintosh was the first mass-market personal computer that featured a graphical user interface, built-in screen and mouse. Apple sold the Macintosh alongside its popular Apple II family of computers for ten years, until the latter was discontinued in 1993. Early Macintosh models were expensive, hindering their competitiveness in a market dominated by the Commodore 64 for consumers, as well as the IBM Personal Computer and its accompanying clone market for businesses. Macintosh systems still found success in education and desktop publishing, and kept Apple as the second-largest PC manufacturer for the next decade. In the early 1990s, Apple introduced models such as the Macintosh LC II and Color Classic which were price-competitive with Wintel machines at the time. However, the introduction of Windows 3.1 and Intel's Pentium processor, which beat the Motorola 68040 in most benchmarks, took market share from Apple, and by the end of 1994 Apple was relegated to third place as Compaq became the top PC manufacturer.
After the transition to the superior PowerPC-based Power Macintosh line in the mid-1990s, the falling prices of commodity PC components, poor inventory management with the Macintosh Performa, and the release of Windows 95 saw the Macintosh user base decline. Prompted by the returning Steve Jobs' belief that the Macintosh line had become too complex, Apple consolidated the nearly twenty models of mid-1997 down to four by mid-1999: the Power Macintosh G3, iMac, 14.1" PowerBook G3, and 12" iBook. All four products were critically and commercially successful due to their high performance, competitive prices and aesthetic designs, and helped return Apple to profitability. Around this time, Apple phased out the Macintosh name in favor of "Mac", a nickname in common use since the development of the first model. Since the transition to Intel processors in 2006, the complete lineup has been based on those processors and associated systems; the current lineup includes four desktops and three laptops. The Xserve server was discontinued in 2011 in favor of the Mac Mini and Mac Pro.
Apple has developed a series of Macintosh operating systems. The first versions initially had no name but came to be known as the "Macintosh System Software" in 1988, and then as "Mac OS" in 1997 with the release of Mac OS 7.6; this family is retrospectively called the "Classic Mac OS". In 2001, Apple released Mac OS X, a modern Unix-based operating system, which was rebranded as OS X in 2012 and macOS in 2016; the current version is macOS Mojave, released on September 24, 2018. Intel-based Macs are capable of running non-Apple operating systems such as Linux, OpenBSD, and Microsoft Windows with the aid of Boot Camp or third-party software. Apple also produced a Unix-based operating system for the Macintosh called A/UX from 1988 to 1995, which resembled contemporary versions of the Macintosh system software. Apple does not license macOS for use on non-Apple computers; however, System 7 was licensed to various companies through Apple's Macintosh clone program from 1995 to 1997. Only one company, UMAX Technologies, was licensed to ship clones running Mac OS 8.
Since Apple's transition to Intel processors, there has been a sizeable community around the world that specialises in hacking macOS to run on non-Apple computers, which are called "Hackintoshes". The Macintosh project began in 1979 when Jef Raskin, an Apple employee, envisioned an easy-to-use, low-cost computer for the average consumer. He wanted to name the computer after his favorite type of apple, the McIntosh, but the spelling was changed to "Macintosh" for legal reasons, as the original spelling was the same as that used by McIntosh Laboratory, Inc., the audio equipment manufacturer. Steve Jobs requested that McIntosh Laboratory give Apple a release for the newly spelled name, which would allow Apple to use it, but the request was denied, forcing Apple to eventually buy the rights to use the name. In 1978, Apple began to organize the Apple Lisa project, aiming to build a next-generation machine similar to an advanced Apple II or the yet-to-be-introduced IBM PC. In 1979, Steve Jobs learned of the advanced work on graphical user interfaces taking place at Xerox PARC.
He arranged for Apple engineers to be allowed to visit PARC to see the systems in action. The Apple Lisa project was redirected to use a GUI, which at that time was well beyond the state of the art for microprocessor capabilities. Things changed with the introduction of the 32-bit Motorola 68000 in 1979, which offered at least an order of magnitude better performance than existing designs and made a software GUI machine a practical possibility. The basic layout of the Lisa was complete by 1982, at which point Jobs's continual suggestions for improvements led to him being kicked off the project. At the same time that the Lisa was becoming a GUI machine in 1979, Jef Raskin started the Macintosh project; the design at that time was for an easy-to-use machine for the average consumer.