A motherboard is the main printed circuit board found in general-purpose computers and other expandable systems. It holds, and allows communication between, many of the crucial electronic components of a system, such as the central processing unit and memory, and provides connectors for other peripherals. Unlike a backplane, a motherboard contains significant sub-systems, such as the central processor, the chipset's input/output and memory controllers, interface connectors, and other components integrated for general-purpose use. Motherboard specifically refers to a PCB with expansion capability; as the name suggests, this board is often described as the "mother" of all components attached to it, which include peripherals, interface cards, and daughtercards such as sound cards, video cards, network cards, and hard drives or other forms of persistent storage. The term mainboard, by contrast, is applied to devices with a single board and no additional expansion capability, such as the controlling boards in laser printers, washing machines, mobile phones, and other embedded systems with limited expansion abilities.
Prior to the invention of the microprocessor, a digital computer consisted of multiple printed circuit boards in a card-cage case, with components connected by a backplane, a set of interconnected sockets. In old designs, copper wires were the discrete connections between card connector pins, but printed circuit boards soon became the standard practice. The central processing unit and peripherals were housed on individual printed circuit boards, which were plugged into the backplane. The ubiquitous S-100 bus of the 1970s is an example of this type of backplane system. The most popular computers of the 1980s, such as the Apple II and IBM PC, had published schematic diagrams and other documentation which permitted rapid reverse-engineering and third-party replacement motherboards. Usually intended for building new computers compatible with the exemplars, many motherboards offered additional performance or other features and were used to upgrade the manufacturer's original equipment. During the late 1980s and early 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard.
In the late 1980s, personal computer motherboards began to include single ICs capable of supporting a set of low-speed peripherals: keyboard, floppy disk drive, serial ports, and parallel ports. By the late 1990s, many personal computer motherboards included consumer-grade embedded audio, video, and networking functions without the need for any expansion cards at all. Business PCs and servers were more likely to need expansion cards, either for more robust functions or for higher speeds. Laptop and notebook computers that were developed in the 1990s integrated the most common peripherals; this even included motherboards with no upgradeable components, a trend that would continue as smaller systems were introduced after the turn of the century. Memory, network controllers, the power source, and storage would be integrated into some systems. A motherboard provides the electrical connections by which the other components of the system communicate. Unlike a backplane, it contains the central processing unit and hosts other subsystems and devices.
A typical desktop computer has its microprocessor, main memory, and other essential components connected to the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices may be attached to the motherboard as plug-in cards or via cables. An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components; this chipset determines, to an extent, the capabilities of the motherboard. Modern motherboards include: CPU sockets (or slots) in which one or more microprocessors may be installed (in the case of CPUs in ball grid array packages, such as the VIA C3, the CPU is directly soldered to the motherboard); memory slots into which the system's main memory is installed, usually in the form of DIMM modules containing DRAM chips; a chipset which forms an interface between the CPU's front-side bus, main memory, and peripheral buses; non-volatile memory chips containing the system's firmware or BIOS; a clock generator which produces the system clock signal to synchronize the various components; slots for expansion cards; and power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, main memory, and expansion cards.
As of 2007, some graphics cards require more power than the motherboard can provide, so dedicated connectors have been introduced to attach them directly to the power supply. Motherboards also provide connectors for hard drives, typically SATA only; disk drives also connect to the power supply. Additionally, nearly all motherboards include logic and connectors to support commonly used input devices, such as USB for mouse devices and keyboards.
Micro Channel architecture
Micro Channel architecture, or the Micro Channel bus, was a proprietary 16- or 32-bit parallel computer bus introduced by IBM in 1987 and used on PS/2 and other computers until the mid-1990s. Its name is often abbreviated as "MCA", although not by IBM. In IBM products, it superseded the ISA bus and was itself subsequently superseded by the PCI bus architecture. The development of Micro Channel was driven by both technical and business pressures. The IBM AT bus, which became known as the Industry Standard Architecture bus, had a number of technical design limitations, including a slow bus speed, a limited number of interrupts fixed in hardware, a limited number of I/O device addresses fixed in hardware, hardwired and complex configuration with no conflict resolution, and deep links to the architecture of the 80x86 chip family. In addition, it suffered from other problems: poor grounding and power distribution, and undocumented bus interface standards that varied between manufacturers. These limitations became more serious as the range of tasks and peripherals, and the number of manufacturers of IBM PC compatibles, grew.
IBM was also investigating the use of RISC processors in desktop machines and could, in theory, save considerable money if a single well-documented bus could be used across its entire computer lineup. It was thought that by creating a new standard, IBM would regain control of standards via the required licensing. As patents can take three years or more to be granted, however, only those relating to ISA could be licensed when Micro Channel was announced; patents on important Micro Channel features, such as Plug and Play automatic configuration, were not granted to IBM until after PCI had replaced Micro Channel in the marketplace. The Micro Channel architecture was designed by engineer Chet Heath. Many of the Micro Channel cards that were developed used the CHIPS P82C612 MCA interface controller. The Micro Channel was a 32-bit bus, but the system also supported a 16-bit mode designed to lower the cost of connectors and logic in Intel-based machines like the IBM PS/2. The situation was never that simple, however, as both the 32-bit and 16-bit versions had a number of additional optional connectors for memory cards, which resulted in a huge number of physically incompatible cards for bus-attached memory.
In time, memory moved to the CPU's local bus. On the upside, signal quality was improved, as Micro Channel added ground and power pins and arranged the pins to minimize interference. Another connector extension was included for graphics cards; this extension was used for analog output from the video card, routed through the system board to the system's own monitor output. The advantage of this was that Micro Channel system boards could have a basic VGA or MCGA graphics system on board, while higher-level graphics cards could share the same port. The add-on cards were thus able to be free of 'legacy' VGA modes, making use of the on-board graphics system when needed, and allowing a single system board connector for graphics that could be upgraded. Micro Channel cards featured a unique, 16-bit software-readable ID, which formed the basis of an early plug and play system: the BIOS and/or OS can read the IDs, compare them against a list of known cards, and perform automatic system configuration to suit. This led to boot failures whereby an older BIOS would not recognize a newer card, causing an error at startup.
In turn, this required IBM to release updated Reference Disks on a regular basis. A complete list of known IDs is available. Accompanying these reference disks were ADF files, which were read by the setup program and provided configuration information for the card; the ADF was a simple text file containing information about the card's memory addressing and interrupts. Although MCA cards cost nearly double the price of comparable non-MCA cards, the marketing stressed that it was simple for any user to upgrade or add more cards to their PC, thus saving the considerable expense of a technician. In this critical area, Micro Channel architecture's biggest advantage was also its greatest disadvantage, and one of the major reasons for its demise. To add a new card, the user plugged in the MCA card and inserted a customized floppy disk to blend the new card into the original hardware automatically, rather than bringing in an expensive trained technician who could manually make all the needed changes. All choices for interrupts and other changes were accomplished automatically: the PC read the old configuration from the floppy disk, made the necessary changes in software, and wrote the new configuration back to the floppy disk.
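The matching step can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical model of the lookup that the setup software performed: the card IDs, resource values, and data layout are invented for illustration and do not reproduce IBM's actual ADF format.

```python
# Minimal sketch of MCA-style automatic configuration (hypothetical data).
# A card exposes a 16-bit software-readable ID; setup matches it against
# ADF-derived records and assigns the resources listed there.

# Hypothetical ADF-derived database: ID -> resource description.
ADF_DATABASE = {
    0x6FC0: {"name": "Example Token Ring Adapter", "irq": 9,  "io_base": 0x0A20},
    0x8EFC: {"name": "Example SCSI Adapter",       "irq": 11, "io_base": 0x3540},
}

def configure_card(pos_id: int) -> dict:
    """Look up a card's 16-bit ID and return its resource assignment."""
    try:
        entry = ADF_DATABASE[pos_id]
    except KeyError:
        # An unknown ID is what caused startup errors on systems whose
        # Reference Disk predated the card.
        raise RuntimeError(f"Unknown adapter ID {pos_id:#06x}: update the Reference Disk")
    return {"name": entry["name"], "irq": entry["irq"], "io_base": entry["io_base"]}

if __name__ == "__main__":
    print(configure_card(0x6FC0))   # known card: resources assigned automatically
```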
In practice, this meant that the user had to keep that same floppy disk matched to that PC. For a small company with a few PCs this was practical, but for large organizations with hundreds or thousands of PCs, permanently matching each PC with its own floppy disk was logistically unlikely or impossible. Without the original, updated floppy disk, no changes could be made to the PC's cards. After this experience repeated itself thousands of times, business leaders realized their dream scenario for upgrade simplicity did not work in the corporate world, and they sought a better process. The basic data rate of the Micro Channel was increased from ISA's 8 MHz to 10 MHz. This may have been a modest increase in terms of clock rate, but the greater bus width, coupled with a dedicated bus controller, allowed substantially higher effective throughput than ISA.
The Indigo, introduced as the IRIS Indigo, is a line of workstation computers developed and manufactured by Silicon Graphics, Inc. SGI first announced the system in July 1991. The Indigo is considered one of the most capable graphics workstations of its era, and was peerless in the realm of hardware-accelerated three-dimensional graphics rendering. For use as a graphics workstation, the Indigo was equipped with a two-dimensional framebuffer or, for use as a 3D graphics workstation, with the Elan graphics subsystem including one to four Geometry Engines. SGI also sold a server version with no video adapter. The Indigo's design is based on a simple cube motif in an indigo hue. Graphics and other peripheral expansions are accomplished via the GIO32 expansion bus. The Indigo was superseded by the SGI Indigo2 and, in the low-cost market segment, by the SGI Indy. The first Indigo model, code-named "Hollywood", was introduced on July 22, 1991. It is based on the IP12 processor board, which contains a 32-bit MIPS R3000A microprocessor soldered on the board and proprietary memory slots supporting up to 96 MB of RAM.
A later version is based on the IP20 processor board, which has a removable processor module containing a 64-bit MIPS R4000 or R4400 processor that implements the MIPS III instruction set. The IP20 uses standard 72-pin SIMMs with parity and has 12 SIMM slots, for a maximum of 384 MB of RAM. A Motorola 56000 DSP is used for audio I/O. Ethernet is supported onboard by the SEEQ 80c03 chipset coupled with the HPC, which provides the DMA engine; the HPC interfaces between the GIO bus and the Ethernet, SCSI, and the 56000 DSP. The GIO bus interface is implemented by the PIC on the IP12 and by the MC on the IP20. Much of the hardware design can be traced back to the SGI IRIS 4D/3x series, which shared the same memory controller, Ethernet, SCSI, and optionally DSP as the IP12 Indigo; the 4D/30, 4D/35 and Indigo R3000 run the same IRIX kernel. The Indigo R3000 is essentially a reduced-cost 4D/35 without a VME bus. The PIC also supports GIO expansion slots. In all IP12, IP20, and IP22/IP24 systems, the HPC is attached to the GIO bus. For entry graphics, the 8-bit color frame buffer comes in three versions.
One version uses the system's GIO expansion bus. Another uses the main backplane, like the XS, XZ, and Elan graphics options. The final version is the same, but adds a second video output, giving the computer the ability to have two "heads", or monitors. The Indigo's XS graphics option has a single GE7 Geometry Engine, an RE3 Raster Engine, an HQ2 Command Engine, a VC1, and an XMAP5. It is ideal for low-cost wireframe operations, compared to the more powerful and expensive options for textured graphics. Part of SGI's Express line of graphics, four XS graphics options were produced for the Indigo; the XS-8, for example, offers 8-bit color with one VM2 video RAM module. The XZ graphics option is also a member of SGI's Express graphics line. It is similar to the XS-24z, but it includes a second GE7 Geometry Engine ASIC, doubling its geometry performance. The Elan graphics option, the highest-performance graphics option offered for the Indigo, is likewise a member of SGI's Express graphics line. It is like the XS-24z and XZ, but has four GE7 Geometry Engine ASICs, giving it twice the performance of the XZ option.
The Indigo was designed to run IRIX, SGI's version of Unix. Indigos with R3000 processors are supported up to IRIX version 5.3, while Indigos equipped with an R4000 or R4400 processor can run up to IRIX 6.5.22. Additionally, the free Unix-like operating system NetBSD supports both the IP12 and IP20 Indigos as part of the sgimips port.
Computer data storage
Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers. The central processing unit of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but larger and cheaper options farther away. The fast volatile technologies are referred to as "memory", while slower persistent technologies are referred to as "storage". In the von Neumann architecture, the CPU consists of two main parts: the control unit and the arithmetic logic unit. The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result; it would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices.
Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can be reprogrammed with new in-memory instructions. Most modern computers are von Neumann machines. A modern digital computer represents data using the binary numeral system. Text, pictures, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 1 or 0. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes with one byte per character. Data are encoded by assigning a bit pattern to each character, digit, or multimedia object.
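As a rough check of that figure, the arithmetic can be written out directly; the characters-per-page value below is an assumption chosen for illustration, not a measured count.

```python
# Back-of-the-envelope check: storage needed for the complete works of
# Shakespeare at one byte per character (the characters-per-page figure
# is an illustrative assumption, not a measured value).
pages = 1250                 # approximate printed page count from the text
chars_per_page = 4000        # assumed average characters per page
bytes_per_char = 1           # 8-bit encoding such as ASCII/Latin-1

total_bytes = pages * chars_per_page * bytes_per_char
print(f"{total_bytes:,} bytes = {total_bytes / 1_000_000:.1f} MB")
# -> 5,000,000 bytes = 5.0 MB, consistent with the "about five megabytes" estimate
```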
Many standards exist for encoding. By adding bits to each encoded unit, redundancy allows the computer both to detect errors in coded data and to correct them based on mathematical algorithms. Errors occur with low probability due to random bit-value flipping, or "physical bit fatigue", the loss of a physical bit's ability to maintain a distinguishable value in storage, or due to errors in inter- or intra-computer communication. A random bit flip is typically corrected upon detection. A bit, or a group of malfunctioning physical bits, is automatically fenced out, taken out of use by the device, and replaced with another functioning equivalent group in the device, where the corrected bit values are restored. The cyclic redundancy check method is typically used in communications and storage for error detection; a detected transfer is then retried. Data compression methods allow, in many cases, a string of bits to be represented by a shorter bit string and reconstructed when needed; this uses less storage for many types of data at the cost of more computation.
An analysis of the trade-off between the storage cost saved and the cost of the related computations and possible delays in data availability is made before deciding whether to keep certain data compressed or not. For security reasons, certain types of data may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots. Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary and off-line storage is also guided by cost per bit. In contemporary usage, "memory" is usually fast but temporary semiconductor read-write random-access storage, typically DRAM. "Storage" consists of storage devices and their media not directly accessible by the CPU (hard disk drives, optical disc drives, and other devices slower than RAM) but non-volatile. Memory has also been called core memory, main memory, real storage or internal memory, while non-volatile storage devices have been referred to as secondary storage, external memory or auxiliary/peripheral storage.
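Both ideas can be made concrete with a small sketch. The following Python fragment uses the standard binascii and zlib modules to show how a CRC detects a single flipped bit and how compression trades extra computation for less storage; the sample data is arbitrary.

```python
import binascii
import zlib

# --- Error detection with a cyclic redundancy check (CRC-32) ---
data = b"example block of stored data" * 100
stored_crc = binascii.crc32(data)

corrupted = bytearray(data)
corrupted[17] ^= 0x01                     # flip one bit, simulating "bit fatigue"
assert binascii.crc32(bytes(corrupted)) != stored_crc   # mismatch -> error detected

# --- Compression: a shorter bit string at the cost of computation ---
compressed = zlib.compress(data, 9)       # level 9: most CPU work, smallest output
restored = zlib.decompress(compressed)
assert restored == data                   # original string reconstructed when needed
print(f"raw: {len(data)} bytes, compressed: {len(compressed)} bytes")
```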
Primary storage, often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner. Early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive; this led to modern random-access memory.
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over cable media such as wires or optical cables, or wireless media such as Wi-Fi. Network computer devices that originate and terminate the data are called network nodes. Nodes are identified by network addresses, and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other more general communications protocols. This formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, as well as many others.
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, the traffic control mechanism, and organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system SABRE (Semi-Automatic Business Research Environment) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, and later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. Also in 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the Aloha network developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of a Gigabit. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications allowing users to communicate efficiently and via various means: email, instant messaging, online chat, video telephone calls, video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files, data, and other types of information.
Conventional PCI, often shortened to PCI, is a local computer bus for attaching hardware devices in a computer. PCI is part of the PCI Local Bus standard. The PCI bus supports the functions found on a processor bus, but in a standardized format independent of any particular processor's native bus. Devices connected to the PCI bus appear to a bus master to be connected directly to its own bus and are assigned addresses in the processor's address space. It is a parallel bus, synchronous to a single bus clock. Attached devices can take either the form of an integrated circuit fitted onto the motherboard itself or an expansion card that fits into a slot. The PCI Local Bus was first implemented in IBM PC compatibles, where it displaced the combination of several slow ISA slots and one fast VESA Local Bus slot as the bus configuration. It has subsequently been adopted for other computer types. Typical PCI cards used in PCs include network cards, sound cards, extra ports such as USB or serial, TV tuner cards, and disk controllers.
PCI video cards replaced ISA and VESA cards until growing bandwidth requirements outgrew the capabilities of PCI. The preferred interface for video cards became AGP, itself a superset of conventional PCI, before giving way to PCI Express. The first version of conventional PCI found in consumer desktop computers was a 32-bit bus using a 33 MHz bus clock and 5 V signalling, although the PCI 1.0 standard provided for a 64-bit variant as well. These cards have one locating notch. Version 2.0 of the PCI standard introduced 3.3 V slots, physically distinguished by a flipped physical connector to prevent accidental insertion of 5 V cards. Universal cards, which can operate on either voltage, have two notches. Version 2.1 of the PCI standard introduced optional 66 MHz operation. A server-oriented variant of conventional PCI, called PCI-X, operated at frequencies up to 133 MHz for PCI-X 1.0 and up to 533 MHz for PCI-X 2.0. An internal connector for laptop cards, called Mini PCI, was introduced in version 2.2 of the PCI specification.
The PCI bus was also adopted for an external laptop connector standard – the CardBus. The first PCI specification was developed by Intel, but subsequent development of the standard became the responsibility of the PCI Special Interest Group. Conventional PCI and PCI-X are sometimes called Parallel PCI in order to distinguish them technologically from their more recent successor, PCI Express, which adopted a serial, lane-based architecture. Conventional PCI's heyday in the desktop computer market was 1995–2005. PCI and PCI-X have since become obsolete for most purposes. Many kinds of devices once available on PCI expansion cards are now integrated onto motherboards or available in USB and PCI Express versions. Work on PCI began at Intel's Architecture Development Lab c. 1990. A team of Intel engineers defined the architecture and developed a proof-of-concept chipset and platform, partnering with teams in the company's desktop PC systems and core logic product organizations. PCI was put to use in servers, replacing MCA and EISA as the server expansion bus of choice.
In mainstream PCs, PCI was slower to replace the VESA Local Bus, and did not gain significant market penetration until late 1994 in second-generation Pentium PCs. By 1996, VLB was all but extinct, and manufacturers had adopted PCI even for 486 computers. EISA continued to be used alongside PCI through 2000. Apple Computer adopted PCI for professional Power Macintosh computers in mid-1995, and for the consumer Performa product line in mid-1996. The 64-bit version of plain PCI remained rare in practice, although it was used, for example, by all G3 and G4 Power Macintosh computers. Later revisions of PCI added new features and performance improvements, including a 66 MHz 3.3 V standard and 133 MHz PCI-X, and the adaptation of PCI signaling to other form factors. Both PCI-X 1.0b and PCI-X 2.0 are backward compatible with some PCI standards. These revisions were used on server hardware, but consumer PC hardware remained nearly all 32-bit, 33 MHz and 5 volt. The PCI-SIG introduced the serial PCI Express in c. 2004. At the same time, it renamed PCI as Conventional PCI.
Since then, motherboard manufacturers have included progressively fewer Conventional PCI slots in favor of the new standard; as of late 2013, many new motherboards do not provide conventional PCI slots at all. PCI provides separate memory and I/O port address spaces for the x86 processor family, 64 and 32 bits in size, respectively. Addresses in these address spaces are assigned by software. A third address space, called the PCI Configuration Space, which uses a fixed addressing scheme, allows software to determine the amount of memory and I/O address space needed by each device. Each device can request up to six areas of memory space or I/O port space via its configuration space registers. In a typical system, the firmware queries all PCI buses at startup time to find out what devices are present and what system resources each needs; it then allocates the resources and tells each device what its allocation is. The PCI configuration space also contains a small amount of device type information, which helps an operating system choose device drivers for it, or at least to have a dialogue with a user about the system configuration.
Devices may have an on-board ROM containing executable code for x86 or PA-RISC processors, an Open Firmware driver, or an EFI driver.
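On a Linux system, the outcome of that firmware enumeration is visible through sysfs, which allows the configuration-space identifiers to be read without touching the hardware directly. The following sketch simply lists vendor, device and class IDs; it assumes a Linux host with a PCI or PCI Express bus and is illustrative rather than a definitive tool.

```python
# Read PCI vendor/device IDs that firmware discovered via configuration space.
# Linux exposes them under /sys/bus/pci/devices/<address>/; this is a read-only
# illustration and assumes a Linux host with a PCI or PCI Express bus.
from pathlib import Path

def list_pci_devices(sysfs_root: str = "/sys/bus/pci/devices") -> None:
    root = Path(sysfs_root)
    if not root.exists():
        print("No PCI sysfs tree found (not a Linux PCI system?)")
        return
    for dev in sorted(root.iterdir()):
        vendor = (dev / "vendor").read_text().strip()   # e.g. 0x8086 for Intel
        device = (dev / "device").read_text().strip()
        klass = (dev / "class").read_text().strip()     # device type information
        print(f"{dev.name}: vendor={vendor} device={device} class={klass}")

if __name__ == "__main__":
    list_pci_devices()
```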
Fiber Distributed Data Interface
Fiber Distributed Data Interface (FDDI) is a standard for data transmission in a local area network. It uses optical fiber as its standard underlying physical medium, although it was later specified to use copper cable as well, in which case it may be called CDDI, standardized as TP-PMD and also referred to as TP-DDI. FDDI provides a 100 Mbit/s optical standard for data transmission in a local area network that can extend in range up to 200 kilometers. Although the FDDI logical topology is a ring-based token network, it did not use the IEEE 802.5 token ring protocol as its basis. In addition to covering large geographical areas, FDDI local area networks can support thousands of users. FDDI offers both a Dual-Attached Station, counter-rotating token ring topology and a Single-Attached Station, token bus passing ring topology. FDDI, as a product of American National Standards Institute committee X3T9.5, conforms to the Open Systems Interconnection model of functional layering using other protocols. The standards process started in the mid-1980s.
FDDI-II, a version of FDDI described in 1989, added circuit-switched service capability to the network so that it could handle voice and video signals. Work also started to connect FDDI networks to synchronous optical networking technology. An FDDI network contains two rings; the primary ring offers up to 100 Mbit/s capacity. When a network has no requirement for the secondary ring to act as a backup, it can also carry data, extending capacity to 200 Mbit/s; the single ring can extend the maximum distance. FDDI had a larger maximum frame size than the standard Ethernet family, which only supports a maximum frame size of 1,500 bytes, allowing better effective data rates in some cases. Designers constructed FDDI rings in a network topology such as a "dual ring of trees". A small number of devices, typically infrastructure devices such as routers and concentrators rather than host computers, were "dual-attached" to both rings. Host computers instead connect as single-attached devices to the routers or concentrators. The dual ring in its most degenerate form collapses into a single device.
Typically, a computer room contained the whole dual ring, although some implementations deployed FDDI as a metropolitan area network. FDDI requires this network topology because the dual ring passes through each connected device and requires each such device to remain continuously operational. The standard allows for optical bypasses, but network engineers consider these unreliable and error-prone. Devices such as workstations and minicomputers that might not come under the control of the network managers are not suitable for connection to the dual ring. As an alternative to using a dual-attached connection, a workstation can obtain the same degree of resilience through a dual-homed connection made to two separate devices in the same FDDI ring. One of the connections becomes active; if the first connection fails, the backup link takes over with no perceptible delay. The FDDI data frame consists of the following fields, in order: PA (preamble), SD (start delimiter), FC (frame control), DA (destination address), SA (source address), PDU (protocol data unit), FCS (frame check sequence), and ED/FS (end delimiter and frame status).
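That field order can be sketched as a simplified byte layout. The fragment below packs a toy frame with 48-bit addresses and a CRC-32 frame check sequence; the preamble, delimiter and control values, and the exact field widths are placeholder assumptions (a real FDDI MAC encodes fields as 4-bit symbols), so this illustrates the ordering rather than the wire format.

```python
# Simplified illustration of the FDDI frame field order:
# PA | SD | FC | DA | SA | PDU | FCS | ED/FS
# Field widths and control values here are assumptions for illustration only.
import struct
import binascii

def build_frame(dst: bytes, src: bytes, pdu: bytes) -> bytes:
    pa = b"\x00" * 2          # preamble (placeholder width)
    sd = b"\xA5"              # start delimiter (placeholder value)
    fc = b"\x50"              # frame control (placeholder value)
    covered = fc + dst + src + pdu
    fcs = struct.pack(">I", binascii.crc32(covered))   # 32-bit frame check sequence
    ed_fs = b"\x0D"           # end delimiter / frame status (placeholder)
    return pa + sd + covered + fcs + ed_fs

frame = build_frame(bytes.fromhex("01005e000001"),    # 48-bit destination address
                    bytes.fromhex("00a0c9123456"),    # 48-bit source address
                    b"protocol data unit")
print(len(frame), "bytes:", frame.hex())
```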
The Internet Engineering Task Force defined a standard for transmission of the Internet Protocol over FDDI. It was first proposed in June 1989 and revised in 1990. Some aspects of the protocol were compatible with the IEEE 802.2 standard for logical link control; for example, FDDI used 48-bit MAC addresses, so other protocols such as the Address Resolution Protocol could be shared as well. FDDI was considered an attractive campus backbone network technology in the early to mid-1990s, since existing Ethernet networks only offered 10 Mbit/s data rates and token ring networks only offered 4 Mbit/s or 16 Mbit/s rates; it was thus a high-speed choice of that era. By 1994, vendors included Cisco Systems, National Semiconductor, Network Peripherals, SysKonnect, and 3Com. FDDI was made obsolete in local networks by Fast Ethernet, which offered the same 100 Mbit/s speeds at a much lower cost, and, since 1998, by Gigabit Ethernet, due to its speed, lower cost, and ubiquity. FDDI standards included: ANSI X3.139-1987, Media Access Control (ISO 9314-2); ANSI X3.148-1988, Physical Layer Protocol (ISO 9314-1); ANSI X3.166-1989, Physical Medium Dependent (ISO 9314-3); ANSI X3.184-1993, Single Mode Fiber Physical Medium Dependent (ISO 9314-4); and ANSI X3.229-1994, Station Management (ISO 9314-6). This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.