ESCON is a data connection created by IBM, used to connect its mainframe computers to peripheral devices such as disk storage and tape drives. ESCON is an optical-fiber, serial interface; it originally operated at a rate of 10 Mbyte/s, later increased to 17 Mbyte/s. The current maximum distance is 43 kilometers. ESCON was introduced by IBM in the early 1990s, replacing the older, copper-based, parallel Bus and Tag channel technology of 1960–1990 era IBM System/360 mainframes. Optical fiber is smaller in diameter and lighter, and so could save installation costs; space and labor could also be reduced when fewer physical links were required, thanks to ESCON's switching features. ESCON is being supplanted by the faster FICON, which runs over Fibre Channel. ESCON allows channel connections to be established and reconfigured dynamically, without taking equipment offline and manually moving cables. ESCON supports channel connections using serial transmission over a pair of fibers; the ESCON Director supports dynamic switching.
It allows the distance between units to be extended up to 60 km over dedicated fiber. "Permanent virtual circuits" are supported through the switch. ESCON switching has advantages over a collection of point-to-point links: a peripheral formerly capable of accessing a single mainframe can now be connected to up to eight mainframes, providing peripheral sharing. The ESCON interface specifications were adopted in 1996 by the ANSI X3T1 committee as the SBCON standard, now managed by X3T11. The most important direct access storage devices (DASD) with ESCON interfaces include the EMC Symmetrix, DMX and VMAX families; the Hewlett Packard Enterprise XP Storage family; Hitachi Data Systems Lightning; the IBM Enterprise Storage Server; the IBM DS8000; and the Sun StorageTek SVA.
Ethernet is a family of computer networking technologies used in local area networks, metropolitan area networks and wide area networks. It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3; it has since retained a good deal of backward compatibility and been refined to support higher bit rates and longer link distances. Over time, Ethernet has replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET. The original 10BASE5 Ethernet uses coaxial cable as a shared medium, while the newer Ethernet variants use twisted-pair and fiber-optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 megabits per second to the latest 400 gigabits per second. The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer in use with Ethernet. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. Each frame contains source and destination addresses and error-checking data so that damaged frames can be detected and discarded.
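The frame structure just described can be sketched in a few lines of Python. The field widths (6-byte MAC addresses, 2-byte EtherType) are standard Ethernet II; the builder function and the example addresses below are illustrative, not part of any real library:

```python
import struct

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble an Ethernet II frame: 6-byte destination MAC, 6-byte
    source MAC, 2-byte big-endian EtherType, then the payload.
    (On the wire, a 4-byte frame check sequence would follow the payload.)"""
    assert len(dst) == 6 and len(src) == 6
    return dst + src + struct.pack("!H", ethertype) + payload

# Illustrative addresses; 0x0800 is the EtherType for IPv4.
dst = bytes.fromhex("ffffffffffff")   # broadcast address
src = bytes.fromhex("020000000001")   # locally administered address
frame = build_frame(dst, src, 0x0800, b"IPv4 payload..")
print(len(frame))  # 28: 14-byte header + 14-byte payload
```

The destination address comes first so that a receiving station (or switch) can decide whether the frame concerns it before the rest of the frame has even arrived.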
As per the OSI model, Ethernet provides services up to and including the data link layer. Features such as the 48-bit MAC address and Ethernet frame format have influenced other networking protocols, including Wi-Fi wireless networking technology. Ethernet is used in homes and industry; the Internet Protocol is commonly carried over Ethernet, so it is considered one of the key technologies that make up the Internet. Ethernet was developed at Xerox PARC between 1973 and 1974; it was inspired by ALOHAnet. The idea was first documented in a memo that Robert Metcalfe wrote on May 22, 1973, where he named it after the luminiferous aether once postulated to exist as an "omnipresent, completely-passive medium for the propagation of electromagnetic waves." In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker and Butler Lampson as inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper. That same year, Ron Crane, Bob Garner and Roy Ogus facilitated the upgrade from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was released to the market in 1980.
Metcalfe left Xerox in June 1979 to form 3Com. He convinced Digital Equipment Corporation, Intel and Xerox to work together to promote Ethernet as a standard; as part of that process, Xerox agreed to relinquish its 'Ethernet' trademark. The first standard was published in September 1980 as "The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications". This so-called DIX standard (for Digital, Intel, Xerox) specified 10 Mbit/s Ethernet with 48-bit destination and source addresses and a global 16-bit Ethertype field. Version 2 was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983. Ethernet initially competed with Token Ring and other proprietary protocols, but was able to adapt to market realities and shift to inexpensive thin coaxial cable and ubiquitous twisted-pair wiring. By the end of the 1980s, Ethernet was the dominant network technology. In the process, 3Com became a major company.
3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as for Multibus-based Intel and Sun Microsystems computers. This was followed by DEC's Unibus-to-Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and by 1985, 3Com had sold 100,000. Parallel-port-based Ethernet adapters were also produced, with drivers for DOS and Windows. By the early 1990s, Ethernet had become so prevalent that it was a must-have feature for modern computers, and Ethernet ports began to appear on some PCs and most workstations. This process was sped up with the introduction of 10BASE-T and its small modular connector, at which point Ethernet ports appeared even on low-end motherboards. Since then, Ethernet technology has evolved to meet new bandwidth and market requirements.
In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet it is used in industrial applications, and it is replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year. In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks. The "DIX group", with Gary Robinson, Phil Arst and Bob Printis, submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring and Token Bus were also considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal. Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products.
With such business implications in mind, David Liddle an
Fibre Channel is a high-speed data-transfer protocol providing in-order, lossless delivery of raw block data, used to connect computer data storage to servers. Fibre Channel is mainly used in storage area networks in commercial data centers. Fibre Channel networks form a switched fabric. Fibre Channel typically runs on optical fiber cables within and between data centers, but can also run on copper cabling. Most block storage supports many upper-level protocols. Fibre Channel Protocol (FCP) is a transport protocol that predominantly transports SCSI commands over Fibre Channel networks. Mainframe computers run the FICON command set over Fibre Channel because of its high reliability and throughput. Fibre Channel can also be used to transport data from storage systems that use solid-state flash memory as the storage medium, by transporting NVMe protocol commands. When the technology was devised, it ran over optical fiber cables only and, as such, was called "Fiber Channel". The ability to run over copper cabling was added later to the specification.
In order to avoid confusion and to create a unique name, the industry decided to change the spelling and use the British English fibre for the name of the standard. Fibre Channel is standardized in the T11 Technical Committee of the International Committee for Information Technology Standards (INCITS), an American National Standards Institute (ANSI)-accredited standards committee. Fibre Channel development started in 1988, with ANSI standard approval in 1994, to merge the benefits of multiple physical-layer implementations including SCSI, HIPPI and ESCON. Fibre Channel was designed as a serial interface to overcome the limitations of the SCSI and HIPPI interfaces. FC was developed with leading-edge multi-mode optical fiber technologies that overcame the speed limitations of the ESCON protocol. By appealing to the large base of SCSI disk drives and leveraging mainframe technologies, Fibre Channel developed economies of scale for advanced technologies, and deployments became economical and widespread. Commercial products were released.
By the time the standard was ratified, lower-speed versions were already growing out of use. Fibre Channel was the first serial storage transport to achieve gigabit speeds, where it saw wide adoption, and its success grew with each successive speed. Fibre Channel has doubled in speed every few years since 1996 and has seen active development since its inception, with numerous speed improvements on a variety of underlying transport media. The following table shows the progression of native Fibre Channel speeds. In addition to a modern physical layer, Fibre Channel added support for any number of "upper layer" protocols, including ATM, IP and FICON, with SCSI being the predominant usage. Two major characteristics of Fibre Channel networks are that they provide in-order and lossless delivery of raw block data. Lossless delivery is achieved by means of a credit mechanism. There are three major Fibre Channel topologies, describing how a number of ports are connected together. A port in Fibre Channel terminology is any entity that communicates over the network, not necessarily a hardware port.
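The credit mechanism can be illustrated with a toy model (a deliberately simplified sketch, not the actual protocol implementation): the sender holds a number of buffer-to-buffer credits, each transmitted frame consumes one, and each acknowledgement primitive (R_RDY) returned by the receiver restores one, so the receiver's buffers can never overflow and no frame is ever dropped:

```python
class BBCreditLink:
    """Toy model of Fibre Channel buffer-to-buffer credit flow control."""

    def __init__(self, credits: int):
        # Initial credit count equals the number of receive buffers
        # the far end advertised at login.
        self.credits = credits

    def can_send(self) -> bool:
        return self.credits > 0

    def send_frame(self) -> None:
        if not self.can_send():
            # Lossless backpressure: the sender simply waits instead
            # of transmitting a frame that would be dropped.
            raise RuntimeError("no credit: sender must wait for R_RDY")
        self.credits -= 1

    def receive_r_rdy(self) -> None:
        # The receiver freed a buffer and returned one credit.
        self.credits += 1

link = BBCreditLink(credits=2)
link.send_frame()
link.send_frame()
print(link.can_send())  # False: credits exhausted, sender must pause
link.receive_r_rdy()
print(link.can_send())  # True: one buffer freed, sending may resume
```

This is the essential contrast with Ethernet's historical best-effort delivery: congestion in Fibre Channel stalls the sender rather than discarding frames.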
This port is implemented in a device such as disk storage, a Host Bus Adapter (HBA) network connection on a server, or a Fibre Channel switch. Point-to-point: two devices are connected directly to each other; this is the simplest topology, with limited connectivity. Arbitrated loop: in this design, all devices are in a ring, similar to Token Ring networking. Adding or removing a device from the loop causes all activity on the loop to be interrupted, and the failure of one device causes a break in the ring, although Fibre Channel hubs may bypass failed ports. A loop may be made by cabling each port to the next in a ring. A minimal loop containing only two ports, while appearing to be similar to point-to-point, differs in terms of the protocol. Only one pair of ports can communicate concurrently on a loop, and the maximum speed is 8GFC. Arbitrated Loop has rarely been used after 2010. Switched Fabric: in this design, all devices are connected to Fibre Channel switches, similar conceptually to modern Ethernet implementations. Advantages of this topology over point-to-point or Arbitrated Loop include: the Fabric can scale to tens of thousands of ports.
The switches manage the state of the Fabric, providing optimized paths via the Fabric Shortest Path First (FSPF) data-routing protocol. The traffic between two ports flows through the switches and not through any other ports as in Arbitrated Loop, so the failure of a port should not affect the operation of other ports, and multiple pairs of ports may communicate simultaneously in a Fabric. Fibre Channel does not follow the OSI model layering; it is split into five layers: FC-4 – Protocol-mapping layer, in which upper-level protocols such as NVMe, SCSI, IP or FICON are encapsulated into Information Units for delivery to FC-2; current FC-4s include FCP-4, FC-SB-5 and FC-NVMe. FC-3 – Common services layer, a thin layer that could implement functions like encryption or RAID redundancy algorithms. Layers FC-0 are defined in Fibre Channel Physical Interfaces, the
In computer networking, Gigabit Ethernet is the term for the various technologies that transmit Ethernet frames at a rate of a gigabit per second, as defined by the IEEE 802.3-2008 standard. It came into use beginning in 1999, gradually supplanting Fast Ethernet in wired local networks as a result of being faster; the cables and equipment are similar to previous standards and have been common and economical since about 2010. Half-duplex gigabit links connected through repeater hubs were part of the IEEE specification, but that part of the specification is no longer updated, and full-duplex operation with switches is used exclusively. Ethernet was the result of research done at Xerox PARC in the early 1970s, and it evolved into a widely implemented physical and link-layer protocol. Fast Ethernet increased speed from 10 to 100 megabits per second; Gigabit Ethernet was the next iteration. The initial standard for Gigabit Ethernet was produced by the IEEE in June 1998 as IEEE 802.3z and required optical fiber. 802.3z is referred to as 1000BASE-X, where -X refers to either -CX, -SX, -LX, or -ZX.
For the history behind the "X" see Fast Ethernet. IEEE 802.3ab, ratified in 1999, defines Gigabit Ethernet transmission over unshielded twisted-pair category 5, 5e or 6 cabling, and became known as 1000BASE-T. With the ratification of 802.3ab, Gigabit Ethernet became a desktop technology, as organizations could use their existing copper cabling infrastructure. IEEE 802.3ah, ratified in 2004, added two more gigabit fiber standards, 1000BASE-LX10 and 1000BASE-BX10, as part of a larger group of protocols known as Ethernet in the First Mile. Initially, Gigabit Ethernet was deployed in high-capacity backbone network links. In 2000, Apple's Power Mac G4 and PowerBook G4 were the first mass-produced personal computers featuring the 1000BASE-T connection, and it soon became a built-in feature in many other computers. There are five physical layer standards for Gigabit Ethernet using optical fiber, twisted-pair cable, or shielded balanced copper cable. The IEEE 802.3z standard includes 1000BASE-SX for transmission over multi-mode fiber, 1000BASE-LX for transmission over single-mode fiber, and the nearly obsolete 1000BASE-CX for transmission over shielded balanced copper cabling.
These standards use 8b/10b encoding, which inflates the line rate by 25%, from 1000 Mbit/s to 1250 Mbit/s, to ensure a DC-balanced signal; the symbols are sent using NRZ. Optical fiber transceivers are most often implemented as user-swappable modules in SFP form, or GBIC on older devices. IEEE 802.3ab, which defines the widely used 1000BASE-T interface type, uses a different encoding scheme in order to keep the symbol rate as low as possible, allowing transmission over twisted pair. IEEE 802.3ap defines Ethernet operation over electrical backplanes at different speeds. Ethernet in the First Mile added 1000BASE-LX10 and -BX10. 1000BASE-X is used in industry to refer to Gigabit Ethernet transmission over fiber, where options include 1000BASE-SX, 1000BASE-LX, 1000BASE-LX10, 1000BASE-BX10, or the non-standard -EX and -ZX implementations. Also included are copper variants using the same 8b/10b line code. 1000BASE-CX is an initial standard for Gigabit Ethernet connections with maximum distances of 25 meters using balanced shielded twisted pair and either a DE-9 or 8P8C connector.
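The 25% line-rate overhead of 8b/10b mentioned above follows directly from the code mapping each 8-bit data byte to a 10-bit code group; the small calculation below just makes that arithmetic explicit:

```python
# 8b/10b maps every 8-bit data byte to a 10-bit code group, so the
# serial line rate must be 10/8 = 1.25x the payload data rate.
data_rate_mbps = 1000                       # Gigabit Ethernet payload rate
line_rate_mbps = data_rate_mbps * 10 // 8   # rate on the wire
overhead = (line_rate_mbps - data_rate_mbps) / data_rate_mbps
print(line_rate_mbps)        # 1250
print(f"{overhead:.0%}")     # 25%
```

The same 10/8 ratio explains why 1000BASE-KX and Fibre Channel gear quote 1.25 GBd signalling for a 1 Gbit/s payload rate.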
The short segment length is due to the high signal transmission rate. Although 1000BASE-CX is still used for specific applications where cabling is done by IT professionals (for instance, the IBM BladeCenter uses 1000BASE-CX for the Ethernet connections between the blade servers and the switch modules), 1000BASE-T has succeeded it for general copper wiring use. 1000BASE-KX is part of the IEEE 802.3ap standard for Ethernet operation over electrical backplanes. This standard defines one to four lanes of backplane links, with one RX and one TX differential pair per lane, at link bandwidths ranging from 100 Mbit/s to 10 Gbit/s. The 1000BASE-KX variant uses a 1.25 GBd electrical signalling speed. 1000BASE-SX is an optical fiber Gigabit Ethernet standard for operation over multi-mode fiber using a 770 to 860 nanometer, near-infrared light wavelength. The standard specifies a maximum length of 220 meters for 62.5 µm/160 MHz×km multi-mode fiber, 275 m for 62.5 µm/200 MHz×km, 500 m for 50 µm/400 MHz×km, and 550 m for 50 µm/500 MHz×km multi-mode fiber.
In practice, with good quality fiber and terminations, 1000BASE-SX will usually work over longer distances. This standard is popular for intra-building links in large office buildings, co-location facilities and carrier-neutral Internet exchanges. Optical power specifications of the SX interface: minimum output power = −9.5 dBm; minimum receive sensitivity = −17 dBm. 1000BASE-LX is an optical fiber Gigabit Ethernet standard specified in IEEE 802.3 Clause 38 which uses a long-wavelength laser with a maximum RMS spectral width of 4 nm. 1000BASE-LX is specified to work over a distance of up to 5 km over 10 µm single-mode fiber. 1000BASE-LX can also run over all common types of multi-mode fiber with a maximum segment length of 550 m. For link distances greater than 300 m, the use of a special launch-conditioning patch cord may be required; this launches the laser at a precise offset from the center of the fiber, which causes it to spread across the diameter of the fiber core, reducing the effect known as differential mode delay, which occurs when the laser couples onto only a small number of available modes in multi-mode fiber.
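The SX power figures quoted above imply a simple optical link budget. The sketch below shows that arithmetic; the attenuation and connector-loss figures are assumed for illustration only and are not taken from the standard (in practice SX reach is limited by modal dispersion, not attenuation):

```python
# 1000BASE-SX link budget from the worst-case figures quoted above.
tx_min_dbm = -9.5     # minimum transmitter output power
rx_sens_dbm = -17.0   # minimum receiver sensitivity
budget_db = tx_min_dbm - rx_sens_dbm
print(budget_db)      # 7.5 dB available for fiber and connector losses

# Assumed values for illustration: ~3.5 dB/km multi-mode attenuation
# at 850 nm, and a 1.5 dB allowance for connectors and splices.
attenuation_db_per_km = 3.5
connector_loss_db = 1.5
attenuation_limited_reach_km = (budget_db - connector_loss_db) / attenuation_db_per_km
print(round(attenuation_limited_reach_km, 2))
```

Since this attenuation-limited figure comfortably exceeds the 220–550 m limits in the standard, it is the modal-bandwidth (dispersion) limit, not the power budget, that caps SX segment lengths.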
Frame check sequence
A frame check sequence (FCS) is an error-detecting code added to a frame in a communications protocol. Frames are used to send payload data from a source to a destination. All frames, and the bits and fields contained within them, are susceptible to errors from a variety of sources. The FCS field contains a number calculated by the source node based on the data in the frame. This number is added to the end of the frame and sent with it. When the destination node receives the frame, the FCS number is recalculated and compared with the FCS number included in the frame. If the two numbers are different, an error is assumed and the frame is discarded. The FCS provides error detection only; error recovery must be performed through separate means. Ethernet, for example, specifies that a damaged frame should be discarded, and does not specify any action to cause the frame to be retransmitted. Other protocols, notably the Transmission Control Protocol (TCP), can notice the data loss and initiate retransmission and error recovery.
The FCS is transmitted in such a way that the receiver can compute a running sum over the entire frame, together with the trailing FCS, expecting to see a fixed result when the frame is correct. For Ethernet and other IEEE 802 protocols, this fixed result, known as the magic number or CRC32 residue, is 0xC704DD7B. When transmitted and used in this way, the FCS still appears before the frame-ending delimiter. By far the most popular FCS algorithm is a cyclic redundancy check (CRC), used in Ethernet and other IEEE 802 protocols with 32 bits, in X.25 with 16 or 32 bits, in HDLC with 16 or 32 bits, in Frame Relay with 16 bits, in the Point-to-Point Protocol with 16 or 32 bits, and in other data link layer protocols. Protocols of the Internet protocol suite tend to use simpler checksums instead.
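The residue check described above can be demonstrated with Python's standard zlib CRC-32, which uses the same polynomial as Ethernet. Note that in zlib's post-inverted, reflected convention the constant residue appears as 0x2144DF1C; this is the same value that, in the register bit order used by the IEEE specifications, reads as 0xC704DD7B:

```python
import struct
import zlib

def append_fcs(data: bytes) -> bytes:
    """Compute the CRC-32 FCS over the frame data and append it
    least-significant byte first, as Ethernet transmits it."""
    return data + struct.pack("<I", zlib.crc32(data))

def fcs_ok(frame: bytes) -> bool:
    """Receiver-side check: a running CRC over the payload plus the
    trailing FCS yields a fixed residue (0x2144DF1C in zlib's convention)
    whenever the frame arrived intact."""
    return zlib.crc32(frame) == 0x2144DF1C

frame = append_fcs(b"payload bytes of an example frame")
print(fcs_ok(frame))    # True: the residue matches

# Flip one bit to simulate line corruption; the check now fails.
damaged = frame[:-1] + bytes([frame[-1] ^ 0x01])
print(fcs_ok(damaged))  # False
```

The appeal of the fixed-residue form is that receiving hardware can run one continuous CRC over everything it receives and simply compare the final register value against a constant, with no need to first locate and strip the FCS field.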