1.
Crossover cable
–
A crossover cable connects two devices of the same type, for example DTE-DTE or DCE-DCE, which would normally be connected asymmetrically; such a modified cable is also called a crosslink. This distinction between device types was introduced by IBM. Typical uses include connecting two PCs directly, or linking two or more hubs, switches or routers together, possibly to work as one wider device. Examples include the null modem for RS-232, the Ethernet crossover cable, and the rollover cable. A loopback is a degraded, one-side crosslinked connection that connects a port to itself, usually for test purposes. A T1 cable uses T568B pairs 1 and 2, so connecting two T1 CSU/DSU devices back-to-back requires a cable that swaps these pairs: pins 1, 2, 4 and 5 are connected to pins 4, 5, 1 and 2 respectively. A 56K DDS cable uses T568B pairs 2 and 4, so a crossover cable for these devices swaps those pairs. See also: electrical connector wiring, pinout, structured cabling, TIA/EIA-568-B.
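The pair swaps described above can be modeled as pin-number mappings; the following is an illustrative sketch (the names and the validity check are my own, not from any standard):

```python
# Illustrative pin mappings for the crossovers described above.
# 10/100 Ethernet crossover: the pair on pins 1/2 is swapped with pins 3/6.
ETHERNET_CROSSOVER = {1: 3, 2: 6, 3: 1, 6: 2, 4: 4, 5: 5, 7: 7, 8: 8}

# T1 back-to-back: T568B pairs 1 and 2 swapped, i.e. pins 1,2,4,5 -> 4,5,1,2.
T1_CROSSOVER = {1: 4, 2: 5, 4: 1, 5: 2, 3: 3, 6: 6, 7: 7, 8: 8}

def is_valid_crossover(mapping):
    """Applying the swap twice must return every pin to itself."""
    return all(mapping[v] == k for k, v in mapping.items())
```

Because each mapping is a pure swap, the same cable works regardless of which end is plugged into which device.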
2.
Ethernet
–
Ethernet /ˈiːθərnɛt/ is a family of computer networking technologies commonly used in local area networks, metropolitan area networks and wide area networks. It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3. Over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET. The original 10BASE5 Ethernet uses coaxial cable as a medium, while the newer Ethernet variants use twisted pair. Over the course of its history, Ethernet data transfer rates have increased from the original 2.94 megabits per second to the latest 100 gigabits per second. The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames. In terms of the OSI model, Ethernet provides services up to and including the data link layer. Since its commercial release, Ethernet has retained a good degree of backward compatibility. Features such as the 48-bit MAC address and the Ethernet frame format have influenced other networking protocols. The primary alternative for some uses of contemporary LANs is Wi-Fi, a wireless protocol standardized as IEEE 802.11. Ethernet was developed at Xerox PARC between 1973 and 1974. It was inspired by ALOHAnet, which Robert Metcalfe had studied as part of his PhD dissertation. In 1975, Xerox filed a patent application listing Metcalfe, David Boggs and Chuck Thacker among the inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper. Metcalfe left Xerox in June 1979 to form 3Com, and he convinced Digital Equipment Corporation, Intel, and Xerox to work together to promote Ethernet as a standard. The so-called DIX standard, for Digital/Intel/Xerox, specified 10 Mbit/s Ethernet with 48-bit destination and source addresses; it was published on September 30, 1980 as "The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications".
Version 2 of the specification was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983. Ethernet initially competed with two largely proprietary systems, Token Ring and Token Bus; in the process, 3Com became a major company. 3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, an Ethernet adapter card for the IBM PC was released in 1982, and by 1985 3Com had sold 100,000. Parallel-port-based Ethernet adapters were produced for a time, with drivers for DOS. By the early 1990s, Ethernet had become so prevalent that it was a must-have feature for modern computers, and Ethernet ports began to appear on some PCs and most workstations. This process was sped up with the introduction of 10BASE-T and its relatively small modular connector. Since then, Ethernet technology has evolved to meet new bandwidth demands. In addition to computers, Ethernet is now used to interconnect appliances and other personal devices.
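To illustrate the frame structure mentioned above, here is a minimal sketch that unpacks an Ethernet II header, i.e. the 48-bit destination and source MAC addresses followed by the 16-bit EtherType; the sample frame bytes are made up for the example:

```python
import struct

def parse_ethernet_header(frame: bytes):
    """Parse the 14-byte Ethernet II header: destination MAC,
    source MAC (48 bits each) and the 16-bit EtherType, big-endian."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    fmt = lambda mac: ":".join(f"{b:02x}" for b in mac)
    return fmt(dst), fmt(src), ethertype

# Example: a broadcast frame carrying IPv4 (EtherType 0x0800).
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload"
dst, src, ethertype = parse_ethernet_header(frame)
```

The `!` in the format string selects network (big-endian) byte order, matching how the fields appear on the wire.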
3.
Network interface controller
–
A network interface controller (NIC) is a computer hardware component that connects a computer to a computer network. Early network interface controllers were commonly implemented on expansion cards that plugged into a computer bus; the low cost and ubiquity of the Ethernet standard means that most newer computers have a network interface built into the motherboard. The network controller implements the electronic circuitry required to communicate using a physical layer and data link layer standard such as Ethernet or Fibre Channel. The NIC allows computers to communicate over a network, either by using cables or wirelessly. Although other network technologies exist, IEEE 802 networks including the Ethernet variants have achieved near-ubiquity since the mid-1990s; newer server motherboards may even have dual network interfaces built in. Ethernet capability is integrated into the motherboard chipset or implemented via a low-cost dedicated Ethernet chip. A separate network card is not required unless additional interfaces are needed or some other type of network is used. The NIC may use one or more techniques to indicate the availability of packets to transfer. Interrupt-driven I/O is where the peripheral alerts the CPU that it is ready to transfer data. NICs may also use one or more of the following techniques to transfer packet data. Programmed input/output is where the CPU moves the data between the NIC and memory. Direct memory access (DMA) is where some device other than the CPU assumes control of the bus to move data between the NIC and memory. This removes load from the CPU but requires more logic on the card; in addition, a packet buffer on the NIC may not be required and latency can be reduced. There are two types of DMA: third-party DMA, in which a DMA controller other than the NIC performs transfers, and bus-mastering DMA, in which the NIC itself performs them. An Ethernet network controller typically has an 8P8C socket where the network cable is connected.
Older NICs also supplied BNC or AUI connections. A few LEDs inform the user whether the network is active and whether or not data transmission occurs. Ethernet network controllers typically support 10 Mbit/s, 100 Mbit/s and 1000 Mbit/s Ethernet; such controllers are designated as 10/100/1000, meaning that they can support a notional maximum transfer rate of 10, 100 or 1000 Mbit/s. 10 Gigabit Ethernet NICs are also available and, as of November 2014, are beginning to be available on computer motherboards. Multiqueue NICs provide multiple transmit and receive queues, allowing packets received by the NIC to be assigned to one of its receive queues. The hardware-based distribution of the interrupts, described above, is referred to as receive-side scaling (RSS); purely software implementations also exist, such as receive packet steering and receive flow steering. Examples of such implementations are RFS and Intel Flow Director. With multiqueue NICs, additional performance improvements can be achieved by distributing outgoing traffic among different transmit queues. By assigning different transmit queues to different CPUs/cores, various operating system internal contentions can be avoided. Some of these NICs also support accessing local and remote memory without involving the remote CPU.
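The receive-side scaling idea above can be sketched as hashing each flow to a fixed receive queue, so all packets of one flow land on the same queue and CPU. Real NICs use a Toeplitz hash with an indirection table; the CRC32 here is merely a stand-in for illustration:

```python
import zlib

NUM_QUEUES = 4  # hypothetical number of receive queues on the NIC

def rss_queue(src_ip, dst_ip, src_port, dst_port, num_queues=NUM_QUEUES):
    """Toy receive-side scaling: hash the flow tuple and map it to a
    receive queue. Packets of the same flow always pick the same queue,
    preserving in-order delivery per flow while spreading flows across CPUs."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % num_queues

# The same flow tuple always maps to the same queue.
q1 = rss_queue("10.0.0.1", "10.0.0.2", 40000, 80)
q2 = rss_queue("10.0.0.1", "10.0.0.2", 40000, 80)
```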
4.
Patch cable
–
A patch cable, patch cord or patch lead is an electrical or optical cable used to connect one electronic or optical device to another for signal routing. Devices of different types are connected with patch cords. Patch cords are usually produced in many different colors so as to be easily distinguishable, and are relatively short, perhaps no longer than two metres. Strictly, however, patch cords refer only to short cords used with patch panels. Patch cord cable differs from standard structured cabling cable in that patch cable is stranded for flexibility, whereas standard cable is solid copper. Because of the patch cord's stranded copper construction, attenuation is higher on patch cords than on solid cable, so short lengths should be adhered to. Patch cords can be as short as 3 inches, to connect stacked components or route signals through a patch bay. As length increases, the cables are usually thicker and/or made with more shielding, to prevent signal loss and the introduction of unwanted radio frequencies and hum. Many patch cords are coaxial cables, with the signal carried through a shielded core. Each end of the cable is attached to a connector so that the cord may be plugged in; connector types vary widely, particularly with adapting cables. A pigtail is similar to a patch cord and is the informal name given to a cable fitted with a connector at one end only. In the context of copper cabling, these cables are sometimes referred to as blunt patch cords. Optical fiber pigtails, in contrast to copper pigtails, can be more accurately described as a connector than as a cable or cord. A fiber pigtail is a single, short, usually tight-buffered optical fiber that has an optical connector pre-installed on one end; the other end of the fiber pigtail is stripped and fusion spliced to a single fiber of a multi-fiber trunk. Splicing of pigtails to each fiber in the trunk breaks out the cable into its component fibers for connection to the end equipment.
A variety of cables are used to carry electrical signals in sound recording studios. Microphones are typically connected to mixing boards or PA systems with XLR microphone cables, which use three-pin XLR connectors. DJs using record players connect their turntables to mixers or PA systems with cables using stereo RCA connectors; where the connector types do not match, DJs can either use adapters or special cables. Heavier-gauge cables are used for carrying amplified signals from amplifiers to speakers. ¼-inch TRS phone connector cables can carry stereo signals, so they are used for stereo headphones and for some patching purposes. The patch bay is a panel of audio connectors where XLR cables terminate. Without it, the cables could get tangled or mixed up, and it would be hard to know, when faced with 20 connectors at the end of the cable run, which cable was associated with which microphone or instrument. The patch bay is numbered, so that the engineer can note which microphone or instrument is plugged into each numbered connection. See also: cable management, crossover cable.
5.
Network switch
–
A network switch is a computer networking device that connects devices together on a computer network by using packet switching to receive, process, and forward data to the destination device. Unlike less advanced network hubs, a network switch forwards data only to the one or more devices that need to receive it. A network switch is a multiport network bridge that uses hardware addresses to process and forward data at the data link layer of the OSI model. Switches for Ethernet are the most common form; the first Ethernet switch was introduced by Kalpana in 1990. Switches also exist for other types of networks, including Fibre Channel, Asynchronous Transfer Mode, and InfiniBand. A switch is a device in a network that electrically and logically connects together other devices. Multiple data cables are plugged into a switch to enable communication between different networked devices. Switches manage the flow of data across a network by transmitting a received network packet only to the one or more devices for which the packet is intended. Each networked device connected to a switch can be identified by its network address; this maximizes the security and efficiency of the network. Because broadcasts are still forwarded to all connected devices, the newly formed network segment continues to be a broadcast domain. An Ethernet switch operates at the data link layer of the OSI model to create a separate collision domain for each switch port. In full duplex mode, each switch port can simultaneously transmit and receive. With a repeater hub, by contrast, only a single transmission could take place at a time for all ports combined, so they would all share the bandwidth and run in half duplex; the necessary arbitration would result in collisions, requiring retransmissions. The network switch plays an integral role in most modern Ethernet local area networks.
Mid-to-large sized LANs contain a number of linked managed switches. In most of these cases, the end-user device contains a router and components that interface to the particular physical broadband technology. User devices may include a telephone interface for Voice over IP. Segmentation involves the use of a bridge or a switch to split a larger collision domain into smaller ones in order to reduce collision probability; in the extreme case, each device is located on a dedicated switch port. In contrast to an Ethernet hub, there is a separate collision domain on each of the switch ports. This allows computers to have dedicated bandwidth on point-to-point connections to the network and also to run in full-duplex mode without collisions. Full-duplex mode has only one transmitter and one receiver per collision domain. Switches may operate at one or more layers of the OSI model, including the data link layer; a device that operates simultaneously at more than one of these layers is known as a multilayer switch. In switches intended for commercial use, built-in or modular interfaces make it possible to connect different types of networks, including Ethernet, Fibre Channel, RapidIO and ATM. This connectivity can be at any of the layers mentioned. While the layer-2 functionality is adequate for bandwidth-shifting within one technology, interconnecting technologies such as Ethernet and token ring is more easily performed at layer 3 or via routing.
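The forwarding behavior described above (forward to known destinations, flood unknowns and broadcasts) rests on a learned table of hardware addresses. A minimal sketch, with invented port numbers and shortened MAC addresses:

```python
class LearningSwitch:
    """Toy model of layer-2 forwarding: learn the source address on the
    ingress port, forward known unicast to one port, otherwise flood."""
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}   # MAC address -> port it was last seen on

    def receive(self, ingress_port, src_mac, dst_mac):
        self.mac_table[src_mac] = ingress_port       # learn the sender
        if dst_mac in self.mac_table:                # known unicast
            return [self.mac_table[dst_mac]]
        # Unknown destination (or broadcast): flood to all other ports.
        return [p for p in range(self.num_ports) if p != ingress_port]

sw = LearningSwitch(4)
flood = sw.receive(0, "aa:aa", "bb:bb")   # bb:bb unknown, so flood
reply = sw.receive(1, "bb:bb", "aa:aa")   # aa:aa was learned on port 0
```

After the first exchange, traffic between the two hosts no longer disturbs the other ports, which is exactly what distinguishes a switch from a hub.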
6.
Ethernet hub
–
An Ethernet hub has multiple input/output ports, in which a signal introduced at the input of any port appears at the output of every port except the original incoming one. A hub works at the physical layer of the OSI model. Repeater hubs also participate in collision detection, forwarding a jam signal to all ports if a collision is detected. In addition to standard 8P8C ports, some hubs may also come with a BNC or Attachment Unit Interface (AUI) connector to allow connection to legacy 10BASE2 or 10BASE5 network segments. Hubs are now largely obsolete, having been replaced by network switches except in very old installations or specialized applications. As of 2011, connecting network segments by repeaters or hubs is deprecated by IEEE 802.3. A network hub is an unsophisticated device in comparison with a switch. As a multiport repeater it works by repeating bits received from one of its ports to all other ports. It is aware of physical layer packets, that is, it can detect their start, but a hub cannot further examine or manage any of the traffic that comes through it: any packet entering any port is rebroadcast on all other ports. A hub/repeater has no memory to store any data in, so a packet must be transmitted while it is being received, and is lost when a collision occurs. Due to this, hubs can only run in half duplex mode. Consequently, due to the larger collision domain, packet collisions are more frequent in networks connected using hubs than in networks connected using more sophisticated devices. The need for hosts to be able to detect collisions limits the number of hubs in a network. For 10 Mbit/s networks built using repeater hubs, the 5-4-3 rule must be followed: up to five segments are allowed between any two end stations. For 10BASE-T networks, up to five segments and four repeaters are allowed between any two hosts. For 100 Mbit/s networks, the limit is reduced to 3 segments between any two end stations, and even that is only allowed if the hubs are of Class II.
Most hubs detect typical problems, such as collisions and jabbering, on individual ports. Thus, hub-based twisted-pair Ethernet is generally more robust than coaxial-cable-based Ethernet. To pass data through the repeater in a usable fashion from one segment to the next, the framing and data rate must be the same on each segment. This means that a repeater cannot connect segments of different speeds, such as a 10 Mbit/s 802.3 segment to a 100 Mbit/s 802.3u segment. 100 Mbit/s hubs and repeaters come in two different speed grades: Class I hubs delay the signal for a maximum of 140 bit times, and Class II hubs delay the signal for a maximum of 92 bit times. In the early days of Fast Ethernet, Ethernet switches were relatively expensive devices, and hubs suffered from the problem that if any 10BASE-T devices were connected then the whole network needed to run at 10 Mbit/s. Therefore, a compromise between a hub and a switch was developed, known as a dual-speed hub. These devices make use of an internal two-port switch, bridging the 10 Mbit/s and 100 Mbit/s segments. When a network device becomes active on any of the physical ports, it is connected to the segment matching its speed. This obviated the need for an all-or-nothing migration to Fast Ethernet networks. These devices are considered hubs because the traffic between devices connected at the same speed is not switched.
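The repeat-to-all, half-duplex behavior described above can be contrasted with the switch model in a few lines; this is a toy sketch with invented port numbering:

```python
def hub_repeat(ingress_port, num_ports):
    """A repeater hub has no address table: every signal received on one
    port is repeated out of all other ports."""
    return [p for p in range(num_ports) if p != ingress_port]

def hub_transmit(active_ports, num_ports):
    """Only one station may transmit at a time (half duplex). If two or
    more transmit simultaneously, a collision occurs and a jam signal is
    propagated to every port."""
    if len(active_ports) > 1:
        return ("collision", list(range(num_ports)))
    return ("ok", hub_repeat(active_ports[0], num_ports))
```

Compare this with a learning switch, which would deliver a known unicast frame to exactly one port and let other ports transmit concurrently.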
7.
Medium-dependent interface
–
A medium dependent interface (MDI) describes the interface, in a computer network, from a physical layer implementation to the physical medium used to carry the transmission. Ethernet over twisted pair also defines a medium dependent interface crossover (MDI-X) interface. Auto MDI-X ports on newer network interfaces detect whether the connection would require a crossover, and automatically choose the MDI or MDI-X configuration to properly match the other end of the link. The popular Ethernet family defines common medium dependent interfaces. For 10BASE5, connection to the coaxial cable was made with either a vampire tap or a pair of N connectors. For 10BASE2, the connection to the cable was typically made with a single BNC connector to which a T-piece was attached. For twisted pair cabling, 8P8C modular connectors are used; for fiber, a variety of connectors are used depending on manufacturer and physical space availability. With 10BASE-T and 100BASE-TX, separate twisted pairs are used for the two directions of communication. Since twisted pair cables are conventionally wired pin to pin, there are two different pinouts used for the medium dependent interface, referred to as MDI and MDI-X. When connecting an MDI port to an MDI-X port, a straight-through cable is used, while to connect two MDI ports or two MDI-X ports a crossover cable must be used. Conventionally, MDI is used on end devices while MDI-X is used on hubs; some network hubs or switches have an MDI port to connect to other hubs or switches without a crossover cable. The terminology generally refers to variants of the Ethernet over twisted pair technology that use a female 8P8C port connection on a computer; the X refers to the fact that transmit wires on an MDI device must be connected to receive wires on an MDI-X device. Straight-through cables connect pins 1 and 2 on an MDI device to pins 1 and 2 on an MDI-X device; similarly, pins 3 and 6 are receive pins on an MDI device and transmit pins on an MDI-X device.
The general convention was for network hubs and switches to use the MDI-X configuration, while all other devices, such as personal computers and workstations, used MDI. Some routers and other devices had a switch to select between the two configurations. Thus, connecting MDI to MDI-X requires a straight-through cable, while connecting MDI to MDI or MDI-X to MDI-X requires a crossover somewhere in the cabling, so that the total number of crossovers is odd. When using more complicated setups through multiple patch panels in structured cabling, it is a good idea to have all necessary crossovers on one side, i.e. either on the central hub/switch or on each secondary hub/switch. The confusion of needing two different kinds of cables for anything but hierarchical star network topologies prompted a more automatic solution: auto MDI-X. As long as it is enabled on either end of a link, either type of cable can be used. For auto MDI-X to operate correctly, the data rate and duplex setting on the interface must be set to auto-negotiation. Auto MDI-X was developed by Hewlett-Packard engineers Daniel Joseph Dove and Bruce W. Melvin. A pseudo-random number generator decides whether a network port will attach its transmitter or its receiver to each of the twisted pairs used to auto-negotiate the link. When two auto MDI-X ports are connected together, which is normal for modern products, the resolution time is typically under 500 ms.
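The pseudo-random resolution described above can be sketched as each end repeatedly picking MDI or MDI-X until the two ends disagree, at which point every transmitter faces a receiver. This is a toy model, not the actual 802.3 algorithm; the cycle count and seed are invented for the example:

```python
import random

def auto_mdix_resolve(rng=random.Random(42), max_cycles=50):
    """Toy model of auto MDI-X resolution: each end randomly selects a
    wiring each cycle; the link comes up once the two ends differ."""
    for cycle in range(1, max_cycles + 1):
        a = rng.choice(["MDI", "MDI-X"])
        b = rng.choice(["MDI", "MDI-X"])
        if a != b:
            return cycle, a, b
    return None  # vanishingly unlikely within 50 cycles

cycles, end_a, end_b = auto_mdix_resolve()
```

Because each cycle succeeds with probability 1/2, resolution is overwhelmingly likely within a handful of cycles, consistent with the sub-500 ms figure quoted above.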
8.
Modular connector
–
A modular connector is an electrical connector that was originally designed for use in telephone wiring, but has since been used for many other purposes. Many applications that once used a bulkier, more expensive connector have converted to modular connectors. Probably the best known applications of modular connectors are telephone jacks and Ethernet jacks. Modular connectors were originally used in the registration interface system mandated by the Federal Communications Commission in 1976, in which they became known as registered jacks. The registered jack specifications define the wiring patterns of the jacks, not the physical dimensions or geometry of the connectors; these latter aspects are instead covered by ISO standard 8877, first used in ISDN systems. TIA/EIA-568 is a standard for data circuits wired on modular connectors. Other systems exist for assigning signals to modular connectors; physical interchangeability of plugs and jacks does not ensure interoperation, nor protection from electrical damage to circuits. For example, modular cables and connectors have been used to supply low-voltage AC or DC power. Modular connectors also go by the names modular phone jack/plug and RJ connector. The term modular connector arose from its original use in a novel system of cabling designed to make telephone equipment more modular. This includes the 4P4C handset connector. A very popular use of 8P8C today is Ethernet over twisted pair, and that may be the best known context in which the name RJ45 is used, even though it has little to do with the original RJ45 registered-jack standard. Likewise, the 4P4C connector is sometimes called RJ9 or RJ22. Modular connectors were originally developed and patented by General Cable Corp in 1974. They replaced the hard-wired connections on most Western Electric telephones around 1976; at the same time, they began to replace screw terminals and larger 3- and 4-pin telephone jacks in buildings.
Modular connectors have gender: plugs are considered to be male, while jacks or sockets are considered to be female. Plugs are used to terminate loose cables and cords, while jacks are used for fixed locations on surfaces such as walls and panels, and on equipment. Other than telephone extension cables, cables with a plug on one end and a jack on the other are uncommon. Instead, cables are connected using an adapter, which consists of two female jacks wired back-to-back. Modular connectors are designed to latch together: a spring-loaded tab on the plug snaps into a jack so that the plug cannot be pulled out. To remove the plug, the latching tab must be depressed. The standard and most common way to install a jack in a wall or panel is with the tab side down; this usually makes it easier to operate the tab when removing the plug, because the person grabs the plug with the thumb on top. The modular connector suffers from a design weakness, however: the fragile latching tab easily snags on other cables and breaks off. When this happens, the connector is still functional, but the crucial latching feature is lost. Some higher quality cables have a flexible sleeve called a boot over the plug, or a special tab design, to prevent this. These cables are marketed as snagless.
9.
Ethernet over twisted pair
–
Ethernet over twisted pair technologies use twisted-pair cables for the physical layer of an Ethernet computer network. Early Ethernet had used various grades of coaxial cable, but twisted-pair work beginning in 1984 led to the development of 10BASE-T and its successors 100BASE-TX and 1000BASE-T, supporting speeds of 10, 100 and 1,000 Mbit/s respectively. All three standards define both full-duplex and half-duplex communication; however, half-duplex operation at gigabit speed is not supported by any existing hardware. All these standards use 8P8C connectors, and the cables from Cat 3 to Cat 8 have four pairs of wires, though 10BASE-T and 100BASE-TX use only two of the pairs. The Institute of Electrical and Electronics Engineers standards association ratified several versions of the technology. The first two designs were StarLAN, standardized in 1986 at one megabit per second, and LattisNet, developed in January 1987. Both were developed before the 10BASE-T standard and used different signalling. In 1988 AT&T released StarLAN 10, named for working at 10 Mbit/s; the StarLAN 10 signalling was used as the basis of 10BASE-T. The centralized star topology was a more common approach to cabling than the bus of the earlier standards and was easier to manage. Using point-to-point links instead of a shared bus greatly simplified troubleshooting, as the shared bus had been prone to hard-to-localize failures. Exchanging cheap repeater hubs for more advanced switching hubs provided an upgrade path. Mixing different speeds in a single network became possible with the arrival of Fast Ethernet. Depending on cable grades, subsequent upgrading to Gigabit Ethernet or faster may be as easy as replacing the network switches. The common names for the standards derive from aspects of the physical media. The leading number refers to the transmission speed in Mbit/s. BASE denotes that baseband transmission is used. The T designates twisted pair cable, where the pair of wires for each signal is twisted together to reduce radio frequency interference and crosstalk between pairs.
Where there are multiple standards for the same transmission speed, they are distinguished by a letter or digit following the T, such as TX, referring to the encoding method. Twisted-pair Ethernet standards are such that the majority of cables can be wired straight-through; it is conventional to wire cables for 10 or 100 Mbit/s Ethernet to either the T568A or T568B standards. The terms used in the explanations of the 568 standards, tip and ring, refer to older communication technologies, and equate to the positive and negative parts of the connections. A 10BASE-T or 100BASE-TX node such as a PC uses a connector wiring called MDI (medium dependent interface), transmitting on pins 1 and 2 and receiving on pins 3 and 6; an infrastructure node accordingly uses a connector wiring called MDI-X, transmitting on pins 3 and 6 and receiving on pins 1 and 2. These ports are connected using a straight-through cable, so each transmitter talks to the receiver on the other side. Nodes can have two types of ports, MDI or MDI-X; end devices conventionally have MDI ports, while hubs and switches have MDI-X ports.
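The pin assignments above determine which cable a link needs: a straight-through cable works only when one end's transmit pins line up with the other end's receive pins. A small sketch of that rule:

```python
# Pin assignments per the text: MDI transmits on 1/2 and receives on 3/6;
# MDI-X is the mirror image (10BASE-T / 100BASE-TX, two pairs in use).
TX_PINS = {"MDI": (1, 2), "MDI-X": (3, 6)}
RX_PINS = {"MDI": (3, 6), "MDI-X": (1, 2)}

def cable_needed(port_a, port_b):
    """Straight-through works when A's transmit pins meet B's receive
    pins pin-for-pin; otherwise a crossover cable is required."""
    if TX_PINS[port_a] == RX_PINS[port_b]:
        return "straight-through"
    return "crossover"
```

This reproduces the convention stated earlier: MDI to MDI-X takes a straight-through cable, while like-to-like connections need a crossover.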
10.
Fast Ethernet
–
In computer networking, Fast Ethernet is a collective term for a number of Ethernet standards that carry traffic at the nominal rate of 100 Mbit/s. Of the Fast Ethernet standards, 100BASE-TX is by far the most common. Fast Ethernet was introduced in 1995 as the IEEE 802.3u standard and remained the fastest version of Ethernet for three years before the introduction of Gigabit Ethernet. The acronym GE/FE is sometimes used for devices supporting both standards. Fast Ethernet is an extension of the 10-megabit Ethernet standard. It runs on UTP data or optical fiber cable in a star wired bus topology, similar to 10BASE-T where all cables are attached to a hub, and Fast Ethernet devices are backward compatible with existing 10BASE-T systems. Fast Ethernet is sometimes referred to as 100BASE-X, where X is a placeholder for the FX and TX variants. The standard specifies the use of CSMA/CD for media access control. A full-duplex mode is also specified, and in practice all modern networks use Ethernet switches and operate in full-duplex mode. The 100 in the type designation refers to the transmission speed of 100 Mbit/s, while the letter following the dash refers to the medium that carries the signal. A Fast Ethernet adapter can be divided into a media access controller (MAC), which deals with the higher-level issues of medium availability, and a physical layer interface (PHY). The MAC may be linked to the PHY by a four-bit 25 MHz synchronous parallel interface known as the media-independent interface (MII). In rare cases the MII may be an external connection, but it is usually a connection between ICs in a network adapter or even within a single IC. The specs are based on the assumption that the interface between MAC and PHY will be an MII, but they do not require it. Repeaters may use the MII to connect to multiple PHYs for their different interfaces. The MII fixes the theoretical maximum data bit rate for all versions of Fast Ethernet to 100 Mbit/s.
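The 100 Mbit/s ceiling imposed by the MII follows directly from its geometry, four data bits transferred per cycle of a 25 MHz clock:

```python
# The MII moves one 4-bit nibble per 25 MHz clock cycle, which caps
# every Fast Ethernet variant at 4 * 25e6 = 100 Mbit/s.
MII_WIDTH_BITS = 4
MII_CLOCK_HZ = 25_000_000

mii_rate_bits_per_s = MII_WIDTH_BITS * MII_CLOCK_HZ
```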
100BASE-T is any of several Fast Ethernet standards for twisted pair cables, including 100BASE-TX and 100BASE-T4; the segment length for a 100BASE-T cable is limited to 100 metres. All are or were standards under IEEE 802.3, and almost all 100BASE-T installations are 100BASE-TX. In the early days of Fast Ethernet, much vendor advertising centered on claims that a given vendor's competing standard would work better with existing cables than the others. In practice, most networks had to be rewired for 100-megabit speed whether or not there had supposedly been Cat 3 or Cat 5 cable runs. 100BASE-TX is the predominant form of Fast Ethernet, and runs over two wire-pairs inside a category 5 or above cable. Like 10BASE-T, the pairs in a standard connection are terminated on pins 1, 2, 3 and 6.
11.
Twisted pair
–
Twisted pair cabling was invented by Alexander Graham Bell. In balanced pair operation, the two wires carry equal and opposite signals, and the destination detects the difference between the two; this is known as differential mode transmission. Noise sources introduce signals into the wires by coupling of electric or magnetic fields; the noise thus produces a common-mode signal which is canceled at the receiver when the difference signal is taken. This problem is especially apparent in telecommunication cables where pairs in the same cable lie next to each other for many miles: one pair can induce crosstalk in another, and it is additive along the length of the cable. Twisting the pairs counters this effect, as on each half twist the wire nearest to the noise source is exchanged. Provided the interfering source remains uniform, or nearly so, over the distance of a single twist, the induced noise will remain common-mode. Differential signaling also reduces electromagnetic radiation from the cable, along with the associated attenuation, allowing for greater distance between exchanges. The twist rate makes up part of the specification for a given type of cable. When nearby pairs have equal twist rates, the conductors of the different pairs may repeatedly lie next to each other; for this reason it is commonly specified that, at least for cables containing small numbers of pairs, the twist rates must differ. In contrast to shielded or foiled twisted pair, UTP cable is not surrounded by any shielding. UTP is the primary wire type for telephone usage and is very common for computer networking, especially as patch cables or temporary network connections, due to the high flexibility of the cables. The earliest telephones used telegraph lines, or open-wire single-wire earth return circuits. In the 1880s electric trams were installed in many cities, which induced noise into these circuits.
Lawsuits being unavailing, the telephone companies converted to balanced circuits, with the two wires strung on either side of cross bars on utility poles. As electrical power distribution became more commonplace, this measure proved inadequate. Within a few years, the growing use of electricity again brought an increase of interference, so engineers devised a method called wire transposition to cancel out the interference. In wire transposition, the wires exchange position once every several poles; in this way, the two wires would receive similar EMI from power lines. This represented an early implementation of twisting, with a twist rate of about four twists per kilometre. Such open-wire balanced lines with periodic transpositions still survive today in some rural areas. Twisted-pair cabling was invented by Alexander Graham Bell in 1881; by 1900, the entire American telephone line network was either twisted pair or open wire with transposition to guard against interference. UTP cables are found in many Ethernet networks and telephone systems. For indoor telephone applications, UTP is often grouped into sets of 25 pairs according to a standard 25-pair color code originally developed by AT&T Corporation; a typical subset of these colors shows up in most UTP cables. For urban outdoor telephone cables containing hundreds or thousands of pairs, the cable is divided into small but identical bundles. Each bundle consists of twisted pairs that have different twist rates; the bundles are in turn twisted together to make up the cable.
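The common-mode cancellation described above can be shown numerically: each wire carries an equal-and-opposite half of the signal, noise couples (nearly) equally into both, and taking the difference at the receiver removes the noise. The sample values are invented for the illustration:

```python
# Hypothetical differential signal and a common-mode noise burst that
# couples equally into both wires of the pair.
signal = [5, -3, 8]
noise = [2, 2, 2]

wire_a = [+s + n for s, n in zip(signal, noise)]   # carries +signal
wire_b = [-s + n for s, n in zip(signal, noise)]   # carries -signal

# The receiver takes the difference: the common-mode noise cancels
# and the original signal is recovered (the difference is 2x signal).
received = [(a - b) / 2 for a, b in zip(wire_a, wire_b)]
```

If the noise coupled unequally into the two wires (as happens when the interfering source varies over less than one twist), the residue would not cancel, which is why the twist rate matters.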
12.
Upstream (networking)
–
In computer networking, upstream refers to the direction in which data is transferred from the client to the server. Upstream differs from downstream not only in theory and usage; upstream speeds are also important to users of peer-to-peer software. ADSL and cable modems are asymmetric, with the upstream data rate much lower than the downstream rate. Symmetric connections such as Symmetric Digital Subscriber Line and T1, however, offer identical upstream and downstream rates. If a node A on the Internet is closer to the Internet backbone than a node B, then A is said to be upstream of B, and conversely B is downstream of A. Related to this is the idea of upstream providers: an upstream provider is usually a large ISP that provides Internet access to a local ISP. Hence, the word upstream also refers to the data connection between two ISPs.
13.
TIA/EIA-568
–
ANSI/TIA-568 is a set of telecommunications standards from the Telecommunications Industry Association. The standards address commercial building cabling for telecommunications products and services. As of 2017, the standard is at revision D, replacing the 2009 revision C, the 2001 revision B, the 1995 revision A, and the initial issue of 1991, which are now obsolete. Perhaps the best known features of ANSI/TIA-568 are the pin/pair assignments for eight-conductor 100-ohm balanced twisted-pair cabling; these assignments are named T568A and T568B. The international standard ISO/IEC 11801 provides similar standards for network cables. ANSI/TIA-568 was developed through the efforts of more than 60 contributing organizations including manufacturers, end-users, and consultants. Work on the standard began when the EIA was asked to develop standards for telecommunications cabling systems; EIA agreed to develop a set of standards and formed the TR-42 committee. The work continues to be maintained by TR-42 within the TIA; EIA is no longer in existence, hence "EIA" was removed from the name. The initial standard was released in 1991 and updated to revision A in 1995. The demands placed upon commercial wiring systems increased dramatically over this period due to the adoption of personal computers and data communication networks. The development of twisted-pair cabling and the popularization of fiber optic cables also drove significant change in the standards. These changes were first released in revision C in 2009, which has subsequently been replaced by the D series. ANSI/TIA-568 defines structured cabling system standards for buildings, and between buildings in campus environments. The main standard, ANSI/TIA-568.0-D, defines general requirements, while ANSI/TIA-568-C.2 focuses on components of balanced twisted-pair cable systems, ANSI/TIA-568.3-D addresses components of fiber optic cable systems, and ANSI/TIA-568-C.4 addressed coaxial cabling components.
The intent of these standards is to provide recommended practices for the design and installation of cabling systems that support a wide variety of existing and future services. Developers hope the standards provide a lifespan for commercial cabling systems in excess of ten years. This effort has been successful, as evidenced by the definition of Category 5 cabling in 1991; thus, the process can reasonably be said to have provided at least a nine-year lifespan for premises cabling. All these documents accompany related standards that define commercial pathways and spaces, residential cabling, administration standards, and grounding and bonding. The standard defines categories of unshielded twisted-pair cable systems, with different levels of performance in signal bandwidth, insertion loss, and crosstalk. Generally, increasing category numbers correspond to a cable system suitable for higher rates of data transmission; Category 3 cable was suitable for telephone circuits and data rates up to 16 million bits per second.
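The T568A and T568B pin/pair assignments lend themselves to a small lookup-table sketch. The tables below encode the standard wire colors per pin; the `cable_type` helper and its classification labels are our own illustration, not part of the standard.

```python
# T568A and T568B pin assignments (pin -> wire color). A cable with the
# same assignment on both ends is straight-through; A on one end and B
# on the other swaps the green and orange pairs (pins 1/2 <-> 3/6),
# producing a crossover cable.

T568A = {1: "white/green", 2: "green",  3: "white/orange", 4: "blue",
         5: "white/blue",  6: "orange", 7: "white/brown",  8: "brown"}
T568B = {1: "white/orange", 2: "orange", 3: "white/green", 4: "blue",
         5: "white/blue",   6: "green",  7: "white/brown", 8: "brown"}

def cable_type(end1, end2):
    """Classify a patch cable from the pinouts of its two ends."""
    if end1 == end2:
        return "straight-through"
    pairs = {tuple(sorted((end1[p], end2[p]))) for p in (1, 2, 3, 6)}
    if pairs == {("green", "orange"), ("white/green", "white/orange")}:
        return "crossover"
    return "unknown"

print(cable_type(T568B, T568B))  # straight-through
print(cable_type(T568A, T568B))  # crossover
```

Note this only models the two standard terminations; a cable wired to neither assignment would need a fuller continuity check.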
14.
Gigabit Ethernet
–
In computer networking, Gigabit Ethernet is a term describing various technologies for transmitting Ethernet frames at a rate of a gigabit per second, as defined by the IEEE 802.3-2008 standard. It came into use beginning in 1999, gradually supplanting Fast Ethernet in wired local networks; the cables and equipment are very similar to previous standards and have been very common and economical since 2010. Ethernet was the result of research done at Xerox PARC in the early 1970s, and later evolved into a widely implemented physical and link layer protocol. Fast Ethernet increased speed from 10 to 100 megabits per second; Gigabit Ethernet was the next iteration, increasing the speed to 1000 Mbit/s. The initial standard for Gigabit Ethernet was produced by the IEEE in June 1998 as IEEE 802.3z. 802.3z is commonly referred to as 1000BASE-X, where -X refers to either -CX, -SX, -LX, or -ZX (for the history behind the X, see Fast Ethernet). IEEE 802.3ab, ratified in 1999, defines Gigabit Ethernet transmission over unshielded twisted-pair Category 5, 5e, or 6 cabling, and became known as 1000BASE-T. With the ratification of 802.3ab, Gigabit Ethernet became a desktop technology, as organizations could use their existing copper cabling infrastructure. IEEE 802.3ah, ratified in 2004, added two more gigabit fiber standards, 1000BASE-LX10 and 1000BASE-BX10, as part of a group of protocols known as Ethernet in the First Mile. Initially, Gigabit Ethernet was deployed in high-capacity backbone network links. In 2000, Apple's Power Mac G4 and PowerBook G4 were the first mass-produced personal computers featuring the 1000BASE-T connection, and it quickly became a built-in feature in many other computers. There are five physical layer standards for Gigabit Ethernet using optical fiber, twisted-pair cable, or shielded balanced copper cable. The 1000BASE-X standards use 8b/10b encoding, which inflates the line rate by 25%, from 1000 Mbit/s to 1250 Mbit/s, to ensure a DC-balanced signal; the symbols are sent using NRZ.
Optical fiber transceivers are most often implemented as pluggable modules in SFP form, or GBIC on older devices. IEEE 802.3ab, which defines the widely used 1000BASE-T interface type, uses a different encoding scheme in order to keep the symbol rate as low as possible. IEEE 802.3ap defines Ethernet operation over electrical backplanes at different speeds. Ethernet in the First Mile later added 1000BASE-LX10 and -BX10. 1000BASE-CX is a standard for Gigabit Ethernet connections with maximum distances of 25 meters using balanced shielded twisted pair; the short segment length is due to the high signal transmission rate. 1000BASE-KX is part of the IEEE 802.3ap standard for Ethernet operation over electrical backplanes. This standard defines one to four lanes of backplane links, one RX and one TX differential pair per lane, at link bandwidths ranging from 100 Mbit/s to 10 Gbit/s.
15.
Telephone hybrid
–
A telephone hybrid is the component at the ends of a subscriber line of the public switched telephone network that converts between two-wire and four-wire forms of bidirectional audio paths; hybrids are also used in broadcast facilities to enable the airing of telephone callers. The need for hybrids comes from the nature of analog plain old telephone service home or small business telephone lines, where the two audio directions are combined on a single two-wire pair. Within the telephone network, switching and transmission are almost always four-wire circuits, with the two signals being separated. In older analog networks, conversion to four-wire was required so that repeater amplifiers could be inserted in long-distance links. In today's digital systems, each speech direction must be processed and transported independently; the line cards in a telephone central office switch that are interfaced to analog lines include hybrids that adapt the four-wire network to the two-wire circuits that connect most subscribers. The search for better telephone hybrids and echo cancelers was an important motive for the development of DSP algorithms and hardware at Bell Labs, NEC, and elsewhere. The fundamental principle is that of impedance matching: the incoming signal is applied to both the line and a balancing network that is designed to have the same impedance as the line, and the outgoing signal is derived by subtracting the two, thus canceling the incoming signal from the outgoing signal. Early hybrids were made with transformers configured as hybrid coils that had an extra winding that could be connected out of phase; the name hybrid comes from these special mixed-winding transformers. An effective hybrid has high trans-hybrid loss, which means that relatively little of the incoming audio appears on the outgoing port. Too much leakage can cause echoes when there is a delay in the path, as there is with satellite or mobile phone links.
Echo is a result of a talker's voice traversing to the far-end hybrid and leaking back into the return path. ITU-T Recommendation G.131 describes the relationship of echo delay vs. amplitude to listener annoyance: at 100 ms of delay, 45 dB of return loss is required for less than 1% of test subjects to express dissatisfaction. Good cancellation depends upon the balancing network having a frequency-vs.-impedance characteristic that accurately matches the line. Since telephone line impedances vary depending upon many factors, and the relationship is not always smooth, adaptive DSP-based hybrids are used; these may reach greater than 30 dB trans-hybrid loss, measured with white noise as the send signal. DSP hybrids are called line echo cancellers. Hybrids and cancellers are sometimes combined with echo suppressors, which work on the assumption that usually only one of the two parties to a conversation is speaking at a given time. The suppressor switches a loss into the inactive speech path, thus enhancing the effect of the hybrid at the expense of simultaneous two-way conversation. Despite being inherently four-wire, VoIP systems require hybrids when they interface to two-wire lines; a VoIP-to-telco gateway used to interface a VoIP PBX to analog lines would contain hybrids to perform the required conversion. End-to-end VoIP needs no hybrids unless adaptation to a line is required. These devices often include processing in addition to the hybrid function, such as dynamics control and filtering.
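The relationship between balancing-network accuracy and trans-hybrid loss can be sketched as a one-line calculation. Assuming the balancing network mis-estimates the line so that a fraction of the incoming signal survives the subtraction (the 5% figure below is an arbitrary illustration, not a specification):

```python
import math

# Trans-hybrid loss: how far down (in dB) the incoming audio is on the
# outgoing port after the balancing network's estimate is subtracted.
# `mismatch` is the fraction of the incoming signal left over
# (0.0 would be a perfect impedance match).

def trans_hybrid_loss_db(mismatch):
    return -20 * math.log10(mismatch)

print(round(trans_hybrid_loss_db(0.05), 1))  # 26.0 dB for a 5% residual
```

This shows why the >30 dB figure quoted above demands a residual of only a few percent across the whole voice band, which fixed networks struggle to achieve on real lines.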
16.
10 Gigabit Ethernet
–
10 Gigabit Ethernet is a group of computer networking technologies for transmitting Ethernet frames at a rate of 10 gigabits per second. It was first defined by the IEEE 802.3ae-2002 standard; half-duplex operation and repeater hubs do not exist in 10GbE. Like previous versions of Ethernet, 10GbE can use either copper or fiber cabling; however, because of its bandwidth requirements, higher-grade copper cables are required: Category 6a or Class F/Category 7 cables for lengths up to 100 meters. The 10 Gigabit Ethernet standard encompasses a number of different physical layer standards; a networking device, such as a switch or a network interface controller, may support different PHY types through pluggable PHY modules, such as those based on SFP+. At the time the 10 Gigabit Ethernet standard was developed, a WAN PHY was also specified: it encapsulates Ethernet packets in SONET OC-192c frames and operates at a slightly slower data rate than the local area network PHY. Over the years the Institute of Electrical and Electronics Engineers 802.3 working group has published several standards relating to 10GbE. To implement different 10GbE physical layer standards, many interfaces consist of a standard socket into which different PHY modules may be plugged. Physical layer modules are specified not by a standards body but in multi-source agreements (MSAs). Relevant MSAs for 10GbE include XENPAK, XFP, and SFP+. When choosing a PHY module, a designer considers cost, reach, media type, power consumption, and size. A single point-to-point link can have different MSA pluggable formats on either end as long as the 10GbE optical or copper port type inside the pluggable is identical. XENPAK was the first MSA for 10GbE and had the largest form factor. X2 and XPAK were later competing standards with smaller form factors, but have not been as successful in the market as XENPAK. XFP came after X2 and XPAK and is also smaller. The newest module standard is the enhanced small form-factor pluggable transceiver, generally called SFP+.
Based on the small form-factor pluggable transceiver and developed by the ANSI T11 Fibre Channel group, SFP+ is smaller still, and has become the most popular socket on 10GbE systems. SFP+ modules do only optical-to-electrical conversion, with no clock and data recovery. SFP+ modules share a common physical form factor with legacy SFP modules, allowing higher port density than XFP and the re-use of existing designs for 24 or 48 ports in a 19-inch rack-width blade. Optical modules are connected to a host by either a XAUI, XFI, or SFI interface. XENPAK, X2, and XPAK modules use XAUI to connect to their hosts; XAUI uses a four-lane data channel and is specified in IEEE 802.3 Clause 48. XFP modules use an XFI interface and SFP+ modules use an SFI interface; XFI and SFI use a single-lane data channel and the 64b/66b encoding specified in IEEE 802.3 Clause 49. SFP+ modules can further be grouped into two types of host interfaces: linear or limiting. Limiting modules are preferred except when using old fiber infrastructure, which requires the use of the linear interface provided by 10GBASE-LRM modules. There are two classifications for optical fiber: single-mode and multi-mode. In SMF light follows a single path through the fiber, while in MMF it takes multiple paths, resulting in differential mode delay.
17.
100 Gigabit Ethernet
–
40 Gigabit Ethernet and 100 Gigabit Ethernet are groups of computer networking technologies for transmitting Ethernet frames at rates of 40 and 100 gigabits per second, respectively. The technology was first defined by the IEEE 802.3ba-2010 standard and later by the 802.3bg-2011, 802.3bj-2014, and 802.3bm-2015 standards. The standards define numerous port types with different optical and electrical interfaces and different numbers of optical fiber strands per port. Short distances over twinaxial cable are supported, and 40GBASE-T uses twisted-pair cabling for 40 Gbit/s over up to 30 m. On July 18, 2006, a call for interest for a High Speed Study Group to investigate new standards for high-speed Ethernet was held at the IEEE 802.3 plenary meeting in San Diego. The first 802.3 HSSG study group meeting was held in September 2006. In June 2007, a trade group called "Road to 100G" was formed after the NXTcomm trade show in Chicago. The purpose of the project was to provide for the interconnection of equipment satisfying the requirements of the intended applications. The 802.3ba task force met for the first time in January 2008, and its standard was approved at the June 2010 IEEE Standards Board meeting under the name IEEE Std 802.3ba-2010. On June 17, 2010, the IEEE 802.3ba standard was approved; in March 2011 the IEEE 802.3bg standard was approved. On September 10, 2011, the P802.3bj 100 Gbit/s Backplane and Copper Cable task force was approved; the scope of the project was to specify additions to, and appropriate modifications of, IEEE Std 802.3. On May 10, 2013, the P802.3bm 40 Gbit/s and 100 Gbit/s Fiber Optic Task Force was approved; this project was to specify additions to and appropriate modifications of IEEE Std 802.3, and in addition to add 40 Gbit/s physical layer specifications and management parameters for operation on extended-reach single-mode fiber optic cables. Also on May 10, 2013, the P802.3bq 40GBASE-T Task Force was approved. On June 12, 2014, the IEEE 802.3bj standard was approved; on February 16, 2015, the IEEE 802.3bm standard was approved. On May 12, 2016, the IEEE P802.3cd Task Force started working to define next-generation two-lane 100 Gbit/s PHYs. The IEEE 802.3 working group is concerned with the maintenance and extension of the Ethernet data communications standard. Additions to the 802.3 standard are performed by task forces, which are designated by one or two letters. For example, the 802.3z task force drafted the original Gigabit Ethernet standard. 802.3ba is the designation given to the higher-speed Ethernet task force, which completed its work to modify the 802.3 standard to support speeds higher than 10 Gbit/s in 2010. The speeds chosen by 802.3ba were 40 and 100 Gbit/s, to support both end-point and link aggregation needs; this was the first time two different Ethernet speeds were specified in a single standard. The decision to include both speeds came from pressure to support the 40 Gbit/s rate for local applications and the 100 Gbit/s rate for internet backbones. The standard was announced in July 2007 and was ratified on June 17, 2010. The 40/100 Gigabit Ethernet standards encompass a number of different Ethernet physical layer specifications.
18.
Optical fiber cable
–
An optical fiber cable, also known as fiber optic cable, is a cable containing one or more optical fibers that are used to carry light. The optical fiber elements are typically individually coated with plastic layers. Different types of cable are used for different applications, for example long-distance telecommunication. Optical fiber consists of a core and a cladding layer, selected for total internal reflection due to the difference in the refractive index between the two. In practical fibers, the cladding is coated with a layer of acrylate polymer or polyimide; this coating protects the fiber from damage but does not contribute to its optical waveguide properties. Individual coated fibers then have a tough resin buffer layer and/or core tube extruded around them to form the cable core. Several layers of protective sheathing, depending on the application, are added to form the cable. Rigid fiber assemblies sometimes put light-absorbing glass between the fibers, to prevent light that leaks out of one fiber from entering another; this reduces crosstalk between the fibers, or reduces flare in fiber-bundle imaging applications. For indoor applications, the jacketed fiber is generally enclosed, with a bundle of flexible fibrous polymer strength members like aramid, in a lightweight plastic cover to form a simple cable. Each end of the cable may be terminated with an optical fiber connector to allow it to be easily connected and disconnected from transmitting and receiving equipment. For use in more demanding environments, a much more robust cable construction is required. In loose-tube construction the fiber is laid helically into semi-rigid tubes; this protects the fiber from tension during laying and due to temperature changes. Loose-tube fiber may be dry block or gel-filled; dry block offers less protection to the fibers than gel-filled, but costs considerably less. Instead of a loose tube, the fiber may be embedded in a heavy polymer jacket.
Tight buffer cables are offered for a variety of applications. Breakout cables normally contain a ripcord, two non-conductive dielectric strengthening members, an aramid yarn, and 3 mm buffer tubing with an additional layer of Kevlar surrounding each fiber. The ripcord is a cord of strong yarn that is situated under the jacket of the cable to aid jacket removal. Distribution cables have an overall Kevlar wrapping and a ripcord. These fiber units are commonly bundled with additional steel strength members, again with a helical twist to allow for stretching. A critical concern in outdoor cabling is to protect the fiber from contamination by water; this is accomplished by use of solid barriers such as copper tubes, and water-repellent jelly or water-absorbing powder surrounding the fiber. Finally, the cable may be armored to protect it from environmental hazards. In September 2012, NTT Japan demonstrated a single fiber cable that was able to transfer 1 petabit per second over a distance of 50 kilometers.
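The total internal reflection that confines light to the core can be sketched with Snell's law. The index values below are typical illustrative numbers for silica fiber, not figures from this article:

```python
import math

# Critical angle at the core/cladding boundary: light hitting the
# boundary at an angle (from the normal) greater than theta_c is
# totally internally reflected, since sin(theta_c) = n_cladding / n_core.

def critical_angle_deg(n_core, n_cladding):
    return math.degrees(math.asin(n_cladding / n_core))

# Illustrative indices for a silica core and slightly lower-index cladding
print(round(critical_angle_deg(1.48, 1.46), 1))  # about 80.6 degrees
```

The small index difference means only rays traveling nearly parallel to the fiber axis are guided, which is why the core/cladding contrast, not the coating, defines the waveguide.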
19.
Duplex (telecommunications)
–
A duplex communication system uses a pair of channels or frequencies, hence the term duplex, meaning two parts. The two channels are defined as uplink/downlink or reverse/forward. In a full-duplex system, simultaneous transmission and reception are available, i.e. one can transmit and receive at the same time. In a half-duplex system, each party can communicate with the other but not simultaneously; the communication is one direction at a time. Half-duplex systems utilize separate channels for uplink and downlink, i.e. a transmit channel and a receive channel, but only one user is allowed to transmit on the uplink channel at a time. The transmitted uplink signal is frequency-translated via a radio/repeater to the downlink receive frequency, which is received by all other radios tuned to the downlink/receive frequency. A half-duplex system is thus defined as a system which operates two (hence duplex) dedicated uplink/downlink channels/frequencies: a single path is provided for uplink, and all uplink messages are broadcast via the downlink channel to all users simultaneously via a repeater which performs uplink-to-downlink channel/frequency translation. All cellular and landline PSTNs and PDSNs are full-duplex systems. All full-duplex systems require a channel/frequency translator via a radio/repeater; this is required in order to translate the uplink/transmit transmission of one user to the downlink/receive channel/frequency of the other user. Full-duplex systems are one-to-one private systems, unlike half-duplex systems, which broadcast to all users. In full-duplex Ethernet over twisted pair, this effectively makes the cable itself a collision-free environment and doubles the maximum total transmission capacity supported by each Ethernet connection. In time-division duplexing, commonly referred to as simplex communications, a single channel/frequency is employed for bidirectional communications.
The term simplex communication as applied to single-channel TDM systems predates the term TDD by at least 80 years. Frequency-division duplexing, as with any other duplex system, is defined by two-channel/frequency simultaneous communication: a channel/frequency pair is assigned to each individual user on the system. An FDD system requires frequency translation from user 1's uplink/reverse frequency to user 2's downlink/forward frequency. Full-duplex audio systems like telephones can create echo, which needs to be removed. Echo occurs when the sound coming out of the speaker, originating from the far end, finds its way back into the microphone; the sound then reappears at the source end, but delayed. This feedback path may be acoustic, through the air, or it may be mechanically coupled. Echo cancellation is a signal-processing operation that subtracts the far-end signal from the microphone signal before it is sent back over the network. Echo cancellation is important to the V.32, V.34, and V.56 modem standards. Echo cancelers are available as both software and hardware implementations.
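The subtraction at the heart of echo cancellation can be sketched minimally. Real cancellers estimate the echo path adaptively (e.g. with LMS filters); here the path (a delay of 2 samples and a gain of 0.5) is assumed known, purely for illustration:

```python
# Minimal echo cancellation: subtract a delayed, attenuated copy of the
# far-end signal from the microphone signal, leaving the near-end speech.

def echo_path(far_end, delay, gain, n):
    """Model of the (assumed known) echo path: delay plus attenuation."""
    return [0.0] * delay + [gain * x for x in far_end][: n - delay]

def cancel_echo(mic, far_end, delay, gain):
    est = echo_path(far_end, delay, gain, len(mic))
    return [m - e for m, e in zip(mic, est)]

far_end = [1.0, -1.0, 0.5, 0.0, 0.25]
near_speech = [0.0, 0.0, 0.2, 0.0, 0.0]
# Microphone picks up near-end speech plus the echoed far-end signal.
echo = echo_path(far_end, 2, 0.5, len(near_speech))
mic = [s + e for s, e in zip(near_speech, echo)]

print(cancel_echo(mic, far_end, delay=2, gain=0.5))
```

With the echo path correctly estimated, the output is just the near-end speech; an adaptive canceller spends its effort learning that path while both parties may be talking.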
20.
Optical fiber connector
–
An optical fiber connector terminates the end of an optical fiber, and enables quicker connection and disconnection than splicing. The connectors mechanically couple and align the cores of fibers so light can pass; better connectors lose very little light due to reflection or misalignment of the fibers. In all, about 100 fiber optic connectors have been introduced to the market. Optical fiber connectors are used to join optical fibers where a connect/disconnect capability is required. Due to the polishing and tuning procedures that may be incorporated into optical connector manufacturing, connectors are generally assembled in a factory; however, the assembly and polishing operations involved can be performed in the field, for example, to make cross-connect jumpers to size. Most optical fiber connectors are spring-loaded, so the fiber faces are pressed together when the connectors are mated; the resulting glass-to-glass or plastic-to-plastic contact eliminates signal losses that would be caused by an air gap between the joined fibers. Every fiber connection has two loss values: attenuation (insertion loss) and reflection (return loss). Measurements of these parameters are now defined in IEC standard 61753-1. The standard gives grades for insertion loss from A to D, and grades for return loss from 1 to 5. A variety of optical fiber connectors are available, but SC and LC connectors are the most common. Typical connectors are rated for 500–1,000 mating cycles. The main differences among types of connectors are dimensions and methods of mechanical coupling. Generally, organizations will standardize on one kind of connector, depending on what equipment they commonly use. Different connectors are required for multimode and for single-mode fibers. In many data center applications, small and multi-fiber connectors are replacing larger, older styles, allowing more fiber ports per unit of rack space. In such settings, protective enclosures are used, and fall into two broad categories: hermetic and free-breathing.
Hermetic cases prevent entry of moisture and air but lack ventilation; free-breathing enclosures, on the other hand, allow ventilation, but can also admit moisture, insects, and airborne contaminants. Selection of the correct housing depends on the cable and connector type and on the location. Careful assembly is required to ensure good protection against the elements and to preserve the integrity of optical fiber connections and housing seals. Many types of optical connector have been developed at different times, and for different purposes; many of them are summarized in the tables below. Modern connectors typically use a physical contact polish on the fiber and ferrule end: a convex surface with the apex of the curve accurately centered on the fiber. Higher grades of polish give less insertion loss and lower back reflection. Many connectors are available with the fiber end face polished at an angle to prevent light that reflects from the interface from traveling back up the fiber; because of the angle, the reflected light does not stay in the fiber core.
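Both figures quoted for every fiber connection are power ratios expressed in dB; the example powers below are illustrative, not values from IEC 61753-1:

```python
import math

# Insertion loss: how much power is lost passing through the connection.
# Return loss: how far down the reflected power is relative to the input
# (a larger return loss means a weaker, better-behaved reflection).

def insertion_loss_db(p_in, p_out):
    return 10 * math.log10(p_in / p_out)

def return_loss_db(p_in, p_reflected):
    return 10 * math.log10(p_in / p_reflected)

print(round(insertion_loss_db(1.0, 0.9), 2))  # ~0.46 dB (10% of power lost)
print(round(return_loss_db(1.0, 0.001), 1))   # 30.0 dB (0.1% reflected)
```

This is why angled-polish connectors are specified by their higher return loss: less reflected power means a larger dB figure.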
21.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The initial ISBN configuration of recognition was generated in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966; the 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number, identifies periodical publications such as magazines. The ISBN configuration of recognition was generated in 1967 in the United Kingdom by David Whitaker and in 1968 in the US by Emery Koltay. The United Kingdom continued to use the 9-digit SBN code until 1974. The ISO on-line facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, an edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340013818: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s.
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces; separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, the United States, and some other countries, the service is provided by non-government-funded organisations. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker.
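The ISBN-10 check digit behind the SBN-to-ISBN example above is a weighted sum modulo 11: prefix the 9-digit SBN with 0, weight the digits 10 down to 2, and choose the check digit so the full weighted sum is divisible by 11 ('X' stands for a check value of 10). A minimal sketch:

```python
# ISBN-10 check digit from the first nine digits.

def isbn10_check_digit(first9):
    total = sum(w * int(d) for w, d in zip(range(10, 1, -1), first9))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

# SBN 340013818 prefixed with 0 gives the first nine digits 034001381;
# the computed check digit matches the 8 in ISBN 0-340-01381-8.
print(isbn10_check_digit("034001381"))  # 8
```

This also explains why the check digit "does not need to be re-calculated" when converting an SBN: the leading 0 is weighted by 10 and contributes nothing to the sum.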
22.
Local area network
–
By contrast, a wide area network not only covers a larger geographic distance, but also generally involves leased telecommunication circuits or Internet links. An even greater contrast is the Internet, which is a system of globally connected networks. Ethernet and Wi-Fi are the two most common transmission technologies in use for local area networks; historical technologies include ARCNET, Token Ring, and AppleTalk. The increasing demand for and use of computers in universities and research labs in the late 1960s generated the need to provide high-speed interconnections between computer systems. A 1970 report from the Lawrence Radiation Laboratory detailing the growth of their "Octopus" network gave an indication of the situation. A number of experimental and early commercial LAN technologies were developed in the 1970s. Cambridge Ring was developed at Cambridge University starting in 1974. Ethernet was developed at Xerox PARC in 1973–1975, and filed as a U.S. patent. In 1976, after the system was deployed at PARC, Robert Metcalfe and David Boggs published a seminal paper, "Ethernet: Distributed Packet-Switching for Local Computer Networks". ARCNET was developed by Datapoint Corporation in 1976 and announced in 1977; it had the first commercial installation in December 1977 at Chase Manhattan Bank in New York. The initial driving force for networking was generally to share storage and printers. There was much enthusiasm for the concept, and for several years, from about 1983 onward, computer industry pundits would regularly declare the coming year to be "the year of the LAN". In practice, the concept was marred by a proliferation of incompatible physical layer and network protocol implementations; typically, each vendor would have its own type of network card, cabling, protocol, and network operating system.
NetWare dominated the personal computer LAN business from early after its introduction in 1983 until the mid-1990s, when Microsoft introduced Windows NT Advanced Server. Of the competitors to NetWare, only Banyan Vines had comparable technical strengths, but Banyan never gained a secure base. During the same period, Unix workstations were using TCP/IP networking. Early LAN cabling had generally been based on various grades of coaxial cable; this led to the development of 10BASE-T and structured cabling, which is still the basis of most commercial LANs today. While fiber-optic cabling is common for links between switches, use of fiber to the desktop is rare. Many LANs use wireless technologies that are built into smartphones and tablet computers; in a wireless local area network, users may move unrestricted in the coverage area. Wireless networks have become popular in residences and small businesses because of their ease of installation; guests are often offered Internet access via a hotspot service. Network topology describes the layout of interconnections between devices and network segments. At the data link layer and physical layer, a wide variety of LAN topologies have been used, including ring, bus, mesh, and star. At the higher layers, NetBEUI, IPX/SPX, AppleTalk, and others were once common. Simple LANs generally consist of cabling and one or more switches; a switch can be connected to a router or cable modem for Internet access. A LAN can include a wide variety of other network devices such as firewalls, load balancers, and network intrusion detection systems. LANs can maintain connections with other LANs via leased lines or leased services; depending on how the connections are established and secured, and the distance involved, such linked LANs may also be classified as a metropolitan area network or a wide area network.
23.
Ethernet physical layer
–
The Ethernet physical layer evolved over a considerable time span and encompasses quite a few physical media interfaces and several orders of magnitude of speed. The speed ranges from 1 Mbit/s to 100 Gbit/s, while the medium can range from bulky coaxial cable to twisted pair and optical fiber. In general, network protocol stack software will work similarly on all physical layers. 10 Gigabit Ethernet was already used in both enterprise and carrier networks by 2007, with 40 Gbit/s and 100 Gbit/s Ethernet ratified since. Robert Metcalfe, one of the co-inventors of Ethernet, said in 2008 that he believed commercial applications using Terabit Ethernet might occur by 2015, though it might require new Ethernet standards.

Many Ethernet adapters and switch ports support multiple speeds, using autonegotiation to set the speed. While this can practically be taken for granted for ports supporting twisted-pair cabling, only a few optical-fiber ports support multiple speeds. If autonegotiation fails, some multiple-speed devices sense the speed used by their partner. A 10/100 Ethernet port supports 10BASE-T and 100BASE-TX; a 10/100/1000 Ethernet port supports 10BASE-T, 100BASE-TX, and 1000BASE-T.

Generally, physical layers are named by their specifications:
- 10, 100, 1000, 10G, ... – the nominal, usable speed at the MAC layer; encoded PHY sublayers usually run at higher bitrates
- BASE, BROAD, PASS – baseband, broadband, or passband signaling
- -T, -S, -L, -C, -K – the medium: twisted pair, short-wavelength or long-wavelength optics, copper, or backplane
- an encoding letter where needed, e.g. X for 8b/10b block encoding, R for large-block encoding
- 1, 2, 4, 10 – the number of lanes used per link, or the reach for WAN PHYs

For 10 Mbit/s, most twisted-pair layers use unique encoding, so most often just -T is used. The following sections provide a summary of official Ethernet media types. In addition to the official standards, many vendors have implemented proprietary media types for various reasons, often to support longer distances over fiber-optic cabling.
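The naming convention above can be decomposed mechanically. The sketch below is illustrative only: the regular expression and the `parse_phy_name` helper are my own, and it handles only the hyphenated modern names (so not, e.g., 10BASE5 or 10BASE2):

```python
import re

# Illustrative pattern for <speed><signaling>-<medium/coding>[<lanes>],
# e.g. 1000BASE-T, 10GBASE-SR, 100GBASE-SR4.
PHY_NAME = re.compile(
    r"^(?P<speed>\d+(?:\.\d+)?G?)"     # nominal MAC-layer speed: 10, 100, 2.5G, 10G, ...
    r"(?P<signaling>BASE|BROAD|PASS)"  # baseband, broadband, or passband signaling
    r"-(?P<medium>[A-Z]+?)"            # -T twisted pair, -S/-L short/long wavelength,
                                       # -C copper, -K backplane, plus coding letters
    r"(?P<lanes>\d*)$"                 # optional lane count (or reach for WAN PHYs)
)

def parse_phy_name(name: str) -> dict:
    """Split an Ethernet physical-layer name into its conventional parts."""
    m = PHY_NAME.match(name)
    if not m:
        raise ValueError(f"not a recognizable PHY name: {name}")
    return m.groupdict()
```

For example, `parse_phy_name("100GBASE-SR4")` separates the 100G speed, baseband signaling, short-wavelength optics, and the four lanes.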
Early Ethernet standards used Manchester coding so that the signal was self-clocking. All Fast Ethernet variants use a star topology, as do all Gigabit Ethernet variants. Initially, half-duplex mode was included in the gigabit standard but has since been abandoned; very few devices support gigabit speed in half-duplex. 2.5GBASE-T and 5GBASE-T are scaled-down variants of 10GBASE-T; these physical layers support twisted-pair copper cabling only.

10 Gigabit Ethernet defines a version of Ethernet with a nominal data rate of 10 Gbit/s, ten times as fast as Gigabit Ethernet. In 2002, the first 10 Gigabit Ethernet standard was published as IEEE Std 802.3ae-2002; subsequent standards encompass media types for single-mode fibre, multi-mode fibre, copper backplane, and copper twisted pair. All 10-gigabit standards were consolidated into IEEE Std 802.3-2008. As of 2009, 10 Gigabit Ethernet was predominantly deployed in carrier networks, where 10GBASE-LR and 10GBASE-ER enjoy significant market shares.

Single-lane 25-gigabit Ethernet is based on one 25.78125 GBd lane of the four from the 100 Gigabit Ethernet standard developed by task force P802.3by; 25GBASE-T over twisted pair was approved alongside 40GBASE-T within IEEE 802.3bq.
24.
Autonegotiation
–
Autonegotiation is an Ethernet procedure by which two connected devices choose common transmission parameters, such as speed, duplex mode, and flow control. In this process, the devices first share their capabilities regarding these parameters and then choose the highest-performance transmission mode they both support. In the OSI model, autonegotiation resides in the physical layer; for Ethernet over twisted pair it is defined in clause 28 of IEEE 802.3.

Autonegotiation was originally defined as an optional component in the Fast Ethernet standard. It is backwards compatible with the normal link pulses used by 10BASE-T. The protocol was significantly extended in the Gigabit Ethernet standard and is mandatory for 1000BASE-T gigabit Ethernet over twisted pair. In 1995, a standard was released to allow connected network adapters to negotiate the best possible shared mode of operation. The initial autonegotiation standard contained a mechanism for detecting the speed, but not the duplex setting, of Ethernet peers that did not use autonegotiation.

Autonegotiation can be used by devices that are capable of different transmission rates or different duplex modes. Parallel detection is used when a device that is capable of autonegotiation is connected to one that is not; this happens if the other device does not support autonegotiation or autonegotiation is administratively disabled. In this condition, the device that is capable of autonegotiation can determine the speed of the other device. This procedure cannot determine the presence of full duplex, so half duplex is always assumed. The standards for 1000BASE-T, 1000BASE-TX, and 10GBASE-T require autonegotiation to be always present. Other than speed and duplex mode, autonegotiation is used to communicate the port type and the master-slave parameters. Autonegotiation is based on pulses similar to those used by 10BASE-T devices to detect the presence of a connection to another device.
These link pulses are sent by Ethernet devices when they are not sending or receiving any frames. They are unipolar, positive-only electrical pulses of a nominal duration of 100 ns, with a maximum pulse width of 200 ns, generated at a 16 ms time interval. These pulses are called link integrity test (LIT) pulses in 10BASE-T terminology; a device detects the failure of a link if neither a frame nor two of the LIT pulses is received for 50–150 ms. For this scheme to work, devices must send LIT pulses regardless of receiving any.

Autonegotiation uses similar pulses, labeled normal link pulses (NLP). NLPs are still unipolar, positive-only, and of the same nominal 100 ns duration, but are sent in bursts. Each such pulse burst is called a fast link pulse (FLP) burst; the time interval between the start of each FLP burst is the same 16 milliseconds as between normal link pulses. An FLP burst consists of 17 NLPs at a 125 µs time interval; between each pair of consecutive NLPs an additional positive pulse may be present. The presence of this additional pulse indicates a logical 1, its absence a logical 0. As a result, every FLP burst contains a data word of 16 bits.
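As a rough model of the FLP signalling just described, the sketch below recovers the 16-bit word from a list of pulse timestamps: 17 clock pulses every 125 µs, with a data pulse midway between two clock pulses indicating a 1. The function name, the timestamp representation, and the ±20 µs slot tolerance are assumptions for illustration, not driver code:

```python
def decode_flp_burst(pulse_times_us):
    """Recover the 16-bit link code word from one FLP burst.

    pulse_times_us: timestamps (in microseconds) of every pulse in the
    burst, relative to the first clock pulse.  Clock pulses sit at
    0, 125, 250, ...; a data pulse, when present, sits midway between
    two consecutive clock pulses and encodes a logical 1.
    """
    word = 0
    for bit in range(16):
        midpoint = bit * 125 + 62.5
        # the bit is 1 if any pulse lands near the midpoint slot
        present = any(abs(t - midpoint) < 20 for t in pulse_times_us)
        word |= int(present) << bit
    return word
```

With only the 17 clock pulses present, the decoded word is zero; each extra mid-slot pulse sets the corresponding bit.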
25.
EtherType
–
EtherType is a two-octet field in an Ethernet frame. It is used to indicate which protocol is encapsulated in the payload of the frame; the same field is also used to indicate the size of some Ethernet frames. EtherType was first defined by the Ethernet II framing standard. In modern implementations of Ethernet, the field within the Ethernet frame used to describe the EtherType can also be used to represent the size of the payload of the Ethernet frame. Historically, depending on the type of Ethernet framing in use on a segment, Ethernet II framing considered these octets to represent an EtherType, while the original IEEE 802.3 framing considered these octets to represent the size of the payload in bytes. Values of 1500 and below indicate a length, because 1500 bytes is the maximum length of the data field of an Ethernet 802.3 frame; values of 1536 (0x0600) and above indicate an EtherType, and the interpretation of values 1501–1535, inclusive, is undefined.

With 802.1Q VLAN tagging and QinQ, the sparse 16-bit EtherType space is being used more completely. The 16-bit EtherType not only tags the payload class; it also serves to delimit VLAN tagging or QinQ stacking. Via look-ahead peeking in streams, the 16-bit EtherType can help to confirm or parse a QinQ 32+32+16=80-bit header between the 48-bit MAC addresses and the payload; of those 80 bits, only 32 bits are used for dynamic information. For a full 66-bit addressing system, 18 bits are needed beyond the MAC; thus, additional EtherType values are required and used for triple tagging (QinQinQ). Vendor implementations may avoid wasting bandwidth sending those 48 bits in proprietary link compression schemes. The EtherType usually does not contain any CRC or FCS information.

The size of the payload of non-standard jumbo frames, typically ~9000 bytes long, falls within the range used by EtherType. The proposition to resolve this conflict was to substitute a special EtherType value when a length would otherwise be used.
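The length-versus-EtherType convention can be captured in a few lines. `classify_type_field` is a hypothetical helper of my own; the cutoff values (1500 and 1536 = 0x0600) and the sample registry entries are standard, but the function itself is illustrative:

```python
def classify_type_field(value: int) -> str:
    """Interpret the two-octet type/length field of an Ethernet frame."""
    if not 0 <= value <= 0xFFFF:
        raise ValueError("the type/length field is 16 bits")
    if value <= 1500:        # IEEE 802.3 framing: payload length in octets
        return "length"
    if value >= 0x0600:      # Ethernet II framing: EtherType (1536 and above)
        return "ethertype"
    return "undefined"       # 1501-1535 inclusive: interpretation undefined

# A few well-known EtherType values from the IEEE registry:
WELL_KNOWN = {0x0800: "IPv4", 0x0806: "ARP", 0x8100: "802.1Q tag", 0x86DD: "IPv6"}
```

Every standard EtherType is therefore at least 1536, which is what lets a receiver tell the two framings apart from the field value alone.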
However, the proposition was not accepted and is now defunct. The chair of IEEE 802.3 at the time, Geoff Thompson, responded to the draft outlining IEEE 802.3's official position and the reasons behind it. The draft authors also responded to the letter, but no subsequent answer from IEEE 802.3 has been recorded. For Ethernet, however, the Ethernet II header is still used. Not all well-known de facto uses of EtherTypes are recorded in the IEEE list of EtherType values; for example, EtherType 0x0806 appears in the IEEE list only as Symbolics. However, the IEEE Registration Authority lists all the accepted EtherTypes, including 0x0806. See also: IEEE Registration Authority Tutorials; IEEE EtherType Registration Authority.
26.
Ethernet flow control
–
Ethernet flow control is a mechanism for temporarily stopping the transmission of data on Ethernet family computer networks. The first flow control mechanism, the pause frame, was defined by the IEEE 802.3x standard; a follow-on mechanism, priority-based flow control, aims for zero loss under congestion in data center bridging networks.

Ethernet is a family of computer network protocols, and flow control can be implemented at the data link layer: a sending station may be transmitting data faster than the other end of the link can accept it. The pause frame was defined by the Institute of Electrical and Electronics Engineers; the IEEE standard 802.3x was issued in 1997. An overwhelmed network node can send a pause frame, which halts the transmission of the sender for a specified period of time. A media access control (MAC) frame is used to carry the pause command; only stations configured for full-duplex operation may send pause frames.

Pause frames are sent to a well-known multicast address, which makes it unnecessary for a station to discover the address of the station at the other end of the link. Another advantage of using this multicast address arises from the use of flow control between network switches. The particular multicast address used is selected from a range of addresses reserved by the IEEE 802.1D standard, which specifies the operation of switches used for bridging. Normally, a frame with a multicast destination sent to a switch will be forwarded out all other ports of the switch; however, this range of multicast addresses is special and will not be forwarded by an 802.1D-compliant switch. Instead, frames sent to this range are understood to be frames meant to be acted upon only within the switch.

A pause frame includes the period of pause time being requested. The pause time is measured in units of pause quanta, where each unit is equal to 512 bit times. By 1999, several vendors supported receiving pause frames, but fewer implemented sending them.
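Because a pause quantum is defined as 512 bit times, the real-time effect of a given quanta value depends on link speed. A small sketch (the helper name is my own invention):

```python
def pause_duration_seconds(pause_quanta: int, link_speed_bps: int) -> float:
    """Convert a pause frame's 16-bit quanta field into wall-clock time.

    One pause quantum is 512 bit times, so the same quanta value pauses
    the sender for less real time on a faster link.
    """
    if not 0 <= pause_quanta <= 0xFFFF:
        raise ValueError("pause quanta is a 16-bit field")
    return pause_quanta * 512 / link_speed_bps
```

At 1 Gbit/s, for example, the maximum value of 0xFFFF requests roughly 33.6 ms of pause, while the same value at 100 Mbit/s requests ten times as long; a quanta value of zero cancels an earlier pause.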
One original motivation for the pause frame was to handle network interface controllers that did not have enough buffering to handle full-speed reception. This problem is not as common with advances in bus speeds; a more likely scenario today is network congestion within a switch.
27.
Ethernet frame
–
A data packet on an Ethernet link is called an Ethernet packet, which transports an Ethernet frame as its payload. An Ethernet frame is preceded by a preamble and start frame delimiter (SFD). Each Ethernet frame starts with an Ethernet header, which contains destination and source MAC addresses as its first two fields. The middle section of the frame is payload data, including any headers for other protocols carried in the frame. The frame ends with a frame check sequence (FCS), a 32-bit cyclic redundancy check used to detect any in-transit corruption of data.

A data packet on the wire and the frame as its payload consist of binary data. Ethernet transmits data with the most-significant octet first; within each octet, however, the least-significant bit is transmitted first. The internal structure of an Ethernet frame is specified in IEEE 802.3. The table below shows the complete Ethernet packet and the frame inside, as transmitted, for a payload size up to the MTU of 1500 octets. Some implementations of Gigabit Ethernet and other higher-speed variants of Ethernet support larger frames. The optional 802.1Q tag consumes additional space in the frame; field sizes for this option are indicated parenthetically in the table above. IEEE 802.1ad allows for multiple tags in each frame; this option is not illustrated here.

An Ethernet packet starts with a seven-octet preamble and a one-octet start frame delimiter. The preamble consists of a 56-bit pattern of alternating 1 and 0 bits, allowing devices on the network to easily synchronize their receiver clocks; it is followed by the SFD to provide byte-level synchronization and to mark a new incoming frame. The SFD is the value that marks the end of the preamble, which is the first field of an Ethernet packet, and it is designed to break the alternating bit pattern of the preamble. The SFD is immediately followed by the destination MAC address, which is the first field in an Ethernet frame.
The SFD has the value of 171 (binary 10101011), which is transmitted least-significant bit first and therefore appears on the wire as 213 (binary 11010101). Physical layer transceiver circuitry is required to connect the Ethernet MAC to the physical medium. The connection between a PHY and MAC is independent of the medium and uses a bus from the media-independent interface family. Fast Ethernet transceiver chips use the MII bus, which is a four-bit-wide bus; therefore the preamble is represented as 14 instances of 0x5. Gigabit Ethernet transceiver chips use the GMII bus, which is an eight-bit-wide interface.

The header features destination and source MAC addresses, the EtherType field and, optionally, an IEEE 802.1Q tag. The EtherType field is two octets long and can be used for two different purposes: to indicate the size of the payload or to identify the encapsulated protocol. When used as an EtherType, the length of the frame is determined by the location of the interpacket gap. The IEEE 802.1Q tag, if present, is a four-octet field that indicates virtual LAN membership and IEEE 802.1p priority. The minimum payload is 42 octets when an 802.1Q tag is present and 46 octets when absent; the maximum payload is 1500 octets.
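A minimal frame-assembly sketch may help tie the fields together. `build_frame` is an illustrative helper of my own, not a standard API; it relies on the fact that Ethernet's FCS uses the same CRC-32 that Python's zlib computes, appended least-significant byte first:

```python
import struct
import zlib

def build_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble a minimal untagged Ethernet frame (preamble/SFD omitted).

    Pads the payload to the 46-octet minimum and appends the 32-bit FCS
    computed over the header and padded payload.
    """
    if len(dst) != 6 or len(src) != 6:
        raise ValueError("MAC addresses are 6 octets each")
    padded = payload.ljust(46, b"\x00")          # minimum payload without a tag
    header = dst + src + struct.pack("!H", ethertype)
    fcs = struct.pack("<I", zlib.crc32(header + padded))
    return header + padded + fcs
```

For a short payload this yields 6 + 6 + 2 + 46 + 4 = 64 octets, the classic minimum Ethernet frame size.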
28.
Ethernet Alliance
–
The Ethernet Alliance was incorporated in the US state of California in August 2005 and officially launched in January 2006 as a non-profit industry consortium to promote and support Ethernet. The Ethernet Alliance work groups are called subcommittees; these subcommittees are focused on efforts around specific standards-based Ethernet initiatives. As of March 2011, the working subcommittees within the Ethernet Alliance included:

- 802.3av-2009, which extended the speed of EPON networks to 10 Gbit/s.
- 802.3an-2006, which defined a specification for running 10 Gigabit Ethernet over twisted-pair copper, designated 10GBASE-T.
- Carrier Ethernet, which helps guide work being done to support the specific, evolving, and growing demands on Ethernet from carriers and service providers.
- Energy Efficient Ethernet, whose work is based upon IEEE Standard 802.3az-2010.
- Ethernet in the Data Center, whose focus includes protocols such as Data Center Bridging, Fibre Channel over Ethernet, iSCSI, and remote direct memory access over Converged Ethernet.
- Higher Speed Ethernet, which encompasses all aspects of 40 Gbit/s and 100 Gbit/s Ethernet, largely based upon the work of the IEEE.
- High Speed Modular Interconnects, which helps drive adoption by demonstrating interoperability of compliant HS Modular Interconnect devices and ports, including optical modules.

In previous Ethernet technology iterations, an alliance was formed to support the adoption of each new technology into the market. The Ethernet Alliance was preceded by the Fast Ethernet Alliance, the Gigabit Ethernet Alliance, and the 10 Gigabit Ethernet Alliance; these alliances would dissolve a few years after the completion of the standards effort they supported, often long before the technology would reach volume adoption. This led industry participants to work with the standards bodies to create an alliance that would exist as long as Ethernet technology existed.
The Road to 100G Alliance was formally announced on June 19, 2007 at the NXTcomm 2007 show in Chicago; the founding members were Bay Microsystems, Enigma Semiconductor, Integrated Device Technology, IP Infusion, and Lattice Semiconductor. It was headquartered in the Silicon Valley area of California. With the expanded charter and the formation of the HSE and Carrier Ethernet subcommittees, the Road to 100G Alliance merged with the Ethernet Alliance on December 31, 2008.

The Ethernet Alliance offers white papers, presentations, and frequently asked questions; these materials are available on the Ethernet Alliance public website free of charge. The papers provide educational materials with an industry-based perspective and may be based upon the work of Ethernet Alliance subcommittees or support activities inside Ethernet standards bodies. TEFs offer face-to-face events that bring members of the various Ethernet communities together to discuss current topics. The Ethernet Alliance also offers an opportunity for academic institutions to become involved in the organization for no fee.
29.
10BASE5
–
10BASE5 was the first commercially available variant of Ethernet. It uses a thick and stiff coaxial cable up to 500 metres in length; up to 100 stations can be connected to the cable using vampire taps and share a single collision domain, with 10 Mbit/s of bandwidth shared among them. The system is difficult to install and maintain, and as of 2003, IEEE 802.3 has deprecated this standard for new installations.

The name 10BASE5 is derived from several characteristics of the physical medium: the 10 refers to its transmission speed of 10 Mbit/s, BASE is short for baseband signalling, and the 5 stands for the maximum segment length of 500 metres.

For its physical layer, 10BASE5 uses cable similar to RG-8/U coaxial cable: a stiff, 0.375-inch diameter cable with an impedance of 50 ohms, a solid center conductor, a foam insulating filler, a shielding braid, and an outer jacket. The outer jacket is often yellow-to-orange fluorinated ethylene propylene, so the cable is often called "yellow cable" or "orange hose". 10BASE5 coaxial cables had a maximum length of 500 metres, and up to 100 nodes could be connected to a 10BASE5 segment. Transceiver nodes can be connected to cable segments with N connectors, or via a vampire tap, which allows new nodes to be added while existing connections are live. Transceivers should be installed only at precise 2.5-metre intervals; this distance was chosen so as not to correspond to the wavelength of the signal, ensuring that the reflections from multiple taps are not in phase. These suitable points are marked on the cable with black bands. The cable is required to be one continuous run; T-connections are not allowed.

As is the case with most other high-speed buses, segments must be terminated at each end. For coaxial-cable-based Ethernet, each end of the cable has a 50 ohm resistor attached. Typically this resistor is built into a male N connector and attached to the end of the cable just past the last device.
With termination missing, or if there is a break in the cable, the signal on the bus is reflected rather than dissipated when it reaches the end. This reflected signal is indistinguishable from a collision and prevents communication. Adding new stations to the network is complicated by the need to pierce the cable, and the cable itself is stiff and difficult to bend around corners. One improper connection could take down the whole network, and finding the source of the trouble is difficult.
30.
10BASE2
–
As of 2011, IEEE 802.3 has deprecated this standard for new installations. The name 10BASE2 is derived from several characteristics of the physical medium: the 10 comes from the transmission speed of 10 Mbit/s, BASE stands for baseband signalling, and the 2 for a maximum segment length approaching 200 m.

10 Mbit/s Ethernet uses Manchester coding: a binary one is indicated by a low-to-high transition in the middle of the bit period, and a binary zero by a high-to-low transition. This allows the clock to be recovered from the signal; however, the additional transitions double the signal bandwidth.

10BASE2 coax cables have a maximum length of 185 metres. The maximum practical number of nodes that can be connected to a 10BASE2 segment is limited to 30, with a minimum distance of 50 cm between them. In a 10BASE2 network, each stretch of cable is connected to the next using a BNC T-connector; the T-connector must be plugged directly into the network adaptor with no cable in between. As is the case with most other high-speed buses, Ethernet segments have to be terminated with a resistor at each end: each end of the cable has a 50 ohm resistor attached. Typically this resistor is built into a male BNC connector and attached to the last device on the bus, most commonly directly to the T-connector on a workstation, though it does not technically have to be. A few devices, such as Digital's DEMPR and DESPR, have a built-in terminator. If termination is missing, or if there is a break in the cable, the AC signal on the bus is reflected, rather than dissipated, when it reaches the end. This reflected signal is indistinguishable from a collision, so no communication can take place. When wiring a 10BASE2 network, special care has to be taken to ensure that cables are properly connected to all T-connectors and that appropriate terminators are installed.
One, and only one, terminator must be connected to ground via a ground wire. Bad contacts or shorts are especially difficult to diagnose, though a time-domain reflectometer will find most problems quickly. A failure at any point of the network cabling tends to prevent all communications; for this reason, 10BASE2 networks can be difficult to maintain and were often replaced by 10BASE-T networks, which also provided a good upgrade path to 100BASE-TX. A more reliable alternative connection was established by the introduction of EAD sockets. There were proprietary wallport/cable systems that claimed to avoid these problems, but these never became widespread, possibly due to a lack of standardization.

10BASE2 systems do have a number of advantages over 10BASE-T: no hub is required, so the hardware cost is minimal, and wiring can be particularly easy since only a single wire run is needed, which can be sourced from the nearest computer. These characteristics mean that 10BASE2 is ideal for a small network of two or three machines, perhaps in a home where easily concealed wiring may be an advantage. For a larger, more complex office network, the difficulty of tracing poor connections makes it impractical. Unfortunately for 10BASE2, by the time multiple home computer networks became common, the format had already been practically superseded.
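The Manchester coding used by 10 Mbit/s Ethernet, described at the start of this section, maps each bit to two half-bit line levels. The encoder below is an illustrative model of that mapping under the IEEE 802.3 convention (1 = rising mid-bit edge, 0 = falling), not transceiver code:

```python
def manchester_encode(bits):
    """Encode a bit sequence using the IEEE 802.3 Manchester convention.

    Each bit becomes two half-bit levels: the first half is the
    complement of the bit and the second half is the bit itself, so a 1
    produces a low-to-high mid-bit transition and a 0 a high-to-low one.
    The guaranteed mid-bit transition makes the signal self-clocking,
    at the cost of doubling the signaling rate.
    """
    levels = []
    for bit in bits:
        levels.extend([bit ^ 1, bit])
    return levels
```

Encoding `[1, 0]`, for instance, yields the level sequence low-high-high-low: a rising edge for the 1 and a falling edge for the 0.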