In computer networking, Gigabit Ethernet is the term applied to various technologies for transmitting Ethernet frames at a rate of a gigabit per second, as defined by the IEEE 802.3-2008 standard. It came into use beginning in 1999, supplanting Fast Ethernet in wired local networks because of its higher speed; the cables and equipment are similar to those of previous standards and have been common and economical since 2010. Half-duplex gigabit links connected through repeater hubs were part of the IEEE specification, but that part of the specification is no longer updated, and full-duplex operation with switches is used exclusively. Ethernet was the result of research done at Xerox PARC in the early 1970s, and it evolved into a widely implemented physical and link layer protocol. Fast Ethernet increased the speed from 10 to 100 megabits per second; Gigabit Ethernet was the next iteration. The initial standard for Gigabit Ethernet, produced by the IEEE in June 1998 as IEEE 802.3z, required optical fiber. 802.3z is referred to as 1000BASE-X, where -X refers to either -CX, -SX, -LX, or -ZX.
For the history behind the "X", see Fast Ethernet. IEEE 802.3ab, ratified in 1999, defines Gigabit Ethernet transmission over unshielded twisted-pair category 5, 5e, or 6 cabling and became known as 1000BASE-T. With the ratification of 802.3ab, Gigabit Ethernet became a desktop technology, as organizations could use their existing copper cabling infrastructure. IEEE 802.3ah, ratified in 2004, added two more gigabit fiber standards, 1000BASE-LX10 and 1000BASE-BX10, as part of a larger group of protocols known as Ethernet in the First Mile. Gigabit Ethernet was initially deployed in high-capacity backbone network links. In 2000, Apple's Power Mac G4 and PowerBook G4 were the first mass-produced personal computers featuring a 1000BASE-T connection; it subsequently became a built-in feature in many other computers. There are five physical layer standards for Gigabit Ethernet using optical fiber, twisted-pair cable, or shielded balanced copper cable. The IEEE 802.3z standard includes 1000BASE-SX for transmission over multi-mode fiber, 1000BASE-LX for transmission over single-mode fiber, and the nearly obsolete 1000BASE-CX for transmission over shielded balanced copper cabling.
These standards use 8b/10b encoding, which inflates the line rate by 25%, from 1000 Mbit/s to 1250 Mbit/s, to ensure a DC-balanced signal; the symbols are sent using NRZ. Optical fiber transceivers are most often implemented as user-swappable modules in SFP form, or as GBICs on older devices. IEEE 802.3ab, which defines the widely used 1000BASE-T interface type, uses a different encoding scheme in order to keep the symbol rate as low as possible, allowing transmission over twisted pair. IEEE 802.3ap defines Ethernet operation over electrical backplanes at different speeds, and Ethernet in the First Mile added 1000BASE-LX10 and -BX10. 1000BASE-X is used in industry to refer to Gigabit Ethernet transmission over fiber, where options include 1000BASE-SX, 1000BASE-LX, 1000BASE-LX10, 1000BASE-BX10, or the non-standard -EX and -ZX implementations; also included are copper variants using the same 8b/10b line code. 1000BASE-CX is an initial standard for Gigabit Ethernet connections with maximum distances of 25 meters using balanced shielded twisted pair and either DE-9 or 8P8C connectors.
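The 25% overhead described above is plain arithmetic: under 8b/10b, the serial line rate is the data rate scaled by 10/8. A minimal sketch:

```python
def line_rate_8b10b(data_rate_mbit: float) -> float:
    """8b/10b maps each 8-bit byte to a 10-bit code group,
    so the serial line rate is the data rate times 10/8 (25% overhead)."""
    return data_rate_mbit * 10 / 8

print(line_rate_8b10b(1000))  # 1250.0, matching the 1250 Mbit/s line rate of 1000BASE-X
```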
The short segment length is due to the high signal transmission rate. Although it is still used for specific applications where cabling is done by IT professionals (for instance, the IBM BladeCenter uses 1000BASE-CX for the Ethernet connections between the blade servers and the switch modules), 1000BASE-T has succeeded it for general copper wiring use. 1000BASE-KX is part of the IEEE 802.3ap standard for Ethernet operation over electrical backplanes. This standard defines one to four lanes of backplane links, with one RX and one TX differential pair per lane, at link bandwidths ranging from 100 Mbit/s to 10 Gbit/s; the 1000BASE-KX variant uses a 1.25 GBd electrical signalling speed. 1000BASE-SX is an optical fiber Gigabit Ethernet standard for operation over multi-mode fiber using a 770 to 860 nanometer, near-infrared light wavelength. The standard specifies a maximum length of 220 meters for 62.5 µm/160 MHz·km multi-mode fiber, 275 m for 62.5 µm/200 MHz·km, 500 m for 50 µm/400 MHz·km, and 550 m for 50 µm/500 MHz·km multi-mode fiber.
In practice, with good quality fiber and terminations, 1000BASE-SX will work over longer distances. This standard is popular for intra-building links in large office buildings, co-location facilities, and carrier-neutral Internet exchanges. Optical power specifications of the SX interface: minimum output power = −9.5 dBm; minimum receive sensitivity = −17 dBm. 1000BASE-LX is an optical fiber Gigabit Ethernet standard specified in IEEE 802.3 Clause 38 which uses a long-wavelength laser with a maximum RMS spectral width of 4 nm. 1000BASE-LX is specified to work over a distance of up to 5 km over 10 µm single-mode fiber. 1000BASE-LX can also run over all common types of multi-mode fiber with a maximum segment length of 550 m. For link distances greater than 300 m, the use of a special launch conditioning patch cord may be required; this launches the laser at a precise offset from the center of the fiber, which causes it to spread across the diameter of the fiber core, reducing the effect known as differential mode delay, which occurs when the laser couples onto only a small number of available modes in multi-mode fiber.
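From the SX optical power figures just quoted, the worst-case link power budget follows by subtraction. A minimal sketch, using only the numbers given above:

```python
# Worst-case 1000BASE-SX power budget from the figures in the text.
min_tx_power_dbm = -9.5     # minimum transmitter output power
rx_sensitivity_dbm = -17.0  # minimum receiver sensitivity

power_budget_db = min_tx_power_dbm - rx_sensitivity_dbm
print(power_budget_db)  # 7.5 dB available for fiber attenuation and connector losses
```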
The resolution of an optical imaging system (a microscope, telescope, or camera) can be limited by factors such as imperfections in the lenses or misalignment. However, there is a fundamental limit to the resolution of any optical system, due to the physics of diffraction. An optical system with resolution performance at the instrument's theoretical limit is said to be diffraction-limited. The diffraction-limited angular resolution of a telescopic instrument is proportional to the wavelength of the light being observed, and inversely proportional to the diameter of its objective's entrance aperture. For telescopes with circular apertures, the size of the smallest feature in a diffraction-limited image is the size of the Airy disk; as one decreases the size of the aperture of a telescopic lens, diffraction proportionately increases. At small apertures, such as f/22, most modern lenses are limited only by diffraction and not by aberrations or other imperfections in the construction. For microscopic instruments, the diffraction-limited spatial resolution is proportional to the light wavelength and inversely proportional to the numerical aperture of either the objective or the object illumination source, whichever is smaller.
In astronomy, a diffraction-limited observation is one that achieves the resolution of a theoretically ideal objective of the size of the instrument used. However, most observations from Earth are seeing-limited due to atmospheric effects. Optical telescopes on the Earth work at a much lower resolution than the diffraction limit because of the distortion introduced by the passage of light through several kilometres of turbulent atmosphere. Some advanced observatories have started using adaptive optics technology, resulting in greater image resolution for faint targets, but it is still difficult to reach the diffraction limit using adaptive optics. Radio telescopes are diffraction-limited, because the wavelengths they use are so long that the atmospheric distortion is negligible. Space-based telescopes always work at their diffraction limit, if their design is free of optical aberration. The beam from a laser with near-ideal beam propagation properties may be described as being diffraction-limited.
A diffraction-limited laser beam, passed through diffraction-limited optics, will remain diffraction-limited, and will have a spatial or angular extent equal to the resolution of the optics at the wavelength of the laser. The observation of sub-wavelength structures with microscopes is difficult because of the Abbe diffraction limit. Ernst Abbe found in 1873 that light with wavelength λ, traveling in a medium with refractive index n and converging to a spot with half-angle θ, will have a minimum resolvable distance of d = λ / (2 n sin θ) = λ / (2 NA). The portion of the denominator, n sin θ, is called the numerical aperture (NA) and can reach about 1.4–1.6 in modern optics, hence the Abbe limit is d = λ/2.8. Considering green light around 500 nm and an NA of 1, the Abbe limit is d = λ/2 = 250 nm, which is small compared to most biological cells but large compared to viruses and less complex molecules. To increase the resolution, shorter wavelengths can be used, as in X-ray microscopes; these techniques offer better resolution but are expensive, suffer from lack of contrast in biological samples, and may damage the sample.
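The Abbe limit above can be sketched as a one-line function; the wavelength and NA values below are the ones quoted in the text:

```python
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Minimum resolvable distance d = lambda / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

print(abbe_limit_nm(500, 1.0))  # 250.0 nm, the green-light example from the text
print(abbe_limit_nm(500, 1.4))  # ~178.6 nm with NA = 1.4 optics
```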
In a digital camera, diffraction effects interact with the effects of the regular pixel grid. The combined effect of the different parts of an optical system is determined by the convolution of the point spread functions (PSF); the point spread function of a diffraction-limited lens is the Airy disk. The point spread function of the camera, otherwise called the instrument response function (IRF), can be approximated by a rectangle function with a width equivalent to the pixel pitch. A more complete derivation of the modulation transfer function of image sensors is given by Fliegel. Whatever the exact instrument response function, we may note that it is independent of the f-number of the lens; thus at different f-numbers a camera may operate in three different regimes. First, where the spread of the IRF is small with respect to the spread of the diffraction PSF, in which case the system may be said to be diffraction limited. Second, where the spread of the diffraction PSF is small with respect to the IRF, in which case the system is instrument limited.
Third, where the spread of the PSF and IRF are of the same order of magnitude, in which case both impact the available resolution of the system. The spread of the diffraction-limited PSF is approximated by the diameter of the first null of the Airy disk, d/2 = 1.22 λN, where λ is the wavelength of the light and N is the f-number of the imaging optics. For f/8 and green light (λ = 0.5 µm), d = 9.76 μm. This is of the same order of magnitude as the pixel size for the majority of commercially available "full frame" cameras, and so these will operate in regime 3 for f-numbers around 8. Cameras with smaller sensors will tend to have smaller pixels.
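The f/8 example can be checked directly from the first-null relation d/2 = 1.22 λN given above:

```python
def airy_first_null_diameter_um(wavelength_um: float, f_number: float) -> float:
    """Diameter of the Airy disk's first null: d = 2 * 1.22 * wavelength * N."""
    return 2 * 1.22 * wavelength_um * f_number

print(airy_first_null_diameter_um(0.5, 8))  # 9.76 um for green light (0.5 um) at f/8
```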
10 Gigabit Ethernet
10 Gigabit Ethernet is a group of computer networking technologies for transmitting Ethernet frames at a rate of 10 gigabits per second. It was first defined by the IEEE 802.3ae-2002 standard. Unlike previous Ethernet standards, 10 Gigabit Ethernet defines only full-duplex point-to-point links, which are generally connected by network switches. The 10 Gigabit Ethernet standard encompasses a number of different physical layer (PHY) standards. A networking device, such as a switch or a network interface controller, may have different PHY types through pluggable PHY modules, such as those based on SFP+. Like previous versions of Ethernet, 10GbE can use either copper or fiber cabling. Maximum distance over copper cable is 100 meters, but because of its bandwidth requirements, higher-grade cables are required. The adoption of 10 Gigabit Ethernet has been more gradual than that of previous revisions of Ethernet: in 2007, one million 10GbE ports were shipped; in 2009, two million; and in 2010, over three million, with an estimated nine million ports in 2011.
As of 2012, although the price per gigabit of bandwidth for 10 Gigabit Ethernet was about one-third that of Gigabit Ethernet, the price per port still hindered more widespread adoption. Over the years the Institute of Electrical and Electronics Engineers (IEEE) 802.3 working group has published several standards relating to 10GbE. To implement different 10GbE physical layer standards, many interfaces consist of a standard socket into which different PHY modules may be plugged. Physical layer modules are not specified by an official standards body but by multi-source agreements (MSAs) that can be negotiated more quickly. Relevant MSAs for 10GbE include XENPAK, XFP and SFP+. When choosing a PHY module, a designer considers cost, media type, power consumption, and size. A single point-to-point link can have different MSA pluggable formats on either end as long as the 10GbE optical or copper port type supported by the pluggables is identical. XENPAK had the largest form factor; X2 and XPAK were competing standards with smaller form factors.
X2 and XPAK have not been as successful in the market as XENPAK. XFP came after X2 and XPAK, and it is smaller. The newest module standard is the enhanced small form-factor pluggable transceiver, called SFP+. Based on the small form-factor pluggable transceiver (SFP) and developed by the ANSI T11 Fibre Channel group, it is smaller still and lower power than XFP. SFP+ has become the most popular socket on 10GbE systems. SFP+ modules do only optical-to-electrical conversion, with no clock and data recovery, putting a higher burden on the host's channel equalization. SFP+ modules share a common physical form factor with legacy SFP modules, allowing higher port density than XFP and the re-use of existing designs for 24 or 48 ports in a 19-inch rack-width blade. Optical modules are connected to a host by either an XFI or an SFI (SerDes Framer Interface) interface. XENPAK, X2, and XPAK modules use XAUI to connect to their hosts; XAUI uses a four-lane data channel and is specified in IEEE 802.3 Clause 47. XFP modules use an XFI interface and SFP+ modules use an SFI interface.
XFI and SFI use a single-lane data channel and the 64b/66b encoding specified in IEEE 802.3 Clause 49. SFP+ modules can further be grouped into two types of host interfaces: linear or limiting. Linear interfaces are used mainly with 10GBASE-LRM modules; limiting modules are preferred for other applications. There are two basic types of optical fiber used for 10 Gigabit Ethernet: single-mode (SMF) and multi-mode (MMF). In SMF light follows a single path through the fiber, while in MMF it takes multiple paths, resulting in differential mode delay. SMF is used for long-distance communication and MMF is used for distances of less than 300 m. SMF has a narrower core which requires a more precise termination and connection method; MMF has a wider core. The advantage of MMF is that it can be driven by a low-cost vertical-cavity surface-emitting laser (VCSEL) for short distances, and multi-mode connectors are cheaper and easier to terminate reliably in the field. The advantage of SMF is that it can work over longer distances. In the 802.3 standard, reference is made to FDDI-grade MMF fiber, which has a minimum modal bandwidth of 160 MHz·km at 850 nm.
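The 64b/66b encoding mentioned above has far lower overhead than 8b/10b: scaling the 10 Gbit/s data rate by 66/64 gives the serial rate. This is plain arithmetic; the resulting 10.3125 GBd figure is the well-known 10GBASE-R serial rate:

```python
def line_rate_64b66b(data_rate_gbit: float) -> float:
    """64b/66b prepends a 2-bit sync header to each 64-bit block,
    so the serial rate is the data rate times 66/64 (~3.125% overhead)."""
    return data_rate_gbit * 66 / 64

print(line_rate_64b66b(10.0))  # 10.3125 GBd
```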
It was installed in the early 1990s for FDDI and 100BASE-FX networks. The 802.3 standard also references ISO/IEC 11801, which specifies optical MMF fiber types OM1, OM2, OM3 and OM4. OM1 has a 62.5 µm core. At 850 nm the minimum modal bandwidth of OM1 is 200 MHz·km, of OM2 500 MHz·km, of OM3 2000 MHz·km, and of OM4 4700 MHz·km. FDDI-grade cable is now obsolete, and new structured cabling installations use either OM3 or OM4 cabling. OM3 cable can carry 10 Gigabit Ethernet 300 meters using low-cost 10GBASE-SR optics; OM4 can manage 400 meters. To distinguish SMF from MMF cables, SMF cables are yellow, while MMF cables are orange or aqua. However, in fiber optics there is no uniform color for any specific optical speed or technology, with the exception of angled physical contact (APC) connectors, which by agreement are green. There are also active optical cables; these have the optical electronics already connected, eliminating the connectors between the cable and the optical module. They plug into standard SFP+ sockets, and they are lower cost than other optical solutions because the manufacturer can match the electronics to the cable length.
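The modal-bandwidth and reach figures above lend themselves to a small lookup table. The sketch below simply restates the numbers quoted in the text (nominal values, not a substitute for the standard):

```python
# Minimum modal bandwidth at 850 nm (MHz*km) and nominal 10GBASE-SR reach (m),
# using only the figures quoted in the text.
MODAL_BANDWIDTH_MHZ_KM = {
    "FDDI-grade": 160, "OM1": 200, "OM2": 500, "OM3": 2000, "OM4": 4700,
}
SR_REACH_M = {"OM3": 300, "OM4": 400}

for fiber, reach in SR_REACH_M.items():
    print(f"{fiber}: {MODAL_BANDWIDTH_MHZ_KM[fiber]} MHz*km, 10GBASE-SR reach {reach} m")
```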
A light-emitting diode (LED) is a semiconductor light source that emits light when current flows through it. Electrons in the semiconductor recombine with electron holes, releasing energy in the form of photons; this effect is called electroluminescence. The color of the light is determined by the energy required for electrons to cross the band gap of the semiconductor. White light is obtained by using multiple semiconductors or a layer of light-emitting phosphor on the semiconductor device. Appearing as practical electronic components in 1962, the earliest LEDs emitted low-intensity infrared light. Infrared LEDs are used in remote-control circuits, such as those used with a wide variety of consumer electronics. The first visible-light LEDs were of low intensity and limited to red. Modern LEDs are available across the visible and infrared wavelengths, with high light output. Early LEDs were used as indicator lamps, replacing small incandescent bulbs, and in seven-segment displays. Recent developments have produced white-light LEDs suitable for room lighting.
LEDs have led to new displays and sensors, while their high switching rates are useful in advanced communications technology. LEDs have many advantages over incandescent light sources, including lower energy consumption, longer lifetime, improved physical robustness, smaller size, and faster switching. Light-emitting diodes are used in applications as diverse as aviation lighting, automotive headlamps, general lighting, traffic signals, camera flashes, lighted wallpaper and medical devices. Unlike a laser, light emitted from an LED is neither coherent nor monochromatic, but its spectrum is narrow with respect to human vision, making it functionally monochromatic. Electroluminescence as a phenomenon was discovered in 1907 by the British experimenter H. J. Round of Marconi Labs, using a crystal of silicon carbide and a cat's-whisker detector. Russian inventor Oleg Losev reported creation of the first LED in 1927; his research was distributed in Soviet and British scientific journals, but no practical use was made of the discovery for several decades.
In 1936, Georges Destriau observed that electroluminescence could be produced when zinc sulphide powder is suspended in an insulator and an alternating electrical field is applied to it. In his publications, Destriau referred to the luminescence as Losev-Light. Destriau worked in the laboratories of Madame Marie Curie, an early pioneer in the field of luminescence through her research on radium. The Hungarians Zoltán Bay and György Szigeti pre-empted LED lighting in Hungary in 1939 by patenting a lighting device based on SiC, with an option on boron carbide, that emitted white, yellowish white, or greenish white light depending on the impurities present. Kurt Lehovec, Carl Accardo, and Edward Jamgochian explained these first light-emitting diodes in 1951 using an apparatus employing SiC crystals with a current source of a battery or a pulse generator, and, in 1953, with a comparison to a variant, pure, crystal. Rubin Braunstein of the Radio Corporation of America reported on infrared emission from gallium arsenide (GaAs) and other semiconductor alloys in 1955.
Braunstein observed infrared emission generated by simple diode structures using gallium antimonide, GaAs, indium phosphide, and silicon-germanium alloys at room temperature and at 77 kelvins. In 1957, Braunstein further demonstrated that the rudimentary devices could be used for non-radio communication across a short distance. As noted by Kroemer, Braunstein "…had set up a simple optical communications link: Music emerging from a record player was used via suitable electronics to modulate the forward current of a GaAs diode. The emitted light was detected by a PbS diode some distance away; this signal was played back by a loudspeaker. Intercepting the beam stopped the music. We had a great deal of fun playing with this setup." This setup presaged the use of LEDs for optical communication applications. In September 1961, while working at Texas Instruments in Dallas, James R. Biard and Gary Pittman discovered near-infrared light emission from a tunnel diode they had constructed on a GaAs substrate. By October 1961, they had demonstrated efficient light emission and signal coupling between a GaAs p-n junction light emitter and an electrically isolated semiconductor photodetector.
On August 8, 1962, Biard and Pittman filed a patent titled "Semiconductor Radiant Diode" based on their findings, which described a zinc-diffused p–n junction LED with a spaced cathode contact to allow for efficient emission of infrared light under forward bias. After establishing the priority of their work based on engineering notebooks predating submissions from G.E. Labs, RCA Research Labs, IBM Research Labs, Bell Labs, and Lincoln Lab at MIT, the U.S. patent office issued the two inventors the patent for the GaAs infrared light-emitting diode, the first practical LED. After filing the patent, Texas Instruments began a project to manufacture infrared diodes. In October 1962, TI announced the first commercial LED product, which employed a pure GaAs crystal to emit an 890 nm light output. In October 1963, TI announced the first commercial hemispherical LED, the SNX-110. The first visible-spectrum LED was developed in 1962 by Nick Holonyak, Jr., while working at General Electric. Holonyak first reported his LED in the journal Applied Physics Letters on December 1, 1962.
M. George Craford, a former graduate student of Holonyak, invented the first yellow LED and improved the brightness of red and red-orange LEDs by a factor of ten in 1972. In 1976, T. P. Pearsall created the first high-brightness, high-efficiency LEDs for optical fiber telecommunications.
In optics, dispersion is the phenomenon in which the phase velocity of a wave depends on its frequency. Media having this common property may be termed dispersive media; sometimes the term chromatic dispersion is used for specificity. Although the term is used in the field of optics to describe light and other electromagnetic waves, dispersion in the same sense can apply to any sort of wave motion, such as acoustic dispersion in the case of sound and seismic waves, dispersion in gravity waves, and dispersion of telecommunication signals along transmission lines or optical fiber. In optics, one important and familiar consequence of dispersion is the change in the angle of refraction of different colors of light, as seen in the spectrum produced by a dispersive prism and in the chromatic aberration of lenses. The design of compound achromatic lenses, in which chromatic aberration is cancelled, uses a quantification of a glass's dispersion given by its Abbe number V, where lower Abbe numbers correspond to greater dispersion over the visible spectrum.
In some applications such as telecommunications, the absolute phase of a wave is not important, only the propagation of wave packets or "pulses". The most familiar example of dispersion is a rainbow, in which dispersion causes the spatial separation of white light into components of different wavelengths. However, dispersion has an effect in many other circumstances: for example, group velocity dispersion causes pulses to spread in optical fibers, degrading signals over long distances. Most often, chromatic dispersion refers to bulk material dispersion, that is, the change in refractive index with optical frequency. However, in a waveguide there is also the phenomenon of waveguide dispersion, in which case a wave's phase velocity in a structure depends on its frequency due to the structure's geometry. More generally, "waveguide" dispersion can occur for waves propagating through any inhomogeneous structure, whether or not the waves are confined to some region. In a waveguide, both types of dispersion will generally be present, although they are not strictly additive.
For example, in fiber optics the material and waveguide dispersion can cancel each other out to produce a zero-dispersion wavelength, important for fast fiber-optic communication. Material dispersion can be a desirable or undesirable effect in optical applications. The dispersion of light by glass prisms is used to construct spectrometers and spectroradiometers; holographic gratings are also used, as they allow more accurate discrimination of wavelengths. However, in lenses, dispersion causes chromatic aberration, an undesired effect that may degrade images in microscopes and photographic objectives. The phase velocity v of a wave in a given uniform medium is given by v = c/n, where c is the speed of light in a vacuum and n is the refractive index of the medium. In general, the refractive index is some function of the frequency f of the light, thus n = n(f), or alternatively, with respect to the wave's wavelength, n = n(λ). The wavelength dependence of a material's refractive index is quantified by its Abbe number or by its coefficients in an empirical formula such as the Cauchy or Sellmeier equations.
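The relation v = c/n is trivially computable; in the sketch below the index value 1.5 is an assumed, typical figure for glass, not a number from the text:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def phase_velocity(n: float) -> float:
    """Phase velocity v = c / n in a uniform medium of refractive index n."""
    return C / n

print(phase_velocity(1.5))  # roughly 2.0e8 m/s for a typical glass (n = 1.5, assumed)
```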
Because of the Kramers–Kronig relations, the wavelength dependence of the real part of the refractive index is related to the material absorption, described by the imaginary part of the refractive index. In particular, for non-magnetic materials, the susceptibility χ that appears in the Kramers–Kronig relations is the electric susceptibility χe = n² − 1. The most familiar consequence of dispersion in optics is the separation of white light into a color spectrum by a prism. From Snell's law it can be seen that the angle of refraction of light in a prism depends on the refractive index of the prism material. Since that refractive index varies with wavelength, it follows that the angle that the light is refracted by will vary with wavelength, causing an angular separation of the colors known as angular dispersion. For visible light, the refractive indices n of most transparent materials decrease with increasing wavelength λ: 1 < n(λ_red) < n(λ_yellow) < n(λ_blue), or alternatively: dn/dλ < 0. In this case, the medium is said to have normal dispersion.
Whereas, if the index increases with increasing wavelength, the medium is said to have anomalous dispersion. At the interface of such a material with air or vacuum, Snell's law predicts that light incident at an angle θ to the normal will be refracted at an angle arcsin(sin θ / n). Thus, blue light, with a higher refractive index, will be bent more strongly than red light, resulting in the well-known rainbow pattern. Another consequence of dispersion manifests itself as a temporal effect; the formula v = c/n gives the phase velocity of the wave in the medium.
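A quick numerical check of the refraction formula arcsin(sin θ / n); the two index values below are hypothetical, chosen only to show that a higher index bends the ray more strongly (smaller refraction angle):

```python
import math

def refraction_angle_deg(incident_deg: float, n: float) -> float:
    """Angle of refraction, from Snell's law, for light entering a medium
    of refractive index n from air (n ~ 1)."""
    return math.degrees(math.asin(math.sin(math.radians(incident_deg)) / n))

print(refraction_angle_deg(45.0, 1.51))  # red light, assumed n = 1.51: ~27.9 degrees
print(refraction_angle_deg(45.0, 1.53))  # blue light, assumed n = 1.53: ~27.5 degrees (bent more)
```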
International standard ISO/IEC 11801 Information technology — Generic cabling for customer premises specifies general-purpose telecommunication cabling systems that are suitable for a wide range of applications. It covers both balanced copper cabling and optical fibre cabling. The standard was designed for use within commercial premises that may consist of either a single building or of multiple buildings on a campus. It was optimized for premises that span up to 3 km and up to 1 km² of office space, with between 50 and 50,000 persons, but can also be applied to installations outside this range. A major revision was released in November 2017, unifying requirements for commercial and industrial networks. The standard defines several link/channel classes and cabling categories of twisted-pair copper interconnects, which differ in the maximum frequency for which a certain channel performance is required:

Class A: link/channel up to 100 kHz using Category 1 cable/connectors
Class B: link/channel up to 1 MHz using Category 2 cable/connectors
Class C: link/channel up to 16 MHz using Category 3 cable/connectors
Class D: link/channel up to 100 MHz using Category 5e cable/connectors
Class E: link/channel up to 250 MHz using Category 6 cable/connectors
Class EA: link/channel up to 500 MHz using Category 6A cable/connectors
Class F: link/channel up to 600 MHz using Category 7 cable/connectors
Class FA: link/channel up to 1000 MHz using Category 7A cable/connectors
Class I: link/channel up to 2000 MHz using Category 8.1 cable/connectors
Class II: link/channel up to 2000 MHz using Category 8.2 cable/connectors

The standard link impedance is 100 Ω.
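The class specifications above map naturally to a lookup table. The helper below is a hypothetical illustration (frequencies in MHz, as listed in the text):

```python
# Maximum specified frequency (MHz) per ISO/IEC 11801 link/channel class,
# restating the figures from the text.
CLASS_MAX_MHZ = {
    "A": 0.1, "B": 1, "C": 16, "D": 100, "E": 250,
    "EA": 500, "F": 600, "FA": 1000, "I": 2000, "II": 2000,
}

def classes_supporting(required_mhz: float) -> list:
    """Return the classes whose specified bandwidth meets the requirement."""
    return [c for c, mhz in CLASS_MAX_MHZ.items() if mhz >= required_mhz]

print(classes_supporting(500))  # ['EA', 'F', 'FA', 'I', 'II']
```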
The standard also defines several classes of optical fiber interconnect; OM1, for example, is multimode fiber with a 62.5 µm core. Class F channel and Category 7 cable are backward compatible with Class D/Category 5e and Class E/Category 6. Class F features stricter specifications for crosstalk and system noise than Class E; to achieve this, shielding was added for the cable as a whole. Unshielded cables rely on the quality of the twists to protect from EMI; this involves a tight twist and controlled design. Cables with individual shielding per pair, such as Category 7, rely on the shield and therefore have pairs with longer twists. The Category 7 cable standard was ratified in 2002 to allow 10 Gigabit Ethernet over 100 m of copper cabling. The cable contains four twisted copper wire pairs, just like the earlier standards. Category 7 cable can be terminated either with 8P8C-compatible GG45 electrical connectors, which incorporate the 8P8C standard, or with TERA connectors. When combined with GG45 or TERA connectors, Category 7 cable is rated for transmission frequencies of up to 600 MHz.
However, in 2008 Category 6A was ratified; it allows 10 Gbit/s Ethernet while still using the traditional 8P8C connector. Consequently, all manufacturers of active equipment and network cards have chosen to support 8P8C for their 10 Gigabit Ethernet products on copper rather than the GG45, ARJ45, or TERA; these products therefore require a Class EA channel. As of 2017 there is no active equipment using these alternative connectors, and Category 7 is not recognized by the TIA/EIA. Class FA channels and Category 7A cables, introduced by ISO 11801 Edition 2 Amendment 2, are defined at frequencies up to 1000 MHz, suitable for multiple applications including CATV. The intent of Class FA was to support the future 40 Gigabit Ethernet standard, 40GBASE-T. Simulation results had shown that 40 Gigabit Ethernet might be possible at 50 meters and 100 Gigabit Ethernet at 15 meters. In 2007, researchers at Pennsylvania State University predicted that either 32 nm or 22 nm circuits would allow for 100 Gigabit Ethernet at 100 meters. However, in 2016, the IEEE 802.3bq working group ratified amendment 3, which defines 25GBASE-T and 40GBASE-T on Category 8 cabling specified to 2000 MHz.
Class FA therefore does not support 40 Gigabit Ethernet. As of 2017 there is no equipment using it, and Category 7A is not recognized in TIA/EIA. Category 8 was ratified by the TR43 working group under ANSI/TIA 568-C.2-1. It is defined up to 2000 MHz and only for distances of 30 m to 36 m, depending on the patch cords used. ISO was expected to ratify the equivalent in 2018, with two options: a Class I channel (minimum cable design U/FTP or F/UTP), backward compatible and interoperable with Class EA using 8P8C connectors; and a Class II channel (F/FTP or S/FTP minimum), interoperable with Class FA using TERA or ARJ45 connectors.
Optical fiber cable
An optical fiber cable, also known as a fiber-optic cable, is an assembly similar to an electrical cable, but containing one or more optical fibers that are used to carry light. The optical fiber elements are individually coated with plastic layers and contained in a protective tube suitable for the environment where the cable will be deployed. Different types of cable are used for different applications, for example long-distance telecommunication, or providing a high-speed data connection between different parts of a building. Optical fiber consists of a core and a cladding layer, selected for total internal reflection due to the difference in the refractive index between the two. In practical fibers, the cladding is coated with a layer of acrylate polymer or polyimide; this coating protects the fiber from damage but does not contribute to its optical waveguide properties. Individual coated fibers have a tough resin buffer layer or core tube extruded around them to form the cable core. Several layers of protective sheathing, depending on the application, are added to form the cable.
Rigid fiber assemblies sometimes put light-absorbing glass between the fibers, to prevent light that leaks out of one fiber from entering another. This reduces flare in fiber bundle imaging applications. For indoor applications, the jacketed fiber is enclosed, together with a bundle of flexible fibrous polymer strength members like aramid, in a lightweight plastic cover to form a simple cable. Each end of the cable may be terminated with a specialized optical fiber connector to allow it to be connected and disconnected from transmitting and receiving equipment. For use in more strenuous environments, a much more robust cable construction is required. In loose-tube construction the fiber is laid helically into semi-rigid tubes, allowing the cable to stretch without stretching the fiber itself; this protects the fiber from tension during laying and due to temperature changes. Loose-tube fiber may be gel-filled. Dry block offers less protection to the fibers than gel-filled, but costs less. Instead of a loose tube, the fiber may be embedded in a heavy polymer jacket, called "tight buffer" construction.
Tight buffer cables are offered for a variety of applications, but the two most common are "breakout" and "distribution". Breakout cables contain a ripcord, two non-conductive dielectric strengthening members, an aramid yarn, and 3 mm buffer tubing with an additional layer of Kevlar surrounding each fiber; the ripcord is a parallel cord of strong yarn situated under the jacket of the cable to ease jacket removal. Distribution cables have an overall Kevlar wrapping, a ripcord, and a 900-micrometer buffer coating surrounding each fiber; these fiber units are bundled with additional steel strength members, again with a helical twist to allow for stretching. A critical concern in outdoor cabling is protecting the fiber from contamination by water; this is accomplished by use of solid barriers such as copper tubes, and by water-repellent jelly or water-absorbing powder surrounding the fiber. The cable may be armored to protect it from environmental hazards, such as construction work or gnawing animals. Undersea cables are more heavily armored in their near-shore portions to protect them from boat anchors, fishing gear, and sharks, which may be attracted to the electrical power carried to amplifiers or repeaters in the cable.
Modern cables come in a wide variety of sheathings and armor, designed for applications such as direct burial in trenches, dual use as power lines, installation in conduit, lashing to aerial telephone poles, submarine installation, and insertion in paved streets. In September 2012, NTT Japan demonstrated a single fiber cable able to transfer 1 petabit per second over a distance of 50 kilometers. Modern fiber cables can contain up to a thousand fibers in a single cable, with potential bandwidth in the terabits per second. In some cases, only a small fraction of the fibers in a cable may be "lit". Companies can lease or sell the unused fiber to other providers who are looking for service in or through an area. Companies may "overbuild" their networks for the specific purpose of having a large network of dark fiber for sale, reducing the overall need for trenching and municipal permitting, or they may deliberately under-invest to prevent their rivals from profiting from their investment. The highest strand-count single-mode fiber cable manufactured is the 864-count, consisting of 36 ribbons each containing 24 strands of fiber.
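The ribbon arithmetic behind the 864-count figure, and the kind of back-of-the-envelope estimate behind a cable's aggregate capacity, can be checked in a few lines. The per-fiber data rate here is an illustrative assumption, not a figure from the text:

```python
# Strand count of a ribbon cable and a rough aggregate-capacity estimate.
# The 36 x 24 layout matches the 864-count cable described above; the
# 100 Gbit/s per-fiber rate is a hypothetical value for illustration.
ribbons = 36
fibers_per_ribbon = 24
strands = ribbons * fibers_per_ribbon
print(f"strands: {strands}")

per_fiber_gbps = 100  # assumed rate per lit fiber
aggregate_tbps = strands * per_fiber_gbps / 1000
print(f"aggregate capacity: {aggregate_tbps:.1f} Tbit/s")
```

Even with only a fraction of the strands lit, a high-count cable reaches multi-terabit aggregate capacity, which is why operators overbuild and hold dark fiber in reserve.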
Optical fibers are strong, but their strength is drastically reduced by unavoidable microscopic surface flaws inherent in the manufacturing process. The initial fiber strength, as well as its change with time, must be considered relative to the stress imposed on the fiber during handling and installation for a given set of environmental conditions. There are three basic scenarios that can lead to strength degradation and failure by inducing flaw growth: dynamic fatigue, static fatigue, and zero-stress aging. Telcordia GR-20, Generic Requirements for Optical Fiber and Optical Fiber Cable, contains reliability and quality criteria to protect optical fiber in all operating conditions; the criteria concentrate on conditions in an outside plant environment. For the indoor plant, similar criteria are in Telcordia GR-409, Generic Requirements for Indoor Fiber Optic Cable. Cable jacket designations include OFC (optical fiber, conductive), OFN (optical fiber, nonconductive), OFCG (optical fiber, conductive, general use), OFNG (optical fiber, nonconductive, general use), and OFCP (optical fiber, conductive, plenum).