In computing, bit numbering is the convention used to identify the bit positions in a binary number or a container of such a value. The bit number is incremented by one for each subsequent bit position. The least significant bit (LSB) is the bit position in a binary integer giving the units value, that is, determining whether the number is even or odd; the LSB is sometimes referred to as the low-order bit or right-most bit, due to the convention in positional notation of writing less significant digits further to the right. It is analogous to the least significant digit of a decimal integer, the digit in the ones position. It is common to assign each bit a position number, ranging from zero to N−1, where N is the number of bits in the binary representation used; this is the exponent for the corresponding bit weight in base 2. Although a few CPU manufacturers assign bit numbers the opposite way, the term least significant bit itself remains unambiguous as an alias for the unit bit. By extension, the least significant bits are the bits of the number closest to, and including, the LSB.
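Since the LSB is the units bit, testing it is the standard even/odd check; a minimal sketch in Python, using the LSB 0 numbering discussed later in this section:

```python
# Minimal sketch: the LSB (bit 0) carries the units value, so it
# determines parity; bit i in general has weight 2**i.
def bit(n: int, i: int) -> int:
    """Return the value of bit i of n under LSB 0 numbering."""
    return (n >> i) & 1

n = 149                               # 0b10010101
print(bit(n, 0))                      # 1 -> 149 is odd
print([bit(n, i) for i in range(8)])  # [1, 0, 1, 0, 1, 0, 0, 1], LSB first
```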
The least significant bits have the useful property of changing rapidly if the number changes slightly. For example, if 1 is added to 3 (binary 011), the result is 4 (binary 100) and the three least significant bits all change; by contrast, the more significant bits stay unchanged. Least significant bits are employed in pseudorandom number generators, steganographic tools, hash functions and checksums. In digital steganography, sensitive messages may be concealed by manipulating and storing information in the least significant bits of an image or a sound file. In the context of an image, if a user were to manipulate the last two bits of a color in a pixel, the value of the color would change by at most ±3, generally indistinguishable to the human eye; the user may later extract the least significant bits of the manipulated pixels to recover the original message. This allows the transfer of digital information to be kept concealed.
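A minimal sketch of this idea, hiding one message bit in the LSB of each pixel value (a simplified one-bit-per-pixel variant, not any specific tool; the pixel values below are arbitrary):

```python
# Illustrative LSB steganography over a flat list of 8-bit pixel values.
def hide(pixels: list[int], message: bytes) -> list[int]:
    """Store the message bits in the least significant bit of each pixel."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b      # overwrite only bit 0
    return out

def extract(pixels: list[int], length: int) -> bytes:
    """Recover `length` bytes from the pixels' least significant bits."""
    return bytes(
        sum((pixels[j * 8 + i] & 1) << i for i in range(8))
        for j in range(length)
    )

pixels = [200, 201, 77, 78, 90, 91, 12, 13] * 4
stego = hide(pixels, b"Hi")             # each pixel changes by at most 1
assert extract(stego, 2) == b"Hi"
```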
LSB can also stand for least significant byte; the meaning is parallel to the above: it is the byte in that position of a multi-byte number which has the least potential value. If the intended meaning, least significant byte, isn't obvious from context, it should be stated explicitly to avoid confusion with least significant bit; the less abbreviated terms "lsbit" and "lsbyte" may also be used to avoid this ambiguity. The most significant bit (MSB), conversely, is the bit position in a binary number having the greatest value; the MSB is sometimes referred to as the high-order bit or left-most bit, due to the convention in positional notation of writing more significant digits further to the left. The MSB can correspond to the sign bit of a signed binary number in one's or two's complement notation, "1" meaning negative and "0" meaning positive. As above, it is common to assign each bit a position number ranging from zero to N−1, where N is the number of bits in the binary representation used; this is the exponent for the corresponding bit weight in base 2. Although a few CPU manufacturers assign bit numbers the opposite way, the MSB unambiguously remains the most significant bit; this may be one of the reasons why the term MSB is used instead of a bit number, although the primary reason is that different number representations use different numbers of bits.
By extension, the most significant bits are the bits closest to, and including, the MSB. MSB can likewise stand for "most significant byte"; the meaning is parallel to the above: it is the byte in that position of a multi-byte number which has the greatest potential value. To avoid this ambiguity, the less abbreviated terms "MSbit" or "MSbyte" are used. As an example, the decimal value 149 is written 10010101 in binary; its LSB, carrying the unit value, is located in bit position 0, and its MSB in bit position 7. The position of the LSB is independent of the order in which the bits are transmitted, which is instead a question of endianness; the expressions Most Significant Bit First and Least Significant Bit First indicate the ordering of the sequence of bits in the bytes sent over a wire in a transmission protocol or in a stream. Most Significant Bit First means that the most significant bit will arrive first: hence e.g. the hexadecimal number 0x12, 00010010 in binary representation, will arrive as the sequence 0 0 0 1 0 0 1 0.
Least Significant Bit First means that the least significant bit will arrive first: hence e.g. the same hexadecimal number 0x12, again 00010010 in binary representation, will arrive as the sequence 0 1 0 0 1 0 0 0. When the bit numbering starts at zero for the least significant bit, the numbering scheme is called "LSB 0"; this bit numbering method has the advantage that for any unsigned number the value of the number can be calculated by using exponentiation with the bit number and a base of 2. The value of an unsigned binary integer is therefore

∑_{i=0}^{N−1} b_i · 2^i,

where b_i denotes the value of bit i.
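A short sketch tying these conventions together: computing an unsigned value from its bits under LSB 0 numbering, and serializing a byte in both wire orders (illustrative Python):

```python
def value(bits: list[int]) -> int:
    """bits[i] is bit i (LSB 0); value = sum of b_i * 2**i."""
    return sum(b << i for i, b in enumerate(bits))

def msb_first(byte: int) -> list[int]:
    """Wire order with the most significant bit sent first."""
    return [(byte >> i) & 1 for i in range(7, -1, -1)]

def lsb_first(byte: int) -> list[int]:
    """Wire order with the least significant bit sent first."""
    return [(byte >> i) & 1 for i in range(8)]

assert value([1, 0, 1, 0, 1, 0, 0, 1]) == 149
print(msb_first(0x12))   # [0, 0, 0, 1, 0, 0, 1, 0]
print(lsb_first(0x12))   # [0, 1, 0, 0, 1, 0, 0, 0]
```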
In computer networking, Gigabit Ethernet is the term applied to the various technologies for transmitting Ethernet frames at a rate of a gigabit per second, as defined by the IEEE 802.3-2008 standard. It came into use beginning in 1999, supplanting Fast Ethernet in wired local networks as a result of being considerably faster; the cables and equipment are similar to previous standards and have been common and economical since 2010. Half-duplex gigabit links connected through repeater hubs were part of the IEEE specification, but the specification is no longer updated and full-duplex operation with switches is used exclusively. Ethernet was the result of research done at Xerox PARC in the early 1970s, and evolved into a widely implemented physical and link layer protocol. Fast Ethernet increased the speed from 10 to 100 megabits per second; Gigabit Ethernet was the next iteration. The initial standard for Gigabit Ethernet was produced by the IEEE in June 1998 as IEEE 802.3z and required optical fiber. 802.3z is referred to as 1000BASE-X, where -X refers to either -CX, -SX, -LX, or -ZX.
For the history behind the "X" see Fast Ethernet. IEEE 802.3ab, ratified in 1999, defines Gigabit Ethernet transmission over unshielded twisted pair category 5, 5e or 6 cabling and became known as 1000BASE-T. With the ratification of 802.3ab, Gigabit Ethernet became a desktop technology as organizations could use their existing copper cabling infrastructure. IEEE 802.3ah, ratified in 2004, added two more gigabit fiber standards, 1000BASE-LX10 and 1000BASE-BX10. This was part of a larger group of protocols known as Ethernet in the First Mile. Initially, Gigabit Ethernet was deployed in high-capacity backbone network links. In 2000, Apple's Power Mac G4 and PowerBook G4 were the first mass-produced personal computers featuring the 1000BASE-T connection; it soon became a built-in feature in many other computers. There are five physical layer standards for Gigabit Ethernet using optical fiber, twisted pair cable, or shielded balanced copper cable. The IEEE 802.3z standard includes 1000BASE-SX for transmission over multi-mode fiber, 1000BASE-LX for transmission over single-mode fiber, and the nearly obsolete 1000BASE-CX for transmission over shielded balanced copper cabling.
These standards use 8b/10b encoding, which inflates the line rate by 25%, from 1000 Mbit/s to 1250 Mbit/s, to ensure a DC-balanced signal; the symbols are sent using NRZ. Optical fiber transceivers are most often implemented as user-swappable modules in SFP form or, on older devices, GBIC. IEEE 802.3ab, which defines the widely used 1000BASE-T interface type, uses a different encoding scheme in order to keep the symbol rate as low as possible, allowing transmission over twisted pair. IEEE 802.3ap defines Ethernet operation over electrical backplanes at different speeds. Ethernet in the First Mile added 1000BASE-LX10 and -BX10. 1000BASE-X is used in industry to refer to Gigabit Ethernet transmission over fiber, where options include 1000BASE-SX, 1000BASE-LX, 1000BASE-LX10, 1000BASE-BX10 or the non-standard -EX and -ZX implementations; copper variants using the same 8b/10b line code are also included. 1000BASE-CX is an initial standard for Gigabit Ethernet connections with maximum distances of 25 meters using balanced shielded twisted pair and either DE-9 or 8P8C connectors.
The short segment length is due to the high signal transmission rate. 1000BASE-T has succeeded it for general copper wiring use, although 1000BASE-CX is still used for specific applications where cabling is done by IT professionals; for instance, the IBM BladeCenter uses 1000BASE-CX for the Ethernet connections between the blade servers and the switch modules. 1000BASE-KX is part of the IEEE 802.3ap standard for Ethernet operation over electrical backplanes. This standard defines one to four lanes of backplane links, with one RX and one TX differential pair per lane, at link bandwidths ranging from 100 Mbit/s to 10 Gbit/s; the 1000BASE-KX variant uses a 1.25 GBd electrical signalling speed. 1000BASE-SX is an optical fiber Gigabit Ethernet standard for operation over multi-mode fiber using a near-infrared light wavelength of 770 to 860 nanometers. The standard specifies a maximum length of 220 meters for 62.5 µm/160 MHz×km multi-mode fiber, 275 m for 62.5 µm/200 MHz×km, 500 m for 50 µm/400 MHz×km, and 550 m for 50 µm/500 MHz×km multi-mode fiber.
In practice, with good quality fiber and terminations, 1000BASE-SX will work over significantly longer distances. This standard is popular for intra-building links in large office buildings, co-location facilities and carrier-neutral Internet exchanges. Optical power specifications of the SX interface: minimum output power = −9.5 dBm; minimum receive sensitivity = −17 dBm. 1000BASE-LX is an optical fiber Gigabit Ethernet standard specified in IEEE 802.3 Clause 38 which uses a long-wavelength laser with a maximum RMS spectral width of 4 nm. 1000BASE-LX is specified to work over a distance of up to 5 km over 10 µm single-mode fiber. 1000BASE-LX can also run over all common types of multi-mode fiber with a maximum segment length of 550 m. For link distances greater than 300 m, the use of a special launch conditioning patch cord may be required; this launches the laser at a precise offset from the center of the fiber, which causes it to spread across the diameter of the fiber core, reducing the effect known as differential mode delay, which occurs when the laser couples onto only a small number of available modes in multi-mode fiber.
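As a rough illustration of the figures quoted above, a minimal sketch in Python (illustrative only; it simply restates the 8b/10b overhead and the SX power budget implied by the quoted transmitter and receiver specifications):

```python
# 8b/10b maps each 8-bit byte to a 10-bit symbol, so the line rate is
# 10/8 = 1.25x the data rate.
data_rate_mbps = 1000
line_rate_mbps = data_rate_mbps * 10 / 8
print(line_rate_mbps)            # 1250.0 Mbit/s on the wire

# Worst-case optical power budget of the SX interface: the difference
# between minimum transmit power and minimum receive sensitivity.
tx_min_dbm = -9.5
rx_sensitivity_dbm = -17.0
budget_db = tx_min_dbm - rx_sensitivity_dbm
print(budget_db)                 # 7.5 dB available for fiber loss, connectors etc.
```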
Fibre Channel is a high-speed data transfer protocol providing in-order, lossless delivery of raw block data, used to connect computer data storage to servers. Fibre Channel is primarily used in storage area networks in commercial data centers. Fibre Channel networks form a switched fabric. Fibre Channel typically runs on optical fiber cables within and between data centers, but can also run on copper cabling. Most block storage supports many upper level protocols. Fibre Channel Protocol is a transport protocol that predominantly transports SCSI commands over Fibre Channel networks. Mainframe computers run the FICON command set over Fibre Channel because of its high reliability and throughput. Fibre Channel can also be used to transport data from storage systems that use a solid-state flash memory storage medium, by transporting NVMe protocol commands. When the technology was devised, it ran over optical fiber cables only and, as such, was called "Fiber Channel". The ability to run over copper cabling was added later to the specification.
In order to avoid confusion and to create a unique name, the industry decided to change the spelling and use the British English fibre for the name of the standard. Fibre Channel is standardized in the T11 Technical Committee of the International Committee for Information Technology Standards, an American National Standards Institute-accredited standards committee. Fibre Channel started in 1988, with ANSI standard approval in 1994, to merge the benefits of multiple physical layer implementations including SCSI, HIPPI and ESCON. Fibre Channel was designed as a serial interface to overcome the limitations of the SCSI and HIPPI interfaces. FC was developed with leading-edge multi-mode optical fiber technologies that overcame the speed limitations of the ESCON protocol. By appealing to the large base of SCSI disk drives and leveraging mainframe technologies, Fibre Channel developed economies of scale for advanced technologies, and deployments became economical and widespread. Commercial products were released.
By the time the standard was ratified, lower speed versions were already growing out of use. Fibre Channel was the first serial storage transport to achieve gigabit speeds, where it saw wide adoption; its success grew with each successive speed. Fibre Channel has doubled in speed every few years since 1996, and has seen active development since its inception, with numerous speed improvements on a variety of underlying transport media. In addition to a modern physical layer, Fibre Channel added support for any number of "upper layer" protocols, including ATM, IP and FICON, with SCSI being the predominant usage. Two major characteristics of Fibre Channel networks are that they provide in-order and lossless delivery of raw block data. Lossless delivery is achieved by means of a credit-based flow control mechanism. There are three major Fibre Channel topologies, describing how a number of ports are connected together; a port in Fibre Channel terminology is any entity that actively communicates over the network, not necessarily a hardware port.
This port is implemented in a device such as disk storage, a Host Bus Adapter network connection on a server, or a Fibre Channel switch. Point-to-point: two devices are connected directly to each other; this is the simplest topology, with limited connectivity. Arbitrated loop: in this design, all devices are in a ring, similar to token ring networking. Adding or removing a device from the loop causes all activity on the loop to be interrupted, and the failure of one device causes a break in the ring, although Fibre Channel hubs may bypass failed ports. A loop may be made by cabling each port to the next in a ring. A minimal loop containing only two ports, while appearing to be similar to point-to-point, differs in terms of the protocol. Only one pair of ports can communicate concurrently on a loop, and the maximum speed is 8GFC; arbitrated loop has rarely been used after 2010. Switched fabric: in this design, all devices are connected to Fibre Channel switches, similar conceptually to modern Ethernet implementations. Advantages of this topology over point-to-point or arbitrated loop include: the fabric can scale to tens of thousands of ports.
The switches manage the state of the fabric, providing optimized paths via the Fabric Shortest Path First (FSPF) data routing protocol; the traffic between two ports flows through the switches only, not through any other ports as in arbitrated loop; failure of a port should not affect operation of other ports; and multiple pairs of ports may communicate simultaneously in a fabric. Fibre Channel does not follow the OSI model layering, and is split into five layers: FC-4, the protocol-mapping layer, in which upper level protocols such as NVMe, SCSI, IP or FICON are encapsulated into Information Units for delivery to FC-2 (current FC-4s include FCP-4, FC-SB-5 and FC-NVMe); FC-3, the common services layer, a thin layer that could implement functions like encryption or RAID redundancy algorithms; FC-2, the signaling protocol layer; FC-1, the transmission protocol layer, which implements line coding of signals; and FC-0, the physical layer, defined in the Fibre Channel Physical Interfaces standard.
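The credit mechanism mentioned above can be pictured as a simple counter per link: a transmitter starts with the number of buffers the receiver advertised and may only send while credits remain. A minimal Python sketch (illustrative, not the actual Fibre Channel buffer-to-buffer credit state machine):

```python
# Illustrative credit-based flow control: the sender consumes one credit
# per frame and regains it when the receiver signals a freed buffer
# (R_RDY in Fibre Channel terms). Frames are never dropped; the sender
# simply pauses when credits run out, giving lossless delivery.
class CreditLink:
    def __init__(self, advertised_buffers: int):
        self.credits = advertised_buffers

    def can_send(self) -> bool:
        return self.credits > 0

    def send_frame(self) -> None:
        if not self.can_send():
            raise RuntimeError("no credit: sender must wait")
        self.credits -= 1

    def receive_r_rdy(self) -> None:
        # Receiver freed one buffer; one credit is returned.
        self.credits += 1

link = CreditLink(advertised_buffers=2)
link.send_frame()
link.send_frame()
print(link.can_send())    # False: transmitter pauses instead of overrunning
link.receive_r_rdy()
print(link.can_send())    # True again
```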
A CD-ROM is a pre-pressed optical compact disc that contains data. Computers can read CD-ROMs but cannot write to or erase them; that is, a CD-ROM is a type of read-only memory. During the 1990s, CD-ROMs were popularly used to distribute software and data for computers and fourth-generation video game consoles. Some CDs, called enhanced CDs, hold both computer data and audio, with the latter capable of being played on a CD player, while the data is only usable on a computer. The CD-ROM format was developed by the Japanese company Denon in 1982. It was an extension of Compact Disc Digital Audio that adapted the format to hold any form of digital data, with a storage capacity of 553 MiB. CD-ROM was introduced by Denon and Sony at a Japanese computer show in 1984. The Yellow Book is the technical standard that defines the format: one of a set of color-bound books that contain the technical specifications for all CD formats, the Yellow Book, standardized by Sony and Philips in 1983, specifies a format for discs with a maximum capacity of 650 MiB. CD-ROMs are identical in appearance to audio CDs, and data are stored and retrieved in a similar manner.
Discs are made from a 1.2 mm thick disc of polycarbonate plastic, with a thin layer of aluminium to make a reflective surface. The most common size of CD-ROM is 120 mm in diameter, though the smaller Mini CD standard with an 80 mm diameter, as well as shaped compact discs in numerous non-standard sizes and molds, are also available. Data is stored on the disc as a series of microscopic indentations called pits, with the areas between them known as lands. A laser is shone onto the reflective surface of the disc to read the pattern of pits and lands. Because the depth of the pits is one-quarter to one-sixth of the wavelength of the laser light used to read the disc, the reflected beam's phase is shifted in relation to the incoming beam, causing destructive interference and reducing the reflected beam's intensity; this intensity pattern is converted into binary data. Several formats are used for data stored on compact discs, known as the Rainbow Books. The Yellow Book, published in 1988, defines the specifications for CD-ROMs, standardized in 1989 as the ISO/IEC 10149 / ECMA-130 standard.
The CD-ROM standard builds on top of the original Red Book CD-DA standard for CD audio. Other standards, such as the White Book for Video CDs, further define formats based on the CD-ROM specifications. The Yellow Book itself is not freely available, but the standards with the corresponding content can be downloaded for free from ISO or ECMA. There are several standards that define how to structure data files on a CD-ROM. ISO 9660 defines the standard file system for a CD-ROM. ISO 13490 is an improvement on this standard which adds support for non-sequential write-once and re-writeable discs such as CD-R and CD-RW, as well as multiple sessions. The ISO 13346 standard was designed to address most of the shortcomings of ISO 9660, and a subset of it evolved into the UDF format, which was adopted for DVDs. The bootable CD specification, issued in January 1995 to make a CD emulate a hard disk or floppy disk, is called El Torito. Data stored on CD-ROMs follows the standard CD data encoding techniques described in the Red Book specification.
This includes cross-interleaved Reed–Solomon coding, eight-to-fourteen modulation, and the use of pits and lands for coding the bits into the physical surface of the CD. The structures used to group data on a CD-ROM are also derived from the Red Book. Like audio CDs, a CD-ROM sector contains 2,352 bytes of user data, composed of 98 frames, each consisting of 33 bytes, of which 24 carry data (98 × 24 = 2,352). Unlike audio CDs, the data stored in these sectors can correspond to any type of digital data, not audio samples encoded according to the audio CD specification. To structure and protect this data, the CD-ROM standard further defines two sector modes, Mode 1 and Mode 2, which describe two different layouts for the data inside a sector. A track inside a CD-ROM only contains sectors in the same mode, but if multiple tracks are present in a CD-ROM, each track can have its sectors in a different mode from the rest of the tracks; tracks can also coexist with audio CD tracks, as is the case with mixed mode CDs. Both Mode 1 and Mode 2 sectors use the first 16 bytes for header information, but differ in how the remaining 2,336 bytes are used, due to the presence of error correction bytes in Mode 1.
Unlike an audio CD, a CD-ROM cannot rely on error concealment by interpolation. To achieve improved error correction and detection, Mode 1, used for digital data, adds a 32-bit cyclic redundancy check code for error detection and a third layer of Reed–Solomon error correction using a Reed–Solomon product-like code. Mode 1 therefore uses 288 bytes per sector for error detection and correction, leaving 2,048 bytes per sector available for data. Mode 2, which is more appropriate for image or video data, contains no additional error detection or correction bytes, and therefore has 2,336 available data bytes per sector. Note that both modes, like audio CDs, still benefit from the lower layers of error correction at the frame level. Before being stored on a disc with the techniques described above, each CD-ROM sector is scrambled to prevent some problematic patterns from showing up; these scrambled sectors then follow the same encoding process described in the Red Book in order to be stored on the disc.
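A small sketch of the sector arithmetic described above, using the byte counts quoted in the text (the 75 sectors-per-second playback rate is the standard CD figure):

```python
SECTOR = 2352          # bytes per CD-ROM sector, as on an audio CD
HEADER = 16            # sync + header bytes in Mode 1 and Mode 2

# Mode 1: 32-bit EDC plus product-like Reed-Solomon ECC consume 288 bytes.
MODE1_EDC_ECC = 288
mode1_data = SECTOR - HEADER - MODE1_EDC_ECC
print(mode1_data)      # 2048 bytes of user data per sector

# Mode 2: no extra protection, the rest of the sector is data.
mode2_data = SECTOR - HEADER
print(mode2_data)      # 2336 bytes of user data per sector

# A 74-minute disc holds 74 * 60 * 75 sectors (75 sectors per second).
sectors = 74 * 60 * 75
print(sectors * mode1_data / 2**20)   # ~650 MiB, matching the Yellow Book figure
```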
Telecommunication is the transmission of signs, signals, messages, writings and sounds, or information of any nature, by wire, radio, optical or other electromagnetic systems. Telecommunication occurs when the exchange of information between communication participants includes the use of technology; the information is transmitted either electrically over physical media, such as cables, or via electromagnetic radiation. Such transmission paths are often divided into communication channels, which afford the advantages of multiplexing. Since the Latin term communicatio is considered the social process of information exchange, the term telecommunications is often used in its plural form because it involves many different technologies. Early means of communicating over a distance included visual signals, such as beacons, smoke signals, semaphore telegraphs, signal flags and optical heliographs. Other examples of pre-modern long-distance communication included audio messages such as coded drumbeats, lung-blown horns and loud whistles. 20th- and 21st-century technologies for long-distance communication involve electrical and electromagnetic technologies, such as telegraph and teleprinter, radio, microwave transmission, fiber optics and communications satellites.
A revolution in wireless communication began in the first decade of the 20th century with the pioneering developments in radio communications by Guglielmo Marconi, who won the Nobel Prize in Physics in 1909, and by other notable pioneering inventors and developers in the field of electrical and electronic telecommunications. These included Charles Wheatstone and Samuel Morse, Alexander Graham Bell, Edwin Armstrong and Lee de Forest, as well as Vladimir K. Zworykin, John Logie Baird and Philo Farnsworth. The word telecommunication is a compound of the Greek prefix tele, meaning distant, far off, or afar, and the Latin communicare, meaning to share. Its modern use is adapted from the French; its written use was recorded in 1904 by the French engineer and novelist Édouard Estaunié. Communication was first used as an English word in the late 14th century; it comes from Old French comunicacion, from Latin communicationem, a noun of action from the past participle stem of communicare, "to share, divide out".
Homing pigeons have been used throughout history by different cultures. Pigeon post had Persian roots and was later used by the Romans to aid their military. The Greeks conveyed the names of the victors at the Olympic Games to various cities using homing pigeons. In the early 19th century, the Dutch government used the system in Sumatra, and in 1849, Paul Julius Reuter started a pigeon service to fly stock prices between Aachen and Brussels, a service that operated for a year until the gap in the telegraph link was closed. In the Middle Ages, chains of beacons were used on hilltops as a means of relaying a signal. Beacon chains suffered the drawback that they could only pass a single bit of information, so the meaning of the message, such as "the enemy has been sighted", had to be agreed upon in advance. One notable instance of their use was during the Spanish Armada, when a beacon chain relayed a signal from Plymouth to London. In 1792, Claude Chappe, a French engineer, built the first fixed visual telegraphy system between Lille and Paris.
However, semaphore suffered from the need for skilled operators and expensive towers at intervals of ten to thirty kilometres. As a result of competition from the electrical telegraph, the last commercial semaphore line was abandoned in 1880. On 25 July 1837 the first commercial electrical telegraph was demonstrated by English inventor Sir William Fothergill Cooke and English scientist Sir Charles Wheatstone. Both inventors viewed their device as "an improvement to the electromagnetic telegraph", not as a new device. Samuel Morse independently developed a version of the electrical telegraph that he unsuccessfully demonstrated on 2 September 1837; his code was an important advance over Wheatstone's signaling method. The first transatlantic telegraph cable was completed on 27 July 1866, allowing transatlantic telecommunication for the first time. The conventional telephone was invented independently by Alexander Bell and Elisha Gray in 1876. Antonio Meucci had invented the first device that allowed the electrical transmission of voice over a line in 1849.
However, Meucci's device was of little practical value because it relied upon the electrophonic effect and thus required users to place the receiver in their mouths to "hear" what was being said. The first commercial telephone services were set up in 1878 and 1879 on both sides of the Atlantic, in the cities of New Haven and London. Starting in 1894, Italian inventor Guglielmo Marconi began developing wireless communication using the newly discovered phenomenon of radio waves, showing by 1901 that they could be transmitted across the Atlantic Ocean; this was the start of wireless telegraphy by radio. Voice and music transmissions had little early success. World War I accelerated the development of radio for military communications. After the war, commercial radio AM broadcasting began in the 1920s and became an important mass medium for entertainment and news. World War II again accelerated development of radio for the wartime purposes of aircraft and land communication, radio navigation and radar. Development of stereo FM broadcasting of radio followed in the postwar decades.
The media-independent interface (MII) was defined as a standard interface to connect a Fast Ethernet media access control (MAC) block to a PHY chip. The MII connects different types of PHYs to MACs. Being media independent means that different types of PHY devices for connecting to different media can be used without redesigning or replacing the MAC hardware; thus any MAC may be used with any PHY, independent of the network signal transmission medium. The MII can be used to connect a MAC to an external PHY using a pluggable connector, or directly to a PHY chip on the same PCB. On a PC, the CNR connector Type B carries MII bus interface signals. The Management Data Input/Output (MDIO) serial bus is a subset of the MII, used to transfer management information between MAC and PHY. At power up, using autonegotiation, the PHY adapts to whatever it is connected to, unless settings are altered via the MDIO interface. The original MII transfers network data using 4-bit nibbles in each direction, clocked at 25 MHz to achieve 100 Mbit/s throughput.
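The clocking arithmetic is simple enough to spell out; a quick illustrative check that a 4-bit path at 25 MHz carries Fast Ethernet's 100 Mbit/s, and that the reduced variant described further below keeps the same throughput over half the data pins:

```python
# Throughput = path width (bits) x clock (MHz); both interfaces reach
# Fast Ethernet's 100 Mbit/s.
mii_mbps = 4 * 25     # MII:  4-bit nibbles at 25 MHz
rmii_mbps = 2 * 50    # RMII: 2-bit path at 50 MHz (see below)
print(mii_mbps, rmii_mbps)   # 100 100
```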
The original MII design has been extended to support increased speeds; current variants include the reduced media-independent interface (RMII), gigabit media-independent interface (GMII), reduced gigabit media-independent interface (RGMII), serial gigabit media-independent interface (SGMII), quad serial gigabit media-independent interface (QSGMII) and 10-gigabit media-independent interface (XGMII). The standard MII features a small set of registers: Basic Mode Configuration, Status Word, PHY Identification, Ability Advertisement, Link Partner Ability, and Auto Negotiation Expansion. The MII Status Word is the most useful datum, since it may be used to detect whether an Ethernet NIC is connected to a network; it contains a bitmask describing, among other things, link and autonegotiation status. The transmit clock is a free-running clock generated by the PHY based on the link speed; the remaining transmit signals are driven by the MAC synchronously on the rising edge of TX_CLK. This arrangement allows the MAC to operate without having to be aware of the link speed. The transmit enable signal is held high during frame transmission and low when the transmitter is idle.
Transmit error may be raised for one or more clock periods during frame transmission to request that the PHY deliberately corrupt the frame in some visible way that precludes it from being received as valid. This may be used to abort a frame when some problem is detected after transmission has started. The MAC may omit this signal if it has no use for the functionality, in which case the signal should be tied low for the PHY. More recently, raising transmit error outside frame transmission is used to indicate that the transmit data lines are being used for special-purpose signalling; the data value 0b0001 is used to request that an EEE-capable PHY enter low power mode. The first seven receiver signals are analogous to the transmitter signals, except that RX_ER is not optional and is used to indicate that the received signal could not be decoded to valid data. The receive clock is recovered from the incoming data stream; when no clock can be recovered, the PHY must present a free-running clock as a substitute. The receive data valid signal is not required to go high as soon as the frame starts, but must do so in time to ensure the "start of frame delimiter" byte is included in the received data.
Some of the preamble nibbles may be lost. Similar to transmit, raising RX_ER outside a frame is used for special signalling. For receive, two data values are defined: 0b0001 to indicate the link partner is in EEE low power mode, and 0b1110 for a "false carrier" indication. The CRS and COL signals are asynchronous to the receive clock and are only meaningful in half-duplex mode. Carrier sense is high when transmitting, receiving, or when the medium is otherwise sensed as being in use. If a collision is detected, COL goes high while the collision persists. In addition, the MAC may weakly pull up the COL signal, allowing the combination of COL high with CRS low to serve as an indication of an absent or disconnected PHY. MDC and MDIO can be shared among multiple PHYs; the interface requires 18 signals in total, of which only these two can be shared. This presents a problem for multiport devices, and for this reason the reduced media-independent interface was developed. Reduced media-independent interface (RMII) is a standard developed to reduce the number of signals required to connect a PHY to a MAC.
Four things were changed compared to the MII standard to achieve this. The two clocks TXCLK and RXCLK are replaced by a single clock; this clock is an input to the PHY rather than an output, which allows the clock signal to be shared among all PHYs in a multiport device, such as a switch. The clock frequency is doubled from 25 MHz to 50 MHz, while the data paths are narrowed to 2 bits rather than 4 bits, preserving the 100 Mbit/s throughput. The RXDV and CRS signals are multiplexed onto one signal, and the COL signal is removed. These changes mean that RMII uses about half the number of signals compared to MII. The high pin count of MII is more of a burden on microcontrollers with built-in MAC, FPGAs, multiport switches or repeaters, and PC motherboard chipsets than it is for a separate single-port Ethernet MAC, which explains why the older MII standard could afford to be more wasteful of pins. As with MII, MDC and MDIO can be shared among multiple PHYs.
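As noted above, the MII Status Word can be used to detect whether a link is up. A minimal sketch of how a driver might poll it over MDIO; `mdio_read` is a hypothetical platform-specific accessor, and the register and bit layout assumed here is the conventional IEEE 802.3 clause 22 one (an assumption of this sketch, not a quotation from the standard):

```python
BMSR_REG = 1          # assumed clause 22 layout: register 1 is the status word
LINK_STATUS_BIT = 2   # bit 2: link is up (latched low on link loss)
AUTONEG_DONE_BIT = 5  # bit 5: autonegotiation complete

def mdio_read(phy_addr: int, reg: int) -> int:
    """Hypothetical MDIO accessor; a real driver would bit-bang the
    MDC/MDIO lines or use the MAC's MDIO controller here."""
    raise NotImplementedError

def link_is_up(phy_addr: int) -> bool:
    # Read twice: the link bit latches low, so the first read reports
    # any link drop since the last poll and the second the live state.
    mdio_read(phy_addr, BMSR_REG)
    status = mdio_read(phy_addr, BMSR_REG)
    return bool(status & (1 << LINK_STATUS_BIT))
```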
USB 3.0 is the third major version of the Universal Serial Bus (USB) standard for interfacing computers and electronic devices. Among other improvements, USB 3.0 adds a new transfer rate referred to as SuperSpeed USB that can transfer data at up to 5 Gbit/s, about 10 times faster than the USB 2.0 standard. It is recommended that manufacturers distinguish USB 3.0 connectors from their USB 2.0 counterparts by using a blue color for the Standard-A receptacles and plugs, and by the initials SS. USB 3.1, released in July 2013, is the successor standard. USB 3.1 preserves the existing SuperSpeed transfer rate, giving it the new label USB 3.1 Gen 1, while defining a new SuperSpeed+ transfer mode, called USB 3.1 Gen 2, which can transfer data at up to 10 Gbit/s over the existing USB Type-A and USB-C connectors. USB 3.2, released in September 2017, replaces the USB 3.1 standard. It preserves the existing USB 3.1 SuperSpeed and SuperSpeed+ data modes and introduces two new SuperSpeed+ transfer modes over the USB-C connector using two-lane operation, with data rates of 10 and 20 Gbit/s.
The USB 3.0 specification is similar to USB 2.0, but with many improvements and an alternative implementation. Earlier USB concepts such as endpoints and the four transfer types are preserved, but the protocol and electrical interface are different; the specification defines a physically separate channel to carry USB 3.0 traffic. The changes in this specification make improvements in the following areas: transfer speed (a new transfer mode called SuperSpeed, or SS, at 5 Gbit/s); increased bandwidth (two unidirectional data paths instead of only one, one to receive data and the other to transmit); power management (U0 to U3 link power management states are defined); improved bus use (a new feature lets a device asynchronously notify the host of its readiness, with no need for polling); and support for rotating media (the bulk protocol is updated with a new feature called Stream Protocol that allows a large number of logical streams within an endpoint). USB 3.0 has transmission speeds of up to 5 Gbit/s, about ten times faster than USB 2.0, even without considering that USB 3.0 is full duplex whereas USB 2.0 is half duplex.
This gives USB 3.0 a potential total bidirectional bandwidth twenty times greater than USB 2.0. In USB 3.0, a dual-bus architecture is used to allow both USB 2.0 and USB 3.0 operations to take place, thus providing backward compatibility. The structural topology is the same, consisting of a tiered star topology with a root hub at level 0 and hubs at lower levels to provide bus connectivity to devices. A SuperSpeed transaction is initiated by a host request, followed by a response from the device; the device either accepts the request or rejects it. If the endpoint is halted, the device responds with a STALL handshake; if there is a lack of buffer space or data, it responds with a Not Ready (NRDY) signal to tell the host that it is not able to process the request. When the device is ready, it sends an Endpoint Ready (ERDY) notification to the host, which then reschedules the transaction. The use of unicast and the limited amount of multicast packets, combined with asynchronous notifications, enables links that are not passing packets to be put into reduced power states, which allows better power management.
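A quick sanity check of the "ten times" and "twenty times" figures above (illustrative arithmetic only):

```python
usb2_mbps = 480                 # USB 2.0 high speed, half duplex
usb3_mbps = 5000                # USB 3.0 SuperSpeed, per direction

print(usb3_mbps / usb2_mbps)        # ~10.4x, one direction at a time
print(2 * usb3_mbps / usb2_mbps)    # ~20.8x, counting both full-duplex directions
```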
The "SuperSpeed" bus provides for a transfer mode at a nominal rate of 5.0 Gbit/s, in addition to the three existing transfer modes. Accounting for the encoding overhead, the raw data throughput is 4 Gbit/s, the specification considers it reasonable to achieve 3.2 Gbit/s or more in practice. All data is sent as a stream of eight-bit segments that are scrambled and converted into 10-bit symbols via 8b/10b encoding. Scrambling is implemented using a free-running linear feedback shift register; the LFSR is reset whenever a COM symbol is received. Unlike previous standards, the USB 3.0 standard does not specify a maximum cable length, requiring only that all cables meet an electrical specification: for copper cabling with AWG 26 wires, the maximum practical length is 3 meters. As with earlier versions of USB, USB 3.0 provides power at 5 volts nominal. The available current for low-power SuperSpeed devices is 150 mA, an increase from the 100 mA defined in USB 2.0. For high-power SuperSpeed devices, the limit is six unit loads or 900 mA twice USB 2.0's 500 mA.
The term "available current" can be misunderstood. It implies that if a low power device or a USB2 device is connected to a USB3 port it can only draw 150 mA or 500 mA from that port. However, the available current for any USB device plugged into a USB3 port is 900 mA as defined by the USB3 spec; the actual current draw is determined by the device capability. The Vbus, pin 1, Ground, pin 4, are the same for USB 1, 2, or 3. A USB2 HDD with 2 USB2 connectors needing a total of 800 mA will draw full power from a single USB3 port. A USB2 phone will charge faster since 900 mA is "available" to it. USB 3.0 ports may implement other USB specifications for increased power, including the USB Battery Charging Specification for up to 1.5 A or 7.5 W, or, in the case of USB 3.1, the USB Power Delivery Specification for charging the host device up to 100 W. The USB 3.0 Promoter Group announced on 17 November 200