A network bridge is a computer networking device that creates a single aggregate network from multiple communication networks or network segments. This function is called network bridging. Bridging is distinct from routing: routing allows multiple networks to communicate independently and yet remain separate, whereas bridging connects two separate networks as if they were a single network. In the OSI model, bridging is performed in the data link layer. If one or more segments of the bridged network are wireless, the device is known as a wireless bridge. There are four main types of network bridging technologies: simple bridging, multiport bridging, learning or transparent bridging, and source route bridging. Transparent bridging uses a table called the forwarding information base to control the forwarding of frames between network segments; the table starts empty and entries are added as the bridge receives frames. If a destination address entry is not found in the table, the frame is flooded to all segments except the one from which it was received.
By means of these flooded frames, a host on the destination network will respond and a forwarding database entry will be created. Both source and destination addresses are used in this process: source addresses are recorded as entries in the table, while destination addresses are looked up in the table and matched to the proper segment to send the frame to. Digital Equipment Corporation developed the technology in the 1980s. In the context of a two-port bridge, the forwarding information base can be thought of as a filtering database: for each frame, the bridge decides either to forward or to filter. If the bridge determines that the destination host is on another segment of the network, it forwards the frame to that segment. If the destination address belongs to the same segment as the source address, the bridge filters the frame, preventing it from reaching the other network where it is not needed. Transparent bridging can also operate over devices with more than two ports; as an example, consider a bridge connected to three hosts, A, B, and C.
The bridge has three ports: A is connected to bridge port 1, B to bridge port 2, and C to bridge port 3. A sends a frame addressed to B to the bridge; the bridge examines the source address of the frame and creates an address and port number entry for A in its forwarding table. The bridge examines the destination address, does not find it in its forwarding table, and floods the frame to all other ports, 2 and 3; the frame is received by hosts B and C. Host C ignores the frame. Host B recognizes a destination address match and generates a response to A. On the return path, the bridge adds an address and port number entry for B to its forwarding table; since the bridge already has A's address in its forwarding table, it forwards the response only to port 1. Host C, and any other hosts on port 3, are not burdened with the response. Two-way communication is now possible between A and B without any further flooding on the network. A simple bridge connects two network segments, operating transparently and deciding on a frame-by-frame basis whether or not to forward from one network to the other.
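The learning, filtering, and flooding behaviour in this walkthrough can be sketched in a few lines of Python. This is an illustrative model only; the class, port numbering, and address strings are invented for the example.

```python
# Minimal sketch of transparent-bridge learning and forwarding.
# Hosts A, B, C on ports 1, 2, 3, as in the walkthrough above.

class Bridge:
    def __init__(self, num_ports):
        self.ports = list(range(1, num_ports + 1))
        self.fib = {}  # forwarding information base: MAC address -> port

    def receive(self, src, dst, ingress_port):
        """Return the list of ports the frame is sent out on."""
        # Learn: record the source address against its ingress port.
        self.fib[src] = ingress_port
        if dst in self.fib:
            egress = self.fib[dst]
            if egress == ingress_port:
                return []        # filter: destination is on the same segment
            return [egress]      # forward to the known port only
        # Unknown destination: flood to all ports except the ingress.
        return [p for p in self.ports if p != ingress_port]

bridge = Bridge(num_ports=3)
print(bridge.receive("A", "B", 1))  # B unknown, flood: [2, 3]
print(bridge.receive("B", "A", 2))  # A learned earlier, forward: [1]
print(bridge.receive("A", "B", 1))  # B now learned, forward: [2]
```

After the first exchange, frames between A and B follow the table entries and no further flooding occurs, matching the walkthrough.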
A store and forward technique is used so that, as part of forwarding, frame integrity is verified on the source network and CSMA/CD delays are accommodated on the destination network. In contrast to repeaters, which extend the maximum span of a segment, bridges only forward frames that are required to cross the bridge. Additionally, bridges reduce collisions by creating a separate collision domain on either side of the bridge. A multiport bridge connects multiple networks and operates transparently to decide on a frame-by-frame basis whether to forward traffic; additionally, it must decide where to forward the traffic. Like the simple bridge, a multiport bridge uses store and forward operation; the multiport bridge function serves as the basis for network switches. The forwarding information base, stored in content-addressable memory (CAM), is initially empty. For each received Ethernet frame, the switch learns from the frame's source MAC address and adds this, together with an ingress interface identifier, to the forwarding information base.
The switch forwards the frame to the interface found in the CAM based on the frame's destination MAC address. If the destination address is unknown, the switch sends the frame out on all interfaces except the one it was received on; this behaviour is called unicast flooding. Once a bridge learns the addresses of its connected nodes, it forwards data link layer frames using a layer-2 forwarding method. There are four forwarding methods a bridge can use, of which the second through fourth were performance-increasing methods when used on "switch" products with the same input and output port bandwidths. Store and forward: the switch buffers and verifies each frame before forwarding it. Cut-through: the switch starts forwarding as soon as the frame's destination address has been received. There is no error checking with this method; when the outgoing port is busy at the time the frame arrives, the switch falls back to store-and-forward operation. Store-and-forward is also used when the egress port runs at a faster data rate than the ingress port. Fragment-free: a method that attempts to retain the benefits of both store and forward and cut-through.
Fragment-free checks the first 64 bytes of the frame. According to Ethernet specifications, collisions should be detected during the first 64 bytes of a frame, so frames that are in error because of a collision will not be forwarded; this way, collision fragments are filtered out while latency remains lower than with full store and forward.
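The practical difference between the three methods is how much of the frame the switch must hold before it begins transmitting. A minimal sketch, with the byte counts taken from the description above and the helper function invented for illustration:

```python
# How many bytes a switch buffers before forwarding, per method.
# 64 bytes is the Ethernet collision window / minimum frame size;
# the destination MAC is the first 6 bytes of the frame.

DST_MAC_BYTES = 6        # cut-through can start once the destination is read
FRAGMENT_FREE_BYTES = 64 # fragment-free waits out the collision window

def bytes_buffered_before_forwarding(method, frame_len):
    """Bytes held before transmission starts, for a frame of frame_len bytes."""
    if method == "store-and-forward":
        return frame_len                       # whole frame buffered, FCS checked
    if method == "cut-through":
        return min(DST_MAC_BYTES, frame_len)
    if method == "fragment-free":
        return min(FRAGMENT_FREE_BYTES, frame_len)
    raise ValueError(f"unknown method: {method}")

for m in ("store-and-forward", "cut-through", "fragment-free"):
    print(m, bytes_buffered_before_forwarding(m, 1518))
```

For a full-size 1518-byte frame, store and forward buffers everything, cut-through only the leading header bytes, and fragment-free the 64-byte collision window, which is why the latter two reduce latency at the cost of error checking.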
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over cable media such as wires or optical fiber, or wireless media such as Wi-Fi. Network computer devices that originate and terminate the data are called network nodes. Nodes, which are identified by network addresses, can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services such as access to the World Wide Web, digital video and audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, as well as many others.
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, the traffic control mechanism, and organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, and were used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. Also in 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the ALOHA network developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls, and video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files, data, and other types of information, giving authorized users the ability to access information stored on other computers on the network.
ERS 3500 and ERS 2500 series
The Ethernet Routing Switch 3500 Series and Ethernet Routing Switch 2500 Series are stackable routing switches designed and manufactured by Avaya. The switches can be stacked up to eight units high in a 'stacking' configuration. The ERS 3500 series consists of six models, the ERS 3526T, ERS 3526T-PWR+, ERS 3510GT, ERS 3510GT-PWR+, ERS 3524GT and ERS 3524GT-PWR+, and the ERS 2500 series of four models, the ERS 2526T, ERS 2526T-PWR, ERS 2550T and ERS 2550T-PWR. The 'PWR' suffix indicates that the switch can provide Power over Ethernet (PoE) on the copper Ethernet ports, and the '+' suffix indicates that the switch can provide PoE+ on the copper ports. These switches are all covered by Avaya's lifetime warranty. The ERS 3500 series became available in April 2012 with software release 5.0. The ERS 2500 product line became available in 2007 with software release 4.0, and the device was demonstrated in March at the Spring VoiceCon 2007. The product started to ship in March 2007, and in May 2007 a detailed evaluation comparing this switch with two competitors' switches found that it had better performance and a better total cost of ownership.
In 2008, Layer 3 routing support, secure web access with HTTPS, and TACACS+ were added to the software in version 4.2. In January 2008 another detailed evaluation of these systems was performed by Tolly Enterprises, LLC, comparing the 2500 systems to Catalyst 2960-24T and HP ProCurve 2626 and 2650 systems. In May 2009 Cisco published an evaluation comparing this switch with its 2000 and 3000 series switches, as the competitor spread fear and doubt about the product not having the ability to do routing even though the new routing software had been released a year earlier. In November 2010, IGMP multicast and IPv6 management were added in version 4.3. As of February 2012, software version 4.4, published in August 2011, is the latest software released for the product. The ERS 3500 series consists of four gigabit Ethernet models, the 3510GT, 3510GT-PWR+, 3524GT and 3524GT-PWR+, along with two fast Ethernet models, the 3526T and 3526T-PWR+. The switch leverages the 802.1AB Link Layer Discovery Protocol (LLDP) and LLDP media endpoint discovery, with auto discovery and auto configuration, to automatically configure or reconfigure itself for new phone installs or phone moves within one minute.
The switch can be installed standalone and field-upgraded via a license to support a resilient 'stackable chassis' configuration of up to eight switches. The stack-enabled version of the ERS 2500 switches does not require a license file. The ERS 2550T supports 48 ports of 10/100 plus two Gigabit uplink ports in a combo configuration of 1000BASE-T/SFP. The ERS 2550T-PWR supports PoE on half of the user ports. System scaling is accomplished by stacking eight ERS 2550T systems together to provide up to 384 ports of copper 10/100BASE-T, of which, with the ERS 2550T-PWR models, 192 ports support Power over Ethernet, plus up to 16 ports of 1000BASE-X small form-factor pluggable (SFP) transceivers. The ERS 2526T and ERS 2526T-PWR models offer 24 ports of 10/100 plus two Gigabit uplink ports in a combo configuration of 1000BASE-T/SFP. The ERS 2526T-PWR model offers PoE support on half of the user ports. Stacking eight ERS 2526T models provides 192 ports of copper 10/100BASE-T, of which, with the ERS 2526T-PWR models, 96 ports support PoE, plus up to 16 ports of 1000BASE-X SFP transceivers.
The system can stack any combination of these switches. The ERS 2500 series can be stacked with Flexible Advanced Stacking Technology, which allows eight switches to operate as a single logical system with a 32 Gbit/s virtual backplane. The stack operates on a bi-directional, shortest-path-forwarding star topology that allows traffic to flow either 'upstream' or 'downstream' from every switch, letting packets take the optimal forwarding path. The bi-directional paths allow traffic to be automatically redirected around any switch in the stack that is not operating properly; this stacking technology allows stackable switches to operate with the same performance and resiliency as a chassis solution. The entire stack can be managed from the base switch by several methods, for example by consoling into the base switch and using the command line or a menu. The switch allows link aggregation from ports on different stacked switches, either to other switches outside the stack or to give servers and other devices multiple connections to the stack for improved redundancy and throughput.
In computing, a firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules. A firewall establishes a barrier between a trusted internal network and an untrusted external network, such as the Internet. Firewalls are categorized as either network firewalls or host-based firewalls. Network firewalls run on network hardware; host-based firewalls run on host computers and control network traffic into and out of those machines. The term firewall originally referred to a wall intended to confine a fire within a building. Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment. The term was applied in the late 1980s to network technology that emerged when the Internet was new in terms of its global use and connectivity. The predecessors to firewalls for network security were the routers used in the late 1980s, which separated networks from one another, thus halting the spread of problems from one network to another.
The first reported type of network firewall is called a packet filter. Packet filters act by inspecting packets transferred between computers: when a packet does not match the packet filter's set of filtering rules, the packet filter either drops the packet (silently discards it) or rejects it (discards it and notifies the source); otherwise, the packet is allowed to pass. Packets may be filtered by source and destination network addresses, protocol, and source and destination port numbers. The bulk of Internet communication in the 20th and early 21st century used either the Transmission Control Protocol or the User Datagram Protocol in conjunction with well-known ports, enabling firewalls of that era to distinguish between, and thus control, specific types of traffic, unless the machines on each side of the packet filter used the same non-standard ports. The first paper published on firewall technology was in 1988, when engineers from Digital Equipment Corporation developed filter systems known as packet filter firewalls. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin continued their research in packet filtering and developed a working model for their own company based on their original first-generation architecture.
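The packet-filter behaviour described above can be sketched as a first-match rule evaluator. This is a toy model: the rule fields, the example rule set, and the default-drop policy are all assumptions made for illustration.

```python
# Illustrative stateless packet filter: first matching rule wins,
# and unmatched packets fall through to a default action.

def rule_matches(rule, pkt):
    """A rule field of None acts as a wildcard."""
    return all(rule.get(field) in (None, pkt[field])
               for field in ("src", "dst", "proto", "dport"))

def filter_packet(rules, pkt, default="drop"):
    for rule in rules:
        if rule_matches(rule, pkt):
            return rule["action"]
    return default

rules = [
    # Allow web traffic to one internal server (addresses invented).
    {"src": None, "dst": "10.0.0.5", "proto": "tcp", "dport": 80, "action": "accept"},
    # Reject all UDP, notifying the sender.
    {"src": None, "dst": None, "proto": "udp", "dport": None, "action": "reject"},
]

print(filter_packet(rules, {"src": "1.2.3.4", "dst": "10.0.0.5",
                            "proto": "tcp", "dport": 80}))   # accept
print(filter_packet(rules, {"src": "1.2.3.4", "dst": "10.0.0.9",
                            "proto": "tcp", "dport": 22}))   # drop
```

Note how the filter decides per packet in isolation, with no memory of previous packets; that statelessness is exactly what the second-generation firewalls described next improve on.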
From 1989 to 1990, three colleagues from AT&T Bell Laboratories, Dave Presotto, Janardan Sharma and Kshitij Nigam, developed the second generation of firewalls, calling them circuit-level gateways. Second-generation firewalls perform the work of their first-generation predecessors but also maintain knowledge of specific conversations between endpoints by remembering which port numbers the two IP addresses are using at layer 4 of the OSI model for their conversation, allowing examination of the overall exchange between the nodes. This type of firewall is vulnerable to denial-of-service attacks that bombard the firewall with fake connections in an attempt to overwhelm it by filling its connection state memory. Marcus Ranum, Wei Xu and Peter Churchyard released an application firewall known as Firewall Toolkit in October 1993; this became the basis for the Gauntlet firewall at Trusted Information Systems. The key benefit of application layer filtering is that it can understand certain applications and protocols.
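The connection-state tracking that distinguishes these second-generation firewalls, and the state-memory exhaustion they are vulnerable to, can be sketched as follows. This is a simplified model; the class, method names and the bounded table size are invented for illustration.

```python
# Toy stateful firewall: remember outbound conversations and only
# admit inbound traffic that is a reply to a known conversation.

class StatefulFilter:
    def __init__(self, max_entries=1000):
        self.connections = set()
        self.max_entries = max_entries  # finite state memory: the DoS target

    def outbound(self, src, sport, dst, dport):
        """Record a new outbound conversation; refuse if the table is full."""
        if len(self.connections) >= self.max_entries:
            return False  # state exhausted: fake connections can cause this
        self.connections.add((src, sport, dst, dport))
        return True

    def inbound_allowed(self, src, sport, dst, dport):
        # A legitimate reply reverses the endpoint pair of a known conversation.
        return (dst, dport, src, sport) in self.connections

fw = StatefulFilter()
fw.outbound("10.0.0.2", 40000, "93.184.216.34", 443)
print(fw.inbound_allowed("93.184.216.34", 443, "10.0.0.2", 40000))  # True
print(fw.inbound_allowed("93.184.216.34", 443, "10.0.0.2", 40001))  # False
```

The bounded `connections` set makes the denial-of-service weakness concrete: an attacker who fills the table with fake conversations prevents new legitimate flows from being admitted.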
This is useful as it can detect if an unwanted application or service is attempting to bypass the firewall using a disallowed protocol on an allowed port, or detect if a protocol is being abused in a harmful way. As of 2012, the so-called next-generation firewall is nothing more than "wider" or "deeper" inspection at the application layer. For example, the existing deep packet inspection functionality of modern firewalls can be extended to include intrusion prevention systems, user identity management integration, and web application firewalls (WAFs). Attacks against WAFs may be demonstrated with the tool "WAF Fingerprinting utilizing timing side channels". Firewalls are categorized as network-based or host-based. Network-based firewalls are positioned on the gateway computers of WANs and intranets; they are either software appliances running on general-purpose hardware, or hardware-based firewall computer appliances. Firewall appliances may offer other functionality to the internal network they protect, such as acting as a DHCP or VPN server for that network.
Host-based firewalls are positioned on the network node itself and control network traffic into and out of those machines. A host-based firewall may be a daemon or service as a part of the operating system, or an agent application such as endpoint security or protection software. Each approach has advantages and disadvantages, but each has a role in layered security. Firewalls also vary in type depending on where communication originates, where it is intercepted, and the state of communication being traced. Network layer firewalls, also called packet filters, operate at a low level of the TCP/IP protocol stack, not allowing packets to pass through the firewall unless they match the rule set established by the firewall administrator. The term "packet filter" originated in the context of BSD operating systems. Network layer firewalls fall into two sub-categories, stateful and stateless. Commonly used packet filters on various versions of Unix are ipfw, NPF, PF, ip
An Ethernet hub, active hub, network hub, repeater hub, multiport repeater, or simply hub is a network hardware device for connecting multiple Ethernet devices together and making them act as a single network segment. It has multiple input/output ports, in which a signal introduced at the input of any port appears at the output of every port except the original incoming port. A hub works at the physical layer of the OSI model. A repeater hub participates in collision detection, forwarding a jam signal to all ports if it detects a collision. In addition to standard 8P8C ports, some hubs may come with a BNC or an Attachment Unit Interface (AUI) connector to allow connection to legacy 10BASE2 or 10BASE5 network segments. Hubs are now obsolete, having been replaced by network switches except in old installations or specialized applications; as of 2011, connecting network segments by repeaters or hubs is deprecated by IEEE 802.3. A network hub is an unsophisticated device in comparison with a switch; as a multiport repeater, it works by repeating bits received from one of its ports to all other ports.
It is aware of physical-layer packets: it can detect their start, detect an idle line, and sense a collision, which it propagates by sending a jam signal. A hub cannot further examine or manage any of the traffic that comes through it: any packet entering any port is rebroadcast on all other ports. A hub/repeater has no memory to store any data in; a packet must be transmitted while it is received, and it is lost when a collision occurs. Because of this, hubs can only run in half duplex mode. Due to the larger collision domain, packet collisions are more frequent in networks connected using hubs than in networks connected using more sophisticated devices. The need for hosts to be able to detect collisions limits the number of hubs and the total size of a network built using hubs. For 10 Mbit/s networks built using repeater hubs, the 5-4-3 rule must be followed: up to five segments are allowed between any two end stations. For 10BASE-T networks, up to five segments and four repeaters are allowed between any two hosts.
For 100 Mbit/s networks, the limit is reduced to three segments between any two end stations, and even that is only allowed if the hubs are of Class II. Some hubs have manufacturer-specific stack ports allowing them to be combined in a way that allows more hubs than simple chaining through Ethernet cables, but even so, a large Fast Ethernet network is likely to require switches to avoid the chaining limits of hubs. Most hubs detect typical problems, such as excessive collisions and jabbering on individual ports, and partition the port, disconnecting it from the shared medium. Thus, hub-based twisted-pair Ethernet is more robust than coaxial-cable-based Ethernet, where a misbehaving device can adversely affect the entire collision domain. Even if not partitioned automatically, a hub simplifies troubleshooting because hubs remove the need to troubleshoot faults on a long cable with multiple taps. To pass data through the repeater in a usable fashion from one segment to the next, the framing and data rate must be the same on each segment.
This means that a repeater cannot connect an 802.3 segment and an 802.5 segment, or a 10 Mbit/s segment to 100 Mbit/s Ethernet. 100 Mbit/s hubs and repeaters come in two different speed grades: Class I hubs delay the signal for a maximum of 140 bit times, and Class II hubs delay the signal for a maximum of 92 bit times. In the early days of Fast Ethernet, Ethernet switches were expensive devices, and hubs suffered from the problem that if there were any 10BASE-T devices connected, the whole network needed to run at 10 Mbit/s. Therefore, a compromise between a hub and a switch appeared, known as a dual-speed hub. These devices make use of an internal two-port switch, bridging the 10 Mbit/s and 100 Mbit/s segments. When a network device becomes active on any of the physical ports, the device attaches it to either the 10 Mbit/s segment or the 100 Mbit/s segment, as appropriate; this obviated the need for an all-or-nothing migration to Fast Ethernet networks. These devices are considered hubs because the traffic between devices connected at the same speed is not switched.
Repeater hubs have been defined for Gigabit Ethernet, but commercial products have failed to appear due to the industry's transition to switching. The main reason for purchasing hubs rather than switches was their price; this motivator has been eliminated by reductions in the price of switches, but hubs can still be useful in special circumstances: For inserting a protocol analyzer into a network connection, a hub is an alternative to a network tap or port mirroring. A hub with both 10BASE-T ports and a 10BASE2 port can be used to connect a 10BASE2 segment to a modern Ethernet-over-twisted-pair network. A hub with both 10BASE-T ports and an AUI port can be used to connect a 10BASE5 segment to a modern network.
Fibre Channel is a high-speed data transfer protocol providing in-order, lossless delivery of raw block data, primarily used to connect computer data storage to servers. Fibre Channel is used in storage area networks in commercial data centers. Fibre Channel networks form a switched fabric. Fibre Channel runs on optical fiber cables within and between data centers, but can also run on copper cabling. Most block storage supports many upper-level protocols. Fibre Channel Protocol (FCP) is a transport protocol that predominantly transports SCSI commands over Fibre Channel networks. Mainframe computers run the FICON command set over Fibre Channel because of its high reliability and throughput. Fibre Channel can also be used to transport data from storage systems that use solid-state flash memory by transporting NVMe protocol commands. When the technology was devised, it ran over optical fiber cables only and, as such, was called "Fiber Channel". The ability to run over copper cabling was added later to the specification.
In order to avoid confusion and to create a unique name, the industry decided to change the spelling and use the British English fibre for the name of the standard. Fibre Channel is standardized in the T11 Technical Committee of the International Committee for Information Technology Standards (INCITS), an American National Standards Institute (ANSI)-accredited standards committee. Fibre Channel started in 1988, with ANSI standard approval in 1994, to merge the benefits of multiple physical layer implementations including SCSI, HIPPI and ESCON. Fibre Channel was designed as a serial interface to overcome the limitations of the SCSI and HIPPI interfaces. FC was developed with leading-edge multi-mode optical fiber technologies that overcame the speed limitations of the ESCON protocol. By appealing to the large base of SCSI disk drives and leveraging mainframe technologies, Fibre Channel developed economies of scale for advanced technologies, and deployments became economical and widespread. Commercial products were released.
By the time the standard was ratified, lower-speed versions were growing out of use. Fibre Channel was the first serial storage transport to achieve gigabit speeds, where it saw wide adoption, and its success grew with each successive speed. Fibre Channel has doubled in speed every few years since 1996 and has seen active development since its inception, with numerous speed improvements on a variety of underlying transport media. The following table shows the progression of native Fibre Channel speeds. In addition to a modern physical layer, Fibre Channel added support for any number of "upper layer" protocols, including ATM, IP and FICON, with SCSI being the predominant usage. Two major characteristics of Fibre Channel networks are that they provide in-order and lossless delivery of raw block data. Lossless delivery of raw block data is achieved with a credit mechanism. There are three major Fibre Channel topologies, describing how a number of ports are connected together. A port in Fibre Channel terminology is any entity that communicates over the network, not necessarily a hardware port.
This port is implemented in a device such as disk storage, a host bus adapter (HBA) network connection on a server, or a Fibre Channel switch. Point-to-point: two devices are connected directly to each other; this is the simplest topology, with limited connectivity. Arbitrated loop: in this design, all devices are in a ring, similar to Token Ring networking. Adding or removing a device from the loop causes all activity on the loop to be interrupted, and the failure of one device causes a break in the ring, although Fibre Channel hubs may bypass failed ports. A loop may be made by cabling each port to the next in a ring. A minimal loop containing only two ports, while appearing similar to point-to-point, differs in terms of the protocol. Only one pair of ports can communicate concurrently on a loop, and the maximum speed is 8GFC. Arbitrated loop has been rarely used after 2010. Switched fabric: in this design, all devices are connected to Fibre Channel switches, similar conceptually to modern Ethernet implementations. Advantages of this topology over point-to-point or arbitrated loop include: The fabric can scale to tens of thousands of ports.
The switches manage the state of the fabric, providing optimized paths via the Fabric Shortest Path First (FSPF) data routing protocol. The traffic between two ports flows through the switches and not through any other ports as in arbitrated loop. Failure of a port should not affect operation of other ports. Multiple pairs of ports may communicate simultaneously in a fabric. Fibre Channel does not follow the OSI model layering; it is split into five layers: FC-4 – Protocol-mapping layer, in which upper level protocols such as NVMe, SCSI, IP or FICON are encapsulated into Information Units for delivery to FC-2. Current FC-4s include FCP-4, FC-SB-5 and FC-NVMe. FC-3 – Common services layer, a thin layer that could implement functions like encryption or RAID redundancy algorithms. FC-2 – Signaling layer, which defines framing and flow control. FC-1 – Transmission protocol layer, which implements line coding of signals. FC-0 – Physical layer, which includes cabling and connectors. Layer FC-0 is defined in the Fibre Channel Physical Interfaces (FC-PI) standard.
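The credit mechanism behind Fibre Channel's lossless delivery, mentioned earlier, can be modelled as follows. This is a toy sketch of buffer-to-buffer credits; the class name and credit value are invented for illustration, though R_RDY is the actual credit-return primitive in Fibre Channel.

```python
# Toy model of Fibre Channel buffer-to-buffer credit flow control:
# a transmitter may only send while it holds a credit, so the
# receiver's buffers can never overflow and no frame is ever dropped.

class CreditLink:
    def __init__(self, bb_credit):
        self.credits = bb_credit  # receiver-advertised buffer count

    def send_frame(self):
        """Transmit one frame, consuming a credit; refuse if none remain."""
        if self.credits <= 0:
            return False          # out of credit: transmitter must wait
        self.credits -= 1
        return True

    def receive_r_rdy(self):
        """Receiver frees a buffer and returns an R_RDY, restoring a credit."""
        self.credits += 1

link = CreditLink(bb_credit=2)
print(link.send_frame())  # True
print(link.send_frame())  # True
print(link.send_frame())  # False: credits exhausted
link.receive_r_rdy()
print(link.send_frame())  # True: credit replenished
```

Because transmission pauses rather than overruns the receiver, the link achieves losslessness by flow control instead of retransmission, which is the property the text attributes to the credit mechanism.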
The RapidIO architecture is a high-performance packet-switched interconnect technology. RapidIO supports read/write and cache coherency semantics. RapidIO fabrics guarantee in-order packet delivery, enabling power- and area-efficient protocol implementation in hardware. Based on industry-standard electrical specifications such as those for Ethernet, RapidIO can be used as a chip-to-chip, board-to-board, and chassis-to-chassis interconnect. The protocol, marketed as "RapidIO - the unified fabric for Performance Critical Computing", is used in many applications, such as data centers and HPC, communications infrastructure, industrial automation, and military and aerospace systems, that are constrained by at least one of size and power. RapidIO has its roots in high-performance computing; the protocol was designed by Mercury Computer Systems and Motorola as a replacement for Mercury's proprietary RACEway bus and Freescale's PowerPC bus. The RapidIO Trade Association was formed in February 2000 and included telecommunications and storage OEMs as well as FPGA and switch companies.
The protocol was designed to meet the following objectives:

- Low latency
- Guaranteed, in-order packet delivery
- Support for messaging and read/write semantics
- Usability in systems with fault-tolerance/high-availability requirements
- Flow control mechanisms to manage short-term, medium-term and long-term congestion
- Efficient protocol implementation in hardware
- Low system power
- Scaling from two to thousands of nodes

The RapidIO Specification Revision 1.1, released in 2001, defined a wide, parallel bus. This specification did not achieve extensive commercial adoption. The RapidIO Specification Revision 1.2, released in 2002, defined a serial interconnect based on the XAUI physical layer; devices based on this specification achieved significant commercial success within wireless baseband and military compute. The RapidIO Specification Revision 2.0, released in 2008, added more port widths and increased the maximum lane speed to 6.25 GBd / 5 Gbit/s. Revision 2.1 expanded the commercial success of the 1.2 specification.
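The guaranteed in-order delivery objective can be illustrated with a toy receiver that accepts only the next expected packet, loosely in the spirit of RapidIO's link-level acknowledgement exchange (the identifier width and all protocol details here are simplified assumptions, not the actual RapidIO mechanism):

```python
# Toy sketch of link-level in-order delivery with acknowledgements.
# A packet arriving out of order is refused, forcing the sender to
# retransmit from the expected sequence number, so delivery order
# always matches transmission order.

class Receiver:
    def __init__(self):
        self.expected = 0          # next sequence number we will accept
        self.delivered = []

    def on_packet(self, seq, payload):
        if seq == self.expected:
            self.delivered.append(payload)
            self.expected = (self.expected + 1) % 64   # assume a 6-bit counter
            return ("ack", seq)
        # Out-of-order: refuse and tell the sender where to resume.
        return ("nack", self.expected)

rx = Receiver()
assert rx.on_packet(0, "p0") == ("ack", 0)
assert rx.on_packet(2, "p2") == ("nack", 1)   # packet 1 was lost in transit
assert rx.on_packet(1, "p1") == ("ack", 1)    # sender retransmits in order
assert rx.on_packet(2, "p2") == ("ack", 2)
assert rx.delivered == ["p0", "p1", "p2"]
```

Because ordering is enforced at the link level, hardware needs no reorder buffers, which is part of what makes the protocol implementation power- and area-efficient.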
The RapidIO Specification Revision 3.0, released in 2013, has the following changes and improvements compared to the 2.x specifications:

- Based on industry-standard Ethernet 10GBASE-KR electrical specifications for short- and long-reach applications
- Directly leverages the Ethernet 10GBASE-KR DME training scheme for long-reach signal quality optimization
- Defines a 64b/67b encoding scheme to support both copper and optical interconnects and to improve bandwidth efficiency
- Dynamic asymmetric links to save power
- Addition of a time synchronization capability similar to IEEE 1588, but much less expensive to implement
- Support for 32-bit device IDs, increasing maximum system size and enabling innovative hardware virtualization support
- A revised routing table programming model that simplifies network management software
- Packet exchange protocol optimizations

The RapidIO Specification Revision 4.0, released in 2016, has the following changes and improvements compared to the 3.x specifications:

- Support for a 25 GBd lane rate and physical layer specification, with associated programming model changes
- Allows IDLE3 to be used with any Baud Rate Class, with specified IDLE sequence negotiation
- Increased maximum packet size to 284 bytes in anticipation of the Cache Coherency specification
- Support for 16 physical layer priorities
- Support for "Error Free Transmission" for high-throughput isochronous information transfer

RapidIO fabrics enjoy dominant market share in the global deployment of 3G, 4G and LTE cellular infrastructure networks, with millions of RapidIO ports shipped into wireless base stations worldwide.
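The bandwidth-efficiency motivation for the 64b/67b encoding can be checked with quick arithmetic, here as a short Python sketch comparing it against the 8b/10b coding used by earlier serial RapidIO revisions:

```python
# Line-coding overhead: 8b/10b (used through RapidIO 2.x) versus the
# 64b/67b encoding introduced in revision 3.0.
lane_baud = 6.25e9                       # 6.25 GBd lane from revision 2.0

payload_8b10b = lane_baud * 8 / 10       # 5.0 Gbit/s: the "6.25 GBd / 5 Gbit/s" figure
payload_64b67b = lane_baud * 64 / 67     # ~5.97 Gbit/s on the very same lane

overhead_8b10b = 1 - 8 / 10              # 20% of line bits are coding overhead
overhead_64b67b = 1 - 64 / 67            # ~4.5% overhead
```

So moving to 64b/67b recovers most of the 20% of line bandwidth that 8b/10b spends on coding, independent of any increase in lane baud rate.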
RapidIO fabrics were designed to support connecting different types of processors from different manufacturers in a single system. This flexibility has driven the widespread use of RapidIO in wireless infrastructure equipment, where there is a need to combine heterogeneous DSP, FPGA and communications processors in a coupled system with low latency and high reliability. Data center and HPC analytics systems have been deployed using a RapidIO 2D torus mesh fabric, which provides a high-speed general-purpose interface among the system cartridges for applications that benefit from high-bandwidth, low-latency node-to-node communication. The RapidIO 2D torus unified fabric is routed in a torus ring configuration connecting up to 45 server cartridges, each capable of providing 5 Gbit/s-per-lane connections in each direction to its north, south and west neighbors. This allows the system to meet many unique HPC applications where efficient localized traffic is needed. Using an open modular data center and compute platform, a heterogeneous HPC system has showcased the low-latency attribute of RapidIO to enable real-time analytics.
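The node-to-node connectivity of a generic 2D torus can be sketched in a few lines. The helper below (the grid size and the four-neighbor wiring are illustrative assumptions, not the layout of the specific 45-cartridge product) shows the wrap-around adjacency and the short hop counts that make localized traffic efficient:

```python
# Sketch of a generic 2D torus fabric: each node links to its neighbors
# in both dimensions, with wrap-around edges at the grid boundaries.

def torus_neighbors(x, y, width, height):
    """Neighbors of node (x, y) on a width x height torus."""
    return [((x - 1) % width, y), ((x + 1) % width, y),
            (x, (y - 1) % height), (x, (y + 1) % height)]

def hop_distance(a, b, width, height):
    """Minimum hops between nodes, taking the shorter way around each ring."""
    dx = abs(a[0] - b[0])
    dy = abs(a[1] - b[1])
    return min(dx, width - dx) + min(dy, height - dy)

# On a 3x3 torus, "opposite" corners are only 2 hops apart thanks to the
# wrap-around links, versus 4 hops on a plain 3x3 mesh.
assert torus_neighbors(0, 0, 3, 3) == [(2, 0), (1, 0), (0, 2), (0, 1)]
assert hop_distance((0, 0), (2, 2), 3, 3) == 2
```

The wrap-around links roughly halve the worst-case hop count of a plain mesh, which is why torus fabrics suit workloads dominated by nearest-neighbor communication.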
In March 2015 a top-of-rack switch was announced to drive RapidIO into mainstream data center applications. The interconnect, or "bus", is one of the critical technologies in the design and development of spacecraft avionics systems, dictating their architecture and level of complexity. A host of architectures already exist, and these existing systems are sufficient for a given type of architecture requirement; for next-generation missions, however, a more capable avionics architecture is desired. A viable option toward the design and development of these next-generation architectures is to leverage existing commercia