Coaxial cable, or coax, is a type of electrical cable that has an inner conductor surrounded by a tubular insulating layer, surrounded in turn by a tubular conducting shield. Many coaxial cables also have an insulating outer sheath or jacket; the term coaxial comes from the inner conductor and the outer shield sharing a geometric axis. Coaxial cable was invented by English engineer and mathematician Oliver Heaviside, who patented the design in 1880. Coaxial cable is a type of transmission line, used to carry high-frequency electrical signals with low losses. It is used in such applications as telephone trunklines, broadband internet networking cables, high-speed computer data buses, cable television signal distribution, and connecting radio transmitters and receivers to their antennas. It differs from other shielded cables because the dimensions of the cable and connectors are controlled to give a precise, constant conductor spacing, which is needed for it to function efficiently as a transmission line. Coaxial cable is used as a transmission line for radio-frequency signals.
Its applications include feedlines connecting radio transmitters and receivers to their antennas, computer network connections, digital audio, and distribution of cable television signals. One advantage of coaxial cable over other types of radio transmission line is that in an ideal coaxial cable the electromagnetic field carrying the signal exists only in the space between the inner and outer conductors; this allows coaxial cable runs to be installed next to metal objects such as gutters without the power losses that occur in other types of transmission line. Coaxial cable also provides protection of the signal from external electromagnetic interference. Coaxial cable conducts electrical signals using an inner conductor surrounded by an insulating layer, all enclosed by a shield of one to four layers of woven metallic braid and metallic tape; the cable is protected by an outer insulating jacket. The shield is kept at ground potential and a signal-carrying voltage is applied to the center conductor. The advantage of the coaxial design is that the electric and magnetic fields are restricted to the dielectric, with little leakage outside the shield.
Conversely, electric and magnetic fields outside the cable are largely kept from interfering with signals inside it. Larger-diameter cables and cables with multiple shields have less leakage; this property makes coaxial cable a good choice for carrying weak signals that cannot tolerate interference from the environment, and for stronger signals that must not be allowed to radiate or couple into adjacent structures or circuits. Common applications of coaxial cable include video and CATV distribution, RF and microwave transmission, and computer and instrumentation data connections. The characteristic impedance of the cable is determined by the dielectric constant of the inner insulator and the radii of the inner and outer conductors. In radio-frequency systems, where the cable length is comparable to the wavelength of the signals transmitted, a uniform characteristic impedance is important to minimize loss; the source and load impedances are chosen to match the impedance of the cable, to ensure maximum power transfer and minimum standing wave ratio.
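The dependence on dielectric constant and conductor radii can be sketched numerically with the standard formula Z₀ = (60/√εᵣ)·ln(D/d), where D is the inner diameter of the shield and d is the outer diameter of the inner conductor. The function name and example dimensions below are illustrative, not taken from any particular cable datasheet:

```python
import math

def coax_impedance(eps_r, d_inner, d_shield):
    """Characteristic impedance (ohms) of a coaxial line from its geometry.

    eps_r    -- relative permittivity (dielectric constant) of the insulator
    d_inner  -- outer diameter of the inner conductor
    d_shield -- inner diameter of the shield (same units as d_inner)
    """
    return 60.0 / math.sqrt(eps_r) * math.log(d_shield / d_inner)

# An air line (eps_r = 1) with a diameter ratio of e gives exactly 60 ohms.
z_air = coax_impedance(1.0, 1.0, math.e)

# A solid-polyethylene cable (eps_r ~ 2.25) with a 0.9 mm inner conductor
# and a 5.9 mm shield works out to roughly 75 ohms, a common CATV impedance.
z_catv = coax_impedance(2.25, 0.9, 5.9)
```

Note how increasing the dielectric constant lowers the impedance, which is one reason foam dielectrics are chosen for low-loss 75-ohm cable.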
Other important properties of coaxial cable include attenuation as a function of frequency, voltage handling capability, and shield quality. Coaxial cable design choices affect physical size, frequency performance, power handling capability, flexibility, and cost. The inner conductor might be solid or stranded; stranding improves flexibility. To get better high-frequency performance, the inner conductor may be silver-plated. Copper-plated steel wire is often used as an inner conductor for cable used in the cable TV industry. The insulator surrounding the inner conductor may be solid plastic, a foam plastic, or air with spacers supporting the inner wire. The properties of the dielectric insulator determine some of the electrical properties of the cable. A common choice is a solid polyethylene insulator, used in lower-loss cables; solid Teflon is also used as an insulator, and some coaxial lines use spacers to keep the inner conductor from touching the shield. Many conventional coaxial cables use braided copper wire to form the shield; this allows the cable to be flexible, but it means there are gaps in the shield layer, and the inner dimension of the shield varies because the braid cannot be flat.
Sometimes the braid is silver-plated. For better shield performance, some cables have a double-layer shield; the shield might be just two braids, but it is more common now to have a thin foil shield covered by a wire braid. Some cables have more than two shield layers, such as "quad-shield", which uses four alternating layers of foil and braid. Other shield designs sacrifice flexibility for better performance; such cables cannot be bent sharply, as the shield will kink, causing losses in the cable. When a foil shield is used, a small wire conductor incorporated into the foil makes soldering the shield termination easier. For high-power radio-frequency transmission up to about 1 GHz, coaxial cable with a solid copper outer conductor is available in sizes of 0.25 inch upward. The outer conductor is corrugated like a bellows to permit flexibility, and the inner conductor is held in position by a plastic spiral to approximate an air dielectric. One brand name for such cable is Heliax. Coaxial cables require an internal structure of an insulating material to maintain the spacing between the center conductor and shield.
In telecommunications, a repeater is an electronic device that receives a signal and retransmits it. Repeaters are used to extend transmissions so that the signal can cover longer distances or be received on the other side of an obstruction; some types of repeaters broadcast an identical signal but alter its method of transmission, for example by using another frequency or baud rate. There are several different types of repeaters; a broadcast relay station, for example, is a repeater used in broadcast television. When an information-bearing signal passes through a communication channel, it is progressively degraded due to loss of power. For example, when a telephone call passes through a wire telephone line, some of the power in the electric current which represents the audio signal is dissipated as heat in the resistance of the copper wire; the longer the wire is, the more power is lost and the smaller the amplitude of the signal at the far end. With a long enough wire the call will not be audible at the other end.
Similarly, the farther a receiver is from a radio station, the weaker the radio signal and the poorer the reception. A repeater is an electronic device in a communication channel that increases the power of a signal and retransmits it, allowing it to travel further. Since it amplifies the signal, it requires a source of electric power. The term "repeater" originated with telegraphy in the 19th century, where it referred to an electromechanical device used to regenerate telegraph signals; use of the term has continued in data communications. In computer networking, because repeaters work with the actual physical signal and do not attempt to interpret the data being transmitted, they operate on the physical layer, the first layer of the OSI model. Landline repeater: This is used to increase the range of telephone signals in a telephone line. These repeaters are most often used in trunklines that carry long-distance calls. In an analog telephone line consisting of a pair of wires, the repeater consists of an amplifier circuit made of transistors which use power from a DC current source to increase the power of the alternating current audio signal on the line.
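The cumulative power loss described above is conveniently reckoned in decibels, since losses along a line simply add. A minimal sketch (the helper names and the 2 dB/km loss figure are illustrative, not from any particular cable):

```python
import math

def db(p_in, p_out):
    """Power ratio between input and output, expressed in decibels."""
    return 10 * math.log10(p_in / p_out)

def power_after(p_in_mw, loss_db_per_km, km):
    """Signal power (mW) remaining after a run of lossy line."""
    return p_in_mw * 10 ** (-loss_db_per_km * km / 10)

# 1 mW fed into a line losing 2 dB/km: after 30 km only a millionth of the
# power remains, so a repeater with 60 dB of gain restores the original level.
remaining = power_after(1.0, 2.0, 30.0)   # 1e-6 mW
gain_needed = db(1.0, remaining)          # 60 dB
```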
Since the telephone is a duplex communication system, the wire pair carries two audio signals, one going in each direction, so telephone repeaters have to be bilateral, amplifying the signal in both directions without causing feedback, which complicates their design considerably. Telephone repeaters were the first type of repeater and were among the first applications of amplification; the development of telephone repeaters between 1900 and 1915 made long-distance phone service possible. Today, most telecommunications cables are fiber optic cables. Before the invention of electronic amplifiers, mechanically coupled carbon microphones were used as amplifiers in telephone repeaters. After the turn of the 20th century it was found that negative-resistance mercury lamps could amplify, and they were used; the invention of Audion tube repeaters around 1916 made transcontinental telephony practical. In the 1930s vacuum tube repeaters using hybrid coils became commonplace, allowing the use of thinner wires.
In the 1950s negative-impedance gain devices became more popular, and a transistorized version called the E6 repeater was the final major type used in the Bell System before the low cost of digital transmission made all voiceband repeaters obsolete. Frequency-frogging repeaters were commonplace in frequency-division multiplexing systems from the middle to late 20th century. Submarine cable repeater: This is a type of telephone repeater used in underwater submarine telecommunications cables. Optical communications repeater: This is used to increase the range of signals in a fiber optic cable. Digital information travels through a fiber optic cable in the form of short pulses of light; the light is made up of particles called photons, which can be scattered in the fiber. An optical communications repeater consists of a phototransistor which converts the light pulses to an electrical signal, an amplifier to increase the power of the signal, an electronic filter which reshapes the pulses, and a laser which converts the electrical signal back to light and sends it out the other fiber.
However, optical amplifiers are being developed for repeaters to amplify the light itself, without the need to convert it to an electric signal first. Radio repeater: This is used to extend the range of coverage of a radio signal. The history of radio relay repeaters began in 1898 with a publication by Johann Mattausch in the Austrian journal Zeitschrift für Elektrotechnik, but his proposed "translator" was not suitable for use; the first working relay system with radio repeaters was invented in 1899 by Émile Guarini-Foresio. A radio repeater consists of a radio receiver connected to a radio transmitter; the received signal is amplified and retransmitted, often on another frequency, to provide coverage beyond an obstruction. Use of a duplexer can allow the repeater to use one antenna for both receiving and transmitting at the same time. Broadcast relay station, rebroadcaster or translator: This is a repeater used to extend the coverage of a radio or television broadcasting station; it consists of a secondary television transmitter.
The signal from the main transmitter comes over leased telephone lines or by microwave relay. Microwave relay: This is a specialized point-to-point telecommunications link, consisting of a microwave receiver that receives information over a beam of microwaves from a distant transmitter, and a microwave transmitter that passes the information on to the next station in the chain.
Twisted pair cabling is a type of wiring in which two conductors of a single circuit are twisted together for the purpose of improving electromagnetic compatibility. Compared to a single conductor or an untwisted balanced pair, a twisted pair reduces electromagnetic radiation from the pair and crosstalk between neighboring pairs, and improves rejection of external electromagnetic interference. It was invented by Alexander Graham Bell. In a balanced line, the two wires carry equal and opposite signals, and the destination detects the difference between the two; this is known as differential signaling. Noise sources introduce signals into the wires by coupling of electric or magnetic fields and tend to couple to both wires equally; the noise thus produces a common-mode signal which is canceled at the receiver when the difference signal is taken. Differential signaling starts to fail, however, when the interfering source is close enough that it couples to the two wires unequally; this problem is apparent in telecommunication cables, where pairs in the same cable lie next to each other for many miles.
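The common-mode cancellation just described can be shown with a toy numeric example (the values are arbitrary; a real line would also attenuate the signal and a real receiver would amplify it):

```python
# One signal is driven onto the pair as +s on one wire and -s on the other.
# Interference couples equally to both wires (common mode), so subtracting
# the two wires at the receiver cancels the noise and doubles the signal.
signal = [0.0, 1.0, -1.0, 0.5]
noise = [0.3, -0.2, 0.7, 0.1]          # identical on both wires

wire_a = [s + n for s, n in zip(signal, noise)]    # carries +signal
wire_b = [-s + n for s, n in zip(signal, noise)]   # carries -signal

recovered = [a - b for a, b in zip(wire_a, wire_b)]  # = 2 * signal, noise gone
```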
Twisting the pairs counters this effect, as on each half twist the wire nearest to the noise source is exchanged. Provided the interfering source remains uniform, or nearly so, over the distance of a single twist, the induced noise will remain common-mode. The twist rate makes up part of the specification for a given type of cable. When nearby pairs have equal twist rates, the same conductors of the different pairs may repeatedly lie next to each other, undoing the benefits of differential mode; for this reason it is specified that, at least for cables containing small numbers of pairs, the twist rates must differ. In contrast to shielded or foiled twisted pair, unshielded twisted pair (UTP) cable is not surrounded by any shielding. UTP is the primary wire type for telephone usage and is common for computer networking. The earliest telephones used open-wire single-wire earth-return circuits. In the 1880s electric trams were installed in many cities, and they induced noise into these circuits. Lawsuits being unavailing, the telephone companies converted to balanced circuits, which had the incidental benefit of reducing attenuation, hence increasing range.
As electrical power distribution became more commonplace, this measure proved inadequate. Two wires, strung on either side of cross bars on utility poles, shared the route with electrical power lines. Within a few years, the growing use of electricity again brought an increase of interference, so engineers devised a method called wire transposition to cancel out the interference. In wire transposition, the wires exchange position once every several poles; in this way, the two wires would receive similar EMI from power lines. This represented an early implementation of twisting, with a twist rate of about four twists per kilometre, or six per mile. Such open-wire balanced lines with periodic transpositions still survive today in some rural areas. Twisted-pair cabling was invented by Alexander Graham Bell in 1881. By 1900, the entire American telephone line network was either twisted pair or open wire with transposition to guard against interference. Today, most of the millions of kilometres of twisted pairs in the world are outdoor landlines, owned by telephone companies, used for voice service, and only handled or seen by telephone workers.
Unshielded twisted pair cables are found in many Ethernet networks and telephone systems. For indoor telephone applications, UTP is grouped into sets of 25 pairs according to a standard 25-pair color code developed by AT&T Corporation. A typical subset of these colors shows up in most UTP cables. The cables are made with copper wires of 22 or 24 American Wire Gauge, with the colored insulation made from an insulator such as polyethylene or FEP, and the total package covered in a polyethylene jacket. For urban outdoor telephone cables containing hundreds or thousands of pairs, the cable is divided into small but identical bundles; each bundle consists of twisted pairs, and the bundles are in turn twisted together to make up the cable. Pairs having the same twist rate within the cable can still experience some degree of crosstalk, so wire pairs are selected to minimize crosstalk within a large cable. UTP cable is the most common cable used in computer networking; modern Ethernet, the most common data networking standard, can use UTP cables.
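The 25-pair color code mentioned above combines five major (tip) colors with five minor (ring) colors, giving 25 distinct pairs. A small sketch of the scheme; the helper function name is my own:

```python
# Major (tip) and minor (ring) colors of the standard 25-pair color code.
MAJOR = ["white", "red", "black", "yellow", "violet"]
MINOR = ["blue", "orange", "green", "brown", "slate"]

def pair_colors(n):
    """Return the (major, minor) colors of pair number n (1-25)."""
    if not 1 <= n <= 25:
        raise ValueError("pair number must be between 1 and 25")
    return MAJOR[(n - 1) // 5], MINOR[(n - 1) % 5]

print(pair_colors(1))    # ('white', 'blue')
print(pair_colors(25))   # ('violet', 'slate')
```

Each major color covers a group of five pairs, so pair 7, for example, is red/orange.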
Twisted pair cabling is used in data networks for short and medium length connections because of its lower cost compared to optical fiber and coaxial cable. UTP is also finding increasing use in video applications such as security cameras; many cameras include a UTP output with screw terminals. As UTP is a balanced transmission line, a balun is needed to connect to unbalanced equipment, for example any equipment using BNC connectors and designed for coaxial cable. Some twisted pair cables incorporate shielding in an attempt to prevent electromagnetic interference. Shielding provides an electrically conductive barrier to attenuate electromagnetic waves external to the shield; such shielding can be applied to individual pairs or quads. Individual pairs may be foil shielded, while an overall cable may use a braided screen or foil.
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over cable media such as wires or optical fiber cables, or wireless media such as Wi-Fi. Computer devices that originate and terminate the data are called network nodes. Nodes are identified by network addresses and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, as well as many others.
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, the traffic control mechanisms, and organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system semi-automatic business research environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, and later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the Aloha network developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology, or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls, and video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network also allows sharing of files, data, and other types of information, giving authorized users the ability to access information stored on other computers on the network.
Hub (network science)
In network science, a hub is a node with a number of links that greatly exceeds the average. The emergence of hubs is a consequence of the scale-free property of networks. While hubs cannot be observed in a random network, they are expected to emerge in scale-free networks; the emergence of hubs in scale-free networks is associated with the power-law degree distribution. Hubs have a significant impact on the network topology, and can be found in many real networks, such as the Internet. A hub is a high-degree node: it has a larger number of links in comparison with the other nodes in the network. The number of links for a hub in a scale-free network is much higher than for the biggest node in a random network, keeping the size N of the network and the average degree ⟨k⟩ constant. The existence of hubs is the biggest difference between random networks and scale-free networks. In random networks, the degree k is comparable for every node; in scale-free networks, a few nodes have a very high degree k while the other nodes have a small number of links.
The emergence of hubs can be explained by the difference between scale-free networks and random networks. Scale-free networks differ from random networks in two aspects: growth and preferential attachment. Scale-free network models assume continuous growth of the number of nodes N, compared to random networks which assume a fixed number of nodes. In scale-free networks the degree of the largest hub rises polynomially with the size of the network, so the degree of a hub can be high in a large scale-free network. In random networks the degree of the largest node rises only logarithmically with N, so the largest node remains small even in a large network. A new node in a scale-free network has a tendency to link to a node with a higher degree, compared to a new node in a random network, which links itself to a random node; this process is called preferential attachment. The resulting concentration of links in a few nodes is characterized by a power-law distribution; this idea was introduced by Vilfredo Pareto, who used it to explain why a small percentage of the population earns most of the money.
This process is present in networks as well; for example, 80 percent of web links point to 15 percent of webpages. The emergence of scale-free networks is not typical only of networks created by human action, but also of networks such as metabolic networks or illness networks; this phenomenon may be explained by the example of hubs on the World Wide Web such as Facebook or Google. These webpages are well known, and therefore the tendency of other webpages to point to them is much higher than the tendency to link to random small webpages. The mathematical formulation of the Barabási–Albert model is as follows: the network begins with an initial connected network of m_0 nodes. New nodes are added to the network one at a time; each new node is connected to m ≤ m_0 existing nodes with a probability proportional to the number of links that the existing nodes already have. Formally, the probability p_i that the new node is connected to node i is p_i = k_i / Σ_j k_j, where k_i is the degree of node i and the sum is taken over all pre-existing nodes j.
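The growth-plus-preferential-attachment rule can be sketched as a short simulation. This is a minimal illustration of the model, not a reference implementation; the "stub list" trick, in which each node appears once per incident edge, is a standard way to sample nodes with probability proportional to degree:

```python
import random

def barabasi_albert(n, m, seed=42):
    """Grow a graph by preferential attachment (Barabási–Albert model).

    Each new node attaches to m distinct existing nodes, chosen with
    probability p_i = k_i / sum_j k_j. Returns {node: set_of_neighbours}.
    """
    rng = random.Random(seed)
    # Seed network: a star on m + 1 nodes, so every node starts with degree >= 1.
    adj = {i: set() for i in range(m + 1)}
    stubs = []                       # each node listed once per incident edge
    for i in range(1, m + 1):
        adj[0].add(i)
        adj[i].add(0)
        stubs += [0, i]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:      # degree-proportional sampling via stubs
            targets.add(rng.choice(stubs))
        adj[new] = set(targets)
        for t in targets:
            adj[t].add(new)
            stubs += [new, t]
    return adj

g = barabasi_albert(500, 2)
degrees = [len(nbrs) for nbrs in g.values()]
# A hub emerges: the maximum degree far exceeds the average degree (~2m).
```

Running the same growth process with uniform (rather than degree-proportional) target selection produces no comparable hubs, which is the contrast the text draws with random networks.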
The emergence of hubs in networks is also related to time. In scale-free networks, nodes which emerged earlier have a higher chance of becoming a hub than latecomers; this phenomenon is called the first-mover advantage, and it explains why some nodes become hubs and some do not. However, in a real network the time of emergence is not the only factor that influences the size of a hub. For example, Facebook emerged 8 years after Google became the largest hub on the World Wide Web, and yet by 2011 Facebook had become the largest hub of the WWW. Therefore, in real networks the growth and size of a hub depend on various attributes such as the popularity, quality, or aging of a node. Hubs in a scale-free network have several notable attributes. The more prominent the hubs in a network, the more they shrink the distances between nodes; in a scale-free network, hubs serve as bridges between the low-degree nodes. Since the distance between two random nodes in a scale-free network is small, we refer to scale-free networks as "small-world" or "ultra-small" networks. While the difference in path distance among various small networks may not be noticeable, the difference in path distance between a large random network and a large scale-free network is remarkable.
The average path length in scale-free networks scales as ℓ ∼ ln N / ln ln N. The aging phenomenon is also present in real networks, and it is responsible for changes in the topology of networks; an example of the aging phenomenon is the case of Facebook overtaking the position of the largest hub on the Web, a position Google had held since 2000. Hubs are also key components of the network under random failure of nodes or targeted attack: a scale-free network is exceptionally robust to random failures, since most randomly chosen nodes are small, but it is vulnerable to attacks that target the hubs directly.
An optical fiber is a flexible, transparent fiber made by drawing glass or plastic to a diameter slightly thicker than that of a human hair. Optical fibers are used most often as a means to transmit light between the two ends of the fiber, and find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths than electrical cables. Fibers are used instead of metal wires because signals travel along them with less loss. Fibers are also used for illumination and imaging, and are often wrapped in bundles so they may be used to carry light into, or images out of, confined spaces, as in the case of a fiberscope. Specially designed fibers are used for a variety of other applications, some of them being fiber-optic sensors and fiber lasers. Optical fibers include a core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection, which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers, while those that support a single mode are called single-mode fibers.
Multi-mode fibers have a larger core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 1,000 meters. Being able to join optical fibers with low loss is important in fiber-optic communication; this is more complex than joining electrical wire or cable and involves careful cleaving of the fibers, precise alignment of the fiber cores, and the coupling of these aligned cores. For applications that demand a permanent connection, a fusion splice is common: in this technique, an electric arc is used to melt the ends of the fibers together. Another common technique is a mechanical splice, where the ends of the fibers are held in contact by mechanical force. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors. The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber optics.
The term was coined by Indian physicist Narinder Singh Kapany, who is widely acknowledged as the father of fiber optics. Guiding of light by refraction, the principle that makes fiber optics possible, was first demonstrated by Daniel Colladon and Jacques Babinet in Paris in the early 1840s. John Tyndall included a demonstration of it in his public lectures in London 12 years later. Tyndall wrote about the property of total internal reflection in an introductory book about the nature of light in 1870: "When the light passes from air into water, the refracted ray is bent towards the perpendicular... When the ray passes from water to air it is bent from the perpendicular... If the angle which the ray in water encloses with the perpendicular to the surface be greater than 48 degrees, the ray will not quit the water at all: it will be reflected at the surface.... The angle which marks the limit where total reflection begins is called the limiting angle of the medium. For water this angle is 48°27′, for flint glass it is 38°41′, while for diamond it is 23°42′."
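Tyndall's "limiting angle" is what is now called the critical angle of total internal reflection, and it follows directly from Snell's law: sin θ_c = n₂/n₁, where n₁ is the index of the denser medium. A small sketch (the function name is my own; the refractive index used is the modern value for water):

```python
import math

def critical_angle_deg(n_inside, n_outside=1.0):
    """Angle from the normal beyond which light is totally internally
    reflected at a boundary (Snell's law: sin(theta_c) = n2 / n1)."""
    if n_outside >= n_inside:
        raise ValueError("total internal reflection needs n_inside > n_outside")
    return math.degrees(math.asin(n_outside / n_inside))

# Water (n ~ 1.333) gives about 48.6 degrees, close to Tyndall's 48 deg 27 min.
theta_water = critical_angle_deg(1.333)
```

In an optical fiber the same relation applies at the core/cladding boundary, with the cladding index in place of air.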
In the late 19th and early 20th centuries, light was guided through bent glass rods to illuminate body cavities. Practical applications such as close internal illumination during dentistry appeared early in the twentieth century. Image transmission through tubes was demonstrated independently by the radio experimenter Clarence Hansell and the television pioneer John Logie Baird in the 1920s. In the 1930s, Heinrich Lamm showed that one could transmit images through a bundle of unclad optical fibers and used it for internal medical examinations, but his work was largely forgotten. In 1953, Dutch scientist Bram van Heel first demonstrated image transmission through bundles of optical fibers with a transparent cladding. That same year, Harold Hopkins and Narinder Singh Kapany at Imperial College in London succeeded in making image-transmitting bundles with over 10,000 fibers, and subsequently achieved image transmission through a 75 cm long bundle which combined several thousand fibers. Their article titled "A flexible fibrescope, using static scanning" was published in the journal Nature in 1954.
The first practical fiber-optic semi-flexible gastroscope was patented by Basil Hirschowitz, C. Wilbur Peters, and Lawrence E. Curtiss, researchers at the University of Michigan, in 1956. In the process of developing the gastroscope, Curtiss produced the first glass-clad fibers. A variety of other image transmission applications soon followed. Kapany coined the term fiber optics, wrote a 1960 article in Scientific American that introduced the topic to a wide audience, and wrote the first book about the new field. The first working fiber-optic data transmission system was demonstrated by German physicist Manfred Börner at Telefunken Research Labs in Ulm in 1965, followed by the first patent application for this technology in 1966. NASA used fiber optics in its television cameras; at the time, the use in the cameras was classified confidential, and employees handling the cameras had to be supervised by someone with an appropriate security clearance. Charles K. Kao and George A. Hockham of the British company Standard Telephones and Cables were the first, in 1965, to promote the idea that the attenuation in optical fibers could be reduced below 20 decibels per kilometer, making fibers a practical communication medium.
They proposed that the attenuation in fibers available at the time was caused by impurities that could be removed, rather than by fundamental physical effects such as scattering.
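The significance of the 20 dB/km threshold can be made concrete with the standard decibel loss formula: a total loss of L decibels leaves a fraction 10^(-L/10) of the launched power. The sketch below (not from the source; function names are my own) computes that fraction.

```python
# Sketch: how decibel attenuation translates into remaining optical power.
# At the 20 dB/km threshold promoted by Kao and Hockham, only 1% of the
# launched power survives each kilometre of fibre.

def remaining_fraction(loss_db: float) -> float:
    """Fraction of input power left after a total loss of loss_db decibels."""
    return 10 ** (-loss_db / 10)

def power_after(km: float, db_per_km: float = 20.0) -> float:
    """Fraction of power remaining after `km` kilometres of fibre."""
    return remaining_fraction(km * db_per_km)

print(power_after(1))    # 0.01 -> 1% of the signal left after 1 km at 20 dB/km
print(power_after(0.5))  # 0.1  -> 10% left after half a kilometre
```

Losses compound multiplicatively with distance, which is why even modest per-kilometre improvements in attenuation had such a large effect on practical link length.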
Single point of failure
A single point of failure (SPOF) is a part of a system that, if it fails, will stop the entire system from working. SPOFs are undesirable in any system with a goal of high availability or reliability, be it a business practice, software application, or other industrial system. Systems can be made robust by adding redundancy at all potential SPOFs. For instance, the owner of a small tree care company may own only one woodchipper. If the chipper breaks, he may be unable to complete his current job and may have to cancel future jobs until he can obtain a replacement. Redundancy can be achieved at various levels. For instance, the owner of the tree care company may keep spare parts on hand for the repair of the woodchipper in case it fails. At a higher level, he may have a second woodchipper. At the highest level, he may have enough equipment available to replace everything at the work site in the case of multiple failures. The assessment of a potential SPOF involves identifying the critical components of a complex system whose malfunction would provoke a total system failure.
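One way to make this assessment concrete is to model the system as a graph of components and connections: a component whose removal disconnects the graph (an articulation point) is a structural SPOF. The sketch below is illustrative only and is not from the source; it uses the classic depth-first-search approach.

```python
# Illustrative sketch: finding single points of failure in a system modelled
# as an undirected graph. An articulation point is a node whose removal
# disconnects the graph, i.e. a structural SPOF.

from collections import defaultdict

def single_points_of_failure(edges):
    """Return the articulation points of an undirected graph given as edge pairs."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)

    disc, low, spofs = {}, {}, set()
    counter = [0]

    def dfs(node, parent):
        disc[node] = low[node] = counter[0]
        counter[0] += 1
        children = 0
        for nbr in graph[node]:
            if nbr == parent:
                continue
            if nbr in disc:
                low[node] = min(low[node], disc[nbr])
            else:
                children += 1
                dfs(nbr, node)
                low[node] = min(low[node], low[nbr])
                # A non-root node is a SPOF if some subtree cannot reach above it.
                if parent is not None and low[nbr] >= disc[node]:
                    spofs.add(node)
        # The root is a SPOF only if it has more than one DFS child.
        if parent is None and children > 1:
            spofs.add(node)

    for node in list(graph):
        if node not in disc:
            dfs(node, None)
    return spofs

# A chain A-B-C has B as its only SPOF; adding the edge A-C removes it.
print(single_points_of_failure([("A", "B"), ("B", "C")]))             # {'B'}
print(single_points_of_failure([("A", "B"), ("B", "C"), ("A", "C")])) # set()
```

The second call illustrates the article's central remedy: adding a redundant path eliminates the single point of failure.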
Reliable systems should not rely on any such individual component. In computing, redundancy can be achieved at the internal component level, at the system level, or at the site level. In a high-availability server cluster, each individual server may attain internal component redundancy by having multiple power supplies, hard drives, and other components. System-level redundancy can be obtained by having spare servers waiting to take on the work of another server if it fails; one would also deploy a load balancer to ensure high availability for the cluster as a whole. Since a data center is a support center for other operations such as business logic, it represents a potential SPOF in itself. Thus, at the site level, the entire cluster may be replicated at another location, where it can be accessed in case the primary location becomes unavailable; this is addressed as part of an IT disaster recovery program. Paul Baran and Donald Davies developed packet switching, a key part of "survivable communications networks".
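The system-level redundancy described above can be sketched as a dispatcher that tries a primary server first and falls back to spares, so that no single server sits on the request path as a SPOF. This is a minimal sketch with hypothetical names, not a real load-balancer API.

```python
# Minimal sketch of system-level redundancy: a request is routed to the first
# healthy server in a list; spares take over when the primary fails.
# Servers are modelled as callables; names and behaviour are hypothetical.

def dispatch(request, servers):
    """Send `request` to the first healthy server; raise only on total outage."""
    for server in servers:
        try:
            return server(request)
        except ConnectionError:
            continue  # this replica is down; try the next spare
    raise RuntimeError("total outage: every redundant server failed")

def healthy(request):
    return f"handled: {request}"

def crashed(request):
    raise ConnectionError("server unreachable")

print(dispatch("GET /", [crashed, healthy]))  # prints "handled: GET /"
```

The point of the sketch is that a failure of the primary is absorbed by the spare; only the failure of every replica at once stops the system, which is exactly what site-level replication is then meant to guard against.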
Such networks – including ARPANET and the Internet – are designed to have no single point of failure. Multiple paths between any two points on the network allow those points to continue communicating with each other, the packets "routing around" damage after any single failure of any one particular path or any one intermediate node. Network protocols used to prevent a SPOF include Intermediate System to Intermediate System (IS-IS), Open Shortest Path First (OSPF), and Shortest Path Bridging (SPB). In software engineering, a bottleneck occurs when the capacity of an application or a computer system is limited by a single component; the bottleneck has the lowest throughput of all parts of the transaction path. Tracking down bottlenecks is called performance analysis. Reduction is achieved with the help of specialized tools known as performance analyzers or profilers; the objective is to make those particular sections of code perform as fast as possible and so improve overall algorithmic efficiency. A mistake in just one component can compromise the entire system.
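The bottleneck rule stated above – the slowest stage caps the throughput of the whole transaction path – can be expressed in a few lines. The stage names and rates below are invented for the example; this is what a profiler's summary effectively surfaces.

```python
# Illustrative sketch: in a serial transaction path, end-to-end capacity is
# limited by the stage with the lowest throughput, i.e. the bottleneck.
# Stage names and rates are hypothetical.

def bottleneck(stage_throughputs):
    """Return the (stage, rate) pair with the lowest throughput."""
    return min(stage_throughputs.items(), key=lambda item: item[1])

pipeline = {"parse": 5000, "validate": 1200, "write_db": 300}  # requests/s
stage, rate = bottleneck(pipeline)
print(stage, rate)  # write_db 300 -- the whole path runs at 300 req/s
```

Speeding up any stage other than the bottleneck leaves end-to-end throughput unchanged, which is why performance analysis focuses on finding the limiting component first.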
The concept of a single point of failure has been applied to fields outside of engineering and networking, such as corporate supply chain management and transportation management. Design structures that create single points of failure include bottlenecks and series circuits. A noted transportation example is the Nipigon River Bridge in Canada, where a partial bridge failure in January 2016 severed road traffic between Eastern Canada and Western Canada for several days, because the bridge is located along a portion of the Trans-Canada Highway with no alternate detour route for vehicles. The concept has also been applied to the field of intelligence: Edward Snowden spoke of the dangers of being what he described as "the single point of failure" – the sole repository of information.