A client is a piece of computer hardware or software that accesses a service made available by a server. The server is often on another computer system, in which case the client accesses the service by way of a network; the term applies to the role that devices play in the client–server model. For example, web browsers are clients that connect to web servers and retrieve web pages for display, email clients retrieve email from mail servers, and online chat uses a variety of clients. Multiplayer or online video games may also run as a client on each computer; the term "client" may be applied to the computers or devices that run the client software, or to the users who use it. The client–server model remains in wide use today. Clients and servers may even be computer programs run on the same machine, connecting via inter-process communication techniques.
Combined with Internet sockets, programs may connect to a service operating on a remote system through the Internet protocol suite. Servers wait for potential clients to initiate connections. The term was first applied to devices that were not capable of running their own stand-alone programs but could interact with remote computers via a network; these computer terminals were clients of the time-sharing mainframe computer. In one classification, client computers and devices are either thick clients, thin clients, or hybrid clients. A thick client, also known as a rich client or fat client, performs the bulk of any data processing operations itself and does not rely on the server; the personal computer is a common example of a fat client, because of its large set of features and capabilities and its light reliance upon a server. For example, a computer running an art program that shares the result of its work over a network is a thick client. A computer that runs almost entirely as a standalone machine, save to send or receive files via a network, is commonly called a workstation.
A thin client is a minimal sort of client. Thin clients use the resources of the host computer: a thin client only presents data that has already been processed by an application server, which performs the bulk of any required data processing. A device using a web application is a thin client. A hybrid client is a mixture of the above two client models. Similar to a fat client, it processes locally, but it relies on the server for storing persistent data; this approach offers features of both the fat client and the thin client. A device running an online version of the video game Diablo III is an example of a hybrid client.
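The client–server exchange described above can be sketched with Internet sockets. This is a minimal hypothetical example, with an echo server and its client running on the same machine and connecting over the loopback interface; real services would run on separate hosts across a network.

```python
import socket
import threading

# Server side: bind, listen, and wait for a client to initiate a connection.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once():
    conn, _ = srv.accept()                       # server waits for the client
    conn.sendall(b"echo: " + conn.recv(1024))    # provide the service
    conn.close()

t = threading.Thread(target=serve_once)
t.start()

# Client side: initiate the connection and send a request.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
print(reply.decode())   # prints: echo: hello
```

Here the client carries almost no logic of its own and relies on the server for the result, in the spirit of a thin client.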
A media access control address (MAC address) is a unique identifier assigned to a network interface controller (NIC). For communications within a network segment, it is used as a network address for most IEEE 802 network technologies, including Ethernet, Wi-Fi, and Bluetooth. Within the Open Systems Interconnection (OSI) model, MAC addresses are used in the medium access control protocol sublayer of the data link layer. As typically represented, MAC addresses are recognizable as six groups of two hexadecimal digits, separated by hyphens, colons, or no separator. A MAC address may be referred to as the burned-in address, and is also known as an Ethernet hardware address, hardware address, or physical address. A network node with multiple NICs must have a unique MAC address for each. Sophisticated network equipment such as a multilayer switch or router may require one or more permanently assigned MAC addresses. MAC addresses are most often assigned by the manufacturer of network interface cards; each is stored by a firmware mechanism and includes the manufacturer's organizationally unique identifier (OUI).
MAC addresses are formed according to the principles of two numbering spaces based on Extended Unique Identifiers (EUI) managed by the Institute of Electrical and Electronics Engineers (IEEE): EUI-48, which replaces the obsolete term MAC-48, and EUI-64. The original IEEE 802 MAC address comes from the original Xerox Network Systems Ethernet addressing scheme; this 48-bit address space contains 2^48, or 281,474,976,710,656, possible MAC addresses. The IEEE manages allocation of the MAC addresses originally known as MAC-48 and which it now refers to as EUI-48 identifiers. The IEEE has a target lifetime of 100 years for applications using EUI-48 space and restricts applications accordingly; it encourages adoption of the more plentiful EUI-64 for non-Ethernet applications. The distinction between EUI-48 and MAC-48 identifiers is in application only: MAC-48 was used to address hardware interfaces within existing 802-based networking applications, but the IEEE now considers MAC-48 an obsolete term, and EUI-48 is used in all cases.
In addition, the EUI-64 numbering system encompassed both MAC-48 and EUI-48 identifiers by a simple translation mechanism; these translations have since been deprecated. IPv6, one of the most prominent standards that uses a Modified EUI-64, treats MAC-48 as EUI-48 instead and extends MAC addresses to Modified EUI-64 by inserting FF-FE in the middle and inverting the U/L bit. An Individual Address Block (IAB) is an inactive registry activity, replaced by the MA-S registry product as of January 1, 2014. The IAB uses an OUI from the MA-L registry belonging to the IEEE Registration Authority, concatenated with 12 additional IEEE-provided bits, leaving only 12 bits for the IAB owner to assign to their individual devices. An IAB is ideal for organizations requiring no more than 4096 unique 48-bit numbers. Unlike an OUI, which allows the assignee to assign values in various different number spaces, the Individual Address Block could only be used to assign EUI-48 identifiers; all other potential uses based on the OUI from which the IABs are allocated are reserved and remain the property of the IEEE Registration Authority.
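The Modified EUI-64 translation just described can be sketched as follows; the MAC address used is a hypothetical example.

```python
def mac_to_modified_eui64(mac: str) -> str:
    """Convert a 48-bit MAC address to the Modified EUI-64 form used by
    IPv6: insert FF-FE between the two halves of the address and invert
    the U/L bit (bit 0x02 of the first octet)."""
    octets = bytearray(bytes.fromhex(mac.replace(":", "").replace("-", "")))
    assert len(octets) == 6, "expected a 48-bit MAC address"
    eui = octets[:3] + bytearray(b"\xff\xfe") + octets[3:]
    eui[0] ^= 0x02                       # invert the universal/local bit
    return "-".join(f"{b:02x}" for b in eui)

print(mac_to_modified_eui64("00:50:c2:12:34:56"))
# prints: 02-50-c2-ff-fe-12-34-56
```

The universally administered input (U/L bit 0) becomes an identifier with the bit set, which is the convention IPv6 interface identifiers rely on.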
Between 2007 and September 2012, the OUI value 00:50:C2 was used for IAB assignments; after September 2012, the value 40:D8:55 was used. The owners of an assigned IAB may continue to use the assignment. There is another registry, called MA-M; the MA-M assignment block provides 2^36 EUI-64 identifiers. The first 24 bits of the assigned MA-M block are an OUI assigned to the IEEE that will not be reassigned. Addresses can be either universally administered addresses or locally administered addresses. A universally administered address is uniquely assigned to a device by its manufacturer; the first three octets identify the organization that issued the identifier and are known as the organizationally unique identifier. The remainder of the address is assigned by that organization in nearly any manner it pleases, subject to the constraint of uniqueness. A locally administered address is assigned to a device by a network administrator, overriding the burned-in address. Universally administered and locally administered addresses are distinguished by the second-least-significant bit of the first octet of the address.
This bit is referred to as the U/L bit, short for Universal/Local, and identifies how the address is administered. If the bit is 0, the address is universally administered; if it is 1, the address is locally administered. In the example address 06-00-00-00-00-00, the first octet is 06, whose binary form is 00000110; the second-least-significant bit is 1, so this is a locally administered address. Another example that uses locally administered addresses is the DECnet protocol: the MAC address of the Ethernet interface is changed by the DECnet software to AA-00-04-00-XX-YY, where XX-YY reflects the DECnet network address of the host.
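The U/L-bit check described above can be sketched in a few lines; the sample addresses are the example from the text and a hypothetical universally administered address.

```python
def is_locally_administered(mac: str) -> bool:
    # The U/L bit is the second-least-significant bit (mask 0x02)
    # of the first octet of the address.
    first_octet = int(mac.replace("-", ":").split(":")[0], 16)
    return bool(first_octet & 0x02)

print(is_locally_administered("06-00-00-00-00-00"))  # True:  0x06 = 0b00000110
print(is_locally_administered("00-50-C2-00-00-00"))  # False: universally administered
```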
Ethernet is a family of computer networking technologies used in local area networks, metropolitan area networks and wide area networks. It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3, and has since retained a good deal of backward compatibility while being refined to support higher bit rates and longer link distances. Over time, Ethernet has replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET. The original 10BASE5 Ethernet uses coaxial cable as a shared medium, while the newer Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 megabits per second to the latest 400 gigabits per second. The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames; each frame contains source and destination addresses, and error-checking data so that damaged frames can be detected and discarded.
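The error-checking idea can be illustrated with a sketch. Ethernet's frame check sequence is a CRC-32 using the same polynomial as zlib.crc32, though the on-the-wire bit ordering of a real FCS differs from this plain integer form; the addresses and payload below are hypothetical.

```python
import zlib

dest = bytes.fromhex("ffffffffffff")   # hypothetical broadcast destination
src = bytes.fromhex("0050c2123456")    # hypothetical source address
ethertype = bytes.fromhex("0800")      # IPv4
payload = b"hello, ethernet"

frame = dest + src + ethertype + payload
fcs = zlib.crc32(frame) & 0xFFFFFFFF   # sender appends this checksum

# The receiver recomputes the CRC; any single corrupted byte changes it,
# so the damaged frame can be detected and discarded.
corrupted = bytearray(frame)
corrupted[20] ^= 0xFF
print(zlib.crc32(frame) & 0xFFFFFFFF == fcs)             # True
print(zlib.crc32(bytes(corrupted)) & 0xFFFFFFFF == fcs)  # False
```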
As per the OSI model, Ethernet provides services up to and including the data link layer. Features such as the 48-bit MAC address and the Ethernet frame format have influenced other networking protocols, including the Wi-Fi wireless networking technology. Ethernet is used in homes and industry; the Internet Protocol is commonly carried over Ethernet, so it is considered one of the key technologies that make up the Internet. Ethernet was developed at Xerox PARC between 1973 and 1974, inspired by ALOHAnet. The idea was first documented in a memo that Robert Metcalfe wrote on May 22, 1973, in which he named it after the luminiferous aether once postulated to exist as an "omnipresent, completely-passive medium for the propagation of electromagnetic waves." In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker and Butler Lampson as inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper; that same year, Ron Crane, Bob Garner and Roy Ogus facilitated the upgrade from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, released to the market in 1980.
Metcalfe left Xerox in June 1979 to form 3Com. He convinced Digital Equipment Corporation, Intel and Xerox to work together to promote Ethernet as a standard; as part of that process, Xerox agreed to relinquish their 'Ethernet' trademark. The first standard was published in September 1980 as "The Ethernet, A Local Area Network: Data Link Layer and Physical Layer Specifications". This so-called DIX standard specified 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit Ethertype field. Version 2 was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983. Ethernet initially competed with Token Ring and other proprietary protocols, but it was able to adapt to market realities and shift to inexpensive thin coaxial cable and ubiquitous twisted pair wiring. By the end of the 1980s, Ethernet was the dominant network technology; in the process, 3Com became a major company.
3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as Multibus-based Intel and Sun Microsystems computers. This was followed by DEC's Unibus to Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and, by 1985, 3Com had sold 100,000. Parallel port based Ethernet adapters were produced, with drivers for DOS and Windows. By the early 1990s, Ethernet became so prevalent that it was a must-have feature for modern computers, and Ethernet ports began to appear on some PCs and most workstations; this process was sped up with the introduction of 10BASE-T and its small modular connector, at which point Ethernet ports appeared even on low-end motherboards. Since then, Ethernet technology has evolved to meet new bandwidth and market requirements.
In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet, it is used in industrial applications and is replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year. In February 1980, the Institute of Electrical and Electronics Engineers started project 802 to standardize local area networks. The "DIX group", with Gary Robinson, Phil Arst and Bob Printis, submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring and Token Bus were considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal. Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products.
With such business implications in mind, David Liddle an
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over cable media such as copper wires or optical fiber, or over wireless media such as Wi-Fi. Network devices that originate and terminate the data are called network nodes. Nodes are identified by network addresses, and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, as well as many others.
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, the traffic control mechanisms and the organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system, based on the Aloha network, developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls and video conferencing. A network also allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network also allows sharing of files, data and other types of information.
In telecommunications, a repeater is an electronic device that receives a signal and retransmits it. Repeaters are used to extend transmissions so that the signal can cover longer distances or be received on the other side of an obstruction; some types of repeaters broadcast an identical signal, but alter its method of transmission, for example, on another frequency or baud rate. There are several different types of repeaters. A broadcast relay station is a repeater used in broadcast television; when an information-bearing signal passes through a communication channel, it is progressively degraded due to loss of power. For example, when a telephone call passes through a wire telephone line, some of the power in the electric current which represents the audio signal is dissipated as heat in the resistance of the copper wire; the longer the wire is, the more power is lost, the smaller the amplitude of the signal at the far end. So with a long enough wire the call will not be audible at the other end.
The farther a receiver is from a radio station, the weaker the radio signal and the poorer the reception. A repeater is an electronic device in a communication channel that increases the power of a signal and retransmits it, allowing it to travel further. Since it amplifies the signal, it requires a source of electric power. The term "repeater" originated with telegraphy in the 19th century, where it referred to an electromechanical device used to regenerate telegraph signals; use of the term has continued in data communications. In computer networking, because repeaters work with the actual physical signal and do not attempt to interpret the data being transmitted, they operate on the physical layer, the first layer of the OSI model. Landline repeater: This is used to increase the range of telephone signals in a telephone line. They are most often used in trunklines that carry long-distance calls. In an analog telephone line consisting of a pair of wires, the repeater consists of an amplifier circuit made of transistors which use power from a DC current source to increase the power of the alternating current audio signal on the line.
Since the telephone is a duplex communication system, the wire pair carries two audio signals, one going in each direction, so telephone repeaters have to be bilateral, amplifying the signal in both directions without causing feedback, which complicates their design considerably. Telephone repeaters were the first type of repeater and were among the first applications of amplification; the development of telephone repeaters between 1900 and 1915 made long-distance phone service possible. Today, most telecommunications cables are fiber optic cables. Before the invention of electronic amplifiers, mechanically coupled carbon microphones were used as amplifiers in telephone repeaters. After the turn of the 20th century it was found that negative-resistance mercury lamps could amplify, and they were used; the invention of audion tube repeaters around 1916 made transcontinental telephony practical. In the 1930s, vacuum tube repeaters using hybrid coils became commonplace, allowing the use of thinner wires.
In the 1950s, negative impedance gain devices were more popular, and a transistorized version called the E6 repeater was the final major type used in the Bell System before the low cost of digital transmission made all voiceband repeaters obsolete. Frequency frogging repeaters were commonplace in frequency-division multiplexing systems from the middle to late 20th century. Submarine cable repeater: This is a type of telephone repeater used in underwater submarine telecommunications cables. Optical communications repeater: This is used to increase the range of signals in a fiber optic cable. Digital information travels through a fiber optic cable in the form of short pulses of light; the light is made up of particles called photons, which can be scattered in the fiber. An optical communications repeater consists of a phototransistor which converts the light pulses to an electrical signal, an amplifier which increases the power of the signal, an electronic filter which reshapes the pulses, and a laser which converts the electrical signal back to light and sends it out the other fiber.
However, optical amplifiers are being developed for repeaters that amplify the light itself, without the need to convert it to an electric signal first. Radio repeater: This is used to extend the range of coverage of a radio signal. The history of radio relay repeaters began in 1898 with a publication by Johann Mattausch in the Austrian journal Zeitschrift für Elektrotechnik, but his proposed "translator" was not suitable for use; the first functioning relay system with radio repeaters was invented in 1899 by Émile Guarini-Foresio. A radio repeater consists of a radio receiver connected to a radio transmitter; the received signal is amplified and retransmitted on another frequency, to provide coverage beyond an obstruction. Use of a duplexer can allow the repeater to use one antenna for both receiving and transmitting at the same time. Broadcast relay station, rebroadcaster or translator: This is a repeater used to extend the coverage of a radio or television broadcasting station; it consists of a secondary television transmitter.
The signal from the main transmitter comes over leased telephone lines or by microwave relay. Microwave relay: This is a specialized point-to-point telecommunications link, consisting of a microwave receiver that receives information over a beam of microwaves from an
A network bridge is a computer networking device that creates a single aggregate network from multiple communication networks or network segments; this function is called network bridging. Bridging is distinct from routing: routing allows multiple networks to communicate independently and yet remain separate, whereas bridging connects two separate networks as if they were a single network. In the OSI model, bridging is performed in the data link layer. If one or more segments of the bridged network are wireless, the device is known as a wireless bridge. There are four main types of network bridging technologies: simple bridging, multiport bridging, learning or transparent bridging, and source route bridging. Transparent bridging uses a table called the forwarding information base to control the forwarding of frames between network segments; the table starts empty, and entries are added as the bridge receives frames. If a destination address entry is not found in the table, the frame is flooded to all other ports of the bridge, that is, to all segments except the one from which it was received.
By means of these flooded frames, a host on the destination network will respond and a forwarding database entry will be created. Both source and destination addresses are used in this process: source addresses are recorded in entries in the table, while destination addresses are looked up in the table and matched to the proper segment to send the frame to. Digital Equipment Corporation developed the technology in the 1980s. In the context of a two-port bridge, one can think of the forwarding information base as a filtering database. A bridge decides to either forward or filter. If the bridge determines that the destination host is on another segment on the network, it forwards the frame to that segment. If the destination address belongs to the same segment as the source address, the bridge filters the frame, preventing it from reaching the other network where it is not needed. Transparent bridging can operate over devices with more than two ports; as an example, consider a bridge connected to three hosts, A, B, C.
The bridge has three ports: A is connected to bridge port 1, B is connected to bridge port 2, and C is connected to bridge port 3. A sends a frame addressed to B to the bridge. The bridge examines the source address of the frame and creates an address and port number entry for A in its forwarding table. The bridge examines the destination address of the frame, does not find it in its forwarding table, and so floods it to all other ports: 2 and 3. The frame is received by hosts B and C. Host C ignores the frame, but host B recognizes a destination address match and generates a response to A. On the return path, the bridge adds an address and port number entry for B to its forwarding table. Since the bridge already has A's address in its forwarding table, it forwards the response only to port 1; host C or any other hosts on port 3 are not burdened with the response. Two-way communication is now possible between A and B without any further flooding of the network. A simple bridge connects two network segments by operating transparently and deciding on a frame-by-frame basis whether or not to forward from one network to the other.
A store and forward technique is used so that, as part of forwarding, the frame integrity is verified on the source network and CSMA/CD delays are accommodated on the destination network. In contrast to repeaters, which extend the maximum span of a segment, bridges only forward frames that are required to cross the bridge. Additionally, bridges reduce collisions by creating a separate collision domain on either side of the bridge. A multiport bridge connects multiple networks and operates transparently to decide on a frame-by-frame basis whether to forward traffic; additionally, a multiport bridge must decide where to forward the traffic. Like the simple bridge, a multiport bridge uses store and forward operation; the multiport bridge function serves as the basis for network switches. Initially, the forwarding information base, stored in content-addressable memory (CAM), is empty. For each received Ethernet frame, the switch learns from the frame's source MAC address and adds this, together with an ingress interface identifier, to the forwarding information base.
The switch forwards the frame to the interface found in the CAM based on the frame's destination MAC address. If the destination address is unknown the switch sends the frame out on all interfaces; this behaviour is called unicast flooding. Once a bridge learns the addresses of its connected nodes, it forwards data link layer frames using a layer 2 forwarding method. There are four forwarding methods a bridge can use, of which the second through fourth methods were performance-increasing methods when used on "switch" products with the same input and output port bandwidths: Store and forward: the switch buffers and verifies each frame before forwarding it. Cut through: the switch starts forwarding after the frame's destination address is received. There is no error checking with this method; when the outgoing port is busy at the time, the switch falls back to store-and-forward operation. When the egress port is running at a faster data rate than the ingress port, store-and-forward is used. Fragment free: a method that attempts to retain the benefits of both store and forward and cut through.
Fragment free checks the first 64 bytes of the frame. According to Ethernet specifications, collisions should be detected during the first 64 bytes of the frame, so frames that are in error because of a collision will not be forwarded.
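The transparent-bridge learning behaviour described above (learn the source, forward known destinations, flood unknown ones, filter same-segment traffic) can be sketched as a simplified model; real bridges also age out entries and handle broadcast and multicast addresses.

```python
class LearningBridge:
    """Simplified sketch of transparent-bridge frame forwarding."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}          # forwarding information base: MAC -> port

    def handle_frame(self, ingress, src, dst):
        """Return the list of ports the frame is sent out on."""
        self.table[src] = ingress                    # learn where src lives
        if dst in self.table:
            out = self.table[dst]
            return [] if out == ingress else [out]   # filter or forward
        return sorted(self.ports - {ingress})        # flood unknown dst

bridge = LearningBridge([1, 2, 3])
print(bridge.handle_frame(1, "A", "B"))  # [2, 3]  unknown destination: flood
print(bridge.handle_frame(2, "B", "A"))  # [1]     A was learned on port 1
print(bridge.handle_frame(1, "A", "B"))  # [2]     no further flooding needed
```

The three calls reproduce the A/B/C example from the text: the first frame is flooded, the reply is forwarded only to port 1, and subsequent traffic between A and B never reaches port 3.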
Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the application; they are said to form a peer-to-peer network of nodes. Peers make a portion of their resources, such as processing power, disk storage or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts. Peers are both suppliers and consumers of resources, in contrast to the traditional client-server model in which the consumption and supply of resources is divided. Emerging collaborative P2P systems are going beyond the era of peers doing similar things while sharing resources, and are looking for diverse peers that can bring unique resources and capabilities to a virtual community, thereby empowering it to engage in greater tasks beyond those that can be accomplished by individual peers, yet that are beneficial to all the peers.
While P2P systems had been used in many application domains, the architecture was popularized by the file sharing system Napster, released in 1999. The concept has inspired new philosophies in many areas of human interaction; in such social contexts, peer-to-peer as a meme refers to the egalitarian social networking that has emerged throughout society, enabled by Internet technologies in general. The peer-to-peer movement allowed millions of Internet users to connect "directly, forming groups and collaborating to become user-created search engines, virtual supercomputers, and filesystems." The basic concept of peer-to-peer computing was envisioned in earlier software systems and networking discussions, reaching back to principles stated in the first Request for Comments, RFC 1. Tim Berners-Lee's vision for the World Wide Web was close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked "web" of links.
The early Internet was more open than the present-day network: two machines connected to the Internet could send packets to each other without firewalls and other security measures. This contrasts with the broadcasting-like structure of the web. As a precursor to the Internet, ARPANET was a successful client-server network where "every participating node could request and serve content." However, ARPANET was not self-organized, and it lacked the ability to "provide any means for context or content-based routing beyond 'simple' address-based routing." Therefore, USENET, a distributed messaging system often described as an early peer-to-peer architecture, was established. It was developed in 1979. Its basic model is a client-server model from the user or client perspective that offers a self-organizing approach to newsgroup servers; however, news servers communicate with one another as peers to propagate Usenet news articles over the entire group of network servers. The same consideration applies to SMTP email, in the sense that the core email-relaying network of mail transfer agents has a peer-to-peer character, while the periphery of e-mail clients and their direct connections is a client-server relationship.
In May 1999, with millions more people on the Internet, Shawn Fanning introduced the music and file-sharing application called Napster. Napster was the beginning of peer-to-peer networks, as we know them today, where "participating users establish a virtual network independent from the physical network, without having to obey any administrative authorities or restrictions." A peer-to-peer network is designed around the notion of equal peer nodes functioning as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client–server model where communication is to and from a central server. A typical example of a file transfer that uses the client-server model is the File Transfer Protocol service in which the client and server programs are distinct: the clients initiate the transfer, the servers satisfy these requests. Peer-to-peer networks implement some form of virtual overlay network on top of the physical network topology, where the nodes in the overlay form a subset of the nodes in the physical network.
Data is still exchanged directly over the underlying TCP/IP network, but at the application layer peers are able to communicate with each other directly via the logical overlay links. Overlays are used for indexing and peer discovery, and they make the P2P system independent from the physical network topology. Based on how the nodes are linked to each other within the overlay network, and how resources are indexed and located, we can classify networks as unstructured or structured. Unstructured peer-to-peer networks do not impose a particular structure on the overlay network by design, but rather are formed by nodes that randomly form connections to each other. Because there is no structure globally imposed upon them, unstructured networks are easy to build and allow for localized optimizations to different regions of the overlay; and because the role of all peers in the network is the same, unstructured networks are robust in the face of high rates of "churn", that is, when large numbers of peers are joining and leaving the network.