Packet switching is a method of grouping data transmitted over a digital network into packets. Packets consist of a header and a payload. Data in the header are used by networking hardware to direct the packet to its destination, where the payload is extracted and used by application software. Packet switching is the primary basis for data communications in computer networks worldwide. In the early 1960s, American computer scientist Paul Baran developed the concept of distributed adaptive message block switching, with the goal of providing a fault-tolerant, efficient routing method for telecommunication messages, as part of a research program at the RAND Corporation funded by the US Department of Defense. This concept contrasted with and contradicted the then-established principles of pre-allocation of network bandwidth, entrenched by the development of telecommunications in the Bell System. The new concept found little resonance among network implementers until the independent work of British computer scientist Donald Davies at the National Physical Laboratory in 1965.
Davies is credited with coining the modern term packet switching and inspiring numerous packet switching networks in the decade that followed, including the incorporation of the concept into the early ARPANET in the United States. A simple definition of packet switching is: the routing and transferring of data by means of addressed packets so that a channel is occupied only during the transmission of the packet; upon completion of the transmission, the channel is made available for the transfer of other traffic. Packet switching allows delivery of variable bit rate data streams, realized as sequences of packets, over a computer network that allocates transmission resources as needed using statistical multiplexing or dynamic bandwidth allocation techniques. As packets traverse networking hardware, such as switches and routers, they are received, buffered, and retransmitted, resulting in variable latency and throughput depending on the link capacity and the traffic load on the network. Packets are typically forwarded by intermediate network nodes asynchronously using first-in, first-out buffering, but may instead be forwarded according to a scheduling discipline for fair queuing, traffic shaping, or differentiated or guaranteed quality of service, such as weighted fair queuing or the leaky bucket.
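The variable latency that store-and-forward buffering produces can be illustrated with a minimal sketch. This is not any real device's algorithm; the function name, the time units (seconds), and the single-link FIFO model are simplifying assumptions made for illustration:

```python
def fifo_latencies(arrivals, link_bps):
    """Store-and-forward FIFO link: each packet must wait for those ahead of it.

    arrivals: list of (arrival_time_s, size_bits), sorted by arrival time.
    Returns per-packet latency (queueing delay + transmission time), showing
    how latency varies with load even though the link capacity is fixed.
    """
    free_at = 0.0          # time at which the link finishes its current packet
    latencies = []
    for t, bits in arrivals:
        start = max(t, free_at)          # wait if the link is still busy
        free_at = start + bits / link_bps
        latencies.append(free_at - t)
    return latencies
```

For example, two 1000-bit packets arriving simultaneously on a 1000 bit/s link experience latencies of 1.0 s and 2.0 s respectively: the second packet queues behind the first, which is exactly the load-dependent delay described above.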
Packet-based communication may also be implemented without intermediate forwarding nodes; in the case of a shared physical medium, the packets may be delivered according to a multiple access scheme. Packet switching contrasts with another principal networking paradigm, circuit switching, a method that pre-allocates dedicated network bandwidth for each communication session, each having a constant bit rate and latency between nodes. In the case of billable services, such as cellular communication services, circuit switching is characterized by a fee per unit of connection time, even when no data is transferred, while packet switching may be characterized by a fee per unit of information transmitted, such as characters, packets, or messages. The concept of switching small blocks of data was first explored independently by Paul Baran at the RAND Corporation starting in the late 1950s in the US and by Donald Davies at the National Physical Laboratory in the UK. In the late 1950s, the US Air Force established a wide area network for the Semi-Automatic Ground Environment radar defense system.
The Air Force sought a system that might survive a nuclear attack to enable a response, thus diminishing the attractiveness of the first-strike advantage to an enemy. Baran developed the concept of distributed adaptive message block switching in support of the Air Force initiative. The concept was first presented to the Air Force in the summer of 1961 as briefing B-265, then published as RAND report P-2626 in 1962 and in report RM 3420 in 1964. Report P-2626 described a general architecture for a large-scale, survivable communications network. The work focuses on three key ideas: the use of a decentralized network with multiple paths between any two points, the division of user messages into message blocks, and the delivery of these messages by store-and-forward switching. Davies developed a similar message routing concept in 1965; he called it packet switching and proposed building a nationwide network in the UK. He gave a talk on the proposal in 1966, after which a person from the Ministry of Defence told him about Baran's work. Roger Scantlebury, a member of Davies' team, met Lawrence Roberts at the 1967 ACM Symposium on Operating Systems Principles and suggested packet switching for use in the ARPANET.
Davies had chosen some of the same parameters for his original network design as Baran did, such as a packet size of 1024 bits. In 1966, Davies proposed that a network should be built at the laboratory to serve the needs of NPL and prove the feasibility of packet switching. After a pilot experiment in 1967, the NPL Data Communications Network entered service in 1969. Leonard Kleinrock conducted early research in queueing theory for his doctoral dissertation at MIT in 1961–62 and published it as a book in 1964 in the field of digital message switching. Following the 1967 ACM Symposium, Lawrence Roberts asked Kleinrock to carry out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET; the NPL team also carried out simulation work on packet networks. The French CYCLADES network, designed by Louis Pouzin in the early 1970s, was the first to employ what came to be known as the end-to-end principle, making the hosts responsible for the reliable delivery of data on a packet-switched network rather than making this a centralized service of the network itself.
IP fragmentation is an Internet Protocol process that breaks packets into smaller pieces so that the resulting pieces can pass through a link with a smaller maximum transmission unit (MTU) than the original packet size. The fragments are reassembled by the receiving host. RFC 791 describes the procedure for IP fragmentation and for the transmission and reassembly of IP packets; RFC 815 describes a simplified reassembly algorithm. The Identification field, together with the foreign and local internet addresses and the protocol ID, and the Fragment Offset field, together with the Don't Fragment and More Fragments flags, in the IP protocol header are used for the fragmentation and reassembly of IP packets. If a receiving host receives a fragmented IP packet, it has to reassemble the packet and pass it to the higher protocol layer. Reassembly is intended to happen in the receiving host, but in practice it may be done by an intermediate router; for example, a router performing network address translation may need to reassemble fragments in order to translate data streams.
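The reassembly side can be sketched as follows. This is a simplified illustration in the spirit of RFC 815, not its actual algorithm: the function name is invented, fragments are assumed to be already grouped by the (source, destination, protocol, Identification) key described above, and overlapping fragments are not handled:

```python
def reassemble(fragments):
    """Reassemble one IP payload from its fragments (simplified sketch).

    fragments: iterable of (offset_in_8_byte_units, more_fragments, data_bytes),
    in any order.  The fragment with more_fragments == False fixes the total
    payload size.  Returns the payload, or None if any piece is still missing.
    Assumes non-overlapping fragments.
    """
    total_len = None
    buf = {}
    for off8, more, data in fragments:
        buf[off8 * 8] = data              # byte offset = offset field * 8
        if not more:                      # the last fragment ends the payload
            total_len = off8 * 8 + len(data)
    if total_len is None:                 # last fragment not yet received
        return None
    out = bytearray(total_len)
    covered = 0
    for off, data in buf.items():
        out[off:off + len(data)] = data
        covered += len(data)
    return bytes(out) if covered == total_len else None
```

Fragments may arrive in any order, which is why the sketch indexes by byte offset rather than arrival order.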
The details of the fragmentation mechanism, as well as the overall architectural approach to fragmentation, differ between IPv4 and IPv6. Under IPv4, a router that receives a network packet larger than the next hop's MTU has two options: drop the packet and send an Internet Control Message Protocol (ICMP) message indicating that fragmentation was needed, or fragment the packet and send it over the link with the smaller MTU. Although originators may produce fragmented packets, IPv6 routers do not have the option to fragment further. Instead, network equipment is required to deliver any IPv6 packets or packet fragments smaller than or equal to 1280 bytes, and IPv6 hosts are required to determine the optimal MTU through Path MTU Discovery before sending packets. Though the header formats are different for IPv4 and IPv6, analogous fields are used for fragmentation, so the same algorithm can be reused for IPv4 and IPv6 fragmentation and reassembly. In IPv4, hosts must make a best-effort attempt to reassemble fragmented IP packets with a total reassembled size of up to 576 bytes.
They may attempt to reassemble fragmented IP packets larger than 576 bytes, but they are also permitted to silently discard such larger packets. Applications are recommended to refrain from sending packets larger than 576 bytes unless they have prior knowledge that the remote host is capable of accepting or reassembling them. In IPv6, hosts must make a best-effort attempt to reassemble fragmented packets with a total reassembled size of up to 1500 bytes, larger than IPv6's minimum MTU of 1280 bytes. Fragmented packets with a total reassembled size larger than 1500 bytes may optionally be silently discarded. Applications relying upon IPv6 fragmentation to overcome a path MTU limitation must explicitly fragment the packet at the point of origin. When a network has multiple parallel paths, technologies like link aggregation (LAG) and Cisco Express Forwarding (CEF) split traffic across the paths according to a hash algorithm. One goal of the algorithm is to ensure that all packets of the same flow are sent out over the same path to minimize unnecessary packet reordering.
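The flow-hashing idea can be sketched as hashing a flow's 5-tuple and reducing it modulo the number of available paths. The function name is invented, and real devices use cheap hardware hash functions rather than SHA-1, which is used here purely for illustration:

```python
import hashlib

def path_for_flow(src_ip, dst_ip, proto, src_port, dst_port, n_paths):
    """Map a flow's 5-tuple to one of n_paths output links.

    Every packet of the same flow produces the same hash, so the flow
    always takes the same path and its packets are not reordered,
    while different flows spread statistically across all paths.
    """
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha1(key).digest()
    return int.from_bytes(digest[:4], "big") % n_paths
```

Because the mapping is deterministic, repeated calls for the same flow always return the same path index, which is precisely the reordering-avoidance property described above.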
IP fragmentation can cause excessive retransmissions when fragments encounter packet loss, because reliable protocols such as TCP must retransmit all of the fragments in order to recover from the loss of a single fragment. Thus, senders typically use two approaches to decide the size of IP packets to send over the network. The first is for the sending host to send an IP packet of size equal to the MTU of the first hop of the source-destination pair. The second is to run the Path MTU Discovery algorithm to determine the path MTU between the two IP hosts, so that IP fragmentation can be avoided.
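The IPv4 fragment arithmetic from RFC 791 can be sketched as follows. The function name is invented, and a 20-byte header (no IP options) is assumed; the key constraint is that every fragment except the last must carry a multiple of 8 bytes of data, since the Fragment Offset field counts in 8-byte units:

```python
def fragment(payload_len, mtu, header_len=20):
    """Compute IPv4 fragments for a payload crossing a link with the given MTU.

    Returns a list of (fragment_offset_in_8_byte_units, data_len, more_fragments)
    tuples, one per fragment.  Assumes a 20-byte header with no options.
    """
    assert mtu >= header_len + 8, "MTU too small to carry any fragment data"
    # Data per fragment, rounded down to a multiple of 8 bytes.
    max_data = (mtu - header_len) // 8 * 8
    frags, offset = [], 0
    while offset < payload_len:
        chunk = min(max_data, payload_len - offset)
        more = offset + chunk < payload_len   # MF flag: set on all but the last
        frags.append((offset // 8, chunk, more))
        offset += chunk
    return frags
```

For instance, a 4000-byte payload crossing a 1500-byte MTU link splits into fragments of 1480, 1480, and 1040 data bytes at offsets 0, 185, and 370 (in 8-byte units), with the More Fragments flag clear only on the last.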
IEEE 802.11 is part of the IEEE 802 set of LAN protocols; it specifies the set of media access control (MAC) and physical layer protocols for implementing wireless local area network (Wi-Fi) computer communication in various frequencies, including but not limited to the 2.4, 5, and 60 GHz frequency bands. They are the world's most widely used wireless computer networking standards, used in most home and office networks to allow laptops and smartphones to talk to each other and access the Internet without connecting wires. They are created and maintained by the Institute of Electrical and Electronics Engineers (IEEE) LAN/MAN Standards Committee. The base version of the standard was released in 1997 and has had subsequent amendments. The standard and amendments provide the basis for wireless network products using the Wi-Fi brand. While each amendment is revoked when it is incorporated into the latest version of the standard, the corporate world tends to market to the revisions because they concisely denote the capabilities of their products.
As a result, in the marketplace, each revision tends to become its own standard. The protocols are used in conjunction with IEEE 802.2, are designed to interwork seamlessly with Ethernet, and are often used to carry Internet Protocol traffic. Although IEEE 802.11 specifications list channels that might be used, the allowed radio frequency spectrum availability varies by regulatory domain. The 802.11 family consists of a series of half-duplex over-the-air modulation techniques that use the same basic protocol. The 802.11 protocol family employs carrier-sense multiple access with collision avoidance, whereby equipment listens to a channel for other users before transmitting each packet. 802.11-1997 was the first wireless networking standard in the family, but 802.11b was the first widely accepted one, followed by 802.11a, 802.11g, 802.11n, and 802.11ac. Other standards in the family are service amendments that are used to extend the current scope of the existing standard, which may include corrections to a previous specification. 802.11b and 802.11g use the 2.4 GHz ISM band, operating in the United States under Part 15 of the U.S. Federal Communications Commission Rules and Regulations. Because of this choice of frequency band, 802.11b/g/n equipment may suffer interference in the 2.4 GHz band from microwave ovens, cordless telephones, Bluetooth devices, and the like. 802.11b and 802.11g control their interference and susceptibility to interference by using direct-sequence spread spectrum and orthogonal frequency-division multiplexing signaling methods, respectively. 802.11a uses the 5 GHz U-NII band which, for much of the world, offers at least 23 non-overlapping 20 MHz-wide channels, whereas the 2.4 GHz ISM frequency band offers only three non-overlapping 20 MHz-wide channels, with other adjacent channels overlapping (see the list of WLAN channels). Better or worse performance with higher or lower frequencies may be realized, depending on the environment. 802.11n can use either the 2.4 GHz or the 5 GHz band. The segment of the radio frequency spectrum used by 802.11 varies between countries. In the US, 802.11a and 802.11g devices may be operated without a license, as allowed under Part 15 of the FCC Rules and Regulations.
Frequencies used by channels one through six of 802.11b and 802.11g fall within the 2.4 GHz amateur radio band. Licensed amateur radio operators may operate 802.11b/g devices under Part 97 of the FCC Rules and Regulations, allowing increased power output but not commercial content or encryption. 802.11 technology has its origins in a 1985 ruling by the U.S. Federal Communications Commission that released the ISM band for unlicensed use. In 1991, NCR Corporation/AT&T invented a precursor to 802.11 in the Netherlands. The inventors intended to use the technology for cashier systems. The first wireless products were brought to market under the name WaveLAN with raw data rates of 1 Mbit/s and 2 Mbit/s. Vic Hayes, who held the chair of IEEE 802.11 for 10 years and has been called the "father of Wi-Fi", was involved in designing the initial 802.11b and 802.11a standards within the IEEE. In 1999, the Wi-Fi Alliance was formed as a trade association to hold the Wi-Fi trademark under which most products are sold.
The major commercial breakthrough came with Apple Inc. adopting Wi-Fi for its iBook series of laptops in 1999. It was the first mass consumer product to offer Wi-Fi network connectivity, branded by Apple as AirPort. One year later, IBM followed with its ThinkPad 1300 series in 2000. The original version of the standard, IEEE 802.11, was released in 1997 and clarified in 1999, but is now obsolete. It specified two net bit rates of 1 or 2 megabits per second, plus forward error correction code, and it specified three alternative physical layer technologies: diffuse infrared operating at 1 Mbit/s, frequency-hopping spread spectrum, and direct-sequence spread spectrum. The latter two radio technologies used microwave transmission over the Industrial Scientific Medical frequency band at 2.4 GHz. Some earlier WLAN technologies used lower frequencies, such as the U.S. 900 MHz ISM band. Legacy 802.11 with direct-sequence spread spectrum was supplanted and popularized by 802.11b. 802.11a, published in 1999, uses the same data link layer protocol and frame format as the original standard, but an OFDM-based air interface.
It operates in the 5 GHz band with a maximum net data rate of 54 Mbit/s, plus error correction code, which yields realistic net achievable throughput in the mid-20 Mbit/s range.
In digital communications, a chip is a pulse of a direct-sequence spread spectrum (DSSS) code, such as a pseudo-random noise (PN) code sequence used in direct-sequence code division multiple access (CDMA) channel access techniques. In a binary direct-sequence system, each chip is a rectangular pulse of +1 or −1 amplitude, which is multiplied by a data sequence and by a carrier waveform to make the transmitted signal; the chips are therefore just the bit sequence out of the code generator. The chip rate of a code is the number of pulses per second. The chip rate is larger than the symbol rate, meaning that one symbol is represented by multiple chips. The ratio is known as the spreading factor (SF) or processing gain:

SF = chip rate / symbol rate

Orthogonal variable spreading factor (OVSF) is an implementation of code division multiple access where, before each signal is transmitted, it is spread over a wide spectrum range through the use of a user's code. Users' codes are chosen to be mutually orthogonal to each other. These codes are derived from an OVSF code tree, and each user is given a different code.
An OVSF code tree is a complete binary tree.
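The OVSF construction can be sketched in a few lines: each code c in the tree has two children, the concatenations (c, c) and (c, −c), and all codes at the same depth are mutually orthogonal. The function name is invented for illustration:

```python
def ovsf_codes(sf):
    """Generate all OVSF codes with spreading factor sf (a power of two).

    Starting from the root code [1], each code c produces the two
    children c+c and c+(-c).  Every pair of distinct codes at the same
    tree level has a zero inner product, i.e. they are orthogonal.
    """
    codes = [[1]]
    while len(codes[0]) < sf:
        codes = [child for c in codes
                 for child in (c + c, c + [-x for x in c])]
    return codes
```

For sf = 4 this yields the four length-4 codes [1,1,1,1], [1,1,−1,−1], [1,−1,1,−1], and [1,−1,−1,1]; multiplying any two distinct codes element-wise and summing gives zero, which is the orthogonality property that lets receivers separate users' signals.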