Stream Control Transmission Protocol
The Stream Control Transmission Protocol is a computer networking communications protocol which operates at the transport layer and serves a role similar to the popular protocols TCP and UDP. It is standardized by the IETF in RFC 4960. SCTP provides some of the features of both UDP and TCP: it is message-oriented like UDP, and it ensures reliable, in-sequence transport of messages with congestion control like TCP. It differs from those protocols by providing multi-homing and redundant paths to increase resilience and reliability. In the absence of native SCTP support in operating systems, it is possible to tunnel SCTP over UDP, as well as to map TCP API calls to SCTP calls so existing applications can use SCTP without modification. The reference implementation was released as part of FreeBSD version 7 and has since been ported to other platforms. The IETF Signaling Transport working group defined the protocol in 2000, and the IETF Transport Area working group now maintains it. RFC 4960 defines the protocol; RFC 3286 provides an introduction.
SCTP applications submit their data to be transmitted in messages to the SCTP transport layer. SCTP places messages and control information into separate chunks, each identified by a chunk header. The protocol can fragment a message into a number of data chunks, but each data chunk contains data from only one user message. SCTP bundles the chunks into SCTP packets. The SCTP packet, submitted to the Internet Protocol, consists of a packet header, SCTP control chunks, followed by SCTP data chunks. SCTP can be characterized as message-oriented, meaning it transports a sequence of messages rather than an unbroken stream of bytes as TCP does. As in UDP, a sender using SCTP sends a message in one operation, and that exact message is passed to the receiving application process in one operation. In contrast, TCP is a stream-oriented protocol; it does not allow the receiver to know how many times the sender application called on the TCP transport passing it groups of bytes to be sent out.
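A minimal sketch of what this message orientation looks like from an application's point of view, assuming a host whose kernel provides SCTP support (for example, Linux with the sctp module loaded) and a peer listening at a placeholder address; the SOCK_STREAM/IPPROTO_SCTP combination is the one-to-one socket style of RFC 6458:

```python
# Hedged sketch: a one-to-one SCTP socket using Python's standard socket module.
# Requires OS-level SCTP support; the address and port below are placeholders.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
sock.connect(("203.0.113.10", 9999))

# Each send() hands SCTP one user message; SCTP may split it into several data
# chunks, but the peer still receives it as a single message. With plain TCP,
# the same two sends could arrive as one byte run or be split arbitrarily.
sock.send(b"first user message")
sock.send(b"second user message")
sock.close()
```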
At the sender, TCP simply appends more bytes to a queue of bytes waiting to go out over the network, rather than keeping a queue of individual, separate outbound messages which must be preserved as such. The term multi-streaming refers to the capability of SCTP to transmit several independent streams of chunks in parallel, for example transmitting web page images together with the web page text. In essence, it involves bundling several connections into a single SCTP association, operating on messages rather than bytes. TCP preserves byte order in the stream by including a byte sequence number with each segment. SCTP, on the other hand, assigns a message-id to each message sent in a stream; this allows independent ordering of messages in different streams. However, message ordering is optional in SCTP. Features of SCTP include: reliable transmission of both ordered and unordered data streams; multihoming support, in which one or both endpoints of a connection can consist of more than one IP address, enabling transparent fail-over between redundant network paths; delivery of chunks within independent streams, which eliminates unnecessary head-of-line blocking, in contrast to TCP byte-stream delivery; explicit partial reliability; path selection and monitoring to select a primary data transmission path and test the connectivity of the transmission path; validation and acknowledgment mechanisms that protect against flooding attacks and provide notification of duplicated or missing data chunks; and improved error detection suitable for Ethernet jumbo frames. The designers of SCTP intended it for the transport of telephony signaling over Internet Protocol, with the goal of duplicating some of the reliability attributes of the SS7 signaling network in IP. This IETF effort is known as SIGTRAN. In the meantime, other uses have been proposed, for example the Diameter protocol and Reliable Server Pooling. TCP has provided the primary means to transfer data reliably across the Internet. However, TCP has imposed limitations on several applications. From RFC 4960: TCP provides both reliable data transfer and strict order-of-transmission delivery of data.
Some applications need reliable transfer without sequence maintenance, while others would be satisfied with partial ordering of the data. In both of these cases, the head-of-line blocking property of TCP causes unnecessary delay. For applications exchanging distinct records or messages, the stream-oriented nature of TCP requires the addition of explicit markers or other encoding to delineate the individual records. To avoid sending many small IP packets where one larger packet would have sufficed, a TCP implementation may delay transmitting data while waiting for more data to be queued by the application. When such a small delay is undesirable, the application must explicitly request undelayed transmission on a case-by-case basis using the push facility. SCTP, on the other hand, allows undelayed transmission to be configured as a default for an association, eliminating any undesired delays, but at the cost of higher transfer overhead. The limited scope of TCP sockets also complicates the task of providing highly available data transfer capability using multi-homed hosts.
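As an illustration of the per-connection "push"/undelayed behaviour described above, the sketch below disables small-packet coalescing (Nagle's algorithm) on a single TCP socket; the endpoint address is a placeholder. SCTP implementations typically expose an equivalent option that can be set once for the whole association rather than requested per write.

```python
# Illustrative sketch: requesting undelayed transmission on one TCP connection
# by disabling Nagle's algorithm. The server address below is a placeholder.
import socket

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("198.51.100.7", 8080))
tcp.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # no coalescing delay
tcp.sendall(b"small record")  # sent without waiting for more queued data
tcp.close()
```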
TCP is also vulnerable to denial-of-service attacks, such as SYN attacks. SCTP adoption has been slowed by lack of awareness and by a lack of implementations (particularly in Microsoft Windows).
Internet protocol suite
The Internet protocol suite is the conceptual model and set of communications protocols used in the Internet and similar computer networks. It is known as TCP/IP because the foundational protocols in the suite are the Transmission Control Protocol and the Internet Protocol. It is also known as the Department of Defense model because the development of the networking method was funded by the United States Department of Defense through DARPA. The Internet protocol suite provides end-to-end data communication specifying how data should be packetized, transmitted and received. This functionality is organized into four abstraction layers, which classify all related protocols according to the scope of networking involved. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment; the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications. The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force.
The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems. The Internet protocol suite resulted from research and development conducted by the Defense Advanced Research Projects Agency in the late 1960s. After initiating the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf, who had helped develop the existing ARPANET Network Control Program protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET. By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation in which the differences between local network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability as in the ARPANET, this function was delegated to the hosts.
Cerf credits Hubert Zimmermann and Louis Pouzin, designer of the CYCLADES network, with important influences on this design. The protocol was implemented as the Transmission Control Program, first published in 1974; the TCP managed both datagram transmissions and routing, but as the protocol grew, other researchers recommended a division of functionality into protocol layers. Advocates included Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments, the technical and strategic document series that has both documented and catalyzed Internet development. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would lead to scalability issues; the Transmission Control Program was split into two distinct protocols, the Transmission Control Protocol and the Internet Protocol.
The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This design is known as the end-to-end principle. Using this design, it became possible to connect any network to the ARPANET, irrespective of the local characteristics, thereby solving Kahn's initial internetworking problem. One popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, can run over "two tin cans and a string." Years later, as a joke, the IP over Avian Carriers formal protocol specification was created and tested. A computer called a router is provided with an interface to each network; it forwards network packets back and forth between them. A router was originally called a gateway, but the term was changed to avoid confusion with other types of gateways. From 1973 to 1974, Cerf's networking research group at Stanford worked out details of the idea, resulting in the first TCP specification.
A significant technical influence was the early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which existed around that time. DARPA contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, TCP v3 and IP v3, and TCP/IP v4; the last protocol is still in use today. In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London. In November 1977, a three-network TCP/IP test was conducted between sites in the US, the UK, and Norway. Several other TCP/IP prototypes were developed at multiple research centers between 1978 and 1983. In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In the same year, Peter T. Kirstein's research group at University College London adopted the protocol. The migration of the ARPANET to TCP/IP was completed on flag day, January 1, 1983, when the new protocols were permanently activated.
In 1985, the Internet Advisory Board held a three-day TCP/IP workshop for the computer industry, promoting the protocol and leading to its increasing commercial use.
Session Initiation Protocol
The Session Initiation Protocol is a signaling protocol used for initiating and terminating real-time sessions that include voice and messaging applications. SIP is used for signaling and controlling multimedia communication sessions in applications of Internet telephony for voice and video calls, in private IP telephone systems, in instant messaging over Internet Protocol networks as well as mobile phone calling over LTE; the protocol defines the specific format of messages exchanged and the sequence of communications for cooperation of the participants. SIP is a text-based protocol, incorporating many elements of the Hypertext Transfer Protocol and the Simple Mail Transfer Protocol. A call established with SIP may consist of multiple media streams, but no separate streams are required for applications, such as text messaging, that exchange data as payload in the SIP message. SIP works in conjunction with several other protocols that carry the session media. Most media type and parameter negotiation and media setup is performed with the Session Description Protocol, carried as payload in SIP messages.
SIP is designed to be independent of the underlying transport layer protocol and can be used with the User Datagram Protocol, the Transmission Control Protocol, and the Stream Control Transmission Protocol. For secure transmission of SIP messages over insecure network links, the protocol may be encrypted with Transport Layer Security. For the transmission of media streams, the SDP payload carried in SIP messages employs the Real-time Transport Protocol or the Secure Real-time Transport Protocol. SIP was designed by Mark Handley, Henning Schulzrinne, Eve Schooler and Jonathan Rosenberg in 1996; the protocol was standardized as RFC 2543 in 1999. In November 2000, SIP was accepted as a 3GPP signaling protocol and a permanent element of the IP Multimedia Subsystem architecture for IP-based streaming multimedia services in cellular networks. In June 2002 the specification was revised in RFC 3261, and various extensions and clarifications have been published since. SIP was designed to provide a signaling and call setup protocol for IP-based communications supporting the call processing functions and features present in the public switched telephone network, with a vision of supporting new multimedia applications.
It has been extended for video conferencing, streaming media distribution, instant messaging, presence information, file transfer, Internet fax and online games. SIP is distinguished by its proponents for having roots in the Internet community rather than in the telecommunications industry. SIP has been standardized by the IETF, while other protocols, such as H.323, have traditionally been associated with the International Telecommunication Union. SIP is involved only in the signaling operations of a media communication session and is used to set up and terminate voice or video calls. SIP can be used to establish multiparty sessions, and it allows modification of existing calls. The modification can involve changing addresses or ports, inviting more participants, and adding or deleting media streams. SIP has also found applications in messaging, such as instant messaging, and in event subscription and notification. SIP works in conjunction with several other protocols that specify the media format and coding and that carry the media once the call is set up.
For call setup, the body of a SIP message contains a Session Description Protocol data unit, which specifies the media format and media communication protocol. Voice and video media streams are carried between the terminals using the Real-time Transport Protocol or the Secure Real-time Transport Protocol. Every resource of a SIP network, such as user agents, call routers, and voicemail boxes, is identified by a Uniform Resource Identifier. The syntax of the URI follows the general standard syntax used in Web services and e-mail. The URI scheme used for SIP is sip, and a typical SIP URI has the form sip:username@domainname or sip:username@hostport, where domainname requires DNS SRV records to locate the servers for the SIP domain, while hostport can be an IP address or a fully qualified domain name of the host and port. If secure transmission is required, the scheme sips is used. SIP employs design elements similar to the HTTP request/response transaction model; each transaction consists of a client request that invokes a particular method or function on the server and at least one response.
SIP reuses most of the header fields, encoding rules and status codes of HTTP, providing a readable text-based format. SIP can be carried by several transport layer protocols, including the Transmission Control Protocol, the User Datagram Protocol, and the Stream Control Transmission Protocol. SIP clients use TCP or UDP on port numbers 5060 or 5061 for SIP traffic to servers and other endpoints. Port 5060 is used for non-encrypted signaling traffic, whereas port 5061 is used for traffic encrypted with Transport Layer Security. SIP-based telephony networks implement call processing features of Signaling System 7, for which special SIP protocol extensions exist, although the two protocols themselves are different. SS7 is a centralized protocol, characterized by a complex central network architecture and dumb endpoints. SIP is a client-server protocol of equipotent peers. SIP features are implemented in the communicating endpoints, while the traditional SS7 architecture is in use only between switching centers. The network elements that use the Session Initiation Protocol for communication are called SIP user agents.
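To make the text-based, HTTP-like structure concrete, here is an illustrative SIP INVITE assembled in Python; all addresses, tags, and identifiers are placeholders, and a real user agent would generate these values itself and run the full RFC 3261 transaction state machine.

```python
# Illustrative only: the rough shape of a SIP request. All values are placeholders.
invite = "\r\n".join([
    "INVITE sip:bob@example.com SIP/2.0",
    "Via: SIP/2.0/UDP client.example.org:5060;branch=z9hG4bK776asdhds",
    "Max-Forwards: 70",
    "From: Alice <sip:alice@example.org>;tag=1928301774",
    "To: Bob <sip:bob@example.com>",
    "Call-ID: a84b4c76e66710@client.example.org",
    "CSeq: 1 INVITE",
    "Contact: <sip:alice@client.example.org>",
    "Content-Length: 0",  # a real INVITE would carry an SDP body and a matching length
    "",
    "",
])
print(invite)
```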
Frame Relay
Frame Relay is a standardized wide area network technology that specifies the physical and data link layers of digital telecommunications channels using a packet switching methodology. Designed for transport across Integrated Services Digital Network infrastructure, it may be used today in the context of many other network interfaces. Network providers implement Frame Relay for voice and data as an encapsulation technique used between local area networks over a wide area network. Each end-user gets a private line to a Frame Relay node. The Frame Relay network handles the transmission over a frequently changing path transparently to the end-user's widely used WAN protocols. It is less expensive than leased lines, which is one reason for its popularity. The extreme simplicity of configuring user equipment in a Frame Relay network offers another reason for its popularity. With the advent of Ethernet over fiber optics, MPLS, VPN and dedicated broadband services such as cable modem and DSL, the end may loom for the Frame Relay protocol and encapsulation.
However, many rural areas still lack cable modem services. In such cases, the least expensive type of non-dial-up connection remains a 64 kbit/s Frame Relay line. Thus a retail chain, for instance, may use Frame Relay to connect rural stores into its corporate WAN. The designers of Frame Relay aimed to provide a telecommunication service for cost-efficient data transmission for intermittent traffic between local area networks and between end-points in a wide area network. Frame Relay puts data in variable-size units called "frames" and leaves any necessary error correction up to the end-points; this speeds up overall data transmission. For most services, the network provides a permanent virtual circuit, which means that the customer sees a continuous, dedicated connection without having to pay for a full-time leased line, while the service provider figures out the route each frame travels to its destination and can charge based on usage. An enterprise can select a level of service quality, prioritizing some frames and making others less important.
Frame Relay can run on full T-carrier system carriers. It complements and provides a mid-range service between basic rate ISDN, which offers bandwidth at 128 kbit/s, and Asynchronous Transfer Mode, which operates in somewhat similar fashion to Frame Relay but at speeds from 155.520 Mbit/s to 622.080 Mbit/s. Frame Relay has its technical base in the older X.25 packet-switching technology, designed for transmitting data on analog voice lines. Unlike X.25, whose designers expected analog signals with a high chance of transmission errors, Frame Relay is a fast packet switching technology operating over links with a low chance of transmission errors, which means that the protocol does not attempt to correct errors. When a Frame Relay network detects an error in a frame, it simply drops that frame; the end points have the responsibility for detecting and retransmitting dropped frames. Frame Relay serves to connect local area networks with major backbones, as well as on public wide-area networks and in private network environments with leased lines over T-1 lines.
It requires a dedicated connection during the transmission period. Frame Relay does not provide an ideal path for voice or video transmission, both of which require a steady flow of transmissions. However, under certain circumstances, voice and video transmission do use Frame Relay. Frame Relay originated as an extension of integrated services digital network; its designers aimed to enable a packet-switched network to transport over circuit-switched technology. The technology has become a stand-alone and cost-effective means of creating a WAN. Frame Relay switches create virtual circuits to connect remote LANs to a WAN. The Frame Relay network exists between a LAN border device, usually a router, and the carrier switch. The technology used by the carrier to transport data between the switches is variable and may differ among carriers. The sophistication of the technology requires a thorough understanding of the terms used to describe how Frame Relay works. Without a firm understanding of Frame Relay, it is difficult to troubleshoot its performance.
Frame-relay frame structure mirrors exactly that defined for LAP-D. Traffic analysis can distinguish Frame Relay format from LAP-D by its lack of a control field; each Frame Relay protocol data unit consists of the following fields: Flag Field. The flag is used to perform high-level data link synchronization which indicates the beginning and end of the frame with the unique pattern 01111110. To ensure that the 01111110 pattern does not appear somewhere inside the frame, bit stuffing and destuffing procedures are used. Address Field; each address field may occupy either octet 2 to 3, octet 2 to 4, or octet 2 to 5, depending on the range of the address in use. A two-octet address field comprises the EA=ADDRESS FIELD EXTENSION BITS and the C/R=COMMAND/RESPONSE BIT. DLCI-Data Link Connection Identifier Bits; the DLCI serves to identify the virtual connection so that the receiving end knows which information connection a frame belongs to. Note that this DLCI has only local significance. A single physical channel can multiplex several different virtual connections.
FECN, BECN, DE bits. These bits report congestion: FECN = Forward Explicit Congestion Notification bit, BECN = Backward Explicit Congestion Notification bit, DE = Discard Eligibility bit.
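A small sketch of the bit stuffing described above, assuming a frame body is represented as a list of bits: after five consecutive 1 bits, a 0 is inserted so the flag pattern 01111110 can never appear inside the frame body.

```python
# Minimal sketch of HDLC-style bit stuffing as used by Frame Relay framing.
def bit_stuff(bits):
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:          # five 1s in a row: stuff a 0
            out.append(0)
            run = 0
    return out

# Six 1s in the payload never appear back-to-back on the wire:
print(bit_stuff([0, 1, 1, 1, 1, 1, 1, 0]))  # [0, 1, 1, 1, 1, 1, 0, 1, 0]
```

The receiver reverses the process, removing any 0 that follows five consecutive 1 bits, which is the destuffing procedure the text refers to.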
Hypertext Transfer Protocol
The Hypertext Transfer Protocol is an application protocol for distributed hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can access, for example by a mouse click or by tapping the screen in a web browser. HTTP was developed to facilitate the World Wide Web. Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Development of HTTP standards was coordinated by the Internet Engineering Task Force and the World Wide Web Consortium, culminating in the publication of a series of Requests for Comments. The first definition of HTTP/1.1, the version of HTTP in common use, occurred in RFC 2068 in 1997, although this was made obsolete by RFC 2616 in 1999 and again by the RFC 7230 family of RFCs in 2014. A later version, the successor HTTP/2, was standardized in 2015 and is now supported by major web servers and browsers over Transport Layer Security using the Application-Layer Protocol Negotiation extension, where TLS 1.2 or newer is required.
HTTP functions as a request–response protocol in the client–server computing model. A web browser, for example, may be the client and an application running on a computer hosting a website may be the server; the client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content, or performs other functions on behalf of the client, returns a response message to the client; the response contains completion status information about the request and may contain requested content in its message body. A web browser is an example of a user agent. Other types of user agent include the indexing software used by search providers, voice browsers, mobile apps, other software that accesses, consumes, or displays web content. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites benefit from web cache servers that deliver content on behalf of upstream servers to improve response time.
Web browsers cache accessed web resources and reuse them, when possible, to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers. HTTP is an application layer protocol designed within the framework of the Internet protocol suite. Its definition presumes an underlying and reliable transport layer protocol, and the Transmission Control Protocol is commonly used. However, HTTP can be adapted to use unreliable protocols such as the User Datagram Protocol, for example in HTTPU and the Simple Service Discovery Protocol. HTTP resources are identified and located on the network by Uniform Resource Locators, using the Uniform Resource Identifier schemes http and https. URIs and hyperlinks in HTML documents form interlinked hypertext documents. HTTP/1.1 is a revision of the original HTTP. In HTTP/1.0 a separate connection to the same server is made for every resource request. HTTP/1.1 can reuse a connection multiple times, to download images, stylesheets, and other resources after the page has been delivered.
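A brief sketch of that connection reuse with Python's standard http.client module; the host and paths are placeholders. Both requests travel over one TCP connection, so only a single TCP handshake is paid.

```python
# Sketch of HTTP/1.1 persistent connections: two requests over one TCP connection.
import http.client

conn = http.client.HTTPConnection("www.example.com")  # speaks HTTP/1.1 by default

conn.request("GET", "/index.html")
page = conn.getresponse()
print(page.status, page.reason)
page.read()                        # drain the body so the connection can be reused

conn.request("GET", "/style.css")  # reuses the same underlying TCP connection
asset = conn.getresponse()
print(asset.status, len(asset.read()))

conn.close()
```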
HTTP/1.1 communications therefore experience less latency, as the establishment of TCP connections presents considerable overhead. The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a text-based web browser. Berners-Lee first proposed the "WorldWideWeb" project in 1989, now known as the World Wide Web. The first version of the protocol had only one method, namely GET, which would request a page from a server. The response from the server was always an HTML page. The first documented version of HTTP was HTTP/0.9. Dave Raggett led the HTTP Working Group in 1995 and wanted to expand the protocol with extended operations, extended negotiation, and richer meta-information, tied with a security protocol and made more efficient by adding additional methods and header fields.
RFC 1945 introduced and recognized HTTP/1.0 in 1996. The HTTP WG planned to publish new standards in December 1995, and support for pre-standard HTTP/1.1, based on the developing RFC 2068, was adopted by the major browser developers in early 1996. By March that year, pre-standard HTTP/1.1 was supported in Arena, Netscape 2.0, Netscape Navigator Gold 2.01, Mosaic 2.7, Lynx 2.5, and Internet Explorer 2.0. End-user adoption of the new browsers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet were HTTP/1.1 compliant. That same web hosting company reported that by June 1996, 65% of all browsers accessing their servers were HTTP/1.1 compliant. The HTTP/1.1 standard as defined in RFC 2068 was released in January 1997. Improvements and updates to the HTTP/1.1 standard were released under RFC 2616 in June 1999. In 2007, the HTTPbis Working Group was formed, in part, to revise and clarify the HTTP/1.1 specification. In June 2014, the WG released an updated six-part specification obsoleting RFC 2616: RFC 7230, HTTP/1.1: Message Syntax and Routing; RFC 7231, HTTP/1.1: Semantics and Content; RFC 7232, HTTP/1.1: Conditional Requests; RFC 7233, HTTP/1.1: Range Requests; RFC 7234, HTTP/1.1: Caching; and RFC 7235, HTTP/1.1: Authentication.
Simple Network Management Protocol
Simple Network Management Protocol is an Internet Standard protocol for collecting and organizing information about managed devices on IP networks and for modifying that information to change device behavior. Devices that support SNMP include cable modems, routers, switches, workstations and more. SNMP is widely used in network management for network monitoring. SNMP exposes management data in the form of variables on the managed systems, organized in a management information base, which describe the system status and configuration. These variables can be remotely queried by managing applications. Three significant versions of SNMP have been deployed. SNMPv1 is the original version of the protocol. More recent versions, SNMPv2c and SNMPv3, feature improvements in performance and security. SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force; it consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects.
In typical uses of SNMP, one or more administrative computers called managers have the task of monitoring or managing a group of hosts or devices on a computer network. Each managed system executes a software component called an agent which reports information via SNMP to the manager. An SNMP-managed network consists of three key components: managed devices; agents, the software which runs on managed devices; and the network management station (NMS), the software which runs on the manager. A managed device is a network node that implements an SNMP interface that allows unidirectional (read-only) or bidirectional (read-write) access to node-specific information. Managed devices exchange node-specific information with the NMSs. Sometimes called network elements, the managed devices can be any type of device, including, but not limited to, access servers, cable modems, hubs, IP telephones, IP video cameras, computer hosts, and printers. An agent is a network-management software module that resides on a managed device. An agent has local knowledge of management information and translates that information to or from an SNMP-specific form.
A network management station executes applications that control managed devices. NMSs provide the bulk of the memory resources required for network management. One or more NMSs may exist on any managed network. SNMP agents expose management data on the managed systems as variables. The protocol also permits active management tasks, such as configuration changes, through remote modification of these variables. The variables accessible via SNMP are organized in hierarchies. SNMP itself does not define which variables a managed system should offer. Rather, SNMP uses an extensible design. These hierarchies are described as a management information base. MIBs describe the structure of the management data of a device subsystem; each object identifier (OID) identifies a variable that can be read or set via SNMP. MIBs use the notation defined by Structure of Management Information Version 2.0, a subset of ASN.1. SNMP operates in the application layer of the Internet protocol suite. All SNMP messages are transported via the User Datagram Protocol.
The SNMP agent receives requests on UDP port 161. The manager may send requests from any available source port to port 161 in the agent; the agent response is sent back to the source port on the manager. The manager receives notifications on port 162; the agent may generate notifications from any available port. When used with Transport Layer Security or Datagram Transport Layer Security, requests are received on port 10161 and notifications are sent to port 10162. SNMPv1 specifies five core protocol data units. Two other PDUs, GetBulkRequest and InformRequest were added in SNMPv2 and the Report PDU was added in SNMPv3. All SNMP PDUs are constructed as follows: The seven SNMP PDU types as identified by the PDU-type field are as follows: GetRequest A manager-to-agent request to retrieve the value of a variable or list of variables. Desired variables are specified in variable bindings. Retrieval of the specified variable values is to be done as an atomic operation by the agent. A Response with current values is returned.
SetRequest A manager-to-agent request to change the value of a variable or list of variables. Variable bindings are specified in the body of the request. Changes to all specified variables are to be made as an atomic operation by the agent. A Response with new values for the variables is returned. GetNextRequest A manager-to-agent request to discover available variables and their values. Returns a Response with variable binding for the lexicographically next variable in the MIB; the entire MIB of an agent can be walked by iterative application of GetNextRequest starting at OID 0. Rows of a table can be read by specifying column OIDs in the variable bindings of the request. GetBulkRequest A manager-to-agent request for multiple iterations of GetNextRequest. An optimized version of GetNextRequest. Returns a Response with multiple variable bindings walked from the variable binding or bindings in the request. PDU specific non-repeaters and max-repetitions fields are used to control response behavior.
GetBulkRequest was introduced in SNMPv2. Response Returns variable bindings and acknowledgement from the agent to the manager for GetRequest, SetRequest, GetNextRequest, GetBulkRequest and InformRequest. Error reporting is provided by error-status and error-index fields. Although it is used as a response to both gets and sets, this PDU was called GetResponse in SNMPv1.
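As a sketch of the request/response exchange described above, the following uses the third-party pysnmp library (its classic synchronous hlapi; newer releases may organize the API differently) to send a single GetRequest for sysDescr.0 to a placeholder agent on UDP port 161 and print the returned variable binding.

```python
# Hedged sketch assuming pysnmp is installed; the agent address is a placeholder.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),               # SNMPv2c community string
    UdpTransportTarget(("192.0.2.50", 161)),          # agent address and port
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),  # sysDescr.0
))

if error_indication:
    print("SNMP error:", error_indication)
elif error_status:
    print("SNMP error status:", error_status.prettyPrint())
else:
    for var_bind in var_binds:
        print(" = ".join(x.prettyPrint() for x in var_bind))
```

A MIB walk would replace getCmd with nextCmd (or bulkCmd for GetBulkRequest) and iterate over the returned generator.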
Computer network
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over cable media such as wires or optic cables, or wireless media such as Wi-Fi. Network computer devices that originate and terminate the data are called network nodes. Nodes are identified by network addresses and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other more general communications protocols. This formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, as well as many others.
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, the traffic control mechanism, and the organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment. In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system semi-automatic business research environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, and later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the Aloha network developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of a Gigabit. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications allowing users to communicate efficiently and via various means: email, instant messaging, online chat, video telephone calls, video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files, data, and other types of information, giving authorized users the ability to access information stored on other computers on the network.