Hypertext Transfer Protocol
The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can access, for example by a mouse click or by tapping the screen in a web browser. Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989 to facilitate the World Wide Web. Development of HTTP standards was coordinated by the Internet Engineering Task Force and the World Wide Web Consortium, culminating in the publication of a series of Requests for Comments (RFCs). The first definition of HTTP/1.1, the version of HTTP in common use, appeared in RFC 2068 in 1997; it was made obsolete by RFC 2616 in 1999 and again by the RFC 7230 family of RFCs in 2014. Its successor, HTTP/2, was standardized in 2015 and is now supported by major web servers and browsers over Transport Layer Security using the Application-Layer Protocol Negotiation extension, where TLS 1.2 or newer is required.
HTTP functions as a request–response protocol in the client–server computing model. A web browser, for example, may be the client and an application running on a computer hosting a website may be the server; the client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content, or performs other functions on behalf of the client, returns a response message to the client; the response contains completion status information about the request and may contain requested content in its message body. A web browser is an example of a user agent. Other types of user agent include the indexing software used by search providers, voice browsers, mobile apps, and other software that accesses, consumes, or displays web content. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites benefit from web cache servers that deliver content on behalf of upstream servers to improve response time.
Web browsers cache accessed web resources and reuse them, when possible, to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address by relaying messages with external servers. HTTP is an application layer protocol designed within the framework of the Internet protocol suite. Its definition presumes an underlying, reliable transport layer protocol; the Transmission Control Protocol is commonly used. However, HTTP can be adapted to use unreliable protocols such as the User Datagram Protocol, for example in HTTPU and the Simple Service Discovery Protocol. HTTP resources are identified and located on the network by Uniform Resource Locators, using the Uniform Resource Identifier schemes http and https. URIs and hyperlinks in HTML documents form interlinked hypertext documents. HTTP/1.1 is a revision of the original HTTP. In HTTP/1.0 a separate connection to the same server is made for every resource request. HTTP/1.1 can reuse a connection multiple times to download images, stylesheets, etc. after the page has been delivered.
HTTP/1.1 communications therefore experience less latency, as the establishment of TCP connections presents considerable overhead. The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a text-based web browser. Berners-Lee first proposed the "WorldWideWeb" project in 1989, now known as the World Wide Web. The first version of the protocol had only one method, namely GET, which would request a page from a server; the response from the server was always an HTML page. The first documented version of HTTP was HTTP V0.9. Dave Raggett led the HTTP Working Group in 1995 and wanted to expand the protocol with extended operations, extended negotiation, and richer meta-information, tied with a security protocol, which became more efficient by adding additional methods and header fields.
RFC 1945 introduced and recognized HTTP V1.0 in 1996. The HTTP WG planned to publish new standards in December 1995, and support for pre-standard HTTP/1.1 based on the developing RFC 2068 was adopted by the major browser developers in early 1996. By March that year, pre-standard HTTP/1.1 was supported in Arena, Netscape 2.0, Netscape Navigator Gold 2.01, Mosaic 2.7, Lynx 2.5, and in Internet Explorer 2.0. End-user adoption of the new browsers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet were HTTP/1.1 compliant; the same company reported that by June 1996, 65% of all browsers accessing their servers were HTTP/1.1 compliant. The HTTP/1.1 standard as defined in RFC 2068 was released in January 1997. Improvements and updates to the HTTP/1.1 standard were released under RFC 2616 in June 1999. In 2007, the HTTPbis Working Group was formed, in part, to revise and clarify the HTTP/1.1 specification. In June 2014, the WG released an updated six-part specification obsoleting RFC 2616: RFC 7230 (HTTP/1.1: Message Syntax and Routing); RFC 7231 (HTTP/1.1: Semantics and Content); RFC 7232 (HTTP/1.1: Conditional Requests); RFC 7233 (HTTP/1.1: Range Requests); RFC 7234 (HTTP/1.1: Caching); and RFC 7235 (HTTP/1.1: Authentication).
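The request–response exchange and the HTTP/1.1 connection reuse described above can be sketched in a few lines. This is a minimal illustration of the HTTP/1.1 message format; the host name and status line used here are made-up example values.

```python
def build_request(host: str, path: str = "/") -> str:
    """Build a raw HTTP/1.1 GET request; the Host header is mandatory in HTTP/1.1."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: keep-alive\r\n"  # reuse the TCP connection, the HTTP/1.1 default
        "\r\n"
    )

def parse_status_line(response: str) -> tuple:
    """Split the first line of a response into (version, status code, reason)."""
    status_line = response.split("\r\n", 1)[0]
    version, code, reason = status_line.split(" ", 2)
    return version, int(code), reason
```

A client would send the request text over one TCP connection and, thanks to keep-alive, send further requests on the same connection rather than opening a new one per resource.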
Network Time Protocol
The Network Time Protocol (NTP) is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. In operation since before 1985, NTP is one of the oldest Internet protocols in current use. NTP was designed by David L. Mills of the University of Delaware. NTP is intended to synchronize all participating computers to within a few milliseconds of Coordinated Universal Time. It uses the intersection algorithm, a modified version of Marzullo's algorithm, to select accurate time servers, and is designed to mitigate the effects of variable network latency. NTP can maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more. The protocol is described in terms of a client-server model, but can also be used in peer-to-peer relationships where both peers consider the other to be a potential time source.
Implementations send and receive timestamps using the User Datagram Protocol on port number 123. They can also use broadcasting or multicasting, where clients passively listen to time updates after an initial round-trip calibrating exchange. NTP supplies a warning of any impending leap second adjustment, but no information about local time zones or daylight saving time is transmitted. The current protocol is version 4 (NTPv4), a proposed standard as documented in RFC 5905. It is backward compatible with version 3, specified in RFC 1305. In 1979, network time synchronization technology was used in what was the first public demonstration of Internet services running over a trans-Atlantic satellite network, at the National Computer Conference in New York. The technology was described in the 1981 Internet Engineering Note 173, and a public protocol was developed from it that was documented in RFC 778. The technology was first deployed in a local area network as part of the Hello routing protocol and implemented in the Fuzzball router, an experimental operating system used in network prototyping, where it ran for many years.
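The round-trip calibrating exchange mentioned above yields four timestamps from which the client derives its clock offset and the network path delay. A minimal sketch of the standard NTP calculation, with made-up example timestamps:

```python
def ntp_offset_delay(t0: float, t1: float, t2: float, t3: float):
    """Standard NTP clock-offset and round-trip-delay calculation.

    t0: client transmit time (client clock)
    t1: server receive time  (server clock)
    t2: server transmit time (server clock)
    t3: client receive time  (client clock)
    """
    offset = ((t1 - t0) + (t2 - t3)) / 2  # estimated client clock error
    delay = (t3 - t0) - (t2 - t1)         # total time spent on the network
    return offset, delay
```

The formula assumes the outbound and return paths take equal time, which is exactly why the asymmetric routes mentioned above introduce errors that NTP cannot detect from a single exchange.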
Other related network tools were available both then and now. They include the Daytime and Time protocols for recording the time of events, as well as the ICMP Timestamp and IP Timestamp option. More complete synchronization systems, although lacking NTP's data analysis and clock disciplining algorithms, include the Unix daemon timed, which uses an election algorithm to appoint a server for all the clients. In 1985, NTP version 0 was implemented in both Fuzzball and Unix, and the NTP packet header and round-trip delay and offset calculations, which have persisted into NTPv4, were documented in RFC 958. Despite the slow computers and networks available at the time, accuracy of better than 100 milliseconds was obtained on Atlantic-spanning links, with accuracy of tens of milliseconds on Ethernet networks. In 1988, a much more complete specification of the NTPv1 protocol, with associated algorithms, was published in RFC 1059. It drew on the experimental results and clock filter algorithm documented in RFC 956 and was the first version to describe the client-server and peer-to-peer modes.
In 1991, the NTPv1 architecture and algorithms were brought to the attention of a wider engineering community with the publication of an article by David L. Mills in the IEEE Transactions on Communications. In 1989, RFC 1119 was published defining NTPv2 by means of a state machine, with pseudocode to describe its operation. It introduced a management protocol and a cryptographic authentication scheme, both of which have survived into NTPv4. The design of NTPv2 was criticized for lacking formal correctness principles by the DTSS community; their alternative design included Marzullo's algorithm, a modified version of which was promptly added to NTP. The bulk of the algorithms from this era have largely survived into NTPv4. In 1992, RFC 1305 defined NTPv3. The RFC included an analysis of all sources of error, from the reference clock down to the final client, which enabled the calculation of a metric that helps choose the best server where several candidates appear to disagree. Broadcast mode was introduced. In subsequent years, as new features were added and algorithm improvements were made, it became apparent that a new protocol version was required.
In 2010, RFC 5905 was published containing a proposed specification for NTPv4. The protocol has moved on since then, and as of 2014 an updated RFC had yet to be published. Following the retirement of Mills from the University of Delaware, the reference implementation is maintained as an open source project led by Harlan Stenn. NTP uses a semi-layered system of time sources. Each level of this hierarchy is termed a stratum and is assigned a number starting with zero for the reference clock at the top. A server synchronized to a stratum n server runs at stratum n + 1. The number represents the distance from the reference clock and is used to prevent cyclical dependencies in the hierarchy. Stratum is not always an indication of reliability. A brief description of strata 0, 1, 2 and 3 is provided below. Stratum 0: These are high-precision timekeeping devices such as atomic clocks, GPS or other radio clocks. They generate an accurate pulse-per-second signal that triggers an interrupt and timestamp on a connected computer.
Stratum 0 devices are known as reference clocks. Stratum 1 These are computers whose system time is synchronized to w
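The stratum numbering described above is simple enough to sketch. The server names here are hypothetical, and a real NTP daemon derives stratum from the packets it receives rather than from a static table; this is only an illustration of the "distance from the reference clock" rule and the cycle prevention it enables.

```python
def stratum_of(server, upstream, _seen=None):
    """Stratum = distance from the reference clock; cycles are rejected.

    upstream maps each server to the server it synchronizes to,
    or to None if it is a reference clock (stratum 0).
    """
    seen = set() if _seen is None else _seen
    if server in seen:
        raise ValueError("cyclical dependency in NTP hierarchy")
    seen.add(server)
    parent = upstream.get(server)
    if parent is None:  # a reference clock sits at the top
        return 0
    return 1 + stratum_of(parent, upstream, seen)
```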
RS-232
In telecommunications, RS-232 (Recommended Standard 232) refers to a standard introduced in 1960 for serial communication transmission of data. It formally defines the signals connecting a DTE (data terminal equipment), such as a computer terminal, and a DCE (data circuit-terminating equipment), such as a modem. The standard defines the electrical characteristics and timing of signals, the meaning of signals, and the physical size and pinout of connectors. The current version of the standard is TIA-232-F Interface Between Data Terminal Equipment and Data Circuit-Terminating Equipment Employing Serial Binary Data Interchange, issued in 1997. The RS-232 standard had been commonly used in computer serial ports. A serial port complying with the RS-232 standard was once a standard feature of many types of computers. Personal computers used them for connections not only to modems, but also to printers, computer mice, data storage, uninterruptible power supplies, and other peripheral devices. RS-232, when compared to interfaces such as RS-422, RS-485 and Ethernet, has lower transmission speed, short maximum cable length, large voltage swing, large standard connectors, no multipoint capability and limited multidrop capability.
In modern personal computers, USB has displaced RS-232 from most of its peripheral interface roles. Many computers no longer come equipped with RS-232 ports and must use either an external USB-to-RS-232 converter or an internal expansion card with one or more serial ports to connect to RS-232 peripherals. Thanks to their simplicity and past ubiquity, RS-232 interfaces are still used—particularly in industrial machines, networking equipment, scientific instruments where a short-range, point-to-point, low-speed wired data connection is adequate; the Electronic Industries Association standard RS-232-C as of 1969 defines: Electrical signal characteristics such as voltage levels, signaling rate and slew-rate of signals, voltage withstand level, short-circuit behavior, maximum load capacitance. Interface mechanical characteristics, pluggable connectors and pin identification. Functions of each circuit in the interface connector. Standard subsets of interface circuits for selected telecom applications.
The standard does not define such elements as the character encoding, the framing of characters, the transmission order of bits, or error detection protocols. The character format and transmission bit rate are set by the serial port hardware, typically a UART, which may contain circuits to convert the internal logic levels to RS-232 compatible signal levels. The standard does not define bit rates for transmission, except that it says it is intended for bit rates lower than 20,000 bits per second. RS-232 was first introduced in 1960 by the Electronic Industries Association as a Recommended Standard. The original DTEs were electromechanical teletypewriters, and the original DCEs were modems. When electronic terminals began to be used, they were designed to be interchangeable with teletypewriters, and so supported RS-232. Because the standard did not foresee the requirements of devices such as computers, test instruments, POS terminals, and so on, designers implementing an RS-232 compatible interface on their equipment interpreted the standard idiosyncratically.
The resulting common problems were non-standard pin assignment of circuits on connectors, incorrect or missing control signals. The lack of adherence to the standards produced a thriving industry of breakout boxes, patch boxes, test equipment and other aids for the connection of disparate equipment. A common deviation from the standard was to drive the signals at a reduced voltage; some manufacturers therefore built transmitters that supplied +5 V and −5 V and labeled them as "RS-232 compatible". Personal computers started to make use of the standard so that they could connect to existing equipment. For many years, an RS-232-compatible port was a standard feature for serial communications, such as modem connections, on many computers, it remained in widespread use into the late 1990s. In personal computer peripherals, it has been supplanted by other interface standards, such as USB. RS-232 is still used to connect older designs of peripherals, industrial equipment, console ports, special purpose equipment.
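The character framing handled by the UART, mentioned earlier, is not defined by RS-232 itself. As an illustration under the common 8N1 convention (8 data bits, no parity, 1 stop bit), a byte is sent least-significant bit first, between a start bit and a stop bit:

```python
def frame_8n1(byte: int) -> list:
    """Frame one data byte as 8N1: start bit (0), 8 data bits LSB-first, stop bit (1).

    Returns the sequence of logic levels as they appear on the wire;
    the RS-232 standard itself only defines how those levels map to voltages.
    """
    if not 0 <= byte <= 0xFF:
        raise ValueError("one byte only")
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB transmitted first
    return [0] + data_bits + [1]
```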
The standard has been renamed several times during its history as the sponsoring organization changed its name, and has been variously known as EIA RS-232, EIA 232, and, most recently, TIA 232. The standard continued to be revised and updated by the Electronic Industries Association and, since 1988, by the Telecommunications Industry Association. Revision C was issued in a document dated August 1969. Revision D was issued in 1986. The current revision is TIA-232-F Interface Between Data Terminal Equipment and Data Circuit-Terminating Equipment Employing Serial Binary Data Interchange, issued in 1997. Changes since Revision C have been in timing and details intended to improve harmonization with the CCITT standard V.24, but equipment built to the current standard will interoperate with older versions. Related ITU-T standards include V.24 and V.28. In revision D of EIA-232, the D-subminiature connector was formally included as part of the standard; the voltage range was extended to ±25 volts, and the circuit capacitance limit was expressly stated as 2500 pF.
Revision E of EIA-232 introduced a new, standard D-shell 26-pin "Alt A" connector, made other changes to improve compatibility w
X.25
X.25 is an ITU-T standard protocol suite for packet-switched wide area network (WAN) communication. An X.25 WAN consists of packet-switching exchange nodes as the networking hardware, and leased lines, plain old telephone service connections, or ISDN connections as physical links. X.25 was defined by the International Telegraph and Telephone Consultative Committee in a series of drafts and finalized in a publication known as The Orange Book in 1976. X.25 networks were popular during the 1980s with telecommunications companies and in financial transaction systems such as automated teller machines. However, most uses have since moved to Internet Protocol systems. X.25 is still used and available in niche applications such as Retronet, which allows vintage computers to use the internet. X.25 is one of the oldest packet-switched services available. It was developed before the OSI Reference Model; the protocol suite is designed as three conceptual layers, which correspond to the lower three layers of the seven-layer OSI model.
It also supports functionality not found in the OSI network layer. X.25 was developed in the ITU-T Study Group VII based upon a number of emerging data network projects. Various updates and additions were worked into the standard, recorded in the ITU series of technical books describing the telecommunication systems; these books were published every fourth year with different-colored covers. The X.25 specification is only part of the larger set of X-Series specifications on public data networks. The public data network was the common name given to the international collection of X.25 providers, whose combined network had large global coverage into the 1990s. Publicly accessible X.25 networks were set up in most countries during the 1970s and 1980s to lower the cost of accessing various online services. Beginning in the early 1990s, in North America, use of X.25 networks started to be replaced by Frame Relay services offered by national telephone companies. Most systems that required X.25 now use TCP/IP; however, it is possible to transport X.25 over TCP/IP when necessary.
X.25 networks are still in use throughout the world. A variant called AX.25 is used by amateur packet radio. Racal Paknet, now known as Widanet, is still in operation in many regions of the world, running on an X.25 protocol base. In some countries, like the Netherlands or Germany, it is possible to use a stripped-down version of X.25 via the D-channel of an ISDN-2 connection for low-volume applications such as point-of-sale terminals. Additionally, X.25 is still in heavy use in the aeronautical business, though a transition to modern protocols like X.400 is unavoidable as X.25 hardware becomes rare and costly. As of March 2006, the United States National Airspace Data Interchange Network has used X.25 to interconnect remote airfields with Air Route Traffic Control Centers. France was one of the last remaining countries where a commercial end-user service based on X.25 operated. Known as Minitel, it was based on Videotex, itself running on X.25. In 2002, Minitel had about 9 million users; in 2011, it still accounted for about 2 million users in France when France Télécom announced it would shut down the service by 30 June 2012.
As planned, the service was terminated on 30 June 2012; there were 800,000 terminals still in operation at the time. The general concept of X.25 was to create a global packet-switched network. Much of the X.25 system is a description of the rigorous error correction needed to achieve this, as well as more efficient sharing of capital-intensive physical resources. The X.25 specification defines only the interface between a DTE and an X.25 network. X.75, a protocol similar to X.25, defines the interface between two X.25 networks to allow connections to traverse two or more networks. X.25 does not specify how the network operates internally; many X.25 network implementations used something similar to X.25 or X.75 internally, but others used quite different protocols internally. The ISO protocol equivalent to X.25, ISO 8208, is compatible with X.25, but additionally includes provision for two X.25 DTEs to be directly connected to each other with no network in between. By separating the Packet-Layer Protocol, ISO 8208 permits operation over additional networks such as ISO 8802 LLC2 and the OSI data link layer.
X.25 defined three basic protocol levels or architectural layers. In the original specifications these were referred to as levels and had a level number, whereas all ITU-T X.25 recommendations and ISO 8208 standards released after 1984 refer to them as layers. The layer numbers were dropped to avoid confusion with the OSI Model layers. Physical layer: This layer specifies the physical, electrical and procedural characteristics to control the physical link between a DTE and a DCE. Common implementations use X.21, EIA-449 or other serial protocols. Data link layer: The data link layer consists of the link access procedure for data interchange on the link between a DTE and a DCE. In its implementation, the Link Access Procedure, Balanced (LAPB) is a data link protocol that manages a communication session and controls the packet framing; it is a bit-oriented protocol that provides orderly delivery. Packet layer: This layer defined a packet-layer protocol for exchanging control and user data packets to form a packet-switching network based on virtual calls, acco
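The packet layer just described prefixes every packet with a three-byte header: a general format identifier (GFI) and logical channel group number sharing the first byte, the logical channel number in the second, and a packet type identifier in the third. A rough sketch, assuming modulo-8 sequence numbering and using 0x0B, the Call Request type identifier, as the example packet type:

```python
import struct

def packet_header(lcg: int, lcn: int, ptype: int, modulo8: bool = True) -> bytes:
    """Build the 3-byte X.25 packet-layer header.

    Byte 1: GFI (4 bits) | logical channel group number (4 bits)
    Byte 2: logical channel number
    Byte 3: packet type identifier
    """
    gfi = 0b0001 if modulo8 else 0b0010  # low two bits select the sequence-number modulus
    return struct.pack("!BBB", (gfi << 4) | (lcg & 0x0F), lcn & 0xFF, ptype & 0xFF)
```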
Session Initiation Protocol
The Session Initiation Protocol is a signaling protocol used for initiating and terminating real-time sessions that include voice and messaging applications. SIP is used for signaling and controlling multimedia communication sessions in applications of Internet telephony for voice and video calls, in private IP telephone systems, in instant messaging over Internet Protocol networks as well as mobile phone calling over LTE; the protocol defines the specific format of messages exchanged and the sequence of communications for cooperation of the participants. SIP is a text-based protocol, incorporating many elements of the Hypertext Transfer Protocol and the Simple Mail Transfer Protocol. A call established with SIP may consist of multiple media streams, but no separate streams are required for applications, such as text messaging, that exchange data as payload in the SIP message. SIP works in conjunction with several other protocols that carry the session media. Most media type and parameter negotiation and media setup is performed with the Session Description Protocol, carried as payload in SIP messages.
SIP is designed to be independent of the underlying transport layer protocol and can be used with the User Datagram Protocol, the Transmission Control Protocol, and the Stream Control Transmission Protocol. For secure transmission of SIP messages over insecure network links, the protocol may be encrypted with Transport Layer Security. For the transmission of media streams, the SDP payload carried in SIP messages employs the Real-time Transport Protocol or the Secure Real-time Transport Protocol. SIP was designed by Mark Handley, Henning Schulzrinne, Eve Schooler and Jonathan Rosenberg in 1996; the protocol was standardized as RFC 2543 in 1999. In November 2000, SIP was accepted as a 3GPP signaling protocol and a permanent element of the IP Multimedia Subsystem architecture for IP-based streaming multimedia services in cellular networks. In June 2002 the specification was revised in RFC 3261, and various extensions and clarifications have been published since. SIP was designed to provide a signaling and call setup protocol for IP-based communications supporting the call processing functions and features present in the public switched telephone network, with a vision of supporting new multimedia applications.
It has been extended for video conferencing, streaming media distribution, instant messaging, presence information, file transfer, Internet fax and online games. SIP is distinguished by its proponents for having roots in the Internet community rather than in the telecommunications industry. SIP has been standardized by the IETF, while other protocols, such as H.323, have traditionally been associated with the International Telecommunication Union. SIP is involved only in the signaling operations of a media communication session and is used to set up and terminate voice or video calls. SIP can be used to establish multiparty sessions, and it allows modification of existing calls; the modification can involve changing addresses or ports, inviting more participants, and adding or deleting media streams. SIP has also found applications in messaging applications, such as instant messaging, and event subscription and notification. SIP works in conjunction with several other protocols that specify the media format and coding and that carry the media once the call is set up.
For call setup, the body of a SIP message contains a Session Description Protocol (SDP) data unit, which specifies the media format and media communication protocol. Voice and video media streams are carried between the terminals using the Real-time Transport Protocol or Secure Real-time Transport Protocol. Every resource of a SIP network, such as user agents, call routers, and voicemail boxes, is identified by a Uniform Resource Identifier (URI). The syntax of the URI follows the general standard syntax also used in Web services and e-mail. The URI scheme used for SIP is sip, and a typical SIP URI has the form sip:username@domainname or sip:username@hostport, where domainname requires DNS SRV records to locate the servers for the SIP domain, while hostport can be an IP address or a fully qualified domain name of the host together with a port. If secure transmission is required, the scheme sips is used. SIP employs design elements similar to the HTTP request/response transaction model; each transaction consists of a client request that invokes a particular method or function on the server and at least one response.
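The URI forms just described can be illustrated with a simplified parser. It handles only the sip:user@host[:port] shape and ignores URI parameters; the default ports assumed here are the conventional 5060 for sip and 5061 for sips.

```python
def parse_sip_uri(uri: str):
    """Split a SIP URI of the form sip:user@host[:port] into its parts.

    A deliberately simplified sketch; real SIP URIs also allow
    parameters, headers, and passwords, which are ignored here.
    """
    scheme, _, rest = uri.partition(":")
    if scheme not in ("sip", "sips"):
        raise ValueError("not a SIP URI")
    user, _, hostport = rest.partition("@")
    host, _, port = hostport.partition(":")
    default = 5061 if scheme == "sips" else 5060  # conventional SIP ports
    return scheme, user, host, int(port) if port else default
```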
SIP reuses most of the header fields, encoding rules and status codes of HTTP, providing a readable text-based format. SIP can be carried by several transport layer protocols including Transmission Control Protocol, User Datagram Protocol, Stream Control Transmission Protocol. SIP clients use TCP or UDP on port numbers 5060 or 5061 for SIP traffic to servers and other endpoints. Port 5060 is used for non-encrypted signaling traffic whereas port 5061 is used for traffic encrypted with Transport Layer Security. SIP-based telephony networks implement call processing features of Signaling System 7, for which special SIP protocol extensions exist, although the two protocols themselves are different. SS7 is a centralized protocol, characterized by a complex central network architecture and dumb endpoints. SIP is a client-server protocol of equipotent peers. SIP features are implemented in the communicating endpoints, while the traditional SS7 architecture is in use only between switching centers; the network elements that use the Session Initiation Protocol for commun
Stream Control Transmission Protocol
The Stream Control Transmission Protocol (SCTP) is a computer networking communications protocol which operates at the transport layer and serves a role similar to the popular protocols TCP and UDP. It is standardized by the IETF in RFC 4960. SCTP provides some of the features of both UDP and TCP: it is message-oriented like UDP and ensures reliable, in-sequence transport of messages with congestion control like TCP; it differs from those protocols by providing multi-homing and redundant paths to increase resilience and reliability. In the absence of native SCTP support in operating systems, it is possible to tunnel SCTP over UDP, as well as to map TCP API calls to SCTP calls so that existing applications can use SCTP without modification. The reference implementation was released as part of FreeBSD version 7 and has since been ported to other platforms. The IETF Signaling Transport working group defined the protocol in the year 2000, and the IETF Transport Area working group maintains it. RFC 4960 defines the protocol; RFC 3286 provides an introduction.
SCTP applications submit their data to be transmitted in messages to the SCTP transport layer. SCTP places messages and control information into separate chunks, each identified by a chunk header. The protocol can fragment a message into a number of data chunks, but each data chunk contains data from only one user message. SCTP bundles the chunks into SCTP packets. The SCTP packet, submitted to the Internet Protocol, consists of a packet header and SCTP control chunks, followed by SCTP data chunks. One can characterize SCTP as message-oriented, meaning it transports a sequence of messages, rather than transporting an unbroken stream of bytes as does TCP. As in UDP, in SCTP a sender sends a message in one operation, and that exact message is passed to the receiving application process in one operation. In contrast, TCP is a stream-oriented protocol; TCP does not allow the receiver to know how many times the sender application called on the TCP transport passing it groups of bytes to be sent out.
At the sender, TCP appends more bytes to a queue of bytes waiting to go out over the network, rather than having to keep a queue of individual separate outbound messages which must be preserved as such. The term multi-streaming refers to the capability of SCTP to transmit several independent streams of chunks in parallel, for example transmitting web page images together with the web page text. In essence, it involves bundling several connections into a single SCTP association, operating on messages rather than bytes. TCP preserves byte order in the stream by including a byte sequence number with each segment. SCTP, on the other hand, assigns a message-id to each message sent in a stream; this allows independent ordering of messages in different streams. However, message ordering is optional in SCTP. Features of SCTP include: Reliable transmission of both ordered and unordered data streams. Multihoming support in which one or both endpoints of a connection can consist of more than one IP address, enabling transparent fail-over between redundant network paths.
Delivery of chunks within independent streams eliminates unnecessary head-of-line blocking, as opposed to TCP byte-stream delivery. Explicit partial reliability. Path selection and monitoring to select a primary data transmission path and test the connectivity of the transmission path. Validation and acknowledgment mechanisms protect against flooding attacks and provide notification of duplicated or missing data chunks. Improved error detection suitable for Ethernet jumbo frames; the designers of SCTP intended it for the transport of telephony over Internet Protocol, with the goal of duplicating some of the reliability attributes of the SS7 signaling network in IP. This IETF effort is known as SIGTRAN. In the meantime, other uses have been proposed, for example, the Diameter protocol and Reliable Server Pooling. TCP has provided the primary means to transfer data reliably across the Internet. However, TCP has imposed limitations on several applications. From RFC 4960: TCP provides both reliable data transfer and strict order-of-transmission delivery of data.
Some applications need reliable transfer without sequence maintenance, while others would be satisfied with partial ordering of the data. In both of these cases, the head-of-line blocking property of TCP causes unnecessary delay. For applications exchanging distinct records or messages, the stream-oriented nature of TCP requires the addition of explicit markers or other encoding to delineate the individual records. In order to avoid sending many small IP packets where one single larger packet would have sufficed, the TCP implementation may delay transmitting data while waiting for more data to be queued by the application. When such a small delay is undesirable, the application must explicitly request undelayed transmission on a case-by-case basis using the push facility. SCTP, on the other hand, allows undelayed transmission to be configured as a default for an association, eliminating any undesired delays, but at the cost of higher transfer overhead. The limited scope of TCP sockets also complicates the task of providing highly available data transfer capability using multi-homed hosts.
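The explicit record delimiting that TCP applications must add, as noted above, is commonly done with a length prefix. A minimal sketch of that framing; SCTP makes it unnecessary because the transport itself preserves message boundaries.

```python
import struct

def frame(msg: bytes) -> bytes:
    """Prefix one record with its 4-byte big-endian length."""
    return struct.pack("!I", len(msg)) + msg

def unframe(stream: bytes) -> list:
    """Recover the individual records from a concatenated byte stream,
    undoing the framing that a TCP receiver must perform itself."""
    msgs, i = [], 0
    while i < len(stream):
        (n,) = struct.unpack_from("!I", stream, i)
        msgs.append(stream[i + 4 : i + 4 + n])
        i += 4 + n
    return msgs
```

With SCTP, each `frame`/`unframe` pair would simply be one send and one receive of a whole message.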
TCP is also vulnerable to denial-of-service attacks, such as SYN attacks. Adoption of SCTP has been slowed by lack of awareness and lack of implementations (particularly in Microsoft Windows).
Simple Network Management Protocol
Simple Network Management Protocol is an Internet Standard protocol for collecting and organizing information about managed devices on IP networks and for modifying that information to change device behavior. Devices that support SNMP include cable modems, switches, workstations, and more. SNMP is widely used in network management for network monitoring. SNMP exposes management data in the form of variables on the managed systems, organized in a management information base, which describe the system status and configuration; these variables can be remotely queried by managing applications. Three significant versions of SNMP have been deployed. SNMPv1 is the original version of the protocol. More recent versions, SNMPv2c and SNMPv3, feature improvements in performance and security. SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force. It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects.
In typical uses of SNMP, one or more administrative computers called managers have the task of monitoring or managing a group of hosts or devices on a computer network. Each managed system executes a software component called an agent, which reports information via SNMP to the manager. An SNMP-managed network consists of three key components: managed devices; agent software, which runs on the managed devices; and network management station (NMS) software, which runs on the manager. A managed device is a network node that implements an SNMP interface allowing unidirectional or bidirectional access to node-specific information. Managed devices exchange node-specific information with the NMSs. Sometimes called network elements, the managed devices can be any type of device, including, but not limited to, access servers, cable modems, hubs, IP telephones, IP video cameras, computer hosts, and printers. An agent is a network-management software module that resides on a managed device. An agent has local knowledge of management information and translates that information to or from an SNMP-specific form.
A network management station executes applications that control managed devices. NMSs provide the bulk of the processing and memory resources required for network management. One or more NMSs may exist on any managed network. SNMP agents expose management data on the managed systems as variables; the protocol also permits active management tasks, such as configuration changes, through remote modification of these variables. The variables accessible via SNMP are organized in hierarchies. SNMP itself does not define which variables a managed system should offer. Rather, SNMP uses an extensible design; these hierarchies are described in a management information base. MIBs describe the structure of the management data of a device subsystem; they use a hierarchical namespace containing object identifiers (OIDs). Each OID identifies a variable that can be read or set via SNMP. MIBs use the notation defined by Structure of Management Information Version 2.0 (SMIv2), a subset of ASN.1. SNMP operates in the application layer of the Internet protocol suite. All SNMP messages are transported via User Datagram Protocol.
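The hierarchical OID namespace can be modeled directly in code (a sketch; the OIDs shown are the standard sysDescr.0 and sysUpTime.0 instances from the MIB-2 system group). Representing each dotted-decimal OID as a tuple of integers makes the lexicographic ordering that SNMP relies on directly available:

```python
def parse_oid(text: str):
    """Turn a dotted-decimal OID into a tuple of integers, which
    Python compares lexicographically, as SNMP requires."""
    return tuple(int(part) for part in text.split("."))

sys_descr = parse_oid("1.3.6.1.2.1.1.1.0")   # sysDescr.0
sys_uptime = parse_oid("1.3.6.1.2.1.1.3.0")  # sysUpTime.0

# sysDescr.0 sorts before sysUpTime.0 in the MIB tree.
print(sys_descr < sys_uptime)  # True

# A parent prefix always sorts immediately before its children.
print(parse_oid("1.3.6.1.2.1.1") < sys_descr)  # True
```

This ordering is what makes the GetNextRequest-based MIB walk, described below, well defined.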
The SNMP agent receives requests on UDP port 161. The manager may send requests from any available source port to port 161 on the agent; the agent's response is sent back to the source port on the manager. The manager receives notifications on port 162; the agent may generate notifications from any available port. When used with Transport Layer Security or Datagram Transport Layer Security, requests are received on port 10161 and notifications are sent to port 10162. SNMPv1 specifies five core protocol data units (PDUs). Two other PDUs, GetBulkRequest and InformRequest, were added in SNMPv2, and the Report PDU was added in SNMPv3. All SNMP PDUs share a common structure: a PDU type, a request identifier, error-status and error-index fields, and a list of variable bindings. The seven SNMP PDU types, as identified by the PDU-type field, are as follows: GetRequest A manager-to-agent request to retrieve the value of a variable or list of variables. Desired variables are specified in variable bindings. Retrieval of the specified variable values is to be done as an atomic operation by the agent. A Response with current values is returned.
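The request/response port pattern can be sketched with plain UDP sockets on localhost (a simulation only: no SNMP encoding is performed, and since binding port 161 requires privileges, an ephemeral port stands in for it here):

```python
import socket

# Agent side: in real SNMP this socket would be bound to UDP port 161.
agent = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
agent.bind(("127.0.0.1", 0))            # stand-in for port 161
agent_addr = agent.getsockname()

# Manager side: requests may originate from any available source port.
manager = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
manager.bind(("127.0.0.1", 0))

# The manager sends a request to the agent's listening port ...
manager.sendto(b"get sysDescr.0", agent_addr)
payload, source = agent.recvfrom(1024)

# ... and the agent replies to whatever source port the request
# arrived from, mirroring SNMP's response behavior.
agent.sendto(b"response", source)
reply, _ = manager.recvfrom(1024)
print(reply)  # b'response'

agent.close()
manager.close()
```

The key point is that the agent never needs prior knowledge of the manager's port: it is learned from the incoming datagram's source address.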
SetRequest A manager-to-agent request to change the value of a variable or list of variables. Variable bindings are specified in the body of the request. Changes to all specified variables are to be made as an atomic operation by the agent. A Response with the new values for the variables is returned. GetNextRequest A manager-to-agent request to discover available variables and their values. Returns a Response with the variable binding for the lexicographically next variable in the MIB. The entire MIB of an agent can be walked by iterative application of GetNextRequest starting at OID 0. Rows of a table can be read by specifying column OIDs in the variable bindings of the request. GetBulkRequest A manager-to-agent request for multiple iterations of GetNextRequest, i.e. an optimized version of GetNextRequest. Returns a Response with multiple variable bindings walked from the variable binding or bindings in the request. PDU-specific non-repeaters and max-repetitions fields are used to control response behavior.
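The GetNextRequest walk described above can be sketched against an in-memory table of OID/value pairs (a simulation, not a wire-level implementation; the OIDs and values are illustrative):

```python
# Illustrative in-memory "MIB": OIDs as integer tuples mapped to values.
mib = {
    (1, 3, 6, 1, 2, 1, 1, 1, 0): "Example router",   # sysDescr.0
    (1, 3, 6, 1, 2, 1, 1, 3, 0): 123456,             # sysUpTime.0
    (1, 3, 6, 1, 2, 1, 1, 5, 0): "router1",          # sysName.0
}

def get_next(oid):
    """Return the (oid, value) pair for the lexicographically next
    variable after `oid`, or None at the end of the MIB view."""
    candidates = sorted(o for o in mib if o > oid)
    if not candidates:
        return None
    nxt = candidates[0]
    return nxt, mib[nxt]

# Walk the whole MIB by iterating get_next from before the root.
oid = ()
while (binding := get_next(oid)) is not None:
    oid, value = binding
    print(oid, value)
```

Each step returns exactly one binding; GetBulkRequest amortizes this by having the agent perform up to max-repetitions such steps per request, which is why it supersedes repeated GetNextRequests for bulk retrieval.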
GetBulkRequest was introduced in SNMPv2. Response Returns variable bindings and acknowledgement from agent to manager for GetRequest, SetRequest, GetNextRequest, GetBulkRequest, and InformRequest. Error reporting is provided by the error-status and error-index fields. Although it was used as a response to both gets and sets, this P