Media Gateway Control Protocol
The Media Gateway Control Protocol (MGCP) is a signaling and call control communications protocol used in voice over IP telecommunication systems. It implements the media gateway control protocol architecture for controlling media gateways on Internet Protocol networks connected to the public switched telephone network (PSTN). The protocol is a successor to the Simple Gateway Control Protocol, developed by Bellcore and Cisco, and to the Internet Protocol Device Control. The methodology of MGCP reflects the structure of the PSTN, with the power of the network residing in a call control center or softswitch, analogous to the central office in the telephone network. The endpoints are low-intelligence devices that execute control commands from a call agent or media gateway controller in the softswitch and provide result indications in response. The protocol represents a decomposition of other VoIP models, such as H.323 and the Session Initiation Protocol, in which the endpoint devices of a call have higher levels of signaling intelligence.
MGCP is a text-based protocol consisting of commands and responses. It uses the Session Description Protocol for specifying and negotiating the media streams to be transmitted in a call session and the Real-time Transport Protocol for framing the media streams. The media gateway control protocol architecture and its methodologies and programming interfaces are described in RFC 2805. MGCP is a master-slave protocol in which media gateways are controlled by a call control element known as the call agent, typically part of a softswitch. With the protocol, the call agent can control each specific port on a media gateway, which provides scalable IP telephony solutions. The distributed system is composed of at least one call agent, one or more media gateways, which perform the conversion of media signals between circuit-switched and packet-switched networks, and at least one signaling gateway when connected to the PSTN. MGCP presents a call control architecture with limited intelligence at the edge and intelligence in the core controllers.
The MGCP model assumes that call agents synchronize with each other to send coherent commands and responses to the gateways under their control. The call agent uses MGCP to request event notifications, reports and configuration data from the media gateway, as well as to specify connection parameters and the activation of signals toward the PSTN telephony interface. A softswitch is used in conjunction with signaling gateways, for example for access to Signalling System No. 7 functionality; the call agent does not use MGCP to control a signaling gateway. A media gateway may be configured with a list of call agents from which it may accept control commands. In principle, event notifications may be sent to different call agents for each endpoint on the gateway, according to the instructions received from the call agents by setting the NotifiedEntity parameter. In practice, however, it is desirable that all endpoints of a gateway are controlled by the same call agent; other call agents serve as backups in case the primary call agent fails or loses contact with the media gateway. In the event of such a failure it is the backup call agent's responsibility to reconfigure the media gateway so that it reports to the backup call agent.
The gateway may be audited to determine the controlling call agent, a query that may be used to resolve any conflicts. In the case of multiple call agents, MGCP assumes that they maintain knowledge of device state among themselves; such failover features take into account both planned and unplanned outages. MGCP recognizes three essential elements of communication: the media gateway controller (the call agent), the media gateway endpoint, and the connections between these entities. A media gateway may host multiple endpoints and each endpoint should be able to engage in multiple connections. Multiple connections on an endpoint support calling features such as call waiting and three-way calling. MGCP is a text-based protocol using a command and response model. Commands and responses are encoded in messages that are structured and formatted with the whitespace characters space, horizontal tab, carriage return, linefeed, and the full stop. Messages are transmitted using the User Datagram Protocol. Media gateways use the port number 2427, while call agents use 2727 by default.
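The wire format can be illustrated with a short sketch. The following is a minimal, hypothetical example of a call agent sending a CreateConnection (CRCX) command to a gateway over UDP; the gateway name, endpoint identifier, call ID, IP addresses and SDP values are illustrative only and not taken from any real deployment.

```python
# Minimal sketch of an MGCP exchange from a call agent's perspective.
# All names, identifiers and addresses below are illustrative placeholders.
import socket

GATEWAY = ("gw.example.net", 2427)   # media gateways listen on UDP 2427 by default
CALL_AGENT_PORT = 2727               # call agents use UDP 2727 by default

# A CreateConnection (CRCX) command: verb, transaction id, endpoint, protocol version,
# then parameter lines, a blank line, and an SDP body describing the media stream.
crcx = (
    "CRCX 1204 aaln/1@gw.example.net MGCP 1.0\r\n"
    "C: A3C47F21456789F0\r\n"        # call identifier
    "M: recvonly\r\n"                # connection mode
    "\r\n"
    "v=0\r\n"
    "o=- 25678 753849 IN IP4 192.0.2.10\r\n"
    "s=-\r\n"
    "c=IN IP4 192.0.2.10\r\n"
    "t=0 0\r\n"
    "m=audio 3456 RTP/AVP 0\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", CALL_AGENT_PORT))
sock.sendto(crcx.encode("ascii"), GATEWAY)

# The gateway replies with a three-digit response code and the same transaction id,
# e.g. "200 1204 OK", optionally followed by its own SDP.
response, _ = sock.recvfrom(4096)
print(response.decode("ascii", errors="replace"))
```

The reply reuses the transaction identifier (1204 here), which is how the call agent matches responses to outstanding commands.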
A command and its response together are known as a transaction, identified by the numerical transaction identifier exchanged with each transaction. The protocol specification defines nine standard commands, distinguished by a four-letter command verb: AUEP, AUCX, CRCX, DLCX, EPCF, MDCX, NTFY, RQNT, RSIP. Responses begin with a three-digit numerical response code that identifies the outcome or result of the transaction.
Two verbs are used by a call agent to query the state of its associated connections:
AUEP: Audit Endpoint
AUCX: Audit Connection
Three verbs are used by a call agent to manage the connection to a media gateway endpoint:
CRCX: Create Connection
DLCX: Delete Connection; this command may also be issued by an endpoint to terminate a connection.
MDCX: Modify Connection; this command is used to alter the operating characteristics of a connection, e.g. speech encoders, half-duplex/full-duplex state and others.
One verb is used by a call agent to request notification of events occurring at the endpoint and to apply signals to the connected PSTN network link, or to a connected telephony endpoint, e.g
Ethernet
Ethernet is a family of computer networking technologies used in local area networks, metropolitan area networks and wide area networks. It was commercially introduced in 1980 and first standardized in 1983 as IEEE 802.3, and has since retained a good deal of backward compatibility while being refined to support higher bit rates and longer link distances. Over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET. The original 10BASE5 Ethernet uses coaxial cable as a shared medium, while the newer Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 megabits per second to the latest 400 gigabits per second. The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames; each frame contains source and destination addresses, as well as error-checking data so that damaged frames can be detected and discarded.
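As a rough illustration of that frame structure, the sketch below splits a raw Ethernet II frame into its destination and source MAC addresses, EtherType and payload, and checks the trailing frame check sequence. It assumes the captured bytes include the 4-byte FCS, which many capture tools actually strip; the sample usage is hypothetical.

```python
# Sketch: parsing an Ethernet II frame header and checking its FCS.
import struct
import zlib

def parse_ethernet_frame(frame: bytes) -> dict:
    """Split a raw Ethernet II frame (including trailing FCS) into its fields."""
    if len(frame) < 18:                        # 14-byte header + 4-byte FCS minimum
        raise ValueError("frame too short")
    dst, src = frame[0:6], frame[6:12]         # 48-bit destination and source MAC addresses
    (ethertype,) = struct.unpack("!H", frame[12:14])
    payload, fcs = frame[14:-4], frame[-4:]
    # The FCS is a CRC-32 over everything before it, transmitted least significant byte first.
    fcs_ok = zlib.crc32(frame[:-4]) == int.from_bytes(fcs, "little")
    mac = lambda b: ":".join(f"{x:02x}" for x in b)
    return {"dst": mac(dst), "src": mac(src),
            "ethertype": hex(ethertype), "payload": payload, "fcs_ok": fcs_ok}
```

In practice such frames would come from a packet capture or a raw socket; damaged frames show up as a failed FCS check and are discarded, as described above.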
As per the OSI model, Ethernet provides services up to and including the data link layer. Features such as the 48-bit MAC address and the Ethernet frame format have influenced other networking protocols, including the Wi-Fi wireless networking technology. Ethernet is used in homes and industry; the Internet Protocol is commonly carried over Ethernet, and so it is considered one of the key technologies that make up the Internet. Ethernet was developed at Xerox PARC between 1973 and 1974 by Robert Metcalfe and colleagues; it was inspired by ALOHAnet. The idea was first documented in a memo that Metcalfe wrote on May 22, 1973, in which he named it after the luminiferous aether once postulated to exist as an "omnipresent, completely-passive medium for the propagation of electromagnetic waves." In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker and Butler Lampson as inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper; that same year, Ron Crane, Bob Garner and Roy Ogus facilitated the upgrade from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was released to the market in 1980.
Metcalfe left Xerox in June 1979 to form 3Com. He convinced Digital Equipment Corporation, Intel and Xerox to work together to promote Ethernet as a standard; as part of that process Xerox agreed to relinquish their 'Ethernet' trademark. The first standard was published in September 1980 as "The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications"; this so-called DIX standard specified 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a 16-bit EtherType field. Version 2 was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983. Ethernet initially competed with Token Ring and other proprietary protocols. Ethernet was able to adapt to market realities and shift to inexpensive thin coaxial cable and, later, ubiquitous twisted pair wiring. By the end of the 1980s, Ethernet was the dominant network technology. In the process, 3Com became a major company.
3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as Multibus-based Intel and Sun Microsystems computers. This was followed by DEC's Unibus to Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and, by 1985, 3Com had sold 100,000. Parallel port based Ethernet adapters were also produced, with drivers for DOS and Windows. By the early 1990s, Ethernet became so prevalent that it was a must-have feature for modern computers, and Ethernet ports began to appear on some PCs and most workstations; this process was greatly sped up with the introduction of 10BASE-T and its small modular connector, at which point Ethernet ports appeared even on low-end motherboards. Since then, Ethernet technology has evolved to meet new bandwidth and market requirements.
In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet it is used in industrial applications and is replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year. In February 1980, the Institute of Electrical and Electronics Engineers started project 802 to standardize local area networks; the "DIX-group", with Gary Robinson, Phil Arst and Bob Printis, submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring and Token Bus were considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal. Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products.
With such business implications in mind, David Liddle an
Network Time Protocol
The Network Time Protocol (NTP) is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. In operation since before 1985, NTP is one of the oldest Internet protocols in current use. NTP was designed by David L. Mills of the University of Delaware. NTP is intended to synchronize all participating computers to within a few milliseconds of Coordinated Universal Time (UTC). It uses the intersection algorithm, a modified version of Marzullo's algorithm, to select accurate time servers and is designed to mitigate the effects of variable network latency. NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more. The protocol is usually described in terms of a client-server model, but can also be used in peer-to-peer relationships where both peers consider the other to be a potential time source.
Implementations send and receive timestamps using the User Datagram Protocol on port number 123. They can also use broadcasting or multicasting, where clients passively listen to time updates after an initial round-trip calibrating exchange. NTP supplies a warning of any impending leap second adjustment, but no information about local time zones or daylight saving time is transmitted. The current protocol is version 4 (NTPv4), a proposed standard documented in RFC 5905. It is backward compatible with version 3, specified in RFC 1305. In 1979, network time synchronization technology was used in what was the first public demonstration of Internet services running over a trans-Atlantic satellite network, at the National Computer Conference in New York. The technology was described in the 1981 Internet Engineering Note 173, and a public protocol was developed from it, documented in RFC 778. The technology was first deployed in a local area network as part of the Hello routing protocol and implemented in the Fuzzball router, an experimental operating system used in network prototyping, where it ran for many years.
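To illustrate the client side of the UDP exchange on port 123 described above, the sketch below sends a minimal NTPv3 client packet and decodes the seconds field of the server's transmit timestamp. The pool.ntp.org host name is just a commonly used public pool; any reachable NTP server would do, and the sketch assumes network access.

```python
# Minimal sketch of an NTP client query over UDP port 123.
import socket
import struct
import time

NTP_UNIX_DELTA = 2208988800            # seconds between the NTP era (1900) and the Unix epoch (1970)
request = b"\x1b" + 47 * b"\x00"       # first byte: LI=0, version=3, mode=3 (client); rest zeroed

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(2.0)
    sock.sendto(request, ("pool.ntp.org", 123))
    reply, _ = sock.recvfrom(512)

# The transmit timestamp begins at byte 40; its first 32 bits hold whole seconds since 1900.
(server_seconds,) = struct.unpack("!I", reply[40:44])
print(time.ctime(server_seconds - NTP_UNIX_DELTA))
```

A real client would record its own send and receive times as well, so that offset and delay can be computed from the full set of timestamps rather than trusting the server value directly.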
Other related network tools were available both then and now. They include the Daytime and Time protocols for recording the time of events, as well as the ICMP Timestamp messages and the IP Timestamp option. More complete synchronization systems, although lacking NTP's data analysis and clock disciplining algorithms, include the Unix daemon timed, which uses an election algorithm to appoint a time server for all the clients. In 1985, NTP version 0 was implemented in both Fuzzball and Unix, and the NTP packet header and round-trip delay and offset calculations, which have persisted into NTPv4, were documented in RFC 958. Despite the slow computers and networks available at the time, accuracy of better than 100 milliseconds was obtained on Atlantic-spanning links, with accuracy of tens of milliseconds on Ethernet networks. In 1988, a much more complete specification of the NTPv1 protocol, with associated algorithms, was published in RFC 1059. It drew on the experimental results and clock filter algorithm documented in RFC 956 and was the first version to describe the client-server and peer-to-peer modes.
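The round-trip delay and offset calculations mentioned above can be stated compactly. Given the client's transmit and receive times (t1, t4) and the server's receive and transmit times (t2, t3), a sketch of the standard calculation is:

```python
# Sketch of the round-trip delay and clock offset calculation used by NTP.
# t1: client transmit time, t2: server receive time,
# t3: server transmit time, t4: client receive time (all in seconds).
def ntp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated client clock error relative to the server
    delay = (t4 - t1) - (t3 - t2)            # round-trip network delay, server processing excluded
    return offset, delay

# Example: the client clock is roughly 0.05 s behind the server, with 0.02 s of network delay.
print(ntp_offset_and_delay(100.000, 100.060, 100.061, 100.021))
```

A positive offset means the client clock is behind the server; NTP's filtering and selection algorithms operate on many such samples rather than on a single exchange.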
In 1991, the NTPv1 architecture and algorithms were brought to the attention of a wider engineering community with the publication of an article by David L. Mills in the IEEE Transactions on Communications. In 1989, RFC 1119 was published defining NTPv2 by means of a state machine, with pseudocode to describe its operation. It introduced a management protocol and cryptographic authentication scheme which have both survived into NTPv4. The design of NTPv2 was criticized for lacking formal correctness principles by the DTSS community, whose alternative design included Marzullo's algorithm; a modified version of it was promptly added to NTP. The bulk of the algorithms from this era have largely survived into NTPv4. In 1992, RFC 1305 defined NTPv3; the RFC included an analysis of all sources of error, from the reference clock down to the final client, which enabled the calculation of a metric that helps choose the best server where several candidates appear to disagree. Broadcast mode was introduced. In subsequent years, as new features were added and algorithm improvements were made, it became apparent that a new protocol version was required.
In 2010, RFC 5905 was published containing a proposed specification for NTPv4. The protocol has moved on since then, and as of 2014 an updated RFC has yet to be published. Following the retirement of Mills from the University of Delaware, the reference implementation is maintained as an open source project led by Harlan Stenn. NTP uses a semi-layered system of time sources; each level of this hierarchy is termed a stratum and is assigned a number starting with zero for the reference clock at the top. A server synchronized to a stratum n server runs at stratum n + 1; the number represents the distance from the reference clock and is used to prevent cyclical dependencies in the hierarchy. Stratum is not always an indication of reliability. A brief description of strata 0, 1, 2 and 3 is provided below. Stratum 0: These are high-precision timekeeping devices such as atomic clocks, GPS or other radio clocks. They generate an accurate pulse-per-second signal that triggers an interrupt and timestamp on a connected computer.
Stratum 0 devices are known as reference clocks. Stratum 1: These are computers whose system time is synchronized to within a few microseconds of their attached stratum 0 devices.
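The stratum rule above can be sketched in a few lines. The candidate list is illustrative, and real implementations choose a system peer using the full selection and clustering algorithms rather than a bare minimum; this is only a simplified sketch of how the advertised stratum follows from the chosen source.

```python
# Sketch: after selecting an upstream source, a host advertises stratum = source stratum + 1.
def own_stratum(candidate_strata: list[int]) -> int:
    if not candidate_strata:
        return 16                 # stratum 16 conventionally means "unsynchronized"
    return min(candidate_strata) + 1

print(own_stratum([2, 3, 3]))     # -> 3: synchronized (at best) to a stratum 2 server
```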
Hypertext Transfer Protocol Secure
Hypertext Transfer Protocol Secure (HTTPS) is an extension of the Hypertext Transfer Protocol. It is used for secure communication over a computer network and is widely used on the Internet. In HTTPS, the communication protocol is encrypted using Transport Layer Security (TLS), or formerly its predecessor, Secure Sockets Layer (SSL); the protocol is therefore often referred to as HTTP over TLS, or HTTP over SSL. The principal motivations for HTTPS are authentication of the accessed website and protection of the privacy and integrity of the exchanged data while in transit; it protects against man-in-the-middle attacks. The bidirectional encryption of communications between a client and server protects against eavesdropping and tampering with the communication. In practice, this provides a reasonable assurance that one is communicating without interference by attackers with the website that one intended to communicate with, as opposed to an impostor. Historically, HTTPS connections were primarily used for payment transactions on the World Wide Web, e-mail and sensitive transactions in corporate information systems.
Since 2018, HTTPS has been used more by web users than the original non-secure HTTP, primarily to protect page authenticity on all types of websites. The Uniform Resource Identifier scheme HTTPS has identical usage syntax to the HTTP scheme. However, HTTPS signals the browser to use an added encryption layer of SSL/TLS to protect the traffic. SSL/TLS is especially suited for HTTP, since it can provide some protection even if only one side of the communication is authenticated; this is the case with HTTP transactions over the Internet, where typically only the server is authenticated. HTTPS creates a secure channel over an insecure network; this ensures reasonable protection from eavesdroppers and man-in-the-middle attacks, provided that adequate cipher suites are used and that the server certificate is verified and trusted. Because HTTPS piggybacks HTTP on top of TLS, the entirety of the underlying HTTP protocol can be encrypted; this includes the request URL, query parameters and cookies. However, because host addresses and port numbers are part of the underlying TCP/IP protocols, HTTPS cannot protect their disclosure.
In practice this means that even on a correctly configured web server, eavesdroppers can infer the IP address and port number of the web server that one is communicating with, as well as the amount and duration of the communication, though not the content of the communication. Web browsers know how to trust HTTPS websites based on certificate authorities that come pre-installed in their software. Certificate authorities are in this way being trusted by web browser creators to provide valid certificates. Therefore, a user should trust an HTTPS connection to a website if and only if all of the following are true: the user trusts that the browser software correctly implements HTTPS with pre-installed certificate authorities; the user trusts the certificate authority to vouch only for legitimate websites; the website provides a valid certificate, and the certificate correctly identifies the website; and the user trusts that the protocol's encryption layer (SSL/TLS) is sufficiently secure against eavesdroppers. HTTPS is especially important over insecure networks, as anyone on the same local network can packet-sniff and discover sensitive information not protected by HTTPS.
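Those trust checks are what a TLS client library performs during the handshake. The sketch below, using Python's standard ssl module, opens a connection the way an HTTPS client would, with the server certificate validated against the pre-installed certificate authorities and matched against the host name; example.com is only a placeholder host.

```python
# Sketch: opening a TLS connection the way an HTTPS client does,
# with the certificate checked against the platform's CA store.
import socket
import ssl

hostname = "example.com"                      # placeholder host
context = ssl.create_default_context()        # loads the trusted certificate authorities
# Verification is on by default: the chain must lead to a trusted CA and the
# certificate must match the host name, otherwise the handshake raises an error.
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())                  # e.g. "TLSv1.3"
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(200))                  # start of the decrypted HTTP response
```

If any of the trust conditions fail, the handshake is aborted before any HTTP data is exchanged, which is the behaviour the browser warnings described below surface to the user.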
Additionally, many free-to-use and paid WLAN networks engage in packet injection in order to serve their own ads on webpages. However, this capability can be exploited maliciously in many ways, such as by injecting malware onto webpages and stealing users' private information. HTTPS is also very important for connections over the Tor anonymity network, as malicious Tor nodes could otherwise damage or alter the contents passing through them in an insecure fashion and inject malware into the connection; this is one reason why the Electronic Frontier Foundation and the Tor Project started the development of HTTPS Everywhere, which is included in the Tor Browser Bundle. As more information is revealed about global mass surveillance and criminals stealing personal information, the use of HTTPS security on all websites is becoming increasingly important regardless of the type of Internet connection being used. While metadata about individual pages that a user visits may not be sensitive, when combined it can reveal a lot about the user and compromise the user's privacy.
Deploying HTTPS also allows the use of HTTP/2, a newer generation of HTTP designed to reduce page load times and latency. It is recommended to use HTTP Strict Transport Security (HSTS) with HTTPS to protect users from man-in-the-middle attacks such as SSL stripping. HTTPS should not be confused with the little-used Secure HTTP specified in RFC 2660. As of April 2018, 33.2% of Alexa top 1,000,000 websites use HTTPS as default, 57.1% of the Internet's 137,971 most popular websites have a secure implementation of HTTPS, and 70% of page loads use HTTPS. Most browsers display a warning if they receive an invalid certificate. Older browsers, when connecting to a site wit
Streaming media
Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb "to stream" refers to the process of obtaining media in this manner. A client end-user can use their media player to start playing digital video or digital audio content before the entire file has been transmitted. Distinguishing the delivery method from the media distributed applies specifically to telecommunications networks, as most other delivery systems are either inherently streaming or inherently non-streaming. For example, in the 1930s, elevator music was among the earliest popular music available as streaming media; the term "streaming media" can also apply to media other than video and audio, such as live closed captioning, ticker tape and real-time text, which are all considered "streaming text". Live streaming is the delivery of Internet content in real time, much as live television broadcasts content over the airwaves via a television signal. Live internet streaming requires a form of source media, an encoder to digitize the content, a media publisher, and a content delivery network to distribute and deliver the content.
Live streaming does not need to be recorded at the origination point, although it frequently is. There are challenges with streaming content on the Internet. If the user does not have enough bandwidth in their Internet connection, they may experience stops, lags or slow buffering of the content, and some users may not be able to stream certain content due to not having compatible computer or software systems. Some popular streaming services include the video-sharing website YouTube and Mixer, which live stream the playing of video games; Netflix and Amazon Video, which stream movies and TV shows; and Spotify, Apple Music and TIDAL, which stream music. In the early 1920s, George O. Squier was granted patents for a system for the transmission and distribution of signals over electrical lines, the technical basis for what later became Muzak, a technology streaming continuous music to commercial customers without the use of radio. Attempts to display media on computers date back to the earliest days of computing in the mid-20th century.
However, little progress was made for several decades, primarily due to the high cost and limited capabilities of computer hardware. From the late 1980s through the 1990s, consumer-grade personal computers became powerful enough to display various media; the primary technical issues related to streaming were having enough CPU power and bus bandwidth to support the required data rates and creating low-latency interrupt paths in the operating system to prevent buffer underrun, thereby enabling skip-free streaming of the content. However, computer networks were still limited in the mid-1990s, and audio and video media were usually delivered over non-streaming channels, such as by downloading a digital file from a remote server and saving it to a local drive on the end user's computer or storing it as a digital file and playing it back from CD-ROMs. In 1991 the first commercial Ethernet switch was introduced, which enabled more powerful computer networks, leading to the first streaming video solutions used by schools and corporations, such as expanding Bloomberg Television worldwide.
In the mid-1990s the World Wide Web was established, but streaming audio would not be practical until years later. During the late 1990s and early 2000s, users had increased access to computer networks, especially the Internet. During the early 2000s, users had access to increased network bandwidth, especially in the "last mile"; these technological improvements facilitated the streaming of audio and video content to computer users in their homes and workplaces. There was also an increasing use of standard protocols and formats, such as TCP/IP, HTTP and HTML, as the Internet became increasingly commercialized, which led to an infusion of investment into the sector. The band Severe Tire Damage was the first group to perform live on the Internet. On June 24, 1993, the band was playing a gig at Xerox PARC while elsewhere in the building, scientists were discussing new technology for broadcasting on the Internet using multicasting; as proof of PARC's technology, the band's performance was broadcast and could be seen live in Australia and elsewhere.
In a March 2017 interview, band member Russ Haines stated that the band had used "half of the total bandwidth of the internet" to stream the performance, a 152-by-76 pixel video, updated eight to twelve times per second, with audio quality that was, "at best, a bad telephone connection". Microsoft Research developed a Microsoft TV application, compiled under MS Windows Studio Suite and tested in conjunction with the Connectix QuickCam. RealNetworks was also a pioneer in the streaming media markets, when it broadcast a baseball game between the New York Yankees and the Seattle Mariners over the Internet in 1995; the first symphonic concert on the Internet took place at the Paramount Theater in Seattle, Washington on November 10, 1995. The concert was a collaboration between The Seattle Symphony and various guest musicians such as Slash, Matt Cameron and Barrett Martin; when Word Magazine launched in 1995, they featured the first-ever streaming soundtracks on the Internet. Metro
Fiber Distributed Data Interface
Fiber Distributed Data Interface (FDDI) is a standard for data transmission in a local area network. It uses optical fiber as its standard underlying physical medium, although it was later specified to use copper cable as well, in which case it may be called CDDI (Copper Distributed Data Interface), standardized as TP-PMD and also referred to as TP-DDI. FDDI provides a 100 Mbit/s optical standard for data transmission in a local area network that can extend in range up to 200 kilometers. Although the FDDI logical topology is a ring-based token network, it did not use the IEEE 802.5 Token Ring protocol as its basis. In addition to covering large geographical areas, FDDI local area networks can support thousands of users. FDDI offers both a Dual-Attached Station (DAS), counter-rotating token ring topology and a Single-Attached Station (SAS), token bus passing ring topology. FDDI, as a product of the American National Standards Institute committee X3T9.5, conforms to the Open Systems Interconnection (OSI) model of functional layering using other protocols. The standards process started in the mid-1980s.
FDDI-II, a version of FDDI described in 1989, added circuit-switched service capability to the network so that it could also handle voice and video signals. Work also started to connect FDDI networks to synchronous optical networking technology. A FDDI network contains two rings, one serving as a backup in case the primary ring fails; the primary ring offers up to 100 Mbit/s capacity. When a network has no requirement for the secondary ring to act as a backup, it can also carry data, extending capacity to 200 Mbit/s; the single ring can extend the maximum distance. FDDI had a larger maximum frame size than the standard Ethernet family, which only supports a maximum frame size of 1,500 bytes, allowing better effective data rates in some cases. Designers constructed FDDI rings in a network topology such as a "dual ring of trees". A small number of devices, typically infrastructure devices such as routers and concentrators rather than host computers, were "dual-attached" to both rings. Host computers connect as single-attached devices to the routers or concentrators; the dual ring in its most degenerate form collapses into a single device.
Typically, a computer room contained the whole dual ring, although some implementations deployed FDDI as a metropolitan area network. FDDI requires this network topology because the dual ring passes through each connected device and requires each such device to remain continuously operational; the standard allows for optical bypasses, but network engineers consider these unreliable and error-prone. Devices such as workstations and minicomputers that might not come under the control of the network managers are not suitable for connection to the dual ring. As an alternative to using a dual-attached connection, a workstation can obtain the same degree of resilience through a dual-homed connection made to two separate devices in the same FDDI ring. One of the connections becomes active while the other is automatically blocked. If the first connection fails, the backup link takes over with no perceptible delay. The FDDI data frame format is:
PA | SD | FC | DA | SA | PDU | FCS | ED/FS
where PA is the preamble, SD is the start delimiter, FC is frame control, DA is the destination address, SA is the source address, PDU is the protocol data unit, FCS is the frame check sequence, and ED/FS are the end delimiter and frame status.
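The frame layout just listed can be summarized field by field. In the sketch below the byte sizes are nominal, since the preamble and delimiters are transmitted as control symbols rather than ordinary data bytes, and the payload length varies; it is only an illustrative table of the fields named above.

```python
# Sketch: the FDDI MAC frame laid out field by field (sizes are nominal, in bytes).
FDDI_FRAME_FIELDS = [
    ("PA",    "preamble (idle symbols for synchronization)", 8),
    ("SD",    "start delimiter",                              1),
    ("FC",    "frame control",                                1),
    ("DA",    "destination address (48-bit MAC)",             6),
    ("SA",    "source address (48-bit MAC)",                  6),
    ("PDU",   "protocol data unit (variable-length payload)", None),
    ("FCS",   "frame check sequence (CRC-32 over FC..PDU)",   4),
    ("ED/FS", "end delimiter / frame status",                 2),
]

for abbrev, description, size in FDDI_FRAME_FIELDS:
    print(f"{abbrev:6} {description:48} {size if size is not None else 'variable'}")
```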
The Internet Engineering Task Force defined a standard for transmission of the Internet Protocol over FDDI. It was first proposed in June 1989 and revised in 1990. Some aspects of the protocol were compatible with the IEEE 802.2 standard for logical link control; for example, FDDI uses the same 48-bit MAC addresses, so other protocols such as the Address Resolution Protocol could be common as well. FDDI was considered an attractive campus backbone network technology in the early to mid 1990s, since existing Ethernet networks only offered 10 Mbit/s data rates and Token Ring networks only offered 4 Mbit/s or 16 Mbit/s rates; it was thus the high-speed choice of that era. By 1994, vendors included Cisco Systems, National Semiconductor, Network Peripherals, SysKonnect and 3Com. FDDI was made obsolete in local networks by Fast Ethernet, which offered the same 100 Mbit/s speed at a much lower cost, and, since 1998, by Gigabit Ethernet due to its speed, lower cost and ubiquity. FDDI standards included:
ANSI X3.139-1987, Media Access Control — ISO 9314-2
ANSI X3.148-1988, Physical Layer Protocol — ISO 9314-1
ANSI X3.166-1989, Physical Medium Dependent — ISO 9314-3
ANSI X3.184-1993, Single Mode Fiber Physical Medium Dependent — ISO 9314-4
ANSI X3.229-1994, Station Management — ISO 9314-6
This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.
Session Initiation Protocol
The Session Initiation Protocol (SIP) is a signaling protocol used for initiating, maintaining and terminating real-time sessions that include voice, video and messaging applications. SIP is used for signaling and controlling multimedia communication sessions in applications of Internet telephony for voice and video calls, in private IP telephone systems, in instant messaging over Internet Protocol networks, as well as in mobile phone calling over LTE. The protocol defines the specific format of messages exchanged and the sequence of communications for cooperation of the participants. SIP is a text-based protocol, incorporating many elements of the Hypertext Transfer Protocol and the Simple Mail Transfer Protocol. A call established with SIP may consist of multiple media streams, but no separate streams are required for applications, such as text messaging, that exchange data as payload in the SIP message. SIP works in conjunction with several other protocols that carry the session media. Most media type and parameter negotiation and media setup is performed with the Session Description Protocol (SDP), which is carried as payload in SIP messages.
SIP is designed to be independent of the underlying transport layer protocol and can be used with the User Datagram Protocol, the Transmission Control Protocol and the Stream Control Transmission Protocol. For secure transmission of SIP messages over insecure network links, the protocol may be encrypted with Transport Layer Security. For the transmission of media streams, the SDP payload carried in SIP messages typically employs the Real-time Transport Protocol or the Secure Real-time Transport Protocol. SIP was designed by Mark Handley, Henning Schulzrinne, Eve Schooler and Jonathan Rosenberg in 1996; the protocol was standardized as RFC 2543 in 1999. In November 2000, SIP was accepted as a 3GPP signaling protocol and a permanent element of the IP Multimedia Subsystem architecture for IP-based streaming multimedia services in cellular networks. In June 2002 the specification was revised in RFC 3261, and various extensions and clarifications have been published since. SIP was designed to provide a signaling and call setup protocol for IP-based communications supporting the call processing functions and features present in the public switched telephone network, with a vision of supporting new multimedia applications.
It has been extended for video conferencing, streaming media distribution, instant messaging, presence information, file transfer, Internet fax and online games. SIP is distinguished by its proponents for having roots in the Internet community rather than in the telecommunications industry. SIP has been standardized by the IETF, while other protocols, such as H.323, have traditionally been associated with the International Telecommunication Union. SIP is involved only in the signaling operations of a media communication session and is used to set up and terminate voice or video calls. SIP can be used to establish multiparty sessions, and it allows modification of existing calls; the modification can involve changing addresses or ports, inviting more participants, and adding or deleting media streams. SIP has also found applications in messaging applications, such as instant messaging, and in event subscription and notification. SIP works in conjunction with several other protocols that specify the media format and coding and that carry the media once the call is set up.
For call setup, the body of a SIP message contains a Session Description Protocol data unit, which specifies the media format and media communication protocol. Voice and video media streams are typically carried between the terminals using the Real-time Transport Protocol or the Secure Real-time Transport Protocol. Every resource of a SIP network, such as user agents, call routers and voicemail boxes, is identified by a Uniform Resource Identifier (URI). The syntax of the URI follows the general standard syntax also used in Web services and e-mail. The URI scheme used for SIP is sip, and a typical SIP URI has the form sip:username@domainname or sip:username@hostport, where domainname requires DNS SRV records to locate the servers for the SIP domain, while hostport can be an IP address or a fully qualified domain name of the host and port. If secure transmission is required, the scheme sips is used. SIP employs design elements similar to the HTTP request/response transaction model; each transaction consists of a client request that invokes a particular method or function on the server and at least one response.
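A concrete request helps show that HTTP-like structure: a start line with a method and a SIP URI, header fields, a blank line, and an optional body. The sketch below sends a hypothetical OPTIONS request over UDP to port 5060, the default port for unencrypted SIP; all addresses, tags and identifiers are made up for illustration.

```python
# Sketch: sending a SIP OPTIONS request over UDP. Every address and identifier is illustrative.
import socket

SERVER = ("sip.example.com", 5060)      # placeholder server; 5060 is the default unencrypted SIP port

request = (
    "OPTIONS sip:alice@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP 192.0.2.20:5060;branch=z9hG4bK776asdhds\r\n"
    "Max-Forwards: 70\r\n"
    "From: <sip:probe@192.0.2.20>;tag=1928301774\r\n"
    "To: <sip:alice@example.com>\r\n"
    "Call-ID: a84b4c76e66710@192.0.2.20\r\n"
    "CSeq: 1 OPTIONS\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(2.0)
    sock.sendto(request.encode("ascii"), SERVER)
    # A server answers with a status line such as "SIP/2.0 200 OK" followed by its own headers.
    reply, _ = sock.recvfrom(4096)
    print(reply.decode("ascii", errors="replace").splitlines()[0])
```

An INVITE used to set up a call would look much the same, except that it carries an SDP body describing the offered media, as noted above.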
SIP reuses most of the header fields, encoding rules and status codes of HTTP, providing a readable text-based format. SIP can be carried by several transport layer protocols including the Transmission Control Protocol, the User Datagram Protocol and the Stream Control Transmission Protocol. SIP clients typically use TCP or UDP on port numbers 5060 or 5061 for SIP traffic to servers and other endpoints. Port 5060 is used for non-encrypted signaling traffic, whereas port 5061 is used for traffic encrypted with Transport Layer Security. SIP-based telephony networks often implement call processing features of Signaling System 7 (SS7), for which special SIP protocol extensions exist, although the two protocols themselves are very different. SS7 is a centralized protocol, characterized by a complex central network architecture and dumb endpoints. SIP, in contrast, is a client-server protocol of equipotent peers. SIP features are implemented in the communicating endpoints, while the traditional SS7 architecture is in use only between switching centers; the network elements that use the Session Initiation Protocol for commun