A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over wired media such as twisted-pair or fiber-optic cables, or wireless media such as Wi-Fi. Network computer devices that originate, route and terminate the data are called network nodes. Nodes are identified by network addresses and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, as well as many others.
Computer networks differ in the transmission medium used to carry their signals, the communications protocols that organize network traffic, the network's size, the traffic control mechanisms and the organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes the following: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, and were later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. The same year, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the Aloha network, developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls, and video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files, data, and other types of information, giving authorized users the ability to access information stored on other computers on the network.
Media Gateway Control Protocol
The Media Gateway Control Protocol is a signaling and call control communications protocol used in voice over IP telecommunication systems. It implements the media gateway control protocol architecture for controlling media gateways on Internet Protocol networks connected to the public switched telephone network. The protocol is a successor to the Simple Gateway Control Protocol, developed by Bellcore and Cisco, and to the Internet Protocol Device Control. The methodology of MGCP reflects the structure of the PSTN, with the power of the network residing in a call control center softswitch, analogous to the central office in the telephone network; the endpoints are low-intelligence devices executing control commands from a call agent or media gateway controller in the softswitch and providing result indications in response. The protocol represents a decomposition of other VoIP models, such as H.323 and the Session Initiation Protocol, in which the endpoint devices of a call have higher levels of signaling intelligence.
MGCP is a text-based protocol consisting of commands and responses. It uses the Session Description Protocol for specifying and negotiating the media streams to be transmitted in a call session and the Real-time Transport Protocol for framing the media streams; the media gateway control protocol architecture and its methodologies and programming interfaces are described in RFC 2805. MGCP is a master-slave protocol in which media gateways are controlled by a call control agent or softswitch; this controller is called a call agent. Using the protocol, the call agent can control each specific port on a media gateway, which provides scalable IP telephony solutions. The distributed system is composed of at least one call agent, one or more media gateways, which perform the conversion of media signals between circuit-switched and packet-switched networks, and at least one signaling gateway when connected to the PSTN. MGCP presents a call control architecture with limited intelligence at the edge and intelligence at the core controllers.
The MGCP model assumes that call agents synchronize with each other to send coherent commands and responses to the gateways under their control. The call agent uses MGCP to request event notifications, reports and configuration data from the media gateway, as well as to specify connection parameters and the activation of signals toward the PSTN telephony interface. A softswitch is used in conjunction with signaling gateways, for example for access to Signalling System No. 7 functionality. The call agent does not use MGCP to control a signaling gateway. A media gateway may be configured with a list of call agents from which it may accept control commands. In principle, event notifications may be sent to different call agents for each endpoint on the gateway, according to the instructions received from the call agents by setting the NotifiedEntity parameter. In practice, however, it is desirable that all endpoints of a gateway are controlled by the same call agent; other call agents are available to provide redundancy in case the controlling call agent fails. In the event of such a failure it is the backup call agent's responsibility to reconfigure the media gateway so that it reports to the backup call agent.
The gateway may be audited to determine the controlling call agent, a query that may be used to resolve any conflicts. In case of multiple call agents, MGCP assumes that they maintain knowledge of device state among themselves; such failover features take into account both planned and unplanned outages. MGCP recognizes three essential elements of communication: the media gateway controller, the media gateway endpoint, and the connections between these entities. A media gateway may host multiple endpoints and each endpoint should be able to engage in multiple connections. Multiple connections on the endpoints support calling features such as call waiting and three-way calling. MGCP is a text-based protocol using a command and response model. Commands and responses are encoded in messages that are structured and formatted with the whitespace characters space, horizontal tab, carriage return, linefeed and full stop. Messages are transmitted using the User Datagram Protocol. By default, media gateways use port number 2427 and call agents use port 2727.
The message sequence of a command and its response is known as a transaction, identified by the numerical Transaction Identifier exchanged in each transaction. The protocol specification defines nine standard commands that are distinguished by a four-letter command verb: AUEP, AUCX, CRCX, DLCX, EPCF, MDCX, NTFY, RQNT, RSIP. Responses begin with a three-digit numerical response code that identifies the outcome or result of the transaction. Two verbs are used by a call agent to query the state of an endpoint and its associated connections: AUEP (Audit Endpoint) and AUCX (Audit Connection). Three verbs are used by a call agent to manage the connection to a media gateway endpoint: CRCX (Create Connection); DLCX (Delete Connection), which may also be issued by an endpoint to terminate a connection; and MDCX (Modify Connection), which is used to alter operating characteristics of the connection, e.g. speech encoders, half-duplex/full-duplex state and others. One verb is used by a call agent to request notification of events occurring at the endpoint and to apply signals to the connected PSTN network link or to a connected telephony endpoint: RQNT (Request Notification).
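To make the wire format concrete, here is a minimal sketch (not a full MGCP stack) of a call agent issuing a CreateConnection (CRCX) transaction over UDP. The gateway address, endpoint name and CallId are hypothetical examples; the message follows the command-line/parameter-line layout described above.

```python
import socket

# A call agent sends a CRCX command to a media gateway and reads the response.
# gw-1.example.net and the endpoint name aaln/1 are placeholder examples.
GATEWAY = ("gw-1.example.net", 2427)   # media gateways listen on UDP 2427 by default

# Command line: verb, transaction id, endpoint name, protocol version,
# followed by parameter lines (C: call id, L: local options, M: connection mode).
crcx = (
    "CRCX 147483 aaln/1@gw-1.example.net MGCP 1.0\r\n"
    "C: A3C47F21456789F0\r\n"   # CallId chosen by the call agent (placeholder)
    "L: p:10, a:PCMU\r\n"       # request 10 ms packetization, PCMU codec
    "M: recvonly\r\n"           # connection mode
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(crcx.encode("ascii"), GATEWAY)

# The response to this transaction begins with a three-digit response code and
# echoes transaction id 147483, e.g. "200 147483 OK", typically followed by an
# SDP description of the newly created connection.
try:
    data, _ = sock.recvfrom(4096)
    print(data.decode("ascii", errors="replace"))
except socket.timeout:
    print("no response (no gateway at the example address)")
```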
Digital subscriber line
Digital subscriber line (DSL) is a family of technologies that are used to transmit digital data over telephone lines. In telecommunications marketing, the term DSL is widely understood to mean asymmetric digital subscriber line (ADSL), the most commonly installed DSL technology for Internet access. DSL service can be delivered simultaneously with wired telephone service on the same telephone line, since DSL uses higher frequency bands for data. On the customer premises, a DSL filter on each non-DSL outlet blocks any high-frequency interference to enable simultaneous use of the voice and DSL services. The bit rate of consumer DSL services ranges from 256 kbit/s to over 100 Mbit/s in the direction to the customer, depending on DSL technology, line conditions, and service-level implementation. Bit rates of 1 Gbit/s have been reached. In ADSL, the data throughput in the upstream direction is lower, hence the designation of asymmetric service. In symmetric digital subscriber line services, the downstream and upstream data rates are equal. Researchers at Bell Labs have reached speeds of over 1 Gbit/s for symmetrical broadband access services using traditional copper telephone lines, though such speeds have not yet been deployed elsewhere.
It was long thought that it was not possible to operate a conventional phone line beyond low-speed limits. In the 1950s, however, ordinary twisted-pair telephone cable carried four-megahertz television signals between studios, suggesting that such lines would allow transmitting many megabits per second. One such circuit in the United Kingdom ran some 10 miles between the BBC studios in Newcastle-upon-Tyne and the Pontop Pike transmitting station; it was able to give the studios a low-quality cue feed but not one suitable for transmission. However, these cables had other impairments besides Gaussian noise, preventing such rates from becoming practical in the field. The 1980s saw the development of techniques for broadband communications that allowed the limit to be extended. A patent was filed in 1979 for the use of existing telephone wires for both telephones and data terminals that were connected to a remote computer via a digital data carrier system. The motivation for digital subscriber line technology was the Integrated Services Digital Network specification proposed in 1984 by the CCITT as part of Recommendation I.120, later reused as ISDN digital subscriber line.
Employees at Bellcore developed asymmetric digital subscriber line by placing wide-band digital signals at frequencies above the existing baseband analog voice signal carried on conventional twisted-pair cabling between telephone exchanges and customers. A patent was filed in 1988. Joseph W. Lechleider's contribution to DSL was his insight that an asymmetric arrangement offered more than double the bandwidth capacity of symmetric DSL; this allowed Internet service providers to offer efficient service to consumers, who benefited from the ability to download large amounts of data but rarely needed to upload comparable amounts. ADSL supports two modes of transport: fast channel and interleaved channel. Fast channel is preferred for streaming multimedia, where an occasional dropped bit is acceptable, but lags are less so. Interleaved channel works better for file transfers, where the delivered data must be error-free but latency incurred by the retransmission of error-containing packets is acceptable. Consumer-oriented ADSL was designed to operate on existing lines conditioned for Basic Rate Interface ISDN services, which is itself a digital circuit-switched service, though most incumbent local exchange carriers provision rate-adaptive digital subscriber line to work on any available copper pair facility, whether conditioned for BRI or not.
Engineers developed high-speed DSL facilities such as high-bit-rate digital subscriber line (HDSL) and symmetric digital subscriber line (SDSL) to provision traditional Digital Signal 1 services over standard copper pair facilities. Older ADSL standards delivered 8 Mbit/s to the customer over about 2 km of unshielded twisted-pair copper wire; newer variants improved these rates. Distances greater than 2 km reduce the bandwidth usable on the wires, thus reducing the data rate, but ADSL loop extenders increase these distances by repeating the signal, allowing the LEC to deliver DSL speeds at any distance. Until the late 1990s, the cost of digital signal processors for DSL was prohibitive. All types of DSL employ complex digital signal processing algorithms to overcome the inherent limitations of the existing twisted-pair wires. Due to the advancements of very-large-scale integration technology, the cost of the equipment associated with a DSL deployment lowered significantly; the two main pieces of equipment are a digital subscriber line access multiplexer (DSLAM) at one end and a DSL modem at the other end.
A DSL connection can be deployed over existing cable. Such a deployment, including equipment, is much cheaper than installing a new, high-bandwidth fiber-optic cable over the same route and distance; this is true for both ADSL and SDSL variations. The commercial success of DSL and similar technologies reflects the advances made in electronics over the decades that have increased performance and reduced costs, while digging trenches in the ground for new cables remains expensive. In the case of ADSL, competition in Internet access caused subscription fees to drop over the years, thus making ADSL more economical than dial-up access. Telephone companies were pressured into moving to ADSL largely due to competition from cable Internet providers.
Hypertext Transfer Protocol
The Hypertext Transfer Protocol is an application protocol for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can access, for example by a mouse click or by tapping the screen in a web browser. HTTP was developed to facilitate the World Wide Web. Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Development of HTTP standards was coordinated by the Internet Engineering Task Force and the World Wide Web Consortium, culminating in the publication of a series of Requests for Comments. The first definition of HTTP/1.1, the version of HTTP in common use, occurred in RFC 2068 in 1997, although this was made obsolete by RFC 2616 in 1999 and again by the RFC 7230 family of RFCs in 2014. Its successor HTTP/2 was standardized in 2015 and is now supported by major web servers and browsers over Transport Layer Security using the Application-Layer Protocol Negotiation extension, where TLS 1.2 or newer is required.
HTTP functions as a request–response protocol in the client–server computing model. A web browser, for example, may be the client and an application running on a computer hosting a website may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content, or performs other functions on behalf of the client, returns a response message to the client; the response contains completion status information about the request and may contain requested content in its message body. A web browser is an example of a user agent. Other types of user agent include the indexing software used by search providers, voice browsers, mobile apps, and other software that accesses, consumes, or displays web content. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites benefit from web cache servers that deliver content on behalf of upstream servers to improve response time.
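As an illustration of this request-response exchange, the following sketch sends a single hand-written HTTP/1.1 request over a TCP socket and prints the status line and headers of the response; example.com stands in for any web server.

```python
import socket

HOST = "example.com"

# An HTTP request message: a request line, header lines, and a blank line.
request = (
    "GET / HTTP/1.1\r\n"        # request line: method, target, version
    f"Host: {HOST}\r\n"         # Host header is mandatory in HTTP/1.1
    "Connection: close\r\n"     # ask the server to close after responding
    "\r\n"                      # blank line ends the header section
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# The response starts with a status line (e.g. "HTTP/1.1 200 OK"), then
# headers, a blank line, and the message body with the requested content.
head, _, body = response.partition(b"\r\n\r\n")
print(head.decode("ascii", errors="replace"))
```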
Web browsers cache accessed web resources and reuse them, when possible, to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers. HTTP is an application layer protocol designed within the framework of the Internet protocol suite; its definition presumes an underlying and reliable transport layer protocol, and the Transmission Control Protocol is commonly used. However, HTTP can be adapted to use unreliable protocols such as the User Datagram Protocol, for example in HTTPU and the Simple Service Discovery Protocol. HTTP resources are identified and located on the network by Uniform Resource Locators, using the Uniform Resource Identifier schemes http and https. URIs and hyperlinks in HTML documents form interlinked hypertext documents. HTTP/1.1 is a revision of the original HTTP. In HTTP/1.0 a separate connection to the same server is made for every resource request. HTTP/1.1 can reuse a connection multiple times to download images, stylesheets and other resources after the page has been delivered.
HTTP/1.1 communications therefore experience less latency, as the establishment of TCP connections presents considerable overhead. The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a text-based web browser. Berners-Lee first proposed the "WorldWideWeb" project in 1989, now known as the World Wide Web. The first version of the protocol had only one method, namely GET, which would request a page from a server. The response from the server was always an HTML page; the first documented version of HTTP was HTTP V0.9. Dave Raggett led the HTTP Working Group in 1995 and wanted to expand the protocol with extended operations, extended negotiation, richer meta-information, tied with a security protocol which became more efficient by adding additional methods and header fields.
RFC 1945 officially introduced and recognized HTTP V1.0 in 1996. The HTTP WG planned to publish new standards in December 1995, and the support for pre-standard HTTP/1.1 based on the developing RFC 2068 was adopted by the major browser developers in early 1996. By March that year, pre-standard HTTP/1.1 was supported in Arena, Netscape 2.0, Netscape Navigator Gold 2.01, Mosaic 2.7, Lynx 2.5, and in Internet Explorer 2.0. End-user adoption of the new browsers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet were HTTP/1.1 compliant. That same web hosting company reported that by June 1996, 65% of all browsers accessing their servers were HTTP/1.1 compliant. The HTTP/1.1 standard as defined in RFC 2068 was released in January 1997. Improvements and updates to the HTTP/1.1 standard were released under RFC 2616 in June 1999. In 2007, the HTTPbis Working Group was formed, in part, to revise and clarify the HTTP/1.1 specification. In June 2014, the WG released an updated six-part specification obsoleting RFC 2616: RFC 7230, HTTP/1.1: Message Syntax and Routing; RFC 7231, HTTP/1.1: Semantics and Content; RFC 7232, HTTP/1.1: Conditional Requests; RFC 7233, HTTP/1.1: Range Requests; RFC 7234, HTTP/1.1: Caching; and RFC 7235, HTTP/1.1: Authentication.
Network Time Protocol
The Network Time Protocol is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. In operation since before 1985, NTP is one of the oldest Internet protocols in current use. NTP was designed by David L. Mills of the University of Delaware. NTP is intended to synchronize all participating computers to within a few milliseconds of Coordinated Universal Time; it uses the intersection algorithm, a modified version of Marzullo's algorithm, to select accurate time servers and is designed to mitigate the effects of variable network latency. NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more. The protocol is usually described in terms of a client-server model, but it can also be used in peer-to-peer relationships where both peers consider the other to be a potential time source.
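As a rough illustration of the interval-intersection idea behind the algorithm mentioned above, the following sketch finds the offset range shared by the largest number of servers. It is a simplified Marzullo-style sweep, not the refined intersection algorithm NTP actually ships.

```python
# Each server's estimate is an interval [offset - uncertainty, offset + uncertainty];
# we look for the range contained in the largest number of intervals, which lets
# a minority of wrong clocks ("falsetickers") be outvoted.
def intersect(intervals):
    """intervals: list of (low, high) clock-offset bounds, one per server."""
    edges = []
    for low, high in intervals:
        edges.append((low, -1))   # -1 marks the start of an interval
        edges.append((high, +1))  # +1 marks the end of an interval
    edges.sort()                  # starts sort before ends at equal offsets
    best = count = 0
    best_range = None
    for i, (offset, kind) in enumerate(edges):
        count -= kind             # entering an interval increments the count
        if count > best:
            best = count
            best_range = (offset, edges[i + 1][0])  # the next edge closes it
    return best, best_range

# Three servers agree near +10 ms; one outlier claims about -0.5 s.
print(intersect([(0.005, 0.015), (0.008, 0.020), (0.002, 0.012), (-0.6, -0.4)]))
# -> (3, (0.008, 0.012)): three intervals overlap between +8 ms and +12 ms
```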
Implementations send and receive timestamps using the User Datagram Protocol on port number 123. They can use broadcasting or multicasting, where clients passively listen to time updates after an initial round-trip calibrating exchange. NTP supplies a warning of any impending leap second adjustment, but no information about local time zones or daylight saving time is transmitted; the current protocol is version 4, a proposed standard as documented in RFC 5905. It is backward compatible with version 3, specified in RFC 1305. In 1979, network time synchronization technology was used in what was the first public demonstration of Internet services running over a trans-Atlantic satellite network, at the National Computer Conference in New York; the technology was described in the 1981 Internet Engineering Note 173 and a public protocol was developed from it, documented in RFC 778. The technology was first deployed in a local area network as part of the Hello routing protocol and implemented in the Fuzzball router, an experimental operating system used in network prototyping, where it ran for many years.
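The following sketch shows what such an exchange might look like: an SNTP-style client request on UDP port 123, with the clock offset and round-trip delay computed from the four timestamps in the manner documented in RFC 958 and its successors. The pool.ntp.org server pool is just an illustrative choice.

```python
import socket, struct, time

NTP_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)

def ntp_query(server="pool.ntp.org"):
    packet = bytearray(48)        # minimal 48-byte NTP packet
    packet[0] = 0x23              # LI = 0, version = 4, mode = 3 (client)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)

    t0 = time.time()              # client transmit time
    sock.sendto(packet, (server, 123))
    data, _ = sock.recvfrom(512)
    t3 = time.time()              # client receive time

    # Receive (bytes 32-39) and transmit (bytes 40-47) timestamps:
    # 32-bit seconds since 1900 plus a 32-bit binary fraction.
    def ts(offset):
        secs, frac = struct.unpack("!II", data[offset:offset + 8])
        return secs - NTP_EPOCH_DELTA + frac / 2**32
    t1, t2 = ts(32), ts(40)       # server receive and transmit times

    offset = ((t1 - t0) + (t2 - t3)) / 2   # estimated clock offset
    delay = (t3 - t0) - (t2 - t1)          # round-trip network delay
    return offset, delay

print("offset %+.6f s, delay %.6f s" % ntp_query())
```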
Other related network tools were available both then and now. They include the Daytime and Time protocols for recording the time of events, as well as the ICMP Timestamp and IP Timestamp option. More complete synchronization systems, although lacking NTP's data analysis and clock disciplining algorithms, include the Unix daemon timed, which uses an election algorithm to appoint a server for all the clients. In 1985, NTP version 0 was implemented in both Fuzzball and Unix, and the NTP packet header and round-trip delay and offset calculations, which have persisted into NTPv4, were documented in RFC 958. Despite the slow computers and networks available at the time, accuracy of better than 100 milliseconds was obtained on Atlantic-spanning links, with accuracy of tens of milliseconds on Ethernet networks. In 1988, a much more complete specification of the NTPv1 protocol, with associated algorithms, was published in RFC 1059; it drew on the experimental results and clock filter algorithm documented in RFC 956 and was the first version to describe the client-server and peer-to-peer modes.
In 1991, the NTPv1 architecture and algorithms were brought to the attention of a wider engineering community with the publication of an article by David L. Mills in the IEEE Transactions on Communications. In 1989, RFC 1119 was published defining NTPv2 by means of a state machine, with pseudocode to describe its operation; it introduced a management protocol and cryptographic authentication scheme which have both survived into NTPv4. The design of NTPv2 was criticized for lacking formal correctness principles by the DTSS community; their alternative design included Marzullo's algorithm, a modified version of which was promptly added to NTP. The bulk of the algorithms from this era have largely survived into NTPv4. In 1992, RFC 1305 defined NTPv3; the RFC included an analysis of all sources of error, from the reference clock down to the final client, which enabled the calculation of a metric that helps choose the best server where several candidates appear to disagree. Broadcast mode was introduced. In subsequent years, as new features were added and algorithm improvements were made, it became apparent that a new protocol version was required.
In 2010, RFC 5905 was published containing a proposed specification for NTPv4. The protocol has made significant progress since then, and as of 2014 an updated RFC had yet to be published. Following the retirement of Mills from the University of Delaware, the reference implementation is maintained as an open source project led by Harlan Stenn. NTP uses a semi-layered system of time sources. Each level of this hierarchy is termed a stratum and is assigned a number starting with zero for the reference clock at the top. A server synchronized to a stratum n server runs at stratum n + 1; the number represents the distance from the reference clock and is used to prevent cyclical dependencies in the hierarchy. Stratum is not always an indication of reliability. A brief description of strata 0, 1, 2 and 3 is provided below. Stratum 0 These are high-precision timekeeping devices such as atomic clocks, GPS or other radio clocks; they generate a very accurate pulse-per-second signal that triggers an interrupt and timestamp on a connected computer.
Stratum 0 devices are known as reference clocks. Stratum 1 These are computers whose system time is synchronized to within a few microseconds of their attached stratum 0 devices.
Simple Network Management Protocol
Simple Network Management Protocol is an Internet Standard protocol for collecting and organizing information about managed devices on IP networks and for modifying that information to change device behavior. Devices that typically support SNMP include cable modems, routers, switches, servers, workstations, printers and more. SNMP is widely used in network management for network monitoring. SNMP exposes management data in the form of variables on the managed systems organized in a management information base, which describe the system status and configuration; these variables can then be remotely queried by managing applications. Three significant versions of SNMP have been deployed. SNMPv1 is the original version of the protocol. More recent versions, SNMPv2c and SNMPv3, feature improvements in performance and security. SNMP is a component of the Internet Protocol Suite as defined by the Internet Engineering Task Force; it consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects.
In typical uses of SNMP, one or more administrative computers called managers have the task of monitoring or managing a group of hosts or devices on a computer network. Each managed system executes a software component called an agent which reports information via SNMP to the manager. An SNMP-managed network consists of three key components: managed devices; an agent, software which runs on the managed devices; and a network management station (NMS), software which runs on the manager. A managed device is a network node that implements an SNMP interface that allows unidirectional or bidirectional access to node-specific information. Managed devices exchange node-specific information with the NMSs. Sometimes called network elements, the managed devices can be any type of device, including, but not limited to, access servers, cable modems, hubs, IP telephones, IP video cameras, computer hosts and printers. An agent is a network-management software module. An agent has local knowledge of management information and translates that information to or from an SNMP-specific form.
A network management station executes applications that control managed devices. NMSs provide the bulk of the processing and memory resources required for network management. One or more NMSs may exist on any managed network. SNMP agents expose management data on the managed systems as variables; the protocol also permits active management tasks, such as configuration changes, through remote modification of these variables. The variables accessible via SNMP are organized in hierarchies. SNMP itself does not define which variables a managed system should offer; rather, SNMP uses an extensible design. These hierarchies are described as a management information base. MIBs describe the structure of the management data of a device subsystem; they use a hierarchical namespace containing object identifiers (OIDs), and each OID identifies a variable that can be read or set via SNMP. MIBs use the notation defined by Structure of Management Information Version 2.0, a subset of ASN.1. SNMP operates in the application layer of the Internet protocol suite. All SNMP messages are transported via the User Datagram Protocol.
The SNMP agent receives requests on UDP port 161. The manager may send requests from any available source port to port 161 in the agent; the agent response is sent back to the source port on the manager. The manager receives notifications on port 162; the agent may generate notifications from any available port. When used with Transport Layer Security or Datagram Transport Layer Security, requests are received on port 10161 and notifications are sent to port 10162. SNMPv1 specifies five core protocol data units. Two other PDUs, GetBulkRequest and InformRequest, were added in SNMPv2, and the Report PDU was added in SNMPv3. All SNMP PDUs are constructed from an IP header, a UDP header, the SNMP version, the community string, and the PDU itself, whose fields are the PDU type, request-id, error-status, error-index and variable bindings. The seven SNMP PDU types, as identified by the PDU-type field, are as follows: GetRequest A manager-to-agent request to retrieve the value of a variable or list of variables. Desired variables are specified in variable bindings. Retrieval of the specified variable values is to be done as an atomic operation by the agent. A Response with current values is returned.
SetRequest A manager-to-agent request to change the value of a variable or list of variables. Variable bindings are specified in the body of the request. Changes to all specified variables are to be made as an atomic operation by the agent. A Response with the new values for the variables is returned. GetNextRequest A manager-to-agent request to discover available variables and their values. Returns a Response with the variable binding for the lexicographically next variable in the MIB. The entire MIB of an agent can be walked by iterative application of GetNextRequest starting at OID 0. Rows of a table can be read by specifying column OIDs in the variable bindings of the request. GetBulkRequest A manager-to-agent request for multiple iterations of GetNextRequest, an optimized version of GetNextRequest. Returns a Response with multiple variable bindings walked from the variable binding or bindings in the request. PDU-specific non-repeaters and max-repetitions fields are used to control response behavior.
GetBulkRequest was introduced in SNMPv2. Response Returns variable bindings and acknowledgement from agent to manager for GetRequest, SetRequest, GetNextRequest, GetBulkRequest and InformRequest. Error reporting is provided by error-status and error-index fields. Although it was used as a response to both gets and sets, this PDU was called GetResponse in SNMPv1.
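To make the PDU layout above concrete, here is a from-scratch sketch that BER-encodes a single SNMPv1 GetRequest for sysDescr.0 and sends it to an agent. The agent address is a placeholder, and the one-byte length encoding assumes the message stays short.

```python
import socket

# Each field is a BER type-length-value triple; lengths here are assumed
# to fit in one byte. OID 1.3.6.1.2.1.1.1.0 (sysDescr.0) and the "public"
# community string are the classic read-only defaults.
def tlv(tag, payload):
    return bytes([tag, len(payload)]) + payload

oid = tlv(0x06, bytes([43, 6, 1, 2, 1, 1, 1, 0]))  # 1.3 packs to 1*40+3 = 43
varbind = tlv(0x30, oid + tlv(0x05, b""))          # name plus NULL value
pdu = tlv(0xA0,                                    # 0xA0 = GetRequest PDU type
          tlv(0x02, b"\x01") +                     # request-id = 1
          tlv(0x02, b"\x00") +                     # error-status = 0
          tlv(0x02, b"\x00") +                     # error-index = 0
          tlv(0x30, varbind))                      # variable bindings list
message = tlv(0x30,
              tlv(0x02, b"\x00") +                 # version = 0 (SNMPv1)
              tlv(0x04, b"public") +               # community string
              pdu)

# Agents listen on UDP port 161; the Response PDU (type 0xA2) echoes the
# request-id and carries the variable bindings with their current values.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(message, ("192.0.2.1", 161))           # placeholder agent address
try:
    print(sock.recvfrom(4096)[0].hex())
except socket.timeout:
    print("no response (no agent at the placeholder address)")
```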
Secure Shell
Secure Shell is a cryptographic network protocol for operating network services securely over an unsecured network. Typical applications include remote command-line login and remote command execution, but any network service can be secured with SSH. SSH provides a secure channel over an unsecured network in a client–server architecture, connecting an SSH client application with an SSH server. The protocol specification distinguishes between two major versions, referred to as SSH-1 and SSH-2. The standard TCP port for SSH is 22. SSH is generally used to access Unix-like operating systems, but it can also be used on Microsoft Windows. Windows 10 uses OpenSSH as its default SSH client. SSH was designed as a replacement for Telnet and for unsecured remote shell protocols such as the Berkeley rlogin, rsh and rexec protocols; those protocols send information, notably passwords, in plaintext, rendering them susceptible to interception and disclosure using packet analysis. The encryption used by SSH is intended to provide confidentiality and integrity of data over an unsecured network, such as the Internet, although files leaked by Edward Snowden indicate that the National Security Agency can sometimes decrypt SSH, allowing them to read the contents of SSH sessions.
SSH uses public-key cryptography to authenticate the remote computer and allow it to authenticate the user, if necessary. There are several ways to use SSH. One is to use automatically generated public-private key pairs to simply encrypt a network connection, and then use password authentication to log on. Another is to use a manually generated public-private key pair to perform the authentication, allowing users or programs to log in without having to specify a password. In this scenario, anyone can produce a matching pair of different keys. The public key is placed on all computers that must allow access to the owner of the matching private key. While authentication is based on the private key, the key itself is never transferred through the network during authentication. SSH only verifies whether the person offering the public key also owns the matching private key. In all versions of SSH it is important to verify unknown public keys, i.e. associate the public keys with identities, before accepting them as valid. Accepting an attacker's public key without validation will authorize an unauthorized attacker as a valid user. On Unix-like systems, the list of authorized public keys is stored in the home directory of the user that is allowed to log in remotely, in the file ~/.ssh/authorized_keys.
This file is respected by SSH only if it is not writable by anything apart from the owner and root. When the public key is present on the remote end and the matching private key is present on the local end, typing in the password is no longer required. However, for additional security the private key itself can be locked with a passphrase. The private key can also be looked for in standard places, and its full path can be specified as a command line setting. The ssh-keygen utility produces the public and private keys, always in pairs. SSH also supports password-based authentication that is encrypted by automatically generated keys. In this case, the attacker could imitate the legitimate server side, ask for the password, and obtain it. However, this is possible only if the two sides have never authenticated before, as SSH remembers the key that the server side previously used; the SSH client raises a warning before accepting the key of a new, previously unknown server. Password authentication can be disabled. SSH is typically used to log into a remote machine and execute commands, but it also supports tunneling, forwarding TCP ports and X11 connections.
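As a sketch of the key-based authentication described above, the following uses the third-party paramiko library (an assumption; any SSH client library would do, and it would need to be installed separately) to connect with a private key and run a command. The host, user and key path are placeholders.

```python
import paramiko

client = paramiko.SSHClient()
# Trust only hosts already present in the local known-hosts file, mirroring
# the advice to verify unknown public keys before accepting them as valid.
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.RejectPolicy())

# The private key never leaves the client: the library uses it to sign the
# authentication exchange, and the server checks the signature against the
# public key listed in ~/.ssh/authorized_keys.
client.connect(
    "server.example.net",                        # placeholder host
    username="alice",                            # placeholder user
    key_filename="/home/alice/.ssh/id_ed25519",  # placeholder private key path
)

stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()
```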
SSH uses the client-server model. The standard TCP port 22 has been assigned for contacting SSH servers. An SSH client program is typically used for establishing connections to an SSH daemon accepting remote connections. Both are commonly present on most modern operating systems, including macOS, most distributions of Linux, OpenBSD, FreeBSD, NetBSD, Solaris and OpenVMS. Notably, versions of Windows prior to Windows 10 version 1709 do not include SSH by default. Proprietary, freeware and open source versions of various levels of complexity and completeness exist. File managers for UNIX-like systems can use the FISH protocol to provide a split-pane GUI with drag-and-drop; the open source Windows program WinSCP provides similar file management capability using PuTTY as a back-end. Both WinSCP and PuTTY are available packaged to run directly off a USB drive, without requiring installation on the client machine. Setting up an SSH server in Windows typically involves enabling a feature in the Settings app. In Windows 10 version 1709, an official Win32 port of OpenSSH is available.
SSH is important in cloud computing to solve connectivity problems, avoiding the security issues of exposing a cloud-based virtual machine directly on the Internet. An SSH tunnel can provide a secure path over the Internet, through a firewall, to a virtual machine. In 1995, Tatu Ylönen, a researcher at Helsinki University of Technology, designed the first version of the protocol (now called SSH-1), prompted by a password-sniffing attack on his university network. The goal of SSH was to replace the earlier rlogin, TELNET, FTP and rsh protocols, which did not provide strong authentication nor guarantee confidentiality. Ylönen released his implementation as freeware in July 1995, and it quickly gained in popularity.