Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network. Typical applications include remote command-line login and remote command execution, but any network service can be secured with SSH. SSH provides a secure channel over an unsecured network in a client–server architecture, connecting an SSH client application with an SSH server. The protocol specification distinguishes between two major versions, referred to as SSH-1 and SSH-2. The standard TCP port for SSH is 22. SSH is generally used to access Unix-like operating systems, but it can also be used on Microsoft Windows; Windows 10 uses OpenSSH as its default SSH client. SSH was designed as a replacement for Telnet and for unsecured remote shell protocols such as the Berkeley rlogin and rexec protocols. Those protocols send information, notably passwords, in plaintext, rendering them susceptible to interception and disclosure using packet analysis. The encryption used by SSH is intended to provide confidentiality and integrity of data over an unsecured network, such as the Internet, although files leaked by Edward Snowden indicate that the National Security Agency can sometimes decrypt SSH, allowing it to read the contents of SSH sessions.
SSH uses public-key cryptography to authenticate the remote computer and allow it to authenticate the user, if necessary. There are several ways to use SSH. One is to use automatically generated public-private key pairs to simply encrypt a network connection and then use password authentication to log on. Another is to use a manually generated public-private key pair to perform the authentication, allowing users or programs to log in without having to specify a password. In this scenario, anyone can produce a matching pair of different keys. The public key is placed on all computers that must allow access to the owner of the matching private key. While authentication is based on the private key, the key itself is never transferred through the network during authentication; SSH only verifies that the party offering the public key also owns the matching private key. In all versions of SSH it is important to verify unknown public keys, i.e. associate the public keys with identities, before accepting them as valid. Accepting an attacker's public key without validation will authorize the attacker as a valid user. On Unix-like systems, the list of authorized public keys is stored in the home directory of the user that is allowed to log in remotely, in the file ~/.ssh/authorized_keys.
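The authorized_keys mechanism described above can be illustrated with a small parser. This is a minimal sketch: the one-entry-per-line "[options] key-type base64-key [comment]" layout is the real OpenSSH format, but the sample keys below are made-up placeholders, not working keys, and the parser handles only a few common key types.

```python
# Sketch: parsing an OpenSSH-style authorized_keys file.
KEY_TYPES = {"ssh-rsa", "ssh-ed25519", "ecdsa-sha2-nistp256"}

def parse_authorized_keys(text):
    """Return a list of (key_type, key_blob, comment) tuples."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and comments
            continue
        fields = line.split()
        # An entry may begin with an options field; find the key-type token.
        for i, tok in enumerate(fields):
            if tok in KEY_TYPES:
                blob = fields[i + 1]
                comment = " ".join(fields[i + 2:])
                entries.append((tok, blob, comment))
                break
    return entries

# Hypothetical file contents; the key blobs are placeholders.
sample = """\
# keys allowed to log in as this user
ssh-ed25519 AAAAC3NzaExampleKeyBlob alice@laptop
no-pty,command="/usr/bin/backup" ssh-rsa AAAAB3NzaExampleBlob backup@host
"""
keys = parse_authorized_keys(sample)
```

A real sshd additionally checks file ownership and permissions before trusting the file, as noted below.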
This file is respected by SSH only if it is not writable by anything apart from the owner and root. When the public key is present on the remote end and the matching private key is present on the local end, typing in the password is no longer required. However, for additional security the private key itself can be locked with a passphrase. The private key can be looked for in standard places, or its full path can be specified as a command-line setting. The ssh-keygen utility produces the public and private keys, always in pairs. SSH also supports password-based authentication, encrypted by automatically generated keys. In this case, the attacker could imitate the legitimate server side, ask for the password, and obtain it. However, this is possible only if the two sides have never authenticated before, as SSH remembers the key that the server side previously used; the SSH client raises a warning before accepting the key of a new, unknown server. Password authentication can be disabled. SSH is typically used to log into a remote machine and execute commands, but it also supports tunneling and forwarding of TCP ports and X11 connections.
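The "remember the server's key, warn on a new one" behaviour above is a trust-on-first-use scheme, sketched below with the standard library. The "host key" here is an arbitrary byte string standing in for a real public key, and the hostnames are made up; the SHA-256/base64 fingerprint format matches what OpenSSH prints as "SHA256:...".

```python
import base64
import hashlib

def fingerprint(key_bytes):
    """Compute an OpenSSH-style SHA256 fingerprint of a key blob."""
    digest = hashlib.sha256(key_bytes).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

known_hosts = {}  # hostname -> fingerprint, like ~/.ssh/known_hosts

def check_host_key(host, key_bytes):
    """Return 'new', 'ok', or 'MISMATCH' for a host's offered key."""
    fp = fingerprint(key_bytes)
    if host not in known_hosts:
        known_hosts[host] = fp      # first contact: remember the key
        return "new"                # a real client warns and asks the user
    return "ok" if known_hosts[host] == fp else "MISMATCH"

first = check_host_key("bastion.example.com", b"key-A")    # unknown host
second = check_host_key("bastion.example.com", b"key-A")   # same key again
changed = check_host_key("bastion.example.com", b"key-B")  # key has changed
```

A real client treats the MISMATCH case as a possible man-in-the-middle attack and refuses to connect by default.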
SSH uses the client-server model. The standard TCP port 22 has been assigned for contacting SSH servers. An SSH client program is typically used for establishing connections to an SSH daemon accepting remote connections. Both are commonly present on most modern operating systems, including macOS, most distributions of Linux, OpenBSD, FreeBSD, NetBSD, Solaris and OpenVMS. Notably, versions of Windows prior to Windows 10 version 1709 do not include SSH by default; in version 1709, an official Win32 port of OpenSSH became available, and setting up an SSH server in Windows involves enabling a feature in the Settings app. Proprietary and open-source versions of various levels of complexity and completeness exist. File managers for UNIX-like systems can use the FISH protocol to provide a split-pane GUI with drag-and-drop; the open-source Windows program WinSCP provides similar file management capability using PuTTY as a back-end. Both WinSCP and PuTTY are available packaged to run directly off a USB drive, without requiring installation on the client machine.
SSH is important in cloud computing to solve connectivity problems while avoiding the security issues of exposing a cloud-based virtual machine directly on the Internet; an SSH tunnel can provide a secure path over the Internet, through a firewall, to a virtual machine. In 1995, Tatu Ylönen, a researcher at Helsinki University of Technology, designed the first version of the protocol, prompted by a password-sniffing attack on his university network. The goal of SSH was to replace the earlier rlogin, TELNET, FTP and rsh protocols, which did not provide strong authentication or guarantee confidentiality. Ylönen released his implementation as freeware in July 1995, and the tool quickly gained in popularity.
Ethernet is a family of computer networking technologies commonly used in local area networks, metropolitan area networks and wide area networks. It was commercially introduced in 1980, first standardized in 1983 as IEEE 802.3, and has since retained a good deal of backward compatibility while being refined to support higher bit rates and longer link distances. Over time, Ethernet has largely replaced competing wired LAN technologies such as Token Ring, FDDI and ARCNET. The original 10BASE5 Ethernet uses coaxial cable as a shared medium, while the newer Ethernet variants use twisted pair and fiber optic links in conjunction with switches. Over the course of its history, Ethernet data transfer rates have been increased from the original 2.94 megabits per second to the latest 400 gigabits per second. The Ethernet standards comprise several wiring and signaling variants of the OSI physical layer in use with Ethernet. Systems communicating over Ethernet divide a stream of data into shorter pieces called frames; each frame contains source and destination addresses, as well as error-checking data so that damaged frames can be detected and discarded.
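The framing described above can be sketched in a few lines. This is an illustrative model, not a faithful implementation: it builds an Ethernet II frame (6-byte destination and source MAC addresses, 2-byte EtherType, payload, 4-byte frame check sequence) and detects corruption. Ethernet's FCS uses the same CRC-32 polynomial as Python's zlib.crc32; the addresses and payload are made up, and real hardware also handles the preamble, padding and bit ordering.

```python
import struct
import zlib

def build_frame(dst_mac, src_mac, ethertype, payload):
    """Assemble a simplified Ethernet II frame with a CRC-32 FCS."""
    header = dst_mac + src_mac + struct.pack("!H", ethertype)
    fcs = struct.pack("<I", zlib.crc32(header + payload))
    return header + payload + fcs

def check_frame(frame):
    """Return (dst, src, ethertype, payload), or None if the FCS is bad."""
    body, fcs = frame[:-4], frame[-4:]
    if struct.pack("<I", zlib.crc32(body)) != fcs:
        return None                     # damaged frame: detect and discard
    dst, src = body[:6], body[6:12]
    (ethertype,) = struct.unpack("!H", body[12:14])
    return dst, src, ethertype, body[14:]

# Broadcast destination, a made-up locally administered source, IPv4 EtherType.
frame = build_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", 0x0800, b"hello")
ok = check_frame(frame)
damaged = check_frame(frame[:-1] + bytes([frame[-1] ^ 0xFF]))  # flip one bit
```

The damaged frame fails the FCS check and would simply be dropped by a receiver, which is exactly the "detected and discarded" behaviour the text describes.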
As per the OSI model, Ethernet provides services up to and including the data link layer. Features such as the 48-bit MAC address and Ethernet frame format have influenced other networking protocols, including the Wi-Fi wireless networking technology. Ethernet is widely used in homes and industry; the Internet Protocol is commonly carried over Ethernet, and so it is considered one of the key technologies that make up the Internet. Ethernet was developed at Xerox PARC between 1973 and 1974, inspired by ALOHAnet. The idea was first documented in a memo that Robert Metcalfe wrote on May 22, 1973, where he named it after the luminiferous aether once postulated to exist as an "omnipresent, completely-passive medium for the propagation of electromagnetic waves." In 1975, Xerox filed a patent application listing Metcalfe, David Boggs, Chuck Thacker and Butler Lampson as inventors. In 1976, after the system was deployed at PARC, Metcalfe and Boggs published a seminal paper. That same year, Ron Crane, Bob Garner and Roy Ogus facilitated the upgrade from the original 2.94 Mbit/s protocol to the 10 Mbit/s protocol, which was released to the market in 1980.
Metcalfe left Xerox in June 1979 to form 3Com. He convinced Digital Equipment Corporation, Intel and Xerox to work together to promote Ethernet as a standard; as part of that process Xerox agreed to relinquish its 'Ethernet' trademark. The first standard was published in September 1980 as "The Ethernet, A Local Area Network. Data Link Layer and Physical Layer Specifications". This so-called DIX standard specified 10 Mbit/s Ethernet, with 48-bit destination and source addresses and a global 16-bit EtherType field. Version 2 was published in November 1982 and defines what has become known as Ethernet II. Formal standardization efforts proceeded at the same time and resulted in the publication of IEEE 802.3 on June 23, 1983. Ethernet initially competed with Token Ring and other proprietary protocols, but it was able to adapt to market realities and shift to inexpensive thin coaxial cable and, later, ubiquitous twisted pair wiring. By the end of the 1980s, Ethernet was clearly the dominant network technology. In the process, 3Com became a major company.
3Com shipped its first 10 Mbit/s Ethernet 3C100 NIC in March 1981, and that year started selling adapters for PDP-11s and VAXes, as well as Multibus-based Intel and Sun Microsystems computers. This was followed by DEC's Unibus to Ethernet adapter, which DEC sold and used internally to build its own corporate network, which reached over 10,000 nodes by 1986, making it one of the largest computer networks in the world at that time. An Ethernet adapter card for the IBM PC was released in 1982, and, by 1985, 3Com had sold 100,000. Parallel port based Ethernet adapters were also produced, with drivers for DOS and Windows. By the early 1990s, Ethernet became so prevalent that it was a must-have feature for modern computers, and Ethernet ports began to appear on some PCs and most workstations. This process was sped up with the introduction of 10BASE-T and its small modular connector, at which point Ethernet ports appeared even on low-end motherboards. Since then, Ethernet technology has evolved to meet new bandwidth and market requirements.
In addition to computers, Ethernet is now used to interconnect appliances and other personal devices. As Industrial Ethernet it is used in industrial applications, and it is replacing legacy data transmission systems in the world's telecommunications networks. By 2010, the market for Ethernet equipment amounted to over $16 billion per year. In February 1980, the Institute of Electrical and Electronics Engineers (IEEE) started project 802 to standardize local area networks. The "DIX-group", with Gary Robinson, Phil Arst and Bob Printis, submitted the so-called "Blue Book" CSMA/CD specification as a candidate for the LAN specification. In addition to CSMA/CD, Token Ring and Token Bus were also considered as candidates for a LAN standard. Competing proposals and broad interest in the initiative led to strong disagreement over which technology to standardize. In December 1980, the group was split into three subgroups, and standardization proceeded separately for each proposal. Delays in the standards process put at risk the market introduction of the Xerox Star workstation and 3Com's Ethernet LAN products.
With such business implications in mind, David Liddle (General Manager, Xerox Office Systems) and Metcalfe strongly supported efforts to speed Ethernet's standardization.
Hypertext Transfer Protocol
The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can easily access, for example by a mouse click or by tapping the screen in a web browser. HTTP was developed to facilitate the World Wide Web. Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Development of HTTP standards was coordinated by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), culminating in the publication of a series of Requests for Comments (RFCs). The first definition of HTTP/1.1, the version of HTTP in common use, occurred in RFC 2068 in 1997, although this was made obsolete by RFC 2616 in 1999 and again by the RFC 7230 family of RFCs in 2014. Its successor, HTTP/2, was standardized in 2015 and is now supported by major web servers and browsers over Transport Layer Security (TLS) using the Application-Layer Protocol Negotiation (ALPN) extension, where TLS 1.2 or newer is required.
HTTP functions as a request–response protocol in the client–server computing model. A web browser, for example, may be the client, and an application running on a computer hosting a website may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content, or performs other functions on behalf of the client, returns a response message to the client. The response contains completion status information about the request and may also contain requested content in its message body. A web browser is an example of a user agent. Other types of user agent include the indexing software used by search providers, voice browsers, mobile apps, and other software that accesses, consumes, or displays web content. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of upstream servers to improve response time.
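The request–response cycle above can be demonstrated end to end with the Python standard library. This is a minimal sketch: a tiny in-process server returns a status line, headers and a message body, and a client (user agent) fetches it. The path and body text are made up for illustration.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>hello</body></html>"
        self.send_response(200)                       # completion status
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                        # message body
    def log_message(self, *args):                     # silence request logs
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)          # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client submits a GET request and reads the response message.
url = f"http://127.0.0.1:{server.server_port}/index.html"
with urllib.request.urlopen(url) as resp:
    status = resp.status
    ctype = resp.headers["Content-Type"]
    content = resp.read()
server.shutdown()
```

The response carries both the completion status (200) and the requested content in its body, exactly the two parts the text describes.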
Web browsers cache previously accessed web resources and reuse them, when possible, to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers. HTTP is an application layer protocol designed within the framework of the Internet protocol suite. Its definition presumes an underlying and reliable transport layer protocol, and Transmission Control Protocol (TCP) is commonly used. However, HTTP can be adapted to use unreliable protocols such as the User Datagram Protocol (UDP), for example in HTTPU and the Simple Service Discovery Protocol (SSDP). HTTP resources are identified and located on the network by Uniform Resource Locators (URLs), using the Uniform Resource Identifier (URI) schemes http and https. URIs and hyperlinks in HTML documents form interlinked hypertext documents. HTTP/1.1 is a revision of the original HTTP. In HTTP/1.0 a separate connection to the same server is made for every resource request. HTTP/1.1 can reuse a connection multiple times to download images, stylesheets, etc. after the page has been delivered.
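The connection-reuse difference between HTTP/1.0 and HTTP/1.1 can be sketched with the standard library: two requests travel over a single TCP connection. The handler must advertise HTTP/1.1 and send a Content-Length so the server keeps the connection open between requests; the paths are made up.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Echo(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # enables keep-alive / reuse
    def do_GET(self):
        body = self.path.encode()          # echo the requested path
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):          # silence request logs
        pass

server = HTTPServer(("127.0.0.1", 0), Echo)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One HTTPConnection, two requests: the page, then a stylesheet.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
bodies = []
for path in ("/page.html", "/style.css"):  # reuses the same TCP connection
    conn.request("GET", path)
    bodies.append(conn.getresponse().read().decode())
conn.close()
server.shutdown()
```

Under HTTP/1.0 semantics, the second request would have required a fresh TCP handshake; reusing the connection avoids that overhead, which is the latency saving the next paragraph describes.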
HTTP/1.1 communications therefore experience less latency, as the establishment of TCP connections presents considerable overhead. The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a text-based web browser. Berners-Lee first proposed the "WorldWideWeb" project in 1989, now known as the World Wide Web. The first version of the protocol had only one method, namely GET, which would request a page from a server; the response from the server was always an HTML page. The first documented version of HTTP was HTTP V0.9. Dave Raggett led the HTTP Working Group in 1995 and wanted to expand the protocol with extended operations, extended negotiation, richer meta-information, tied with a security protocol, which became more efficient by adding additional methods and header fields.
RFC 1945 officially introduced and recognized HTTP V1.0 in 1996. The HTTP WG planned to publish new standards in December 1995, and support for pre-standard HTTP/1.1, based on the developing RFC 2068, was adopted by the major browser developers in early 1996. By March that year, pre-standard HTTP/1.1 was supported in Arena, Netscape 2.0, Netscape Navigator Gold 2.01, Mosaic 2.7, Lynx 2.5, and Internet Explorer 2.0. End-user adoption of the new browsers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet were HTTP 1.1 compliant. That same web hosting company reported that by June 1996, 65% of all browsers accessing their servers were HTTP/1.1 compliant. The HTTP/1.1 standard as defined in RFC 2068 was officially released in January 1997. Improvements and updates to the HTTP/1.1 standard were released under RFC 2616 in June 1999. In 2007, the HTTPbis Working Group was formed, in part, to revise and clarify the HTTP/1.1 specification. In June 2014, the WG released an updated six-part specification obsoleting RFC 2616:
RFC 7230, HTTP/1.1: Message Syntax and Routing
RFC 7231, HTTP/1.1: Semantics and Content
RFC 7232, HTTP/1.1: Conditional Requests
RFC 7233, HTTP/1.1: Range Requests
RFC 7234, HTTP/1.1: Caching
RFC 7235, HTTP/1.1: Authentication
A router is a networking device that forwards data packets between computer networks. Routers perform the traffic directing functions on the Internet. Data sent through the Internet, such as a web page or email, is in the form of data packets. A packet is typically forwarded from one router to another router through the networks that constitute an internetwork until it reaches its destination node. A router is connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the network address information in the packet to determine the ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. The most familiar type of routers are home and small office routers that simply forward IP packets between the home computers and the Internet. An example of a router would be the owner's cable or DSL router, which connects to the Internet through an Internet service provider (ISP). More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone.
Though routers are typically dedicated hardware devices, software-based routers also exist. When multiple routers are used in interconnected networks, the routers can exchange information about destination addresses using a routing protocol. Each router builds up a routing table listing the preferred routes between any two systems on the interconnected networks. A router has two types of network element components organized onto separate planes. Control plane: a router maintains a routing table that lists which route should be used to forward a data packet, and through which physical interface connection. It does this using internal preconfigured directives, called static routes, or by learning routes dynamically using a routing protocol. Static and dynamic routes are stored in the routing table. The control-plane logic then strips non-essential directives from the table and builds a forwarding information base (FIB) to be used by the forwarding plane. Forwarding plane: the router forwards data packets between incoming and outgoing interface connections.
It forwards them to the correct network type using information that the packet header contains, matched to entries in the FIB supplied by the control plane. A router may have interfaces for different types of physical layer connections, such as copper cables, fiber optic, or wireless transmission, and it can also support different network layer transmission standards. Each network interface is used to enable data packets to be forwarded from one transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different network prefix. Routers may provide connectivity within enterprises, between enterprises and the Internet, or between internet service providers' networks; the largest routers may be used in large enterprise networks. Smaller routers usually provide connectivity for typical home and office networks. All sizes of routers may be found inside enterprises; the most powerful routers are usually found in ISPs, academic and research facilities.
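The forwarding decision described above, matching a packet's destination against table entries, is conventionally a longest-prefix match: the most specific matching route wins. A minimal sketch using the standard ipaddress module, with made-up prefixes and next-hop names:

```python
import ipaddress

# Hypothetical routing table: (destination prefix, next hop / interface).
routing_table = [
    (ipaddress.ip_network("0.0.0.0/0"), "isp-uplink"),     # default route
    (ipaddress.ip_network("10.0.0.0/8"), "core-router"),
    (ipaddress.ip_network("10.1.2.0/24"), "branch-office"),
]

def forward(dst):
    """Return the next hop for a destination IP via longest-prefix match."""
    dst = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in routing_table if dst in net]
    # The most specific route (longest prefix) wins; the /0 default always matches.
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

next_hop = forward("10.1.2.7")   # matches /0, /8 and /24; /24 wins
```

Real routers precompute this lookup into the FIB (often as a trie or in TCAM hardware) rather than scanning a list, but the selection rule is the same.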
Large businesses may also need more powerful routers to cope with ever-increasing demands of intranet data traffic. A hierarchical internetworking model for interconnecting routers in large networks is in common use. Access routers, including small office/home office (SOHO) models, are located at home and customer sites such as branch offices that do not need hierarchical routing of their own; they are typically optimized for low cost. Some SOHO routers are capable of running alternative free Linux-based firmware such as Tomato, OpenWrt or DD-WRT. Distribution routers aggregate traffic from multiple access routers. Distribution routers are often responsible for enforcing quality of service across a wide area network (WAN), so they may have considerable memory installed, multiple WAN interface connections, and substantial onboard data processing routines. They may also provide connectivity to groups of file servers or other external networks. In enterprises, a core router may provide a collapsed backbone interconnecting the distribution-tier routers from multiple buildings of a campus, or large enterprise locations.
Core routers may lack some of the features of edge routers. External networks must be carefully considered as part of the overall security strategy of the local network. A router may include a firewall, VPN handling, and other security functions, or these may be handled by separate devices. Routers also commonly perform network address translation (NAT), which restricts connections initiated from external hosts, though this is not recognised as a security feature by all experts. Some experts argue that open source routers are more secure and reliable than closed source routers, because open source routers allow mistakes to be found and corrected. Routers are also often distinguished on the basis of the network in which they operate. A router in a local area network of a single organisation is called an interior router. A router operated in the Internet backbone is described as an exterior router, while a router that connects a LAN with the Internet or a wide area network is called a border router, or gateway router. Routers intended for ISP and major enterprise connectivity exchange routing information using the Border Gateway Protocol (BGP).
The RFC 4098 standard defines the types of BGP routers according to their functions. Edge router: also called a provider edge router, this is placed at the edge of an ISP network. The router uses External BGP (eBGP) to exchange routing information with routers in other ISPs' or large enterprises' autonomous systems.
Integrated Services Digital Network
Integrated Services Digital Network (ISDN) is a set of communication standards for simultaneous digital transmission of voice, video, data and other network services over the traditional circuits of the public switched telephone network. It was first defined in 1988 in the CCITT red book. Prior to ISDN, the telephone system was viewed as a way to transport voice, with some special services available for data. The key feature of ISDN is that it integrates speech and data on the same lines, adding features that were not available in the classic telephone system. The ISDN standards define several kinds of access interfaces, such as Basic Rate Interface (BRI), Primary Rate Interface (PRI), Narrowband ISDN and Broadband ISDN. ISDN is a circuit-switched telephone network system, which also provides access to packet switched networks, designed to allow digital transmission of voice and data over ordinary telephone copper wires, resulting in better voice quality than an analog phone can provide. It offers both circuit-switched and packet-switched connections, in increments of 64 kilobit/s.
In some countries, ISDN found major market application for Internet access, in which ISDN typically provides a maximum of 128 kbit/s bandwidth in both upstream and downstream directions. Channel bonding can achieve a greater data rate. ISDN is employed as the data-link and physical layers, in the context of the OSI model. In common use, ISDN is often limited to Q.931 and related protocols, which are a set of signaling protocols for establishing and breaking circuit-switched connections, and for advanced calling features for the user. They were introduced in 1986. In a videoconference, ISDN provides simultaneous voice and text transmission between individual desktop videoconferencing systems and group videoconferencing systems. Integrated services refers to ISDN's ability to deliver at minimum two simultaneous connections, in any combination of data, voice and fax, over a single line. Multiple devices can be attached to the line and used as needed; that means an ISDN line can take care of what were expected to be most people's complete communications needs, at a much higher transmission rate, without forcing the purchase of multiple analog phone lines.
It also refers to integrated switching and transmission, in that telephone switching and carrier wave transmission are integrated rather than separate as in earlier technology. The entry level interface to ISDN is the Basic Rate Interface (BRI), a 128 kbit/s service delivered over a pair of standard telephone copper wires. The 144 kbit/s overall payload rate is divided into two 64 kbit/s bearer channels and one 16 kbit/s signaling channel; this is sometimes referred to as 2B+D. The BRI specifies the following network interfaces: The U interface is a two-wire interface between the exchange and a network terminating unit, which is the demarcation point in non-North American networks. The T interface is a serial interface between a computing device and a terminal adapter, the digital equivalent of a modem. The S interface is a four-wire bus. The R interface defines the point between a non-ISDN device and a terminal adapter which provides translation to and from such a device. BRI-ISDN is popular in Europe but is much less common in North America.
It is also common in Japan, where it is known as INS64. The other ISDN access available is the Primary Rate Interface (PRI), which is carried over T-carrier (T1) with 24 time slots in North America, and over E-carrier (E1) with 32 channels in most other countries. Each channel provides transmission at a 64 kbit/s data rate. With the E1 carrier, the available channels are divided into 30 bearer (B) channels, one data (D) channel, and one timing and alarm channel; this scheme is referred to as 30B+2D. In North America, PRI service is delivered via T1 carriers with only one data channel, referred to as 23B+D, for a total data rate of 1544 kbit/s. Non-Facility Associated Signalling (NFAS) allows two or more PRI circuits to be controlled by a single D channel, which is sometimes called 23B+D + n*24B. D-channel backup allows for a second D channel in case the primary fails. NFAS is commonly used on a Digital Signal 3. PRI-ISDN is popular throughout the world for connecting private branch exchanges (PBXs) to the public switched telephone network. Though many network professionals use the term ISDN to refer to the lower-bandwidth BRI circuit, in North America BRI is relatively uncommon whilst PRI circuits serving PBXs are commonplace.
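The channel-bundle arithmetic above is easy to verify. The sketch below reproduces the rates stated in the text: BRI's 2B+D, E1-based 30B+2D, and the North American T1 figure, where the 1544 kbit/s line rate is 23B+D plus 8 kbit/s of T1 framing overhead (the framing figure is standard T1 arithmetic, not stated in the text).

```python
# All rates in kbit/s. B = 64 kbit/s bearer channel; the BRI D channel
# is 16 kbit/s, while PRI D channels occupy a full 64 kbit/s slot.
B = 64

bri = 2 * B + 16        # BRI "2B+D": two bearers plus the signaling channel
bri_data = 2 * B        # usable bandwidth with both B channels bonded for data

e1_pri = 30 * B + 2 * B  # "30B+2D": 30 bearers + D channel + timing/alarm slot
t1_pri = 23 * B + B + 8  # "23B+D" + 8 kbit/s T1 framing = line rate
```

These reproduce the 144 kbit/s BRI payload, the 128 kbit/s bonded Internet-access rate mentioned earlier, the 2048 kbit/s E1 rate, and the 1544 kbit/s T1 rate.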
The bearer channel (B) is a standard 64 kbit/s voice channel of 8-bit samples taken at 8 kHz with G.711 encoding. B channels can also be used to carry data, since they are nothing more than digital channels; each one of these channels is known as a DS0. Most B channels can carry a 64 kbit/s signal, but some were limited to 56K because they traveled over RBS lines; this has since become less common. X.25 can be carried over the B or D channels of a BRI line, and over the B channels of a PRI line. X.25 over the D channel is used at many point-of-sale terminals because it eliminates the modem setup and connects to the central system over the D channel, thereby eliminating the need for modems and making much better use of the central system's telephone lines. X.25 was also part of an ISDN protocol.
Fiber Distributed Data Interface
Fiber Distributed Data Interface (FDDI) is a standard for data transmission in a local area network. It uses optical fiber as its standard underlying physical medium, although it was also later specified to use copper cable, in which case it may be called CDDI, standardized as TP-PMD and also referred to as TP-DDI. FDDI provides a 100 Mbit/s optical standard for data transmission in a local area network that can extend in range up to 200 kilometers. Although the FDDI logical topology is a ring-based token network, it did not use the IEEE 802.5 token ring protocol as its basis. In addition to covering large geographical areas, FDDI local area networks can support thousands of users. FDDI offers both a Dual-Attached Station, counter-rotating token ring topology and a Single-Attached Station, token bus passing ring topology. FDDI, as a product of American National Standards Institute committee X3T9.5, conforms to the Open Systems Interconnection (OSI) model of functional layering using other protocols. The standards process started in the mid-1980s.
FDDI-II, a version of FDDI described in 1989, added circuit-switched service capability to the network so that it could also handle voice and video signals. Work also started to connect FDDI networks to synchronous optical networking (SONET) technology. A FDDI network contains two rings; the primary ring offers up to 100 Mbit/s capacity. When a network has no requirement for the secondary ring to act as backup, it can also carry data, extending capacity to 200 Mbit/s. The single ring can extend the maximum distance. FDDI had a larger maximum frame size than the standard Ethernet family, which supports only a maximum frame size of 1,500 bytes, allowing better effective data rates in some cases. Designers constructed FDDI rings in a network topology such as a "dual ring of trees". A small number of devices, typically infrastructure devices such as routers and concentrators rather than host computers, were "dual-attached" to both rings. Host computers then connect as single-attached devices to the routers or concentrators. The dual ring in its most degenerate form simply collapses into a single device.
Typically, a computer room contained the whole dual ring, although some implementations deployed FDDI as a metropolitan area network. FDDI requires this network topology because the dual ring passes through each connected device and requires each such device to remain continuously operational. The standard allows for optical bypasses, but network engineers consider these unreliable and error-prone. Devices such as workstations and minicomputers that might not come under the control of the network managers are not suitable for connection to the dual ring. As an alternative to using a dual-attached connection, a workstation can obtain the same degree of resilience through a dual-homed connection made to two separate devices in the same FDDI ring. One of the connections becomes active while the other is held in reserve; if the first connection fails, the backup link takes over with no perceptible delay. The FDDI data frame format is:
PA SD FC DA SA PDU FCS ED/FS
where PA is the preamble, SD is a start delimiter, FC is frame control, DA is the destination address, SA is the source address, PDU is the protocol data unit, FCS is the frame check sequence, and ED/FS are the end delimiter and frame status.
The Internet Engineering Task Force defined a standard for transmission of the Internet Protocol over FDDI. It was first proposed in June 1989 and revised in 1990. Some aspects of the protocol were compatible with the IEEE 802.2 standard for logical link control; for example, both use 48-bit MAC addresses, so other protocols such as the Address Resolution Protocol (ARP) could be common as well. FDDI was considered an attractive campus backbone network technology in the early to mid 1990s, since existing Ethernet networks only offered 10 Mbit/s data rates and token ring networks only offered 4 Mbit/s or 16 Mbit/s rates; it was thus a high-speed choice of that era. By 1994, vendors included Cisco Systems, National Semiconductor, Network Peripherals, SysKonnect and 3Com. FDDI was made obsolete in local networks by Fast Ethernet, which offered the same 100 Mbit/s speeds but at a much lower cost, and, since 1998, by Gigabit Ethernet due to its speed, lower cost and ubiquity. FDDI standards included:
ANSI X3.139-1987, Media Access Control — ISO 9314-2
ANSI X3.148-1988, Physical Layer Protocol — ISO 9314-1
ANSI X3.166-1989, Physical Medium Dependent — ISO 9314-3
ANSI X3.184-1993, Single Mode Fiber Physical Medium Dependent — ISO 9314-4
ANSI X3.229-1994, Station Management — ISO 9314-6
This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.