Wi-Fi is a technology for radio wireless local area networking of devices based on the IEEE 802.11 standards. Wi-Fi is a trademark of the Wi-Fi Alliance, which restricts the use of the term Wi-Fi Certified to products that successfully complete interoperability certification testing. Devices that can use Wi-Fi technologies include, among others, desktop and laptop computers, video game consoles, tablets, smart TVs, digital audio players, digital cameras and drones. Wi-Fi compatible devices can connect to the Internet via a wireless access point; such an access point has a range of about 20 meters indoors and a greater range outdoors. Hotspot coverage can be as small as a single room with walls that block radio waves, or as large as many square kilometres achieved by using multiple overlapping access points. Different versions of Wi-Fi exist, with different radio bands and speeds. Wi-Fi most commonly uses the 2.4 gigahertz UHF and 5 gigahertz SHF ISM radio bands. Each channel can be time-shared by multiple networks.
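As an illustration of how the 2.4 GHz band just mentioned is divided into channels, the sketch below computes the standard center frequencies for 802.11 channels in that band (2407 MHz plus 5 MHz per channel number, with channel 14 as a special case at 2484 MHz); the function name is ours, chosen for illustration.

```python
def channel_center_mhz(channel: int) -> int:
    """Center frequency of a 2.4 GHz Wi-Fi channel in MHz.

    Channels 1-13 are spaced 5 MHz apart starting at 2412 MHz;
    channel 14 is a special case used only in some regions.
    """
    if channel == 14:
        return 2484
    if not 1 <= channel <= 13:
        raise ValueError(f"no such 2.4 GHz channel: {channel}")
    return 2407 + 5 * channel

print(channel_center_mhz(1))   # 2412
print(channel_center_mhz(6))   # 2437
print(channel_center_mhz(11))  # 2462
```

Because the channels are only 5 MHz apart while a transmission is roughly 20 MHz wide, neighbouring channels overlap, which is why multiple networks end up time-sharing the same spectrum.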
These wavelengths work best for line-of-sight use. Many common materials absorb or reflect them, which further restricts range but tends to help minimise interference between different networks in crowded environments. At close range, some versions of Wi-Fi, running on suitable hardware, can achieve speeds of over 1 Gbit/s. Anyone within range with a wireless network interface controller can attempt to access a network. Wi-Fi Protected Access (WPA) is a family of technologies created to protect information moving across Wi-Fi networks and includes solutions for personal and enterprise networks. Security features of WPA have included stronger protections and new security practices as the security landscape has changed over time. In 1971, ALOHAnet connected the Hawaiian Islands with a UHF wireless packet network. ALOHAnet and the ALOHA protocol were early forerunners to Ethernet and the IEEE 802.11 protocols, respectively. A 1985 ruling by the U.S. Federal Communications Commission released the ISM band for unlicensed use.
These frequency bands are the same ones used by equipment such as microwave ovens and are subject to interference. In 1991, NCR Corporation with AT&T Corporation invented the precursor to 802.11, intended for use in cashier systems, under the name WaveLAN. The Australian radio-astronomer Dr John O'Sullivan, with his colleagues Terence Percival, Graham Daniels, Diet Ostry and John Deane, developed a key patent used in Wi-Fi as a by-product of a Commonwealth Scientific and Industrial Research Organisation (CSIRO) research project, "a failed experiment to detect exploding mini black holes the size of an atomic particle". Dr O'Sullivan and his colleagues are credited with inventing Wi-Fi. In 1992 and 1996, CSIRO obtained patents for a method later used in Wi-Fi to "unsmear" the signal. The first version of the 802.11 protocol was released in 1997 and provided up to 2 Mbit/s link speeds. It was updated in 1999 with 802.11b to permit 11 Mbit/s link speeds, which proved popular. In 1999, the Wi-Fi Alliance formed as a trade association to hold the Wi-Fi trademark under which most products are sold.
Wi-Fi uses a large number of patents held by many different organizations. In April 2009, 14 technology companies agreed to pay CSIRO $1 billion for infringements on CSIRO patents; this led to Australia labeling Wi-Fi as an Australian invention, though this has been the subject of some controversy. CSIRO won a further $220 million settlement for Wi-Fi patent infringements in 2012, with global firms in the United States required to pay CSIRO licensing rights estimated to be worth an additional $1 billion in royalties. In 2016, the wireless local area network Test Bed was chosen as Australia's contribution to the exhibition A History of the World in 100 Objects, held in the National Museum of Australia. The name Wi-Fi, commercially used at least as early as August 1999, was coined by the brand-consulting firm Interbrand. The Wi-Fi Alliance had hired Interbrand to create a name that was "a little catchier than 'IEEE 802.11b Direct Sequence'." Phil Belanger, a founding member of the Wi-Fi Alliance who presided over the selection of the name "Wi-Fi", has stated that Interbrand invented Wi-Fi as a pun on the word hi-fi, a term for high-quality audio technology.
Interbrand also created the Wi-Fi logo. The yin-yang Wi-Fi logo indicates the certification of a product for interoperability. The Wi-Fi Alliance used the advertising slogan "The Standard for Wireless Fidelity" for a short time after the brand name was created. While inspired by the term hi-fi, the name was never officially "Wireless Fidelity", although the Wi-Fi Alliance was called the "Wireless Fidelity Alliance Inc" in some publications. Non-Wi-Fi technologies intended for fixed points, such as Motorola Canopy, are usually described as fixed wireless. Alternative wireless technologies include mobile phone standards such as 2G, 3G, 4G and LTE. The name is sometimes written as WiFi, Wifi, or wifi, but these spellings are not approved by the Wi-Fi Alliance. The IEEE is a separate, but related, organization, and its website has stated "WiFi is a short name for Wireless Fidelity". To connect to a Wi-Fi LAN, a computer has to be equipped with a wireless network interface controller; the combination of a computer and an interface controller is called a station.
A service set is the set of all the devices associated with a particular Wi-Fi network. The service set can be local, extended or mesh. Each service set has an associated identifier, the 32-byte Service Set Identifier (SSID), which identifies the particular network.
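The 32-byte SSID limit mentioned above is a byte limit rather than a character limit, which matters when a network name uses non-ASCII characters. A minimal sketch (the function name is ours, not from any standard library):

```python
def is_valid_ssid(ssid: str) -> bool:
    """Check the 32-byte SSID length limit.

    The limit applies to the encoded bytes, so multi-byte UTF-8
    characters count more than once against it.
    """
    return 0 < len(ssid.encode("utf-8")) <= 32

print(is_valid_ssid("HomeNetwork"))  # 11 bytes: valid
print(is_valid_ssid("x" * 33))       # 33 bytes: too long
```

A name of sixteen two-byte characters would already reach the limit, even though it is only sixteen characters long.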
Hypertext Transfer Protocol Secure (HTTPS) is an extension of the Hypertext Transfer Protocol. It is used for secure communication over a computer network and is widely used on the Internet. In HTTPS, the communication protocol is encrypted using Transport Layer Security (TLS) or, formerly, its predecessor, Secure Sockets Layer (SSL); the protocol is therefore often referred to as HTTP over TLS or HTTP over SSL. The principal motivations for HTTPS are authentication of the accessed website and protection of the privacy and integrity of the exchanged data while in transit; it protects against man-in-the-middle attacks. The bidirectional encryption of communications between a client and server protects against eavesdropping and tampering. In practice, this provides a reasonable assurance that one is communicating with the intended website without interference by attackers, as opposed to an impostor. Historically, HTTPS connections were used primarily for payment transactions on the World Wide Web, e-mail and sensitive transactions in corporate information systems.
Since 2018, HTTPS has been used more widely by web users than the original non-secure HTTP, protecting page authenticity on all types of websites. The Uniform Resource Identifier scheme https has identical usage syntax to the http scheme. However, HTTPS signals the browser to use an added encryption layer of SSL/TLS to protect the traffic. SSL/TLS is especially suited for HTTP, since it can provide some protection even if only one side of the communication is authenticated; this is the case with HTTP transactions over the Internet, where typically only the server is authenticated. HTTPS creates a secure channel over an insecure network; this ensures reasonable protection from eavesdroppers and man-in-the-middle attacks, provided that adequate cipher suites are used and that the server certificate is verified and trusted. Because HTTPS piggybacks HTTP on top of TLS, the entirety of the underlying HTTP protocol can be encrypted; this includes the request URL, query parameters and cookies. However, because host addresses and port numbers are part of the underlying TCP/IP protocols, HTTPS cannot protect their disclosure.
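The split described above, with the host and port visible to the network while the path and query travel only inside the TLS channel, can be seen by taking an https URL apart with Python's standard urllib.parse module:

```python
from urllib.parse import urlsplit

url = urlsplit("https://example.com:8443/search?q=secret")

# Carried by TCP/IP outside TLS, so visible to on-path observers
# (the hostname also leaks via DNS lookups and the TLS SNI field):
print(url.hostname)  # example.com
print(url.port)      # 8443

# Sent only inside the encrypted TLS channel as part of the
# HTTP request itself:
print(url.path)      # /search
print(url.query)     # q=secret
```

The URI syntax is identical to plain http; only the scheme name and the default port (443 instead of 80) differ.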
In practice this means that, even on a correctly configured web server, eavesdroppers can infer the IP address and port number of the web server one is communicating with, as well as the amount and duration of the communication, though not its content. Web browsers know how to trust HTTPS websites based on certificate authorities that come pre-installed in their software. Certificate authorities are in this way trusted by web browser creators to provide valid certificates. Therefore, a user should trust an HTTPS connection to a website if and only if all of the following are true: the user trusts that the browser software correctly implements HTTPS with correctly pre-installed certificate authorities; the user trusts the certificate authority to vouch only for legitimate websites; the website provides a valid certificate; the certificate correctly identifies the website; and the user trusts that the protocol's encryption layer is sufficiently secure against eavesdroppers. HTTPS is especially important over insecure networks, as anyone on the same local network can packet-sniff and discover sensitive information not protected by HTTPS.
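Python's standard ssl module encodes the same trust conditions: a client context created with the library's recommended defaults requires the server to present a certificate chaining to a trusted certificate authority, and requires the certificate to identify the hostname being contacted. A quick check of those defaults:

```python
import ssl

# create_default_context() returns a client-side context with the
# verification behaviour described above already enabled.
ctx = ssl.create_default_context()

# The peer must present a certificate that validates against the
# loaded CA set...
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True

# ...and the certificate must match the hostname we connect to.
print(ctx.check_hostname)                    # True
```

Disabling either check (as some code does to silence certificate errors) removes exactly the guarantees that make an HTTPS connection trustworthy.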
Additionally, many free-to-use and paid WLAN networks engage in packet injection in order to serve their own ads on webpages. This practice can also be exploited maliciously in many ways, such as by injecting malware onto webpages and stealing users' private information. HTTPS is also very important for connections over the Tor anonymity network, as malicious Tor nodes can otherwise damage or alter the contents passing through them in an insecure fashion and inject malware into the connection; this is one reason why the Electronic Frontier Foundation and the Tor Project started the development of HTTPS Everywhere, which is included in the Tor Browser Bundle. As more information is revealed about global mass surveillance and criminals stealing personal information, the use of HTTPS security on all websites is becoming increasingly important regardless of the type of Internet connection being used. While metadata about individual pages that a user visits may not be sensitive, when combined, it can reveal a lot about the user and compromise the user's privacy.
Deploying HTTPS also allows the use of HTTP/2, a newer generation of HTTP designed to reduce page load times and latency. It is recommended to use HTTP Strict Transport Security (HSTS) with HTTPS to protect users from man-in-the-middle attacks such as SSL stripping. HTTPS should not be confused with the little-used Secure HTTP specified in RFC 2660. As of April 2018, 33.2% of the Alexa top 1,000,000 websites use HTTPS as default, 57.1% of the Internet's 137,971 most popular websites have a secure implementation of HTTPS, and 70% of page loads use HTTPS. Most browsers display a warning if they receive an invalid certificate. Older browsers, when connecting to a site wit
Integrated Services Digital Network
Integrated Services Digital Network is a set of communication standards for simultaneous digital transmission of voice, video and other network services over the traditional circuits of the public switched telephone network. It was first defined in 1988 in the CCITT Red Book. Prior to ISDN, the telephone system was viewed as a way to transport voice, with some special services available for data. The key feature of ISDN is that it integrates speech and data on the same lines, adding features that were not available in the classic telephone system. The ISDN standards define several kinds of access interfaces, such as Basic Rate Interface (BRI), Primary Rate Interface (PRI), Narrowband ISDN and Broadband ISDN. ISDN is a circuit-switched telephone network system that also provides access to packet-switched networks, designed to allow digital transmission of voice and data over ordinary telephone copper wires, resulting in better voice quality than an analog phone can provide. It offers circuit-switched and packet-switched connections in increments of 64 kilobit/s.
In some countries, ISDN found a major market application in Internet access, where it provides a maximum of 128 kbit/s bandwidth in both the upstream and downstream directions. Channel bonding can achieve a greater data rate. ISDN occupies the data-link and physical layers in the context of the OSI model. In common use, however, ISDN is often limited to Q.931 and related protocols, a set of signaling protocols for establishing and breaking circuit-switched connections and for advanced calling features for the user. They were introduced in 1986. In a videoconference, ISDN provides simultaneous voice and text transmission between individual desktop videoconferencing systems and group videoconferencing systems. Integrated services refers to ISDN's ability to deliver at minimum two simultaneous connections, in any combination of data, voice and fax, over a single line. Multiple devices can be attached to the line and used as needed; this means an ISDN line can take care of what were expected to be most people's complete communications needs at a much higher transmission rate, without forcing the purchase of multiple analog phone lines.
It also refers to integrated switching and transmission, in that telephone switching and carrier wave transmission are integrated rather than separate as in earlier technology. The entry-level interface to ISDN is the Basic Rate Interface, a 128 kbit/s service delivered over a pair of standard telephone copper wires. The 144 kbit/s overall payload rate is divided into two 64 kbit/s bearer channels and one 16 kbit/s signaling channel; this is sometimes referred to as 2B+D. The standard specifies the following network interfaces: the U interface, a two-wire interface between the exchange and a network terminating unit, which is the demarcation point in non-North American networks; the T interface, a serial interface between a computing device and a terminal adapter, the digital equivalent of a modem; the S interface, a four-wire bus; and the R interface, which defines the point between a non-ISDN device and a terminal adapter that provides translation to and from such a device. BRI-ISDN is popular in Europe but is much less common in North America.
It is common in Japan, where it is known as INS64. The other ISDN access type is the Primary Rate Interface, carried over T-carrier with 24 time slots in North America, and over E-carrier with 32 channels in most other countries; each channel provides transmission at a 64 kbit/s data rate. With the E1 carrier, the available channels are divided into 30 bearer channels, one data channel, and one timing and alarm channel; this scheme is referred to as 30B+2D. In North America, PRI service is delivered via T1 carriers with only one data channel, referred to as 23B+D, for a total data rate of 1544 kbit/s. Non-Facility Associated Signalling (NFAS) allows two or more PRI circuits to be controlled by a single D channel, sometimes called 23B+D + n*24B. D-channel backup allows for a second D channel in case the primary fails. NFAS is used on a Digital Signal 3. PRI-ISDN is popular throughout the world for connecting private branch exchanges to the public switched telephone network. Though many network professionals use the term ISDN to refer to the lower-bandwidth BRI circuit, in North America BRI is relatively uncommon whilst PRI circuits serving PBXs are commonplace.
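The channel arithmetic above can be checked directly. The 64 kbit/s bearer rate comes from 8-bit samples taken 8000 times per second, and the extra 8 kbit/s in the T1 case is framing overhead, which is why 23B+D totals 1544 kbit/s rather than 1536:

```python
B = 8 * 8            # bearer channel: 8-bit samples at 8 kHz = 64 kbit/s
D_BRI = 16           # BRI signaling channel, kbit/s
D_PRI = 64           # PRI signaling channel, kbit/s
T1_FRAMING = 8       # T1 framing overhead, kbit/s

bri = 2 * B + D_BRI                  # 2B+D
pri_t1 = 23 * B + D_PRI + T1_FRAMING # 23B+D on a T1 carrier
pri_e1 = 30 * B + 2 * D_PRI          # 30B+2D on an E1 carrier

print(bri)     # 144 kbit/s
print(pri_t1)  # 1544 kbit/s
print(pri_e1)  # 2048 kbit/s
```

Channel bonding for Internet access simply uses both BRI bearer channels together, giving the 2 * 64 = 128 kbit/s figure quoted earlier.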
The bearer channel (B channel) is a standard 64 kbit/s voice channel of 8-bit samples taken at 8 kHz with G.711 encoding. B channels can also be used to carry data, since they are nothing more than digital channels; each one of these channels is known as a DS0. Most B channels can carry a 64 kbit/s signal, but some were limited to 56 kbit/s because they traveled over RBS lines; this limitation has become less common over time. X.25 can be carried over the B or D channels of a BRI line, and over the B channels of a PRI line. X.25 over the D channel is used at many point-of-sale terminals because it eliminates the modem setup and connects to the central system without tying up a voice path, thereby eliminating the need for modems and making much better use of the central system's telephone lines. X.25 was part of an ISDN protocol
Internet protocol suite
The Internet protocol suite is the conceptual model and set of communications protocols used in the Internet and similar computer networks. It is commonly known as TCP/IP because the foundational protocols in the suite are the Transmission Control Protocol (TCP) and the Internet Protocol (IP). It is occasionally known as the Department of Defense model because the development of the networking method was funded by the United States Department of Defense through DARPA. The Internet protocol suite provides end-to-end data communication specifying how data should be packetized, addressed, transmitted, routed and received; this functionality is organized into four abstraction layers, which classify all related protocols according to the scope of networking involved. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment; the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications. The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force.
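As a sketch of the four-layer classification, the mapping below places a few well-known protocols in their usual layers; the placements follow common usage and are illustrative rather than normative:

```python
# Illustrative layer assignments; real classifications can be
# debated at the edges (e.g. ARP straddles link and internet).
LAYERS = {
    "application": ["HTTP", "SMTP", "DNS", "SSH"],
    "transport":   ["TCP", "UDP"],
    "internet":    ["IP", "ICMP"],
    "link":        ["Ethernet", "Wi-Fi", "PPP"],
}

def layer_of(protocol: str) -> str:
    """Return the layer a protocol is conventionally placed in."""
    for layer, protocols in LAYERS.items():
        if protocol in protocols:
            return layer
    raise KeyError(f"unclassified protocol: {protocol}")

print(layer_of("TCP"))       # transport
print(layer_of("IP"))        # internet
print(layer_of("Ethernet"))  # link
```

Each layer only relies on services of the layer below it: an HTTP message rides inside TCP segments, which ride inside IP packets, which ride inside link-layer frames.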
The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems. The Internet protocol suite resulted from research and development conducted by the Defense Advanced Research Projects Agency in the late 1960s. After initiating the pioneering ARPANET in 1969, DARPA started work on a number of other data transmission technologies. In 1972, Robert E. Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf, who had helped develop the existing ARPANET Network Control Program protocol, joined Kahn to work on open-architecture interconnection models with the goal of designing the next protocol generation for the ARPANET. By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation in which the differences between local network protocols were hidden by using a common internetwork protocol; instead of the network being responsible for reliability, as in the ARPANET, this function was delegated to the hosts.
Cerf credits Hubert Zimmermann and Louis Pouzin, designer of the CYCLADES network, with important influences on this design. The protocol was implemented as the Transmission Control Program, first published in 1974; the TCP managed both datagram transmissions and routing, but as the protocol grew, other researchers recommended a division of functionality into protocol layers. Advocates included Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments, the technical and strategic document series that has both documented and catalyzed Internet development. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would lead to scalability issues; the Transmission Control Program was split into two distinct protocols, the Transmission Control Protocol and the Internet Protocol.
The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This design is known as the end-to-end principle. Using this design, it became possible to connect any network to the ARPANET, irrespective of the local characteristics, thereby solving Kahn's initial internetworking problem. One popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, can run over "two tin cans and a string." Years later, as a joke, the IP over Avian Carriers formal protocol specification was created and tested. A computer called a router is provided with an interface to each network; it forwards network packets back and forth between them. A router was originally called a gateway, but the term was changed to avoid confusion with other types of gateways. From 1973 to 1974, Cerf's networking research group at Stanford worked out details of the idea, resulting in the first TCP specification.
A significant technical influence was the early networking work at Xerox PARC, which produced the PARC Universal Packet protocol suite, much of which was contemporaneous. DARPA then contracted with BBN Technologies, Stanford University and University College London to develop operational versions of the protocol on different hardware platforms. Four versions were developed: TCP v1, TCP v2, a split into TCP v3 and IP v3, and then TCP/IP v4; the last protocol is still in use today. In 1975, a two-network TCP/IP communications test was performed between Stanford and University College London. In November 1977, a three-network TCP/IP test was conducted between sites in the US, the UK and Norway. Several other TCP/IP prototypes were developed at multiple research centers between 1978 and 1983. In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In the same year, Peter T. Kirstein's research group at University College London adopted the protocol. The migration of the ARPANET to TCP/IP was officially completed on flag day, January 1, 1983, when the new protocols were permanently activated.
In 1985, the Internet Advisory Board held a three-day TCP/
Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network. Typical applications include remote command-line login and remote command execution, but any network service can be secured with SSH. SSH provides a secure channel over an unsecured network in a client-server architecture, connecting an SSH client application with an SSH server. The protocol specification distinguishes between two major versions, referred to as SSH-1 and SSH-2. The standard TCP port for SSH is 22. SSH is typically used to access Unix-like operating systems, but it can also be used on Microsoft Windows; Windows 10 uses OpenSSH as its default SSH client. SSH was designed as a replacement for Telnet and for unsecured remote shell protocols such as the Berkeley rlogin and rexec protocols; those protocols send information, notably passwords, in plaintext, rendering them susceptible to interception and disclosure using packet analysis. The encryption used by SSH is intended to provide confidentiality and integrity of data over an unsecured network, such as the Internet, although files leaked by Edward Snowden indicate that the National Security Agency can sometimes decrypt SSH, allowing them to read the contents of SSH sessions.
SSH uses public-key cryptography to authenticate the remote computer and allow it to authenticate the user, if necessary. There are several ways to use SSH. One is to use automatically generated public-private key pairs to simply encrypt a network connection and then use password authentication to log on. Another is to use a manually generated public-private key pair to perform the authentication, allowing users or programs to log in without having to specify a password. In this scenario, anyone can produce a matching pair of different keys; the public key is placed on all computers that must allow access to the owner of the matching private key. While authentication is based on the private key, the key itself is never transferred through the network during authentication. SSH only verifies that the person offering the public key also owns the matching private key. In all versions of SSH it is important to verify unknown public keys, i.e. associate the public keys with identities, before accepting them as valid. Accepting an attacker's public key without validation will authorize an unauthorized attacker as a valid user. On Unix-like systems, the list of authorized public keys is stored in the home directory of the user who is allowed to log in remotely, in the file ~/.ssh/authorized_keys.
This file is respected by SSH only if it is not writable by anything apart from the owner and root. When the public key is present on the remote end and the matching private key is present on the local end, typing in the password is no longer required. However, for additional security the private key itself can be locked with a passphrase. The private key can be looked for in standard places, or its full path can be specified as a command-line setting. The ssh-keygen utility produces the public and private keys, always in pairs. SSH also supports password-based authentication that is encrypted by automatically generated keys. In this case, an attacker could imitate the legitimate server side, ask for the password and obtain it. However, this is possible only if the two sides have never authenticated before, as SSH remembers the key that the server side previously used; the SSH client raises a warning before accepting the key of a new, unknown server. Password authentication can be disabled. SSH is typically used to log into a remote machine and execute commands, but it also supports tunneling, forwarding TCP ports and X11 connections.
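The authorized_keys file mentioned above is line-oriented: each entry is a key type, the base64-encoded key material, and an optional comment, separated by whitespace. A minimal parser sketch (it ignores the option prefix that real entries may carry, and the key blob shown is a shortened placeholder, not a real key):

```python
def parse_authorized_key(line: str):
    """Split one authorized_keys line into (type, blob, comment).

    Real entries may begin with options such as command="..." or
    no-port-forwarding; this sketch does not handle that prefix.
    """
    parts = line.strip().split(None, 2)
    if len(parts) < 2:
        raise ValueError("not an authorized_keys entry")
    key_type, blob = parts[0], parts[1]
    comment = parts[2] if len(parts) == 3 else ""
    return key_type, blob, comment

# Hypothetical entry; the blob is truncated for illustration.
entry = "ssh-ed25519 AAAAC3NzaC1lZDI1... alice@laptop"
print(parse_authorized_key(entry))
# ('ssh-ed25519', 'AAAAC3NzaC1lZDI1...', 'alice@laptop')
```

The comment field is purely informational; the server identifies the key by its type and blob alone, which is why the same key can appear with different comments on different hosts.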
SSH uses the client-server model. The standard TCP port 22 has been assigned for contacting SSH servers. An SSH client program is used for establishing connections to an SSH daemon accepting remote connections. Both are present on most modern operating systems, including macOS, most distributions of Linux, OpenBSD, FreeBSD, NetBSD, Solaris and OpenVMS. Notably, versions of Windows prior to Windows 10 version 1709 do not include SSH by default. Proprietary and open source versions of various levels of complexity and completeness exist. File managers for UNIX-like systems can use the FISH protocol to provide a split-pane GUI with drag-and-drop. The open source Windows program WinSCP provides similar file management capability using PuTTY as a back-end. Both WinSCP and PuTTY are available packaged to run directly off a USB drive, without requiring installation on the client machine. Setting up an SSH server in Windows typically involves enabling a feature in the Settings app. In Windows 10 version 1709, an official Win32 port of OpenSSH is available.
SSH is important in cloud computing to solve connectivity problems while avoiding the security issues of exposing a cloud-based virtual machine directly on the Internet. An SSH tunnel can provide a secure path over the Internet, through a firewall, to a virtual machine. In 1995, Tatu Ylönen, a researcher at Helsinki University of Technology, designed the first version of the protocol, prompted by a password-sniffing attack on his university network. The goal of SSH was to replace the earlier rlogin, TELNET, FTP and rsh protocols, which did not provide strong authentication nor guarantee confidentiality. Ylönen released his implementation as freeware in July 1995, an
Session Initiation Protocol
The Session Initiation Protocol is a signaling protocol used for initiating and terminating real-time sessions that include voice and messaging applications. SIP is used for signaling and controlling multimedia communication sessions in applications of Internet telephony for voice and video calls, in private IP telephone systems, in instant messaging over Internet Protocol networks as well as mobile phone calling over LTE; the protocol defines the specific format of messages exchanged and the sequence of communications for cooperation of the participants. SIP is a text-based protocol, incorporating many elements of the Hypertext Transfer Protocol and the Simple Mail Transfer Protocol. A call established with SIP may consist of multiple media streams, but no separate streams are required for applications, such as text messaging, that exchange data as payload in the SIP message. SIP works in conjunction with several other protocols that carry the session media. Most media type and parameter negotiation and media setup is performed with the Session Description Protocol, carried as payload in SIP messages.
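Because SIP is text-based and HTTP-like, a request is simply a request line followed by header fields, separated by CRLF pairs. A minimal, illustrative assembler is sketched below; a real request needs further mandatory headers (e.g. Via, CSeq, Call-ID, Max-Forwards) that are omitted here for brevity:

```python
CRLF = "\r\n"

def sip_request(method: str, uri: str, headers: dict) -> str:
    """Assemble a minimal SIP request string.

    Header values are illustrative; this sketch performs no
    validation and omits several headers RFC 3261 requires.
    """
    lines = [f"{method} {uri} SIP/2.0"]
    lines += [f"{name}: {value}" for name, value in headers.items()]
    # A blank line terminates the header section, as in HTTP.
    return CRLF.join(lines) + CRLF + CRLF

msg = sip_request("OPTIONS", "sip:alice@example.com", {
    "From": "<sip:bob@example.net>",
    "To": "<sip:alice@example.com>",
    "Content-Length": "0",
})
print(msg.splitlines()[0])  # OPTIONS sip:alice@example.com SIP/2.0
```

For an INVITE that sets up a call, the body after the blank line would carry the SDP session description rather than being empty.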
SIP is designed to be independent of the underlying transport layer protocol, can be used with the User Datagram Protocol, the Transmission Control Protocol, the Stream Control Transmission Protocol. For secure transmissions of SIP messages over insecure network links, the protocol may be encrypted with Transport Layer Security. For the transmission of media streams the SDP payload carried in SIP messages employs the Real-time Transport Protocol or the Secure Real-time Transport Protocol. SIP was designed by Mark Handley, Henning Schulzrinne, Eve Schooler and Jonathan Rosenberg in 1996; the protocol was standardized as RFC 2543 in 1999. In November 2000, SIP was accepted as a 3GPP signaling protocol and permanent element of the IP Multimedia Subsystem architecture for IP-based streaming multimedia services in cellular networks. In June 2002 the specification was revised in RFC 3261 and various extensions and clarifications have been published since. SIP was designed to provide a signaling and call setup protocol for IP-based communications supporting the call processing functions and features present in the public switched telephone network with a vision of supporting new multimedia applications.
It has been extended for video conferencing, streaming media distribution, instant messaging, presence information, file transfer, Internet fax and online games. SIP is distinguished by its proponents for having roots in the Internet community rather than in the telecommunications industry. SIP has been standardized by the IETF, while other protocols, such as H.323, have traditionally been associated with the International Telecommunication Union. SIP is involved only in the signaling operations of a media communication session and is used to set up and terminate voice or video calls. SIP can be used to establish multiparty sessions, and it allows modification of existing calls. The modification can involve changing addresses or ports, inviting more participants, and adding or deleting media streams. SIP has also found applications in messaging, such as instant messaging, and in event subscription and notification. SIP works in conjunction with several other protocols that specify the media format and coding and that carry the media once the call is set up.
For call setup, the body of a SIP message contains a Session Description Protocol (SDP) data unit, which specifies the media format and media communication protocol. Voice and video media streams are typically carried between the terminals using the Real-time Transport Protocol or the Secure Real-time Transport Protocol. Every resource of a SIP network, such as a user agent, call router or voicemail box, is identified by a Uniform Resource Identifier (URI). The syntax of the URI follows the general standard syntax used in Web services and e-mail. The URI scheme used for SIP is sip, and a typical SIP URI has the form sip:username@domainname or sip:username@hostport, where domainname requires DNS SRV records to locate the servers for the SIP domain, while hostport can be an IP address or a fully qualified domain name of the host and port. If secure transmission is required, the scheme sips is used. SIP employs design elements similar to the HTTP request/response transaction model; each transaction consists of a client request that invokes a particular method or function on the server and at least one response.
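The sip:user@host[:port] shape described above can be picked apart with a small regular expression. This is a rough sketch: the full RFC 3261 grammar also allows URI parameters and embedded headers, which this pattern deliberately ignores:

```python
import re

# Matches sip/sips, a user part, a host, and an optional port.
# Parameters (";transport=udp") and headers ("?subject=...") are
# not handled by this simplified pattern.
SIP_URI = re.compile(r"^(sips?):([^@]+)@([^:;?]+)(?::(\d+))?")

def parse_sip_uri(uri: str):
    """Return (scheme, user, host, port) for a simple SIP URI."""
    m = SIP_URI.match(uri)
    if not m:
        raise ValueError(f"not a SIP URI: {uri}")
    scheme, user, host, port = m.groups()
    return scheme, user, host, int(port) if port else None

print(parse_sip_uri("sip:alice@example.com"))
# ('sip', 'alice', 'example.com', None)
print(parse_sip_uri("sips:bob@192.0.2.4:5061"))
# ('sips', 'bob', '192.0.2.4', 5061)
```

When no port is given, the client falls back to DNS SRV lookup for the domain (or the protocol's default ports) to find the actual server.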
SIP reuses most of the header fields, encoding rules and status codes of HTTP, providing a readable text-based format. SIP can be carried by several transport layer protocols, including the Transmission Control Protocol, the User Datagram Protocol and the Stream Control Transmission Protocol. SIP clients typically use TCP or UDP on port numbers 5060 or 5061 for SIP traffic to servers and other endpoints. Port 5060 is used for non-encrypted signaling traffic, whereas port 5061 is used for traffic encrypted with Transport Layer Security. SIP-based telephony networks often implement call processing features of Signaling System 7 (SS7), for which special SIP protocol extensions exist, although the two protocols themselves are very different. SS7 is a centralized protocol, characterized by a complex central network architecture and dumb endpoints. SIP, by contrast, is a client-server protocol of equipotent peers. SIP features are implemented in the communicating endpoints, while the traditional SS7 architecture is in use only between switching centers. The network elements that use the Session Initiation Protocol for commun