Electronic mail is a method of exchanging messages between people using electronic devices. Email entered limited use in the 1960s and by the mid-1970s had taken the form now recognized as email; Ray Tomlinson is credited as the inventor of networked email. Email operates across computer networks, which today is primarily the Internet. Some early email systems required the author and the recipient to both be online at the same time, in common with instant messaging. Today's email systems are based on a store-and-forward model: email servers accept, forward and store messages, and neither the users nor their computers are required to be online simultaneously. Originally an ASCII text-only communications medium, Internet email was extended by Multipurpose Internet Mail Extensions (MIME) to carry text in other character sets and multimedia content attachments. International email, with internationalized email addresses using UTF-8, has been standardized, but as of 2017 it had not been widely adopted. The history of modern Internet email services reaches back to the early ARPANET, with standards for encoding email messages published as early as 1973.
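As a hedged illustration of the MIME extension mentioned above (not part of the original text), the sketch below uses Python's standard email package to build a message that carries non-ASCII text and a binary attachment; the addresses, subject and attachment bytes are placeholders.

    from email.message import EmailMessage

    # Build a MIME message: UTF-8 text plus a binary attachment.
    # Addresses and payload are invented for illustration only.
    msg = EmailMessage()
    msg["From"] = "alice@a.org"
    msg["To"] = "bob@b.org"
    msg["Subject"] = "Grüße"                      # non-ASCII subject, carried via MIME encoding
    msg.set_content("Text in a non-ASCII character set: héllo, 你好")
    msg.add_attachment(b"\x89PNG...placeholder bytes",
                       maintype="image", subtype="png",
                       filename="photo.png")      # multimedia content attachment
    print(msg.as_string()[:400])                  # headers show MIME-Version and multipart/mixed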
An email message sent in the early 1970s looks similar to a basic email sent today. Email played an important role in creating the Internet, and the conversion from ARPANET to the Internet in the early 1980s produced the core of the current services. The term electronic mail was used generically for any electronic document transmission; for example, several writers in the early 1970s used the term to describe fax document transmission. As a result, it is difficult to find the first citation for the use of the term with the more specific meaning it has today. Electronic mail has been most commonly called email or e-mail since around 1993, but variations of the spelling have been used. Email is the most common form used online; it is required by IETF Requests for Comments and working groups and by many style guides, and this spelling appears in most dictionaries. E-mail is the form that sometimes appears in edited, published American English and British English writing, as reflected in the Corpus of Contemporary American English data, but it is falling out of favor in some style guides.
Mail was the form used in the original protocol standard, RFC 524. The service is referred to as mail, and a single piece of electronic mail is called a message. EMail is a traditional form, used in RFCs for the "Author's Address" and expressly required "for historical reasons". E-mail is sometimes used, capitalizing the initial E as in similar abbreviations like E-piano, E-guitar, A-bomb and H-bomb. An Internet email consists of an envelope and content. Computer-based mail and messaging became possible with the advent of time-sharing computers in the early 1960s, and informal methods of using shared files to pass messages were soon expanded into the first mail systems. Most developers of early mainframes and minicomputers developed similar, but incompatible, mail applications. Over time, a complex web of gateways and routing systems linked many of them. Many US universities were part of the ARPANET, which aimed at software portability between its systems; that portability helped make the Simple Mail Transfer Protocol (SMTP) influential.
For a time in the late 1980s and early 1990s, it seemed that either a proprietary commercial system or the X.400 email system, part of the Government Open Systems Interconnection Profile, would predominate. However, once the final restrictions on carrying commercial traffic over the Internet ended in 1995, a combination of factors made the current Internet suite of SMTP, POP3 and IMAP email protocols the standard. A typical sequence of events takes place when sender Alice transmits a message using a mail user agent (MUA) addressed to the email address of the recipient. The MUA formats the message in email format and uses the submission protocol, a profile of the Simple Mail Transfer Protocol, to send the message content to the local mail submission agent (MSA), in this case smtp.a.org. The MSA determines the destination address provided in the SMTP protocol, in this case firstname.lastname@example.org, a fully qualified domain address. The part before the @ sign is the local part of the address, often the username of the recipient, and the part after the @ sign is a domain name.
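As a minimal illustration (in Python, not from the original text), the example address above can be split into those two parts:

    addr = "firstname.lastname@example.org"
    local_part, _, domain = addr.rpartition("@")   # split on the last @ sign
    print(local_part)   # firstname.lastname  (the recipient's mailbox name)
    print(domain)       # example.org         (used for the DNS MX lookup described next)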
The MSA resolves the domain name to determine the fully qualified domain name of the mail server in the Domain Name System (DNS). The DNS server for the domain b.org responds with any MX records listing the mail exchange servers for that domain, in this case mx.b.org, a message transfer agent (MTA) server run by the recipient's ISP. smtp.a.org sends the message to mx.b.org using SMTP. This server may need to forward the message to other MTAs before the message reaches the final message delivery agent (MDA); the MDA delivers it to the mailbox of user bob. Bob's MUA picks up the message using either the Post Office Protocol (POP3) or the Internet Message Access Protocol (IMAP). In addition to this example, many alternatives and complications exist in the email system: Alice or Bob may use a client connected to a corporate email system, such as IBM Lotus Notes or Microsoft Exchange; these systems have their own internal email format, and their clients communicate with the email server using a vendor-specific, proprietary protocol. The server sends or receives email via the Internet through the product's Internet mail gateway, which does any necessary reformatting.
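Returning to the Alice-to-Bob example, here is a hedged sketch of the submission step using Python's standard smtplib and email modules. The host smtp.a.org and the addresses come from the example above, while the port, STARTTLS upgrade and credentials are assumptions for illustration only; real settings vary by provider.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "alice@a.org"
    msg["To"] = "firstname.lastname@example.org"
    msg["Subject"] = "Hello"
    msg.set_content("Message body goes here.")

    # Submit the message to the local mail submission agent (MSA).
    # Port 587 and the login step are assumptions, not from the original text.
    with smtplib.SMTP("smtp.a.org", 587) as msa:
        msa.starttls()                         # upgrade the connection to TLS
        msa.login("alice", "app-password")     # hypothetical credentials
        msa.send_message(msg)                  # SMTP envelope derived from From/To headers

From here the MSA and intermediate MTAs relay the message toward mx.b.org as described above, and Bob's MUA later retrieves it over POP3 or IMAP.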
The Internet is the global system of interconnected computer networks that use the Internet protocol suite to link devices worldwide. It is a network of networks that consists of private, academic and government networks of local to global scope, linked by a broad array of electronic and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web, electronic mail and file sharing. Some publications no longer capitalize "internet". The origins of the Internet date back to research commissioned by the federal government of the United States in the 1960s to build robust, fault-tolerant communication with computer networks. The primary precursor network, the ARPANET, served as a backbone for the interconnection of regional academic and military networks in the 1980s. The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies and the merger of many networks.
The linking of commercial networks and enterprises by the early 1990s marked the beginning of the transition to the modern Internet and generated sustained exponential growth as generations of institutional and mobile computers were connected to the network. Although the Internet had been used by academia since the 1980s, commercialization incorporated its services and technologies into every aspect of modern life. Most traditional communication media, including telephony, television, paper mail and newspapers, have been reshaped, redefined, or bypassed by the Internet, giving birth to new services such as email, Internet telephony, Internet television, online music, digital newspapers and video streaming websites. Newspaper and other print publishing are adapting to website technology or are being reshaped into blogging, web feeds and online news aggregators. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums and social networking. Online shopping has grown exponentially both for major retailers and for small businesses and entrepreneurs, as it enables firms to extend their "brick and mortar" presence to serve a larger market or sell goods and services online.
Business-to-business and financial services on the Internet affect supply chains across entire industries. The Internet has no single centralized governance in either technological implementation or policies for access and usage; the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. In November 2006, the Internet was included on USA Today's list of New Seven Wonders. When the term Internet is used to refer to the specific global system of interconnected Internet Protocol networks, the word is a proper noun that should be written with an initial capital letter.
In common use and the media, it is erroneously not capitalized, viz. the internet. Some guides specify that the word should be capitalized when used as a noun, but not capitalized when used as an adjective; the Internet is often referred to as the Net, as a short form of network. As early as 1849, the word internetted was used uncapitalized as an adjective, meaning interconnected or interwoven; the designers of early computer networks used internet both as a noun and as a verb in shorthand form of internetwork or internetworking, meaning interconnecting computer networks. The terms Internet and World Wide Web are used interchangeably in everyday speech. However, the World Wide Web or the Web is only one of a large number of Internet services; the Web is a collection of interconnected documents and other web resources, linked by hyperlinks and URLs. As another point of comparison, Hypertext Transfer Protocol, or HTTP, is the language used on the Web for information transfer, yet it is just one of many languages or protocols that can be used for communication on the Internet.
The term Interweb is a portmanteau of Internet and World Wide Web used sarcastically to parody a technically unsavvy user. Research into packet switching, one of the fundamental Internet technologies, started in the early 1960s in the work of Paul Baran and Donald Davies. Packet-switched networks such as the NPL network, ARPANET, the Merit Network, CYCLADES and Telenet were developed in the late 1960s and early 1970s. The ARPANET project led to the development of protocols for internetworking, by which multiple separate networks could be joined into a network of networks. ARPANET development began on 29 October 1969 with two network nodes interconnected between the Network Measurement Center at the University of California, Los Angeles Henry Samueli School of Engineering and Applied Science, directed by Leonard Kleinrock, and the NLS system at SRI International, directed by Douglas Engelbart, in Menlo Park, California. The third site was the Culler-Fried Interactive Mathematics Center at the University of California, Santa Barbara, followed by the University of Utah.
Hypertext Transfer Protocol
The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can access, for example by a mouse click or by tapping the screen in a web browser. HTTP was developed to facilitate the World Wide Web. Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Development of HTTP standards was coordinated by the Internet Engineering Task Force and the World Wide Web Consortium, culminating in the publication of a series of Requests for Comments (RFCs). The first definition of HTTP/1.1, the version of HTTP in common use, occurred in RFC 2068 in 1997, although this was made obsolete by RFC 2616 in 1999 and again by the RFC 7230 family of RFCs in 2014. Its successor, HTTP/2, was standardized in 2015 and is now supported by major web servers and browsers over Transport Layer Security (TLS) using the Application-Layer Protocol Negotiation (ALPN) extension, where TLS 1.2 or newer is required.
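As a hedged sketch of that negotiation (not from the original text), the standard-library Python code below offers both h2 and http/1.1 via ALPN and prints whichever protocol the server selects; the host name is a placeholder.

    import socket
    import ssl

    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])      # offer HTTP/2, fall back to HTTP/1.1

    with socket.create_connection(("example.org", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.org") as tls:
            print(tls.version())                    # e.g. TLSv1.3 (HTTP/2 requires TLS 1.2+)
            print(tls.selected_alpn_protocol())     # 'h2' if the server negotiated HTTP/2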
HTTP functions as a request–response protocol in the client–server computing model. A web browser, for example, may be the client, and an application running on a computer hosting a website may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content, or performs other functions on behalf of the client, returns a response message to the client; the response contains completion status information about the request and may contain requested content in its message body. A web browser is an example of a user agent. Other types of user agent include the indexing software used by search providers, voice browsers, mobile apps, and other software that accesses, consumes, or displays web content. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites benefit from web cache servers that deliver content on behalf of upstream servers to improve response time.
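To make the request–response exchange concrete, here is a minimal sketch using Python's standard http.client module; example.org is a placeholder host, and the second request on the same object also illustrates the HTTP/1.1 connection reuse discussed below.

    import http.client

    conn = http.client.HTTPSConnection("example.org")

    conn.request("GET", "/")                 # client submits a request message
    resp = conn.getresponse()                # server returns a response message
    print(resp.status, resp.reason)          # completion status, e.g. 200 OK
    body = resp.read()                       # requested content in the message body

    # HTTP/1.1 keeps the underlying TCP connection open, so a second
    # request can reuse it instead of opening a new connection.
    conn.request("GET", "/style.css")
    resp2 = conn.getresponse()
    resp2.read()

    conn.close()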
Web browsers cache previously accessed web resources and reuse them, when possible, to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers. HTTP is an application layer protocol designed within the framework of the Internet protocol suite. Its definition presumes an underlying and reliable transport layer protocol, and the Transmission Control Protocol (TCP) is commonly used. However, HTTP can be adapted to use unreliable protocols such as the User Datagram Protocol (UDP), for example in HTTPU and the Simple Service Discovery Protocol. HTTP resources are identified and located on the network by Uniform Resource Locators (URLs), using the Uniform Resource Identifier (URI) schemes http and https. URIs and hyperlinks in HTML documents form interlinked hypertext documents. HTTP/1.1 is a revision of the original HTTP. In HTTP/1.0 a separate connection to the same server is made for every resource request. HTTP/1.1 can reuse a connection multiple times to download images, stylesheets and other resources after the page has been delivered.
HTTP/1.1 communications therefore experience less latency, as the establishment of TCP connections presents considerable overhead. The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a text-based web browser. Berners-Lee first proposed the "WorldWideWeb" project in 1989, now known as the World Wide Web. The first version of the protocol had only one method, namely GET, which would request a page from a server; the response from the server was always an HTML page. The first documented version of HTTP was HTTP V0.9. Dave Raggett led the HTTP Working Group in 1995 and wanted to expand the protocol with extended operations, extended negotiation, richer meta-information, tied with a security protocol, and made more efficient by adding additional methods and header fields.
RFC 1945 introduced and recognized HTTP V1.0 in 1996. The HTTP WG planned to publish new standards in December 1995, and support for pre-standard HTTP/1.1, based on the developing RFC 2068, was adopted by the major browser developers in early 1996. By March that year, pre-standard HTTP/1.1 was supported in Arena, Netscape 2.0, Netscape Navigator Gold 2.01, Mosaic 2.7, Lynx 2.5 and Internet Explorer 2.0. End-user adoption of the new browsers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet were HTTP/1.1 compliant; that same company reported that by June 1996, 65% of all browsers accessing its servers were HTTP/1.1 compliant. The HTTP/1.1 standard as defined in RFC 2068 was released in January 1997. Improvements and updates to the HTTP/1.1 standard were released under RFC 2616 in June 1999. In 2007, the HTTPbis Working Group was formed, in part, to revise and clarify the HTTP/1.1 specification. In June 2014, the working group released an updated six-part specification obsoleting RFC 2616: RFC 7230 (Message Syntax and Routing), RFC 7231 (Semantics and Content), RFC 7232 (Conditional Requests), RFC 7233 (Range Requests), RFC 7234 (Caching) and RFC 7235 (Authentication).
A workstation is a special computer designed for technical or scientific applications. Intended to be used by one person at a time, workstations are commonly connected to a local area network and run multi-user operating systems. The term workstation has been used loosely to refer to everything from a mainframe computer terminal to a PC connected to a network, but the most common form refers to the group of hardware offered by several current and defunct companies such as Sun Microsystems, Silicon Graphics, Apollo Computer, DEC, HP, NeXT and IBM, which opened the door for the 3D graphics animation revolution of the late 1990s. Workstations offered higher performance than mainstream personal computers with respect to CPU, graphics, memory capacity and multitasking capability. Workstations were optimized for the visualization and manipulation of different types of complex data such as 3D mechanical design, engineering simulation, rendering of images and mathematical plots. The form factor is typically that of a desktop computer, consisting of a high-resolution display, a keyboard and a mouse at a minimum, but often offering multiple displays, graphics tablets, 3D mice, etc.
Workstations were the first segment of the computer market to present advanced accessories and collaboration tools. The increasing capabilities of mainstream PCs in the late 1990s have blurred the lines somewhat between PCs and technical/scientific workstations. Historically, the workstation market employed proprietary hardware which made workstations distinct from PCs; however, by the early 2000s this difference had largely disappeared, as workstations now use commoditized hardware dominated by large PC vendors, such as Dell, Hewlett-Packard and Fujitsu, selling Microsoft Windows or Linux systems running on x86-64 processors. The first computer that might qualify as a "workstation" was the IBM 1620, a small scientific computer designed to be used interactively by a single person sitting at the console. It was introduced in 1960. One peculiar feature of the machine was that it lacked conventional arithmetic circuitry: to perform addition, it required a memory-resident table of decimal addition rules. This saved on the cost of logic circuitry. The machine was code-named CADET and rented for $1000 a month.
In 1965, IBM introduced the IBM 1130 scientific computer, meant as the successor to the 1620. Both of these systems came with the ability to run programs written in Fortran and other languages. Both the 1620 and the 1130 were built into desk-sized cabinets. Both were available with add-on disk drives and both paper-tape and punched-card I/O, and a console typewriter for direct interaction was standard on each. Early examples of workstations were dedicated minicomputers; a notable example was the PDP-8 from Digital Equipment Corporation, regarded as the first commercial minicomputer. The Lisp machines developed at MIT in the early 1970s pioneered some of the principles of the workstation computer, as they were high-performance, single-user systems intended for interactive use. Lisp machines were commercialized beginning in 1980 by companies like Symbolics, Lisp Machines Inc., Texas Instruments and Xerox. The first computer designed for a single user, with high-resolution graphics facilities, was the Xerox Alto, developed at Xerox PARC in 1973.
Other early workstations include the Terak 8510/a, the Three Rivers PERQ and the Xerox Star. In the early 1980s, with the advent of 32-bit microprocessors such as the Motorola 68000, a number of new participants in this field appeared, including Apollo Computer and Sun Microsystems, who created Unix-based workstations based on this processor. Meanwhile, DARPA's VLSI Project created several spinoff graphics products as well, notably the SGI 3130 and Silicon Graphics' range of machines that followed. It was not uncommon to differentiate the target market for the products, with Sun and Apollo considered to be network workstations, while the SGI machines were graphics workstations. As RISC microprocessors became available in the mid-1980s, these were adopted by many workstation vendors. Workstations tended to be very expensive, typically several times the cost of a standard PC and sometimes costing as much as a new car; however, minicomputers sometimes cost as much as a house. The high expense came from using costlier components that ran faster than those found at the local computer store, as well as the inclusion of features not found in PCs of the time, such as high-speed networking and sophisticated graphics.
Workstation manufacturers tend to take a "balanced" approach to system design, making certain to avoid bottlenecks so that data can flow unimpeded between the many different subsystems within a computer. Additionally, given their more specialized nature, workstations tend to have higher profit margins than commodity-driven PCs. The systems that come out of workstation companies feature SCSI or Fibre Channel disk storage systems, high-end 3D accelerators, single or multiple 64-bit processors, large amounts of RAM and well-designed cooling. Additionally, the companies that make the products tend to have good repair/replacement plans. However, the line between workstation and PC is becoming increasingly blurred as the demand for fast computers and high-end graphics has become commonplace in the mainstream PC market.
In computing, a firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules. A firewall establishes a barrier between a trusted internal network and an untrusted external network, such as the Internet. Firewalls are categorized as either network firewalls or host-based firewalls. Network firewalls run on network hardware; host-based firewalls run on host computers and control network traffic in and out of those machines. The term firewall originally referred to a wall intended to confine a fire within a building. Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment. The term was applied in the late 1980s to network technology that emerged when the Internet was new in terms of its global use and connectivity. The predecessors to firewalls for network security were the routers used in the late 1980s, because they separated networks from one another, thus halting the spread of problems from one network to another.
The first reported type of network firewall is called a packet filter. Packet filters act by inspecting packets transferred between computers. When a packet does not match the packet filter's set of filtering rules, the packet filter either drops the packet or rejects it; otherwise it is allowed to pass. Packets may be filtered by source and destination network addresses, protocol, and source and destination port numbers. The bulk of Internet communication in the 20th and early 21st century used either the Transmission Control Protocol or the User Datagram Protocol in conjunction with well-known ports, enabling firewalls of that era to distinguish between, and thus control, specific types of traffic, unless the machines on each side of the packet filter used the same non-standard ports. The first paper published on firewall technology appeared in 1988, when engineers from Digital Equipment Corporation developed filter systems known as packet filter firewalls. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin continued their research in packet filtering and developed a working model for their own company based on their original first-generation architecture.
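A toy Python sketch of that rule-matching logic follows; the addresses, ports and rules are invented for illustration, and a real packet filter operates in the kernel on raw packets rather than on function arguments.

    import ipaddress
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Rule:
        action: str                     # "allow" or "drop"
        src: ipaddress.IPv4Network      # source network to match
        dst: ipaddress.IPv4Network      # destination network to match
        protocol: str                   # "tcp" or "udp"
        dst_port: Optional[int]         # None matches any destination port

    RULES = [
        Rule("allow", ipaddress.ip_network("0.0.0.0/0"),
             ipaddress.ip_network("192.0.2.0/24"), "tcp", 25),   # permit SMTP to our mail server
        Rule("drop",  ipaddress.ip_network("0.0.0.0/0"),
             ipaddress.ip_network("0.0.0.0/0"), "tcp", None),    # drop all other TCP
    ]

    def filter_packet(src_ip, dst_ip, protocol, dst_port):
        """Return the action of the first matching rule, dropping by default."""
        for rule in RULES:
            if (ipaddress.ip_address(src_ip) in rule.src
                    and ipaddress.ip_address(dst_ip) in rule.dst
                    and protocol == rule.protocol
                    and rule.dst_port in (None, dst_port)):
                return rule.action
        return "drop"

    print(filter_packet("198.51.100.7", "192.0.2.10", "tcp", 25))   # allow
    print(filter_packet("198.51.100.7", "192.0.2.10", "tcp", 23))   # drop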
From 1989–1990, three colleagues from AT&T Bell Laboratories, Dave Presotto, Janardan Sharma, Kshitij Nigam, developed the second generation of firewalls, calling them circuit-level gateways. Second-generation firewalls perform the work of their first-generation predecessors but maintain knowledge of specific conversations between endpoints by remembering which port number the two IP addresses are using at layer 4 of the OSI model for their conversation, allowing examination of the overall exchange between the nodes; this type of firewall is vulnerable to denial-of-service attacks that bombard the firewall with fake connections in an attempt to overwhelm the firewall by filling its connection state memory. Marcus Ranum, Wei Xu, Peter Churchyard released an application firewall known as Firewall Toolkit in October 1993; this became the basis for Gauntlet firewall at Trusted Information Systems. The key benefit of application layer filtering is that it can understand certain applications and protocols.
This is useful because it can detect if an unwanted application or service is attempting to bypass the firewall using a disallowed protocol on an allowed port, or detect if a protocol is being abused in any harmful way. As of 2012, the so-called next-generation firewall is nothing more than "wider" or "deeper" inspection at the application layer. For example, the existing deep packet inspection functionality of modern firewalls can be extended to include intrusion prevention systems, user identity management integration and web application firewalls (WAFs). WAF attacks may be implemented in the tool "WAF Fingerprinting utilizing timing side channels". Firewalls are categorized as network-based or host-based. Network-based firewalls are positioned on the gateway computers of WANs and intranets; they are either software appliances running on general-purpose hardware, or hardware-based firewall computer appliances. Firewall appliances may offer other functionality to the internal network they protect, such as acting as a DHCP or VPN server for that network.
Host-based firewalls are positioned on the network node itself and control network traffic in and out of those machines. The host-based firewall may be a daemon or service as a part of the operating system, or an agent application such as endpoint security or protection software. Each approach has advantages and disadvantages, but each has a role in layered security. Firewalls also vary in type depending on where communication originates, where it is intercepted, and the state of communication being traced. Network layer firewalls, also called packet filters, operate at a relatively low level of the TCP/IP protocol stack, not allowing packets to pass through the firewall unless they match the established rule set; the firewall administrator may define the rules. The term "packet filter" originated in the context of BSD operating systems. Network layer firewalls fall into two sub-categories, stateful and stateless. Commonly used packet filters on various versions of Unix are ipfw, NPF, PF, ip
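To illustrate the stateful sub-category mentioned above (and the second-generation, circuit-level idea described earlier), here is a minimal, hypothetical Python sketch that remembers outbound conversations and admits only matching replies; real firewalls track far more state, such as TCP flags, sequence numbers and timeouts.

    # Conversations seen leaving the protected network,
    # keyed by (local IP, local port, remote IP, remote port).
    established = set()

    def outbound(local_ip, local_port, remote_ip, remote_port):
        """Record the conversation and let the packet out."""
        established.add((local_ip, local_port, remote_ip, remote_port))
        return "allow"

    def inbound(remote_ip, remote_port, local_ip, local_port):
        """Admit a packet only if it answers a conversation we initiated."""
        if (local_ip, local_port, remote_ip, remote_port) in established:
            return "allow"
        return "drop"

    outbound("10.0.0.5", 51515, "203.0.113.9", 443)           # local host opens a connection
    print(inbound("203.0.113.9", 443, "10.0.0.5", 51515))     # allow: reply to that connection
    print(inbound("203.0.113.9", 443, "10.0.0.5", 60000))     # drop: no matching state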
A router is a networking device that forwards data packets between computer networks. Routers perform the traffic-directing functions on the Internet. Data sent through the Internet, such as a web page or email, is in the form of data packets. A packet is forwarded from one router to another router through the networks that constitute an internetwork until it reaches its destination node. A router is connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the network address information in the packet to determine the ultimate destination. Using information in its routing table or routing policy, it then directs the packet to the next network on its journey. The most familiar type of routers are home and small office routers that forward IP packets between the home computers and the Internet. An example of a router would be the owner's cable or DSL router, which connects to the Internet through an Internet service provider. More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone.
Though routers are typically dedicated hardware devices, software-based routers also exist. When multiple routers are used in interconnected networks, the routers can exchange information about destination addresses using a routing protocol; each router builds up a routing table listing the preferred routes between any two systems on the interconnected networks. A router has two types of network element components organized onto separate planes. Control plane: a router maintains a routing table that lists which route should be used to forward a data packet, and through which physical interface connection. It does this using internal preconfigured directives, called static routes, or by learning routes dynamically using a routing protocol. Static and dynamic routes are stored in the routing table; the control-plane logic then strips non-essential directives from the table and builds a forwarding information base (FIB) to be used by the forwarding plane. Forwarding plane: the router forwards data packets between incoming and outgoing interface connections.
It forwards them to the correct network type using information that the packet header contains, matched to entries in the FIB supplied by the control plane. A router may have interfaces for different types of physical layer connections, such as copper cables, fiber optic or wireless transmission, and it can support different network layer transmission standards. Each network interface is used to enable data packets to be forwarded from one transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different network prefix. Routers may provide connectivity within enterprises, between enterprises and the Internet, or between internet service providers' networks. The largest routers may be used in large enterprise networks, while smaller routers provide connectivity for typical home and office networks. All sizes of routers may be found inside enterprises; the most powerful routers are found in ISPs, academic and research facilities.
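As a hedged sketch of the forwarding-plane lookup described above, the Python code below picks the longest matching prefix from a toy FIB; the prefixes and interface names are invented for illustration, and real routers perform this lookup in specialized hardware.

    import ipaddress

    # Toy forwarding information base (FIB): prefix -> outgoing interface.
    FIB = {
        ipaddress.ip_network("10.0.0.0/8"):  "eth1",
        ipaddress.ip_network("10.1.0.0/16"): "eth2",
        ipaddress.ip_network("0.0.0.0/0"):   "eth0",   # default route
    }

    def lookup(destination):
        """Pick the longest (most specific) prefix that contains the destination."""
        addr = ipaddress.ip_address(destination)
        matches = [net for net in FIB if addr in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return FIB[best]

    print(lookup("10.1.2.3"))   # eth2: the /16 is more specific than the /8
    print(lookup("8.8.8.8"))    # eth0: only the default route matches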
Large businesses may need more powerful routers to cope with ever-increasing demands of intranet data traffic. A hierarchical internetworking model for interconnecting routers in large networks is in common use. Access routers, including small office/home office (SOHO) models, are located at home and customer sites such as branch offices that do not need hierarchical routing of their own; they are optimized for low cost. Some SOHO routers are capable of running alternative free Linux-based firmware such as Tomato, OpenWrt or DD-WRT. Distribution routers aggregate traffic from multiple access routers. Distribution routers are responsible for enforcing quality of service across a wide area network, so they may have considerable memory installed, multiple WAN interface connections and substantial onboard data processing routines; they may also provide connectivity to groups of file servers or other external networks. In enterprises, a core router may provide a collapsed backbone interconnecting the distribution tier routers from multiple buildings of a campus, or large enterprise locations.
They lack some of the features of edge routers. External networks must be considered as part of the overall security strategy of the local network. A router may include a firewall, VPN handling and other security functions, or these may be handled by separate devices. Routers commonly perform network address translation (NAT), which restricts connections initiated from external connections but is not recognised as a security feature by all experts. Some experts argue that open source routers are more secure and reliable than closed source routers because open source routers allow mistakes to be found and corrected. Routers are often distinguished on the basis of the network in which they operate. A router in a local area network of a single organisation is called an interior router. A router operated in the Internet backbone is described as an exterior router, while a router that connects a LAN with the Internet or a wide area network is called a border router, or gateway router. Routers intended for ISP and major enterprise connectivity exchange routing information using the Border Gateway Protocol (BGP).
The RFC 4098 standard defines the types of BGP routers according to their functions. Edge router: also called a provider edge router, it is placed at the edge of an ISP network. The router uses External BGP to EBGP
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over cable media such as wires or optic cables, or wireless media such as Wi-Fi. Network computer devices that originate and terminate the data are called network nodes. Nodes are identified by network addresses, can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, use of email and instant messaging applications as well as many others.
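As a hedged, self-contained sketch of two nodes exchanging data over a connection (not from the original text), the Python snippet below runs both endpoints on one machine over the loopback interface, with an invented one-line application protocol layered over TCP, echoing the idea that application-specific protocols sit on top of more general ones.

    import socket
    import threading

    ready = threading.Event()

    def server():
        # Listen on a local port and answer one client with a tiny application-level reply.
        with socket.create_server(("127.0.0.1", 5000)) as srv:
            ready.set()                          # listening socket is up
            conn, _addr = srv.accept()
            with conn:
                data = conn.recv(1024)
                conn.sendall(b"HELLO " + data)

    threading.Thread(target=server, daemon=True).start()
    ready.wait()

    # The "client" node connects, sends its name, and reads the reply.
    with socket.create_connection(("127.0.0.1", 5000)) as client:
        client.sendall(b"node-A")
        print(client.recv(1024))                 # b'HELLO node-A'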
Computer networks differ in the transmission medium used to carry their signals, communications protocols to organize network traffic, the network's size, traffic control mechanisms and organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system, based on the Aloha network, developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of a Gigabit. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications allowing users to communicate efficiently and via various means: email, instant messaging, online chat, video telephone calls, video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files, data and other types of information.