The Internet is the global system of interconnected computer networks that use the Internet protocol suite to link devices worldwide. It is a network of networks consisting of private, academic, and government networks of local to global scope, linked by a broad array of electronic and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web, electronic mail, and file sharing. Some publications no longer capitalize "internet". The origins of the Internet date back to research commissioned by the federal government of the United States in the 1960s to build robust, fault-tolerant communication via computer networks. The primary precursor network, the ARPANET, served as a backbone for the interconnection of regional academic and military networks in the 1980s. The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies and to the merger of many networks.
The linking of commercial networks and enterprises by the early 1990s marked the beginning of the transition to the modern Internet and generated sustained exponential growth as generations of institutional and mobile computers were connected to the network. Although the Internet had been widely used by academia since the 1980s, commercialization incorporated its services and technologies into virtually every aspect of modern life. Most traditional communication media, including telephony, television, paper mail, and newspapers, have been reshaped, redefined, or bypassed by the Internet, giving birth to new services such as email, Internet telephony, Internet television, online music, digital newspapers, and video streaming websites. Newspapers and other print publishers are adapting to website technology or are being reshaped into blogging, web feeds, and online news aggregators. The Internet has enabled and accelerated new forms of personal interaction through instant messaging, Internet forums, and social networking. Online shopping has grown exponentially both for major retailers and for small businesses and entrepreneurs, as it enables firms to extend their "brick and mortar" presence to serve a larger market or to sell goods and services entirely online.
Business-to-business and financial services on the Internet affect supply chains across entire industries. The Internet has no single centralized governance in either technological implementation or policies for access and usage; only the overarching definitions of the two principal name spaces of the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may join by contributing technical expertise. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders. When the term Internet is used to refer to the specific global system of interconnected Internet Protocol networks, the word is a proper noun that should be written with an initial capital letter.
In common use and in the media, however, it is often not capitalized: "the internet". Some guides specify that the word should be capitalized when used as a noun but not when used as an adjective. The Internet is also often referred to as the Net, as a short form of network. As early as 1849, the word internetted was used uncapitalized as an adjective meaning interconnected or interwoven. The designers of early computer networks used internet both as a noun and as a verb, in shorthand form of internetwork or internetworking, meaning interconnecting computer networks. The terms Internet and World Wide Web are often used interchangeably in everyday speech, but the World Wide Web, or the Web, is only one of a large number of Internet services. The Web is a collection of interconnected documents and other web resources, linked by hyperlinks and URLs. As another point of comparison, Hypertext Transfer Protocol, or HTTP, is the language used on the Web for information transfer, yet it is just one of many languages or protocols that can be used for communication on the Internet.
The term Interweb is a portmanteau of Internet and World Wide Web, used sarcastically to parody a technically unsavvy user. Research into packet switching, one of the fundamental Internet technologies, started in the early 1960s with the work of Paul Baran and Donald Davies. Packet-switched networks such as the NPL network, the ARPANET, the Merit Network, CYCLADES, and Telenet were developed in the late 1960s and early 1970s. The ARPANET project led to the development of protocols for internetworking, by which multiple separate networks could be joined into a network of networks. ARPANET development began with two network nodes, interconnected on 29 October 1969 between the Network Measurement Center at the University of California, Los Angeles Henry Samueli School of Engineering and Applied Science, directed by Leonard Kleinrock, and the NLS system at SRI International, directed by Douglas Engelbart in Menlo Park, California. The third site was the Culler-Fried Interactive Mathematics Center at the University of California, Santa Barbara, followed by the University of
Frame Relay is a standardized wide area network technology that specifies the physical and data link layers of digital telecommunications channels using a packet switching methodology. Originally designed for transport across Integrated Services Digital Network (ISDN) infrastructure, it may be used today in the context of many other network interfaces. Network providers commonly implement Frame Relay for voice and data as an encapsulation technique between local area networks over a wide area network. Each end-user gets a private line to a Frame Relay node; the Frame Relay network handles the transmission over a frequently changing path that is transparent to the end users' extensively used WAN protocols. It is less expensive than leased lines, which is one reason for its popularity. The extreme simplicity of configuring user equipment in a Frame Relay network offers another reason for its popularity. With the advent of Ethernet over fiber optics, MPLS, VPNs, and dedicated broadband services such as cable modem and DSL, the end may loom for the Frame Relay protocol and encapsulation.
However, many rural areas still lack cable modem services; in such cases, the least expensive type of non-dial-up connection remains a 64 kbit/s Frame Relay line. Thus a retail chain, for instance, may use Frame Relay for connecting rural stores into its corporate WAN. The designers of Frame Relay aimed to provide a telecommunication service for cost-efficient data transmission for intermittent traffic between local area networks and between end-points in a wide area network. Frame Relay puts data in variable-size units called "frames" and leaves any necessary error correction up to the end-points; this speeds up overall data transmission. For most services, the network provides a permanent virtual circuit (PVC), which means that the customer sees a continuous, dedicated connection without having to pay for a full-time leased line, while the service provider figures out the route each frame travels to its destination and can charge based on usage. An enterprise can select a level of service quality, prioritizing some frames and making others less important.
Frame Relay can run on full T-carrier system lines. It complements and provides a mid-range service between basic rate ISDN, which offers bandwidth at 128 kbit/s, and Asynchronous Transfer Mode (ATM), which operates in somewhat similar fashion to Frame Relay but at speeds from 155.520 Mbit/s to 622.080 Mbit/s. Frame Relay has its technical base in the older X.25 packet-switching technology, designed for transmitting data on analog voice lines. Unlike X.25, whose designers expected analog signals with a relatively high chance of transmission errors, Frame Relay is a fast packet switching technology operating over links with a low chance of transmission errors, which means that the protocol does not attempt to correct them. When a Frame Relay network detects an error in a frame, it simply drops that frame; the end points have the responsibility for detecting and retransmitting dropped frames. Frame Relay is used to connect local area networks with major backbones, as well as on public wide-area networks and in private network environments with leased lines over T-1 lines.
It requires a dedicated connection during the transmission period. Frame Relay does not provide an ideal path for voice or video transmission, both of which require a steady flow of transmissions; however, under certain circumstances, voice and video transmission do use Frame Relay. Frame Relay originated as an extension of Integrated Services Digital Network; its designers aimed to enable a packet-switched network to transport over circuit-switched technology. The technology has become a stand-alone and cost-effective means of creating a WAN. Frame Relay switches create virtual circuits to connect remote LANs to a WAN. The Frame Relay network exists between a LAN border device, usually a router, and the carrier switch. The technology used by the carrier to transport data between the switches is variable and may differ among carriers. The sophistication of the technology requires a thorough understanding of the terms used to describe how Frame Relay works; without a firm understanding of Frame Relay, it is difficult to troubleshoot its performance.
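The role of the switch's virtual-circuit table can be sketched in a few lines of Python. This is an illustrative model, not a real implementation: the port numbers and DLCI values are hypothetical, and a real switch also handles congestion signalling and frame integrity.

```python
# A minimal sketch of a Frame Relay switch's virtual-circuit table.
# DLCIs are only locally significant, so the same PVC may use a
# different DLCI on each hop; the switch rewrites it at every port.

# (incoming port, incoming DLCI) -> (outgoing port, outgoing DLCI)
vc_table = {
    (1, 100): (3, 250),
    (1, 101): (4, 100),  # DLCI 100 reused on a different port
    (2, 16):  (3, 17),
}

def switch_frame(in_port, dlci):
    """Look up the PVC and return the rewritten (port, DLCI) pair."""
    try:
        return vc_table[(in_port, dlci)]
    except KeyError:
        return None  # unknown circuit: drop the frame
```

Because each mapping is keyed on the incoming port as well as the DLCI, the same DLCI value can safely identify different circuits on different access lines, which is what "local significance" means in practice.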
Frame Relay frame structure essentially mirrors that defined for LAP-D; traffic analysis can distinguish the Frame Relay format from LAP-D by its lack of a control field. Each Frame Relay protocol data unit consists of the following fields. Flag field: the flag is used to perform high-level data link synchronization, indicating the beginning and end of the frame with the unique pattern 01111110. To ensure that the 01111110 pattern does not appear somewhere inside the frame, bit stuffing and destuffing procedures are used. Address field: each address field may occupy octets 2 to 3, octets 2 to 4, or octets 2 to 5, depending on the range of the address in use. A two-octet address field comprises the EA (address field extension) bits and the C/R (command/response) bit. DLCI (Data Link Connection Identifier) bits: the DLCI serves to identify the virtual connection so that the receiving end knows which information connection a frame belongs to. Note that this DLCI has only local significance; a single physical channel can multiplex several different virtual connections.
FECN, BECN, and DE bits. These bits report congestion: FECN = Forward Explicit Congestion Notification
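The two-octet address field and the bit-stuffing procedure described above can be sketched in Python. The bit layout follows the common two-octet Frame Relay header (the 10-bit DLCI split 6+4 across the octets, with the C/R, FECN, BECN, DE, and EA bits in their usual positions); the example values used are hypothetical.

```python
def parse_address(octet1, octet2):
    """Decode a two-octet Frame Relay address field.

    Octet layout (bit 8 = MSB ... bit 1 = LSB):
      octet 1: DLCI high 6 bits | C/R | EA=0
      octet 2: DLCI low 4 bits  | FECN | BECN | DE | EA=1
    """
    dlci = ((octet1 >> 2) << 4) | (octet2 >> 4)
    return {
        "dlci": dlci,
        "cr":   (octet1 >> 1) & 1,
        "fecn": (octet2 >> 3) & 1,
        "becn": (octet2 >> 2) & 1,
        "de":   (octet2 >> 1) & 1,
    }

def bit_stuff(bits):
    """Insert a 0 after five consecutive 1s so the payload can
    never mimic the 01111110 flag pattern."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)
            run = 0
    return out
```

For example, the octet pair 0x18, 0x41 decodes to DLCI 100 with no congestion bits set, and a run of six 1-bits is stuffed into 1,1,1,1,1,0,1 on the wire.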
A router is a networking device that forwards data packets between computer networks. Routers perform the traffic-directing functions on the Internet. Data sent through the Internet, such as a web page or email, is in the form of data packets. A packet is forwarded from one router to another through the networks that constitute an internetwork until it reaches its destination node. A router is connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the network address information in the packet to determine the ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. The most familiar type of router is the home or small office router that simply forwards IP packets between the home computers and the Internet. An example is the owner's cable or DSL router, which connects to the Internet through an Internet service provider. More sophisticated routers, such as enterprise routers, connect large business or ISP networks up to the powerful core routers that forward data at high speed along the optical fiber lines of the Internet backbone.
Though routers are typically dedicated hardware devices, software-based routers also exist. When multiple routers are used in interconnected networks, the routers can exchange information about destination addresses using a routing protocol. Each router builds up a routing table listing the preferred routes between any two systems on the interconnected networks. A router has two types of network element components organized onto separate planes. Control plane: a router maintains a routing table that lists which route should be used to forward a data packet, and through which physical interface connection. It does this using internal preconfigured directives, called static routes, or by learning routes dynamically using a routing protocol. Static and dynamic routes are stored in the routing table. The control-plane logic then strips non-essential directives from the table and builds a forwarding information base (FIB) to be used by the forwarding plane. Forwarding plane: the router forwards data packets between incoming and outgoing interface connections.
It forwards them to the correct network type using information that the packet header contains, matched against entries in the FIB supplied by the control plane. A router may have interfaces for different types of physical layer connections, such as copper cables, fiber optic, or wireless transmission, and it can support different network layer transmission standards. Each network interface is used to enable data packets to be forwarded from one transmission system to another. Routers may also be used to connect two or more logical groups of computer devices known as subnets, each with a different network prefix. Routers may provide connectivity within enterprises, between enterprises and the Internet, or between internet service providers' networks. The largest routers may be used in large enterprise networks, while smaller routers provide connectivity for typical home and office networks. All sizes of routers may be found inside enterprises; the most powerful routers are usually found in ISPs and in academic and research facilities.
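The forwarding decision described above, matching a destination address against the FIB and preferring the most specific route, can be sketched with Python's standard ipaddress module. The prefixes, next hops, and interface names here are hypothetical.

```python
import ipaddress

# Hypothetical forwarding information base: prefix -> (next hop, interface)
fib = {
    ipaddress.ip_network("0.0.0.0/0"):   ("203.0.113.1", "wan0"),  # default route
    ipaddress.ip_network("10.0.0.0/8"):  ("10.0.0.2",    "eth1"),
    ipaddress.ip_network("10.1.0.0/16"): ("10.1.0.2",    "eth2"),
}

def lookup(dst):
    """Return (next hop, interface) for the destination address,
    using longest-prefix match."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in fib if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # most specific wins
    return fib[best]
```

A destination inside 10.1.0.0/16 matches both the /8 and the /16 entries; longest-prefix match selects the /16, while any address with no more specific entry falls through to the default route.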
Large businesses may also need more powerful routers to cope with the ever-increasing demands of intranet data traffic. A hierarchical internetworking model for interconnecting routers in large networks is in common use. Access routers, including small office/home office (SOHO) models, are located at home and customer sites such as branch offices that do not need hierarchical routing of their own; they are optimized for low cost. Some SOHO routers are capable of running alternative free Linux-based firmware such as Tomato, OpenWrt, or DD-WRT. Distribution routers aggregate traffic from multiple access routers. Distribution routers are often responsible for enforcing quality of service across a wide area network, so they may have considerable memory installed, multiple WAN interface connections, and substantial onboard data processing routines. They may also provide connectivity to groups of file servers or other external networks. In enterprises, a core router may provide a collapsed backbone interconnecting the distribution-tier routers from multiple buildings of a campus, or large enterprise locations.
They lack some of the features of edge routers. External networks must be considered as part of the overall security strategy of the local network. A router may include a firewall, VPN handling, and other security functions, or these may be handled by separate devices. Routers also commonly perform network address translation (NAT), which restricts connections initiated from external hosts but is not recognised as a security feature by all experts. Some experts argue that open source routers are more secure and reliable than closed source routers because open source routers allow mistakes to be found and corrected. Routers are also often distinguished on the basis of the network in which they operate. A router in a local area network of a single organisation is called an interior router. A router operated in the Internet backbone is described as an exterior router, while a router that connects a LAN with the Internet or a wide area network is called a border router, or gateway router. Routers intended for ISP and major enterprise connectivity exchange routing information using the Border Gateway Protocol (BGP).
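Why NAT incidentally blocks unsolicited inbound connections can be sketched as a translation table that is only populated by outbound traffic. This is a simplified model with hypothetical addresses and ports; a real NAT also tracks protocols, timeouts, and per-connection state.

```python
import itertools

public_ip = "203.0.113.7"        # hypothetical public address
nat_table = {}                   # public port -> (private ip, private port)
port_pool = itertools.count(40000)

def outbound(src_ip, src_port):
    """An inside host opens a connection: allocate a public port
    and record the mapping."""
    pub_port = next(port_pool)
    nat_table[pub_port] = (src_ip, src_port)
    return (public_ip, pub_port)

def inbound(dst_port):
    """Inbound traffic is delivered only if an inside host created
    the mapping first; otherwise there is nowhere to send it."""
    return nat_table.get(dst_port)  # None means: drop the packet
```

The side effect that only established mappings admit inbound traffic is why NAT is often treated as a security feature, even though, as noted above, not all experts accept that characterization.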
RFC 4098 defines the types of BGP routers according to their functions. Edge router: also called a provider edge router, this is placed at the edge of an ISP network. The router uses External BGP (EBGP)
X.25 is an ITU-T standard protocol suite for packet-switched wide area network communication. An X.25 WAN consists of packet-switching exchange nodes as the networking hardware, and leased lines, plain old telephone service connections, or ISDN connections as physical links. X.25 was defined by the International Telegraph and Telephone Consultative Committee (CCITT) in a series of drafts and finalized in a publication known as The Orange Book in 1976. X.25 networks were popular during the 1980s with telecommunications companies and in financial transaction systems such as automated teller machines; however, most uses have since moved to Internet Protocol systems. X.25 is still used and available in niche applications such as Retronet, which allows vintage computers to use the Internet. X.25 is one of the oldest packet-switched services available; it was developed before the OSI Reference Model. The protocol suite is designed as three conceptual layers, which correspond to the lower three layers of the seven-layer OSI model.
It also supports functionality not found in the OSI network layer. X.25 was developed in the ITU-T Study Group VII based upon a number of emerging data network projects. Various updates and additions were worked into the standard, recorded in the ITU series of technical books describing the telecommunication systems; these books were published every fourth year with different-colored covers. The X.25 specification is only part of the larger set of X-Series specifications on public data networks. The public data network was the common name given to the international collection of X.25 providers, whose combined network had large global coverage into the 1990s. Publicly accessible X.25 networks were set up in most countries during the 1970s and 1980s to lower the cost of accessing various online services. Beginning in the early 1990s, in North America, use of X.25 networks started to be replaced by Frame Relay services offered by national telephone companies. Most systems that required X.25 now use TCP/IP; however, it is possible to transport X.25 over TCP/IP when necessary.
X.25 networks are still in use throughout the world. A variant called AX.25 is used by amateur packet radio. Racal Paknet, now known as Widanet, is still in operation in many regions of the world, running on an X.25 protocol base. In some countries, like the Netherlands or Germany, it is possible to use a stripped version of X.25 via the D-channel of an ISDN-2 connection for low-volume applications such as point-of-sale terminals. Additionally, X.25 is still in heavy use in the aeronautical business, although a transition to more modern protocols like X.400 is under way as X.25 hardware becomes rare and costly. As of March 2006, the United States National Airspace Data Interchange Network has used X.25 to interconnect remote airfields with Air Route Traffic Control Centers. France was one of the last remaining countries where a commercial end-user service based on X.25 operated. Known as Minitel, it was based on Videotex, itself running on X.25. In 2002, Minitel had about 9 million users; in 2011, it still accounted for about 2 million users in France when France Télécom announced it would shut down the service by 30 June 2012.
As planned, the service was terminated on 30 June 2012, with 800,000 terminals still in operation at the time. The general concept of X.25 was to create a global packet-switched network. Much of the X.25 system is a description of the rigorous error correction needed to achieve this, as well as of more efficient sharing of capital-intensive physical resources. The X.25 specification defines only the interface between a subscriber (DTE) and an X.25 network (DCE). X.75, a protocol similar to X.25, defines the interface between two X.25 networks to allow connections to traverse two or more networks. X.25 does not specify how the network operates internally; many X.25 network implementations used something similar to X.25 or X.75 internally, but others used quite different protocols. The ISO protocol equivalent to X.25, ISO 8208, is compatible with X.25 but additionally includes provision for two X.25 DTEs to be directly connected to each other with no network in between. By separating the Packet-Layer Protocol, ISO 8208 permits operation over additional networks such as ISO 8802 LLC2 and the OSI data link layer.
X.25 defined three basic protocol levels or architectural layers. In the original specifications these were referred to as levels and had a level number, whereas all ITU-T X.25 recommendations and ISO 8208 standards released after 1984 refer to them as layers; the layer numbers were dropped to avoid confusion with the OSI model layers. Physical layer: this layer specifies the physical, electrical and procedural characteristics to control the physical link between a DTE and a DCE. Common implementations use X.21, EIA-449 or other serial protocols. Data link layer: the data link layer consists of the link access procedure for data interchange on the link between a DTE and a DCE. In its implementation, the Link Access Procedure, Balanced (LAPB) is a data link protocol that manages a communication session and controls the packet framing. It is a bit-oriented protocol that provides orderly delivery. Packet layer: this layer defined a packet-layer protocol for exchanging control and user data packets to form a packet-switching network based on virtual calls, acco
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over cable media such as wires or optic cables, or over wireless media such as Wi-Fi. Network computer devices that originate and terminate the data are called network nodes. Nodes are identified by network addresses and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other more general communications protocols. This formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, as well as many others.
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, the traffic control mechanism, and the organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes the following. In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system SABRE (semi-automatic business research environment) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, and later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the Aloha network, developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of a gigabit per second. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls, and video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files, and
A mathematical model is a description of a system using mathematical concepts and language. The process of developing a mathematical model is termed mathematical modeling. Mathematical models are used in the natural sciences and engineering disciplines, as well as in the social sciences. A model may help to explain a system, to study the effects of its different components, and to make predictions about behaviour. Mathematical models can take many forms, including dynamical systems, statistical models, differential equations, or game-theoretic models. These and other types of models can overlap, with a given model involving a variety of abstract structures. In general, mathematical models may include logical models. In many cases, the quality of a scientific field depends on how well the mathematical models developed on the theoretical side agree with the results of repeatable experiments. Lack of agreement between theoretical mathematical models and experimental measurements often leads to important advances as better theories are developed.
In the physical sciences, a traditional mathematical model contains most of the following elements: governing equations, supplementary sub-models, defining equations, constitutive equations, assumptions and constraints, initial and boundary conditions, and classical constraints and kinematic equations. Mathematical models are composed of relationships and variables. Relationships can be described by operators, such as algebraic operators, differential operators, etc. Variables are abstractions of system parameters of interest. Several classification criteria can be used for mathematical models according to their structure. Linear vs. nonlinear: if all the operators in a mathematical model exhibit linearity, the resulting mathematical model is defined as linear; a model is considered to be nonlinear otherwise. The definition of linearity and nonlinearity is dependent on context, and linear models may have nonlinear expressions in them. For example, in a statistical linear model, it is assumed that a relationship is linear in the parameters, but it may be nonlinear in the predictor variables.
Similarly, a differential equation is said to be linear if it can be written with linear differential operators, but it can still have nonlinear expressions in it. In a mathematical programming model, if the objective functions and constraints are represented entirely by linear equations, then the model is regarded as a linear model; if one or more of the objective functions or constraints are represented with a nonlinear equation, then the model is known as a nonlinear model. Nonlinearity, even in fairly simple systems, is often associated with phenomena such as chaos and irreversibility. Although there are exceptions, nonlinear systems and models tend to be more difficult to study than linear ones. A common approach to nonlinear problems is linearization, but this can be problematic if one is trying to study aspects such as irreversibility, which are strongly tied to nonlinearity. Static vs. dynamic: a dynamic model accounts for time-dependent changes in the state of the system, while a static model calculates the system in equilibrium and is thus time-invariant.
Dynamic models typically are represented by differential equations or difference equations. Explicit vs. implicit: if all of the input parameters of the overall model are known and the output parameters can be calculated by a finite series of computations, the model is said to be explicit. Sometimes, however, it is the output parameters that are known, and the corresponding inputs must be solved for by an iterative procedure, such as Newton's method or Broyden's method; in such a case the model is said to be implicit. For example, a jet engine's physical properties such as turbine and nozzle throat areas can be explicitly calculated given a design thermodynamic cycle at a specific flight condition and power setting, but the engine's operating cycles at other flight conditions and power settings cannot be explicitly calculated from the constant physical properties. Discrete vs. continuous: a discrete model treats objects as discrete, such as the particles in a molecular model or the states in a statistical model.
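The explicit/implicit distinction can be illustrated with a toy model: in the explicit direction the output is computed directly from the input, while recovering the input from a known output requires an iterative root-finder such as the Newton's method mentioned above. The model function here is invented purely for illustration.

```python
# Explicit direction: the output is computed directly from the input.
def model(x):
    return x**3 + 2.0 * x          # a simple nonlinear model (hypothetical)

# Implicit direction: the output y is known, and the input x must be
# recovered by iteration (Newton's method), since we treat the model
# as having no closed-form inverse.
def solve_input(y, x0=1.0, tol=1e-10):
    x = x0
    for _ in range(100):
        f = model(x) - y           # residual to drive to zero
        fprime = 3.0 * x**2 + 2.0  # derivative of the model
        x_new = x - f / fprime     # Newton update
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Because the derivative of this particular model is always positive, the function is monotone and Newton's method converges to the unique input for any output; a less well-behaved model would need a more careful solver, which is part of why implicit models are harder to work with.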
Deterministic vs. probabilistic: a deterministic model is one in which every set of variable states is uniquely determined by parameters in the model and by sets of previous states of these variables. Conversely, in a stochastic model, usually called a "statistical model", randomness is present, and variable states are not described by unique values but rather by probability distributions. Deductive, inductive, or floating: a deductive model is a logical structure based on a theory. An inductive model arises from empirical findings and generalization from them. The floating model rests on neither theory nor observation, but is merely the invocation of expected structure. Application of mathematics in social sciences outside of economics has been criticized for unfounded models. Application of catastrophe theory in science has been characterized as a floating model. Mathematical models are of great importance in the natural sciences, particularly in physics. Physical theories are invariably expressed using mathematic
The interplanetary Internet is a conceived computer network in space, consisting of a set of network nodes that can communicate with each other. Communication would be delayed by the great interplanetary distances, so the IPN needs a new set of protocols and technologies that are tolerant to large delays and errors. Although the Internet as it is known today tends to be a busy network of networks with high traffic, negligible delay and errors, and a wired backbone, the interplanetary Internet is a store-and-forward network of internets that is often disconnected and has a wireless backbone fraught with error-prone links and delays ranging from tens of minutes to hours, even when there is a connection. Space communication technology has evolved from expensive, one-of-a-kind point-to-point architectures, to the re-use of technology on successive missions, to the development of standard protocols agreed upon by the space agencies of many countries; this last phase has gone on since 1982 through the efforts of the Consultative Committee for Space Data Systems, a body composed of the major space agencies of the world.
It has 11 member agencies, 28 observer agencies, and over 140 industrial associates. The evolution of space data system standards has gone on in parallel with the evolution of the Internet, with conceptual cross-pollination where fruitful, but as a separate evolution. Since the late 1990s, familiar Internet protocols and CCSDS space link protocols have integrated and converged in several ways. Internet Protocol use without CCSDS has taken place on spacecraft, e.g. in demonstrations on the UoSAT-12 satellite and operationally on the Disaster Monitoring Constellation. Having reached the era where networking and IP on board spacecraft had been shown to be feasible and reliable, a forward-looking study of the bigger picture was the next phase; the Interplanetary Internet study at NASA's Jet Propulsion Laboratory was started by a team of scientists at JPL led by Vinton Cerf and the late Adrian Hooke. Cerf is one of the pioneers of the Internet on Earth and holds the position of distinguished visiting scientist at JPL.
Hooke was one of the founders and directors of CCSDS. While IP-like SCPS protocols are feasible for short hops, such as ground station to orbiter, rover to lander, lander to orbiter, probe to flyby, and so on, delay-tolerant networking is needed to get information from one region of the Solar System to another. It becomes apparent that the concept of a "region" is a natural architectural factoring of the Interplanetary Internet: a "region" is an area within which the characteristics of communication are the same. Region characteristics include communications, the maintenance of resources, ownership, and other factors; the Interplanetary Internet is thus a "network of regional internets". What is needed is a standard way to achieve end-to-end communication through multiple regions in a disconnected, variable-delay environment using a generalized suite of protocols. Examples of regions might include the terrestrial Internet as a region, a region on the surface of the Moon or Mars, or a ground-to-orbit region; the recognition of this requirement led to the concept of a "bundle" as a high-level way to address the generalized store-and-forward problem.
Bundles are an area of new protocol development in the upper layers of the OSI model, above the Transport Layer, with the goal of addressing the issue of bundling store-and-forward information so that it can reliably traverse the radically dissimilar environments constituting a "network of regional internets". Delay-tolerant networking was designed to enable standardized communications over long distances and through time delays. At its core is something called the Bundle Protocol, similar to the Internet Protocol, or IP, that serves as the heart of the Internet here on Earth. The big difference between the regular Internet Protocol and the Bundle Protocol is that IP assumes a seamless end-to-end data path, while BP is built to account for errors and disconnections — glitches that plague deep-space communications. Bundle Service Layering, implemented as the Bundling protocol suite for delay-tolerant networking, will provide general-purpose delay-tolerant protocol services in support of a range of applications: custody transfer and reassembly, end-to-end reliability, end-to-end security, and end-to-end routing among them.
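The store-and-forward behaviour that distinguishes BP from IP can be sketched as a toy custody queue. The class and method names below are illustrative, not the actual Bundle Protocol API:

```python
from collections import deque

class BundleNode:
    """Toy DTN node: bundles are held in custody storage until the
    next-hop link is available, instead of being dropped the way an
    IP router drops packets when no path exists."""
    def __init__(self, name):
        self.name = name
        self.store = deque()    # custody storage survives link outages
        self.link_up = False    # contact windows open and close over time
        self.next_hop = None

    def receive(self, bundle):
        self.store.append(bundle)   # take custody of the bundle
        self.try_forward()

    def try_forward(self):
        # Forward only while a contact window to the next hop is open.
        while self.link_up and self.next_hop and self.store:
            self.next_hop.receive(self.store.popleft())

relay = BundleNode("earth-relay")
lander = BundleNode("mars-lander")
relay.next_hop = lander
relay.receive({"dst": "mars-lander", "payload": b"telemetry request"})
# The link is down, so the bundle waits in custody rather than being lost.
relay.link_up = True
relay.try_forward()   # a contact window opens and the bundle moves on
```

An IP router with no route would simply discard the datagram; the key design choice here is that custody persists across disconnections, which is what makes hour-long link outages survivable.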
The Bundle Protocol was first tested in space on the UK-DMC satellite in 2008. An example of one of these end-to-end applications flown on a space mission is the CCSDS File Delivery Protocol, used on the comet mission Deep Impact. CFDP is an international standard for reliable file transfer in both directions. CFDP should not be confused with the Coherent File Distribution Protocol, which has the same acronym and is an IETF-documented experimental protocol for deploying files to multiple targets in a networked environment. In addition to reliably copying a file from one entity to another, CFDP has the capability to reliably transmit arbitrary small messages defined by the user in the metadata accompanying the file, and to reliably transmit commands relating to file-system management that are to be executed automatically on the remote end-point entity upon successful reception of a file. The Consultative Committee for Space Data Systems packet telemetry standard defines the protocol used for the transmission of spacecraft instrument data over the deep-space channel.
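Reliable file transfer of the kind CFDP provides rests on segmenting a file and retransmitting only the segments the receiver reports missing. A heavily simplified sketch of that idea follows; it is not the real CFDP PDU format or API, and the chunk size is arbitrary:

```python
def send_file(data, chunk_size=4):
    """Split a file into offset-numbered segments (a toy analogue of
    CFDP's segmented transfer; not the actual protocol)."""
    return {i: data[i:i + chunk_size] for i in range(0, len(data), chunk_size)}

def deliver(segments, lost):
    """Simulate a lossy first pass, then NAK-driven retransmission:
    the receiver reports the gaps, and the sender resends only those."""
    received = {k: v for k, v in segments.items() if k not in lost}
    missing = sorted(set(segments) - set(received))  # receiver's NAK list
    for k in missing:                                # selective retransmission
        received[k] = segments[k]
    return b"".join(received[k] for k in sorted(received))

segs = send_file(b"spacecraft image data")
print(deliver(segs, lost={4, 8}) == b"spacecraft image data")  # True
```

Resending only the missing segments, rather than the whole file, is what makes this approach practical over links where a round trip costs minutes to hours.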