New York City
The City of New York, called either New York City or New York, is the most populous city in the United States. With an estimated 2017 population of 8,622,698 distributed over a land area of about 302.6 square miles, New York is the most densely populated major city in the United States. Located at the southern tip of the state of New York, the city is the center of the New York metropolitan area, the largest metropolitan area in the world by urban landmass and one of the world's most populous megacities, with an estimated 20,320,876 people in its 2017 Metropolitan Statistical Area and 23,876,155 residents in its Combined Statistical Area. A global power city, New York City has been described as the cultural and media capital of the world, and it exerts a significant impact upon commerce, research, education, tourism, art, and sports; the city's fast pace has inspired the term New York minute. Home to the headquarters of the United Nations, New York is an important center for international diplomacy.
Situated on one of the world's largest natural harbors, New York City consists of five boroughs, each of which is a separate county of the State of New York. The five boroughs – Brooklyn, Queens, Manhattan, The Bronx, and Staten Island – were consolidated into a single city in 1898; the city and its metropolitan area constitute the premier gateway for legal immigration to the United States. As many as 800 languages are spoken in New York, making it the most linguistically diverse city in the world. New York City is home to more than 3.2 million residents born outside the United States, the largest foreign-born population of any city in the world. In 2017, the New York metropolitan area produced a gross metropolitan product of US$1.73 trillion. If greater New York City were a sovereign state, it would have the 12th highest GDP in the world. New York is home to the highest number of billionaires of any city in the world. New York City traces its origins to a trading post founded by colonists from the Dutch Republic in 1624 on Lower Manhattan.
The city and its surroundings came under English control in 1664 and were renamed New York after King Charles II of England granted the lands to his brother, the Duke of York. New York served as the capital of the United States from 1785 until 1790, and it has been the country's largest city since 1790. The Statue of Liberty greeted millions of immigrants as they came to the U.S. by ship in the late 19th and early 20th centuries and is an international symbol of the U.S. and its ideals of liberty and peace. In the 21st century, New York has emerged as a global node of creativity and entrepreneurship, social tolerance, and environmental sustainability, and as a symbol of freedom and cultural diversity. Many districts and landmarks in New York City are well known, with the city having three of the world's ten most visited tourist attractions in 2013 and receiving a record 62.8 million tourists in 2017. Several sources have ranked New York the most photographed city in the world. Times Square, iconic as the world's "heart" and its "Crossroads", is the brightly illuminated hub of the Broadway Theater District, one of the world's busiest pedestrian intersections, and a major center of the world's entertainment industry.
The names of many of the city's landmarks and parks are known around the world. Manhattan's real estate market is among the most expensive in the world. New York is home to the largest ethnic Chinese population outside of Asia, with multiple signature Chinatowns developing across the city. Providing continuous 24/7 service, the New York City Subway is the largest single-operator rapid transit system worldwide, with 472 rail stations. Over 120 colleges and universities are located in New York City, including Columbia University, New York University, and Rockefeller University, which have been ranked among the top universities in the world. Anchored by Wall Street in the Financial District of Lower Manhattan, New York has been called both the most economically powerful city and the leading financial center of the world; the city is home to the world's two largest stock exchanges by total market capitalization, the New York Stock Exchange and NASDAQ. In 1664, the city was named in honor of the Duke of York.
King Charles II had appointed his younger brother James, the Duke of York, proprietor of the former territory of New Netherland, including the city of New Amsterdam, which England had seized from the Dutch. During the Wisconsinan glaciation, 75,000 to 11,000 years ago, the New York City region was situated at the edge of a large ice sheet over 1,000 feet in depth; the erosive forward movement of the ice contributed to the separation of what is now Long Island and Staten Island. That action left bedrock at a shallow depth, providing a solid foundation for most of Manhattan's skyscrapers. In the precolonial era, the area of present-day New York City was inhabited by Algonquian Native Americans, including the Lenape, whose homeland, known as Lenapehoking, included Staten Island; the first documented visit into New York Harbor by a European was in 1524 by Giovanni da Verrazzano, a Florentine explorer in the service of the French crown. He named it Nouvelle Angoulême. A Spanish expedition led by captain Estêvão Gomes, a Portuguese sailing for Emperor Charles V, arrived in New York Harbor in January 1525 and charted the mouth of the Hudson River, which he named Río de San Antonio.
The Padrón Rea
Disk storage is a general category of storage mechanisms where data is recorded by various electronic, optical, or mechanical changes to a surface layer of one or more rotating disks. A disk drive is a device implementing such a storage mechanism. Notable types are the hard disk drive containing a non-removable disk, the floppy disk drive and its removable floppy disk, and various optical disc drives and associated optical disc media. Audio information was originally recorded by analog methods, and the first video disc used an analog recording method. In the music industry, analog recording has been replaced by digital optical technology where the data is recorded in a digital format with optical information; the first commercial digital disk storage device was the IBM 350, which shipped in 1956 as part of the IBM 305 RAMAC computing system. The random-access, low-density storage of disks was developed to complement the already established sequential-access, high-density storage provided by tape drives using magnetic tape. Vigorous innovation in disk storage technology, coupled with less vigorous innovation in tape storage, has reduced the difference in acquisition cost per terabyte between disk storage and tape storage.
Disk storage is now used in both computer storage and consumer electronic storage, e.g. audio CDs and video discs. Data on modern disks is stored in fixed-length blocks, called sectors, varying in length from a few hundred to many thousands of bytes. Gross disk drive capacity is simply the number of disk surfaces times the number of blocks per surface times the number of bytes per block. In certain legacy IBM CKD drives the data was stored on magnetic disks with variable-length blocks, called records, and usable capacity was lower as a result. Digital disk drives are block storage devices; each disk is divided into logical blocks. Blocks are addressed using their logical block addresses. Reading from or writing to the disk happens at the granularity of blocks. Early disk capacity was quite low; it has been improved in several ways. Improvements in mechanical design and manufacture allowed smaller and more precise heads, meaning that more tracks could be stored on each of the disks. Advancements in data compression methods permitted more information to be stored in each of the individual sectors.
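As a quick illustration of the gross-capacity formula above, here is a minimal sketch in Python; the surface, block, and byte counts are made-up example values, not figures from any particular drive.

```python
def gross_capacity(surfaces, blocks_per_surface, bytes_per_block):
    """Gross disk drive capacity = surfaces x blocks/surface x bytes/block."""
    return surfaces * blocks_per_surface * bytes_per_block

# Hypothetical drive: 4 surfaces, 1,000,000 blocks per surface, 512-byte sectors.
capacity_bytes = gross_capacity(4, 1_000_000, 512)
print(f"{capacity_bytes / 1e9:.2f} GB")   # about 2.05 GB
```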
The drive stores data onto cylinders and sectors. The sector is the smallest unit of data that can be stored on a hard disk drive, and each file will have many sectors assigned to it; the smallest entity on a CD is called a frame, which consists of 33 bytes and contains six complete 16-bit stereo samples. The other nine bytes consist of eight CIRC error-correction bytes and one subcode byte used for control and display; the information is sent from the computer's processor, through the BIOS, to a chip controlling the data transfer. This is sent out to the hard drive via a multi-wire connector. Once the data is received onto the circuit board of the drive, it is translated and compressed into a format that the individual drive can use to store it onto the disk itself; the data is then passed to a chip on the circuit board that controls access to the drive. The data is stored in sectors on one of the sides of one of the internal disks. An HDD with two disks internally will store data on all four surfaces.
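To make the CD frame numbers above concrete, here is a small sketch checking that the byte counts add up; it is pure arithmetic with no library assumptions.

```python
# One CD frame: six complete 16-bit stereo samples plus error correction and subcode.
samples = 6
channels = 2           # stereo
bytes_per_sample = 2   # 16 bits
audio_bytes = samples * channels * bytes_per_sample   # 24 bytes of audio data
circ_bytes = 8         # CIRC error-correction bytes
subcode_bytes = 1      # control and display
print(audio_bytes + circ_bytes + subcode_bytes)       # 33 bytes per frame
```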
The hardware on the drive tells the actuator arm where it is to go for the relevant track, and the compressed information is sent down to the head, which changes the physical properties, optically or magnetically for example, of each byte on the drive, thus storing the information. A file is not stored in a linear manner; rather, it is held in the way that allows quickest retrieval. Mechanically there are two different motions occurring inside the drive. One is the rotation of the disks inside the device; the other is the side-to-side motion of the head across the disk. There are two types of disk rotation methods: constant linear velocity (CLV) varies the rotational speed of the optical disc depending upon the position of the head, while constant angular velocity (CAV) spins the media at one constant speed regardless of where the head is positioned. Track positioning also follows two different methods across disk storage devices. Storage devices focused on holding computer data, e.g. HDDs, FDDs, and Iomega zip drives, use concentric tracks to store data.
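The difference between the two rotation methods can be shown with a short sketch: under constant linear velocity the spindle speed must change with the head's radius, whereas under constant angular velocity it stays fixed. The linear velocity and radii below are illustrative values, not specifications of any real disc.

```python
import math

def clv_rpm(linear_velocity_m_s, radius_m):
    """Spindle speed needed to keep a constant linear velocity under the head."""
    circumference = 2 * math.pi * radius_m
    return (linear_velocity_m_s / circumference) * 60   # revolutions per minute

# Example: a 1.3 m/s scan velocity at an inner versus an outer radius.
print(round(clv_rpm(1.3, 0.025)))   # ~497 rpm near the centre
print(round(clv_rpm(1.3, 0.058)))   # ~214 rpm near the edge
```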
During a sequential read or write operation, after the drive accesses all the sectors in a track it repositions the head to the next track. This will cause a momentary delay in the flow of data between the device and the computer. In contrast, optical audio and video discs use a single spiral track that starts at the innermost point on the disc and flows continuously to the outer edge; when reading or writing data there is no need to stop the flow of data to switch tracks. This is similar to vinyl records, except that vinyl records started at the outer edge and spiraled in toward the center; the disk drive interface is the mechanism/protocol of communication between the drive and the rest of the system.
Air travel is a form of travel in vehicles such as helicopters, hot air balloons, gliders, hang gliders, airplanes, jets, or anything else that can sustain flight. Use of air travel has increased in recent decades; worldwide it doubled between the mid-1980s and the year 2000. Air travel can be separated into two general classifications: national/domestic and international flights. Flights from one point to another within the same country are called domestic flights. Flights from a point in one country to a point within a different country are known as international flights. Travelers can use international flights in either private or public travel. Travel class on an airplane is usually split into a two-, three-, or four-class model of service. U.S. domestic flights typically have two classes: economy class and a domestic first class partitioned into cabins. International flights may have up to four classes: economy class, premium economy, business class, and first class. Most air travel starts and ends at a commercial airport; the typical procedure begins with check-in. For longer journeys, air travel may consist of several flights with a layover in between.
The number of layovers depends on the number of hub airports the journey is routed through. Airlines rely either on the point-to-point model or the hub-and-spoke model to operate flights between airports; the point-to-point model, used by low-cost carriers such as Southwest, relies on scheduling flights directly between destination airports. The hub-and-spoke model, used by carriers such as American and Delta, relies on scheduling flights to and from hub airports; the hub-and-spoke model allows airlines to connect more destinations and provide more frequent routes, while the point-to-point system allows airlines to avoid layovers and have more cost-effective operations. Modern aircraft consume less fuel per person and mile travelled than cars when fully booked; this argument in favor of air travel is counterweighted by two facts: the distances travelled are significantly larger, so air travel does not replace car travel but instead adds to it, and not every flight is booked out, since scheduled rather than on-demand flights predominate.
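A rough count makes the trade-off between the two route models above concrete; this is a simple combinatorial sketch, and the number of airports is an arbitrary example.

```python
def point_to_point_routes(n_airports):
    """Direct routes needed to connect every pair of airports."""
    return n_airports * (n_airports - 1) // 2

def hub_and_spoke_routes(n_airports):
    """Routes needed when every airport connects only to a single hub."""
    return n_airports - 1

n = 10
print(point_to_point_routes(n))   # 45 direct routes
print(hub_and_spoke_routes(n))    # 9 routes, at the cost of layovers at the hub
```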
According to the ATAG, flights produced 781 million tonnes of the greenhouse gas CO2 in 2015 globally, as compared to an estimated total of 36 billion tonnes of anthropogenic CO2. Deep vein thrombosis is estimated to be the third-most common vascular disease, after heart attack and stroke. Risk increases with exposure to more flights within a short time frame and with increasing duration of flights. During flight, the aircraft cabin pressure is maintained at the equivalent of 6,000–8,000 ft above sea level. Most healthy travelers will not notice any effects. However, for travelers with cardiopulmonary diseases, cerebrovascular disease, anemia, or sickle cell disease, conditions in an aircraft can exacerbate underlying medical conditions. Aircraft cabin air is dry, at 10%–20% humidity, which can cause dryness of the mucous membranes of the eyes and airways.
In electronics, a digital-to-analog converter (DAC) is a system that converts a digital signal into an analog signal. An analog-to-digital converter performs the reverse function. There are several DAC architectures. Digital-to-analog conversion can degrade a signal, so a DAC should be specified that has insignificant errors in terms of the application. DACs are used in music players to convert digital data streams into analog audio signals; they are also used in televisions and mobile phones to convert digital video data into analog video signals which connect to the screen drivers to display monochrome or color images. These two applications use DACs at opposite ends of the frequency/resolution trade-off; the audio DAC is a low-frequency, high-resolution type, while the video DAC is a high-frequency, low- to medium-resolution type. Due to the complexity and the need for matched components, all but the most specialized DACs are implemented as integrated circuits. Discrete DACs are typically high-speed, low-resolution, power-hungry types, as used in military radar systems.
High-speed test equipment, such as sampling oscilloscopes, may also use discrete DACs. A DAC converts an abstract finite-precision number into a physical quantity. In particular, DACs are used to convert finite-precision time series data to a continually varying physical signal. An ideal DAC converts the abstract numbers into a conceptual sequence of impulses that are then processed by a reconstruction filter using some form of interpolation to fill in data between the impulses. A conventional practical DAC converts the numbers into a piecewise constant function made up of a sequence of rectangular functions, modeled with the zero-order hold. Other DAC methods produce a pulse-density modulated output that can be filtered to produce a smoothly varying signal; as per the Nyquist–Shannon sampling theorem, a DAC can reconstruct the original signal from the sampled data provided that its bandwidth meets certain requirements. Digital sampling introduces quantization error that manifests as low-level noise in the reconstructed signal.
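A minimal sketch of the zero-order-hold behaviour described above, assuming NumPy is available; the 4-bit codes and the oversampling factor are illustrative, not from the source.

```python
import numpy as np

def zero_order_hold(samples, oversample=8):
    """Model a conventional DAC output: each input value is held constant
    until the next sample arrives, giving a piecewise-constant waveform."""
    return np.repeat(np.asarray(samples, dtype=float), oversample)

# A few 4-bit codes (0..15) mapped to a 0-1 V full-scale output.
codes = [3, 7, 12, 15, 9, 4]
analog = zero_order_hold([c / 15 for c in codes])
print(analog[:16])   # the first two codes, each held for 8 output steps
```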
DACs and ADCs are part of an enabling technology that has contributed to the digital revolution. To illustrate, consider a typical long-distance telephone call; the caller's voice is converted into an analog electrical signal by a microphone, and the analog signal is converted to a digital stream by an ADC. The digital stream is divided into network packets where it may be sent along with other digital data, not necessarily audio; the packets are received at the destination, but each packet may take a different route and may not arrive at the destination in the correct time order. The digital voice data is then extracted from the packets and assembled into a digital data stream. A DAC converts this back into an analog electrical signal, which drives an audio amplifier, which in turn drives a loudspeaker, which produces sound. Most modern audio signals are stored in digital form and, in order to be heard through speakers, they must be converted into an analog signal. DACs are therefore found in CD players, digital music players, and PC sound cards.
Specialist standalone DACs can be found in high-end hi-fi systems. These take the digital output of a compatible CD player or dedicated transport and convert the signal into an analog line-level output that can be fed into an amplifier to drive speakers. Similar digital-to-analog converters can be found in digital speakers such as USB speakers, and in sound cards. In voice over IP applications, the source must first be digitized for transmission, so it undergoes conversion via an ADC and is then reconstructed into analog using a DAC on the receiving party's end. Video sampling tends to work on a different scale altogether, thanks to the nonlinear response both of cathode ray tubes and the human eye, using a "gamma curve" to provide an appearance of evenly distributed brightness steps across the display's full dynamic range; hence the need to use RAMDACs in computer video applications with deep enough colour resolution to make engineering a hardcoded value into the DAC for each output level of each channel impractical.
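To give a sense of the kind of conversion table a RAMDAC holds, here is a rough sketch of a gamma-correction lookup table; the gamma value of 2.2 and the 8-bit-in/10-bit-out widths are assumptions chosen for illustration, not figures from the source.

```python
def gamma_lut(gamma=2.2, in_bits=8, out_bits=10):
    """Build a lookup table mapping linear input codes to gamma-corrected
    DAC levels, similar in spirit to the per-channel RAM in a RAMDAC."""
    in_max = (1 << in_bits) - 1
    out_max = (1 << out_bits) - 1
    return [round(((code / in_max) ** (1 / gamma)) * out_max)
            for code in range(in_max + 1)]

lut = gamma_lut()
print(lut[0], lut[128], lut[255])   # black, mid-grey, full scale
```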
Given this inherent distortion, it is not unusual for a television or video projector to truthfully claim a linear contrast ratio of 1000:1 or greater, equivalent to 10 bits of audio precision, even though it may only accept signals with 8-bit precision and use an LCD panel that only represents 6 or 7 bits per channel. Video signals from a digital source, such as a computer, must be converted to analog form if they are to be displayed on an analog monitor; as of 2007, analog inputs were more commonly used than digital ones, but this changed as flat panel displays with DVI and/or HDMI connections became more widespread. A video DAC is, however, incorporated in any digital video player with analog outputs; the DAC is integrated with some memory, which contains conversion tables for gamma correction and brightness, to make a device called a RAMDAC. A device distantly related to the DAC is t
In online gaming, lag is a noticeable delay between the action of players and the reaction of the server supporting the game. The tolerance for lag depends on the type of game. For instance, a strategy game or a turn-based game with a low pace may have a high threshold or even be unaffected by high delays, whereas a twitch gameplay game such as a first-person shooter with a higher pace may require lower delay to be able to provide satisfying gameplay. However, the specific characteristics of the game matter. For example, fast chess is a turn-based game with fast action and may not tolerate high lag; some twitch games can be designed such that only events that don't impact the outcome of the game introduce lag, allowing for fast local response most of the time. Ping refers to the network latency between a player's client and the game server as measured with the ping utility or equivalent. Ping is reported quantitatively as an average time in milliseconds; the lower one's ping is, the less lag the player will experience.
High ping and low ping are commonly used terms in online gaming, where high ping refers to a ping that causes a severe amount of lag. This usage is a gaming cultural colloquialism and is not found or used in professional computer networking circles. In games where timing is key, such as first-person shooter and real-time strategy games, a low ping is always desirable, as a low ping means smoother gameplay by allowing faster updates of game data between the players' clients and the game server. High latency can cause lag. Game servers may disconnect a client if the latency is too high and it poses a detriment to other players' gameplay. Similarly, client software will often mandate disconnection if the latency is too high. High ping may also cause servers to crash due to instability. In some first-person shooter games, a high ping may cause the player to unintentionally gain unfair advantages, such as disappearing from one location and instantaneously reappearing in another, simulating the effect of teleportation, thus making it hard for other players to judge their character's position and subsequently making the player much more difficult to target.
To counter this, many game servers automatically kick players with a ping higher than average. Conversely, a high ping can make it difficult for the player to play the game, as the negative effects that occur make it hard to track other players and move their character. Rather than using the traditional ICMP echo request and reply network packets to determine ping times, video game programmers often build their own latency detection into existing game packets instead; some factors that might affect ping include: the communication protocol used, Internet throughput, the quality of a user's Internet service provider, and the configuration of firewalls. Ping is also affected by geographical location. For instance, if someone is in India playing on a server located in the United States, the distance between the two is greater than it would be for players located within the US, and therefore it takes longer for data to be transmitted. However, the amount of packet-switching and network hardware in between the two computers is often more significant.
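As a rough sketch of the in-game latency detection mentioned above: the client stamps an outgoing packet with its send time and the server echoes it back, so the round trip can be measured without ICMP. The packet format and function names here are hypothetical.

```python
import time

def make_ping_payload():
    """Attach a send timestamp to an outgoing game packet (hypothetical format)."""
    return {"type": "ping", "sent_at": time.monotonic()}

def handle_pong(payload):
    """When the server echoes the timestamp back, the round-trip time is the ping."""
    return (time.monotonic() - payload["sent_at"]) * 1000   # milliseconds

# Example: simulate an instant echo (real code would send the payload over the network).
payload = make_ping_payload()
print(f"ping: {handle_pong(payload):.1f} ms")
```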
For instance, wireless network interface cards must modulate digital signals into radio signals, which is often more costly in time than it takes an electrical signal to traverse a typical span of cable. As such, lower ping can result in faster internet upload rates. While a single-player game maintains the main game state on the local machine, an online game requires it to be maintained on a central server in order to avoid inconsistencies between individual clients; as such, the client has no direct control over the central game state and may only send change requests to the server, and can only update the local game state by receiving updates from the server. This need to communicate causes a delay between the clients and the server, and is the fundamental cause behind lag. While there may be numerous underlying reasons for why a player experiences lag, they can be summarized as insufficient hardware in either the client or the server, or a poor connection between the client and server. Hardware-related issues cause lag due to the fundamental structure of the game architecture.
Games consist of a looped sequence of states, or "frames". During each frame, the game performs the necessary calculations; when all processing is finished, the game will update the game state and produce an output, such as a new image on the screen and/or a packet to be sent to the server. The frequency at which frames are generated is referred to as the frame rate; as the central game state is located on the server, the updated information must be sent from the client to the server in order to take effect. In addition, the client must receive the necessary information from the server in order to update the state. Generating packets to send to the server and processing received packets can only be done as often as the client is able to update its local state. Although packets could theoretically be generated and sent faster than this, it would only result in sending redundant data if the game state cannot be updated between each packet. A low frame rate would therefore make the game less responsive to updates and may force it to skip outdated data.
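A minimal sketch of the frame loop described above; the tick rate and the callback names are illustrative assumptions, not part of any particular engine.

```python
import time

TICK_RATE = 60            # frames (ticks) per second; illustrative value
TICK_SECONDS = 1 / TICK_RATE

def game_loop(process_input, update_state, send_update, running):
    """Fixed-rate loop: each frame processes input, updates the local state,
    and emits at most one packet; a slow frame delays the next update."""
    while running():
        frame_start = time.monotonic()
        process_input()
        update_state(TICK_SECONDS)
        send_update()             # no point sending more often than the state changes
        elapsed = time.monotonic() - frame_start
        if elapsed < TICK_SECONDS:
            time.sleep(TICK_SECONDS - elapsed)
```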
Conversely, the same holds true for the server. The frame rate of the server determines how often it can process data from clients and send updates; this type of problem is difficult to compensate for. Apart
In electronics, an analog-to-digital converter (ADC) is a system that converts an analog signal, such as a sound picked up by a microphone or light entering a digital camera, into a digital signal. An ADC may provide an isolated measurement such as an electronic device that converts an input analog voltage or current to a digital number representing the magnitude of the voltage or current; the digital output is typically a two's complement binary number that is proportional to the input, but there are other possibilities. There are several ADC architectures. Due to the complexity and the need for matched components, all but the most specialized ADCs are implemented as integrated circuits. A digital-to-analog converter performs the reverse function. An ADC converts a continuous-time and continuous-amplitude analog signal to a discrete-time and discrete-amplitude digital signal; the conversion involves quantization of the input, so it introduces a small amount of error or noise. Furthermore, instead of continuously performing the conversion, an ADC does the conversion periodically, sampling the input, limiting the allowable bandwidth of the input signal.
The performance of an ADC is characterized by its bandwidth and signal-to-noise ratio (SNR). The bandwidth of an ADC is characterized by its sampling rate; the SNR of an ADC is influenced by many factors, including the resolution and accuracy, aliasing and jitter. The SNR of an ADC is often summarized in terms of its effective number of bits (ENOB), the number of bits of each measure it returns that are on average not noise. An ideal ADC has an ENOB equal to its resolution. ADCs are chosen to match the bandwidth and required SNR of the signal to be digitized. If an ADC operates at a sampling rate greater than twice the bandwidth of the signal, then, per the Nyquist–Shannon sampling theorem, perfect reconstruction is possible; the presence of quantization error limits the SNR of an ideal ADC. However, if the SNR of the ADC exceeds that of the input signal, its effects may be neglected, resulting in an essentially perfect digital representation of the analog input signal; the resolution of the converter indicates the number of discrete values it can produce over the range of analog values.
The resolution determines the magnitude of the quantization error and therefore determines the maximum possible average signal-to-noise ratio for an ideal ADC without the use of oversampling. The values are stored electronically in binary form, so the resolution is usually expressed as the audio bit depth. In consequence, the number of discrete values available is assumed to be a power of two. For example, an ADC with a resolution of 8 bits can encode an analog input to one of 256 different levels (2^8 = 256); the values can represent ranges such as 0 to 255 (unsigned integers) or −128 to 127 (signed integers), depending on the application. Resolution can also be defined electrically and expressed in volts; the change in voltage required to guarantee a change in the output code level is called the least significant bit (LSB) voltage. The resolution Q of the ADC is equal to the LSB voltage; the voltage resolution of an ADC is equal to its overall voltage measurement range divided by the number of intervals: Q = EFSR / 2^M, where M is the ADC's resolution in bits and EFSR is the full-scale voltage range.
EFSR is given by EFSR = VRefHi − VRefLow, where VRefHi and VRefLow are the upper and lower extremes of the voltages that can be coded. The number of voltage intervals is given by N = 2^M, where M is the ADC's resolution in bits; that is, one voltage interval is assigned in between two consecutive code levels. Example: for the coding scheme in figure 1, the full-scale measurement range is 0 to 1 volt; the ADC resolution is 3 bits, giving 2^3 = 8 quantization levels; and the ADC voltage resolution is Q = 1 V / 8 = 0.125 V. In many cases, the useful resolution of a converter is limited by the signal-to-noise ratio and other errors in the overall system, expressed as an ENOB. Quantization error is introduced by quantization in an ideal ADC; it is a rounding error between the analog input voltage and the output digitized value. The error is signal-dependent. In an ideal ADC, where the quantization error is uniformly distributed between −1/2 LSB and +1/2 LSB and the signal has a uniform distribution covering all quantization levels, the signal-to-quantization-noise ratio (SQNR) is given by SQNR = 20 · log10(2^Q) ≈ 6.02 · Q dB, where Q is the number of quantization bits.
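A short sketch that reproduces the numbers above (the 3-bit, 0–1 V example and the 16-bit case discussed next); only the Python standard library is assumed.

```python
import math

def adc_params(bits, v_ref_low=0.0, v_ref_high=1.0):
    """Resolution (LSB voltage), number of levels, and ideal SQNR for an ADC."""
    efsr = v_ref_high - v_ref_low          # full-scale range, EFSR
    levels = 2 ** bits                     # N = 2^M quantization levels
    q = efsr / levels                      # LSB voltage, Q = EFSR / 2^M
    sqnr_db = 20 * math.log10(2 ** bits)   # about 6.02 * bits dB
    return q, levels, sqnr_db

print(adc_params(3))    # (0.125, 8, ~18.1 dB)  -- the 3-bit, 1 V example above
print(adc_params(16))   # (~1.5e-05, 65536, ~96.3 dB)
```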
For example, for a 16-bit ADC, the quantization error is 96.3 dB below the maximum level. Quantization error is distributed from DC to the Nyquist frequency; consequently, if part of the ADC's bandwidth is not used, as is the case
Packet switching is a method of grouping data that is transmitted over a digital network into packets. Packets are made of a header and a payload. Data in the header are used by networking hardware to direct the packet to its destination, where the payload is extracted and used by application software. Packet switching is the primary basis for data communications in computer networks worldwide. In the early 1960s, American computer scientist Paul Baran developed the concept of Distributed Adaptive Message Block Switching, with the goal of providing a fault-tolerant, efficient routing method for telecommunication messages as part of a research program at the RAND Corporation, funded by the US Department of Defense; this concept contrasted and contradicted the then-established principles of pre-allocation of network bandwidth, fortified by the development of telecommunications in the Bell System. The new concept found little resonance among network implementers until the independent work of British computer scientist Donald Davies at the National Physical Laboratory in 1965.
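A toy sketch of the header-plus-payload structure: intermediate nodes only need the fixed-size header fields to forward a packet, while the payload is opaque to them. The field layout here is invented for illustration and does not correspond to any real protocol.

```python
import struct

def make_packet(src, dst, seq, payload: bytes) -> bytes:
    """Prepend a tiny fixed-size header (source, destination, sequence number,
    payload length) to the payload; forwarding reads only the header fields."""
    header = struct.pack("!HHIH", src, dst, seq, len(payload))
    return header + payload

def parse_packet(packet: bytes):
    src, dst, seq, length = struct.unpack("!HHIH", packet[:10])
    return src, dst, seq, packet[10:10 + length]

pkt = make_packet(src=1, dst=42, seq=7, payload=b"hello")
print(parse_packet(pkt))   # (1, 42, 7, b'hello')
```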
Davies is credited with coining the modern term packet switching and inspiring numerous packet switching networks in the decade following, including the incorporation of the concept in the early ARPANET in the United States. A simple definition of packet switching is: the routing and transferring of data by means of addressed packets so that a channel is occupied during the transmission of the packet only, and upon completion of the transmission the channel is made available for the transfer of other traffic. Packet switching allows delivery of variable bit rate data streams, realized as sequences of packets, over a computer network which allocates transmission resources as needed using statistical multiplexing or dynamic bandwidth allocation techniques; as they traverse networking hardware, such as switches and routers, packets are received, buffered, and retransmitted, resulting in variable latency and throughput depending on the link capacity and the traffic load on the network. Packets are forwarded by intermediate network nodes asynchronously using first-in, first-out buffering, but may be forwarded according to some scheduling discipline for fair queuing, traffic shaping, or for differentiated or guaranteed quality of service, such as weighted fair queuing or leaky bucket.
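A minimal sketch of the first-in, first-out buffering at an intermediate node described above; the class and method names are invented for illustration.

```python
from collections import deque

class StoreAndForwardNode:
    """FIFO buffering at an intermediate node: packets are received into a
    queue and retransmitted in arrival order as link capacity allows."""
    def __init__(self):
        self.buffer = deque()

    def receive(self, packet):
        self.buffer.append(packet)

    def forward(self, send):
        if self.buffer:
            send(self.buffer.popleft())

node = StoreAndForwardNode()
node.receive(b"pkt-1")
node.receive(b"pkt-2")
node.forward(print)   # b'pkt-1' leaves first
```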
Packet-based communication may be implemented without intermediate forwarding nodes. In the case of a shared physical medium, the packets may be delivered according to a multiple access scheme. Packet switching contrasts with another principal networking paradigm, circuit switching, a method which pre-allocates dedicated network bandwidth for each communication session, each having a constant bit rate and latency between nodes. In cases of billable services, such as cellular communication services, circuit switching is characterized by a fee per unit of connection time, even when no data is transferred, while packet switching may be characterized by a fee per unit of information transmitted, such as characters, packets, or messages; the concept of switching small blocks of data was first explored independently by Paul Baran at the RAND Corporation starting in the late 1950s in the US and by Donald Davies at the National Physical Laboratory in the UK. In the late 1950s, the US Air Force established a wide area network for the Semi-Automatic Ground Environment radar defense system.
They sought a system that might survive a nuclear attack to enable a response, thus diminishing the attractiveness of the first-strike advantage to enemies. Baran developed the concept of distributed adaptive message block switching in support of the Air Force initiative; the concept was first presented to the Air Force in the summer of 1961 as briefing B-265, then published as RAND report P-2626 in 1962, and in report RM 3420 in 1964. Report P-2626 described a general architecture for a large-scale, survivable communications network; the work focused on three key ideas: use of a decentralized network with multiple paths between any two points, dividing user messages into message blocks, and delivery of these messages by store-and-forward switching. Davies developed a similar message routing concept in 1965; he called it packet switching and proposed building a nationwide network in the UK. He gave a talk on the proposal in 1966, after which a person from the Ministry of Defence told him about Baran's work. Roger Scantlebury, a member of Davies' team, met Lawrence Roberts at the 1967 ACM Symposium on Operating System Principles and suggested it for use in the ARPANET.
Davies had chosen some of the same parameters for his original network design as did Baran, such as a packet size of 1024 bits. In 1966, Davies proposed that a network should be built at the laboratory to serve the needs of NPL and prove the feasibility of packet switching. After a pilot experiment in 1967, the NPL Data Communications Network entered service in 1969. Leonard Kleinrock conducted early research in queueing theory for his doctoral dissertation at MIT in 1961–62 and published it as a book in 1964 in the field of digital message switching. Following the 1967 ACM Symposium, Lawrence Roberts asked Kleinrock to carry out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET; the NPL team also carried out simulation work on packet networks. The French CYCLADES network, designed by Louis Pouzin in the early 1970s, was the first to employ what came to be known as the end-to-end principle, making the hosts responsible for the reliable delivery of data on a packet-switched network, rather than this being a centralized service of the network itself.