A computer network is a digital telecommunications network that allows nodes to share resources. In computer networks, computing devices exchange data with each other over connections between nodes; these data links are established over wired media such as copper wires or optical fibers, or over wireless media such as Wi-Fi. Network devices that originate and terminate the data are called network nodes. Nodes are identified by network addresses and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other, more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video and audio, shared use of application and storage servers and fax machines, and email and instant messaging, among many others.
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, the traffic-control mechanism, and the organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes the following. In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the reorganisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with his student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. The same year, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the ALOHA network developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology, or computer engineering, since it relies upon the theoretical and practical application of these related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls, and video conferencing. A network also allows sharing of computing resources.
Users may access and use resources provided by other devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files, data, and other types of information.
Digital subscriber line
Digital subscriber line (DSL) is a family of technologies used to transmit digital data over telephone lines. In telecommunications marketing, the term DSL is widely understood to mean asymmetric digital subscriber line (ADSL), the most commonly installed DSL technology for Internet access. DSL service can be delivered simultaneously with wired telephone service on the same telephone line, since DSL uses higher-frequency bands for data. On the customer premises, a DSL filter on each non-DSL outlet blocks any high-frequency interference to enable simultaneous use of the voice and DSL services. The bit rate of consumer DSL services ranges from 256 kbit/s to over 100 Mbit/s in the direction to the customer (downstream), depending on DSL technology, line conditions, and service-level implementation. Bit rates of 1 Gbit/s have been reached. In ADSL, the data throughput in the upstream direction is lower, hence the designation of asymmetric service. In symmetric digital subscriber line (SDSL) services, the downstream and upstream data rates are equal. Researchers at Bell Labs have reached speeds of over 1 Gbit/s for symmetrical broadband access services using traditional copper telephone lines, though such speeds have not yet been deployed elsewhere.
It was once thought that it was not possible to operate a conventional phone line beyond low-speed limits. In the 1950s, however, ordinary twisted-pair telephone cable carried four-megahertz television signals between studios, suggesting that such lines would allow transmitting many megabits per second. One such circuit in the United Kingdom ran some 10 miles between the BBC studios in Newcastle upon Tyne and the Pontop Pike transmitting station; it was able to give the studios a low-quality cue feed, but not one suitable for transmission. However, these cables had other impairments besides Gaussian noise, preventing such rates from becoming practical in the field. The 1980s saw the development of techniques for broadband communications that allowed the limit to be extended. A patent was filed in 1979 for the use of existing telephone wires for both telephones and data terminals connected to a remote computer via a digital data carrier system. The motivation for digital subscriber line technology was the Integrated Services Digital Network (ISDN) specification proposed in 1984 by the CCITT as part of Recommendation I.120, later reused as ISDN digital subscriber line.
Employees at Bellcore developed asymmetric digital subscriber line by placing wide-band digital signals at frequencies above the existing baseband analog voice signal carried on conventional twisted-pair cabling between telephone exchanges and customers. A patent was filed in 1988. Joseph W. Lechleider's contribution to DSL was his insight that an asymmetric arrangement offered more than double the bandwidth capacity of symmetric DSL. This allowed Internet service providers to offer efficient service to consumers, who benefited from the ability to download large amounts of data but rarely needed to upload comparable amounts. ADSL supports two modes of transport: fast channel and interleaved channel. Fast channel is preferred for streaming multimedia, where an occasional dropped bit is acceptable but lags are less so. Interleaved channel works better for file transfers, where the delivered data must be error-free but the latency incurred by the retransmission of error-containing packets is acceptable. Consumer-oriented ADSL was designed to operate on existing lines conditioned for Basic Rate Interface ISDN services, which is itself a digital circuit-switching service, though most incumbent local exchange carriers provision rate-adaptive digital subscriber line (RADSL) to work on any available copper pair, whether conditioned for BRI or not.
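The trade-off behind the interleaved channel can be illustrated with a toy block interleaver (a simplified sketch, not the actual ADSL framing): by reordering symbols before transmission, a burst of consecutive line errors is spread thinly across many forward-error-correction codewords, each of which can then repair its single error, at the cost of the buffering delay the reordering requires.

```python
def interleave(data: str, depth: int) -> str:
    """Block interleaver: write row by row into `depth` rows, read column
    by column. Assumes len(data) is a multiple of depth for simplicity."""
    width = len(data) // depth
    rows = [data[r * width:(r + 1) * width] for r in range(depth)]
    return "".join(rows[r][c] for c in range(width) for r in range(depth))

def deinterleave(data: str, depth: int) -> str:
    # The inverse operation is interleaving with the transposed dimensions.
    return interleave(data, len(data) // depth)

msg = "ABCDEFGHIJKLMNOP"            # 16 symbols: four 4-symbol FEC codewords
tx = interleave(msg, 4)             # what actually goes down the line
hit = tx[:6] + "****" + tx[10:]     # a burst wipes out 4 consecutive symbols
rx = deinterleave(hit, 4)
# After deinterleaving, the burst is spread so that each original codeword
# contains at most one error, which a single-error-correcting code can fix.
```

The fast channel skips this reordering, avoiding the latency but leaving bursts concentrated in a single codeword.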
Engineers developed high-speed DSL facilities such as high-bit-rate digital subscriber line (HDSL) and symmetric digital subscriber line (SDSL) to provision traditional Digital Signal 1 services over standard copper pairs. Older ADSL standards delivered 8 Mbit/s to the customer over about 2 km of unshielded twisted-pair copper wire; newer variants improved these rates. Distances greater than 2 km reduce the bandwidth usable on the wires, thus reducing the data rate, but ADSL loop extenders increase these distances by repeating the signal, allowing the LEC to deliver DSL speeds at any distance. Until the late 1990s, the cost of digital signal processors for DSL was prohibitive. All types of DSL employ complex digital signal processing algorithms to overcome the inherent limitations of the existing twisted-pair wires. Due to advances in very-large-scale integration technology, the cost of the equipment associated with a DSL deployment has fallen significantly. The two main pieces of equipment are a digital subscriber line access multiplexer (DSLAM) at one end and a DSL modem at the other.
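A rough way to see why loop length limits the data rate is the Shannon capacity bound, C = B log2(1 + SNR): a longer loop attenuates the signal, lowering the signal-to-noise ratio and hence the achievable rate. The sketch below uses illustrative, hypothetical numbers (not measured ADSL figures); real DSL modems divide the band into many discrete multitone carriers and load each according to its measured SNR.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon limit C = B * log2(1 + SNR) for an ideal channel.
    This single-band figure is only an upper bound on any real DSL link."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical comparison of a short vs a long loop over the same band:
short_loop = shannon_capacity_bps(1.1e6, 40)   # ~1.1 MHz band, strong signal
long_loop = shannon_capacity_bps(1.1e6, 10)    # same band, heavily attenuated
# The short loop's bound is several times higher, which is why rate-adaptive
# DSL delivers less bandwidth as distance from the DSLAM grows.
```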
A DSL connection can be deployed over existing cable. Such a deployment, including equipment, is much cheaper than installing a new, high-bandwidth fiber-optic cable over the same route and distance; this is true for both ADSL and SDSL variations. The commercial success of DSL and similar technologies reflects the advances made in electronics over the decades that have increased performance and reduced costs, while digging trenches in the ground for new cables remains expensive. In the case of ADSL, competition in Internet access caused subscription fees to drop over the years, making ADSL more economical than dial-up access. Telephone companies were pressured into moving to ADSL.
General Services Administration
The General Services Administration (GSA), an independent agency of the United States government, was established in 1949 to help manage and support the basic functioning of federal agencies. GSA supplies products and communications for U.S. government offices, provides transportation and office space to federal employees, and develops government-wide cost-minimizing policies, among other management tasks. GSA employs about 12,000 federal workers and has an annual operating budget of $20.9 billion. GSA oversees $66 billion of procurement annually and contributes to the management of about $500 billion in U.S. federal property, divided chiefly among 8,700 owned and leased buildings and a 215,000-vehicle motor pool. Among the real estate assets managed by GSA are the Ronald Reagan Building and International Trade Center in Washington, D.C., the largest U.S. federal building after the Pentagon, and the Hart-Dole-Inouye Federal Center. GSA's business lines include the Federal Acquisition Service (FAS) and the Public Buildings Service, as well as several staff offices including the Office of Government-wide Policy, the Office of Small Business Utilization, and the Office of Mission Assurance.
As part of FAS, GSA's Technology Transformation Services helps federal agencies improve delivery of information and services to the public. Key initiatives include FedRAMP, Cloud.gov, the USAGov platform, Data.gov, Performance.gov, and Challenge.gov. GSA is a member of the Procurement G6, an informal group leading the use of framework agreements and e-procurement instruments in public procurement. In 1947, President Harry Truman asked former President Herbert Hoover to lead what became known as the Hoover Commission to make recommendations for reorganizing the operations of the federal government. One of the recommendations of the commission was the establishment of an "Office of the General Services," which would combine the responsibilities of the U.S. Treasury Department's Bureau of Federal Supply, the U.S. Treasury Department's Office of Contract Settlement, the National Archives Establishment, all functions of the Federal Works Agency (including the Public Buildings Administration and the Public Roads Administration), and the War Assets Administration. GSA became an independent agency on July 1, 1949, after the passage of the Federal Property and Administrative Services Act.
General Jess Larson, Administrator of the War Assets Administration, was named GSA's first Administrator. The first job awaiting Administrator Larson and the newly formed GSA was a complete renovation of the White House; the structure had fallen into such a state of disrepair by 1949 that one inspector of the time said the historic building was standing "purely from habit." Larson explained the nature of the total renovation by saying, "In order to make the White House structurally sound, it was necessary to dismantle, I mean dismantle, everything from the White House except the four walls, which were constructed of stone. Everything, except the four walls without a roof, was stripped down, that's where the work started." GSA worked with President Truman and First Lady Bess Truman to ensure that the new agency's first major project would be a success, and completed the renovation in 1952. In 1986, the GSA headquarters, the U.S. General Services Administration Building at Eighteenth and F Streets NW, was listed on the National Register of Historic Places, at the time serving as Interior Department offices.
In 1960, GSA created the Federal Telecommunications System, a government-wide intercity telephone system. In 1962, the Ad Hoc Committee on Federal Office Space created a new building program to address obsolete office buildings in Washington, D.C., resulting in the construction of many of the offices that now line Independence Avenue. In 1970, the Nixon administration created the Consumer Product Information Coordinating Center, now part of USAGov. In 1972, GSA established the Automated Data and Telecommunications Service, which became the Office of Information Resources Management. In 1973, GSA created the Office of Federal Management Policy, and in 1974 the Federal Buildings Fund was initiated, allowing GSA to issue rent bills to federal agencies. GSA's Office of Acquisition Policy centralized procurement policy in 1978. GSA was responsible for emergency preparedness and for stockpiling strategic materials to be used in wartime until these functions were transferred to the newly created Federal Emergency Management Agency in 1979.
In 1984, GSA introduced the federal government to the use of charge cards, known as the GSA SmartPay system. The National Archives and Records Administration was spun off into an independent agency in 1985. The same year, GSA began to provide government-wide policy oversight and guidance for federal real property management as a result of an executive order signed by President Ronald Reagan. In 2003, the Federal Protective Service was moved to the Department of Homeland Security. In 2005, GSA reorganized, merging the Federal Supply Service and Federal Technology Service business lines into the Federal Acquisition Service. On April 3, 2009, President Barack Obama nominated Martha N. Johnson to serve as GSA Administrator. After a nine-month delay, the United States Senate confirmed her nomination on February 4, 2010. On April 2, 2012, Johnson resigned in the wake of a management-deficiency report detailing improper payments for a 2010 "Western Regions" training conference put on by the Public Buildings Service in Las Vegas.
In July 1991, GSA contractors began the excavation of what is now the Ted Weiss Federal Building in New York City. The planning for that building
Packet switching

Packet switching is a method of grouping data transmitted over a digital network into packets. Packets consist of a header and a payload. Data in the header are used by networking hardware to direct the packet to its destination, where the payload is extracted and used by application software. Packet switching is the primary basis for data communications in computer networks worldwide. In the early 1960s, American computer scientist Paul Baran developed the concept of Distributed Adaptive Message Block Switching, with the goal of providing a fault-tolerant, efficient routing method for telecommunication messages, as part of a research program at the RAND Corporation funded by the US Department of Defense. This concept contrasted with and contradicted the then-established principles of pre-allocation of network bandwidth, entrenched by the development of telecommunications in the Bell System. The new concept found little resonance among network implementers until the independent work of British computer scientist Donald Davies at the National Physical Laboratory in 1965.
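The header/payload split can be made concrete with a toy packet format (hypothetical, for illustration only; it is not any real protocol): a fixed-size header carrying destination address, source address, and payload length, followed by the payload bytes. Forwarding hardware needs to read only the header; the payload matters only to the application at the destination.

```python
import struct

# Hypothetical minimal header: 4-byte destination address, 4-byte source
# address, 2-byte payload length, in network (big-endian) byte order.
HEADER = struct.Struct("!4s4sH")

def build_packet(dst: bytes, src: bytes, payload: bytes) -> bytes:
    """Prepend the fixed header to the payload."""
    return HEADER.pack(dst, src, len(payload)) + payload

def parse_packet(packet: bytes):
    """Split a packet back into (dst, src, payload)."""
    dst, src, length = HEADER.unpack_from(packet)
    payload = packet[HEADER.size:HEADER.size + length]
    return dst, src, payload

pkt = build_packet(b"\x0a\x00\x00\x02", b"\x0a\x00\x00\x01", b"hello")
dst, src, payload = parse_packet(pkt)
# A switch would inspect `dst` to choose an output port; only the endpoint
# application ever looks inside `payload`.
```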
Davies is credited with coining the modern term packet switching and with inspiring numerous packet switching networks in the decade that followed, including the incorporation of the concept in the early ARPANET in the United States. A simple definition of packet switching is: the routing and transferring of data by means of addressed packets, so that a channel is occupied only during the transmission of a packet; upon completion of the transmission, the channel is made available for the transfer of other traffic. Packet switching allows delivery of variable-bit-rate data streams, realized as sequences of packets, over a computer network that allocates transmission resources as needed, using statistical multiplexing or dynamic bandwidth allocation techniques. As they traverse networking hardware such as switches and routers, packets are received, buffered, and retransmitted, resulting in variable latency and throughput depending on the link capacity and the traffic load on the network. Intermediate network nodes typically forward packets asynchronously using first-in, first-out buffering, but they may instead forward them according to a scheduling discipline for fair queuing, traffic shaping, or differentiated or guaranteed quality of service, such as weighted fair queuing or the leaky bucket.
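Of the disciplines just mentioned, the leaky bucket is the simplest to sketch. The following is a minimal, illustrative implementation (not any particular router's algorithm): the bucket drains at a fixed rate, each arriving packet adds its size, and a packet that would overflow the bucket is non-conforming and must be dropped or queued.

```python
class LeakyBucket:
    """Minimal leaky-bucket traffic shaper (illustrative sketch).
    The bucket drains at `rate` bytes per second; an arriving packet
    conforms if adding its size does not overflow `capacity` bytes."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.level = 0.0     # current bucket fill, in bytes
        self.last = 0.0      # timestamp of the previous arrival

    def offer(self, now: float, size: int) -> bool:
        # Drain the bucket for the elapsed time, then try to add the packet.
        self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + size <= self.capacity:
            self.level += size
            return True      # conforming: forward the packet
        return False         # non-conforming: drop or queue it

bucket = LeakyBucket(rate=1000, capacity=1500)   # 1000 B/s, 1500 B burst
accepted = [bucket.offer(t, 600) for t in (0.0, 0.1, 0.2, 1.2)]
# Three 600-byte packets in 0.2 s exceed the burst allowance, so the third
# is rejected; by t = 1.2 s the bucket has drained and traffic flows again.
```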
Packet-based communication may also be implemented without intermediate forwarding nodes; in the case of a shared physical medium, the packets may be delivered according to a multiple access scheme. Packet switching contrasts with another principal networking paradigm, circuit switching, a method that pre-allocates dedicated network bandwidth for each communication session, with a constant bit rate and latency between nodes. In cases of billable services, such as cellular communication services, circuit switching is characterized by a fee per unit of connection time, even when no data is transferred, while packet switching may be characterized by a fee per unit of information transmitted, such as characters, packets, or messages. The concept of switching small blocks of data was first explored independently by Paul Baran at the RAND Corporation in the US and by Donald Davies at the National Physical Laboratory in the UK, starting in the late 1950s. In the late 1950s, the US Air Force established a wide area network for the Semi-Automatic Ground Environment radar defense system.
The Air Force sought a system that might survive a nuclear attack and still enable a response, thus diminishing the attractiveness of a first strike to an adversary. Baran developed the concept of distributed adaptive message block switching in support of this initiative. The concept was first presented to the Air Force in the summer of 1961 as briefing B-265, then published as RAND report P-2626 in 1962 and as report RM-3420 in 1964. Report P-2626 described a general architecture for a large-scale, survivable communications network. The work focuses on three key ideas: use of a decentralized network with multiple paths between any two points, division of user messages into message blocks, and delivery of these messages by store-and-forward switching. Davies developed a similar message-routing concept in 1965; he called it packet switching and proposed building a nationwide network in the UK. He gave a talk on the proposal in 1966, after which a person from the Ministry of Defence told him about Baran's work. Roger Scantlebury, a member of Davies' team, met Lawrence Roberts at the 1967 ACM Symposium on Operating Systems Principles and suggested packet switching for use in the ARPANET.
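The survivability argument behind Baran's "multiple paths" idea can be sketched on a small hypothetical mesh: as long as redundant paths exist, store-and-forward routing can deliver a message block even after a link is destroyed. The topology and search routine below are illustrative, not Baran's actual design.

```python
from collections import deque

def find_path(links, src, dst):
    """Breadth-first search over an undirected link set; returns a node list
    for the shortest hop-count path, or None if dst is unreachable."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# A hypothetical 5-node mesh with redundant routes between A and C.
mesh = {("A", "B"), ("B", "C"), ("A", "D"), ("D", "E"), ("E", "C"), ("B", "E")}
primary = find_path(mesh, "A", "C")                   # shortest route A-B-C
survivor = find_path(mesh - {("B", "C")}, "A", "C")   # link B-C destroyed
# The network still delivers A -> C over a longer surviving route.
```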
Davies had chosen some of the same parameters for his original network design as Baran did, such as a packet size of 1024 bits. In 1966, Davies proposed that a network should be built at the laboratory to serve the needs of NPL and to prove the feasibility of packet switching. After a pilot experiment in 1967, the NPL Data Communications Network entered service in 1969. Leonard Kleinrock conducted early research in queueing theory for his doctoral dissertation at MIT in 1961-62 and published it as a book in 1964, in the field of digital message switching. Following the 1967 ACM Symposium, Lawrence Roberts asked Kleinrock to carry out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. The NPL team also carried out simulation work on packet networks. The French CYCLADES network, designed by Louis Pouzin in the early 1970s, was the first to employ what came to be known as the end-to-end principle and to make the hosts responsible for the reliable delivery of data on a packet-switched network, rather than this being a centralized service of the network itself.
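The flavor of the queueing-theoretic modeling Kleinrock applied to packet networks can be shown with the textbook M/M/1 result, where the mean time a packet spends at a store-and-forward node is T = 1 / (mu - lambda) for service rate mu and arrival rate lambda. The numbers below are illustrative, not drawn from any ARPANET measurement.

```python
def mm1_delay(arrival_rate: float, service_rate: float) -> float:
    """Mean time in system for an M/M/1 queue: T = 1 / (mu - lambda).
    A standard queueing-theory formula of the kind used to model
    store-and-forward packet delay; valid only while lambda < mu."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# A link that serves 1000 packets/s: delay grows sharply as load nears 1.
light = mm1_delay(100, 1000)    # 10% load: ~1.1 ms per packet
heavy = mm1_delay(900, 1000)    # 90% load: 10 ms per packet
```

The nonlinearity, with delay exploding as offered load approaches capacity, is exactly the behavior that made analytical models valuable when sizing early packet networks.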