General Services Administration
The General Services Administration (GSA), an independent agency of the United States government, was established in 1949 to help manage and support the basic functioning of federal agencies. GSA supplies products and communications for U.S. government offices, provides transportation and office space to federal employees, and develops government-wide cost-minimizing policies, among other management tasks. GSA employs about 12,000 federal workers and has an annual operating budget of $20.9 billion. GSA oversees $66 billion of procurement annually and contributes to the management of about $500 billion in U.S. federal property, divided chiefly among 8,700 owned and leased buildings and a 215,000-vehicle motor pool. Among the real estate assets managed by GSA are the Ronald Reagan Building and International Trade Center in Washington, D.C., the largest U.S. federal building after the Pentagon, and the Hart-Dole-Inouye Federal Center. GSA's business lines include the Federal Acquisition Service (FAS) and the Public Buildings Service, as well as several staff offices, including the Office of Government-wide Policy, the Office of Small Business Utilization, and the Office of Mission Assurance.
As part of FAS, GSA's Technology Transformation Services helps federal agencies improve delivery of information and services to the public. Key initiatives include FedRAMP, Cloud.gov, the USAGov platform, Data.gov, Performance.gov, and Challenge.gov. GSA is a member of the Procurement G6, an informal group leading the use of framework agreements and e-procurement instruments in public procurement. In 1947 President Harry Truman asked former President Herbert Hoover to lead what became known as the Hoover Commission to make recommendations for reorganizing the operations of the federal government. One of the recommendations of the commission was the establishment of an "Office of the General Services". This proposed office would combine the responsibilities of the U.S. Treasury Department's Bureau of Federal Supply and Office of Contract Settlement, the National Archives Establishment, all functions of the Federal Works Agency (including the Public Buildings Administration and the Public Roads Administration), and the War Assets Administration. GSA became an independent agency on July 1, 1949, after the passage of the Federal Property and Administrative Services Act.
General Jess Larson, Administrator of the War Assets Administration, was named GSA's first Administrator. The first job awaiting Administrator Larson and the newly formed GSA was a complete renovation of the White House; the structure had fallen into such a state of disrepair by 1949 that one inspector of the time said the historic building was standing "purely from habit." Larson explained the nature of the total renovation in depth: "In order to make the White House structurally sound, it was necessary to dismantle, I mean dismantle, everything from the White House except the four walls, which were constructed of stone. Everything, except the four walls without a roof, was stripped down, that's where the work started." GSA worked with President Truman and First Lady Bess Truman to ensure that the new agency's first major project would be a success, and completed the renovation in 1952. In 1986 GSA's headquarters, the U.S. General Services Administration Building at Eighteenth and F Streets NW, was listed on the National Register of Historic Places; the building had earlier served as Interior Department offices.
In 1960 GSA created the Federal Telecommunications System, a government-wide intercity telephone system. In 1962 the Ad Hoc Committee on Federal Office Space created a new building program to address obsolete office buildings in Washington, D.C., resulting in the construction of many of the offices that now line Independence Avenue. In 1970 the Nixon administration created the Consumer Product Information Coordinating Center, now part of USAGov. In 1972 GSA established the Automated Data and Telecommunications Service, which later became the Office of Information Resources Management. In 1973 GSA created the Office of Federal Management Policy. In 1974 the Federal Buildings Fund was initiated, allowing GSA to issue rent bills to federal agencies. GSA's Office of Acquisition Policy centralized procurement policy in 1978. GSA was responsible for emergency preparedness and for stockpiling strategic materials to be used in wartime until these functions were transferred to the newly created Federal Emergency Management Agency in 1979.
In 1984 GSA introduced the federal government to the use of charge cards, known as the GSA SmartPay system. The National Archives and Records Administration was spun off into an independent agency in 1985; the same year, GSA began to provide government-wide policy oversight and guidance for federal real property management as a result of an executive order signed by President Ronald Reagan. In 2003 the Federal Protective Service was moved to the Department of Homeland Security. In 2005 GSA reorganized, merging the Federal Supply Service and Federal Technology Service business lines into the Federal Acquisition Service. On April 3, 2009, President Barack Obama nominated Martha N. Johnson to serve as GSA Administrator. After a nine-month delay, the United States Senate confirmed her nomination on February 4, 2010. On April 2, 2012, Johnson resigned in the wake of a management-deficiency report that detailed improper payments for a 2010 "Western Regions" training conference put on by the Public Buildings Service in Las Vegas.
In July 1991 GSA contractors began the excavation of what is now the Ted Weiss Federal Building in New York City.
Digital loop carrier
A digital loop carrier (DLC) is a system which uses digital transmission to extend the range of the local loop farther than would be possible using only twisted-pair copper wires. A DLC digitizes and multiplexes the individual signals carried by the local loops onto a single datastream on the DLC segment. Subscriber loop carrier systems address a number of problems: electrical constraints on long loops, insufficient available cable pairs, cable route congestion, construction challenges when limited cable pairs are available, and the expense of cable and the associated labour-intensive installation work. Long loops, such as those terminating more than 18,000 feet from the central office, pose electrical challenges: when the subscriber goes off-hook, the cable pair behaves like a single loop inductance coil with a -48 V DC potential and a current of 20–50 mA DC. Current values vary with cable gauge; a minimum current of around 20 mA DC is required to convey terminal signalling information to the network.
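As a rough illustration of why the 18,000-foot figure matters, the sketch below checks whether the DC loop current stays above the 20 mA minimum as the loop gets longer. The resistance and termination figures are assumed typical textbook values, not taken from this article.

```python
# Illustrative check of DC loop current on long subscriber loops.
# All component values below are assumptions for the example:
#   26 AWG copper pair: ~83 ohms of loop (round-trip) resistance per 1000 ft
#   central-office battery feed: ~400 ohms; telephone set: ~200 ohms DC

BATTERY_V = 48.0          # central office battery, volts DC
LOOP_OHMS_PER_KFT = 83.0  # assumed 26 AWG round-trip resistance per kilofoot
FEED_OHMS = 400.0         # assumed battery-feed bridge resistance
SET_OHMS = 200.0          # assumed telephone set DC resistance
MIN_CURRENT_A = 0.020     # ~20 mA minimum, per the text

def loop_current(loop_kft: float) -> float:
    """DC loop current for a loop of the given length in kilofeet."""
    total_ohms = loop_kft * LOOP_OHMS_PER_KFT + FEED_OHMS + SET_OHMS
    return BATTERY_V / total_ohms

for kft in (9, 18, 24):
    amps = loop_current(kft)
    verdict = "OK" if amps >= MIN_CURRENT_A else "too low"
    print(f"{kft:>2} kft: {amps * 1000:.1f} mA ({verdict})")
```

With these assumed figures, an 18 kft loop sits just above the 20 mA floor while a 24 kft loop falls below it, which is consistent with the electrical challenge the text describes.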
There is also a minimum power level required to provide adequate volume for the voice signal. A variety of schemes were implemented before DLC technology to offset the impedance that long loops presented to signalling and volume levels. One was the use of heavy-gauge conductors, up to 19 gauge, which were costly and bulky; the heavy-gauge cables yielded far fewer pairs per cable and led to early congestion in cable routes at bridge crossings and other areas of limited space. Another was increased battery voltage, a violation of operating standards that could pose a safety hazard. Amplifiers could be added to power the voice signal on long loops; this, however, requires volumes of auxiliary equipment, a myriad of cross-wiring points, and extensive record-keeping. Signal regeneration and signal extension equipment could be added, to which the comments regarding amplifiers apply as well. Finally, loading coils could be added to reduce the attenuation of voice signals over long loops; these are detrimental to newer transmission technologies that use the local loop, such as DSL, and must be removed.
DLC eliminates the need for these remedies by moving the line card, which digitises the voice signal for use by the PSTN, out closer to the customer. Once the voice signal is digitised, it can be manipulated and is no longer subject to the vagaries of the analog loop caused by distance, impedance and noise; the DLC solution was dubbed "pair gain". In a typical configuration, DLC remote terminals are installed in new neighbourhoods or buildings as a means of reducing the labour and complexity of installing individual local loops from the customer to the central office (CO). A fibre-optic cable, or several copper pairs serving the whole system, runs from the CO to the DLC remote terminal and replaces the individual pair otherwise needed for each loop. DLC remote terminals are housed in serving area interfaces, metal cabinets alongside or near roadways that overlie communications rights-of-way. With the growth in popularity of digital subscriber line service and the benefits provided by the shorter metallic loops used with DLC systems, digital loop carriers are sometimes integrated with digital subscriber line access multiplexers, both systems taking advantage of the digital transmission link from the DLC to the CO. Fibre-in-the-loop (FITL) systems are functionally equivalent to DLC.
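To make "pair gain" concrete, the sketch below runs the arithmetic for a T1-based DLC, using the classic SLC-96 layout as an assumed example; the span count and protection arrangement are illustrative assumptions, not details from the text.

```python
# Back-of-the-envelope pair-gain arithmetic for an assumed T1-based DLC.

VOICE_CHANNEL_BPS = 64_000   # one digitised voice channel (8 bits x 8 kHz sampling)
CHANNELS_PER_T1 = 24
T1_FRAMING_BPS = 8_000       # one framing bit per 193-bit frame, 8000 frames/s

t1_rate = CHANNELS_PER_T1 * VOICE_CHANNEL_BPS + T1_FRAMING_BPS
print(f"T1 line rate: {t1_rate / 1e6:.3f} Mbit/s")   # -> 1.544 Mbit/s

subscribers = 96     # lines served by the remote terminal (SLC-96 example)
t1_spans = 5         # assumed: 4 working spans plus 1 protection span
pairs_used = t1_spans * 2   # each T1 span needs a transmit pair and a receive pair

print(f"pair gain: {subscribers} subscriber loops over {pairs_used} pairs "
      f"= {subscribers / pairs_used:.1f}:1")
```

Under these assumptions, 96 individual loops back to the CO collapse into 10 copper pairs, which is the labour and congestion saving the text attributes to DLC.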
FITL accomplishes the same two primary functions DLC was intended for: pair gain and the elimination of the electrical constraints of long metallic loops. FITL architectures vary from fibre feeder plants to "fibre to the curb" and "fibre to the home", where an optical network unit is located at each home. See also: Remote concentrator.
Central processing unit
A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic and input/output operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit, distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. The form and implementation of CPUs have changed over the course of their history, but their fundamental operation remains unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching and execution of instructions by directing the coordinated operations of the ALU, registers and other components.
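The coordination the control unit performs can be sketched as a fetch-decode-execute loop. The toy machine below is purely illustrative: its three instructions, their encodings and its single accumulator register are invented for the example, not drawn from any real CPU.

```python
# A toy fetch-decode-execute loop, sketching how a control unit
# coordinates memory, a register and the ALU.

# Invented instruction set: load an immediate value, add a value, halt.
LOAD, ADD, HALT = 0, 1, 2

# Program in "memory": acc = 7; acc = acc + 5; halt.
memory = [
    (LOAD, 7),
    (ADD, 5),
    (HALT, 0),
]

pc = 0          # program counter register
acc = 0         # accumulator register
running = True

while running:
    opcode, operand = memory[pc]   # fetch the instruction the PC points at
    pc += 1                        # advance to the next instruction
    if opcode == LOAD:             # decode and execute
        acc = operand
    elif opcode == ADD:
        acc = acc + operand        # the ALU operation, with a register operand
    elif opcode == HALT:
        running = False

print("accumulator:", acc)         # -> accumulator: 12
```

The stored-program idea discussed below is visible even here: changing the contents of `memory` changes what the machine does, with no rewiring.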
Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer. Some computers employ a multi-core processor, a single chip containing two or more CPUs called "cores". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". Since the term "CPU" is defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer had been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner.
On June 30, 1945, before ENIAC was completed, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has given way to the development of multi-purpose processors produced in large quantities; this standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit.
The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, used a stored-program design based on punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both.
Most modern CPUs are primarily von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications. Early CPUs used relays and vacuum tubes as switching elements; the overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages they afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were common at this time, limited largely by the speed of the switching devices they were built with.
Server (computing)
In computing, a server is a computer program or a device that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model, in which a single overall computation is distributed across multiple processes or devices. Servers can provide various functionalities, called "services", such as sharing data or resources among multiple clients, or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers. Client–server systems are today most often implemented by the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client with a result or acknowledgement. Designating a computer as "server-class hardware" implies that it is specialized for running servers on it.
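The request-response exchange can be shown in a few lines of standard-library Python. This is a minimal sketch, not a production server; the loopback address, port number and the uppercase-echo "protocol" are invented for the example.

```python
# Minimal request-response sketch of the client-server model.
import socket
import threading

HOST, PORT = "127.0.0.1", 5050            # assumed address for the example

# Bind and listen before any client tries to connect.
srv = socket.create_server((HOST, PORT))

def handle_one_client() -> None:
    conn, _addr = srv.accept()            # wait for a client connection
    with conn:
        request = conn.recv(1024)         # read the client's request
        conn.sendall(request.upper())     # perform an action, send the response

threading.Thread(target=handle_one_client, daemon=True).start()

# The client sends a request and blocks until the response arrives.
with socket.create_connection((HOST, PORT)) as cli:
    cli.sendall(b"hello, server")
    print(cli.recv(1024))                 # -> b'HELLO, SERVER'
srv.close()
```

Here both processes happen to run on the same device, which the text notes is permitted by the model; only the socket address would change if the server were on a different host.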
This implies that it is more powerful and reliable than standard personal computers, though alternatively, large computing clusters may be composed of many simple, replaceable server components. The use of the word server in computing comes from queueing theory, where it dates to the mid-20th century, being notably used in Kendall's paper that introduced Kendall's notation. In earlier papers, such as Erlang's, more concrete terms such as "operators" are used. In computing, "server" dates at least to RFC 5, one of the earliest documents describing ARPANET, where it is contrasted with "user", distinguishing two types of host: "server-host" and "user-host". The use of "serving" also dates to early documents, such as RFC 4, which contrasts "serving-host" with "using-host". The Jargon File defines "server" in the common sense of a process performing service for requests, usually remote, with the 1981 version reading: SERVER n. A kind of DAEMON which performs a service for the requester, which runs on a computer other than the one on which the server runs.
Strictly speaking, the term server refers to a computer program or process. Through metonymy, it also refers to a device used for running such programs. On a network, such a device is called a host. In addition to server, the words serve and service are used, though servicer and servant are not. The word service may refer to the abstract form of functionality, e.g. a Web service; alternatively, it may refer to a computer program that turns a computer into a server, e.g. a Windows service. Originally used as in "servers serve users", in the sense of "obey", today one often says that "servers serve data", in the same sense as "give": for instance, web servers "serve web pages to users" or "service their requests". The server is part of the client–server model, in which the nature of communication between a client and server is request and response; this is in contrast with the peer-to-peer model. In principle, any computerized process that can be used or called by another process is a server, and the calling process or processes act as clients; thus any general-purpose computer connected to a network can host servers.
For example, if files on a device are shared by some process, that process is a file server. Web server software can run on any capable computer, so a laptop or a personal computer can host a web server. While request–response is the most common client–server design, there are others, such as the publish–subscribe pattern. In the publish–subscribe pattern, clients register with a pub–sub server, subscribing to specified types of messages. Thereafter, the pub–sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request–response. The purpose of a server is to share data as well as to distribute work, and a server computer can serve its own computer programs as well. The entire structure of the Internet is based upon a client–server model: high-level root nameservers, DNS, and routers direct the traffic on the Internet. There are millions of servers connected to the Internet, running continuously throughout the world, and virtually every action taken by an ordinary Internet user requires one or more interactions with one or more servers.
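The push behaviour of publish-subscribe can be sketched in-process. The broker class and topic names below are invented for illustration; a real pub-sub server would additionally handle networking, persistence and delivery guarantees.

```python
# A minimal in-process sketch of the publish-subscribe pattern:
# clients register interest in message types, and the broker pushes
# matching messages without any further requests from the clients.
from collections import defaultdict
from typing import Callable

class PubSubBroker:
    def __init__(self) -> None:
        # topic name -> list of subscriber callbacks
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[str], None]) -> None:
        """Register a client callback for one type of message."""
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: str) -> None:
        """Push the message to every subscriber of the topic."""
        for callback in self._subscribers[topic]:
            callback(message)

broker = PubSubBroker()
broker.subscribe("weather", lambda m: print("client A got:", m))
broker.subscribe("weather", lambda m: print("client B got:", m))

broker.publish("weather", "rain expected")       # both clients receive it
broker.publish("sports", "no one is listening")  # no subscribers: dropped
```

Note the inversion relative to request-response: after the one-time subscription, all later traffic is initiated by the server side.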
There are exceptions. Hardware requirements for servers vary, depending on the server's purpose and its software. Since servers are usually accessed over a network, many run unattended without a computer monitor, input devices, audio hardware or USB interfaces. Many servers do not have a graphical user interface; they are managed remotely. Remote management can be conducted via various methods including Microsoft Management Console, PowerShell, SSH and browser-based out-of-band management systems such as Dell's iDRAC or HP's iLO. Large traditional single servers would need to be run for long periods without interruption.
Telephone exchange
A telephone exchange is a telecommunications system used in the public switched telephone network or in large enterprises. An exchange consists of electronic components and, in older systems, human operators that interconnect telephone subscriber lines or virtual circuits of digital systems to establish telephone calls between subscribers. In historical perspective, telecommunication terms have been used with different semantics over time. The term telephone exchange is often used synonymously with central office, a Bell System term. A central office is defined as a building used to house the inside plant equipment of several telephone exchanges, each serving a certain geographical area; such an area has also been referred to as the exchange. Central office locations may be identified in North America as wire centers, designating a facility from which a telephone obtains dial tone. For business and billing purposes, telephony carriers also define rate centers, which in larger cities may be clusters of central offices, to define specified geographical locations for determining distance measurements.
In the United States and Canada, the Bell System established in the 1940s a uniform system of identifying central offices with a three-digit central office code, used as a prefix to subscriber telephone numbers. All central offices within a larger region, typically aggregated by state, were assigned a common numbering plan area code. With the development of international and transoceanic telephone trunks, driven by direct customer dialing, similar efforts at systematic organization of the telephone networks occurred in many countries in the mid-20th century. For corporate or enterprise use, a private telephone exchange is referred to as a private branch exchange (PBX) when it has connections to the public switched telephone network. A PBX is installed in enterprise facilities, collocated with large office spaces or within an organizational campus, to serve the local private telephone system and any private leased line circuits. Smaller installations might deploy a PBX or key telephone system in the office of a receptionist.
In the era of the electrical telegraph, post offices, railway stations, the more important governmental centers, stock exchanges, a few nationally distributed newspapers, the largest internationally important corporations, and wealthy individuals were the principal users of telegraphs. Although telephone devices existed before the invention of the telephone exchange, their success and economical operation would have been impossible under the schema and structure of the contemporary telegraph, because prior to the invention of the telephone exchange switchboard, early telephones were hardwired to and communicated with only a single other telephone. A telephone exchange is a telephone system, located at service centers responsible for a small geographic area, that provides the switching or interconnection of two or more individual subscriber lines for calls made between them, rather than requiring direct lines between subscriber stations; this made it possible for subscribers to call each other at homes, businesses, or public spaces.
These exchanges made telephony an available and comfortable communication tool for everyday use, and they gave the impetus for the creation of a whole new industrial sector. As with the invention of the telephone itself, the honor of "first telephone exchange" has several claimants. One of the first to propose a telephone exchange was the Hungarian Tivadar Puskás in 1877, while he was working for Thomas Edison. The first experimental telephone exchange was based on the ideas of Puskás and was built by the Bell Telephone Company in Boston in 1877. The world's first state-administered telephone exchange opened on November 12, 1877 in Friedrichsberg, close to Berlin, under the direction of Heinrich von Stephan. George W. Coy designed and built the first commercial US telephone exchange, which opened in New Haven, Connecticut in January 1878. The switchboard was built from "carriage bolts, handles from teapot lids and bustle wire" and could handle two simultaneous conversations. Charles Glidden is also credited with establishing an exchange in Lowell, Massachusetts, with 50 subscribers in 1878.
In Europe, other early telephone exchanges were based in London and Manchester, both of which opened under Bell patents in 1879. Belgium had its first International Bell exchange a year later. In 1887 Puskás introduced the multiplex switchboard. Exchanges consisted of one to several hundred plug boards staffed by switchboard operators. Each operator sat in front of a vertical panel containing banks of ¼-inch tip-ring-sleeve jacks, each of which was the local termination of a subscriber's telephone line. In front of the jack panel lay a horizontal panel containing two rows of patch cords, each pair connected to a cord circuit. When a calling party lifted the receiver, the local loop current lit a signal lamp near the jack. The operator responded by inserting the rear cord into the subscriber's jack and switched her headset into the circuit to ask, "Number, please?" For a local call, the operator inserted the front cord of the pair into the called party's local jack and started the ringing cycle. For a long-distance call, she plugged into a trunk circuit to connect to another operator in another bank of boards or at a remote central office.
In 1918, the average time to complete the connection for a long-distance call was 15 minutes. Early manual switchboards required the operator to operate listening keys and ringing keys, but by the late 1910s and 1920s, advances in switchboard technology led to features which allowed calls to be connected automatically.
Bus (computing)
In computer architecture, a bus is a communication system that transfers data between components inside a computer, or between computers. This expression covers all related hardware components and software, including communication protocols. Early computer buses were parallel electrical wires with multiple hardware connections, but the term is now used for any physical arrangement that provides the same logical function as a parallel electrical bus. Modern computer buses can use both parallel and bit-serial connections, and can be wired in either a multidrop or daisy-chain topology, or connected by switched hubs, as in the case of USB. Computer systems consist of three main parts: the central processing unit that processes data, memory that holds the programs and data to be processed, and I/O devices (peripherals) that communicate with the outside world. An early computer might contain a hand-wired CPU of vacuum tubes, a magnetic drum for main memory, and a punched tape and printer for reading and writing data respectively.
A modern system might have a multi-core CPU, DDR4 SDRAM for memory, a solid-state drive for secondary storage, a graphics card and LCD as a display system, a mouse and keyboard for interaction, and a Wi-Fi connection for networking. In both examples, computer buses of one form or another move data between all of these devices. In most traditional computer architectures, the CPU and main memory tend to be tightly coupled. A microprocessor conventionally is a single chip which has a number of electrical connections on its pins that can be used to select an "address" in the main memory, and another set of pins to read and write the data stored at that location. In most cases, the CPU and memory share signalling characteristics and operate in synchrony; the bus connecting the CPU and memory is one of the defining characteristics of the system, often referred to as the system bus. It is possible to allow peripherals to communicate with memory in the same fashion, attaching adaptors in the form of expansion cards directly to the system bus.
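The split between address pins and data pins can be sketched in a few lines. The model below is purely illustrative; the 16-bit address width and 8-bit data width are assumptions chosen for the example, not figures from the text.

```python
# A toy model of a CPU-memory system bus: one set of lines selects an
# address, another carries the data being read or written.

ADDR_BITS = 16                       # assumed address bus width
DATA_BITS = 8                        # assumed data bus width
ADDR_MASK = (1 << ADDR_BITS) - 1     # 0xFFFF
DATA_MASK = (1 << DATA_BITS) - 1     # 0xFF

memory = bytearray(1 << ADDR_BITS)   # 64 KiB reachable over the address lines

def bus_write(address: int, value: int) -> None:
    """Drive the address lines, then the data lines, with write asserted."""
    memory[address & ADDR_MASK] = value & DATA_MASK

def bus_read(address: int) -> int:
    """Drive the address lines and sample the data lines."""
    return memory[address & ADDR_MASK]

bus_write(0x1234, 0xAB)
print(hex(bus_read(0x1234)))         # -> 0xab
```

The masking mirrors the physical reality the paragraph describes: a device can only address as much memory, and move as much data per transfer, as it has pins for.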
This is accomplished through some sort of standardized electrical connector, several of these forming the expansion bus or local bus. However, as the performance difference between the CPU and peripherals varies widely, some solution is needed to ensure that peripherals do not slow overall system performance. Many CPUs feature a second set of pins similar to those for communicating with memory, but able to operate at different speeds and using different protocols. Others use smart controllers to place the data directly in memory, a concept known as direct memory access. Most modern systems combine both solutions. As the number of potential peripherals grew, using an expansion card for every peripheral became untenable. This has led to the introduction of bus systems designed to support multiple peripherals. Common examples are the SATA ports in modern computers, which allow a number of hard drives to be connected without the need for a card. However, these high-performance systems are too expensive to implement in low-end devices, like a mouse.
This has led to the parallel development of a number of low-performance bus systems for these solutions, the most common example being the standardized Universal Serial Bus. All such examples may be referred to as peripheral buses, although this terminology is not universal. In modern systems the performance difference between the CPU and main memory has grown so great that increasing amounts of high-speed memory are built directly into the CPU, known as a cache. In such systems, CPUs communicate using high-performance buses that operate at speeds much greater than memory, and communicate with memory using protocols similar to those used for peripherals in the past. These system buses are also used to communicate with most other peripherals, through adaptors, which in turn talk to other peripherals and controllers. Such systems are architecturally more similar to multicomputers, communicating over a bus rather than a network. In these cases, expansion buses are entirely separate and no longer share any architecture with their host CPU.
What would have been a system bus is now often known as a front-side bus. Given these changes, the classical terms "system", "expansion" and "peripheral" no longer have the same connotations. Other common categorization systems are based on the bus's primary role, connecting devices internally or externally, PCI vs. SCSI for instance. However, many common modern bus systems can be used for both; other examples, like InfiniBand and I²C, were designed from the start to be used both internally and externally. The internal bus, also known as the internal data bus, memory bus, system bus or front-side bus, connects all the internal components of a computer, such as the CPU and memory, to the motherboard. Internal data buses are also referred to as local buses, because they are intended to connect to local devices. This bus is typically rather quick and is independent of the rest of the computer's operations. The external bus, or expansion bus, is made up of the electronic pathways that connect the different external devices, such as a printer, to the computer.
Buses can be parallel buses, which carry data words in parallel on multiple wires, or serial buses, which carry data in bit-serial form. The addition of extra power and control connections, differential drivers, and data connections in each direction usually means that most serial buses have more conductors than the theoretical minimum of one.
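To give a feel for the parallel-versus-serial tradeoff, the sketch below runs peak-throughput arithmetic for one well-known bus of each kind. The PCI and PCI Express rates used are standard published figures, but the comparison itself is only illustrative.

```python
# Rough peak-throughput arithmetic: a parallel bus versus a serial one.

def throughput_bps(width_bits: int, transfers_per_sec: float) -> float:
    """Peak throughput = number of data lines x transfer rate per line."""
    return width_bits * transfers_per_sec

# Classic parallel PCI: 32 data lines clocked at 33 MHz.
pci = throughput_bps(32, 33e6)            # ~1.07 Gbit/s (~133 MB/s)

# One serial PCI Express 3.0 lane: 8 GT/s with 128b/130b line encoding.
pcie_lane = throughput_bps(1, 8e9) * (128 / 130)   # ~7.88 Gbit/s usable

print(f"PCI, 32-bit @ 33 MHz : {pci / 1e9:.2f} Gbit/s peak")
print(f"PCIe 3.0, one lane   : {pcie_lane / 1e9:.2f} Gbit/s peak")
```

A single serial lane at a very high symbol rate outruns the whole 32-wire parallel bus, which is why modern designs favour fast serial links (often several lanes bonded together) over ever-wider parallel buses.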