Synchronization is the coordination of events to operate a system in unison. The conductor of an orchestra keeps the orchestra synchronized, or in time. Systems that operate with all parts in synchrony are said to be synchronous or in sync, and those that are not are asynchronous. Today, time synchronization can occur between systems around the world through satellite navigation signals. Time-keeping and synchronization of clocks have been critical problems in long-distance ocean navigation. Before radio navigation and satellite-based navigation, navigators required accurate time in conjunction with astronomical observations to determine how far east or west their vessel had traveled; the invention of an accurate marine chronometer revolutionized marine navigation. By the end of the 19th century, important ports provided time signals in the form of a signal gun, flag, or dropping time ball so that mariners could check their chronometers for error. Synchronization was also important in the operation of 19th-century railways, these being the first major means of transport fast enough for differences in local time between adjacent towns to be noticeable.
Each line handled the problem by synchronizing all its stations to headquarters as a standard railroad time. In some territories, sharing of single railroad tracks was controlled by the timetable; the need for strict timekeeping led the companies to settle on one standard, and civil authorities eventually abandoned local mean solar time in favor of it. In electrical engineering terms, for digital logic and data transfer, a synchronous circuit requires a clock signal. However, the word "clock" in this sense differs from its typical sense of a device that keeps track of time-of-day. In a different sense, electronic systems are sometimes synchronized to make events at points far apart appear simultaneous or near-simultaneous from a certain perspective. Timekeeping technologies such as GPS satellites and the Network Time Protocol provide real-time access to a close approximation of the UTC timescale and are used for many terrestrial synchronization applications of this kind.
Synchronization is an important concept in fields including computer science, cryptography, multimedia, music, neuroscience, photography, physics, synthesizers, and telecommunication. Synchronization of multiple interacting dynamical systems can occur when the systems are autonomous oscillators. For instance, integrate-and-fire oscillators with either two-way or one-way coupling can synchronize when the strength of the coupling is greater than the differences among the free-running natural oscillator frequencies. Poincaré phase oscillators are model systems that can interact and synchronize within random or regular networks. In the case of global synchronization of phase oscillators, an abrupt transition from unsynchronized to full synchronization takes place when the coupling strength exceeds a critical threshold; this is known as the Kuramoto model phase transition. Synchronization is an emergent property that occurs in a broad range of dynamical systems, including neural signaling, the beating of the heart, and the flashing of fireflies.
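The Kuramoto transition described above can be seen in a few lines of simulation. The following is a minimal sketch, not tied to any particular study: the oscillator count, frequency spread, time step, and coupling values are illustrative assumptions. Oscillators with Gaussian-spread natural frequencies are integrated under all-to-all coupling, and the order parameter r measures coherence (near 0 when incoherent, near 1 when synchronized).

```python
import math
import random

def kuramoto_step(phases, omegas, K, dt=0.05):
    """One Euler step of the Kuramoto model with all-to-all coupling."""
    n = len(phases)
    return [
        p + dt * (w + (K / n) * sum(math.sin(q - p) for q in phases))
        for p, w in zip(phases, omegas)
    ]

def order_parameter(phases):
    """Coherence r in [0, 1]: ~0 for incoherent phases, 1 for full sync."""
    n = len(phases)
    re = sum(math.cos(p) for p in phases) / n
    im = sum(math.sin(p) for p in phases) / n
    return math.hypot(re, im)

random.seed(1)
n = 30
omegas = [random.gauss(0.0, 0.5) for _ in range(n)]  # natural frequencies

results = {}
for K in (0.1, 2.0):  # well below and well above the critical coupling
    phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    for _ in range(2000):
        phases = kuramoto_step(phases, omegas, K)
    results[K] = order_parameter(phases)
    print(f"K = {K}: r = {results[K]:.2f}")
```

With weak coupling the oscillators drift at their own frequencies and r stays small; above the critical threshold most of the population locks and r rises toward 1, mirroring the abrupt transition described in the text.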
Synchronization of movement is defined as similar movements between two or more people who are temporally aligned. This is different from mimicry. The idea of muscular bonding, that moving in unison fosters social closeness, sparked some of the first research into movement synchronization and its effects on human emotion. In groups, synchronization of movement has been shown to increase conformity and trust; however, more research on group synchronization is needed to determine its effects on the group as a whole and on individuals within a group. In dyads, groups of two people, synchronization has been demonstrated to increase affiliation, self-esteem, and altruistic behaviour, and to increase rapport. During arguments, synchrony between the arguing pair has been noted to decrease; however, it is not clear whether this is due to the change in emotion or to other factors. There is evidence that movement synchronization requires other people to cause its beneficial effects, as the effect on affiliation does not occur when one member of the dyad is synchronizing their movements to something outside the dyad.
This is known as interpersonal synchrony. There has been dispute regarding the true effect of synchrony in these studies. Research in this area detailing the positive effects of synchrony has attributed them to synchrony alone. Indeed, the Reinforcement of Cooperation Model suggests that perception of synchrony leads to reinforcement that cooperation is occurring, which leads to the pro-social effects of synchrony. More research is required to separate the effect of intentionality from the beneficial effect of synchrony. In film, synchronization refers to the alignment of image and sound in sound film. Synchronization is important in fields such as digital telephony and digital audio, where streams of sampled data are manipulated.
A modem is a hardware device that converts digital data into a signal suited to a transmission medium so that it can be transmitted from computer to computer. The goal is to produce a signal that can be transmitted and decoded to reproduce the original digital data. Modems can be used with almost any means of transmitting analog signals, from light-emitting diodes to radio. A common type of modem is one that turns the digital data of a computer into a modulated electrical signal for transmission over telephone lines, to be demodulated by another modem at the receiver side to recover the digital data. Modems are classified by the maximum amount of data they can send in a given unit of time, expressed in bits per second or bytes per second. Modems can also be classified by their symbol rate, measured in baud; the baud unit denotes symbols per second, or the number of times per second the modem sends a new signal. For example, the ITU V.21 standard used audio frequency-shift keying with two possible frequencies, corresponding to two distinct symbols, to carry 300 bits per second at 300 baud.
By contrast, the original ITU V.22 standard, which could transmit and receive four distinct symbols, carried 1,200 bits per second by sending 600 symbols per second using phase-shift keying. News wire services in the 1920s used multiplex devices that satisfied the definition of a modem. However, the modem function was incidental to the multiplexing function, so they are not usually included in the history of modems. Modems grew out of the need to connect teleprinters over ordinary phone lines instead of the more expensive leased lines used for current loop–based teleprinters and automated telegraphs. In 1941, the Allies developed a voice encryption system called SIGSALY, which used a vocoder to digitize speech, encrypted the speech with a one-time pad, and encoded the digital data as tones using frequency-shift keying. Mass-produced modems in the United States began as part of the SAGE air-defense system in 1958, connecting terminals at various airbases, radar sites, and command-and-control centers to the SAGE director centers scattered around the United States and Canada.
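The arithmetic behind the V.21 and V.22 figures above is simply bits per symbol times symbols per second: with 2^k distinct symbols, each symbol carries k bits. A quick sketch:

```python
import math

def gross_bit_rate(baud, num_symbols):
    """Gross bit rate from symbol rate: each of `num_symbols` distinct
    symbols carries log2(num_symbols) bits."""
    return baud * math.log2(num_symbols)

print(gross_bit_rate(300, 2))   # ITU V.21: 300 baud, 2 tones  -> 300.0 bit/s
print(gross_bit_rate(600, 4))   # ITU V.22: 600 baud, 4 phases -> 1200.0 bit/s
```

This is why baud and bit/s coincide only for two-symbol schemes like V.21; once a modem signals more than two distinct symbols, the bit rate exceeds the baud rate.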
SAGE modems were described by AT&T's Bell Labs as conforming to their newly published Bell 101 dataset standard. While they ran on dedicated telephone lines, the devices at each end were no different from commercial acoustically coupled Bell 101, 110 baud modems. The 201A and 201B Data-Phones were synchronous modems using two-bit-per-baud phase-shift keying. The 201A operated half-duplex at 2,000 bit/s over normal phone lines, while the 201B provided full-duplex 2,400 bit/s service on four-wire leased lines, the send and receive channels each running on their own set of two wires. The famous Bell 103A dataset standard was introduced by AT&T in 1962. It provided full-duplex service at 300 bit/s over normal phone lines. Frequency-shift keying was used, with the call originator transmitting at 1,070 or 1,270 Hz and the answering modem transmitting at 2,025 or 2,225 Hz. The 103A2 gave an important boost to the use of remote low-speed terminals such as the Teletype Model 33 ASR and KSR and the IBM 2741.
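As an illustration of the Bell 103-style signaling just described, the sketch below generates an originate-side FSK waveform. The 1,070/1,270 Hz frequencies and the 300 baud rate come from the text; the 8 kHz sample rate and the phase-continuous tone switching are assumptions made for the example, not part of the standard as quoted here.

```python
import math

# Bell 103 originate side: 1,070 Hz for a 0 (space), 1,270 Hz for a 1 (mark).
ORIGINATE = {0: 1070.0, 1: 1270.0}
RATE = 8000   # audio sample rate in Hz (assumed for illustration)
BAUD = 300    # one bit per symbol at 300 baud

def fsk_modulate(bits, freqs=ORIGINATE, rate=RATE, baud=BAUD):
    """Return audio samples encoding `bits`, keeping phase continuous
    across bit boundaries to avoid clicks in the waveform."""
    samples, phase = [], 0.0
    per_bit = int(rate / baud)  # samples per bit period
    for b in bits:
        step = 2.0 * math.pi * freqs[b] / rate
        for _ in range(per_bit):
            samples.append(math.sin(phase))
            phase += step
    return samples

audio = fsk_modulate([1, 0, 1, 1])
print(len(audio))  # 4 bits * int(8000 / 300) samples each
```

The answering modem would use the same scheme with the 2,025/2,225 Hz pair, which is what allows both directions to share one voice channel in full duplex.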
AT&T reduced modem costs by introducing the answer-only 113B/C modems. For many years, the Bell System maintained a monopoly on the use of its phone lines and on what devices could be connected to them. However, in its seminal Carterfone decision of 1968, the FCC concluded that electronic devices could be connected to the telephone system as long as they used an acoustic coupler. Since most handsets were supplied by Western Electric and thus of a standard design, acoustic couplers were easy to build. Acoustically coupled Bell 103A-compatible 300 bit/s modems were common during the 1970s. Well-known models included the Novation CAT and the Anderson-Jacobson, the latter spun off from an in-house project at Stanford Research Institute. A lower-cost option was the Pennywhistle modem, designed to be built using parts from electronics scrap and surplus stores. In December 1972, Vadic introduced the VA3400, notable for full-duplex operation at 1,200 bit/s over the phone network. Like the 103A, it used different frequency bands for transmit and receive.
In November 1976, AT&T introduced the 212A modem to compete with Vadic. It used the lower frequency set for transmission. One could also use the 212A with a 103A modem at 300 bit/s. According to Vadic, the change in frequency assignments made the 212 intentionally incompatible with acoustic coupling, thereby locking out many potential modem manufacturers. In 1977, Vadic responded with the VA3467 triple modem, an answer-only modem sold to computer center operators that supported Vadic's 1,200 bit/s mode, AT&T's 212A mode, and 103A operation. The Hush-a-Phone decision had applied only to mechanical connections, but the Carterfone decision of 1968 led the FCC to introduce a rule setting stringent AT&T-designed tests for electronically coupling a device to the phone lines. This opened the door to direct-connect modems that plugged directly into the phone line rather than via a handset. However, the cost of passing the tests was considerable, and acoustically coupled modems remained common into the early 1980s.
The falling prices of electronics in the late 1970s led to an increasing number of direct-connect models around 1980. In spite of being directly connected, these modems were operated like their earlier acoustic versions – dialing and other phone-control operations were completed by hand, using an attached handset.
Fiber to the x
Fiber to the x (FTTX), or fiber in the loop, is a generic term for any broadband network architecture using optical fiber to provide all or part of the local loop used for last-mile telecommunications. As fiber optic cables are able to carry much more data than copper cables over long distances, copper telephone networks built in the 20th century are being replaced by fiber. FTTX is a generalization for several configurations of fiber deployment, arranged into two groups: FTTP/FTTH/FTTB and FTTC/N. Residential areas served by balanced-pair distribution plant call for a trade-off between cost and capacity: the closer the fiber head is to the user, the higher the cost of construction and the higher the channel capacity. In places not already served by metallic facilities, little cost is saved by not running fiber to the home. Fiber to the x is the key method used to drive next-generation access (NGA), which describes a significant upgrade to the broadband available by making a step change in speed and quality of the service; this is usually thought of as asymmetrical, with a download speed of 24 Mbit/s or more and a fast upload speed.
For the definition of UK superfast next-generation broadband, Ofcom's March 2010 "Review of the wholesale local access market" defined NGA as follows: "Super-fast broadband is taken to mean broadband products that provide a maximum download speed, greater than 24 Mbit/s. This threshold is considered to be the maximum speed that can be supported on current generation networks." A similar network called a hybrid fiber-coaxial (HFC) network is used by cable television operators, but it is not synonymous with "fiber in the loop", although similar advanced services are provided by HFC networks. Fixed wireless and mobile wireless technologies such as Wi-Fi, WiMAX and 3GPP Long Term Evolution are an alternative for providing Internet access. The telecommunications industry differentiates between several distinct FTTX configurations. The terms in most widespread use today are:

FTTP (fiber to the premises): used either as a blanket term for both FTTH and FTTB, or where the fiber network includes both homes and small businesses.

FTTH (fiber to the home): fiber reaches the boundary of the living space, such as a box on the outside wall of a home. Passive optical networks and point-to-point Ethernet are architectures capable of delivering triple-play services over FTTH networks directly from an operator's central office.

FTTB (fiber to the building): fiber reaches the boundary of the building, such as the basement in a multi-dwelling unit, with the final connection to the individual living space made via alternative means, similar to the curb or pole technologies.

FTTD: a fiber connection is installed from the main computer room to a terminal or fiber media converter near the user's desk.

FTTR: a fiber connection is installed from the router to the ISP's fiber network.

FTTO: a fiber connection is installed from the main computer room/core switch to a special mini-switch located at the user's workstation or service point; this mini-switch provides Ethernet services to end-user devices via standard twisted-pair patch cords. The switches are located decentrally all over the building but are managed from one central point.

FTTF (fiber to the front yard): similar to FTTB; each fiber node serves a single subscriber. This allows for multi-gigabit speeds using XG-fast technology, and the fiber node may be reverse-powered by the subscriber's modem.

FTTE / FTTZ: a form of structured cabling used in enterprise local area networks, where fiber links the main computer equipment room to an enclosure close to the desk or workstation. FTTE and FTTZ are not considered part of the FTTX group of technologies, despite the similarity in name.

FTTdp (fiber to the distribution point): similar to FTTC/FTTN, but one step closer again, moving the end of the fiber to within meters of the boundary of the customer's premises, into the last possible junction box, known as the "distribution point"; this allows for near-gigabit speeds.

FTTN / FTTLA (fiber to the node): fiber is terminated in a street cabinet possibly miles away from the customer premises, with the final connection being copper. FTTN is an interim step toward full FTTH and is often used to deliver "advanced" triple-play telecommunications services.

FTTC / FTTK (fiber to the curb/kerb): similar to FTTN, but the street cabinet or pole is closer to the user's premises, typically within 1,000 feet, within range of high-bandwidth copper technologies such as wired Ethernet or IEEE 1901 power-line networking and wireless Wi-Fi technology.
FTTC is sometimes ambiguously called FTTP, leading to confusion with the distinct fiber-to-the-premises system. To promote consistency when comparing FTTH penetration rates between countries, the three FTTH Councils of Europe, North America, and Asia-Pacific agreed upon definitions for FTTH and FTTB in 2006, with updates in 2009, 2011 and 2015. The FTTH Councils do not have formal definitions for FTTC and FTTN. While fiber optic cables can carry data at high speeds over long distances, copper cables used in traditional telephone lines and ADSL cannot. For example, the common form of Gigabit Ethernet runs over economical category 5e, category 6 or augmented category 6 unshielded twisted-pair cable.
Internet service provider
An Internet service provider (ISP) is an organization that provides services for accessing, using, or participating in the Internet. Internet service providers may be organized in various forms, such as commercial, community-owned, non-profit, or otherwise privately owned. Internet services provided by ISPs include Internet access, Internet transit, domain name registration, web hosting, Usenet service, and colocation. The Internet was developed as a network between government research laboratories and participating departments of universities. Other companies and organizations joined by direct connection to the backbone, or by arrangements through other connected companies, sometimes using dial-up tools such as UUCP. By the late 1980s, a process was set in place towards commercial use of the Internet; the remaining restrictions were removed by 1991, shortly after the introduction of the World Wide Web. During the 1980s, online service providers such as CompuServe and America Online began to offer limited capabilities to access the Internet, such as e-mail interchange, but full access to the Internet was not available to the general public.
In 1989, the first Internet service providers, companies offering the public direct access to the Internet for a monthly fee, were established in Australia and the United States. In Brookline, Massachusetts, The World became the first commercial ISP in the US; its first customer was served in November 1989. These companies offered dial-up connections, using the public telephone network to provide last-mile connections to their customers. The barriers to entry for dial-up ISPs were low, and many providers emerged. However, cable television companies and the telephone carriers already had wired connections to their customers and could offer Internet connections at much higher speeds than dial-up, using broadband technology such as cable modems and digital subscriber line. As a result, these companies became the dominant ISPs in their service areas, and what was once a competitive ISP market became a monopoly or duopoly in countries with a commercial telecommunications market, such as the United States. On 23 April 2014, the U.
S. Federal Communications Commission was reported to be considering a new rule that would permit ISPs to offer content providers a faster track to send content, thus reversing their earlier net neutrality position. A possible solution to net neutrality concerns may be municipal broadband, according to Professor Susan Crawford, a legal and technology expert at Harvard Law School. On 15 May 2014, the FCC decided to consider two options regarding Internet services: first, to permit fast and slow broadband lanes, thereby compromising net neutrality; and second, to reclassify broadband as a telecommunications service, thereby preserving net neutrality. On 10 November 2014, President Barack Obama recommended that the FCC reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality. On 16 January 2015, Republicans presented legislation, in the form of a U. S. Congress H. R. discussion draft bill, that made concessions to net neutrality but prohibited the FCC from accomplishing the goal or enacting any further regulation affecting Internet service providers. On 31 January 2015, AP News reported that the FCC would present the notion of applying Title II of the Communications Act of 1934 to the Internet in a vote expected on 26 February 2015.
Adoption of this notion would reclassify Internet service from one of information to one of telecommunications and, according to Tom Wheeler, chairman of the FCC, ensure net neutrality. The FCC was expected to enforce net neutrality in its vote, according to The New York Times. On 26 February 2015, the FCC ruled in favor of net neutrality by applying Title II of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996 to the Internet. The FCC chairman, Tom Wheeler, commented, "This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept." On 12 March 2015, the FCC released the specific details of the net neutrality rules. On 13 April 2015, the FCC published the final rule on its new "Net Neutrality" regulations; these rules went into effect on 12 June 2015. Upon becoming FCC chairman in April 2017, Ajit Pai proposed an end to net neutrality, awaiting votes from the commission. On 21 November 2017, Pai announced that a vote would be held by FCC members on 14 December on whether to repeal the policy.
On 11 June 2018, the repeal of the FCC's network neutrality rules took effect. Access-provider ISPs provide Internet access, employing a range of technologies to connect users to their network. Available technologies have ranged from computer modems with acoustic couplers on telephone lines to television cable, Wi-Fi, and fiber optics. For users and small businesses, traditional options include copper wires providing dial-up, DSL (typically asymmetric digital subscriber line), cable modem or Integrated Services Digital Network. Using fiber optics to end users is called Fiber To The Home or similar names. Customers with more demanding requirements can use higher-speed DSL, metropolitan Ethernet, gigabit Ethernet, Frame Relay, ISDN Primary Rate Interface, ATM, and synchronous optical networking. Wireless access is another option, including satellite Internet access. A mailbox provider is an organization that provides services for hosting electronic mail domains with access to storage for mailboxes.
Concentrated solar power
Concentrated solar power (CSP) systems generate solar power by using mirrors or lenses to concentrate a large area of sunlight onto a small area. Electricity is generated when the concentrated light is converted to heat (solar thermal energy), which drives a heat engine connected to an electrical power generator or powers a thermochemical reaction. CSP had a worldwide total installed capacity of 4,815 MW in 2016, up from 354 MW in 2005. As of 2017, Spain accounted for about half of the world's capacity, at 2,300 MW, making it the world leader in CSP. The United States follows with 1,740 MW. Interest is also notable in North Africa and the Middle East, as well as India and China. The global market has been dominated by parabolic-trough plants, which accounted for 90% of CSP plants at one point. The largest CSP projects in the world are the Ivanpah Solar Power Facility and the Mojave Solar Project, both in the United States. In most cases, CSP technologies cannot compete on price with photovoltaic solar panels, which have experienced huge growth in recent years due to falling prices and much smaller operating costs.
CSP needs a large amount of direct solar radiation, and its energy generation falls with cloud cover. This is in contrast with photovoltaics, which can produce electricity from diffuse radiation as well. However, the advantage of CSP over PV is that, as a thermal technology running a conventional thermal power block, a CSP plant can store the heat of solar energy in molten salts, which enables these plants to continue to generate electricity whenever it is needed, day or night. This makes CSP a dispatchable form of solar power. Dispatchability is valuable in places with a high penetration of PV, such as California, because the evening peak is exacerbated as PV ramps down at sunset. CSP has other uses than electricity. Researchers are investigating solar thermal reactors for the production of solar fuels, which would make solar a transportable form of energy in the future. These researchers use the solar heat of CSP to drive thermochemistry that breaks apart molecules of H2O, creating hydrogen from solar energy with no carbon emissions.
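To make the storage point concrete, here is a rough back-of-the-envelope sketch. The salt heat capacity, tank temperatures, salt mass, and power-block efficiency used below are illustrative assumptions for a generic two-tank molten-salt design, not figures for any particular plant; stored heat is simply mass times specific heat times the temperature swing.

```python
# Illustrative two-tank molten-salt storage estimate (assumed figures).
CP_SALT = 1.5e3               # J/(kg*K), typical nitrate "solar salt"
T_COLD, T_HOT = 290.0, 565.0  # deg C, assumed cold/hot tank temperatures

def stored_energy_mwh_thermal(mass_tonnes):
    """Thermal energy held by `mass_tonnes` of salt across the tank swing."""
    joules = mass_tonnes * 1e3 * CP_SALT * (T_HOT - T_COLD)
    return joules / 3.6e9     # J -> MWh (thermal)

def hours_of_generation(mass_tonnes, block_mw_electric, efficiency=0.40):
    """Hours the steam power block could run from storage alone,
    given an assumed heat-to-electricity conversion efficiency."""
    return stored_energy_mwh_thermal(mass_tonnes) * efficiency / block_mw_electric

# e.g. an assumed 30,000 t of salt feeding an assumed 100 MWe power block
print(round(hours_of_generation(30_000, 100), 1), "hours")
```

Even with these rough numbers, the result lands in the many-hours range, which is why molten-salt CSP can keep generating through the evening peak after PV output has ramped down.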
By splitting both H2O and CO2, other much-used hydrocarbons – for example, the jet fuel used to fly commercial airplanes – could be created with solar energy rather than from fossil fuels. In 2017, CSP represented less than 2% of worldwide installed capacity of solar electricity plants. However, in recent years the falling prices of CSP plants have been making this technology competitive with other base-load power plants using fossil and nuclear fuel, even at humid, dusty sea-level sites such as the United Arab Emirates. The base-load CSP tariff in the dry Atacama region of Chile reached below ¢5.0/kWh in 2017 auctions. Legend has it that Archimedes used a "burning glass" to concentrate sunlight on the invading Roman fleet and repel it from Syracuse. In 1973 a Greek scientist, Dr. Ioannis Sakkas, curious about whether Archimedes could really have destroyed the Roman fleet in 212 BC, lined up nearly 60 Greek sailors, each holding an oblong mirror tipped to catch the sun's rays and direct them at a tar-covered plywood silhouette 49 m away.
The ship caught fire after a few minutes. In 1866, Auguste Mouchout used a parabolic trough to produce steam for the first solar steam engine. The first patent for a solar collector was obtained by the Italian Alessandro Battaglia in Genoa, Italy, in 1886. Over the following years, inventors such as John Ericsson and Frank Shuman developed concentrating solar-powered devices for irrigation, refrigeration, and locomotion. In 1913 Shuman finished a 55 HP parabolic solar thermal energy station in Maadi, Egypt, for irrigation. The first solar-power system using a mirror dish was built by Dr. R. H. Goddard, who was well known for his research on liquid-fueled rockets and who wrote an article in 1929 asserting that all the previous obstacles had been addressed. Professor Giovanni Francia designed and built the first concentrated-solar plant, which entered into operation in Sant'Ilario, near Genoa, Italy, in 1968; this plant had the architecture of today's power-tower plants, with a solar receiver in the center of a field of solar collectors.
The plant was able to produce 1 MW with superheated steam at 100 bar and 500 °C. The 10 MW Solar One power tower was developed in Southern California in 1981. Solar One was converted into Solar Two in 1995, implementing a new design with a molten salt mixture as the receiver working fluid and as a storage medium. The molten salt approach proved effective, and Solar Two operated until it was decommissioned in 1999. The parabolic-trough technology of the nearby Solar Energy Generating Systems (SEGS), begun in 1984, was more workable; the 354 MW SEGS remained the largest solar power plant in the world until 2014. No commercial concentrated solar plant was constructed from 1990, when SEGS was completed, until 2006, when the compact linear Fresnel reflector system at Liddell Power Station in Australia was built. Few other plants were built with this design, although the 5 MW Kimberlina Solar Thermal Energy Plant opened in 2009. In 2007, the 75 MW Nevada Solar One was built, a trough design and the first large plant since SEGS.
Between 2009 and 2013, Spain built numerous parabolic-trough plants, standardized in 50 MW blocks. Due to the success of Solar Two, a commercial power plant called Solar Tres Power Tower was built.
General Services Administration
The General Services Administration (GSA), an independent agency of the United States government, was established in 1949 to help manage and support the basic functioning of federal agencies. GSA supplies products and communications for U. S. government offices, provides transportation and office space to federal employees, and develops government-wide cost-minimizing policies, among other management tasks. GSA employs about 12,000 federal workers and has an annual operating budget of $20.9 billion. GSA oversees $66 billion of procurement annually and contributes to the management of about $500 billion in U. S. federal property, divided chiefly among 8,700 owned and leased buildings and a 215,000-vehicle motor pool. Among the real estate assets managed by GSA are the Ronald Reagan Building and International Trade Center in Washington, D. C. – the largest U. S. federal building after the Pentagon – and the Hart-Dole-Inouye Federal Center. GSA's business lines include the Federal Acquisition Service (FAS) and the Public Buildings Service, as well as several staff offices including the Office of Government-wide Policy, the Office of Small Business Utilization, and the Office of Mission Assurance.
As part of FAS, GSA's Technology Transformation Services helps federal agencies improve delivery of information and services to the public. Key initiatives include FedRAMP, Cloud.gov, the USAGov platform, Data.gov, Performance.gov, and Challenge.gov. GSA is a member of the Procurement G6, an informal group leading the use of framework agreements and e-procurement instruments in public procurement. In 1947, President Harry Truman asked former President Herbert Hoover to lead what became known as the Hoover Commission, charged with making recommendations to reorganize the operations of the federal government. One of the recommendations of the commission was the establishment of an "Office of the General Services." This proposed office would combine the responsibilities of the U. S. Treasury Department's Bureau of Federal Supply and Office of Contract Settlement, the National Archives Establishment, all functions of the Federal Works Agency (including the Public Buildings Administration and the Public Roads Administration), and the War Assets Administration. GSA became an independent agency on July 1, 1949, after the passage of the Federal Property and Administrative Services Act.
General Jess Larson, Administrator of the War Assets Administration, was named GSA's first Administrator. The first job awaiting Administrator Larson and the newly formed GSA was a complete renovation of the White House. The structure had fallen into such a state of disrepair by 1949 that one inspector of the time said the historic structure was standing "purely from habit." Larson explained the nature of the total renovation in depth by saying, "In order to make the White House structurally sound, it was necessary to dismantle, I mean dismantle, everything from the White House except the four walls, which were constructed of stone. Everything, except the four walls without a roof, was stripped down, that's where the work started." GSA worked with President Truman and First Lady Bess Truman to ensure that the new agency's first major project would be a success. GSA completed the renovation in 1952. In 1986 the GSA headquarters, the U. S. General Services Administration Building at Eighteenth and F Streets, NW, was listed on the National Register of Historic Places, at the time serving as Interior Department offices.
In 1960 GSA created the Federal Telecommunications System, a government-wide intercity telephone system. In 1962 the Ad Hoc Committee on Federal Office Space created a new building program to address obsolete office buildings in Washington, D. C. resulting in the construction of many of the offices that now line Independence Avenue. In 1970 the Nixon administration created the Consumer Product Information Coordinating Center, now part of USAGov. In 1972 GSA established the Automated Data and Telecommunications Service, which later became the Office of Information Resources Management. In 1973 GSA created the Office of Federal Management Policy. In 1974 the Federal Buildings Fund was initiated, allowing GSA to issue rent bills to federal agencies. GSA's Office of Acquisition Policy centralized procurement policy in 1978. GSA was responsible for emergency preparedness and stockpiling strategic materials to be used in wartime until these functions were transferred to the newly created Federal Emergency Management Agency in 1979.
In 1984 GSA introduced the federal government to the use of charge cards, known as the GSA SmartPay system. The National Archives and Records Administration was spun off into an independent agency in 1985. The same year, GSA began to provide government-wide policy oversight and guidance for federal real property management as a result of an Executive Order signed by President Ronald Reagan. In 2003 the Federal Protective Service was moved to the Department of Homeland Security. In 2005 GSA reorganized to merge the Federal Supply Service and Federal Technology Service business lines into the Federal Acquisition Service. On April 3, 2009, President Barack Obama nominated Martha N. Johnson to serve as GSA Administrator. After a nine-month delay, the United States Senate confirmed her nomination on February 4, 2010. On April 2, 2012, Johnson resigned in the wake of a management-deficiency report that detailed improper payments for a 2010 "Western Regions" training conference put on by the Public Buildings Service in Las Vegas.
In July 1991 GSA contractors began the excavation of what is now the Ted Weiss Federal Building in New York City. The planning for that buildin
A patch panel, patch bay, patch field or jack field is a device or unit featuring a number of jacks of the same or similar type, used for connecting, routing, monitoring and testing circuits in a convenient, flexible manner. Patch panels are used in computer networking, recording studios and television; the term "patch" came from early use in telephony and radio studios, where extra equipment kept on standby could be temporarily substituted for failed devices. This reconnection was done via patch cords and patch panels, like the jack fields of cord-type telephone switchboards. In recording studios, radio broadcast studios and concert sound reinforcement systems, patchbays are used to facilitate the connection of different devices, such as microphones, electric or electronic instruments, recording gear, amplifiers, or broadcasting equipment. Patchbays make it easier to connect different devices in different orders for different projects, because all of the changes can be made at the patchbay.
Additionally, patchbays make it easier to troubleshoot problems such as ground loops, and they mean that devices mounted in racks or keyboard instruments can be connected without having to hunt around behind the rack or instrument with a flashlight for the right jack. Using a patchbay also saves wear and tear on the input jacks of studio gear and instruments, because all of the connections are made at the patchbay. Patch panels are increasingly used in domestic installations, owing to the popularity of "structured wiring" installs, and they are found more and more in home cinema installations. It is conventional to have the top row of jacks wired at the rear to outputs and the bottom row wired to inputs. Patch bays may be half-normal or full-normal, "normal" indicating that the top and bottom jacks are connected internally: when a patch bay has bottom half-normal wiring and no patch cord is inserted into either jack, the top jack is internally linked to the bottom jack via break contacts on the bottom jack.
With top half-normal wiring, the same applies in reverse, with the break contacts on the top jack. A patch bay wired full-normal includes break contacts in both rows of jacks. Dedicated switching equipment can be an alternative to patch bays in some applications. Switches can make routing as easy as pushing a button and can provide other benefits over patch bays, including routing a signal to any number of destinations simultaneously. However, switching equipment that can emulate the capabilities of a given patch bay is much more expensive. For example, an S-Video matrix routing switcher with the same capability as a 16-point S-Video patch panel may cost ten times more, though it would have more capabilities. Like patch panels, switching equipment is available for nearly any type of signal, including analog and digital video and audio, as well as RF, MIDI, telephone and electrical signals. There are various types of switches for audio and video, from simple selector switches to sophisticated production switchers. However, emulating or exceeding the capabilities of audio or video patch panels requires specialized devices such as routing switchers and crossbar switches.
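The normalling rules described above reduce to a simple truth table: whichever row of jacks carries the break contacts is the row whose plug insertion interrupts the internal top-to-bottom link. A minimal sketch of that logic, with illustrative wiring labels and a hypothetical function name (not a standard API), might look like:

```python
def normal_engaged(wiring, top_plugged, bottom_plugged):
    """Return True if the internal top->bottom link of a patch point is intact.

    wiring labels (illustrative, not standardized):
      "bottom-half-normal" - break contacts on the bottom jack only
      "top-half-normal"    - break contacts on the top jack only
      "full-normal"        - break contacts on both jacks
    """
    if wiring == "bottom-half-normal":
        # Plugging into the top jack only taps the signal; the normal holds.
        return not bottom_plugged
    if wiring == "top-half-normal":
        # The reverse: only a plug in the top jack breaks the normal.
        return not top_plugged
    if wiring == "full-normal":
        # A cord in either jack interrupts the internal link.
        return not (top_plugged or bottom_plugged)
    raise ValueError(f"unknown wiring scheme: {wiring}")

# With no cords inserted, every scheme passes the signal straight through.
assert normal_engaged("full-normal", top_plugged=False, bottom_plugged=False)
# Half-normal wiring lets you tap the non-break row without interrupting the normal...
assert normal_engaged("bottom-half-normal", top_plugged=True, bottom_plugged=False)
# ...while full-normal wiring breaks the link on any insertion.
assert not normal_engaged("full-normal", top_plugged=True, bottom_plugged=False)
```

The half-normal case is why engineers favor it for recording: a cord in the output (top) row splits the signal to a second destination without silencing the normalled path.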
Switching equipment may be electronic, mechanical, or electro-mechanical. Some switcher hardware can be controlled via computer or other external devices; some have pre-programmed operational capabilities. There are also software switcher applications used to route signals and control data within a "pure digital" computer environment. Related topics include cable management, the distribution frame (cheaper but less convenient), and the wiring closet. This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C".