General Services Administration
The General Services Administration, an independent agency of the United States government, was established in 1949 to help manage and support the basic functioning of federal agencies. GSA supplies products and communications for U.S. government offices, provides transportation and office space to federal employees, develops government-wide cost-minimizing policies, and performs other management tasks. GSA employs about 12,000 federal workers and has an annual operating budget of $20.9 billion. GSA oversees $66 billion of procurement annually and contributes to the management of about $500 billion in U.S. federal property, divided chiefly among 8,700 owned and leased buildings and a 215,000-vehicle motor pool. Among the real estate assets managed by GSA are the Ronald Reagan Building and International Trade Center in Washington, D.C. – the largest U.S. federal building after the Pentagon – and the Hart-Dole-Inouye Federal Center. GSA's business lines include the Federal Acquisition Service and the Public Buildings Service, as well as several Staff Offices, including the Office of Government-wide Policy, the Office of Small Business Utilization, and the Office of Mission Assurance.
As part of FAS, GSA's Technology Transformation Services helps federal agencies improve delivery of information and services to the public. Key initiatives include FedRAMP, Cloud.gov, the USAGov platform, Data.gov, Performance.gov, and Challenge.gov. GSA is a member of the Procurement G6, an informal group leading the use of framework agreements and e-procurement instruments in public procurement. In 1947 President Harry Truman asked former President Herbert Hoover to lead what became known as the Hoover Commission, charged with making recommendations to reorganize the operations of the federal government. One of the recommendations of the commission was the establishment of an "Office of the General Services," which would combine the responsibilities of the U.S. Treasury Department's Bureau of Federal Supply, the U.S. Treasury Department's Office of Contract Settlement, the National Archives Establishment, all functions of the Federal Works Agency (including the Public Buildings Administration and the Public Roads Administration), and the War Assets Administration. GSA became an independent agency on July 1, 1949, after the passage of the Federal Property and Administrative Services Act.
General Jess Larson, Administrator of the War Assets Administration, was named GSA's first Administrator. The first job awaiting Administrator Larson and the newly formed GSA was a complete renovation of the White House; the structure had fallen into such a state of disrepair by 1949 that one inspector of the time said the historic structure was standing "purely from habit." Larson explained the nature of the total renovation in depth by saying, "In order to make the White House structurally sound, it was necessary to dismantle, I mean dismantle, everything from the White House except the four walls, which were constructed of stone. Everything, except the four walls without a roof, was stripped down, that's where the work started." GSA worked with President Truman and First Lady Bess Truman to ensure that the new agency's first major project would be a success, and completed the renovation in 1952. In 1986 the GSA headquarters, the U.S. General Services Administration Building at Eighteenth and F Streets NW, was listed on the National Register of Historic Places; at the time it was serving as Interior Department offices.
In 1960 GSA created the Federal Telecommunications System, a government-wide intercity telephone system. In 1962 the Ad Hoc Committee on Federal Office Space created a new building program to address obsolete office buildings in Washington, D.C., resulting in the construction of many of the offices that now line Independence Avenue. In 1970 the Nixon administration created the Consumer Product Information Coordinating Center, now part of USAGov. In 1972 GSA established the Automated Data and Telecommunications Service, which became the Office of Information Resources Management. In 1973 GSA created the Office of Federal Management Policy, and in 1974 the Federal Buildings Fund was initiated, allowing GSA to issue rent bills to federal agencies. GSA's Office of Acquisition Policy centralized procurement policy in 1978. GSA was responsible for emergency preparedness and stockpiling strategic materials to be used in wartime until these functions were transferred to the newly created Federal Emergency Management Agency in 1979.
In 1984 GSA introduced the federal government to the use of charge cards, known as the GSA SmartPay system. The National Archives and Records Administration was spun off into an independent agency in 1985; the same year, GSA began to provide government-wide policy oversight and guidance for federal real property management as a result of an executive order signed by President Ronald Reagan. In 2003 the Federal Protective Service was moved to the Department of Homeland Security. In 2005 GSA reorganized, merging the Federal Supply Service and Federal Technology Service business lines into the Federal Acquisition Service. On April 3, 2009, President Barack Obama nominated Martha N. Johnson to serve as GSA Administrator. After a nine-month delay, the United States Senate confirmed her nomination on February 4, 2010. On April 2, 2012, Johnson resigned in the wake of a management-deficiency report that detailed improper payments for a 2010 "Western Regions" training conference put on by the Public Buildings Service in Las Vegas.
In July 1991 GSA contractors began the excavation of what is now the Ted Weiss Federal Building in New York City.
An electrical load is an electrical component or portion of a circuit that consumes electric power, as opposed to a power source, such as a generator, which produces power. In electric power circuits, examples of loads are appliances and lights; the term may also refer to the power consumed by a circuit. The term is used more broadly in electronics for a device connected to a signal source, whether or not it consumes power. If an electric circuit has an output port, a pair of terminals that produces an electrical signal, the circuit connected to this port is the load. For example, if a CD player is connected to an amplifier, the CD player is the source and the amplifier is the load. Load affects the performance of circuits with respect to output voltages or currents, such as in sensors, voltage sources, and amplifiers. Mains power outlets provide an easy example: they supply power at constant voltage, with electrical appliances connected to the power circuit collectively making up the load; when a high-power appliance switches on, it dramatically reduces the load impedance.
If the load impedance is not much higher than the power supply impedance, the voltages will drop. In a domestic environment, switching on a heating appliance may cause incandescent lights to dim noticeably. When discussing the effect of load on a circuit, it is helpful to disregard the circuit's actual design and consider only the Thévenin equivalent: a voltage source V_S in series with a source resistance R_S. With no load, all of V_S falls across the output. We would like to ignore the details of the load circuit, as we did for the power supply, and represent it as simply as possible. If we use an input resistance R_L to represent the load, then whereas the voltage source by itself was an open circuit, adding the load makes a closed circuit and allows charge to flow. This current places a voltage drop across R_S, so the voltage at the output terminal is no longer V_S. The output voltage can be determined by the voltage division rule: V_OUT = V_S · R_L / (R_L + R_S). If the source resistance is not negligibly small compared to the load impedance, the output voltage will fall.
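The voltage-division rule above can be sketched numerically. The values used here (a 12 V source with 1 Ω internal resistance, loads of 1 kΩ and 2 Ω) are illustrative assumptions, not from the text:

```python
def output_voltage(v_s, r_s, r_l):
    """Loaded Thevenin equivalent: V_OUT = V_S * R_L / (R_L + R_S)."""
    return v_s * r_l / (r_l + r_s)

V_S, R_S = 12.0, 1.0  # illustrative source voltage and source resistance

light_load = output_voltage(V_S, R_S, 1000.0)  # R_L >> R_S: output close to V_S
heavy_load = output_voltage(V_S, R_S, 2.0)     # R_L comparable to R_S: output sags
print(light_load, heavy_load)
```

With the 1 kΩ load the output stays within about 0.1% of V_S, while the 2 Ω load pulls it down to two-thirds of V_S, the same sag that makes incandescent lights dim when a heater switches on.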
This illustration uses simple resistances, but a similar discussion applies to alternating current circuits using resistive and inductive elements.
Communication is the act of conveying meanings from one entity or group to another through the use of mutually understood signs and semiotic rules. The main steps inherent to all communication are: the formation of communicative motivation or reason; message composition; message encoding; transmission of the encoded message as a sequence of signals using a specific channel or medium; the influence of noise sources, such as natural forces and in some cases human activity, on the quality of signals propagating from the sender to one or more receivers; reception of signals and reassembling of the encoded message from a sequence of received signals; decoding of the reassembled encoded message; and interpretation and making sense of the presumed original message. The scientific study of communication includes information theory, which studies the quantification and communication of information in general. The channel of communication can be visual, auditory, haptic, electromagnetic, or biochemical.
Human communication is unique for its extensive use of abstract language, and the development of civilization has been linked with progress in telecommunication. Nonverbal communication describes the processes of conveying a type of information in the form of non-linguistic representations. Examples of nonverbal communication include haptic communication, chronemic communication, body language, facial expressions, eye contact, and how one dresses. Nonverbal communication relates to the intent of a message. Examples of intent are voluntary, intentional movements like shaking a hand or winking, as well as involuntary signals such as sweating. Speech contains nonverbal elements known as paralanguage, e.g. rhythm, intonation and stress, which help establish trust. Written texts include nonverbal elements such as handwriting style, the spatial arrangement of words and the use of emoticons to convey emotion. Nonverbal communication demonstrates one of Paul Watzlawick's laws: you cannot not communicate. Once proximity has formed awareness, living creatures begin interpreting any signals received.
Some of the functions of nonverbal communication in humans are to complement and illustrate, to reinforce and emphasize, to replace and substitute, to control and regulate, and to contradict the denotative message. Nonverbal cues are relied on to express communication and to interpret others' communication, and can replace or substitute verbal messages. However, non-verbal communication is ambiguous; when verbal messages contradict non-verbal messages, observation of non-verbal behaviour is relied on to judge another's attitudes and feelings, rather than assuming the truth of the verbal message alone. There are several reasons why non-verbal communication plays a vital role in communication. "Non-verbal communication is omnipresent": non-verbal behaviours are included in every single communication act. To have total communication, all non-verbal channels such as the body, voice, touch, distance and other environmental forces must be engaged during face-to-face interaction. Written communication can also have non-verbal attributes.
E-mails and web chats allow individuals the option to change text font colours, stationery and capitalization in order to capture non-verbal cues in a verbal medium. "Non-verbal behaviours are multifunctional": many different non-verbal channels are engaged at the same time in communication acts, allowing simultaneous messages to be sent and received. "Non-verbal behaviours may form a universal language system": smiling, pointing and glaring are non-verbal behaviours that are used and understood by people regardless of nationality. Such non-verbal signals allow the most basic form of communication when verbal communication is not effective due to language barriers. Verbal communication is the spoken or written conveyance of a message. Human language can be defined as a system of symbols and the grammars by which the symbols are manipulated; the word "language" also refers to common properties of languages. Language learning occurs most intensively during human childhood. Most of the thousands of human languages use patterns of sound or gesture for symbols which enable communication with others around them.
Languages tend to share certain properties, although there is no defined line between a language and a dialect. Constructed languages such as Esperanto, programming languages, and various mathematical formalisms are not necessarily restricted to the properties shared by human languages. As mentioned, language can be characterized as symbolic. Charles Ogden and I. A. Richards developed the Triangle of Meaning model to explain the relationship between the symbol, the referent, and the meaning. The properties of language are governed by rules: language follows phonological rules, syntactic rules, semantic rules, and pragmatic rules. The meanings attached to words are otherwise known as denotative meanings.
A tape transport is the collection of parts of a magnetic tape player or recorder through which the actual tape passes. Transport parts include the head, pinch roller, tape pins and tape guides; the tape transport as a whole is called the transport mechanism. The capstan is a rotating spindle used to move recording tape through the mechanism of a tape recorder. The tape is threaded between the capstan and one or more rubber-covered wheels, called pinch rollers, which press against the capstan, providing the friction necessary for the capstan to pull the tape. The capstan is always placed downstream from the tape heads. To maintain the required tension against the tape heads and other parts of the tape transport, a small amount of drag is placed on the supply reel. Tape recorder capstans have a function similar to nautical capstans, which however have no pinch rollers, the line instead being wound around them. The use of a capstan allows the tape to run at a constant speed. Capstans are precision-machined spindles, polished smooth: any out-of-roundness or imperfections can cause uneven motion and an audible effect called flutter.
The alternative to capstan drive, driving the tape take-up reel directly, causes problems both with the speed difference between a full and an empty reel and with the speed variations described above. Dual capstans, with one on each side of the heads, are claimed to provide smoother tape travel across the heads and result in less variance in the recorded/playback signal. The pinch roller is a rubberized, free-spinning wheel used to press magnetic tape against a capstan shaft in order to create the friction necessary to drive the tape along the magnetic heads. Most magnetic tape recorders use one capstan motor and one pinch roller located after the magnetic heads in the direction of the moving tape; however, multiple pinch rollers may be employed in association with one or more capstans. An example of the application of multiple pinch rollers is the Technics RS-1520 tape recorder, which utilizes two pinch rollers located on opposite sides of a single capstan shaft, providing a more stable transport across two sets of magnetic heads.
Dual pinch rollers are used in auto-reverse cassette decks to drive the tape in both directions as needed. In this case, only one pinch roller is pressed against its corresponding capstan at a time. A tension arm is a device used in magnetic tape recorders/reproducers to control the tension of the magnetic tape during machine operation; recorders equipped with a tension arm can use more than one of them to control tape tension in different directions of winding or during different modes of tape operation. Tension arms can be found on digital data recorders and other types of recorders/reproducers using continuous tape media such as magnetic digital tape, perforated paper tape, and analog magnetic tape.
In telecommunications, transmission is the process of sending and propagating an analogue or digital information signal over a physical point-to-point or point-to-multipoint transmission medium, whether wired, optical fiber or wireless. One example of transmission is the sending of a signal with limited duration, for example a block or packet of data, a phone call, or an email. Transmission technologies and schemes refer to physical layer protocol duties such as modulation, line coding, error control, bit synchronization and multiplexing, but the term may also involve higher-layer protocol duties, for example digitizing an analog message signal and data compression. Transmission of a digital message, or of a digitized analog signal, is known as digital communication.
An atmosphere is a layer or a set of layers of gases surrounding a planet or other material body, held in place by the gravity of that body. An atmosphere is more likely to be retained if the gravity it is subject to is high and the temperature of the atmosphere is low. The atmosphere of Earth is composed of nitrogen, oxygen, argon, carbon dioxide and other gases in trace amounts. Oxygen is used by most organisms for respiration. The atmosphere helps to protect living organisms from genetic damage by solar ultraviolet radiation, the solar wind and cosmic rays. The current composition of the Earth's atmosphere is the product of billions of years of biochemical modification of the paleoatmosphere by living organisms. The term stellar atmosphere describes the outer region of a star and includes the portion above the opaque photosphere. Stars with sufficiently low temperatures may have outer atmospheres containing compound molecules. Atmospheric pressure at a particular location is the force per unit area perpendicular to a surface, determined by the weight of the vertical column of atmosphere above that location.
On Earth, units of air pressure are based on the internationally recognized standard atmosphere, defined as 101.325 kPa. It is measured with a barometer. Atmospheric pressure decreases with increasing altitude due to the diminishing mass of gas above; the height at which the pressure from an atmosphere declines by a factor of e is called the scale height and is denoted by H. For an atmosphere with a uniform temperature, the scale height is proportional to the temperature and inversely proportional to the product of the mean molecular mass of dry air and the local acceleration of gravity at that location. For such a model atmosphere, the pressure declines exponentially with increasing altitude. However, atmospheres are not uniform in temperature, so estimation of the atmospheric pressure at any particular altitude is more complex. Surface gravity differs among the planets. For example, the large gravitational force of the giant planet Jupiter retains light gases such as hydrogen and helium that escape from objects with lower gravity.
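The relationship just described can be sketched numerically: for a uniform-temperature model, the scale height is H = kT/(mg), and pressure falls exponentially with altitude. The Earth-like constants below are rough illustrative values, not figures from the text:

```python
import math

k = 1.380649e-23        # Boltzmann constant, J/K
T = 288.0               # assumed uniform temperature, K
m = 28.97 * 1.6605e-27  # mean molecular mass of dry air, kg
g = 9.81                # local acceleration of gravity, m/s^2

H = k * T / (m * g)     # scale height: pressure falls by a factor of e per H

def pressure_kpa(z_m, p0=101.325):
    """Pressure (kPa) at altitude z_m metres in an isothermal atmosphere."""
    return p0 * math.exp(-z_m / H)

print(round(H))              # roughly 8.4 km for these values
print(round(pressure_kpa(H)))  # one scale height up: p0 / e
```

Doubling the assumed temperature doubles H, while a heavier mean molecular mass or stronger gravity compresses the atmosphere, exactly the proportionalities stated above.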
Secondly, the distance from the Sun determines the energy available to heat atmospheric gas to the point where some fraction of its molecules' thermal motion exceeds the planet's escape velocity, allowing those molecules to escape the planet's gravitational grasp. Thus the distant and cold Titan and Pluto are able to retain their atmospheres despite their low gravities. Since a collection of gas molecules may be moving at a wide range of velocities, there will always be some fast enough to produce a slow leakage of gas into space. Lighter molecules move faster than heavier ones with the same thermal kinetic energy, so gases of low molecular weight are lost more rapidly than those of high molecular weight. It is thought that Venus and Mars may have lost much of their water when, after being photodissociated into hydrogen and oxygen by solar ultraviolet radiation, the hydrogen escaped. Earth's magnetic field helps to prevent this, as the solar wind would otherwise enhance the escape of hydrogen. However, over the past 3 billion years Earth may have lost gases through the magnetic polar regions due to auroral activity, including a net 2% of its atmospheric oxygen.
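The molecular-weight effect can be illustrated with a back-of-the-envelope comparison of the rms thermal speed, sqrt(3kT/m), for hydrogen versus nitrogen against Earth's escape velocity. The exospheric temperature and the one-sixth rule of thumb used here are assumptions for illustration only:

```python
import math

k = 1.380649e-23    # Boltzmann constant, J/K
T = 1000.0          # assumed exospheric temperature, K
V_ESCAPE = 11186.0  # Earth's escape velocity, m/s
AMU = 1.6605e-27    # atomic mass unit, kg

def v_rms(mass_amu):
    """Root-mean-square thermal speed of a molecule of the given mass."""
    return math.sqrt(3 * k * T / (mass_amu * AMU))

# Common rule of thumb: escape becomes significant once the thermal
# speed exceeds roughly one sixth of the escape velocity.
for name, mass in [("H2", 2.016), ("N2", 28.014)]:
    v = v_rms(mass)
    print(name, round(v), "m/s", "leaks to space" if v > V_ESCAPE / 6 else "retained")
```

Under these assumptions molecular hydrogen moves at several kilometres per second and leaks away, while nitrogen, fourteen times heavier, stays comfortably bound, matching the text's point that light gases are lost first.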
The net effect, taking the most important escape processes into account, is that an intrinsic magnetic field does not protect a planet from atmospheric escape, and that for some magnetizations the presence of a magnetic field works to increase the escape rate. Other mechanisms that can cause atmosphere depletion are solar wind-induced sputtering, impact erosion and sequestration—sometimes referred to as "freezing out"—into the regolith and polar caps. Atmospheres have dramatic effects on the surfaces of rocky bodies. Objects that have no atmosphere, or that have only an exosphere, have terrain that is covered in craters: without an atmosphere, the planet has no protection from meteoroids, and all of them collide with the surface as meteorites and create craters. On bodies with atmospheres, most meteoroids burn up as meteors before hitting the surface, and when meteoroids do impact, the effects are often erased by the action of wind. As a result, craters are rare on objects with atmospheres. Wind erosion is a significant factor in shaping the terrain of rocky planets with atmospheres, and over time can erase the effects of both craters and volcanoes.
In addition, since liquids cannot exist without pressure, an atmosphere allows liquid to be present at the surface, resulting in lakes and oceans. Earth and Titan are known to have liquids at their surfaces, and terrain on Mars suggests that it had liquid on its surface in the past. A planet's initial atmospheric composition is related to the chemistry and temperature of the local solar nebula during planetary formation and the subsequent escape of interior gases. The original atmospheres started with a rotating disc of gases that collapsed to form a series of spaced rings that condensed to form the planets. The planets' atmospheres were then modified over time by various complex factors, resulting in quite different outcomes. The atmospheres of the planets Venus and Mars are principally composed of carbon dioxide, with small quantities of nitrogen, argon and traces of other gases. The composition of Earth's atmosphere is governed by the by-products of the life that it sustains.
Positive feedback is a process that occurs in a feedback loop in which the effects of a small disturbance on a system include an increase in the magnitude of the perturbation. That is, A produces more of B which in turn produces more of A. In contrast, a system in which the results of a change act to reduce or counteract it has negative feedback. Both concepts play an important role in science and engineering, including biology and cybernetics. Mathematically, positive feedback is defined as a positive loop gain around a closed loop of cause and effect; that is, positive feedback is in phase with the input, in the sense that it adds to make the input larger. Positive feedback tends to cause system instability; when the loop gain is positive and above 1, there will be exponential growth, increasing oscillations, chaotic behavior or other divergences from equilibrium. System parameters will accelerate towards extreme values, which may damage or destroy the system, or may end with the system latched into a new stable state.
Positive feedback may be controlled by signals in the system being filtered, damped, or limited, or it can be cancelled or reduced by adding negative feedback. Positive feedback is used in digital electronics to force voltages away from intermediate values into '0' and '1' states. On the other hand, thermal runaway is a type of positive feedback that can destroy semiconductor junctions. Positive feedback in chemical reactions can increase the rate of reactions, and in some cases can lead to explosions. Positive feedback in mechanical design causes tipping-point, or 'over-centre', mechanisms to snap into position, for example in switches and locking pliers. Out of control, it can cause bridges to collapse. Positive feedback in economic systems can cause boom-then-bust cycles. A familiar example of positive feedback is the loud squealing or howling sound produced by audio feedback in public address systems: the microphone picks up sound from its own loudspeakers, amplifies it, and sends it through the speakers again.
Positive feedback enhances or amplifies an effect by its having an influence on the process which gave rise to it. For example, when part of an electronic output signal returns to the input and is in phase with it, the system gain is increased. The feedback from the outcome to the originating process can be direct, or it can be via other state variables. Such systems can give rich qualitative behaviors, but whether the feedback is instantaneously positive or negative in sign has an important influence on the results. Positive feedback reinforces and negative feedback moderates the original process. Positive and negative in this sense refer to loop gains greater than or less than zero, and do not imply any value judgements as to the desirability of the outcomes or effects. A key feature of positive feedback is thus that, when a change occurs in a system, positive feedback causes further change in the same direction. A simple feedback loop is shown in the diagram. If the loop gain AB is positive, a condition of positive or regenerative feedback exists.
If the functions A and B are linear and AB is smaller than unity, the overall system gain from input to output is finite, but can become very large as AB approaches unity. In that case, it can be shown that the overall or "closed loop" gain from input to output is: G_c = A / (1 − AB). When AB > 1, the system is unstable, so it does not have a well-defined gain. Thus, depending on the feedback, state changes can be divergent; the result of positive feedback is to augment changes, so that small perturbations may result in big changes. A system in equilibrium in which there is positive feedback to any change from its current state may be unstable, in which case the system is said to be in an unstable equilibrium; the magnitude of the forces that act to move such a system away from its equilibrium is an increasing function of the "distance" of the state from the equilibrium. Positive feedback does not necessarily imply instability of an equilibrium; for example, stable on and off states may exist in positive-feedback architectures.
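A quick numerical check of the closed-loop gain formula, using illustrative values of the forward gain A and feedback fraction B:

```python
def closed_loop_gain(a, b):
    """G_c = A / (1 - A*B); only defined while the loop gain AB < 1."""
    loop_gain = a * b
    if loop_gain >= 1:
        raise ValueError("AB >= 1: no well-defined gain (unstable)")
    return a / (1 - loop_gain)

print(closed_loop_gain(10.0, 0.05))  # AB = 0.5: gain doubles from 10 to 20
print(closed_loop_gain(10.0, 0.09))  # AB = 0.9: gain grows tenfold
```

As AB approaches unity from below the gain grows without bound, and at AB >= 1 the function refuses to return a value, matching the text's statement that the system then has no well-defined gain.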
In the real world, positive feedback loops do not cause ever-increasing growth, but are modified by limiting effects of some sort. According to Donella Meadows: "Positive feedback loops are sources of growth, explosion and collapse in systems. A system with an unchecked positive loop will destroy itself. That's why there are so few of them. A negative loop will kick in sooner or later." Hysteresis, in which the starting point affects where the system ends up, can be generated by positive feedback. When the gain of the feedback loop is above 1, the output moves away from the input: if it is above the input it moves towards the nearest positive limit, while if it is below the input it moves towards the nearest negative limit. Once it reaches the limit, it will be stable. However, if the input goes past the limit, the feedback will change sign and the output will move in the opposite direction until it hits the opposite limit; the system therefore shows bistable behaviour. The terms positive and negative were first applied to feedback before World War II.
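The bistable behaviour described above can be sketched as a comparator with positive feedback (a Schmitt trigger); the thresholds and output limits below are arbitrary illustrative values:

```python
def schmitt_trigger(inputs, high=1.0, low=-1.0, upper=0.5, lower=-0.5):
    """Latch at one output limit until the input crosses the opposite threshold."""
    state = low
    out = []
    for x in inputs:
        if x > upper:
            state = high      # input passed the upper threshold: latch high
        elif x < lower:
            state = low       # input passed the lower threshold: latch low
        out.append(state)     # otherwise hold the previous state (hysteresis)
    return out

# Sweep the input up and back down: the up-going and down-going switch
# points differ, so where the output sits depends on where it has been.
print(schmitt_trigger([0.0, 0.4, 0.6, 0.4, 0.0, -0.4, -0.6]))
```

Note that for the mid-range inputs (0.4, 0.0, -0.4) the output stays wherever it last latched, which is exactly the starting-point dependence that defines hysteresis.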
The idea of positive feedback was current in the 1920s with the introduction of the regenerative circuit. Friis and Jensen described regeneration in a set of electronic amplifiers as a case where the "feed-back" action is positive, in contrast to negative feed-back action, which they mention only in passing. Harold Stephen Black's classic 1934 paper first details the use of negative feedback in electronic amplifiers.