A metal is a material that, when freshly prepared, polished, or fractured, shows a lustrous appearance and conducts electricity and heat well. Metals are typically malleable or ductile. A metal may be a chemical element such as iron, or an alloy such as stainless steel. In physics, a metal is regarded as any substance capable of conducting electricity at a temperature of absolute zero. Many elements and compounds that are not normally classified as metals become metallic under high pressures: the nonmetal iodine, for example, becomes a metal at a pressure of between 40 and 170 thousand times atmospheric pressure. Conversely, some materials regarded as metals can become nonmetals; sodium becomes a nonmetal at a pressure of just under two million times atmospheric pressure. In chemistry, two elements that would otherwise qualify as brittle metals, arsenic and antimony, are instead recognised as metalloids on account of their predominantly non-metallic chemistry. Around 95 of the 118 elements in the periodic table are metals; the number is inexact because the boundaries between metals and metalloids fluctuate, owing to a lack of universally accepted definitions of the categories involved.
In astrophysics, the term "metal" is used more broadly, referring to all chemical elements in a star that are heavier than hydrogen and helium, not just traditional metals. A star fuses lighter atoms, mostly hydrogen and helium, into heavier atoms over its lifetime. Used in that sense, the metallicity of an astronomical object is the proportion of its matter made up of the heavier chemical elements. Metals are present in many aspects of modern life: the strength and resilience of some metals have led to their frequent use in, for example, high-rise building and bridge construction, as well as most vehicles, many home appliances, tools, and railroad tracks. Precious metals were historically used as coinage, but in the modern era, coinage metals have extended to at least 23 of the chemical elements. The history of metals is thought to begin with the use of copper about 11,000 years ago. Gold, iron and brass were in use before the first known appearance of bronze in the 5th millennium BCE. Subsequent developments include the production of early forms of steel.
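Used in that sense, metallicity reduces to a simple mass-fraction calculation. The sketch below assumes the standard astrophysical notation X, Y and Z for the hydrogen, helium and heavy-element mass fractions; the function name and the roughly solar values are illustrative.

```python
# Metallicity in the astrophysical sense: the mass fraction Z of all
# elements heavier than hydrogen (X) and helium (Y). The solar values
# used below are approximate, illustrative figures.
def metallicity(x_hydrogen: float, y_helium: float) -> float:
    """Return Z = 1 - X - Y, the heavy-element mass fraction."""
    z = 1.0 - x_hydrogen - y_helium
    if z < 0:
        raise ValueError("mass fractions X + Y cannot exceed 1")
    return z

# Roughly solar composition: X ~ 0.73, Y ~ 0.25, giving Z ~ 0.02,
# i.e. about 2% of the Sun's mass is "metals" in this sense.
z_sun = metallicity(0.73, 0.25)
print(f"Z = {z_sun:.2f}")
```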
Metals are lustrous, at least when freshly prepared, polished, or fractured. Sheets of metal thicker than a few micrometres appear opaque. The solid or liquid state of metals originates in the capacity of the metal atoms involved to lose their outer shell electrons. Broadly, the forces holding an individual atom's outer shell electrons in place are weaker than the attractive forces on the same electrons arising from interactions between the atoms in the solid or liquid metal. The electrons involved become delocalised, and the atomic structure of a metal can be visualised as a collection of atoms embedded in a cloud of mobile electrons. This type of interaction is called a metallic bond. The strength of metallic bonds for different elemental metals reaches a maximum around the center of the transition metal series, as these elements have large numbers of delocalized electrons. Although most elemental metals have higher densities than most nonmetals, there is wide variation in their densities, lithium being the least dense and osmium the most dense.
Magnesium, aluminium and titanium are light metals of significant commercial importance. Their respective densities of 1.7, 2.7 and 4.5 g/cm3 can be compared to those of the older structural metals, like iron at 7.9 and copper at 8.9 g/cm3. An iron ball would thus weigh about as much as three aluminium balls of equal volume. Metals are typically malleable and ductile, deforming under stress without cleaving; the nondirectional nature of metallic bonding is thought to contribute to the ductility of most metallic solids. In contrast, in an ionic compound like table salt, when the planes of an ionic bond slide past one another, the resultant change in location shifts ions of the same charge into close proximity, resulting in the cleavage of the crystal. Such a shift is not observed in a covalently bonded crystal, such as diamond, where fracture and crystal fragmentation occur. Reversible elastic deformation in metals can be described by Hooke's law for restoring forces, where the stress is linearly proportional to the strain. Heat, or forces larger than a metal's elastic limit, may cause a permanent deformation known as plastic deformation or plasticity.
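Hooke's law in the linear elastic regime can be illustrated numerically. The sketch below assumes rough textbook values for the Young's modulus and elastic limit of structural steel; the function names are hypothetical.

```python
# Hooke's law for the elastic (reversible) regime: stress = E * strain,
# where E is Young's modulus. The modulus and elastic limit below are
# rough textbook figures for structural steel, used for illustration.
E_STEEL = 200e9       # Young's modulus, Pa
YIELD_STRESS = 250e6  # approximate elastic limit, Pa

def stress(strain: float, youngs_modulus: float = E_STEEL) -> float:
    """Linear elastic stress in Pa for a given dimensionless strain."""
    return youngs_modulus * strain

def is_elastic(strain: float) -> bool:
    """True while the stress stays below the elastic limit."""
    return stress(strain) < YIELD_STRESS

print(stress(0.001))      # 0.1% strain gives 2e8 Pa, still elastic
print(is_elastic(0.001))
print(is_elastic(0.002))  # 4e8 Pa exceeds the limit: plastic deformation
```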
An applied force may be a tensile or compressive force, or a shear, bending or torsion force. A temperature change may affect the movement or displacement of structural defects in the metal, such as grain boundaries, point vacancies, line and screw dislocations, stacking faults and twins in both crystalline and non-crystalline metals. Internal slip and metal fatigue may ensue. The atoms of metallic substances are typically arranged in one of three common crystal structures, namely body-centered cubic (bcc), face-centered cubic (fcc) and hexagonal close-packed (hcp). In bcc, each atom is positioned at the center of a cube of eight others. In fcc and hcp, each atom is surrounded by twelve others; some metals adopt different structures depending on the temperature.
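The packing of these crystal structures can be checked with a short hard-sphere calculation. The helper below is hypothetical; the geometry used (spheres touching along the body diagonal in bcc, along the face diagonal in fcc) is the standard hard-sphere model, and hcp packs as densely as fcc.

```python
import math

# Atomic packing fraction: volume of the atoms (as hard spheres of
# radius r) in a unit cell, divided by the cell volume.
def packing_fraction(atoms_per_cell: int, cell_edge_in_radii: float) -> float:
    r = 1.0
    sphere_volume = atoms_per_cell * (4.0 / 3.0) * math.pi * r**3
    cell_volume = (cell_edge_in_radii * r) ** 3
    return sphere_volume / cell_volume

# bcc: 2 atoms per cell; spheres touch along the body diagonal, a = 4r/sqrt(3)
bcc = packing_fraction(2, 4 / math.sqrt(3))
# fcc: 4 atoms per cell; spheres touch along the face diagonal, a = 4r/sqrt(2)
fcc = packing_fraction(4, 4 / math.sqrt(2))

print(f"bcc: {bcc:.3f}, fcc/hcp: {fcc:.3f}")  # bcc: 0.680, fcc/hcp: 0.740
```

The close-packed fcc and hcp structures fill about 74% of space, against 68% for bcc, which is one reason coordination number twelve goes with the densest packings.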
An optical fiber is a flexible, transparent fiber made by drawing glass or plastic to a diameter slightly thicker than that of a human hair. Optical fibers are used most often as a means to transmit light between the two ends of the fiber and find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths than electrical cables. Fibers are used instead of metal wires because signals travel along them with less loss and because fibers are immune to electromagnetic interference. Fibers are also used for illumination and imaging, and are often wrapped in bundles so they may be used to carry light into, or images out of, confined spaces, as in the case of a fiberscope. Specially designed fibers are used for a variety of other applications, such as fiber optic sensors and fiber lasers. Optical fibers typically include a core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection, which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers, while those that support a single mode are called single-mode fibers.
Multi-mode fibers generally have a wider core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 1,000 meters. Being able to join optical fibers with low loss is important in fiber optic communication; this is more complex than joining electrical wire or cable and involves careful cleaving of the fibers, precise alignment of the fiber cores, and the coupling of these aligned cores. For applications that demand a permanent connection, a fusion splice is common. In this technique, an electric arc is used to melt the ends of the fibers together. Another common technique is a mechanical splice, where the ends of the fibers are held in contact by mechanical force. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors. The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber optics.
The term was coined by Indian physicist Narinder Singh Kapany, widely acknowledged as the father of fiber optics. Guiding of light by refraction, the principle that makes fiber optics possible, was first demonstrated by Daniel Colladon and Jacques Babinet in Paris in the early 1840s. John Tyndall included a demonstration of it in his public lectures in London 12 years later. Tyndall also wrote about the property of total internal reflection in an introductory book about the nature of light in 1870: "When the light passes from air into water, the refracted ray is bent towards the perpendicular... When the ray passes from water to air it is bent from the perpendicular... If the angle which the ray in water encloses with the perpendicular to the surface be greater than 48 degrees, the ray will not quit the water at all: it will be reflected at the surface.... The angle which marks the limit where total reflection begins is called the limiting angle of the medium. For water this angle is 48°27′, for flint glass it is 38°41′, while for diamond it is 23°42′."
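Tyndall's "limiting angle" is what is now called the critical angle, given by Snell's law as sin(θc) = n2/n1 for light leaving a denser medium. A minimal sketch, assuming modern approximate refractive indices, which differ slightly from Tyndall's 19th-century figures:

```python
import math

# Critical angle for total internal reflection when light travels from
# a denser medium (index n_dense) toward a rarer one (n_rare, air = 1).
# Beyond this angle of incidence the ray is totally reflected.
def critical_angle_deg(n_dense: float, n_rare: float = 1.0) -> float:
    return math.degrees(math.asin(n_rare / n_dense))

print(f"water:   {critical_angle_deg(1.333):.1f} deg")  # ~48.6, near 48 deg 27'
print(f"glass:   {critical_angle_deg(1.62):.1f} deg")   # dense flint glass, ~38.1
print(f"diamond: {critical_angle_deg(2.417):.1f} deg")  # ~24.4, near 23 deg 42'
```

The same relation, applied at the core-cladding boundary, is what confines light inside an optical fiber.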
In the late 19th and early 20th centuries, light was guided through bent glass rods to illuminate body cavities. Practical applications such as close internal illumination during dentistry appeared early in the twentieth century. Image transmission through tubes was demonstrated independently by the radio experimenter Clarence Hansell and the television pioneer John Logie Baird in the 1920s. In the 1930s, Heinrich Lamm showed that one could transmit images through a bundle of unclad optical fibers and used it for internal medical examinations, but his work was largely forgotten. In 1953, Dutch scientist Bram van Heel first demonstrated image transmission through bundles of optical fibers with a transparent cladding. That same year, Harold Hopkins and Narinder Singh Kapany at Imperial College in London succeeded in making image-transmitting bundles with over 10,000 fibers, and subsequently achieved image transmission through a 75 cm long bundle which combined several thousand fibers. Their article titled "A flexible fibrescope, using static scanning" was published in the journal Nature in 1954.
The first practical fiber optic semi-flexible gastroscope was patented by Basil Hirschowitz, C. Wilbur Peters, and Lawrence E. Curtiss, researchers at the University of Michigan, in 1956. In the process of developing the gastroscope, Curtiss produced the first glass-clad fibers. A variety of other image transmission applications soon followed. Kapany coined the term fiber optics, wrote a 1960 article in Scientific American that introduced the topic to a wide audience, and wrote the first book about the new field. The first working fiber-optic data transmission system was demonstrated by German physicist Manfred Börner at Telefunken Research Labs in Ulm in 1965, followed by the first patent application for this technology in 1966. NASA used fiber optics in the television cameras that were sent to the moon. At the time, the use in the cameras was classified confidential, and employees handling the cameras had to be supervised by someone with an appropriate security clearance. Charles K. Kao and George A. Hockham of the British company Standard Telephones and Cables were the first, in 1965, to promote the idea that the attenuation in optical fibers could be reduced below 20 decibels per kilometer, making fibers a practical communication medium.
They proposed th
An electrical load is an electrical component or portion of a circuit that consumes electric power. This is opposed to a power source, such as a generator, which produces power. In electric power circuits, examples of loads are appliances and lights. The term may also refer to the power consumed by a circuit. The term is used more broadly in electronics for a device connected to a signal source, whether or not it consumes power. If an electric circuit has an output port, a pair of terminals that produces an electrical signal, the circuit connected to this port is the load. For example, if a CD player is connected to an amplifier, the CD player is the source and the amplifier is the load. Load affects the performance of circuits with respect to output voltages or currents, such as in sensors, voltage sources, and amplifiers. Mains power outlets provide an easy example: they supply power at constant voltage, with electrical appliances connected to the power circuit collectively making up the load. When a high-power appliance switches on, it reduces the load impedance.
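The impedance drop when an appliance switches on follows from the loads being connected in parallel. A minimal sketch with illustrative round-number resistances, not real appliance ratings:

```python
# Appliances on a mains circuit sit in parallel, so each additional one
# switched on lowers the combined load impedance seen by the supply.
def parallel_impedance(resistances: list[float]) -> float:
    """Combined resistance of parallel loads: 1/R = sum(1/R_i)."""
    return 1.0 / sum(1.0 / r for r in resistances)

lamp = 500.0   # ohms, illustrative
heater = 25.0  # ohms, illustrative high-power appliance

print(parallel_impedance([lamp]))          # lamp alone: 500 ohms
print(parallel_impedance([lamp, heater]))  # heater on: ~23.8 ohms
```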
If the load impedance is not much higher than the power supply impedance, the voltage will drop. In a domestic environment, switching on a heating appliance may cause incandescent lights to dim noticeably. When discussing the effect of load on a circuit, it is helpful to disregard the circuit's actual design and consider only the Thévenin equivalent: an ideal voltage source V_S in series with a source resistance R_S. With no load, all of V_S appears across the open output terminals. We would like to ignore the details of the load circuit, as we did for the power supply, and represent it as simply as possible, so we model the load as a single input resistance R_L connected across the output. Whereas the voltage source by itself was an open circuit, adding the load makes a closed circuit and allows charge to flow. This current places a voltage drop across R_S, so the voltage at the output terminal is no longer V_S. The output voltage can be determined by the voltage division rule: V_OUT = V_S · R_L / (R_L + R_S). If the source resistance is not negligibly small compared to the load impedance, the output voltage will fall.
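The voltage division rule can be evaluated directly; the values chosen below for V_S and R_S are illustrative:

```python
# Output voltage of a Thevenin source V_S with internal resistance R_S
# driving a load R_L, from the voltage division rule:
#   V_out = V_S * R_L / (R_L + R_S)
def output_voltage(v_s: float, r_s: float, r_l: float) -> float:
    return v_s * r_l / (r_l + r_s)

V_S, R_S = 12.0, 1.0  # illustrative: 12 V source, 1 ohm internal resistance

print(output_voltage(V_S, R_S, 1000.0))  # light load: ~11.99 V, close to V_S
print(output_voltage(V_S, R_S, 2.0))     # heavy load: 8.0 V, a noticeable drop
```

The heavy-load case is the incandescent-lights-dimming effect in miniature: the load resistance is comparable to the source resistance, so a large share of V_S is lost across R_S.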
This illustration uses simple resistances, but a similar analysis can be applied to alternating current circuits using resistive and inductive elements. A dummy load is a device used to simulate an electrical load for testing purposes.
In radio-frequency engineering, a transmission line is a specialized cable or other structure designed to conduct alternating current of radio frequency, that is, currents with a frequency high enough that their wave nature must be taken into account. Transmission lines are used for purposes such as connecting radio transmitters and receivers with their antennas, distributing cable television signals, trunklines routing calls between telephone switching centres, computer network connections, and high-speed computer data buses. This article covers two-conductor transmission lines such as parallel line, coaxial cable and microstrip. Some sources refer to waveguide, dielectric waveguide and optical fibre as transmission lines; however, these lines require different analytical techniques and so are not covered by this article. Ordinary electrical cables suffice to carry low-frequency alternating current, such as mains power, which reverses direction 100 to 120 times per second, and audio signals. However, they cannot be used to carry currents in the radio frequency range, above about 30 kHz, because the energy tends to radiate off the cable as radio waves, causing power losses.
Radio frequency currents also tend to reflect from discontinuities in the cable, such as connectors and joints, and travel back down the cable toward the source. These reflections act as bottlenecks, preventing the signal power from reaching the destination. Transmission lines use specialized construction and impedance matching to carry electromagnetic signals with minimal reflections and power losses. The distinguishing feature of most transmission lines is that they have uniform cross-sectional dimensions along their length, giving them a uniform impedance, called the characteristic impedance, to prevent reflections. Types of transmission line include parallel line, coaxial cable, and planar transmission lines such as stripline and microstrip. The higher the frequency of electromagnetic waves moving through a given cable or medium, the shorter the wavelength of the waves. Transmission lines become necessary when the transmitted frequency's wavelength is sufficiently short that the length of the cable becomes a significant part of a wavelength. At microwave frequencies and above, power losses in transmission lines become excessive, and waveguides are used instead, which function as "pipes" to confine and guide the electromagnetic waves.
Some sources define waveguides as a type of transmission line. At higher frequencies, in the terahertz and visible ranges, waveguides in turn become lossy, and optical methods are used to guide electromagnetic waves. The theory of sound wave propagation is mathematically similar to that of electromagnetic waves, so techniques from transmission line theory are also used to build structures to conduct acoustic waves. Mathematical analysis of the behaviour of electrical transmission lines grew out of the work of James Clerk Maxwell, Lord Kelvin and Oliver Heaviside. In 1855 Lord Kelvin formulated a diffusion model of the current in a submarine cable; the model predicted the poor performance of the 1858 trans-Atlantic submarine telegraph cable. In 1885 Heaviside published the first papers that described his analysis of propagation in cables and the modern form of the telegrapher's equations. In many electric circuits, the length of the wires connecting the components can for the most part be ignored; that is, the voltage on the wire at a given time can be assumed to be the same at all points.
However, when the voltage changes in a time interval comparable to the time it takes for the signal to travel down the wire, the length becomes important and the wire must be treated as a transmission line. Stated another way, the length of the wire is important when the signal includes frequency components with corresponding wavelengths comparable to or less than the length of the wire. A common rule of thumb is that the cable or wire should be treated as a transmission line if the length is greater than 1/10 of the wavelength. At this length the phase delay and the interference of any reflections on the line become important and can lead to unpredictable behaviour in systems which have not been designed using transmission line theory. For the purposes of analysis, an electrical transmission line can be modelled as a two-port network, as follows: In the simplest case, the network is assumed to be linear, the two ports are assumed to be interchangeable. If the transmission line is uniform along its length its behaviour is described by a single parameter called the characteristic impedance, symbol Z0.
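The 1/10-wavelength rule of thumb is easy to apply numerically. The sketch below assumes a typical velocity factor of 0.66 for solid-polyethylene coaxial cable; the function names are hypothetical.

```python
# Rule of thumb: treat a cable as a transmission line when its length
# exceeds 1/10 of the signal wavelength in the cable.
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(freq_hz: float, velocity_factor: float = 0.66) -> float:
    """Signal wavelength inside the cable, shortened by the velocity factor."""
    return C * velocity_factor / freq_hz

def needs_tline_analysis(length_m: float, freq_hz: float) -> bool:
    return length_m > wavelength_m(freq_hz) / 10.0

# 2 m of coax at 100 MHz: the in-cable wavelength is about 1.98 m, so the
# cable spans roughly a full wavelength and must be treated as a line.
print(needs_tline_analysis(2.0, 100e6))  # True
# The same 2 m cable at 50 Hz mains frequency is electrically tiny.
print(needs_tline_analysis(2.0, 50.0))   # False
```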
This is the ratio of the complex voltage of a given wave to the complex current of the same wave at any point on the line. Typical values of Z0 are 50 or 75 ohms for a coaxial cable, about 100 ohms for a twisted pair of wires, and about 300 ohms for a common type of untwisted pair used in radio transmission. When sending power down a transmission line, it is desirable that as much power as possible be absorbed by the load and as little as possible be reflected back to the source. This can be ensured by making the load impedance equal to Z0, in which case the transmission line is said to be matched. Some of the power fed into a transmission line is lost because of its resistance. This effect is called resistive loss. At high frequencies, another effect cal
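The matched condition can be quantified by the reflection coefficient gamma = (Z_L - Z0) / (Z_L + Z0), which is zero for a matched load. A minimal sketch, assuming the common 50-ohm characteristic impedance:

```python
# Reflection coefficient at the load end of a transmission line with
# characteristic impedance Z0. A matched load (Z_L = Z0) reflects
# nothing; a severe mismatch reflects most of the incident wave.
def reflection_coefficient(z_load: complex, z0: float = 50.0) -> complex:
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(50.0))        # 0.0: matched, all power absorbed
print(reflection_coefficient(75.0))        # 0.2: mild mismatch
print(abs(reflection_coefficient(1e12)))   # ~1.0: effectively an open circuit
```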
A transmission medium is a material substance that can propagate energy waves. For example, the transmission medium for sound is usually a gas, but solids and liquids may also act as transmission media for sound. Although vacuum contains no material substance, it can act as a transmission medium for electromagnetic waves such as light and radio waves. While no material substance is required for electromagnetic waves to propagate, such waves are affected by the transmission media they pass through, for instance by absorption or by reflection or refraction at the interfaces between media. The term transmission medium also refers to a technical device that employs a material substance to transmit or guide waves; thus, an optical fiber or a copper cable is a transmission medium, and such media can guide the transmission of signals in networks. A transmission medium can be classified as a linear medium if different waves at any particular point in the medium can be superposed. Electromagnetic radiation can be transmitted through an optical medium, such as optical fiber, or through twisted pair wires, coaxial cable, or dielectric-slab waveguides.
It may also pass through any physical material transparent to the specific wavelength, such as water, glass, or concrete. Sound is, by definition, the vibration of matter, so it requires a physical medium for transmission, as do other kinds of mechanical waves and heat energy. Historically, science incorporated various aether theories to explain the transmission medium. However, it is now known that electromagnetic waves do not require a physical transmission medium, and so can travel through the "vacuum" of free space. Regions of the insulative vacuum can become conductive for electrical conduction through the presence of free electrons, holes, or ions. A physical medium in data communications is the transmission path over which a signal propagates. Many transmission media are used as communications channels. For telecommunications purposes in the United States, under Federal Standard 1037C, transmission media are classified as either guided, in which the waves are guided along a solid medium such as a transmission line, or wireless, in which transmission and reception are achieved by means of an antenna.
One of the most common physical media used in networking is copper wire. Copper wire can carry signals over long distances using relatively low amounts of power; the unshielded twisted pair consists of eight strands of copper wire, organized into four pairs. Another example of a physical medium is optical fiber, which has emerged as the most commonly used transmission medium for long-distance communications. Optical fiber is a thin strand of glass. Major factors favoring optical fiber over copper are data rates, distance, and cost. Optical fiber can carry huge amounts of data compared to copper, and it can be run for hundreds of miles without the need for signal repeaters, in turn reducing maintenance costs and improving the reliability of the communication system, because repeaters are a common source of network failures. Glass is also lighter than copper, reducing the need for specialized heavy-lifting equipment when installing long-distance optical fiber. Optical fiber for indoor applications costs about a dollar a foot, the same as copper.
Multimode and single mode are the two types of optical fiber in common use. Multimode fiber uses LEDs as the light source and can carry signals over shorter distances, about 2 kilometers. Single mode fiber can carry signals over distances of tens of miles. Wireless media may carry surface waves or skywaves, either longitudinally or transversely, and are so classified. In both cases, communication is in the form of electromagnetic waves. With guided transmission media, the waves are guided along a physical path. Unguided transmission media are methods that allow the transmission of data without the use of physical means to define the path the signal takes; examples include radio and infrared. The term direct link is used to refer to the transmission path between two devices in which signals propagate directly from transmitter to receiver with no intermediate devices, other than amplifiers or repeaters used to increase signal strength. This term can apply to unguided media. A transmission may be simplex, half-duplex, or full-duplex.
In simplex transmission, signals are transmitted in only one direction. In half-duplex operation, both stations may transmit, but only one at a time. In full-duplex operation, both stations may transmit simultaneously; in this case, the medium is carrying signals in both directions at the same time. There are thus two broad types of transmission media: guided and unguided. Guided media include unshielded twisted pair, shielded twisted pair, coaxial cable, and optical fiber. Unguided media carry data signals that flow through the air and are not bound to a physical channel; unguided media used for data communication include radio and microwave transmission. Transmission and reception of data is performed in four steps: the data is coded as binary numbers at the sender end. A carrie
In physics, energy is the quantitative property that must be transferred to an object in order to perform work on, or to heat, the object. Energy is a conserved quantity; the SI unit of energy is the joule, the energy transferred to an object by the work of moving it a distance of 1 metre against a force of 1 newton. Common forms of energy include the kinetic energy of a moving object, the potential energy stored by an object's position in a force field, the elastic energy stored by stretching solid objects, the chemical energy released when a fuel burns, the radiant energy carried by light, the thermal energy due to an object's temperature. Mass and energy are related. Due to mass–energy equivalence, any object that has mass when stationary has an equivalent amount of energy whose form is called rest energy, any additional energy acquired by the object above that rest energy will increase the object's total mass just as it increases its total energy. For example, after heating an object, its increase in energy could be measured as a small increase in mass, with a sensitive enough scale.
Living organisms require energy to stay alive, such as the energy humans get from food. Human civilization requires energy to function, which it gets from energy resources such as fossil fuels, nuclear fuel, or renewable energy. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the sun and the geothermal energy contained within the earth. The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object, or the composite motion of the components of an object, while potential energy reflects the potential of an object to have motion; it is a function of the position of an object within a field or may be stored in the field itself. While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as forms in their own right. For example, macroscopic mechanical energy is the sum of translational and rotational kinetic and potential energy in a system, neglecting the kinetic energy due to temperature; another example is nuclear energy, which combines potentials from the nuclear force and the weak force, among others.
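The decomposition into kinetic and potential contributions can be illustrated with macroscopic mechanical energy. The numbers below are illustrative, and g is taken as standard gravity:

```python
# Macroscopic mechanical energy as the sum of kinetic energy (1/2 m v^2)
# and gravitational potential energy (m g h).
G = 9.81  # standard gravity, m/s^2

def kinetic_energy(mass_kg: float, speed_ms: float) -> float:
    return 0.5 * mass_kg * speed_ms**2

def potential_energy(mass_kg: float, height_m: float) -> float:
    return mass_kg * G * height_m

def mechanical_energy(mass_kg: float, speed_ms: float, height_m: float) -> float:
    return kinetic_energy(mass_kg, speed_ms) + potential_energy(mass_kg, height_m)

# A 2 kg object moving at 3 m/s at a height of 5 m:
print(kinetic_energy(2.0, 3.0))          # 9.0 J
print(potential_energy(2.0, 5.0))        # ~98.1 J
print(mechanical_energy(2.0, 3.0, 5.0))  # ~107.1 J in total
```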
The word energy derives from the Ancient Greek energeia ('activity, operation'), which appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure. In the late 17th century, Gottfried Leibniz proposed the idea of the Latin vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the random motion of the constituent parts of matter, although it would be more than a century until this was accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. In 1807, Thomas Young was the first to use the term "energy" instead of vis viva, in its modern sense. Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy".
The law of conservation of energy was first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat. These developments led to the theory of conservation of energy, formalized by William Thomson as the field of thermodynamics. Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst; it also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time. Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time.
In 1843, James Prescott Joule independently discovered the mechanical equivalent of heat in a series of experiments. The most famous of them used the "Joule apparatus": a descending weight, attached to a string, caused rotation of a paddle immersed in water insulated from heat transfer. It showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle. In the International System of Units, the unit of energy is the joule, named after James Prescott Joule; it is a derived unit, equal to the energy expended in applying a force of one newton through a distance of one metre. However, energy is also expressed in many other units not part of the SI, such as ergs, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units. The SI unit of energy rate is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour.
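Joule's experiment can be sketched as a short energy-balance calculation. The weight, drop height and water mass below are illustrative, not the dimensions of Joule's actual apparatus:

```python
# Joule's falling-weight experiment: gravitational potential energy lost
# by the descending weight (m * g * h) appears as internal energy in the
# insulated water, raising its temperature by E / (m_water * c).
G = 9.81          # standard gravity, m/s^2
C_WATER = 4186.0  # specific heat of water, J/(kg*K)

def potential_energy_lost(mass_kg: float, drop_m: float) -> float:
    return mass_kg * G * drop_m

def temperature_rise(energy_j: float, water_kg: float) -> float:
    return energy_j / (water_kg * C_WATER)

# A 10 kg weight falling 2 m while stirring 1 kg of insulated water:
e = potential_energy_lost(10.0, 2.0)  # 196.2 J
print(e)
print(temperature_rise(e, 1.0))       # ~0.047 K per drop
```

The tiny temperature rise per drop is why the experiment demanded such careful thermometry, and why repeated drops were needed to produce a measurable effect.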
The General Services Administration, an independent agency of the United States government, was established in 1949 to help manage and support the basic functioning of federal agencies. GSA supplies products and communications for U.S. government offices, provides transportation and office space to federal employees, and develops government-wide cost-minimizing policies, among other management tasks. GSA employs about 12,000 federal workers and has an annual operating budget of $20.9 billion. GSA oversees $66 billion of procurement annually. It also contributes to the management of about $500 billion in U.S. federal property, divided chiefly among 8,700 owned and leased buildings and a 215,000-vehicle motor pool. Among the real estate assets managed by GSA are the Ronald Reagan Building and International Trade Center in Washington, D.C., the largest U.S. federal building after the Pentagon, and the Hart-Dole-Inouye Federal Center. GSA's business lines include the Federal Acquisition Service (FAS) and the Public Buildings Service, as well as several staff offices including the Office of Government-wide Policy, the Office of Small Business Utilization, and the Office of Mission Assurance.
As part of FAS, GSA's Technology Transformation Services helps federal agencies improve delivery of information and services to the public. Key initiatives include FedRAMP, Cloud.gov, the USAGov platform, Data.gov, Performance.gov, and Challenge.gov. GSA is a member of the Procurement G6, an informal group leading the use of framework agreements and e-procurement instruments in public procurement. In 1947, President Harry Truman asked former President Herbert Hoover to lead what became known as the Hoover Commission to make recommendations to reorganize the operations of the federal government. One of the recommendations of the commission was the establishment of an "Office of the General Services", which would combine the responsibilities of the U.S. Treasury Department's Bureau of Federal Supply and Office of Contract Settlement, the National Archives Establishment, all functions of the Federal Works Agency (including the Public Buildings Administration and the Public Roads Administration), and the War Assets Administration. GSA became an independent agency on July 1, 1949, after the passage of the Federal Property and Administrative Services Act.
General Jess Larson, Administrator of the War Assets Administration, was named GSA's first Administrator. The first job awaiting Administrator Larson and the newly formed GSA was a complete renovation of the White House; the structure had fallen into such a state of disrepair by 1949 that one inspector of the time said the historic structure was standing "purely from habit." Larson explained the nature of the total renovation in depth by saying, "In order to make the White House structurally sound, it was necessary to dismantle, I mean dismantle, everything from the White House except the four walls, which were constructed of stone. Everything, except the four walls without a roof, was stripped down, that's where the work started." GSA worked with President Truman and First Lady Bess Truman to ensure that the new agency's first major project would be a success. GSA completed the renovation in 1952. In 1986 GSA headquarters, U. S. General Services Administration Building, located at Eighteenth and F Streets, NW, was listed on the National Register of Historic Places, at the time serving as Interior Department offices.
In 1960 GSA created the Federal Telecommunications System, a government-wide intercity telephone system. In 1962 the Ad Hoc Committee on Federal Office Space created a new building program to address obsolete office buildings in Washington, D.C., resulting in the construction of many of the offices that now line Independence Avenue. In 1970 the Nixon administration created the Consumer Product Information Coordinating Center, now part of USAGov. In 1972 GSA established the Automated Data and Telecommunications Service, which later became the Office of Information Resources Management. In 1973 GSA created the Office of Federal Management Policy. In 1974 the Federal Buildings Fund was initiated, allowing GSA to issue rent bills to federal agencies. GSA's Office of Acquisition Policy centralized procurement policy in 1978. GSA was responsible for emergency preparedness and stockpiling strategic materials to be used in wartime until these functions were transferred to the newly created Federal Emergency Management Agency in 1979.
In 1984 GSA introduced the federal government to the use of charge cards, known as the GSA SmartPay system. The National Archives and Records Administration was spun off into an independent agency in 1985. The same year, GSA began to provide government-wide policy oversight and guidance for federal real property management as a result of an executive order signed by President Ronald Reagan. In 2003 the Federal Protective Service was moved to the Department of Homeland Security. In 2005 GSA reorganized to merge the Federal Supply Service and Federal Technology Service business lines into the Federal Acquisition Service. On April 3, 2009, President Barack Obama nominated Martha N. Johnson to serve as GSA Administrator. After a nine-month delay, the United States Senate confirmed her nomination on February 4, 2010. On April 2, 2012, Johnson resigned in the wake of a management-deficiency report that detailed improper payments for a 2010 "Western Regions" training conference put on by the Public Buildings Service in Las Vegas.
In July 1991 GSA contractors began the excavation of what is now the Ted Weiss Federal Building in New York City. The planning for that buildin