General Services Administration
The General Services Administration (GSA), an independent agency of the United States government, was established in 1949 to help manage and support the basic functioning of federal agencies. GSA supplies products and communications for U.S. government offices, provides transportation and office space to federal employees, and develops government-wide cost-minimizing policies, among other management tasks. GSA employs about 12,000 federal workers and has an annual operating budget of $20.9 billion. GSA oversees $66 billion of procurement annually and contributes to the management of about $500 billion in U.S. federal property, divided chiefly among 8,700 owned and leased buildings and a 215,000-vehicle motor pool. Among the real estate assets managed by GSA are the Ronald Reagan Building and International Trade Center in Washington, D.C. – the largest U.S. federal building after the Pentagon – and the Hart-Dole-Inouye Federal Center. GSA's business lines include the Federal Acquisition Service (FAS) and the Public Buildings Service, as well as several Staff Offices, including the Office of Government-wide Policy, the Office of Small Business Utilization, and the Office of Mission Assurance.
As part of FAS, GSA's Technology Transformation Services helps federal agencies improve delivery of information and services to the public. Key initiatives include FedRAMP, Cloud.gov, the USAGov platform, Data.gov, Performance.gov, and Challenge.gov. GSA is a member of the Procurement G6, an informal group leading the use of framework agreements and e-procurement instruments in public procurement.

In 1947 President Harry Truman asked former President Herbert Hoover to lead what became known as the Hoover Commission, charged with recommending how to reorganize the operations of the federal government. One of the commission's recommendations was the establishment of an "Office of the General Services," which would combine the responsibilities of the following organizations:

the U.S. Treasury Department's Bureau of Federal Supply
the U.S. Treasury Department's Office of Contract Settlement
the National Archives Establishment
all functions of the Federal Works Agency, including the Public Buildings Administration and the Public Roads Administration
the War Assets Administration

GSA became an independent agency on July 1, 1949, after the passage of the Federal Property and Administrative Services Act.
General Jess Larson, Administrator of the War Assets Administration, was named GSA's first Administrator. The first job awaiting Administrator Larson and the newly formed GSA was a complete renovation of the White House; the structure had fallen into such a state of disrepair by 1949 that one inspector of the time said the historic building was standing "purely from habit." Larson explained the nature of the total renovation in depth by saying, "In order to make the White House structurally sound, it was necessary to dismantle, I mean dismantle, everything from the White House except the four walls, which were constructed of stone. Everything, except the four walls without a roof, was stripped down, and that's where the work started." GSA worked with President Truman and First Lady Bess Truman to ensure that the new agency's first major project would be a success. GSA completed the renovation in 1952. In 1986 GSA's headquarters, the U.S. General Services Administration Building at Eighteenth and F Streets, NW, was listed on the National Register of Historic Places; at the time it was serving as Interior Department offices.
In 1960 GSA created the Federal Telecommunications System, a government-wide intercity telephone system. In 1962 the Ad Hoc Committee on Federal Office Space created a new building program to address obsolete office buildings in Washington, D.C., resulting in the construction of many of the offices that now line Independence Avenue. In 1970 the Nixon administration created the Consumer Product Information Coordinating Center, now part of USAGov. In 1972 GSA established the Automated Data and Telecommunications Service, which later became the Office of Information Resources Management. In 1973 GSA created the Office of Federal Management Policy. In 1974 the Federal Buildings Fund was initiated, allowing GSA to issue rent bills to federal agencies. GSA's Office of Acquisition Policy centralized procurement policy in 1978. GSA was responsible for emergency preparedness and for stockpiling strategic materials to be used in wartime until these functions were transferred to the newly created Federal Emergency Management Agency in 1979.
In 1984 GSA introduced the federal government to the use of charge cards, known as the GSA SmartPay system. The National Archives and Records Administration was spun off into an independent agency in 1985; the same year, GSA began to provide government-wide policy oversight and guidance for federal real property management as a result of an Executive Order signed by President Ronald Reagan. In 2003 the Federal Protective Service was moved to the Department of Homeland Security. In 2005 GSA reorganized to merge the Federal Supply Service and Federal Technology Service business lines into the Federal Acquisition Service. On April 3, 2009, President Barack Obama nominated Martha N. Johnson to serve as GSA Administrator. After a nine-month delay, the United States Senate confirmed her nomination on February 4, 2010. On April 2, 2012, Johnson resigned in the wake of a management-deficiency report that detailed improper payments for a 2010 "Western Regions" training conference put on by the Public Buildings Service in Las Vegas.
In July 1991 GSA contractors began the excavation of what is now the Ted Weiss Federal Building in New York City. The planning for that buildin
Physics is the natural science that studies matter, its motion and behavior through space and time, and the related entities of energy and force. Physics is one of the most fundamental scientific disciplines; its main goal is to understand how the universe behaves. Physics is one of the oldest academic disciplines and, through its inclusion of astronomy, perhaps the oldest. Over much of the past two millennia, chemistry and certain branches of mathematics were a part of natural philosophy, but during the scientific revolution in the 17th century these natural sciences emerged as unique research endeavors in their own right. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms studied by other sciences and suggest new avenues of research in academic disciplines such as mathematics and philosophy. Advances in physics enable advances in new technologies.
For example, advances in the understanding of electromagnetism and nuclear physics led directly to the development of new products that have transformed modern-day society, such as television, domestic appliances, and nuclear weapons. Astronomy is one of the oldest natural sciences. Early civilizations dating back to before 3000 BCE, such as the Sumerians, the ancient Egyptians, and the Indus Valley Civilization, had predictive knowledge and a basic understanding of the motions of the Sun and stars; the stars and planets were worshipped and believed to represent gods. While the explanations for the observed positions of the stars were unscientific and lacking in evidence, these early observations laid the foundation for astronomy, as the stars were found to traverse great circles across the sky, which however could not explain the positions of the planets. According to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy.
Egyptian astronomers left monuments showing knowledge of the constellations and the motions of the celestial bodies, while the Greek poet Homer wrote of various celestial objects in his Iliad and Odyssey. Natural philosophy has its origins in Greece during the Archaic period, when pre-Socratic philosophers like Thales rejected non-naturalistic explanations for natural phenomena and proclaimed that every event had a natural cause. They proposed ideas verified by reason and observation, and many of their hypotheses proved successful in experiment. The fall of the Western Roman Empire in the fifth century resulted in a decline in intellectual pursuits in the western part of Europe. By contrast, the Eastern Roman Empire resisted the attacks of the barbarians and continued to advance various fields of learning, including physics. In the sixth century, Isidore of Miletus created an important compilation of Archimedes' works that are copied in the Archimedes Palimpsest. In sixth-century Europe, John Philoponus, a Byzantine scholar, questioned Aristotle's teaching of physics, noting its flaws.
He introduced the theory of impetus. Aristotle's physics was not scrutinized until Philoponus appeared; unlike Aristotle, who based his physics on verbal argument, Philoponus relied on observation. On Aristotle's physics John Philoponus wrote: “But this is erroneous, and our view may be corroborated by actual observation more than by any sort of verbal argument. For if you let fall from the same height two weights of which one is many times as heavy as the other, you will see that the ratio of the times required for the motion does not depend on the ratio of the weights, but that the difference in time is a small one. And so, if the difference in the weights is not considerable, that is, if one is, let us say, double the other, there will be no difference, or else an imperceptible difference, in time, though the difference in weight is by no means negligible, with one body weighing twice as much as the other.” John Philoponus' criticism of Aristotelian principles of physics served as an inspiration for Galileo Galilei ten centuries later, during the Scientific Revolution.
Galileo cited Philoponus in his works when arguing that Aristotelian physics was flawed. In the 1300s Jean Buridan, a teacher in the faculty of arts at the University of Paris, developed the concept of impetus, a step toward the modern idea of momentum. Islamic scholarship inherited Aristotelian physics from the Greeks and, during the Islamic Golden Age, developed it further, placing emphasis on observation and a priori reasoning and developing early forms of the scientific method. The most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he conclusively disproved the ancient Greek idea about vision and came up with a new theory. In the book, he presented a study of the phenomenon of the camera obscura (his thousand-year-old
Phase distortion synthesis
Phase distortion synthesis is a synthesis method introduced in 1984 by Casio in its CZ range of synthesizers. In outline, it is similar to the phase modulation synthesis championed by Yamaha Corporation, in the sense that both methods dynamically change the harmonic content of a carrier waveform under the influence of another waveform in the time domain. However, the application and results of the two methods are quite distinct. Casio made five different synthesizers using its original concept of PD synthesis; the synthesis method of the VZ-1 and its relatives, Interactive Phase Distortion, is much more similar to the aforementioned phase modulation than it is a direct evolution of phase distortion. Casio's implementation of PD used oscillators built from modulator and carrier waveforms, synchronised to each other per cycle; the modulators were various angular waves that could 'distort' the carrier's sine into other shapes, to a degree derived from the "DCW" envelope. In doing so, many harmonics were created in the output.
As the modulators were rich in harmonic content, they could create spectra that were more linear, i.e. more similar to traditional subtractive spectra, than those of Yamaha's phase modulation synthesis. PM does not require oscillator sync but was for a long time limited to sine-wave modulators, which meant output spectra bore the non-linear hallmark of Bessel functions. PD is thus a distinct type of PM whose different modulators cause significant differences in operation and sound, so the two are not directly equivalent. The phase transforms are all assembled from piecewise-linear functions under binary logic control and show characteristic sharp knees as they move from minimum to maximum, where the frequency counter's accumulator wraps around and starts over; the sharp knees are smoothed by the roundness of the modulated sine wave and are not too noticeable in the resulting signal. As well as being more capable of generating traditional linear spectra, the CZ synthesizers can emulate resonant filter sweeps; this was done using sine waves at the resonant frequency, windowed at the fundamental frequency.
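The per-cycle piecewise-linear phase transform can be illustrated with a short Python sketch. This is a minimal toy, not Casio's actual DSP: the `knee` parameter and function names are invented for this example. A straight phase ramp is bent at a movable knee before it reads out a sine; a knee at 0.5 leaves the sine undistorted, and moving the knee toward zero skews the readout and brightens the timbre.

```python
import math

def distorted_phase(phase, knee):
    """Piecewise-linear phase transform: the straight ramp 0..1 is bent
    at 'knee', so the first half of the sine is read out faster.
    knee=0.5 yields an undistorted sine; a smaller knee adds harmonics."""
    if phase < knee:
        return 0.5 * phase / knee                  # rise to 0.5 by the knee
    return 0.5 + 0.5 * (phase - knee) / (1.0 - knee)

def pd_oscillator(freq, knee, sample_rate=44100, n=64):
    """Read a sine through the distorted phase; the accumulator wraps
    once per cycle, so every cycle is modulated identically."""
    out = []
    phase = 0.0
    for _ in range(n):
        out.append(math.sin(2 * math.pi * distorted_phase(phase, knee)))
        phase += freq / sample_rate
        phase -= int(phase)                        # per-cycle wrap-around
    return out
```

In a real CZ voice the knee position would be swept by the "DCW" envelope, which is what produces the filter-sweep-like timbral motion.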
Frequencies could be controlled but not resonance amount. Figure 19 from the 1985 CZ-series patent shows how to emulate the variable resonance found in analogue voltage-controlled filters:

a. The base frequency counter, wrapping around every period.
b. The resonance frequency counter, at a higher frequency, being reset when the base counter wraps around.
c. The resonance frequency counter used as a sine wave readout. Note the sudden jump at the reset.
d. The inverted base frequency counter.
e. The product of c and d; the sudden jump in c is now leveled out.

To summarize in other terms: the resonance is a form of digital hard sync, composed of a sine wave at the resonant frequency, amplitude-enveloped by and hard-synced to a window function at the fundamental frequency; the window function can take various shapes, including sawtooth and triangle, thus determining the 'basal' spectrum upon which the resonant effect is superimposed. Since the amplitude of all available window functions ends at zero, this removes sharp discontinuities in the synced sine wave, a well-known way to reduce aliasing in digital sync.
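The patent's steps can be mimicked in a few lines of Python. This is a sketch of the windowing idea only, not the patented circuit: the function name is invented, and the inverted-ramp (sawtooth) window is just one of the window shapes mentioned above.

```python
import math

def resonant_wave(fundamental, resonant, sample_rate=44100, n=128):
    """Emulate variable filter resonance via hard sync: a sine at the
    resonant frequency is reset at the fundamental period and multiplied
    by an inverted-ramp window, so the reset discontinuity is scaled to zero."""
    out = []
    base = 0.0                                   # base counter, 0..1 per period
    for _ in range(n):
        res_phase = base * resonant / fundamental   # resets with the base counter
        sine = math.sin(2 * math.pi * res_phase)    # sine readout, jump at reset
        window = 1.0 - base                         # inverted base counter
        out.append(sine * window)                   # product: jump leveled out
        base += fundamental / sample_rate
        base -= int(base)                           # wrap-around = hard sync point
    return out
```

Because the window reaches zero exactly where the synced sine would jump, the output stays continuous at each reset, which is the aliasing-reduction trick the text describes.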
However, some aliasing is still present due to discontinuities in the function's derivatives. Filter sweep effects are thus generated the same way as sync effects: modulating the frequency of the resonance changes the timbre, adding and subtracting harmonics to and from the chosen fundamental spectrum around the chosen resonant frequency. As outlined above, phase distortion broadly applies mathematical concepts similar to those of phase modulation synthesis, but their implementation and results are not equivalent. Whereas PM – pioneered by John Chowning and commercially used by Yamaha – uses an oscillating modulator that can have its own period, PD applies an angular modulator of straight-line segments hard-synchronised to the same period as its corresponding carrier, i.e. modulating each cycle identically. PM/FM produces Bessel-function-derived spectra unless linearised by the application of feedback, whereas PD produces more linear spectra; this manifests in PD synths' reputation for making it easier to produce traditional subtractive sounds, such as those associated with analogue synths, which are characterised by linear spectra.
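For contrast, a Chowning-style PM oscillator can be sketched the same way (again a minimal illustration with invented parameter names, not Yamaha's implementation): here the modulator is itself a free-running sine whose frequency is set by a modulator:carrier ratio, rather than a straight-line segment hard-synced to the carrier's cycle.

```python
import math

def pm_oscillator(freq, ratio, index, sample_rate=44100, n=64):
    """Phase modulation: the carrier's phase is offset by a sine modulator
    running at freq * ratio; 'index' scales the modulation depth. Unlike PD,
    the modulator is not reset once per carrier cycle."""
    out = []
    car = 0.0
    mod = 0.0
    for _ in range(n):
        out.append(math.sin(2 * math.pi * car + index * math.sin(2 * math.pi * mod)))
        car += freq / sample_rate
        mod += freq * ratio / sample_rate          # independent modulator period
    return out
```

With a non-integer `ratio` the modulator drifts against the carrier from cycle to cycle, producing the inharmonic, Bessel-shaped spectra PD avoids by construction.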
These facts demonstrate how, although the broad concept – alteration of phase – is the same, the results differ greatly. Casio's own engine named Interactive Phase Distortion, which featured in its VZ synths, bears little resemblance to 'actual' PD, being based around an idiosyncratic type of PM instead. In iPD, multiple oscillators are combined in various configurable routings and can modulate each other using PM or ring modulation. However, unlike in Yamaha's implementations, direct PM is restricted to a carrier:modulator ratio of 0:1, with other ratios requiring workarounds and making some oscillators contribute little or nothing to the desired sound. iPD has some added features that give it advantages in some contexts, but it is not as versatile as Yamaha's method for 'pure' phase modulation.
Sound recording and reproduction
Sound recording and reproduction is the electrical, electronic, or digital inscription and re-creation of sound waves, such as spoken voice, instrumental music, or sound effects. The two main classes of sound recording technology are analog recording and digital recording. Acoustic analog recording is achieved by a microphone diaphragm that senses changes in atmospheric pressure caused by acoustic sound waves and records them as a mechanical representation of the sound waves on a medium such as a phonograph record. In magnetic tape recording, the sound waves vibrate the microphone diaphragm and are converted into a varying electric current, which is then converted to a varying magnetic field by an electromagnet, making a representation of the sound as magnetized areas on a plastic tape with a magnetic coating. Analog sound reproduction is the reverse process, with a larger loudspeaker diaphragm causing changes to atmospheric pressure to form acoustic sound waves. Digital recording and reproduction converts the analog sound signal picked up by the microphone to a digital form by the process of sampling.
This lets the audio data be transmitted by a wider variety of media. Digital recording stores audio as a series of binary numbers representing samples of the amplitude of the audio signal at equal time intervals, at a sample rate high enough to convey all sounds capable of being heard. A digital audio signal must be reconverted to analog form during playback before it is amplified and connected to a loudspeaker to produce sound. Prior to the development of sound recording, there were mechanical systems, such as wind-up music boxes and player pianos, for encoding and reproducing instrumental music. Long before sound was first recorded, music was recorded – first by written music notation, later also by mechanical devices. Automatic music reproduction traces back as far as the 9th century, when the Banū Mūsā brothers invented the earliest known mechanical musical instrument, in this case a hydropowered organ that played interchangeable cylinders. According to Charles B. Fowler, this "...cylinder with raised pins on the surface remained the basic device to produce and reproduce music mechanically until the second half of the nineteenth century."
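The sample-and-quantize step described above can be illustrated with a short Python sketch. This is an idealized toy, not any real recorder's signal path; the function name and parameters are invented for the example. An "analog" signal (here an ideal sine tone) is measured at equal time intervals and each measurement is rounded to a signed binary integer, as a digital recorder does before storage or transmission.

```python
import math

def record_digitally(freq_hz, duration_s, sample_rate=8000, bits=16):
    """Sample an ideal sine at equal intervals and quantize each sample
    to a signed integer with the given bit depth (e.g. 16-bit audio)."""
    full_scale = 2 ** (bits - 1) - 1       # 32767 for 16-bit samples
    n = int(duration_s * sample_rate)
    return [round(full_scale * math.sin(2 * math.pi * freq_hz * k / sample_rate))
            for k in range(n)]

samples = record_digitally(1000, 0.001)    # 1 kHz tone, 1 ms -> 8 samples
```

Playback reverses the process: the integer samples are converted back to a smoothly varying voltage that drives the loudspeaker.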
The Banū Mūsā brothers also invented an automatic flute player, which appears to have been the first programmable machine. Carvings in Rosslyn Chapel from the 1560s may represent an early attempt to record the Chladni patterns produced by sound in stone representations, although this theory has not been conclusively proved. In the 14th century, a mechanical bell-ringer controlled by a rotating cylinder was introduced in Flanders. Similar designs appeared in barrel organs, musical clocks, barrel pianos, and music boxes. A music box is an automatic musical instrument that produces sounds by the use of a set of pins placed on a revolving cylinder or disc so as to pluck the tuned teeth of a steel comb. The fairground organ, developed in 1892, used a system of accordion-folded punched cardboard books. The player piano, first demonstrated in 1876, used a punched paper scroll that could store a long piece of music; the most sophisticated of the piano rolls were hand-played, meaning that the roll represented the actual performance of an individual, not just a transcription of the sheet music.
The technology to record a live performance onto a piano roll was not developed until 1904. Piano rolls were in continuous mass production from 1896 to 2008. A 1908 U.S. Supreme Court copyright case noted that, in 1902 alone, there were between 70,000 and 75,000 player pianos manufactured and between 1,000,000 and 1,500,000 piano rolls produced. The first device that could record actual sounds as they passed through the air was the phonautograph, patented in 1857 by Parisian inventor Édouard-Léon Scott de Martinville. The earliest known recordings of the human voice are phonautograph recordings, called phonautograms, made in 1857; they consist of sheets of paper with sound-wave-modulated white lines created by a vibrating stylus that cut through a coating of soot as the paper was passed under it. An 1860 phonautogram of Au Clair de la Lune, a French folk song, was played back as sound for the first time in 2008 by scanning it and using software to convert the undulating line, which graphically encoded the sound, into a corresponding digital audio file.
On April 30, 1877, French poet, humorous writer and inventor Charles Cros submitted a sealed envelope containing a letter to the Academy of Sciences in Paris explaining his proposed method, called the paleophone. Though no trace of a working paleophone was ever found, Cros is remembered as the earliest inventor of a sound recording and reproduction machine. The first practical sound recording and reproduction device was the mechanical phonograph cylinder, invented by Thomas Edison in 1877 and patented in 1878. The invention soon spread across the globe, and over the next two decades the commercial recording and sale of sound recordings became a growing new international industry, with the most popular titles selling millions of units by the early 1900s. The development of mass-production techniques enabled cylinder recordings to become a major new consumer item in industrial countries, and the cylinder was the main consumer format from the late 1880s until around 1910. The next major technical development was the invention of the gramophone record, credited to Emile Berliner and patented in 1887, though others had demonstrated simi
A frequency band is an interval in the frequency domain, delimited by a lower frequency and an upper frequency. The term may also refer to an interval of some other spectrum. The frequency range of a system is the range over which it is considered to provide satisfactory performance, such as a useful level of signal with acceptable distortion characteristics. A listing of the upper and lower frequency limits of a system is not useful without a criterion for what the range represents. Many systems are characterized by the range of frequencies to which they respond. Musical instruments produce different ranges of notes within the hearing range. The electromagnetic spectrum can be divided into many different ranges, such as visible light, infrared or ultraviolet radiation, radio waves, X-rays and so on, and each of these ranges can in turn be divided into smaller ranges. A radio communications signal must occupy a range of frequencies carrying most of its energy, called its bandwidth. A frequency band may be subdivided into many smaller bands.
Allocation of radio frequency ranges to different uses is a major function of radio spectrum allocation.
Signal processing is a subfield of mathematics and electrical engineering that concerns the analysis and modification of signals, which are broadly defined as functions conveying "information about the behavior or attributes of some phenomenon", such as sound and biological measurements. For example, signal processing techniques are used to improve signal transmission fidelity, storage efficiency, and subjective quality, and to emphasize or detect components of interest in a measured signal. According to Alan V. Oppenheim and Ronald W. Schafer, the principles of signal processing can be found in the classical numerical analysis techniques of the 17th century. Oppenheim and Schafer further state that the digital refinement of these techniques can be found in the digital control systems of the 1940s and 1950s. Analog signal processing is for signals that have not been digitized, as in legacy radio, telephone and television systems; it involves linear electronic circuits as well as non-linear ones. The former include, for instance, passive filters, active filters, additive mixers and delay lines.
Non-linear circuits include compandors, voltage-controlled filters, voltage-controlled oscillators and phase-locked loops. Continuous-time signal processing is for signals that vary continuously in time; its methods span the time domain, frequency domain, and complex frequency domain. This technology covers the modeling of linear time-invariant continuous systems, the integral of a system's zero-state response, setting up the system function, and the continuous-time filtering of deterministic signals. Discrete-time signal processing is for sampled signals, defined only at discrete points in time and, as such, quantized in time but not in magnitude. Analog discrete-time signal processing is a technology based on electronic devices such as sample-and-hold circuits, analog time-division multiplexers, analog delay lines and analog feedback shift registers; this technology was a predecessor of digital signal processing and is still used in advanced processing of gigahertz signals. The concept of discrete-time signal processing also refers to a theoretical discipline that establishes a mathematical basis for digital signal processing, without taking quantization error into consideration.
Digital signal processing is the processing of digitized, discrete-time sampled signals. Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays, or specialized digital signal processors. Typical arithmetical operations include fixed-point and floating-point, real-valued and complex-valued, multiplication and addition. Other typical operations supported by the hardware are circular buffers and lookup tables. Examples of algorithms are the fast Fourier transform (FFT), the finite impulse response (FIR) filter, the infinite impulse response (IIR) filter, and adaptive filters such as the Wiener and Kalman filters. Nonlinear signal processing involves the analysis and processing of signals produced from nonlinear systems and can be in the time, frequency, or spatio-temporal domains. Nonlinear systems can produce complex behaviors including bifurcations, chaos and subharmonics which cannot be produced or analyzed using linear methods. Statistical signal processing is an approach which treats signals as stochastic processes, utilizing their statistical properties to perform signal processing tasks.
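One of the algorithms named above, the finite impulse response filter, reduces to a short direct-convolution loop. This Python sketch is illustrative only (not drawn from any particular DSP library); it applies a 4-tap moving-average filter, a simple FIR low-pass, to an impulse.

```python
def fir_filter(x, taps):
    """Direct-form FIR filter: y[n] = sum_k taps[k] * x[n-k];
    samples before n = 0 are taken as zero."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, h in enumerate(taps):
            if n - k >= 0:
                acc += h * x[n - k]
        y.append(acc)
    return y

# 4-tap moving average: smooths the signal, attenuating high frequencies.
smoothed = fir_filter([0, 0, 4, 0, 0, 0], [0.25, 0.25, 0.25, 0.25])
# The single impulse of height 4 is spread into four samples of height 1.
```

In practice this inner loop is exactly the multiply-accumulate operation that the specialized DSP hardware mentioned above is built to perform efficiently, often in combination with circular buffers for the delayed samples.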
Statistical techniques are used in many signal processing applications. For example, one can model the probability distribution of the noise incurred when photographing an image and construct techniques based on this model to reduce the noise in the resulting image. Application fields include:

Audio signal processing – for electrical signals representing sound, such as speech or music
Speech signal processing – for processing and interpreting spoken words
Image processing – in digital cameras and various imaging systems
Video processing – for interpreting moving pictures
Wireless communication – waveform generation, filtering, equalization
Control systems
Array processing – for processing signals from arrays of sensors
Process control – a variety of signals are used, including the industry-standard 4-20 mA current loop
Seismology
Financial signal processing – analyzing financial data using signal processing techniques for prediction purposes

Typical goals include:

Feature extraction, such as image understanding and speech recognition
Quality improvement, such as noise reduction, image enhancement, and echo cancellation
Compression – including audio compression, image compression, and video compression
Genomics – genomic signal processing

In communication systems, signal processing may occur at OSI layer 1 of the seven-layer OSI model, the physical layer. Typical devices include:

Filters – for example analog or digital filters
Samplers and analog-to-digital converters for signal acquisition and reconstruction, which involves measuring a physical signal, storing or transferring it as a digital signal, and later rebuilding the original signal or an approximation thereof
Signal compressors
Digital signal processors

Mathematical methods applied include:

Differential equations
Recurrence relations
Transform theory
Time-frequency analysis – for processing non-stationary signals
Spectral estimation – for determining the spectral content of a