1.
Spectrograph
–
A spectrograph is an instrument that separates light into a frequency spectrum and records the signal using a camera. Several kinds of machines are referred to as spectrographs, depending on the nature of the waves. The term was first used in July 1876 by Dr. Henry Draper when he invented the earliest version of this device, which was cumbersome to use and difficult to manage. One way to define a spectrograph is as a device that separates light by its wavelength; a spectrograph typically has a multi-channel detector system or imaging system that records the spectrum of light. The first spectrographs used photographic paper as the detector; the stellar spectral classification and the discovery of the main sequence, Hubble's law and the Hubble sequence were all made with spectrographs that used photographic paper. The plant pigment phytochrome was discovered using a spectrograph that used living plants as the detector. More recent spectrographs use electronic detectors, such as CCDs, which can be used for both visible and UV light. The exact choice of detector depends on the wavelengths of light to be recorded; the forthcoming James Webb Space Telescope will contain both a near-infrared spectrograph and a mid-infrared spectrometer. An echelle spectrograph uses two diffraction gratings, rotated 90 degrees with respect to each other and placed close to one another, so a point of light rather than a slit is used at the entrance. The small chip also means that the collimating optics need not be optimized for coma or astigmatism, and the spherical aberration can be set to zero.
Spectrograph
–
The KMOS spectrograph.
Spectrograph
–
Horizontal Solar Spectrograph at the Czech Astronomical Institute in Ondřejov, Czech Republic
2.
Spectral density
–
The power spectrum S_xx of a time series x(t) describes the distribution of power among the frequency components composing that signal. According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies; the statistical average of a certain signal, or sort of signal, analyzed in terms of its frequency content is called its spectrum. When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the energy spectral density. More commonly used is the power spectral density, which applies to signals existing over all time; it refers to the energy distribution that would be found per unit time. Summation or integration of the spectral components yields the total power or variance, identical to what would be obtained by integrating x²(t) over the time domain. The spectrum of a physical process x often contains essential information about the nature of x. For instance, the pitch and timbre of an instrument are immediately determined from a spectral analysis, and the color of a light source is determined by the spectrum of the electromagnetic wave's electric field E as it fluctuates at an extremely high frequency. Obtaining a spectrum from time series such as these involves the Fourier transform; this article concentrates on situations in which the time series is known or directly measured. The power spectrum is important in signal processing and in the statistical study of stochastic processes, as well as in many other branches of physics. Typically the process is a function of time, but one can similarly discuss data in a spatial domain decomposed in terms of spatial frequency. Any signal that can be represented as an amplitude that varies in time has a frequency spectrum; this includes familiar entities such as light, musical notes, and radio/TV signals.
When these signals are viewed in the form of a frequency spectrum, the spectrum may include a distinct peak corresponding to a sine-wave component, and additional peaks corresponding to harmonics of that fundamental. In physics, the signal might be a wave, such as an electromagnetic wave; the power spectral density of the signal describes the power present in the signal as a function of frequency, per unit frequency, and is expressed in watts per hertz. When a signal is defined in terms only of a voltage, for instance, power is simply reckoned in terms of the square of the signal, as this is always proportional to the actual power delivered by that signal into a given impedance.
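The power-spectral-density idea described above can be sketched as a simple one-sided periodogram; the function below is illustrative only (numpy-based, with an assumed one-second 50 Hz test signal), not a production spectral estimator:

```python
import numpy as np

def power_spectral_density(x, fs):
    """One-sided power spectral density estimate (periodogram) of a real signal.

    x  : array of samples
    fs : sampling rate in Hz
    Returns (frequencies in Hz, PSD in units^2 per Hz).
    """
    n = len(x)
    spectrum = np.fft.rfft(x)
    # Scale |X(f)|^2 so that integrating the PSD over frequency
    # recovers the mean power of the signal.
    psd = (np.abs(spectrum) ** 2) / (fs * n)
    psd[1:-1] *= 2  # fold negative frequencies into the one-sided estimate
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd

# A 50 Hz sine sampled at 1 kHz for one second: the spectrum shows a
# single distinct peak at 50 Hz, as described above.
fs = 1000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 50 * t)
freqs, psd = power_spectral_density(x, fs)
peak = freqs[np.argmax(psd)]
```

Summing the PSD times the bin width (fs/n = 1 Hz here) recovers the sine's mean power of 0.5, the same total that integrating x²(t) over the time domain would give.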
Spectral density
–
The spectral density of a
fluorescent light as a function of optical wavelength shows peaks at atomic transitions, indicated by the numbered arrows.
3.
Frequencies
–
Frequency is the number of occurrences of a repeating event per unit time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency. The period is the duration of one cycle in a repeating event; for example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period, the time interval between beats, is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, and radio waves, and of cyclical processes such as rotation, oscillations, or waves. In physics and engineering disciplines, such as optics, acoustics, and radio, frequency is usually denoted by a Latin letter f or by the Greek letter ν (nu). For a simple periodic motion, the relation between the frequency and the period T is given by f = 1/T. The SI unit of frequency is the hertz, named after the German physicist Heinrich Hertz; a previous name for this unit was cycles per second. The SI unit for period is the second; a traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by period, while short and fast waves, like audio and radio, are usually described by their frequency instead of period. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes: for y = sin(θ) = sin(kx), the derivative dθ/dx = k is the wavenumber; in the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength. Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ. 
When waves from a monochromatic source travel from one medium to another, their frequency remains the same; only their wavelength and speed change. Frequency can be measured by counting occurrences within a fixed interval: for example, if 71 events occur within 15 seconds, the frequency is 71/15 ≈ 4.7 Hz. The latter method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2 Tm), or a fractional error of Δf/f = 1/(2 f Tm), where Tm is the timing interval. This error decreases with frequency, so it is a problem at low frequencies, where the number of counts N is small. An older method of measuring the frequency of rotating or vibrating objects is to use a stroboscope.
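The relations above (f = 1/T, f = v/λ, the counting estimate, and the gating error) can be illustrated with a short, self-contained sketch; the function names are mine, not any standard API:

```python
def frequency_from_period(T):
    # f = 1/T: a newborn's heart beating 120 times a minute has
    # a period of 0.5 s, hence a frequency of 2 Hz.
    return 1.0 / T

def frequency_from_wavelength(v, lam):
    # f = v / lambda; for electromagnetic waves in a vacuum, v = c.
    return v / lam

def counting_estimate(events, interval_s):
    # Direct counting: f = N / Tm, e.g. 71 events in 15 s -> about 4.7 Hz.
    return events / interval_s

def gating_fractional_error(f, interval_s):
    # Fractional gating error Δf/f = 1/(2 f Tm): largest at low
    # frequencies, where the count N over the gate time is small.
    return 1.0 / (2.0 * f * interval_s)
```

For instance, `gating_fractional_error(10.0, 1.0)` is 5%, while at 1 kHz over the same one-second gate it drops to 0.05%, matching the observation that gating error matters mainly at low frequencies.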
Frequencies
–
A resonant-reed frequency meter, an obsolete device used from about 1900 to the 1940s for measuring the frequency of alternating current. It consists of a strip of metal with reeds of graduated lengths, vibrated by an
electromagnet. When the unknown frequency is applied to the electromagnet, the reed which is
resonant at that frequency will vibrate with large amplitude, visible next to the scale.
Frequencies
–
As time elapses – represented here as a movement from left to right, i.e. horizontally – the five
sinusoidal waves shown vary regularly (i.e. cycle), but at different
rates. The red
wave (top) has the lowest frequency (i.e. varies at the slowest rate) while the purple wave (bottom) has the highest frequency (varies at the fastest rate).
Frequencies
–
Modern frequency counter
4.
Sound
–
In physics, sound is a vibration that propagates as a typically audible mechanical wave of pressure and displacement through a transmission medium such as air or water. In physiology and psychology, sound is the reception of such waves and their perception by the brain; humans can hear sound waves with frequencies between about 20 Hz and 20 kHz. Sound above 20 kHz is ultrasound and sound below 20 Hz is infrasound; other animals have different hearing ranges. Acoustics is the science that deals with the study of mechanical waves in gases, liquids, and solids, including vibration, sound, and ultrasound. A scientist who works in the field of acoustics is an acoustician; an audio engineer, on the other hand, is concerned with the recording, manipulation, mixing, and reproduction of sound. Sound can propagate through a medium such as air, water, or solids as longitudinal waves, and also as transverse waves in solids. Sound waves are generated by a source, such as the vibrating diaphragm of a stereo speaker: the source creates vibrations in the surrounding medium, and as the source continues to vibrate the medium, the vibrations propagate away from the source at the speed of sound, thus forming the sound wave. At a fixed distance from the source, the pressure, velocity, and displacement of the medium vary in time; at an instant in time, the pressure, velocity, and displacement vary in space. Note that the particles of the medium do not travel with the sound wave; this is intuitively obvious for a solid, and the same is true for liquids and gases. During propagation, waves can be reflected, refracted, or attenuated by the medium. The behavior of sound propagation is generally affected by three things, the first being a complex relationship between the density and pressure of the medium. 
This relationship, affected by temperature, determines the speed of sound within the medium. If the medium is moving, this movement may increase or decrease the absolute speed of the sound wave depending on the direction of the movement: sound moving through wind will have its speed of propagation increased by the speed of the wind if the sound and wind are moving in the same direction, and decreased by the speed of the wind if they are moving in opposite directions. Finally, the viscosity of the medium determines the rate at which sound is attenuated; for many media, such as air or water, attenuation due to viscosity is negligible. When sound is moving through a medium that does not have constant physical properties, it may be refracted. The mechanical vibrations that can be interpreted as sound can travel through all forms of matter: gases, liquids, solids, and plasmas. The matter that supports the sound is called the medium; sound cannot travel through a vacuum. Sound is transmitted through gases, plasma, and liquids as longitudinal waves, and it requires a medium to propagate.
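As a rough illustration of how a moving medium shifts propagation speed, and of the hearing-range boundaries mentioned above, here is a minimal sketch (numeric values are illustrative, and the helper names are my own):

```python
def effective_sound_speed(c_medium, wind_speed, same_direction=True):
    """Speed of a sound wave over the ground when the medium itself moves.

    Wind along the direction of travel adds to the propagation speed;
    wind against it subtracts. Speeds are in m/s (e.g. ~343 m/s in air).
    """
    if same_direction:
        return c_medium + wind_speed
    return c_medium - wind_speed

def classify_by_frequency(f_hz):
    # Human hearing spans roughly 20 Hz to 20 kHz; outside that range
    # the vibration is called infrasound or ultrasound.
    if f_hz < 20:
        return "infrasound"
    if f_hz > 20_000:
        return "ultrasound"
    return "audible"
```

So a 343 m/s sound wave carried by a 10 m/s tailwind covers ground at 353 m/s, while against the same wind it covers ground at only 333 m/s.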
Sound
–
A
drum produces sound via a vibrating membrane.
Sound
–
Audio engineers in R&D design audio equipment
Sound
–
U.S. Navy
F/A-18 approaching the sound barrier. The white halo is formed by condensed water droplets thought to result from a drop in air pressure around the aircraft (see
Prandtl-Glauert Singularity).
Sound
–
Human ear
5.
Animal communication
–
Animal communication is the transfer of information from one animal or group of animals to one or more other animals that affects the current or future behaviour of the receivers. Information may be sent intentionally, as in a courtship display, or unintentionally, and may be transferred to an audience of several receivers. Animal communication is a rapidly growing area of study in disciplines including animal behaviour, sociobiology, neurobiology, and animal cognition, and many aspects of behaviour, such as symbolic name use, emotional expression, and learning, are being re-examined in its light. When the information from a sender changes the behaviour of a receiver, the information is referred to as a signal. Signalling theory predicts that for a signal to be maintained in the population, the signal production of senders and the perception and subsequent response of receivers must coevolve. Signals often involve multiple mechanisms, e.g. both visual and auditory, and for a signal to be understood, the behaviour of both sender and receiver require careful study. A notable example is the presentation of a parent herring gull's bill to its chick as a signal for feeding: like many gulls, the herring gull has a brightly coloured bill, yellow with a red spot on the lower mandible near the tip, and the complete signal therefore involves this distinctive feature, the red-spotted bill. While all primates use some form of gesture, Frans de Waal tested the hypothesis that gestures evolve into language by studying the gestures of bonobos and chimps. Facial expression: facial gestures play an important role in animal communication; often a facial gesture is a signal of emotion. Dogs, for example, express anger through snarling and showing their teeth; in alarm their ears perk up; in fear the ears flatten while the dogs expose their teeth slightly and squint their eyes. 
Gaze following: social animals coordinate their communication by monitoring each other's head and eye orientation. Such behaviour has long been recognized as an important component of communication during human development, and gaze following has recently received much attention in animals, e.g. animals repositioning themselves to follow a gaze cue when faced with a barrier blocking their view. Color change: color change can be separated into changes that occur during growth and development and those triggered by mood, social context, or abiotic factors such as temperature; the latter are seen in many taxa. Some cephalopods, such as the octopus and the cuttlefish, have specialized skin cells that can change the apparent colour, opacity, and reflectiveness of their skin. In addition to their use for camouflage, rapid changes in skin color are used while hunting. Cuttlefish may display two different signals simultaneously from opposite sides of their body; when a male cuttlefish courts a female in the presence of other males, he displays a male pattern facing the female.
Animal communication
–
A lamb investigates a
rabbit, an example of interspecific communication using body posture and olfaction.
Animal communication
–
Bird calls can serve as alarms or keep members of a
flock in contact, while the longer and more complex
bird songs are associated with
courtship and mating.
Animal communication
–
This
Chihuahua is baring his teeth to signify an attack is imminent if the photographer comes closer to take his bone.
Animal communication
–
The apparently excessive eye-spot signalling by the male peacock tail may be
runaway selection
6.
Music
–
Music is an art form and cultural activity whose medium is sound organized in time. The common elements of music are pitch, rhythm, and dynamics; different styles or types of music may emphasize, de-emphasize, or omit some of these elements. Ancient Greek and Indian philosophers defined music as tones ordered horizontally as melodies and vertically as harmonies. Common sayings such as "the harmony of the spheres" and "it is music to my ears" point to the notion that music is often ordered and pleasant to listen to. However, the 20th-century composer John Cage thought that any sound can be music, saying, for example, "There is no noise, only sound." The creation, performance, significance, and even the definition of music vary according to culture and social context. There are many types of music, including popular music, traditional music, art music, and music written for religious ceremonies, and it can be hard to draw lines between them, for example between some early 1980s hard rock and heavy metal. Within the arts, music may be classified as a performing art, a fine art, or an auditory art. People may make music as a hobby, like a teen playing cello in a youth orchestra. The word derives from Greek μουσική: according to the Online Etymological Dictionary, "music" comes from the mid-13th-century musike, from Old French musique and directly from Latin musica, "the art of music", which is in turn derived from the Greek mousike, "(art) of the Muses", from the feminine of mousikos, "pertaining to the Muses", from Mousa, "Muse". In classical Greece, it meant any art in which the Muses presided. Music is composed and performed for many purposes, ranging from aesthetic pleasure to religious or ceremonial purposes, or as an entertainment product for the marketplace. With the advent of sound recording, records of popular songs became widespread. Some music lovers create mix tapes of their favourite songs, which serve as a self-portrait and an environment consisting solely of what is most ardently loved.
Amateur musicians can compose or perform music for their own pleasure and derive their income elsewhere, while professional musicians sometimes work as freelancers or session musicians, seeking contracts and engagements in a variety of settings. There are often many links between amateur and professional musicians: beginning amateur musicians take lessons with professional musicians, and in community settings advanced amateur musicians perform with professionals in ensembles such as community concert bands and community orchestras. There are many cases where a live performance in front of an audience is also recorded and distributed; live concert recordings are popular in classical music and in popular music forms such as rock, where illegally taped live concerts are prized by music lovers.
Music
–
A painting on an Ancient Greek vase depicts a music lesson (c. 510 BC).
Music
–
Jean-Gabriel Ferlan performing at a 2008 concert at the collège-lycée Saint-François Xavier
Music
–
The composer
Michel Richard Delalande, pen in hand.
Music
–
Funk places most of its emphasis on rhythm and
groove, with entire songs based around a
vamp on a single chord. Pictured are the influential funk musicians
George Clinton and
Parliament Funkadelic in 2006.
7.
Sonar
–
Sonar is a technique that uses sound propagation to navigate, communicate with, or detect objects on or under the surface of the water, such as other vessels. Two types of technology share the name sonar: passive sonar is essentially listening for the sound made by vessels; active sonar is emitting pulses of sound and listening for echoes. Sonar may be used as a means of acoustic location and of measurement of the echo characteristics of targets in the water. Acoustic location in air was used before the introduction of radar; sonar may also be used in air for robot navigation, and SODAR is used for atmospheric investigations. The term sonar is also used for the equipment used to generate and receive the sound. The acoustic frequencies used in sonar systems vary from very low to extremely high. The study of underwater sound is known as underwater acoustics or hydroacoustics. In the 19th century an underwater bell was used as an ancillary to lighthouses to provide warning of hazards. The use of sound to locate objects underwater, in the same way as bats use sound for aerial navigation, seems to have been prompted by the Titanic disaster of 1912. By 1914, Reginald Fessenden's system had been tested from the U.S. Revenue Cutter Miami on the Grand Banks off Newfoundland, Canada; in that test, Fessenden demonstrated depth sounding, underwater communications, and echo ranging. The so-called Fessenden oscillator, at ca. 500 Hz frequency, was unable to determine the bearing of the target due to its 3-metre wavelength. The ten Montreal-built British H-class submarines launched in 1915 were equipped with a Fessenden oscillator. During World War I the need to detect submarines prompted more research into the use of sound. Piezoelectric and magnetostrictive transducers later superseded the electrostatic transducers of early systems; lightweight sound-sensitive plastic film and fibre optics have since been used for hydrophones, while Terfenol-D and PMN have been developed for projectors. By 1918, both France and Britain had built prototype active systems; the British tested their ASDIC on HMS Antrim in 1920 and started production in 1922. 
The 6th Destroyer Flotilla had ASDIC-equipped vessels in 1923; an anti-submarine school, HMS Osprey, and a training flotilla of four vessels were established on Portland in 1924. The US Sonar QB set arrived in 1931. By the outbreak of World War II, the Royal Navy had five sets for different surface ship classes, and others for submarines, incorporated into a complete anti-submarine attack system. The effectiveness of early ASDIC was hamstrung by the use of the depth charge as an anti-submarine weapon: this required a vessel to pass over a submerged contact before dropping charges over the stern, so the hunter was effectively firing blind, during which time a submarine commander could take evasive action.
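Active sonar's echo ranging reduces to halving the round-trip travel time. A minimal sketch, assuming a nominal 1500 m/s speed of sound in seawater, also reproduces the 3 m wavelength of the ~500 Hz Fessenden oscillator mentioned above:

```python
def echo_range(round_trip_s, sound_speed=1500.0):
    """Active-sonar range estimate in metres.

    The pulse travels out to the target and back, so the one-way
    distance is half the round-trip time multiplied by the speed of
    sound (1500 m/s is an assumed nominal value for seawater).
    """
    return sound_speed * round_trip_s / 2.0

def wavelength(speed, frequency):
    # lambda = v / f: a ~500 Hz oscillator in ~1500 m/s seawater gives
    # the 3 m wavelength that made bearing determination impossible.
    return speed / frequency
```

An echo returning 2 seconds after the ping, for example, puts the target roughly 1500 m away under these assumptions.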
Sonar
–
French
F70 type frigates (here,
La Motte-Picquet) are fitted with VDS (Variable Depth Sonar) type DUBV43 or DUBV43C towed sonars
Sonar
–
Sonar image of
shipwreck of the
Latvian Naval Forces ship Virsaitis in
Estonian waters.
Sonar
–
Variable Depth Sonar and its winch
Sonar
–
AN/AQS-13 Dipping sonar deployed from an
H-3 Sea King.
8.
Radar
–
Radar is an object-detection system that uses radio waves to determine the range, angle, or velocity of objects. It can be used to detect aircraft, ships, spacecraft, guided missiles, motor vehicles, weather formations, and terrain. Radio waves from the transmitter reflect off the object and return to the receiver, giving information about the object's location and speed. Radar was developed secretly for military use by several nations in the period before and during World War II. The term RADAR was coined in 1940 by the United States Navy as an acronym for RAdio Detection And Ranging, or RAdio Direction And Ranging, and has since entered English and other languages as a common noun. High-tech radar systems are associated with digital signal processing and machine learning, and are capable of extracting useful information from very high noise levels. Other systems similar to radar make use of other parts of the electromagnetic spectrum; one example is lidar, which uses ultraviolet, visible, or near-infrared light from lasers rather than radio waves. As early as 1886, the German physicist Heinrich Hertz showed that radio waves could be reflected from solid objects. In 1895, Alexander Popov, an instructor at the Imperial Russian Navy school in Kronstadt, developed an apparatus for detecting distant lightning strikes; the next year, he added a spark-gap transmitter. In 1897, while testing this equipment for communicating between two ships in the Baltic Sea, he took note of an interference beat caused by the passage of a third vessel, and in his report Popov wrote that this phenomenon might be used for detecting objects. The German inventor Christian Hülsmeyer was the first to use radio waves to detect the presence of distant metallic objects. In 1904, he demonstrated the feasibility of detecting a ship in dense fog; he obtained a patent for his detection device in April 1904 and later a patent for a related amendment for estimating the distance to the ship. He also obtained a British patent on 23 September 1904 for a radar system. 
It operated on a 50 cm wavelength and the radar signal was created via a spark-gap. In 1915, Robert Watson-Watt used radio technology to provide advance warning of thunderstorms to airmen, and he became an expert on the use of radio direction finding as part of his lightning experiments. As part of ongoing experiments, he asked the "new boy", Arnold Frederic Wilkins, to survey available receivers; Wilkins made an extensive study of available units before selecting a receiver model from the General Post Office. Its instruction manual noted that there was "fading" when aircraft flew by. In 1922, A. Hoyt Taylor and Leo C. Young, having observed that a ship crossing the path of a radio transmission disturbed the signal, submitted a report suggesting that this might be used to detect the presence of ships in low visibility; eight years later, Lawrence A. Hyland observed a similar effect with passing aircraft. Australia, Canada, New Zealand, and South Africa followed prewar Great Britain's radar development, and Hungary had similar developments during the war. In France, a team including M. Hugon began developing a radio obstacle-detection apparatus, a part of which was installed on the liner Normandie in 1935.
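Radar ranging works the same way as sonar echo ranging but at the speed of light; a minimal sketch (the function names are mine):

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def radar_range(round_trip_s):
    # The echo covers the transmitter-target distance twice,
    # so the range is R = c * t / 2.
    return C * round_trip_s / 2.0

def round_trip_time(range_m):
    # Inverse relation: t = 2R / c, useful for sizing the interval
    # during which the receiver listens for an echo.
    return 2.0 * range_m / C
```

A one-millisecond round trip, for instance, corresponds to a target roughly 150 km away, which is why long-range radars listen for comparatively long intervals between pulses.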
Radar
–
Long-range radar
antenna, used to track space objects and ballistic missiles.
Radar
–
Radar of the type used for detection of aircraft. It rotates steadily, sweeping the airspace with a narrow beam.
Radar
–
Experimental radar antenna, US
Naval Research Laboratory, Anacostia, D. C., late 1930s
Radar
–
A
Chain Home tower in Great Baddow, United Kingdom
9.
Seismology
–
Seismology is the scientific study of earthquakes and the propagation of elastic waves through the Earth or through other planet-like bodies. A related field that uses geology to infer information regarding past earthquakes is paleoseismology. A recording of earth motion as a function of time is called a seismogram, and a seismologist is a scientist who does research in seismology. Scholarly interest in earthquakes can be traced back to antiquity: early speculations on the causes of earthquakes were included in the writings of Thales of Miletus, Anaximenes of Miletus, and Aristotle. In 132 CE, Zhang Heng of China's Han dynasty designed the first known seismoscope. In 1664, Athanasius Kircher argued that earthquakes were caused by the movement of fire within a system of channels inside the Earth, and in 1703, Martin Lister and Nicolas Lemery proposed that earthquakes were caused by chemical explosions within the earth. The Lisbon earthquake of 1755, coinciding with the general flowering of science in Europe, set in motion intensified scientific attempts to understand the behaviour and causation of earthquakes. The earliest responses include work by John Bevis and John Michell; Michell determined that earthquakes originate within the Earth and were waves of movement caused by shifting masses of rock miles below the surface. From 1857, Robert Mallet laid the foundation of instrumental seismology; he is also responsible for coining the word seismology. In 1897, Emil Wiechert's theoretical calculations led him to conclude that the Earth's interior consists of a mantle of silicates surrounding a core of iron. In 1906 Richard Dixon Oldham identified the separate arrival of P-waves, S-waves and surface waves on seismograms, and in 1910, after studying the 1906 San Francisco earthquake, Harry Fielding Reid put forward the elastic rebound theory, which remains the foundation for modern tectonic studies. 
The development of this theory depended on the considerable progress of earlier independent streams of work on the behaviour of elastic materials. In 1926, Harold Jeffreys was the first to claim, based on his study of earthquake waves, that below the mantle the core of the Earth is liquid. In 1937, Inge Lehmann determined that within the liquid outer core there is a solid inner core. By the 1960s, earth science had developed to the point where a comprehensive theory of the causation of seismic events had come together in the now well-established theory of plate tectonics. Seismic waves are elastic waves that propagate in solid or fluid materials. There are two types of body waves: pressure waves or primary waves (P-waves) and shear or secondary waves (S-waves). S-waves are transverse waves that move perpendicular to the direction of propagation and travel more slowly than P-waves; therefore, they appear later than P-waves on a seismogram. Fluids cannot support perpendicular motion, so S-waves travel only in solids. The two main surface-wave types are Rayleigh waves, which have some compressional motion, and Love waves, which do not. Rayleigh waves result from the interaction of vertically polarized P- and S-waves that satisfy the boundary conditions on the surface.
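Because S-waves arrive later than P-waves, the S-minus-P delay on a seismogram gives a classic single-station distance estimate. A minimal sketch, with assumed nominal velocities of vp = 8 km/s and vs = 4.5 km/s (real profiles vary with depth and region):

```python
def epicentral_distance(sp_delay_s, vp=8.0, vs=4.5):
    """Distance to an earthquake from the S-minus-P arrival delay.

    P-waves outrun S-waves, so the gap between the two arrivals grows
    with distance: d = dt / (1/vs - 1/vp). Velocities are assumed
    nominal values in km/s; the result is in km.
    """
    return sp_delay_s / (1.0 / vs - 1.0 / vp)
```

With these assumed velocities, a 10-second S-minus-P delay corresponds to an event roughly 100 km away; three such distances from different stations locate the epicentre by triangulation.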
Seismology
–
Seismic velocities and boundaries in the interior of the
Earth sampled by seismic waves
Seismology
–
Seismogram records showing the three components of ground motion. The red line marks the first arrival of P-waves; the green line, the later arrival of S-waves.
10.
Time
–
Time is the indefinite continued progress of existence and events that occur in apparently irreversible succession from the past through the present to the future. Time is often referred to as the fourth dimension, along with the three spatial dimensions. Time has long been an important subject of study in religion, philosophy, and science; diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems. Two contrasting viewpoints on time divide prominent philosophers. One view is that time is part of the fundamental structure of the universe, a dimension independent of events, in which events occur in sequence; Isaac Newton subscribed to this realist view, and hence it is referred to as Newtonian time. The second view, in the tradition of Gottfried Leibniz and Immanuel Kant, holds that time is neither an event nor a thing, but part of a fundamental intellectual structure within which humans sequence and compare events. Time in physics is unambiguously operationally defined as "what a clock reads". Time is one of the seven fundamental physical quantities in both the International System of Units and the International System of Quantities. Time is used to define other quantities, such as velocity, so defining time in terms of such quantities would result in circularity of definition. The operational definition leaves aside the question of whether there is something called time, apart from the counting activity just mentioned, that flows. Investigations of a single continuum called spacetime bring questions about space into questions about time, questions that have their roots in the works of early students of natural philosophy. Furthermore, it may be that there is a subjective component to time. Temporal measurement has long occupied scientists and technologists, and was a motivation in navigation. 
Periodic events and periodic motion have long served as standards for units of time; examples include the apparent motion of the sun across the sky, the phases of the moon, the swing of a pendulum, and the beat of a heart. Currently, the unit of time, the second, is defined by measuring the electronic transition frequency of caesium atoms. Time is also of significant social importance, having economic value ("time is money") as well as personal value, due to an awareness of the limited time in each day. In day-to-day life, the clock is consulted for periods less than a day, whereas the calendar is consulted for periods longer than a day; increasingly, personal electronic devices display both calendars and clocks simultaneously. The number that marks the occurrence of an event as to hour or date is obtained by counting from a fiducial epoch, a central reference point. Artifacts from the Paleolithic suggest that the moon was used to reckon time as early as 6,000 years ago. Lunar calendars were among the first to appear, with years of either 12 or 13 lunar months; without intercalation to add days or months to some years, seasons quickly drift in a calendar based solely on twelve lunar months.
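The caesium definition of the second amounts to counting cycles of a fixed transition frequency; a minimal sketch (the helper name is mine):

```python
# The SI second is defined as 9,192,631,770 periods of the radiation
# corresponding to the hyperfine transition of the caesium-133 atom.
CAESIUM_HZ = 9_192_631_770

def seconds_from_cycles(cycles):
    # A caesium clock counts transition cycles; dividing the count by
    # the defined frequency converts it into elapsed SI seconds.
    return cycles / CAESIUM_HZ
```

Counting exactly 9,192,631,770 cycles therefore marks one elapsed second, and sixty times that count marks one minute.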
Time
–
The flow of
sand in an
hourglass can be used to keep track of elapsed time. It also concretely represents the
present as being between the
past and the
future.
Time
–
Horizontal
sundial in
Taganrog
Time
–
A contemporary
quartz watch
11.
Frequency
–
Frequency is the number of occurrences of a repeating event per unit time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency. The period is the duration of one cycle in a repeating event; for example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period, the time interval between beats, is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, and radio waves, and of cyclical processes such as rotation, oscillations, or waves. In physics and engineering disciplines, such as optics, acoustics, and radio, frequency is usually denoted by a Latin letter f or by the Greek letter ν (nu). For a simple periodic motion, the relation between the frequency and the period T is given by f = 1/T. The SI unit of frequency is the hertz, named after the German physicist Heinrich Hertz; a previous name for this unit was cycles per second. The SI unit for period is the second; a traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by period, while short and fast waves, like audio and radio, are usually described by their frequency instead of period. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes: for y = sin(θ) = sin(kx), the derivative dθ/dx = k is the wavenumber; in the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength. Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ. 
When waves from a monochromatic source travel from one medium to another, their frequency remains the same—only their wavelength and speed change. The frequency of a repeating event can be measured by counting the number of occurrences within a time interval and dividing by the length of that interval; for example, if 71 events occur within 15 seconds the frequency is 71/15 ≈ 4.73 Hz. This counting method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2Tm), or a fractional error of Δf/f = 1/(2fTm), where Tm is the timing interval and f is the measured frequency. This error decreases with frequency, so it is generally a problem at low frequencies where the number of counts N is small. An older method of measuring the frequency of rotating or vibrating objects is to use a stroboscope
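The counting method and its gating error can be sketched in Python; the function names here are illustrative, not from any particular library:

```python
def counted_frequency(num_events, gate_time):
    """Estimate frequency (Hz) by counting events over a fixed gate time (s)."""
    return num_events / gate_time

def gating_error(gate_time):
    """Average error from the zero-to-one-count ambiguity: 1/(2*Tm)."""
    return 0.5 / gate_time

f = counted_frequency(71, 15.0)   # 71 events in 15 s -> about 4.73 Hz
err = gating_error(15.0)          # about 0.033 Hz average error
print(f, err)
```

Note how the absolute error depends only on the gate time, while the fractional error err/f shrinks as the measured frequency grows, matching the observation that gating error matters mainly at low frequencies.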
Frequency
–
A resonant-reed frequency meter, an obsolete device used from about 1900 to the 1940s for measuring the frequency of alternating current. It consists of a strip of metal with reeds of graduated lengths, vibrated by an
electromagnet. When the unknown frequency is applied to the electromagnet, the reed which is
resonant at that frequency will vibrate with large amplitude, visible next to the scale.
Frequency
–
As time elapses – represented here as a movement from left to right, i.e. horizontally – the five
sinusoidal waves shown vary regularly (i.e. cycle), but at different
rates. The red
wave (top) has the lowest frequency (i.e. varies at the slowest rate) while the purple wave (bottom) has the highest frequency (varies at the fastest rate).
Frequency
Frequency
–
Modern frequency counter
12.
Amplitude
–
The amplitude of a periodic variable is a measure of its change over a single period. There are various definitions of amplitude, which are all functions of the magnitude of the differences between the variable's extreme values. In older texts the phase is sometimes called the amplitude. Peak-to-peak amplitude is the change between peak (highest value) and trough (lowest value). With appropriate circuitry, peak-to-peak amplitudes of electric oscillations can be measured by meters or by viewing the waveform on an oscilloscope. Peak-to-peak is a straightforward measurement on an oscilloscope, the peaks of the waveform being easily identified and measured against the graticule. This remains a common way of specifying amplitude, but sometimes other measures of amplitude are more appropriate. Peak amplitude is used in audio system measurements, telecommunications and other areas where the measurand is a signal that swings above and below a reference value but is not sinusoidal. If the reference is zero, this is the maximum absolute value of the signal; if the reference is a mean value (DC component), the peak amplitude is the maximum absolute deviation from that reference. Semi-amplitude means half the peak-to-peak amplitude; some scientists use amplitude or peak amplitude to mean semi-amplitude, that is, half the peak-to-peak amplitude. Semi-amplitude is the most widely used measure of orbital wobble in astronomy. Root mean square (RMS) amplitude is the square root of the mean over time of the square of the signal's deviation from a reference. For complicated waveforms, especially non-repeating signals like noise, the RMS amplitude is usually used because it is both unambiguous and has physical significance. For example, the average power transmitted by an acoustic or electromagnetic wave or by an electrical signal is proportional to the square of the RMS amplitude. For alternating current electric power, the universal practice is to specify RMS values of a sinusoidal waveform. One property of root mean square voltages and currents is that they produce the same heating effect as a direct current in a given resistance. The peak-to-peak value is used, for example, when choosing rectifiers for power supplies, or when estimating the maximum voltage that insulation must withstand. 
Some common voltmeters are calibrated for RMS amplitude, but respond to the average value of a rectified waveform. Many digital voltmeters and all moving coil meters are in this category. The RMS calibration is only correct for a sine wave input, since the ratio between peak, average and RMS values is dependent on waveform; if the wave shape being measured is greatly different from a sine wave, the reading will be in error. True RMS-responding meters were used in radio frequency measurements, where instruments measured the heating effect in a resistor to measure current. The advent of microprocessor-controlled meters capable of calculating RMS by sampling the waveform has made true RMS measurement commonplace
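The different amplitude measures described above can be computed directly from samples; this minimal numpy sketch uses a pure sine, for which RMS = peak/√2:

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)   # 1 s of samples at 1 kHz
x = 3.0 * np.sin(2 * np.pi * 5 * t)           # 5 Hz sine, peak amplitude 3

peak = np.max(np.abs(x))              # peak amplitude
peak_to_peak = np.max(x) - np.min(x)  # peak-to-peak amplitude
rms = np.sqrt(np.mean(x ** 2))        # RMS amplitude; for a sine, peak/sqrt(2)
print(peak, peak_to_peak, rms)
```

For a non-sinusoidal waveform the peak/RMS ratio (crest factor) differs, which is exactly why average-responding meters calibrated for sine waves misread other shapes.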
Amplitude
–
A
sinusoidal curve: 1 = Peak amplitude, 2 = Peak-to-peak amplitude, 3 = Root mean square amplitude, 4 =
Wave period (not an amplitude)
13.
Brightness
–
Brightness is an attribute of visual perception in which a source appears to be radiating or reflecting light. In other words, brightness is the perception elicited by the luminance of a visual target, and it is not necessarily proportional to luminance. It is a subjective attribute/property of an object being observed and one of the color appearance parameters of color appearance models. Brightness refers to an absolute term and should not be confused with lightness. The adjective bright derives from an Old English beorht with the same meaning via metathesis giving Middle English briht. The word is from a Common Germanic *berhtaz, ultimately from a PIE root with a closely related meaning, *bhereg- "white, bright". Brightness was formerly used as a synonym for the photometric term luminance. As defined by the US Federal Glossary of Telecommunication Terms, brightness should now be used only for non-quantitative references to physiological sensations and perceptions of light. A given target luminance can elicit different perceptions of brightness in different contexts; see, for example, White's illusion. With regard to stars, brightness is quantified as apparent magnitude. Brightness is, at least in some respects, the antonym of darkness. The United States Federal Trade Commission has assigned a specific meaning to brightness when applied to lamps. When appearing on light bulb packages, brightness means luminous flux; luminous flux is the total amount of light coming from a source, such as a lighting device. Luminance, the former meaning of brightness, is the amount of light per solid angle coming from an area. The table below shows the ways of indicating the amount of light. The term brightness is also used in discussions of sound timbres. 
Brightness
–
Decreasing brightness with depth (underwater photo as example)
14.
Logarithm
–
In mathematics, the logarithm is the inverse operation to exponentiation. That means the logarithm of a number is the exponent to which another fixed number, the base, must be raised to produce that number. In simple cases the logarithm counts factors in multiplication; for example, the base 10 logarithm of 1000 is 3, as 1000 = 10 × 10 × 10. The logarithm of x to base b, denoted logb(x), is the unique real number y such that b^y = x. For example, log2(64) = 6, as 64 = 2^6. The logarithm to base 10 is called the common logarithm and has many applications in science and engineering. The natural logarithm has the number e as its base; its use is widespread in mathematics and physics. The binary logarithm uses base 2 and is prominent in computer science. Logarithms were introduced by John Napier in the early 17th century as a means to simplify calculations, and they were rapidly adopted by navigators, scientists, engineers, and others to perform computations more easily, using slide rules and logarithm tables. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century. Logarithmic scales reduce wide-ranging quantities to tiny scopes; for example, the decibel is a unit quantifying signal power log-ratios and amplitude log-ratios. In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae, and in measurements of the complexity of algorithms; they describe musical intervals, appear in formulas counting prime numbers, inform some models in psychophysics, and can aid in forensic accounting. In the same way as the logarithm reverses exponentiation, the complex logarithm is the inverse function of the exponential function applied to complex numbers. The discrete logarithm is another variant; it has uses in public-key cryptography. The idea of logarithms is to reverse the operation of exponentiation, that is, raising a number to a power. For example, the third power of 2 is 8, because 8 is the product of three factors of 2: 2^3 = 2 × 2 × 2 = 8. 
It follows that the logarithm of 8 with respect to base 2 is 3, as 2^3 = 8. The third power of some number b is the product of three factors equal to b. More generally, raising b to the n-th power, where n is a natural number, is done by multiplying n factors equal to b. The n-th power of b is written b^n, so that b^n = b × b × ⋯ × b (n factors). Exponentiation may be extended to b^y, where b is a positive number and the exponent y is any real number. For example, b^−1 is the reciprocal of b, that is, 1/b. The logarithm of a positive real number x with respect to base b, a positive real number not equal to 1, is the exponent by which b must be raised to yield x
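The inverse relationship between logarithms and exponentiation can be checked with Python's standard math module, which provides the common, binary, and natural logarithms mentioned above:

```python
import math

print(math.log10(1000))   # common logarithm: 3.0, since 10**3 == 1000
print(math.log2(64))      # binary logarithm: 6.0, since 2**6 == 64
print(math.log(math.e))   # natural logarithm of e is 1.0

# Round trip: the logarithm to base b undoes raising b to a power
b, y = 2, 6
assert math.isclose(math.log(b ** y, b), y)
```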
Logarithm
–
John Napier (1550–1617), the inventor of logarithms
Logarithm
–
The
graph of the logarithm to base 2 crosses the
x axis (horizontal axis) at 1 and passes through the points with
coordinates (2, 1), (4, 2), and (8, 3). For example, log 2 (8) = 3, because 2 3 = 8. The graph gets arbitrarily close to the y axis, but
does not meet or intersect it.
Logarithm
–
The logarithm keys (log for base-10 and ln for base-e) on a typical scientific calculator
Logarithm
–
A
nautilus displaying a logarithmic spiral
15.
Decibel
–
The decibel (dB) is a logarithmic unit used to express the ratio of two values of a physical quantity. One of these values is often a standard reference value, in which case the decibel is used to express the level of the other value relative to this reference. When used in this way, the decibel symbol is often qualified with a suffix that indicates the reference quantity that has been used, or some other property of the quantity being measured. For example, dBm indicates a power level referenced to one milliwatt. There are two different scales used when expressing a ratio in decibels, depending on the nature of the quantities. When expressing power quantities, the number of decibels is ten times the logarithm to base 10 of the ratio of the two power quantities; that is, a change in power by a factor of 10 corresponds to a 10 dB change in level. When expressing field quantities, a change in amplitude by a factor of 10 corresponds to a 20 dB change in level. The difference in scale factors relates to the fact that power is proportional to the square of field amplitude; the decibel scales differ so that comparisons can be made between related power and field quantities when they are expressed in decibels. The definition of the decibel is based on the measurement of power in telephony of the early 20th century in the Bell System in the United States. One decibel is one tenth of one bel, named in honor of Alexander Graham Bell. Today, the decibel is used for a wide variety of measurements in science and engineering, most prominently in acoustics, electronics, and control theory; in electronics, the gains of amplifiers and the attenuation of signals are often expressed in decibels. The decibel originates from methods used to quantify signal loss in telegraph and telephone circuits. The unit for loss was originally Miles of Standard Cable (MSC); the standard telephone cable implied was a cable having uniformly distributed resistance of 88 ohms per loop mile and uniformly distributed shunt capacitance of 0.054 microfarads per mile. 
1 TU (transmission unit) was defined such that the number of TUs was ten times the base-10 logarithm of the ratio of measured power to a reference power level. The definition was conveniently chosen such that 1 TU approximated 1 MSC. In 1928, the Bell system renamed the TU the decibel, being one tenth of a newly defined unit for the base-10 logarithm of the power ratio. It was named the bel, in honor of the telecommunications pioneer Alexander Graham Bell. The bel is seldom used, as the decibel was the proposed working unit. The decibel is recognized by international bodies such as the International Electrotechnical Commission (IEC). The term field quantity is deprecated by ISO 80000-1, which favors root-power quantity. In spite of their widespread use, suffixes are not recognized by the IEC or ISO. The ISO Standard 80000-3:2006 defines the decibel as one-tenth of a bel: 1 dB = 0.1 B
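The two decibel scales described above (ten times the log for power ratios, twenty times for field ratios) can be sketched as two small helper functions; the names are illustrative:

```python
import math

def power_ratio_db(p, p_ref):
    """Power quantities: dB = 10 * log10(P / P_ref)."""
    return 10 * math.log10(p / p_ref)

def field_ratio_db(a, a_ref):
    """Field (root-power) quantities: dB = 20 * log10(A / A_ref)."""
    return 20 * math.log10(a / a_ref)

print(power_ratio_db(10, 1))         # a 10x power ratio is 10 dB
print(field_ratio_db(10, 1))         # a 10x amplitude ratio is 20 dB
print(power_ratio_db(0.002, 0.001))  # 2 mW referenced to 1 mW: about 3 dBm
```

The two definitions agree for related quantities: squaring an amplitude ratio doubles its logarithm, so a field ratio and the power ratio it produces map to the same number of decibels.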
Decibel
–
Base units
16.
Frequency modulation
–
In telecommunications and signal processing, frequency modulation (FM) is the encoding of information in a carrier wave by varying the instantaneous frequency of the wave. This contrasts with amplitude modulation, in which the amplitude of the carrier wave varies while its frequency remains constant. In digital applications this modulation technique is known as frequency-shift keying (FSK); FSK is widely used in modems and fax modems, and can also be used to send Morse code. Frequency modulation is used for FM radio broadcasting, where its resistance to noise gives better sound fidelity than AM; for this reason, most music is broadcast over FM radio. Frequency modulation has a close relationship with phase modulation; phase modulation is often used as an intermediate step to achieve frequency modulation. Mathematically both of these can be considered a special case of quadrature amplitude modulation. While most of the energy of the signal is contained within fc ± fΔ, the frequency spectrum of an actual FM signal has components extending infinitely, although their amplitude decreases and higher-order components are often neglected in practical design problems. Mathematically, a baseband modulating signal may be approximated by a continuous sinusoidal wave with a frequency fm; this method is also known as single-tone modulation. As in other modulation systems, the modulation index indicates by how much the modulated variable varies around its unmodulated level, i.e. the maximum deviation of the instantaneous frequency from the carrier frequency. For a sine wave modulation, the modulation index is the ratio of the peak frequency deviation of the carrier wave to the frequency of the modulating sine wave. If h ≪ 1, the modulation is called narrowband FM; sometimes a modulation index h < 0.3 rad is considered narrowband FM, otherwise wideband FM. In the case of digital modulation, the carrier fc is never transmitted; rather, one of two frequencies is transmitted, either fc + Δf or fc − Δf, depending on the binary state 0 or 1 of the modulation signal. 
If h ≫ 1, the modulation is called wideband FM. If the frequency deviation is held constant and the modulation frequency is increased, the spacing between spectral components increases. The carrier and sideband amplitudes are illustrated for different modulation indices of FM signals; for particular values of the modulation index, the carrier amplitude becomes zero and all the signal power is in the sidebands. Since the sidebands are on both sides of the carrier, their count is doubled, and then multiplied by the modulating frequency to find the bandwidth. For example, 3 kHz deviation modulated by a 2.2 kHz audio tone produces an index of 1.36. Suppose that we limit ourselves to only those sidebands that have a relative amplitude of at least 0.01
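Single-tone FM as described above can be sketched in a few lines of numpy; the parameter values are illustrative. The phase of the carrier is the integral of the instantaneous frequency, which for a sinusoidal modulating tone reduces to the modulation index h times a sine:

```python
import numpy as np

fs = 10_000                 # sampling rate, Hz (illustrative)
t = np.arange(0, 1, 1 / fs)
fc, fm = 1000, 50           # carrier and modulating-tone frequencies, Hz
f_dev = 200                 # peak frequency deviation, Hz
h = f_dev / fm              # modulation index; here 4, i.e. wideband FM

# Single-tone FM: constant envelope, phase = 2*pi*fc*t + h*sin(2*pi*fm*t)
signal = np.cos(2 * np.pi * fc * t + h * np.sin(2 * np.pi * fm * t))
print(h)
```

Note the constant envelope: unlike AM, the information lives entirely in the phase/frequency, which is the root of FM's noise resistance.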
Frequency modulation
–
FM has better noise (
RFI) rejection than AM, as shown in this dramatic New York publicity demonstration by
General Electric in 1940. The radio has both AM and FM receivers. With a million volt arc as a source of interference behind it, the AM receiver produced only a roar of static, while the FM receiver clearly reproduced a music program from Armstrong's experimental FM transmitter W2XMN in New Jersey.
Frequency modulation
–
A signal may be carried by an
AM or FM radio wave.
Frequency modulation
–
An American FM radio transmitter in Buffalo, NY at WEDG
17.
Sinusoidal
–
A sine wave or sinusoid is a mathematical curve that describes a smooth repetitive oscillation. It is named after the sine function, of which it is the graph. It occurs often in pure and applied mathematics, as well as physics, engineering, signal processing and many other fields. Its most basic form as a function of time t is y(t) = A sin(2πft + φ) = A sin(ωt + φ), where: A = the amplitude, the peak deviation of the function from zero; f = the ordinary frequency, the number of oscillations that occur each second of time; ω = 2πf = the angular frequency, the rate of change of the function argument in units of radians per second; φ = the phase. When φ is non-zero, the entire waveform appears to be shifted in time by the amount φ/ω seconds; a negative value represents a delay, and a positive value represents an advance. The sine wave is important in physics because it retains its shape when added to another sine wave of the same frequency and arbitrary phase. It is the only periodic waveform that has this property, and this property leads to its importance in Fourier analysis and makes it acoustically unique. The wavenumber k is related to the angular frequency by k = ω/v = 2πf/v = 2π/λ, where λ is the wavelength, f is the frequency, and v is the linear speed. This equation gives a sine wave for a single dimension; the generalized form y(x, t) = A sin(kx − ωt + φ) gives the displacement of the wave at a position x at time t along a single line. This could, for example, be considered the value of a wave along a wire. In two or three spatial dimensions, the same equation describes a travelling plane wave if position x and wavenumber k are interpreted as vectors, and their product as a dot product. For more complex waves, such as the height of a water wave in a pond after a stone has been dropped in, more complex equations are needed. This wave pattern occurs often in nature, including wind waves, sound waves, and light waves. A cosine wave is said to be sinusoidal, because cos(x) = sin(x + π/2), which is also a sine wave with a phase-shift of π/2 radians. 
Because of this head start, it is often said that the cosine function leads the sine function, or the sine lags the cosine. The human ear can recognize single sine waves as sounding clear because sine waves are representations of a single frequency with no harmonics. The presence of higher harmonics in addition to the fundamental causes variation in the timbre; on the other hand, if the sound contains aperiodic waves along with sine waves, then the sound will be perceived as noisy, as noise is characterized as being aperiodic or having a non-repetitive pattern. In 1822, French mathematician Joseph Fourier discovered that sinusoidal waves can be used as simple building blocks to describe and approximate any periodic waveform
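The basic form y(t) = A sin(2πft + φ) and the quarter-cycle relationship between sine and cosine can be verified numerically; the sampling parameters here are arbitrary:

```python
import numpy as np

A, f, phi = 2.0, 3.0, 0.0                  # amplitude, frequency (Hz), phase
t = np.linspace(0, 1, 1200, endpoint=False)
y = A * np.sin(2 * np.pi * f * t + phi)    # basic sinusoid

# cos(x) = sin(x + pi/2): cosine is a sine wave advanced by pi/2 radians
lhs = np.cos(2 * np.pi * f * t)
rhs = np.sin(2 * np.pi * f * t + np.pi / 2)
print(np.allclose(lhs, rhs))
```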
Sinusoidal
–
The graphs of the sine and
cosine functions are sinusoids of different phases.
18.
PAL
–
Phase Alternating Line (PAL) is a colour encoding system for analogue television used in broadcast television systems in most countries broadcasting at 625 lines/50 fields per second. Other common colour encoding systems are NTSC and SECAM. All the countries using PAL are currently in the process of conversion, or have already converted, to digital standards such as DVB, ISDB or DTMB. This page primarily discusses the PAL colour encoding system; the articles on broadcast television systems and analogue television further describe frame rates, image resolution and audio modulation. To overcome NTSC's shortcomings, alternative standards were devised, resulting in the development of PAL; the goal was to provide a colour TV standard for the European picture frequency of 50 fields per second, and to find a way to eliminate the problems with NTSC. PAL was developed by Walter Bruch at Telefunken in Hannover, Germany, with important input from Dr. Kruse. The format was patented by Telefunken in 1962, citing Bruch as inventor, and unveiled to members of the European Broadcasting Union on 3 January 1963. When asked why the system was named PAL and not Bruch, the inventor answered that a Bruch system would not have sold very well. The first broadcasts began in the United Kingdom in June 1967; the one BBC channel initially using the broadcast standard was BBC2, which had been the first UK TV service to introduce 625 lines in 1964. The Telefunken PALcolor 708T was the first commercial PAL TV set; it was followed by the Loewe-Farbfernseher S920 & F900. Telefunken was later bought by the French electronics manufacturer Thomson; Thomson also bought the Compagnie Générale de Télévision, where Henri de France developed SECAM, the first European standard for colour television. 
The term PAL was often used informally and somewhat imprecisely to refer to the 625-line/50 Hz television system in general; accordingly, DVDs were labelled as PAL or NTSC even though technically the discs do not carry either PAL or NTSC composite signal. CCIR 625/50 and EIA 525/60 are the proper names for these standards. Both the PAL and the NTSC systems use a quadrature amplitude modulated subcarrier carrying the chrominance information, added to the luminance video signal to form a composite video baseband signal. The frequency of this subcarrier is 4.43361875 MHz for PAL and NTSC4.43; the SECAM system, on the other hand, uses a frequency modulation scheme on its two line-alternate colour subcarriers at 4.25000 and 4.40625 MHz. Early PAL receivers relied on the human eye to do the cancelling of phase errors; later receivers used a delay line to average the colour information of successive lines. The effect is that phase errors result in saturation changes, which are less objectionable than the equivalent hue changes of NTSC. In any case, NTSC, PAL, and SECAM all have chrominance bandwidth reduced greatly compared to the luminance signal. The 4.43361875 MHz frequency of the colour carrier is a result of 283.75 colour clock cycles per line plus a 25 Hz offset to avoid interferences. Since the line frequency is 15625 Hz, the carrier frequency calculates as follows: 4.43361875 MHz = 283.75 × 15625 Hz + 25 Hz
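The subcarrier arithmetic above is easy to verify directly:

```python
line_freq = 15_625         # PAL line frequency, Hz
cycles_per_line = 283.75   # colour clock cycles per line
offset = 25                # Hz offset to avoid interference patterns

subcarrier = cycles_per_line * line_freq + offset
print(subcarrier)          # 4433618.75 Hz, i.e. 4.43361875 MHz
```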
PAL
–
Television encoding systems by nation; countries now using (and once using) the PAL system are shown in blue.
19.
Band-pass filter
–
A band-pass filter is a device that passes frequencies within a certain range and rejects frequencies outside that range. An example of an analogue electronic band-pass filter is an RLC circuit; these filters can also be created by combining a low-pass filter with a high-pass filter. Bandpass is an adjective that describes a type of filter or filtering process; it is to be distinguished from passband, which refers to the actual portion of the affected spectrum. Hence, one might say "a dual bandpass filter has two passbands". A bandpass signal is a signal containing a band of frequencies not adjacent to zero frequency. An ideal bandpass filter would have a completely flat passband and would completely attenuate all frequencies outside the passband; additionally, the transition out of the passband would be instantaneous, with brick-wall characteristics. In practice, no bandpass filter is ideal: there is a region just outside the intended passband where frequencies are attenuated but not fully rejected. This is known as the filter roll-off, and it is usually expressed in dB of attenuation per octave or decade of frequency. Generally, the design of a filter seeks to make the roll-off as narrow as possible; often, this is achieved at the expense of pass-band or stop-band ripple. The bandwidth of the filter is simply the difference between the upper and lower cutoff frequencies. Optical band-pass filters are common in photography and theatre lighting work; these filters take the form of a transparent coloured film or sheet. A band-pass filter can also be characterized by its Q factor: the Q-factor is the inverse of the fractional bandwidth. A high-Q filter will have a narrow passband and a low-Q filter will have a wide passband; these are respectively referred to as narrow-band and wide-band filters. Bandpass filters are widely used in wireless transmitters and receivers. The main function of such a filter in a transmitter is to limit the bandwidth of the output signal to the band allocated for the transmission. 
This prevents the transmitter from interfering with other stations. In a receiver, a bandpass filter allows signals within a selected range of frequencies to be heard or decoded, while preventing signals at unwanted frequencies from getting through. A bandpass filter also optimizes the signal-to-noise ratio and sensitivity of a receiver. Outside of electronics and signal processing, one example of the use of band-pass filters is in the atmospheric sciences: it is common to band-pass filter recent meteorological data with a period range of, for example, 3 to 10 days. In neuroscience, visual cortical simple cells were first shown by David Hubel and Torsten Wiesel to have response properties that resemble Gabor filters. In astronomy, band-pass filters are used to allow only a single portion of the light spectrum into an instrument
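A crude band-pass filter can be sketched in numpy by zeroing FFT bins outside the passband; this is exactly the idealized brick-wall behaviour described above (real analogue or FIR/IIR designs have finite roll-off instead). The function name and parameters are illustrative:

```python
import numpy as np

def fft_bandpass(x, fs, f_low, f_high):
    """Brick-wall band-pass: zero all FFT bins outside [f_low, f_high] Hz."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    spectrum[(freqs < f_low) | (freqs > f_high)] = 0
    return np.fft.irfft(spectrum, n=len(x))

fs = 1000
t = np.arange(0, 1, 1 / fs)
# Three tones: 5 Hz (below band), 50 Hz (in band), 200 Hz (above band)
x = (np.sin(2 * np.pi * 5 * t)
     + np.sin(2 * np.pi * 50 * t)
     + np.sin(2 * np.pi * 200 * t))
y = fft_bandpass(x, fs, 20, 100)   # only the 50 Hz component survives
```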
Band-pass filter
–
Bandwidth measured at half-power points (gain -3 dB, √2/2, or about 0.707 relative to peak) on a diagram showing magnitude transfer function versus frequency for a band-pass filter.
20.
Fourier transform
–
The Fourier transform decomposes a function of time into the frequencies that make it up, in a way similar to how a musical chord can be expressed as the frequencies of its constituent notes. The Fourier transform is called the frequency domain representation of the original signal; the term Fourier transform refers to both the frequency domain representation and the mathematical operation that associates the frequency domain representation to a function of time. The Fourier transform is not limited to functions of time, but in order to have a unified language, the domain of the original function is commonly referred to as the time domain. Linear operations performed in one domain have corresponding operations in the other domain, which are sometimes easier to perform. The operation of differentiation in the time domain corresponds to multiplication by the frequency. Also, convolution in the time domain corresponds to ordinary multiplication in the frequency domain. Concretely, this means that any linear time-invariant system, such as a filter applied to a signal, can be expressed relatively simply as an operation on frequencies; after performing the desired operations, transformation of the result can be made back to the time domain. Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain and vice versa. The Fourier transform of a Gaussian function is another Gaussian function; Joseph Fourier introduced the transform in his study of heat transfer, where Gaussian functions appear as solutions of the heat equation. The Fourier transform can also be generalized to functions of several variables on Euclidean space. In general, functions to which Fourier methods are applicable are complex-valued; the discrete variant of the transform is routinely employed to handle periodic and sampled functions, and the fast Fourier transform (FFT) is an algorithm for computing the DFT. The Fourier transform of the function f is traditionally denoted by adding a circumflex: f̂. There are several conventions for defining the Fourier transform of an integrable function f : ℝ → ℂ. 
Here we will use the definition f̂(ξ) = ∫−∞∞ f(x) e^(−2πixξ) dx, for any real number ξ. When the independent variable x represents time, the transform variable ξ represents frequency. Under suitable conditions, f is determined by f̂ via the inverse transform f(x) = ∫−∞∞ f̂(ξ) e^(2πiξx) dξ, for any real number x. The functions f and f̂ often are referred to as a Fourier integral pair or Fourier transform pair. For other common conventions and notations, including using the angular frequency ω instead of the ordinary frequency ξ, see Other conventions. The Fourier transform on Euclidean space is treated separately, in which the variable x often represents position and ξ momentum. Many other characterizations of the Fourier transform exist; for example, one uses the Stone–von Neumann theorem: the Fourier transform is the unique unitary intertwiner for the symplectic and Euclidean Schrödinger representations of the Heisenberg group. In 1822, Joseph Fourier showed that some functions could be written as an infinite sum of harmonics
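The chord analogy at the start of this section can be demonstrated with the discrete Fourier transform: a signal built from two tones yields a spectrum whose two largest bins sit exactly at those tone frequencies. The parameters here are illustrative:

```python
import numpy as np

fs = 256
t = np.arange(fs) / fs                       # one second of samples
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

spectrum = np.fft.rfft(x)                    # DFT of the real signal
freqs = np.fft.rfftfreq(len(x), d=1 / fs)    # frequency of each bin, Hz

# The two strongest bins recover the constituent frequencies
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks.tolist()))
```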
Fourier transform
21.
Digital signal processing
–
Digital signal processing (DSP) is the use of digital processing, such as by computers, to perform a wide variety of signal processing operations. The signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time or space. Digital signal processing and analog signal processing are subfields of signal processing. Digital signal processing can involve linear or nonlinear operations; nonlinear signal processing is closely related to nonlinear system identification and can be implemented in the time, frequency, or spatio-temporal domains. DSP is applicable to both streaming data and static (stored) data. The increasing use of computers has resulted in the increased use of, and need for, digital signal processing. To digitally analyze and manipulate an analog signal, it must be digitized with an analog-to-digital converter. Sampling is usually carried out in two stages, discretization and quantization. Discretization means that the signal is divided into equal intervals of time, each represented by a single measurement of amplitude. Quantization means each amplitude measurement is approximated by a value from a finite set; rounding real numbers to integers is an example. The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency component of the signal. In practice, the sampling frequency is often significantly higher than twice that required by the signal's limited bandwidth. Theoretical DSP analyses and derivations are typically performed on discrete-time signal models with no amplitude inaccuracies, whereas numerical methods require a quantized signal, such as those produced by an analog-to-digital converter. The processed result might be a frequency spectrum or a set of statistics, but often it is another quantized signal that is converted back to analog form by a digital-to-analog converter. In DSP, engineers usually study digital signals in one of the following domains: time domain, spatial domain, or frequency domain. 
They choose the domain in which to process a signal by making an informed assumption as to which domain best represents the essential characteristics of the signal. The most common processing approach in the time or space domain is enhancement of the input signal through a method called filtering. Digital filtering generally consists of a linear transformation of a number of surrounding samples around the current sample of the input or output signal. There are various ways to characterize filters; for example: a linear filter is a linear transformation of input samples; a causal filter uses only current and previous samples of the input or output signals; a non-causal filter uses future samples as well, and can usually be changed into a causal filter by adding a delay to it
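As a concrete instance of a causal linear filter, here is a minimal moving-average FIR filter in Python: each output depends only on the current and past samples, never on future ones. The function name is illustrative:

```python
import numpy as np

def causal_moving_average(x, n):
    """Causal FIR filter: each output averages the current and up to n-1 past samples."""
    y = np.zeros(len(x))
    for i in range(len(x)):
        start = max(0, i - n + 1)      # no future samples are used
        y[i] = np.mean(x[start:i + 1])
    return y

x = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0])
print(causal_moving_average(x, 3))     # smoothed step, lagging the input
```

The lag visible in the output is the price of causality; a non-causal (centered) average would track the step with no lag but would need future samples, which is only possible offline or with added delay.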
Digital signal processing
22.
Sampling (signal processing)
–
In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave to a sequence of samples. A sample is a value or set of values at a point in time and/or space; a sampler is a subsystem or operation that extracts samples from a continuous signal. A theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points. Sampling can be done for functions varying in space, time, or any other dimension; for a function s(t) of time sampled at intervals of T, the sampled function is given by the sequence s(nT), for integer values of n. The sampling frequency or sampling rate, fs, is the number of samples obtained in one second. Reconstructing a continuous function from samples is done by interpolation algorithms; the Whittaker–Shannon interpolation formula is mathematically equivalent to an ideal lowpass filter whose input is a sequence of Dirac delta functions that are modulated by the sample values. When the time interval between adjacent samples is a constant, the sequence of delta functions is called a Dirac comb. Mathematically, the modulated Dirac comb is equivalent to the product of the comb function with s(t); that purely mathematical abstraction is sometimes referred to as impulse sampling. Most sampled signals are not simply stored and reconstructed, but the fidelity of a theoretical reconstruction is a customary measure of the effectiveness of sampling. That fidelity is reduced when s(t) contains frequency components whose periodicity is smaller than 2 samples; the quantity ½ cycle/sample × fs samples/sec = fs/2 cycles/sec is known as the Nyquist frequency of the sampler. Therefore, s(t) is usually the output of a lowpass filter, functionally known as an anti-aliasing filter; without an anti-aliasing filter, frequencies higher than the Nyquist frequency will influence the samples in a way that is misinterpreted by the interpolation process. In practice, the continuous signal is sampled using an analog-to-digital converter. 
This results in deviations from the theoretically perfect reconstruction, collectively referred to as distortion. Various types of distortion can occur, including: Aliasing. Some amount of aliasing is inevitable because only theoretical, infinitely long functions can have no frequency content above the Nyquist frequency; aliasing can be made arbitrarily small by using a sufficiently high order of anti-aliasing filter. Aperture error, which results from the fact that the sample is obtained as a time average within a sampling region rather than at a single instant; in a capacitor-based sample-and-hold circuit, aperture error is introduced because the capacitor cannot instantly change voltage, thus requiring the sample to have non-zero width. Jitter, or deviation from the precise sample timing intervals. Noise, including thermal sensor noise, analog circuit noise, etc
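Aliasing as described above can be demonstrated numerically: a tone above the Nyquist frequency produces exactly the same samples as a lower-frequency alias, which is why it must be removed before sampling. The frequencies here are illustrative:

```python
import numpy as np

fs = 100                       # sampling rate, Hz; Nyquist frequency = 50 Hz
t = np.arange(0, 1, 1 / fs)    # the sample instants

# A 90 Hz tone sampled at 100 Hz is indistinguishable from a (90-100) = -10 Hz tone
x_high = np.sin(2 * np.pi * 90 * t)
x_alias = np.sin(2 * np.pi * -10 * t)    # same value at every sample instant
print(np.allclose(x_high, x_alias))
```

No interpolation algorithm can tell the two apart from the samples alone, so the 90 Hz component would be reconstructed as a spurious 10 Hz component.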
Sampling (signal processing)
–
Signal sampling representation. The continuous signal is represented with a green colored line while the discrete samples are indicated by the blue vertical lines.
23.
Time series
–
A time series is a series of data points indexed in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time; thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average. Time series are very frequently plotted via line charts. Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. Time series data have a natural temporal ordering. This makes time series analysis distinct from cross-sectional studies, in which there is no natural ordering of the observations. Time series analysis is also distinct from spatial data analysis, where the observations typically relate to geographical locations. A stochastic model for a time series will generally reflect the fact that observations close together in time will be more closely related than observations further apart. Methods for time series analysis may be divided into two classes: frequency-domain methods and time-domain methods. The former include spectral analysis and wavelet analysis; the latter include auto-correlation and cross-correlation analysis. In the time domain, correlation and analysis can be made in a filter-like manner using scaled correlation. Additionally, time series analysis techniques may be divided into parametric and non-parametric methods. The parametric approaches assume that the underlying stationary stochastic process has a certain structure which can be described using a small number of parameters. In these approaches, the task is to estimate the parameters of the model that describes the stochastic process.
By contrast, non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure. Methods of time series analysis may also be divided into linear and non-linear, and univariate and multivariate. A time series is one type of panel data. Panel data is the general class, a multidimensional data set, whereas a time series data set is a one-dimensional panel. A data set may exhibit characteristics of both panel data and time series data. One way to tell is to ask what makes one data record unique from the other records. If the answer is the time data field, then this is a time series data set candidate. If determining a unique record requires a time data field and an additional identifier which is unrelated to time, then it is a panel data candidate. If the differentiation lies on the non-time identifier, then the data set is a cross-sectional data set candidate.
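The time-domain auto-correlation analysis mentioned above can be sketched in a few lines of NumPy; the estimator and the synthetic periodic series below are illustrative choices, not from the source:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation of a series at lags 0..max_lag (a common estimator)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / var for k in range(max_lag + 1)])

# A strongly periodic series is highly correlated with itself one period later.
t = np.arange(200)
series = np.sin(2 * np.pi * t / 20)   # period of 20 samples
acf = autocorrelation(series, 40)
```

For this series the autocorrelation is exactly 1 at lag 0 and stays high at lags equal to the period, which is how autocorrelation analysis reveals periodic structure in a time series.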
Time series
–
Time series: random data plus trend, with best-fit line and different applied filters
24.
Window function
–
In signal processing, a window function is a mathematical function that is zero-valued outside of some chosen interval. For instance, a function that is constant inside the interval and zero elsewhere is called a rectangular window. In typical applications, the window functions used are non-negative, smooth, bell-shaped curves; rectangle, triangle, and other functions can also be used. Applications of window functions include spectral analysis/modification/resynthesis, the design of finite impulse response filters, as well as beamforming and antenna design. The Fourier transform of the function cos ωt is zero, except at frequency ±ω. However, many other functions and waveforms do not have convenient closed-form transforms. Alternatively, one might be interested in their spectral content only during a certain time period. In either case, the Fourier transform can be applied on one or more finite intervals of the waveform. In general, the transform is applied to the product of the waveform and a window function. Any window affects the spectral estimate computed by this method. Windowing of a simple waveform like cos ωt causes its Fourier transform to develop non-zero values, known as spectral leakage, at frequencies other than ω. The leakage tends to be worst near ω and least at frequencies farthest from ω. If the waveform under analysis comprises two sinusoids of different frequencies, leakage can interfere with the ability to distinguish them spectrally. If their frequencies are dissimilar and one component is weaker, then leakage from the stronger component can obscure the weaker one's presence. But if the frequencies are similar, leakage can render them unresolvable even when the sinusoids are of equal strength. The rectangular window has excellent resolution characteristics for sinusoids of comparable strength, and this characteristic is sometimes described as low dynamic range.
At the other extreme of dynamic range are the windows with the poorest resolution and sensitivity; that is because noise produces a stronger response with high-dynamic-range windows than with high-resolution windows. Therefore, high-dynamic-range windows are most often justified in wideband applications. In between the extremes are moderate windows, such as Hamming and Hann. They are commonly used in narrowband applications, such as analysing the spectrum of a telephone channel. In summary, spectral analysis involves a trade-off between resolving comparable-strength components with similar frequencies and resolving disparate-strength components with dissimilar frequencies. That trade-off occurs when the window function is chosen. When the input waveform is time-sampled, instead of continuous, the analysis is usually done by applying a window function and then a discrete Fourier transform (DFT). But the DFT provides only a sparse sampling of the actual discrete-time Fourier transform (DTFT) spectrum. Figure 1 shows a portion of the DTFT for a rectangularly-windowed sinusoid; the actual frequency of the sinusoid is indicated as 0 on the horizontal axis.
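The leakage trade-off described above can be observed directly by comparing a rectangular window with a Hann window on a sinusoid that falls between DFT bins; the signal length and bin choices below are illustrative:

```python
import numpy as np

N = 64
n = np.arange(N)
# A sinusoid whose frequency falls between DFT bins, so leakage is unavoidable.
x = np.cos(2 * np.pi * 10.5 * n / N)

rect = np.abs(np.fft.rfft(x))                  # rectangular window (i.e. no window)
hann = np.abs(np.fft.rfft(x * np.hanning(N)))  # Hann window

# Compare leakage far away from the tone, relative to each spectrum's peak.
far_bin = 30
leak_rect = rect[far_bin] / rect.max()
leak_hann = hann[far_bin] / hann.max()
```

Far from the tone, the Hann window suppresses leakage by orders of magnitude compared with the rectangle, at the cost of a wider main lobe (poorer resolution) — the trade-off the text describes.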
Window function
–
Figure 1: Zoomed view of spectral leakage
25.
Short-time Fourier transform
–
In practice, the procedure for computing STFTs is to divide a longer time signal into shorter segments of equal length and then compute the Fourier transform separately on each shorter segment. This reveals the Fourier spectrum of each shorter segment; one then usually plots the changing spectra as a function of time. Simply put, in the continuous-time case, the function to be transformed is multiplied by a window function which is nonzero for only a short period of time. The Fourier transform of the resulting signal is taken as the window is slid along the time axis, yielding X(τ, ω), which is essentially the Fourier transform of x(t)w(t − τ): a function representing the phase and magnitude of the signal over time and frequency. Often phase unwrapping is employed along either or both of the time axis, τ, and the frequency axis, ω, to suppress any jump discontinuity of the phase result of the STFT. The time index τ is normally considered to be slow time. In the discrete-time case, the data to be transformed could be broken up into chunks or frames. Each chunk is Fourier transformed, and the result is added to a matrix. This can be expressed as

STFT{x[n]}(m, ω) ≡ X(m, ω) = ∑_{n=−∞}^{∞} x[n] w[n − m] e^{−jωn}

with signal x[n] and window w[n]. In this case, m is discrete and ω is continuous, but in most typical applications the STFT is performed on a computer using the Fast Fourier Transform, so both variables are discrete and quantized. If only a small number of ω are desired, or if the STFT is desired to be evaluated for every shift m of the window, then the STFT may be more efficiently evaluated using a sliding DFT algorithm. The STFT is invertible; that is, the original signal can be recovered from the transform by the inverse STFT. The most widely accepted way of inverting the STFT is by using the overlap-add (OLA) method. This makes for a versatile signal processing method, referred to as the overlap and add with modifications method. Given the width and definition of the window function w(t), we initially require the area of the window function to be scaled so that

∫_{−∞}^{∞} w(τ) dτ = 1.

It easily follows that

∫_{−∞}^{∞} w(t − τ) dτ = 1 for all t,

and therefore

x(t) = x(t) ∫_{−∞}^{∞} w(t − τ) dτ = ∫_{−∞}^{∞} x(t) w(t − τ) dτ.
The continuous Fourier transform is

X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt.

Substituting x(t) from above:

X(ω) = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} x(t) w(t − τ) dτ ] e^{−jωt} dt = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x(t) w(t − τ) e^{−jωt} dτ dt.

Swapping the order of integration:

X(ω) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x(t) w(t − τ) e^{−jωt} dt dτ = ∫_{−∞}^{∞} X(τ, ω) dτ.
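The framing-plus-FFT procedure described at the start of this section can be sketched directly with NumPy; the frame length, hop size, and two-tone test signal are illustrative choices:

```python
import numpy as np

def stft(x, frame_len=256, hop=128):
    """Discrete STFT sketch: window each overlapping frame, then take its FFT."""
    w = np.hanning(frame_len)
    starts = range(0, len(x) - frame_len + 1, hop)
    return np.array([np.fft.rfft(x[s:s + frame_len] * w) for s in starts])

# Test signal: 440 Hz in the first half second, 880 Hz in the second.
fs = 8000
t = np.arange(fs) / fs
x = np.where(t < 0.5, np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 880 * t))

S = stft(x)                      # one row per frame, one column per frequency bin
f0 = np.argmax(np.abs(S[0]))     # dominant bin of the first frame (~440 Hz)
f1 = np.argmax(np.abs(S[-1]))    # dominant bin of the last frame (~880 Hz)
```

Each row of S is the spectrum of one windowed frame; plotting |S| against frame time gives the familiar spectrogram, and here the dominant bin roughly doubles between the first and last frames as the tone jumps an octave.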
Short-time Fourier transform
–
Example of short-time Fourier transforms used to determine the time of impact from an audio signal.
26.
Great tit
–
The great tit is a passerine bird in the tit family Paridae. Until 2005 this species was lumped with other subspecies. The great tit remains the most widespread species in the genus Parus. The great tit is a distinctive bird with a black head and neck, prominent white cheeks, olive upperparts and yellow underparts, with some variation amongst the numerous subspecies. It is predominantly insectivorous in the summer, but will consume a wider range of food items in the winter months. Like all tits it is a cavity nester, usually nesting in a hole in a tree. The female lays around 12 eggs and incubates them alone, although both parents raise the chicks. In most years the pair will raise two broods. The nests may be raided by woodpeckers, squirrels and weasels and infested with fleas, and adults may be hunted by sparrowhawks. The great tit has adapted well to changes in the environment and is a common and familiar bird in urban parks. The great tit is also an important study species in ornithology. The great tit was originally described under its current binomial name by Linnaeus in his 18th-century work Systema Naturae. Its scientific name is derived from the Latin parus, "tit", and maior, "larger". The great tit was formerly treated as ranging from Britain to Japan and south to the islands of Indonesia, with 36 described subspecies ascribed to four main species groups. The three bokharensis subspecies were often treated as a separate species, Parus bokharensis, the Turkestan tit. The divergence between the bokharensis and major groups was estimated to have occurred about half a million years ago. The study also examined hybrids between representatives of the major and minor groups in the Amur Valley, where the two meet; hybrids were rare, suggesting that there were some reproductive barriers between the two groups.
The study recommended that the two eastern groups be split out as new species, the cinereous tit and the Japanese tit; this taxonomy has been followed by some authorities, for example the IOC World Bird List. The nominate subspecies of the great tit is the most widespread, its range stretching from the Iberian Peninsula to the Amur Valley and from Scandinavia to the Middle East. The other subspecies have much more restricted distributions, four being restricted to islands. The dominance of a single, morphologically uniform subspecies over such a large area suggests that the nominate race rapidly recolonised a large area after the last glacial epoch. This hypothesis is supported by studies which suggest a geologically recent genetic bottleneck followed by a rapid population expansion. The genus Parus once held most of the species of tit in the family Paridae. The great tit was retained in Parus, which, along with Cyanistes, comprises a lineage of tits known as the "non-hoarders", with reference to the hoarding behaviour of members of the other clade. The genus Parus is still the largest in the family. Other than those species formerly considered to be subspecies, the great tit's closest relatives are the white-naped and green-backed tits of southern Asia.
Great tit
–
Great tit
Great tit
–
The 11 subspecies of the
cinereous tit were once lumped with the great tit but recent genetic and bioacoustic studies now separate that group as a distinct species.
Great tit
–
At
Kew Gardens, London. The British subspecies P. m. newtoni has a wider mid-line ventral stripe on the lower belly than the nominate race.
Great tit
–
In females and juveniles the mid-line stripe is narrower and sometimes discontinuous.
27.
Speech synthesis
–
Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech computer or speech synthesizer. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech. Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units: a system that stores phones or diphones provides the largest output range, while for specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract. The quality of a speech synthesizer is judged by its similarity to the human voice and by its ability to be understood clearly. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written words on a home computer. Many computer operating systems have included speech synthesizers since the early 1990s. A text-to-speech system is composed of two parts: a front-end and a back-end. The front-end has two major tasks. First, it converts raw text containing symbols like numbers and abbreviations into the equivalent of written-out words; this process is called text normalization, pre-processing, or tokenization. The front-end then assigns phonetic transcriptions to each word, and divides and marks the text into prosodic units, like phrases and clauses. The process of assigning phonetic transcriptions to words is called text-to-phoneme or grapheme-to-phoneme conversion. Phonetic transcriptions and prosody information together make up the symbolic linguistic representation that is output by the front-end. The back-end, often referred to as the synthesizer, then converts the symbolic linguistic representation into sound. In certain systems, this part includes the computation of the target prosody.
Long before the invention of electronic signal processing, some people tried to build machines to emulate human speech. Some early legends of the existence of "Brazen Heads" involved Pope Silvester II and Albertus Magnus. There followed the bellows-operated acoustic-mechanical speech machine of Wolfgang von Kempelen of Pressburg, Hungary, described in a 1791 paper. This machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. In 1837, Charles Wheatstone produced a speaking machine based on von Kempelen's design, and in 1846, Joseph Faber exhibited the Euphonia. In 1923 Paget resurrected Wheatstone's design. In the 1930s Bell Labs developed the vocoder, which automatically analyzed speech into its fundamental tones and resonances. From his work on the vocoder, Homer Dudley developed a keyboard-operated voice synthesizer called The Voder. Dr. Franklin S. Cooper and his colleagues at Haskins Laboratories built the Pattern Playback in the late 1940s and completed it in 1950.
Speech synthesis
–
Stephen Hawking is one of the most famous people using a speech computer to communicate
Speech synthesis
–
Computer and speech synthesiser housing used by
Stephen Hawking in 1999
28.
Electronic music
–
In general, a distinction can be made between sound produced using electromechanical means and that produced using electronic technology. Examples of electromechanical sound-producing devices include the telharmonium and the Hammond organ; purely electronic sound production can be achieved using devices such as the theremin, sound synthesizer, and computer. During the 1920s and 1930s, electronic instruments were introduced and the first compositions for electronic instruments were composed. Musique concrète, created in Paris in 1948, was based on editing together recorded fragments of natural and industrial sounds. Music produced solely from electronic generators was first produced in Germany in 1953. Electronic music was also created in Japan and the United States beginning in the 1950s. An important new development was the advent of computers for the purpose of composing music; algorithmic composition was first demonstrated in Australia in 1951. In America and Europe, live electronics were pioneered in the early 1960s. During the 1970s to early 1980s, the monophonic Minimoog became the most widely used synthesizer of the time in both popular and electronic art music. In the 1980s, electronic music became dominant in popular music, with a greater reliance on synthesizers and the adoption of programmable drum machines. Electronically produced music became prevalent in the popular domain by the 1990s. Contemporary electronic music includes many varieties and ranges from experimental art music to popular forms such as electronic dance music. Today, pop electronic music is most recognizable in its 4/4 form. At the turn of the 20th century, experimentation with emerging electronics led to the first electronic musical instruments. These initial inventions were not sold, but were instead used in demonstrations and public performances.
The audiences were presented with reproductions of existing music instead of new compositions for the instruments. While some devices were considered novelties and produced simple tones, the Telharmonium accurately synthesized the sound of orchestral instruments. It achieved viable public interest and made progress into streaming music through telephone networks. Critics of musical conventions at the time saw promise in these developments. Ferruccio Busoni encouraged the composition of microtonal music allowed for by electronic instruments; he predicted the use of machines in future music, writing the influential Sketch of a New Esthetic of Music. Futurists such as Francesco Balilla Pratella and Luigi Russolo began composing music with acoustic noise to evoke the sound of machinery; they predicted expansions in timbre allowed for by electronics in the influential manifesto The Art of Noises. Developments of the vacuum tube led to electronic instruments that were smaller, amplified, and more practical for performance. In particular, the theremin, ondes Martenot and trautonium were commercially produced by the early 1930s. From the late 1920s, the increased practicality of electronic instruments influenced composers such as Joseph Schillinger to adopt them.
Electronic music
–
Telharmonium,
Thaddeus Cahill, 1897
Electronic music
–
Halim El-Dabh at a
Cleveland festival in 2009
Electronic music
–
Karlheinz Stockhausen in the Electronic Music Studio of WDR, Cologne, in 1991
Electronic music
–
Israeli composer
Josef Tal at the Electronic Music Studio in Jerusalem (c. 1965). On the right,
Hugh Le Caine 's sound
synthesizer the Special Purpose Tape Recorder.
29.
Steganography
–
Steganography is the practice of concealing a file, message, image, or video within another file, message, image, or video. The word steganography combines the Greek words steganos, meaning "covered, concealed, or protected", and graphein, meaning "writing". The first recorded use of the term was in 1499 by Johannes Trithemius in his Steganographia, a treatise on cryptography and steganography, disguised as a book on magic. Generally, the hidden messages appear to be something else: images, articles, shopping lists. For example, the hidden message may be in invisible ink between the visible lines of a private letter. Some implementations of steganography that lack a shared secret are forms of security through obscurity. The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages, no matter how unbreakable, arouse interest, and may in themselves be incriminating in countries where encryption is illegal. Steganography includes the concealment of information within computer files. In digital steganography, electronic communications may include steganographic coding inside of a transport layer, such as a document file or image file. Media files are ideal for steganographic transmission because of their large size. The first recorded uses of steganography can be traced back to 440 BC, when Herodotus mentions two examples in his Histories. Additionally, Demaratus sent a warning about an attack to Greece by writing it directly on the wooden backing of a wax tablet before applying its beeswax surface. Wax tablets were in use then as reusable writing surfaces, sometimes used for shorthand. In his work Polygraphiae, Johannes Trithemius developed his so-called Ave-Maria-Cipher, which can hide information in a Latin praise of God: "Auctor Sapientissimus Conseruans Angelica Deferat Nobis Charitas Potentissimi Creatoris", for example, contains the concealed word VICIPEDIA.
Steganography has been widely used, including in recent historical times. Known examples include hidden messages within wax tablets: in ancient Greece, people wrote messages on the wood beneath the wax. Hidden messages on a messenger's body were also used in ancient Greece: Herodotus tells the story of a message tattooed on the head of a slave of Histiaeus, hidden by the hair that afterwards grew over it. The message allegedly carried a warning to Greece about Persian invasion plans. Messages have also been written on envelopes in the area covered by postage stamps. In the early days of the printing press, it was common to mix different typefaces on a printed page because the printer did not have enough copies of some letters in one typeface. Because of this, a message could be hidden using two different typefaces, such as normal and italic. During and after World War II, espionage agents used photographically produced microdots to send information back and forth; World War II microdots were embedded in the paper and covered with an adhesive, such as collodion.
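A common digital steganographic technique hides message bits in the least significant bits of media samples, where the change is imperceptible. A minimal sketch, assuming an 8-bit cover signal; the helper names and example values are illustrative:

```python
import numpy as np

def hide(cover, message_bits):
    """Hide a bit sequence in the least significant bit of each cover byte."""
    stego = cover.copy()
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & 0xFE) | bit   # clear the LSB, then set it to the message bit
    return stego

def reveal(stego, n_bits):
    """Read the message back out of the least significant bits."""
    return [int(b) & 1 for b in stego[:n_bits]]

cover = np.array([200, 13, 77, 250, 8, 129, 64, 55], dtype=np.uint8)  # e.g. pixel values
secret = [1, 0, 1, 1, 0, 1, 0, 0]
stego = hide(cover, secret)
```

Each cover byte changes by at most 1, which is visually or audibly negligible in typical media files; this is why the large size of media files makes them attractive carriers, as the text notes.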
Steganography
–
Image of a tree with a steganographically hidden image. The hidden image is revealed by removing all but the two least significant
bits of each
color component and a subsequent
normalization. The hidden image is shown below.
30.
Audio timescale-pitch modification
–
Time stretching is the process of changing the speed or duration of an audio signal without affecting its pitch. Pitch scaling or pitch shifting is the opposite: the process of changing the pitch without affecting the speed. Similar methods can change speed, pitch, or both at once, in a time-varying way. These processes are used, for instance, to match the pitches and tempos of two pre-recorded clips; they are also used to create effects such as increasing the range of an instrument. The simplest way to change the duration or pitch of an audio clip is to resample it. This is an operation that effectively rebuilds a continuous waveform from its samples and then samples that waveform again at a different rate. When the new samples are played back at the original sampling frequency, the audio clip sounds faster or slower. Unfortunately, the frequencies in the sample are always scaled at the same rate as the speed; in other words, slowing down the recording lowers the pitch, while speeding it up raises the pitch. This is analogous to speeding up or slowing down an analogue recording, like a phonograph record or tape, creating the "Chipmunk effect". To preserve an audio signal's pitch when stretching or compressing its duration, many procedures instead work frame by frame. Given an original discrete-time audio signal, this strategy's first step is to split the signal into short analysis frames of fixed length. The analysis frames are spaced by an analysis hopsize of H_a samples. To achieve the actual time-scale modification, the frames are then temporally relocated to have a synthesis hopsize H_s ∈ ℕ. This frame relocation results in a modification of the signal's duration by a stretching factor of α = H_s / H_a. However, simply superimposing the unmodified analysis frames typically results in undesired artifacts such as phase discontinuities or amplitude fluctuations. To prevent this kind of artifact, the analysis frames are adapted to form synthesis frames prior to the reconstruction of the time-scale-modified output signal. The strategy of how to derive the synthesis frames from the analysis frames is a key difference among different TSM procedures.
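The frame-based procedure above can be sketched as a naive overlap-add (OLA) time-scale modification. This minimal version relocates windowed analysis frames without any phase correction, so it exhibits exactly the phase-discontinuity artifacts the text warns about; frame length and the test signal are illustrative:

```python
import numpy as np

def ola_stretch(x, alpha, frame_len=1024):
    """Naive OLA time-scale modification by a factor alpha = Hs / Ha (a sketch;
    real TSM procedures such as WSOLA or the phase vocoder also adapt the frames)."""
    Hs = frame_len // 2              # synthesis hopsize
    Ha = int(round(Hs / alpha))      # analysis hopsize, so that alpha = Hs / Ha
    w = np.hanning(frame_len)
    n_frames = (len(x) - frame_len) // Ha + 1
    out = np.zeros(Hs * (n_frames - 1) + frame_len)
    norm = np.zeros_like(out)
    for i in range(n_frames):
        frame = x[i * Ha : i * Ha + frame_len] * w   # windowed analysis frame
        out[i * Hs : i * Hs + frame_len] += frame    # relocated to the synthesis grid
        norm[i * Hs : i * Hs + frame_len] += w       # window-sum for normalization
    return out / np.maximum(norm, 1e-8)

x = np.sin(2 * np.pi * np.arange(40000) / 100.0)
y = ola_stretch(x, alpha=2.0)   # roughly twice as long, same sample rate
```

The output duration scales by α while the sample rate (and hence the pitch of each frame) is unchanged; the audible "warbling" of this naive version is what the synthesis-frame adaptation step in real TSM procedures is designed to remove.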
One way of stretching the length of a signal without affecting the pitch is to build a phase vocoder, after Flanagan and Golden. Recent improvements allow better quality results at all compression/expansion ratios, but a residual smearing effect still remains. Another method for time stretching relies on a spectral model of the signal. In this method, peaks are identified in spectral frames using the STFT of the signal, and the sinusoidal tracks are then re-synthesized at a new time scale. This method can yield good results on both polyphonic and percussive material, especially when the signal is separated into sub-bands.
Audio timescale-pitch modification
–
Sinusoidal analysis/synthesis system (based on McAulay & Quatieri 1988, p. 161)
31.
Scattering parameters
–
Scattering parameters or S-parameters describe the electrical behavior of linear electrical networks when undergoing various steady-state stimuli by electrical signals. The parameters are useful for several branches of electrical engineering, including electronics and communication systems design. The S-parameters are members of a family of similar parameters used to describe such networks. They differ from the others in the sense that S-parameters do not use open-circuit or short-circuit conditions to characterize a linear network; instead, matched loads are used. These terminations are much easier to use at high frequencies than open-circuit and short-circuit terminations. Moreover, the quantities are measured in terms of power. Many electrical properties of networks of components may be expressed using S-parameters, such as gain, return loss, voltage standing wave ratio, reflection coefficient and amplifier stability. The term "scattering" refers to the way travelling currents and voltages in a transmission line are affected when they meet a discontinuity caused by the insertion of a network into the line; this is equivalent to the wave meeting an impedance differing from the line's characteristic impedance. S-parameters change with the measurement frequency, so frequency must be specified for any S-parameter measurements stated. S-parameters are readily represented in matrix form and obey the rules of matrix algebra. The first published description of S-parameters was in the thesis of Vitold Belevitch in 1945. The name used by Belevitch was "repartition matrix", and consideration was limited to lumped-element networks. The term "scattering matrix" was used by physicist and engineer Robert Henry Dicke in 1947, who independently developed the idea during wartime work on radar. The network is characterized by a matrix of complex numbers called its S-parameter matrix. For the S-parameter definition, it is understood that a network may contain any components provided that the network behaves linearly with incident small signals. An electrical network to be described by S-parameters may have any number of ports; ports are the points at which electrical signals either enter or exit the network.
Ports are usually pairs of terminals with the requirement that the current into one terminal is equal to the current leaving the other. S-parameters are used at frequencies where the ports are often coaxial or waveguide connections. The S-parameter matrix describing an N-port network will be square of dimension N. At the test frequency each element or S-parameter is represented by a unitless complex number that represents magnitude and angle, i.e. amplitude and phase. The complex number may either be expressed in rectangular form or, more commonly, in polar form. The S-parameter magnitude may be expressed in linear form or logarithmic form; when expressed in logarithmic form, magnitude has the unit of decibels. The S-parameter angle is most frequently expressed in degrees but occasionally in radians. Any S-parameter may be displayed graphically on a polar diagram by a dot for one frequency or a locus for a range of frequencies.
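The magnitude and angle representations described above amount to a simple conversion from a complex S-parameter value; a small sketch using only the standard library (the example reflection coefficient is illustrative):

```python
import cmath
import math

def s_param_db_angle(s):
    """Convert a complex S-parameter to (magnitude in dB, angle in degrees)."""
    mag_db = 20 * math.log10(abs(s))          # logarithmic (decibel) magnitude
    angle_deg = math.degrees(cmath.phase(s))  # angle, most commonly in degrees
    return mag_db, angle_deg

# Example: a reflection coefficient of magnitude 0.1 at a 45 degree angle.
mag_db, angle_deg = s_param_db_angle(0.1 * cmath.exp(1j * math.radians(45)))
```

Here the magnitude converts to −20 dB, which for an S11 measurement corresponds to a 20 dB return loss.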
Scattering parameters
–
The basic parts of a vector network analyzer
32.
United States Geological Survey
–
The United States Geological Survey (USGS) is a scientific agency of the United States government. The scientists of the USGS study the landscape of the United States, its natural resources, and the natural hazards that threaten it. The organization has four major science disciplines, concerning biology, geography, geology, and hydrology. The USGS is a fact-finding research organization with no regulatory responsibility. The USGS is a bureau of the United States Department of the Interior; it employs approximately 8,670 people and is headquartered in Reston, Virginia. The USGS also has major offices near Lakewood, Colorado, at the Denver Federal Center. The current motto of the USGS, in use since August 1997, is "science for a changing world". The agency's previous slogan, adopted on the occasion of its hundredth anniversary, was "Earth Science in the Public Service". Prompted by a report from the National Academy of Sciences, the USGS was created by a last-minute amendment, and it was charged with the classification of the public lands and examination of the geological structure, mineral resources, and products of the national domain. This task was driven by the need to inventory the vast lands added to the United States by the Louisiana Purchase in 1803. The legislation also provided that the Hayden, Powell, and Wheeler surveys be discontinued as of June 30, 1879. Clarence King, the first director of the USGS, assembled the new organization from disparate regional survey agencies. After a short tenure, King was succeeded in the director's chair by John Wesley Powell. Administratively, the USGS is divided into a Headquarters unit and six Regional Units. Other specific programs include the Earthquake Hazards Program, which monitors earthquake activity worldwide. The National Earthquake Information Center in Golden, Colorado, on the campus of the Colorado School of Mines, detects the location and magnitude of global earthquakes. The USGS also runs or supports several regional monitoring networks in the United States under the umbrella of the Advanced National Seismic System.
The USGS informs authorities, emergency responders, and the media about significant earthquakes. It also maintains long-term archives of earthquake data for scientific and engineering research, and it conducts and supports research on long-term seismic hazards; the USGS has released the UCERF California earthquake forecast. The USGS National Geomagnetism Program monitors the Earth's magnetic field at magnetic observatories and distributes magnetometer data in real time. The USGS operates the streamgaging network for the United States, with over 7,400 streamgages; real-time streamflow data are available online. Since 1962, the Astrogeology Research Program has been involved in global, lunar, and planetary exploration and mapping. The USGS operates a number of water-related programs, notably the National Streamflow Information Program. USGS water data is available from the National Water Information System database.
United States Geological Survey
–
Clarence King, founder of the USGS
United States Geological Survey
–
Seal of the United States Geological Survey
United States Geological Survey
–
The USGS headquarters in
Reston, Virginia
United States Geological Survey
–
USGS gauging station 03221000 on the
Scioto River below
O'Shaughnessy Dam near
Dublin, Ohio
33.
Phase (waves)
–
Phase is the position of a point in time (an instant) on a waveform cycle. A complete cycle is defined as the interval required for the waveform to return to its initial value. The graphic to the right shows how one cycle constitutes 360° of phase. The graphic also shows how phase is sometimes expressed in radians, where one radian of phase equals approximately 57.3°. Phase can also be an expression of relative displacement between two corresponding features of two waveforms having the same frequency. In sinusoidal functions or in waves, "phase" has two different, but closely related, meanings. One is the initial angle of a sinusoidal function at its origin, sometimes called the phase offset or phase difference. Another usage is the fraction of the cycle that has elapsed relative to the origin. Phase shift is any change that occurs in the phase of one quantity. In an expression such as x(t) = A · cos(2πft + φ), the symbol φ is sometimes referred to as a phase shift or phase offset because it represents a shift from zero phase. For infinitely long sinusoids, a change in φ is the same as a shift in time. If x is delayed by ¼ of its cycle (a quarter period, T/4), it becomes

x(t) = A · cos(2πf(t − T/4) + φ) = A · cos(2πft + φ − π/2),

whose phase is now φ − π/2; it has been shifted by π/2 radians. Phase difference is the difference, expressed in degrees or time, between two waves having the same frequency and referenced to the same point in time. Two oscillators that have the same frequency and no phase difference are said to be in phase. Two oscillators that have the same frequency and different phases have a phase difference. The amount by which such oscillators are out of phase with each other can be expressed in degrees from 0° to 360°. If the phase difference is 180 degrees, then the two oscillators are said to be in antiphase. If two interacting waves meet at a point where they are in antiphase, then destructive interference will occur.
It is common for waves of electromagnetic, acoustic, or other energy to become superposed in their transmission medium; when that happens, the phase difference determines whether they reinforce or weaken each other. Complete cancellation is possible for waves with equal amplitudes. Time is sometimes used to express position within the cycle of an oscillation. A phase difference is analogous to two athletes running around a track at the same speed and in the same direction but starting at different positions on the track. They pass a given point at different instants in time, but the time difference between them is constant, the same for every pass, since they are moving at the same speed in the same direction. If they were moving at different speeds, the phase difference would be undefined.
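The notion of a constant phase difference between two equal-frequency waveforms can be checked numerically. Below is a minimal sketch in plain Python (the helper name `phase_of` and the 50 Hz tone are illustrative choices, not from the source): each signal's phase is estimated by correlating its samples against a complex reference at the same frequency, and the two phases are then subtracted.

```python
import cmath
import math

def phase_of(samples, f, fs):
    """Phase (radians) of a sinusoid at frequency f, estimated by
    correlating the samples against a complex exponential reference."""
    z = sum(s * cmath.exp(-2j * math.pi * f * n / fs)
            for n, s in enumerate(samples))
    return cmath.phase(z)

fs, f = 1000, 50                     # sample rate and tone frequency (Hz)
n_samples = 1000                     # one second: a whole number of cycles
a = [math.cos(2 * math.pi * f * n / fs) for n in range(n_samples)]
# b is a delayed by a quarter cycle, i.e. its phase is lower by pi/2
b = [math.cos(2 * math.pi * f * n / fs - math.pi / 2) for n in range(n_samples)]

diff = math.degrees(phase_of(a, f, fs) - phase_of(b, f, fs))
# diff is close to 90 degrees: b lags a by a quarter cycle
```

As in the running-track analogy, the difference is the same no matter which stretch of samples is analysed, because both signals advance at the same rate.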
Phase (waves)
–
Illustration of phase shift. The horizontal axis represents an angle (phase) that is increasing with time.
34.
Haskins Laboratories
–
Haskins Laboratories is an independent 501(c) non-profit corporation, founded in 1935 and located in New Haven, Connecticut since 1970. It is a multidisciplinary and international community of researchers that conducts research on spoken and written language. A guiding perspective of their research is to view speech and language as biological processes. Haskins Laboratories is equipped, in-house, with a suite of tools and capabilities to advance its mission of research into language. Magnetic resonance imaging: Haskins has access to MRI scanners through agreements with the University of Connecticut; on-site, HL has a GNU/Linux computer cluster dedicated to analysis of MRI data. Near-infrared spectroscopy: HL has a TechEn CW6 8x8 system. Ultrasound sonogram. Scores of researchers have contributed to scientific breakthroughs at Haskins Laboratories since its founding. All of them are indebted to the work and leadership of Caryl Parker Haskins, Franklin S. Cooper, Alvin Liberman, and Seymour Hutner. This history focuses on the program of the main division of Haskins Laboratories that, since the 1940s, has been best known for its work in the areas of speech and language. Caryl Haskins and Franklin S. Cooper established Haskins Laboratories in 1935; it was originally affiliated with Harvard University, MIT, and Union College in Schenectady, NY. Caryl Haskins conducted research in microbiology, radiation physics, and other fields in Cambridge, MA. In 1939 the Laboratories moved its center to New York City. Seymour Hutner joined the staff to set up a program in microbiology and genetics; the descendant of this program is now part of Pace University in New York. The U.S. Office of Scientific Research and Development, under Vannevar Bush, asked Haskins Laboratories to evaluate and develop technologies for assisting blinded World War II veterans. 
Experimental psychologist Alvin Liberman joined the Laboratories to assist in developing an alphabet to represent the letters in a text for use in a reading machine for the blind. Luigi Provasoli joined the Laboratories to set up a program in marine biology; the program in marine biology moved to Yale University in 1970. Franklin S. Cooper invented the pattern playback, a machine that converts pictures of the acoustic patterns of speech back into sound. With this device, Alvin Liberman, Cooper, and Pierre Delattre studied the acoustic cues of speech perception. Liberman, aided by Frances Ingemann and others, organized the results of the work on speech cues into a groundbreaking set of rules for speech synthesis by the Pattern Playback. Leigh Lisker and Arthur Abramson looked for simplification at the level of articulatory action in the voicing of certain contrasting consonants. They showed that many properties of voicing contrasts arise from variations in voice onset time, the relative phasing of the onset of vocal cord vibration.
Haskins Laboratories
–
Haskins Laboratories
35.
Dual (mathematics)
–
Such involutions sometimes have fixed points, so that the dual of A is A itself. For example, Desargues' theorem is self-dual in this sense under the standard duality in projective geometry. Many mathematical dualities between objects of two types correspond to pairings, bilinear functions from an object of one type and another object of the second type to some family of scalars. From a category theory viewpoint, duality can also be seen as a functor: this functor assigns to each space its dual space, and the pullback construction assigns to each arrow f: V → W its dual f∗: W∗ → V∗. In the words of Michael Atiyah, "duality in mathematics is not a theorem, but a 'principle'". The following list of examples shows the common features of many dualities, but also indicates that the precise meaning of duality may vary from case to case. A simple, maybe the most simple, duality arises from considering subsets of a fixed set S. To any subset A ⊆ S, the complement Ac consists of all those elements in S which are not contained in A. It is again a subset of S. Taking the complement has the following properties. Applying it twice gives back the original set: (Ac)c = A; this is referred to by saying that the operation of taking the complement is an involution. An inclusion of sets A ⊆ B is turned into an inclusion in the opposite direction, Bc ⊆ Ac. Given two subsets A and B of S, A is contained in Bc if and only if B is contained in Ac. This duality appears in topology as a duality between open and closed subsets of some fixed topological space X: a subset U of X is closed if and only if its complement is open. Because of this, many theorems about closed sets are dual to theorems about open sets. For example, any union of open sets is open, so dually, any intersection of closed sets is closed. The interior of a set is the largest open set contained in it, and the closure of a set is the smallest closed set containing it. Because of the duality, the complement of the interior of any set U is equal to the closure of the complement of U. A duality in geometry is provided by the dual cone construction. 
Given a set C of points in the plane R2, the dual cone is defined as the set C∗ of points x satisfying ⟨x, c⟩ ≥ 0 for every point c in C. Unlike for the complement of sets mentioned above, it is not in general true that applying the dual cone construction twice gives back the original set C. Instead, C∗∗ is the smallest cone containing C, which may be bigger than C. Therefore this duality is weaker than the one above, in that applying the operation twice gives back a possibly bigger set. The other two properties carry over without change: it is still true that an inclusion C ⊆ D is turned into an inclusion in the opposite direction, D∗ ⊆ C∗. Given two subsets C and D of the plane, C is contained in D∗ if and only if D is contained in C∗. A very important example of a duality arises in linear algebra by associating to any vector space V its dual vector space V*. Its elements are the k-linear maps φ: V → k. The three properties of the dual cone carry over to this type of duality by replacing subsets of R2 by vector spaces and inclusions of such subsets by linear maps. That is: applying the operation of taking the dual vector space twice gives another vector space V**, and there is always a natural map V → V**.
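The three properties of the simplest duality, set complementation, can be verified mechanically. Here is a minimal sketch using Python's built-in sets (the ambient set S and the particular subsets are arbitrary illustrative choices):

```python
S = set(range(10))           # a fixed ambient set S
A = {1, 2, 3}
B = {1, 2, 3, 4}

def comp(X):
    """Complement of X within the fixed set S."""
    return S - X

# Property 1: applying the complement twice gives back the original set.
involution_holds = comp(comp(A)) == A

# Property 2: an inclusion A <= B reverses to comp(B) <= comp(A).
reversal_holds = (A <= B) and (comp(B) <= comp(A))

# Property 3: A is contained in comp(B) iff B is contained in comp(A).
C, D = {0, 1}, {8, 9}
pairing_holds = (C <= comp(D)) == (D <= comp(C))
```

All three flags come out true for any choice of subsets of S, which is exactly what makes complementation an involution that reverses inclusions.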
Dual (mathematics)
–
A set C (blue) and its dual cone C* (red).
36.
Instantaneous frequency
–
Instantaneous phase and instantaneous frequency are important concepts in signal processing that occur in the context of the representation and analysis of time-varying functions. The instantaneous phase of a complex-valued function s(t) is the real-valued function ϕ(t) = arg[s(t)]. For a real-valued function s(t), it is determined from the function's analytic representation, sa(t). When ϕ is constrained to its principal value, either the interval (−π, π] or [0, 2π), it is called wrapped phase. Otherwise it is called unwrapped phase, which is a continuous function of argument t. Unless otherwise indicated, the continuous form should be inferred. For example, if s(t) = A cos(ωt + θ), where ω > 0, then sa(t) = A e^{j(ωt + θ)} and ϕ(t) = ωt + θ. In this simple sinusoidal example, the constant θ is also commonly referred to as phase or phase offset; ϕ(t) is a function of time, but θ is not. In the next example, we also see that the phase offset of a real-valued sinusoid is ambiguous unless a reference is specified: if s(t) = A sin(ωt) = A cos(ωt − π/2), where ω > 0, then sa(t) = A e^{j(ωt − π/2)} and ϕ(t) = ωt − π/2. In both examples the local maxima of s(t) correspond to ϕ(t) = 2πN for integer values of N; this has applications in the field of computer vision. Instantaneous angular frequency is defined as ω(t) = dϕ(t)/dt. If ϕ is wrapped, discontinuities in ϕ will result in Dirac delta impulses in the frequency. The instantaneous frequency can instead be derived directly from the real and imaginary parts of sa(t), without concern for phase unwrapping: ϕ(t) = arg[sa(t)] = atan2(Im[sa(t)], Re[sa(t)]) + 2m1π, for an arbitrary integer m1. Discontinuities can then be removed by adding 2π whenever Δϕ ≤ −π, and subtracting 2π whenever Δϕ > π. That allows ϕ to accumulate without limit and produces an unwrapped instantaneous phase. An equivalent formulation that replaces the modulo-2π operation with a complex multiplication is ϕ[n] = ϕ[n−1] + arg(sa[n] sa*[n−1]), where the asterisk denotes the complex conjugate. The discrete-time instantaneous frequency is simply the advancement of phase for that sample, ω[n] = arg(sa[n] sa*[n−1]). A vector-average phase can be obtained as the arg of a sum of the complex numbers without concern about wrap-around. 
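The unwrapping rule and the conjugate-product form of the discrete-time instantaneous frequency can be sketched directly in plain Python (the helper name `unwrap` and the 5 Hz test tone are illustrative assumptions):

```python
import cmath
import math

def unwrap(phases):
    """Remove 2*pi jumps: shift each step back into (-pi, pi] and accumulate."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))   # wrap the increment
        out.append(out[-1] + d)
    return out

fs, f = 100.0, 5.0
# Analytic signal of A*cos(w t + theta) is A*exp(j(w t + theta)); here A=1, theta=0.
sa = [cmath.exp(2j * math.pi * f * n / fs) for n in range(200)]

wrapped = [cmath.phase(z) for z in sa]     # confined to (-pi, pi]
unwrapped = unwrap(wrapped)                # accumulates without limit

# Discrete-time instantaneous frequency: phase advance per sample,
# via the complex-conjugate product (no unwrapping needed).
inst_f = [cmath.phase(sa[n] * sa[n - 1].conjugate()) * fs / (2 * math.pi)
          for n in range(1, len(sa))]
# every element of inst_f is close to 5.0 Hz
```

The conjugate product works because arg(z1·z2*) = arg(z1) − arg(z2) modulo 2π, so each per-sample phase advance is obtained already wrapped into (−π, π].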
See also: Analytic signal, Frequency modulation
Instantaneous frequency
–
Instantaneous phase vs time.
37.
Heisenberg uncertainty principle
–
The formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928. Heisenberg offered an observer effect at the quantum level as a physical explanation of quantum uncertainty; however, the uncertainty principle actually states a fundamental property of quantum systems. Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their research program. These include, for example, tests of number–phase uncertainty relations in superconducting or quantum optics systems. Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational wave interferometers. The uncertainty principle is not readily apparent on the scales of everyday experience, so it is helpful to demonstrate how it applies to more easily understood physical situations. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive: a nonzero function and its Fourier transform cannot both be sharply localized. In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables are subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value; for example, if a measurement of an observable A is performed, then the system is in a particular eigenstate Ψ of that observable. According to the de Broglie hypothesis, every object in the universe is a wave, and the position of the particle is described by a wave function Ψ. 
The time-independent wave function of a plane wave of wavenumber k0 or momentum p0 is ψ(x) ∝ e^{i k0 x} = e^{i p0 x/ℏ}. In the case of the plane wave, |ψ|² is a uniform distribution. In other words, the particle's position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet. The figures to the right show how, with the addition of many plane waves, the wave packet can become more localized. In mathematical terms, we say that ϕ(p) is the Fourier transform of ψ(x) and that x and p are conjugate variables. Adding together all of these plane waves comes at a cost, namely the momentum has become less precise. One way to quantify the precision of the position and momentum is the standard deviation σ. Since |ψ|² is a probability density function for position, we can compute its standard deviation. The precision of the position is improved, i.e. σx is reduced, by using many plane waves, thereby weakening the precision of the momentum, i.e. increasing σp. Another way of stating this is that σx and σp have an inverse relationship, or are at least bounded from below.
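The Fourier-transform statement of the bound can be checked numerically. The sketch below (plain Python; the grid size and extent are arbitrary assumptions) discretizes a Gaussian wave packet, computes its discrete Fourier transform, and compares the spreads σx and σk. For a Gaussian the product attains the minimum, which in units with ℏ = 1 is σx·σp = 1/2:

```python
import cmath
import math

N, L = 256, 20.0                 # number of samples and spatial extent
dx = L / N
xs = [(n - N / 2) * dx for n in range(N)]
psi = [math.exp(-x * x / 2) for x in xs]     # Gaussian wave packet

def spread(values, coords):
    """Standard deviation of coords weighted by |values|^2."""
    w = [v * v for v in values]
    tot = sum(w)
    mean = sum(c * p for c, p in zip(coords, w)) / tot
    var = sum((c - mean) ** 2 * p for c, p in zip(coords, w)) / tot
    return math.sqrt(var)

# DFT magnitudes give the momentum-space amplitude |phi(k)|.
phi = [abs(sum(p * cmath.exp(-2j * math.pi * k * n / N)
               for n, p in enumerate(psi))) for k in range(N)]
# Map DFT bin index to angular wavenumber, with negative frequencies folded.
ks = [2 * math.pi * (k if k < N / 2 else k - N) / L for k in range(N)]

product = spread(psi, xs) * spread(phi, ks)
# product is close to 0.5, the Heisenberg lower bound (hbar = 1)
```

A sharper Gaussian in x (smaller σx) broadens |φ(k)| and vice versa, while the product stays pinned at the bound; non-Gaussian packets give a strictly larger product.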
Heisenberg uncertainty principle
–
Werner Heisenberg and Niels Bohr
Heisenberg uncertainty principle
–
Click to see animation. The evolution of an initially very localized gaussian wave function of a free particle in two-dimensional space, with colour and intensity indicating phase and amplitude. The spreading of the wave function in all directions shows that the initial momentum has a spread of values, unmodified in time; while the spread in position increases in time: as a result, the uncertainty Δx Δp increases in time.
38.
Acoustic signature
–
For signature reduction in general, see Stealth technology. Acoustic signature is used to describe a combination of acoustic emissions of sound emitters, such as those of ships and submarines. The analysis of acoustic signatures is an important adjunct to passive sonar used to track naval warships and weapons. Similar methods have been used to identify aircraft, especially before the development of sophisticated radar tracking. The acoustic signature is made up of a number of individual elements. These include: machinery noise, generated by a ship's engines, propeller shafts, fuel pumps, air conditioning systems, etc.; cavitation noise, generated by the creation of gas bubbles by the turning of a ship's propellers; and hydrodynamic noise, generated by the movement of water displaced by the hull of a moving vessel. These emissions depend on a hull's dimensions and the installed machinery, so different ship classes will have different combinations of acoustic signals that together form a unique signature. Hydrophones and sonar operating in passive mode can detect acoustic signals radiated by otherwise invisible submarines, or distinguish an aircraft carrier from its escorts. Warship designers aim to reduce the acoustic signature of ships and submarines just as much as they aim to reduce the radar cross-section; for submarines, since the acoustic signature is a prime factor in how they can be detected, its reduction is a primary goal. The acoustic signature can be reduced by fitting machinery with the best possible mechanical tolerances; decoupling the machinery from the hull by mounting it on rubber blocks; designing propellers to reduce cavitation, which led to the development of large, slow-turning propellers; fitting anechoic tiles to the hull (although ill-fitting and loose anechoic tiles can themselves be a source of noise); shaping the hull for hydrodynamic efficiency to minimise the perturbation of water; and taking care to minimise protrusions from the hull. 
For a time the Royal Navy toyed with the idea of the trimaran-hulled Future Surface Combatant; these ships would have had a very low acoustic signature. With three blade-like hulls they would have cut through the water with a minimum of hydrodynamic noise. This project got as far as the construction of the research ship RV Triton to test the principle of a large-scale trimaran design. See also: Underwater acoustics, Stealth ship, Type 45 destroyer, Anti-submarine warfare, Submarine warfare, Upholder/Victoria-class submarine, Teardrop hull, Spectrogram
Acoustic signature
–
The
RV Triton
39.
Chromagram
–
In the music context, the term chroma feature or chromagram closely relates to the twelve different pitch classes. One main property of chroma features is that they capture harmonic and melodic characteristics of music while being robust to changes in timbre. The underlying observation is that humans perceive two musical pitches as similar in color if they differ by an octave. Based on this observation, a pitch can be separated into two components, which are referred to as tone height and chroma. Assuming the equal-tempered scale, one considers twelve chroma values, represented by the set of twelve pitch spelling attributes as used in Western music notation. Note that in the equal-tempered scale, different pitch spellings such as C♯ and D♭ refer to the same chroma. Enumerating the chroma values, one can identify the set of chroma values with a set of integers, where 1 refers to chroma C, 2 to C♯, and so on. A pitch class is defined as the set of all pitches that share the same chroma. For example, using scientific pitch notation, the pitch class corresponding to the chroma C is the set consisting of all pitches C separated by an integer number of octaves. Given a music representation, the idea of chroma features is to aggregate, for a given local time window, all information that relates to a given chroma into a single coefficient. The resulting time-chroma representation is also referred to as a chromagram. The figure above shows chromagrams for a C-major scale, once obtained from a musical score and once from an audio recording. Because of the close relation between the terms chroma and pitch class, chroma features are also referred to as pitch class profiles. By identifying pitches that differ by an octave, chroma features show a high degree of robustness to variations in timbre. This is the reason why chroma features are a well-established tool for processing and analyzing music data. 
For example, basically every chord recognition procedure relies on some kind of chroma representation. Also, chroma features have become the de facto standard for tasks such as music alignment and synchronization, as well as audio structure analysis. Finally, chroma features have turned out to be a powerful mid-level feature representation in content-based audio retrieval, such as cover song identification or audio matching. There are many ways of converting an audio recording into a chromagram. Furthermore, the properties of chroma features can be changed by introducing suitable pre- and post-processing steps modifying spectral, temporal, and dynamic aspects. This leads to a large number of chroma variants, which may show quite different behavior in the context of a specific music analysis scenario.
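The octave-folding idea behind a chroma vector can be sketched in a few lines of plain Python (the helper names, the 440 Hz reference, and the toy list of spectral peaks are illustrative assumptions; real systems aggregate full spectrogram frames, not hand-picked peaks):

```python
import math

def chroma_index(freq_hz, a4=440.0):
    """Chroma class 0..11 (0 = C) of a frequency in equal temperament."""
    midi = 69 + 12 * math.log2(freq_hz / a4)   # 69 = A4 in MIDI numbering
    return round(midi) % 12                     # fold all octaves together

def chroma_vector(peaks):
    """Aggregate (frequency, energy) spectral peaks into 12 chroma bins."""
    v = [0.0] * 12
    for f, e in peaks:
        v[chroma_index(f)] += e
    return v

# C4 and C5 differ by an octave, so they fold into the same chroma bin;
# E4 lands in its own bin.
v = chroma_vector([(261.63, 1.0), (523.25, 1.0), (329.63, 0.5)])
# v[0] (chroma C) == 2.0 and v[4] (chroma E) == 0.5
```

Computing such a vector for every short time window of a recording and stacking the results column by column yields exactly the time-chroma representation described above.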
Chromagram
–
Fig.1 General HPCP feature extraction block diagram
Chromagram
–
Fig.2 Example of a high-resolution HPCP sequence
Chromagram
–
Fig.3 System of measuring similarity between two songs
40.
List of unexplained sounds
–
The following is a list of unidentified, or formerly unidentified, sounds. The following unidentified sounds have been detected by the U.S. National Oceanic and Atmospheric Administration (NOAA) using its Equatorial Pacific Ocean autonomous hydrophone array. Upsweep is an unidentified sound detected on the NOAA's equatorial autonomous hydrophone arrays. This sound was present when the Pacific Marine Environmental Laboratory began recording its sound surveillance system SOSUS in August 1991. It consists of a long train of narrow-band upsweeping sounds of several seconds' duration each. The source level is high enough for the sound to be recorded throughout the Pacific. The sound appears to be seasonal, generally reaching peaks in spring and autumn. The source can be roughly located at 54°S 140°W, near the location of inferred volcanic seismicity, but the origin of the sound is unresolved. The overall source level has been declining since 1991, but the sounds can still be detected on the NOAA's equatorial autonomous hydrophone arrays. Bloop is the name given to an ultra-low-frequency and extremely powerful underwater sound detected by the U.S. National Oceanic and Atmospheric Administration in 1997. The sound is consistent with the noises generated by icequakes in large icebergs. According to the NOAA description, it rises rapidly in frequency over about one minute and was of sufficient amplitude to be heard on multiple sensors, at a range of over 5,000 km. The NOAA's Dr. Christopher Fox did not believe its origin was man-made, such as a submarine or bomb. The NOAA Vents Program has attributed the sound to that of a large icequake. Numerous icequakes share similar spectrograms with Bloop, as well as the amplitude necessary to detect them despite ranges exceeding 5,000 km. Such icequakes were found during the tracking of iceberg A53a as it disintegrated near South Georgia Island in early 2008. Julia is a sound recorded on March 1, 1999 by the U.S. National Oceanic and Atmospheric Administration. 
NOAA said the source of the sound was most likely a large iceberg that had run aground off Antarctica. It was loud enough to be heard over the entire Equatorial Pacific Ocean autonomous hydrophone array. The unidentified sound lasted for about 15 seconds; due to the uncertainty of the arrival azimuth, the point of origin could be between the Bransfield Straits and Cape Adare. Slow Down is a sound recorded on May 19, 1997; the source of the sound was most likely a large iceberg as it became grounded. The name was given because the sound slowly decreases in frequency over about 7 minutes. It was recorded using an autonomous hydrophone array, and has been picked up several times each year since 1997. One of the hypotheses on the origin of the sound is moving ice in Antarctica: sound spectrograms of vibrations caused by friction closely resemble the spectrogram of Slow Down, which suggests the sound could have been caused by friction between a large ice sheet and the land it moves over. The Train is the name given to a sound recorded on March 5, 1997 on the Equatorial Pacific Ocean autonomous hydrophone array. The sound rises to a quasi-steady frequency. According to the NOAA, the sound is most likely generated by a very large iceberg grounded in the Ross Sea, near Cape Adare
List of unexplained sounds
–
Spectrogram of the Upsweep sound
List of unexplained sounds
–
Spectrogram of the Whistle sound
List of unexplained sounds
–
A
spectrogram of Bloop
List of unexplained sounds
–
A
spectrogram of "Julia".
41.
Reassignment method
–
The method has been independently introduced by several parties under various names, including method of reassignment, remapping, time-frequency reassignment, and modified moving-window method. The mapping to reassigned time-frequency coordinates is very precise for signals that are separable in time and frequency. Many signals of interest have a distribution of energy that varies in time and frequency, and time-frequency representations are used to analyze or characterize such signals. They map the one-dimensional time-domain signal into a two-dimensional function of time and frequency; such a representation describes the variation of spectral energy distribution over time. One of the best-known time-frequency representations is the spectrogram, defined as the squared magnitude of the short-time Fourier transform. As a time-frequency representation, the spectrogram has relatively poor resolution: time and frequency resolution are governed by the choice of analysis window, and greater concentration in one domain is accompanied by greater smearing in the other. The Wigner–Ville distribution is highly concentrated in time and frequency, but it is also highly nonlinear, so cross-terms smear the distribution. This smearing causes the distribution to be non-zero in regions where the true Wigner–Ville distribution shows no energy. The spectrogram is a member of Cohen's class: it is a smoothed Wigner–Ville distribution with the smoothing kernel equal to the Wigner–Ville distribution of the analysis window. The method of reassignment smooths the Wigner–Ville distribution, but then refocuses the distribution back to the true regions of support of the signal components. The method has been shown to reduce time and frequency smearing of any member of Cohen's class. In reconstruction, positive and negative contributions to the synthesized waveform cancel, due to destructive interference. The reassignment quantities are local in the sense that they represent a windowed and filtered signal that is localized in time and frequency, and are not global properties of the signal under analysis. 
This point is called the center of gravity of the distribution. In digital signal processing, it is most common to sample the time and frequency domains. The discrete Fourier transform is used to compute samples X(k) of the Fourier transform from samples x(n) of a time-domain signal. The reassignment operations proposed by Kodera et al. require partial derivatives of the short-time phase spectrum, and it is possible to approximate these partial derivatives using finite differences. Nelson arrived at a similar method for improving the time-frequency precision of short-time spectral data from partial derivatives of the short-time phase spectrum. It is easily shown that Nelson's cross-spectral surfaces compute an approximation of the derivatives that is equivalent to the finite-differences method. Auger and Flandrin showed that the method of reassignment, proposed in the context of the spectrogram by Kodera et al., could be extended to any member of Cohen's class. One constraint in this method of computation is that |X|² must be non-zero. This is not much of a restriction, since the reassignment operation itself implies that there is some energy to reassign, and has no meaning when the distribution is zero-valued
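The finite-difference approximation of the phase derivative can be sketched for the frequency-reassignment operation alone. The sketch below (plain Python; the window length, tone frequency, and helper name `stft_bin` are illustrative assumptions, not the Kodera et al. or Auger–Flandrin formulations) measures the phase advance of one spectrogram bin between two adjacent frames; that advance moves the bin's nominal frequency onto the component's true frequency:

```python
import cmath
import math

def stft_bin(x, t, k, N):
    """One short-time Fourier coefficient: Hann window of length N centred
    on sample t, analysed at bin k (angular frequency 2*pi*k/N per sample)."""
    acc = 0j
    for n in range(N):
        m = t - N // 2 + n
        if 0 <= m < len(x):
            w = 0.5 - 0.5 * math.cos(2 * math.pi * n / N)   # Hann window
            acc += x[m] * w * cmath.exp(-2j * math.pi * k * m / N)
    return acc

fs, f0, N = 1000.0, 210.0, 128
x = [math.cos(2 * math.pi * f0 * n / fs) for n in range(1000)]

# Finite difference of the short-time phase between frames t and t+1:
t, k = 500, 27        # bin 27 is nominally fs*k/N = 210.9 Hz, not 210 Hz
dphi = cmath.phase(stft_bin(x, t + 1, k, N) * stft_bin(x, t, k, N).conjugate())
f_reassigned = (2 * math.pi * k / N + dphi) * fs / (2 * math.pi)
# f_reassigned is close to the true 210 Hz rather than the bin centre
```

Every bin excited by the sinusoid reassigns to (nearly) the same 210 Hz line, which is how the method collapses the smeared spectrogram ridge back onto the component's true region of support.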
Reassignment method
–
Reassigned spectral surface for the onset of an acoustic bass tone having a sharp pluck and a fundamental frequency of approximately 73.4 Hz. Sharp spectral ridges representing the harmonics are evident, as is the abrupt onset of the tone. The spectrogram was computed using a 65.7 ms Kaiser window with a shaping parameter of 12.
42.
Scaleogram
–
In signal processing, a scaleogram or scalogram is a visual method of displaying a wavelet transform. There are three axes: x representing time, y representing scale, and z representing the coefficient value; the z axis is often shown by varying the colour or brightness. A scaleogram is the equivalent of a spectrogram for wavelets
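A minimal scaleogram can be built row by row, one scale at a time. The sketch below (plain Python; the Morlet-style wavelet, the scale values, and the helper name `cwt_row` are illustrative assumptions) correlates a 5 Hz tone against a complex wavelet at three scales; the energy concentrates in the row whose scale matches the tone:

```python
import cmath
import math

def cwt_row(x, scale, w0=6.0):
    """One scaleogram row: Morlet-style wavelet transform magnitudes of
    signal x at a single scale (larger scale = lower frequency)."""
    half = int(4 * scale)            # wavelet support, ~4 scales each side
    row = []
    for t in range(len(x)):
        acc = 0j
        for u in range(-half, half + 1):
            if 0 <= t + u < len(x):
                s = u / scale
                # complex exponential under a Gaussian envelope
                acc += x[t + u] * cmath.exp(-1j * w0 * s) * math.exp(-s * s / 2)
        row.append(abs(acc) / math.sqrt(scale))
    return row

fs = 100.0
x = [math.sin(2 * math.pi * 5.0 * n / fs) for n in range(300)]
# The 5 Hz tone resonates near scale w0*fs/(2*pi*5) ~ 19; 5 and 60 miss it.
scaleogram = [cwt_row(x, s) for s in (5.0, 19.0, 60.0)]
```

Plotting these rows stacked along the y axis, with magnitude as colour, gives exactly the time/scale/coefficient picture described above.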
Scaleogram
–
Scaleograms from the
DWT and
CWT for an audio sample
43.
Spectrometer
–
A spectrometer is a scientific instrument originally used to split light into an array of separate colors, called a spectrum. Spectrometers were developed in early studies of physics, astronomy, and chemistry. The capability of spectroscopy to determine chemical composition drove its advancement and continues to be one of its primary uses. Spectrometers are used in astronomy to analyze the chemical composition of stars and planets. The concept of a spectrometer now encompasses instruments that do not examine light: such spectrometers separate particles, atoms, and molecules by their mass, momentum, or energy, and are used in chemical analysis and particle physics. Optical spectrometers, in particular, show the intensity of light as a function of wavelength or of frequency; the deflection is produced either by refraction in a prism or by diffraction in a diffraction grating. These spectrometers utilize the phenomenon of optical dispersion. The light from a source can consist of a continuous spectrum, an emission spectrum, or an absorption spectrum. Because each element leaves its spectral signature in the pattern of lines observed, a spectral analysis can reveal the composition of the object being analyzed. A mass spectrometer is an analytical instrument that is used to identify the amount and type of chemicals present in a sample by measuring the mass-to-charge ratio and abundance of gas-phase ions. The energy spectrum of particles of known mass can also be measured by determining the time of flight between two detectors in a time-of-flight spectrometer; alternatively, if the velocity is known, masses can be determined in a mass spectrometer. When a fast charged particle enters a constant magnetic field B at right angles, it is deflected into a circular path of radius r. The momentum p of the particle is given by p = mv = qBr. In the focusing principle of the oldest and simplest magnetic spectrometer, the semicircular spectrometer, a constant magnetic field is perpendicular to the page. 
Charged particles of momentum p that pass the slit are deflected into circular paths of radius r = p/qB. It turns out that they all hit the horizontal line at nearly the same place, the focus; a particle counter should be placed there. Since Danysz's time, many types of magnetic spectrometers more complicated than the semicircular type have been devised. Generally, the resolution of an instrument tells us how well two close-lying energies can be resolved; for an instrument with mechanical slits, higher resolution will mean lower intensity
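The relation p = qBr can be rearranged into a quick numerical estimate of the bending radius. A minimal sketch (plain Python; the 10 MeV/c electron and 0.5 T field are arbitrary illustrative values):

```python
# Bending radius r = p / (q B) of a charged particle in a magnetic
# spectrometer, for a singly charged particle with 10 MeV/c momentum
# in a 0.5 T field.
E_CHARGE = 1.602176634e-19      # elementary charge, C
MEV_C = 5.344286e-22            # 1 MeV/c expressed in SI units, kg*m/s

def bending_radius(p_mev_c, b_tesla, charge=E_CHARGE):
    p = p_mev_c * MEV_C         # convert momentum to kg*m/s
    return p / (charge * b_tesla)

r = bending_radius(10.0, 0.5)   # metres
# r is roughly 0.067 m, i.e. about 6.7 cm
```

Doubling the field halves the radius, which is why the same spectrometer magnet can be swept to scan a range of momenta past a fixed slit and counter.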
Spectrometer
–
A positive charged particle moving in a circle under the influence of the Lorentz force F
44.
Spectrum
–
A spectrum is a condition that is not limited to a specific set of values but can vary, without steps, across a continuum. The word was first used scientifically in optics to describe the rainbow of colors in visible light after passing through a prism; as scientific understanding of light advanced, it came to apply to the entire electromagnetic spectrum. Spectrum has since been applied by analogy to topics outside of optics. Thus, one might talk about the spectrum of political opinion, or the spectrum of activity of a drug, or the autism spectrum. In these uses, values within a spectrum may not be associated with precisely quantifiable numbers or definitions; such uses imply a broad range of conditions or behaviors grouped together and studied under a single title for ease of discussion. Nonscientific uses of the term spectrum are sometimes misleading. For instance, a single left–right spectrum of opinion does not capture the full range of people's political beliefs, so political scientists use a variety of biaxial and multiaxial systems to more accurately characterize political opinion. In most modern usages of spectrum there is a unifying theme between the extremes at either end. This was not always true in older usage: in Latin, spectrum means image or apparition, including the meaning spectre. Spectral evidence is testimony about what was done by spectres of persons not present physically; it was used to convict a number of persons of witchcraft at Salem, Massachusetts in the late 17th century. The word spectrum was used to designate a ghostly optical afterimage by Goethe in his Theory of Colors and by Schopenhauer in On Vision and Colors. The prefix spectro- is used to form words relating to spectra; for example, a spectrometer is a device used to record spectra, and spectroscopy is the use of a spectrometer for chemical analysis. 
In the 17th century the word spectrum was introduced into optics by Isaac Newton. Soon the term referred to a plot of light intensity or power as a function of frequency or wavelength, also known as a spectral density plot. The term spectrum was expanded to apply to other waves, such as sound waves, that could also be measured as a function of frequency, giving the terms frequency spectrum and power spectrum of a signal. The term now applies to any signal that can be measured or decomposed along a continuous variable, such as energy in electron spectroscopy or mass-to-charge ratio in mass spectrometry. Spectrum is also used to refer to a graphical representation of the signal as a function of the dependent variable. Devices used to measure an electromagnetic spectrum are called spectrographs or spectrometers. The visible spectrum is the part of the electromagnetic spectrum that can be seen by the human eye; its wavelengths range from 390 to 700 nm
Spectrum
–
The spectrum in a
rainbow
Spectrum
–
Electromagnetic spectrum of a
quasar.
Spectrum
–
Mass spectrum of
Titan 's
ionosphere
Spectrum
–
Spectrogram of
dolphin vocalizations.
45.
Strobe tuner
–
"Guitar tuner" redirects here, but can also refer to the string tension adjusters, also called machine heads. For the radio receiver component, see Tuner. In music, a tuner is a device that detects and displays the pitch of musical notes. Pitch is the highness or lowness of a note, which is typically measured in hertz. Simple tuners indicate, typically with an analog needle-dial, LEDs, or an LCD screen, whether a pitch is lower or higher than the desired pitch. In the 2010s, software applications can turn a smartphone, tablet, or personal computer into a tuner. More complex and expensive tuners indicate pitch more precisely. Tuners vary in size from units that fit in a pocket to 19-inch rack-mount units. Instrument technicians, piano tuners, and violin-family luthiers typically use the more expensive, more precise tuners. The simplest tuners detect and display tuning only for a single pitch, often A or E, or for a small number of pitches, such as the six used in the standard tuning of a guitar. More complex tuners offer chromatic tuning for all 12 pitches of the equally tempered octave. Among the most accurate tuning devices, strobe tuners work differently than regular electronic tuners. They are stroboscopes that flicker a light at the same frequency as the note. The light shines on a wheel that spins at a precise speed; the interaction of the light and the regularly spaced marks on the wheel creates a stroboscopic effect that makes the marks for a particular pitch appear to stand still when the pitch is in tune. These can tune instruments and audio devices more accurately than most non-strobe tuners. Regular electronic tuners contain either an input jack for electric instruments, a microphone, or a clip-on sensor, or some combination of these inputs. Pitch detection circuitry drives some type of display. Some tuners have an output, or through-put, so the tuner can connect in-line from an electric instrument to an instrument amplifier or mixing console. Small tuners are usually battery powered; many battery-powered tuners also have a jack for an optional AC power supply. 
Most musical instruments generate a complex waveform that contains a number of partials, including the fundamental frequency. Each instrument produces different ratios of harmonics, which is what makes notes of the same pitch sound different when played on different instruments. This waveform also changes constantly, which means that for non-strobe tuners to be accurate, the tuner must process a number of cycles and use the average pitch to drive its display. Background noise from other musicians, or harmonic overtones from the instrument, can impede the electronic tuner from locking onto the input frequency. This is why the needle or display on regular electronic tuners tends to waver when a pitch is played. Small movements of the needle, or LED, usually represent a tuning error of 1 cent.
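The cent measurement and the averaging step can be sketched as follows (the helper names are invented for this illustration; 100 cents make one equal-tempered semitone, so a cent is 1/1200 of an octave):

```python
import math

# Sketch (function names invented for this example): express a detected
# frequency as a tuning error in cents relative to a target pitch.
# One cent is 1/1200 of an octave, i.e. a frequency ratio of 2**(1/1200).

def cents_error(detected_hz, target_hz):
    return 1200.0 * math.log2(detected_hz / target_hz)

# A note 1 Hz sharp of A440 is only about 3.9 cents sharp.
print(round(cents_error(441.0, 440.0), 1))  # 3.9

# Averaging several per-cycle frequency estimates, as the text describes
# for non-strobe tuners, steadies the reading against waveform changes.
def averaged_cents(estimates_hz, target_hz):
    mean_hz = sum(estimates_hz) / len(estimates_hz)
    return cents_error(mean_hz, target_hz)
```

A 1 cent needle step thus corresponds to a frequency ratio of about 1.00058, which is why single-cycle estimates are too jittery to display directly.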
Strobe tuner
–
Pocket-sized Korg chromatic LCD tuner, with simulated analog indicator needle
Strobe tuner
–
Some rock and pop guitarists and bassists use "stompbox" format electronic tuners that route the electric signal from the instrument through the unit via a 1/4" patch cable.
Strobe tuner
–
A clip-on tuner attaches to the instrument and senses the vibrations from the instrument, even in a noisy environment.
Strobe tuner
–
Pattern of a mechanical strobe tuner disc
46.
Waterfall plot
–
A waterfall plot is a three-dimensional plot in which multiple curves of data, typically spectra, are displayed simultaneously. Typically the curves are staggered both across the screen and vertically, with nearer curves masking the ones behind; the result is a series of mountain shapes that appear to be side by side. The waterfall plot is used to show how two-dimensional information changes over time or some other variable such as rpm. The term waterfall plot is sometimes used interchangeably with spectrogram or Cumulative Spectral Decay (CSD) plot. Waterfall plots are used to show the results of spectral density estimation; the delayed response of a loudspeaker or listening room, produced by impulse response testing or MLSSA; and spectra at different engine speeds when testing engines.
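The data behind such a plot is simply a stack of per-frame spectra. The following toy example (naive DFT, with invented frame and hop sizes) computes that stack for a signal whose tone steps up halfway through, so the spectral peak visibly moves between the first and last rows:

```python
import cmath
import math

# Sketch (not from the article): build the data behind a waterfall plot.
# Each row is the magnitude spectrum of one time frame; staggering the rows
# across the screen produces the "side by side mountains" described above.

def frame_spectrum(frame):
    """Naive DFT magnitude spectrum of one frame (first half of the bins)."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def waterfall(signal, frame_len=64, hop=32):
    return [frame_spectrum(signal[i:i + frame_len])
            for i in range(0, len(signal) - frame_len + 1, hop)]

# A tone that jumps from 4 to 8 cycles per frame halfway through the signal.
sig = ([math.sin(2 * math.pi * 4 * t / 64) for t in range(256)]
       + [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)])
rows = waterfall(sig)

# The peak bin moves from 4 in the earliest row to 8 in the latest row.
assert max(range(len(rows[0])), key=rows[0].__getitem__) == 4
assert max(range(len(rows[-1])), key=rows[-1].__getitem__) == 8
```

A plotting library would then draw each row offset slightly up and across from the previous one; the computation of the rows is the same either way.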
Waterfall plot
–
Spectrogram and 3 styles of waterfall plot of a whistled sequence of 3 notes vs time
47.
Wavelet transform
–
In mathematics, a wavelet series is a representation of a square-integrable function by a certain orthonormal series generated by a wavelet. Nowadays, wavelet transformation is one of the most popular time-frequency transformations. This article provides a formal, mathematical definition of an orthonormal wavelet and of the integral wavelet transform. A function ψ ∈ L2(R) is called an orthonormal wavelet if it can be used to define a Hilbert basis, that is, a complete orthonormal system, for the Hilbert space L2(R) of square-integrable functions. The Hilbert basis is constructed as a family of functions by means of translations and dilations of ψ: ψ_{j,k}(x) = 2^{j/2} ψ(2^j x − k) for integers j, k ∈ Z. Such a representation of a function f is known as a wavelet series; this implies that an orthonormal wavelet is self-dual. The fundamental idea of wavelet transforms is that the transformation should allow only changes in time extension, and this is effected by choosing suitable basis functions that allow for it. Changes in the time extension are expected to conform to the corresponding analysis frequency of the basis function. By the uncertainty principle of signal processing, Δt · Δω ≥ 1/2, where t represents time and ω angular frequency: the higher the required resolution in time, the lower the resolution in frequency has to be. The larger the extension of the analysis window, the larger the value of Δt. The transformed signal provides information about both the time and the frequency. The difference in time resolution at ascending frequencies for the Fourier transform and the wavelet transform is shown below. This shows that wavelet transformation is good in time resolution of high frequencies, while for slowly varying functions the frequency resolution is remarkable. Another example is the analysis of three superposed sinusoidal signals of different frequencies with the STFT and the wavelet transformation.
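A minimal concrete instance of the construction ψ_{j,k}(x) = 2^{j/2} ψ(2^j x − k), assuming the Haar mother wavelet (the classic example, chosen here for illustration rather than taken from the text): a small numerical check shows that distinct family members are orthogonal and each has unit L2 norm.

```python
# Sketch (assumes the Haar mother wavelet, not named in the text above):
# build the family psi_{j,k}(x) = 2**(j/2) * psi(2**j * x - k) by dilation
# and translation, and check orthonormality numerically.

def haar(x):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    if 0.0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1.0:
        return -1.0
    return 0.0

def psi(j, k, x):
    return 2.0 ** (j / 2) * haar(2.0 ** j * x - k)

def inner(f, g, lo=-2.0, hi=3.0, n=4000):
    """Midpoint Riemann-sum approximation of the L^2 inner product."""
    dx = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * dx) * g(lo + (i + 0.5) * dx)
               for i in range(n)) * dx

# Each psi_{j,k} has unit norm, and distinct members are orthogonal.
assert abs(inner(lambda x: psi(0, 0, x), lambda x: psi(0, 0, x)) - 1.0) < 1e-3
assert abs(inner(lambda x: psi(1, 0, x), lambda x: psi(1, 1, x))) < 1e-3
assert abs(inner(lambda x: psi(0, 0, x), lambda x: psi(1, 0, x))) < 1e-3
```

The factor 2^{j/2} is exactly what keeps the norm equal to one as the support shrinks by 2^{-j}.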
Wavelet compression is a form of data compression well suited for image compression. Notable implementations are JPEG 2000, DjVu and ECW for still images, and CineForm and the BBC's Dirac for video. The goal is to store image data in as little space as possible in a file. Wavelet compression can be either lossless or lossy. (See Diary Of An x264 Developer, "The problems with wavelets", for discussion of practical issues of current methods using wavelets for video compression.) First a wavelet transform is applied; this produces as many coefficients as there are pixels in the image. These coefficients can then be compressed more easily because the information is statistically concentrated in just a few coefficients. This principle is called transform coding. After that, the coefficients are quantized and the quantized values are entropy encoded and/or run-length encoded. A few 1D and 2D applications of wavelet compression use a technique called wavelet footprints. Wavelets also have some slight benefits over Fourier transforms in reducing computations when examining specific frequencies.
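The transform-then-threshold idea can be sketched with the simplest case, a 1D Haar transform built from pairwise averages and differences (a toy illustration with invented names and threshold, not how JPEG 2000 or any real codec is implemented): the transform concentrates the signal's energy in a few coefficients, zeroing the small ones gives a sparse lossy representation, and the inverse transform rebuilds a close approximation.

```python
# Toy transform-coding sketch (names and threshold invented for this example):
# a multi-level Haar transform via pairwise averages and differences,
# thresholding of small coefficients, and exact inverse reconstruction.

def haar_step(data):
    half = len(data) // 2
    avg = [(data[2 * i] + data[2 * i + 1]) / 2 for i in range(half)]
    diff = [(data[2 * i] - data[2 * i + 1]) / 2 for i in range(half)]
    return avg, diff

def haar_forward(data):
    """Full decomposition: [overall average, coarse...fine detail coeffs]."""
    coeffs = []
    while len(data) > 1:
        data, diff = haar_step(data)
        coeffs = diff + coeffs
    return data + coeffs

def haar_inverse(coeffs):
    data = coeffs[:1]
    pos = 1
    while pos < len(coeffs):
        diff = coeffs[pos:pos + len(data)]
        pos += len(diff)
        # avg + diff recovers the first sample of each pair, avg - diff the second
        data = [v for a, d in zip(data, diff) for v in (a + d, a - d)]
    return data

signal = [4.0, 4.0, 4.1, 4.1, 8.0, 8.0, 8.1, 8.1]
coeffs = haar_forward(signal)
# Crude "quantization": drop coefficients smaller than a threshold.
sparse = [c if abs(c) > 0.06 else 0.0 for c in coeffs]
assert sum(1 for c in sparse if c != 0.0) == 2   # 8 samples -> 2 coefficients
approx = haar_inverse(sparse)
assert max(abs(a - s) for a, s in zip(approx, signal)) < 0.1
```

Eight smooth samples collapse to two significant coefficients here, which is the statistical concentration that makes the subsequent entropy coding effective.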
Wavelet transform
–
An example of the 2D discrete wavelet transform that is used in JPEG 2000.
48.
Sourceforge.net
–
SourceForge is a web-based service that offers software developers a centralized online location to control and manage free and open-source software projects. SourceForge was one of the first to offer this service free of charge to open-source projects. Since 2012 the website has run on Apache Allura software. As of March 2014, the SourceForge repository claimed to host more than 430,000 projects and had more than 3.7 million registered users; the domain sourceforge.net attracted at least 33 million visitors by August 2009 according to a Compete.com survey. Negative community reactions to the DevShare program led to a review of the program; nonetheless, the program was only cancelled by new owners BizX on February 9, 2016. On May 17, 2016, they announced that SourceForge would scan all projects for malware. SourceForge is a web-based source code repository. It acts as a centralized location for free and open-source software projects. It was the first to offer this service for free to open-source projects. Project developers have access to centralized storage and tools for managing projects, though it is best known for providing revision control systems such as CVS, SVN, Bazaar, Git and Mercurial. Major features include project wikis, metrics and analysis, and access to a MySQL database. The vast number of users at SourceForge.net exposes prominent projects to a variety of developers and can create a positive feedback loop: as a project's activity rises, SourceForge.net's internal ranking system makes it more visible to other developers through the SourceForge directory. Given that many projects fail for lack of developer support, such exposure can be valuable. SourceForge's traditional revenue model is through advertising sales on its site. In 2006 SourceForge Inc. reported quarterly takings of US$6.5 million; in 2009 SourceForge reported a gross quarterly income of US$23 million through media and e-commerce streams. In 2011 a revenue of US$20 million was reported for the value of the SourceForge, Slashdot and Freecode holdings.
Since 2013 additional revenue-generation schemes, such as bundleware models, were trialled. The result has in some cases been the appearance of malware bundled with SourceForge downloads. On February 9, 2016, SourceForge announced they had eliminated their DevShare program, the practice of bundling installers with project downloads. The software running the SourceForge site was released as free software in January 2000 and was later named SourceForge Alexandria. In September 2002 SourceForge was temporarily banned in China; the site was banned again in China, for about a month, in July 2008.
Sourceforge.net
–
The SourceForge logo