1.
Spectrograph
–
A spectrograph is an instrument that separates light into a frequency spectrum and records the signal using a camera. There are several kinds of machines referred to as spectrographs, depending on the nature of the waves. The term was first used in July 1876 by Dr. Henry Draper when he invented the earliest version of this device, which was cumbersome to use and difficult to manage. One way to define a spectrograph is as a device that separates light by its wavelength; a spectrograph typically has a multi-channel detector system or imaging system that detects the spectrum of light. The first spectrographs used photographic paper as the detector; the star spectral classification system, the discovery of the main sequence, Hubble's law and the Hubble sequence were all made with spectrographs that used photographic paper. The plant pigment phytochrome was discovered using a spectrograph that used living plants as the detector. More recent spectrographs use electronic detectors, such as CCDs, which can be used for both visible and UV light. The exact choice of detector depends on the wavelengths of light to be recorded; the forthcoming James Webb Space Telescope will contain both a near-infrared spectrograph and a mid-infrared spectrometer. An echelle spectrograph uses two diffraction gratings, rotated 90 degrees with respect to each other and placed close to one another. Therefore, a point and not a slit is used as the entrance aperture. The small chip also means that the collimating optics need not be optimized for coma or astigmatism, and the spherical aberration can be set to zero.
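The dispersing element in a grating spectrograph obeys the standard grating equation d·sin(θ) = m·λ at normal incidence. A minimal sketch (the grating pitch and wavelength below are illustrative values, not taken from the text):

```python
import math

def diffraction_angle(wavelength_nm, line_spacing_nm, order=1):
    """Diffraction angle in degrees from the grating equation
    d*sin(theta) = m*lambda, assuming normal incidence."""
    s = order * wavelength_nm / line_spacing_nm
    if abs(s) > 1:
        raise ValueError("this order is not physically realizable")
    return math.degrees(math.asin(s))

# A 1200 lines/mm grating has groove spacing d = 1e6/1200 ≈ 833.3 nm.
d = 1e6 / 1200
print(round(diffraction_angle(500, d), 1))  # green light, first order → 36.9
```

Higher orders disperse the spectrum more strongly, which is why echelle designs work at high order and cross-disperse with a second grating.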
2.
Spectral density
–
The power spectrum S_xx of a time series x describes the distribution of power into the frequency components composing that signal. According to Fourier analysis, any physical signal can be decomposed into a number of discrete frequencies; the statistical average of a certain signal or sort of signal, as analyzed in terms of its frequency content, is called its spectrum. When the energy of the signal is concentrated around a finite time interval, especially if its total energy is finite, one may compute the energy spectral density. More commonly used is the power spectral density, which applies to signals existing over all time and refers to the energy distribution that would be found per unit time. Summation or integration of the spectral components yields the total power or variance, identical to what would be obtained by integrating x² over the time domain. The spectrum of a physical process x often contains essential information about the nature of x. For instance, the pitch and timbre of an instrument are immediately determined from a spectral analysis, and the color of a light source is determined by the spectrum of the electromagnetic wave's electric field E as it fluctuates at an extremely high frequency. Obtaining a spectrum from time series such as these involves the Fourier transform; this article concentrates on situations in which the time series is known or directly measured. The power spectrum is important in signal processing and in the statistical study of stochastic processes, as well as in many other branches of physics. Typically the process is a function of time, but one can similarly discuss data in a spatial domain being decomposed in terms of spatial frequency. Any signal that can be represented as an amplitude that varies in time has a frequency spectrum. This includes familiar entities such as light, musical notes, and radio/TV signals.
When these signals are viewed in the form of a frequency spectrum, the spectrum may include a distinct peak corresponding to a sine-wave component, and there may be additional peaks corresponding to harmonics of a fundamental peak. In physics, the signal might be a wave, such as an electromagnetic wave; the power spectral density of the signal describes the power present in the signal as a function of frequency, per unit frequency. Power spectral density is commonly expressed in watts per hertz. When a signal is defined in terms only of a voltage, for instance, power is simply reckoned in terms of the square of the signal, as this would always be proportional to the actual power delivered by that signal into a given impedance.
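The relationship between a signal, its power spectral density estimate, and total power can be sketched numerically. A minimal periodogram using only NumPy; the 50 Hz test tone and the sample rate are assumptions chosen for illustration:

```python
import numpy as np

fs = 1000                          # sample rate, Hz
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)     # unit-amplitude 50 Hz sine

# One-sided periodogram: a simple estimate of the PSD in V^2/Hz
X = np.fft.rfft(x)
psd = (np.abs(X) ** 2) / (fs * len(x))
psd[1:-1] *= 2                     # fold in the negative frequencies
freqs = np.fft.rfftfreq(len(x), 1 / fs)

print(freqs[np.argmax(psd)])       # distinct peak at → 50.0
# Integrating the PSD recovers the mean power of the sine (~0.5)
print(round(np.sum(psd) * (fs / len(x)), 3))  # → 0.5
```

Summing the spectral components times the bin width reproduces the time-domain mean of x², as the text states.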
3.
Frequencies
–
Frequency is the number of occurrences of a repeating event per unit time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency. The period is the duration of one cycle in a repeating event; for example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period—the time interval between beats—is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, and radio waves. For cyclical processes, such as rotation, oscillations, or waves, in physics and engineering disciplines such as optics, acoustics, and radio, frequency is usually denoted by the Latin letter f or by the Greek letter ν (nu). For a simple harmonic motion, the relation between the frequency and the period T is given by f = 1/T. The SI unit of frequency is the hertz, named after the German physicist Heinrich Hertz; a previous name for this unit was cycles per second. The SI unit for period is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by their period, while short and fast waves, like audio and radio, are usually described by their frequency instead of period. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes: for a wave y = sin(θ) = sin(kx), the spatial rate of change dθ/dx = k is the wavenumber, and in the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength. Even in dispersive media, the frequency f of a wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ.
When waves from a monochromatic source travel from one medium to another, their frequency remains the same—only their wavelength and speed change. Frequency can be measured by counting the number of events in a fixed interval: for example, if 71 events occur within 15 seconds, the frequency is 71/15 ≈ 4.7 Hz. This latter method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2Tm), or a fractional error of Δf/f = 1/(2fTm), where Tm is the timing interval. This error decreases with frequency, so it is a problem mainly at low frequencies, where the number of counts N is small. An older method of measuring the frequency of rotating or vibrating objects is to use a stroboscope.
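The relations above — f = 1/T, f = v/λ, and the gating error of a counting measurement — can be checked in a few lines. The 440 Hz tone and 343 m/s sound speed are illustrative values, not from the text:

```python
# Period/frequency, wavelength, and the gating error of a counter.
def frequency(period_s):
    return 1.0 / period_s

# A newborn's heart at 120 beats/min has a period of half a second:
print(frequency(0.5))              # → 2.0 Hz (i.e. 120 per minute)

# f = v / wavelength: a 440 Hz tone in air (v ≈ 343 m/s)
v, f = 343.0, 440.0
wavelength = v / f                 # ≈ 0.78 m

# Counting 71 events in a 15 s gate:
events, Tm = 71, 15.0
f_est = events / Tm                # estimated frequency
df = 1 / (2 * Tm)                  # average gating error
print(round(f_est, 2), round(df, 3))  # → 4.73 0.033
```

The fractional error df/f_est shrinks as either the frequency or the gate time grows, which is why gating error matters mostly at low frequencies.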
4.
Sound
–
In physics, sound is a vibration that propagates as a typically audible mechanical wave of pressure and displacement through a transmission medium such as air or water. In physiology and psychology, sound is the reception of such waves and their perception by the brain. Humans can hear sound waves with frequencies between about 20 Hz and 20 kHz; sound above 20 kHz is ultrasound and below 20 Hz is infrasound, and other animals have different hearing ranges. Acoustics is the science that deals with the study of mechanical waves in gases, liquids, and solids, including vibration, sound, and ultrasound. A scientist who works in the field of acoustics is an acoustician; an audio engineer, on the other hand, is concerned with the recording, manipulation, mixing, and reproduction of sound. Sound can also be defined as the auditory sensation evoked by such oscillations. Sound can propagate through a medium such as air, water, and solids as longitudinal waves, and also as a transverse wave in solids. Sound waves are generated by a source, such as the vibrating diaphragm of a stereo speaker. The sound source creates vibrations in the surrounding medium; as the source continues to vibrate the medium, the vibrations propagate away from the source at the speed of sound, thus forming the sound wave. At a fixed distance from the source, the pressure, velocity, and displacement of the medium vary in time; at an instant in time, the pressure, velocity, and displacement vary in space. Note that the particles of the medium do not travel with the sound wave; this is intuitively obvious for a solid, and the same is true for liquids and gases. During propagation, waves can be reflected, refracted, or attenuated by the medium. The behavior of sound propagation is generally affected by three things, the first being the complex relationship between the density and pressure of the medium.
This relationship, affected by temperature, determines the speed of sound within the medium. Second, if the medium is moving, this movement may increase or decrease the absolute speed of the sound wave depending on the direction of the movement: sound moving through wind will have its speed of propagation increased by the speed of the wind if the sound and wind are moving in the same direction, while if the sound and wind are moving in opposite directions, the speed of the wave will be decreased by the speed of the wind. Third, the viscosity of the medium determines the rate at which sound is attenuated; for many media, such as air or water, attenuation due to viscosity is negligible. When sound is moving through a medium that does not have constant physical properties, it may be refracted. The mechanical vibrations that can be interpreted as sound can travel through all forms of matter: gases, liquids, solids, and plasmas. The matter that supports the sound is called the medium; sound cannot travel through a vacuum. Sound is transmitted through gases, plasma, and liquids as longitudinal waves, and it requires a medium to propagate.
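The effect of a moving medium described above amounts to adding or subtracting the wind's along-path speed. A simplified sketch that ignores temperature gradients and cross-wind components:

```python
# Effective propagation speed when the medium itself moves:
# the wind's component along the propagation direction adds to
# (or subtracts from) the speed of sound in still air.
def effective_speed(c, wind_speed, same_direction=True):
    return c + wind_speed if same_direction else c - wind_speed

c_air = 343.0  # m/s, dry air at about 20 °C (illustrative value)
print(effective_speed(c_air, 10.0, True))   # downwind → 353.0
print(effective_speed(c_air, 10.0, False))  # upwind   → 333.0
```

Note that this changes the absolute speed over the ground, not the speed of sound relative to the moving air itself.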
5.
Music
–
Music is an art form and cultural activity whose medium is sound organized in time. The common elements of music include pitch, rhythm, and dynamics; different styles or types of music may emphasize, de-emphasize or omit some of these elements. The word derives from Greek μουσική. Ancient Greek and Indian philosophers defined music as tones ordered horizontally as melodies and vertically as harmonies. Common sayings such as "the harmony of the spheres" and "it is music to my ears" point to the notion that music is often ordered and pleasant to listen to. However, the 20th-century composer John Cage thought that any sound can be music, saying, for example, "There is no noise, only sound." The creation, performance, significance, and even the definition of music vary according to culture and social context. There are many types of music, including popular music, traditional music, art music, and music written for religious ceremonies; it can be hard to draw the line between genres, for example between some early 1980s hard rock and heavy metal. Within the arts, music may be classified as a performing art, a fine art or an auditory art. People may make music as a hobby, like a teen playing cello in a youth orchestra, or work as professional musicians. According to the Online Etymological Dictionary, the word music is derived from mid-13c. musike, from Old French musique and directly from Latin musica, "the art of music"; this in turn derives from the Greek mousike, "(art) of the Muses," from fem. of mousikos, "pertaining to the Muses," from Mousa, "Muse." In classical Greece, the term covered any art in which the Muses presided. Music is composed and performed for many purposes, ranging from aesthetic pleasure to religious or ceremonial purposes, or as an entertainment product for the marketplace. With the advent of sound recording, records of popular songs became a major way people experience music. Some music lovers create mix tapes of their favourite songs, which serve as a self-portrait.
Such a collection can become an environment consisting solely of what is most ardently loved. Amateur musicians can compose or perform music for their own pleasure and derive their income elsewhere. Professional musicians sometimes work as freelancers or session musicians, seeking contracts and engagements in a variety of settings. There are often many links between amateur and professional musicians: beginning amateur musicians take lessons with professional musicians, and in community settings, advanced amateur musicians perform with professional musicians in a variety of ensembles such as community concert bands and community orchestras. There are many cases where a live performance in front of an audience is also recorded and distributed. Live concert recordings are popular in both classical music and popular music forms such as rock, where illegally taped live concerts are prized by music lovers.
6.
Sonar
–
Sonar is a technique that uses sound propagation to navigate, communicate with or detect objects on or under the surface of the water, such as other vessels. Two types of technology share the name sonar: passive sonar is essentially listening for the sound made by vessels, while active sonar is emitting pulses of sound and listening for echoes. Sonar may be used as a means of acoustic location and of measurement of the echo characteristics of targets in the water. Acoustic location in air was used before the introduction of radar; sonar may also be used in air for robot navigation, and SODAR (an upward-looking in-air sonar) is used for atmospheric investigations. The term sonar is also used for the equipment used to generate and receive the sound. The acoustic frequencies used in sonar systems vary from very low to extremely high. The study of underwater sound is known as underwater acoustics or hydroacoustics. In the 19th century an underwater bell was used as an ancillary to lighthouses to provide warning of hazards. The use of sound to locate objects underwater in the same way as bats use sound for aerial navigation seems to have been prompted by the Titanic disaster of 1912. In 1914, Reginald Fessenden carried out tests from the U.S. Revenue Cutter Miami on the Grand Banks off Newfoundland, Canada; in those tests, Fessenden demonstrated depth sounding, underwater communications and echo ranging. The so-called Fessenden oscillator, at ca. 500 Hz frequency, was unable to determine the bearing of a target due to the 3-metre wavelength. The ten Montreal-built British H-class submarines launched in 1915 were equipped with a Fessenden oscillator. During World War I the need to detect submarines prompted more research into the use of sound. Although piezoelectric and magnetostrictive transducers later superseded the electrostatic transducers they used, lightweight sound-sensitive plastic film and fibre optics have been used for hydrophones, while Terfenol-D and PMN have been developed for projectors. By 1918, both France and Britain had built prototype active systems; the British tested their ASDIC on HMS Antrim in 1920, and started production in 1922.
The 6th Destroyer Flotilla had ASDIC-equipped vessels in 1923, and an anti-submarine school, HMS Osprey, and a training flotilla of four vessels were established on Portland in 1924. The US Sonar QB set arrived in 1931. By the outbreak of World War II, the Royal Navy had five sets for different surface ship classes, and others for submarines, incorporated into a complete anti-submarine attack system. The effectiveness of early ASDIC was hamstrung by the use of the depth charge as an anti-submarine weapon. This required a vessel to pass over a submerged contact before dropping charges over the stern, losing sonar contact in the final moments; the hunter was effectively firing blind, during which time a submarine commander could take evasive action.
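Active sonar's echo ranging reduces to halving the round-trip travel time of the pulse. A minimal sketch; the 1500 m/s sound speed is a typical seawater value, assumed here rather than taken from the text:

```python
# Active sonar echo ranging: the pulse travels out and back,
# so range is half the round-trip time times the sound speed.
def echo_range(round_trip_s, c_water=1500.0):
    return c_water * round_trip_s / 2

print(echo_range(2.0))  # a 2 s echo → 1500.0 m
```

Real systems correct for the fact that sound speed varies with temperature, salinity and depth, which bends the ray paths.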
7.
Radar
–
Radar is an object-detection system that uses radio waves to determine the range, angle, or velocity of objects. It can be used to detect aircraft, ships, spacecraft, guided missiles, motor vehicles, weather formations, and terrain. Radio waves from the transmitter reflect off the object and return to the receiver, giving information about the object's location and speed. Radar was developed secretly for military use by several nations in the period before and during World War II. The term RADAR was coined in 1940 by the United States Navy as an acronym for RAdio Detection And Ranging or RAdio Direction And Ranging, and has since entered English and other languages as a common noun. High-tech radar systems are associated with digital signal processing and machine learning and are capable of extracting useful information from very high noise levels. Other systems similar to radar make use of other parts of the electromagnetic spectrum; one example is lidar, which uses ultraviolet, visible, or near-infrared light from lasers rather than radio waves. As early as 1886, German physicist Heinrich Hertz showed that radio waves could be reflected from solid objects. In 1895, Alexander Popov, an instructor at the Imperial Russian Navy school in Kronstadt, developed an apparatus for detecting distant lightning strikes. The next year, he added a spark-gap transmitter. In 1897, while testing this equipment for communicating between two ships in the Baltic Sea, he took note of an interference beat caused by the passage of a third vessel; in his report, Popov wrote that this phenomenon might be used for detecting objects. The German inventor Christian Hülsmeyer was the first to use radio waves to detect the presence of distant metallic objects. In 1904, he demonstrated the feasibility of detecting a ship in dense fog; he obtained a patent for his detection device in April 1904 and later a patent for a related amendment for estimating the distance to the ship. He also got a British patent on September 23, 1904 for a radar-like system.
It operated on a 50 cm wavelength, and the radar signal was created via a spark-gap. In 1915, Robert Watson-Watt used radio technology to provide advance warning of thunderstorms to airmen, and he became an expert on the use of radio direction finding as part of his lightning experiments. As part of ongoing experiments, he asked the "new boy," Arnold Frederic Wilkins, to find a suitable receiver; Wilkins made an extensive study of available units before selecting a receiver model from the General Post Office. Its instruction manual noted that there was "fading" when aircraft flew by. In 1922, A. Hoyt Taylor and Leo C. Young, researchers working for the U.S. Navy, submitted a report suggesting that this effect might be used to detect the presence of ships in low visibility; eight years later, Lawrence A. Hyland at the Naval Research Laboratory observed similar reflections from a passing aircraft. Australia, Canada, New Zealand, and South Africa followed prewar Great Britain's radar development, and Hungary had similar developments during the war. In France, a research team that included M. Hugon began developing an obstacle-locating radio apparatus, a part of which was installed on the liner Normandie in 1935.
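Radar ranging follows the same out-and-back logic as sonar echo ranging, but at the speed of light. A minimal sketch, not any particular radar's processing chain:

```python
# Radar ranging: range = c * delay / 2, because the radio pulse
# travels the transmitter-target path twice (out and back).
C = 299_792_458.0  # speed of light in vacuum, m/s

def radar_range_km(delay_us):
    """Target range in km from a round-trip echo delay in microseconds."""
    return C * (delay_us * 1e-6) / 2 / 1000

print(round(radar_range_km(1000), 1))  # a 1 ms delay → 149.9 km
```

The tiny delays involved (microseconds rather than seconds) are why radar needed fast electronics long before sonar did.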
8.
Seismology
–
Seismology is the scientific study of earthquakes and the propagation of elastic waves through the Earth or through other planet-like bodies. A related field that uses geology to infer information regarding past earthquakes is paleoseismology. A recording of earth motion as a function of time is called a seismogram, and a seismologist is a scientist who does research in seismology. Scholarly interest in earthquakes can be traced back to antiquity: early speculations on the causes of earthquakes were included in the writings of Thales of Miletus, Anaximenes of Miletus, and Aristotle. In 132 CE, Zhang Heng of China's Han dynasty designed the first known seismoscope. In 1664, Athanasius Kircher argued that earthquakes were caused by the movement of fire within a system of channels inside the Earth, and in 1703, Martin Lister and Nicolas Lemery proposed that earthquakes were caused by chemical explosions within the earth. The Lisbon earthquake of 1755, coinciding with the general flowering of science in Europe, set in motion intensified scientific attempts to understand the behaviour and causation of earthquakes. The earliest responses include work by John Bevis and John Michell; Michell determined that earthquakes originate within the Earth and were waves of movement caused by shifting masses of rock miles below the surface. From 1857, Robert Mallet laid the foundation of instrumental seismology, and he is also responsible for coining the word seismology. In 1897, Emil Wiechert's theoretical calculations led him to conclude that the Earth's interior consists of a mantle of silicates surrounding a core of iron. In 1906 Richard Dixon Oldham identified the separate arrival of P-waves, S-waves and surface waves on seismograms, and in 1910, after studying the 1906 San Francisco earthquake, Harry Fielding Reid put forward the elastic rebound theory, which remains the foundation for modern tectonic studies.
The development of this theory depended on the considerable progress of earlier independent streams of work on the behaviour of elastic materials. In 1926, Harold Jeffreys was the first to claim, based on his study of earthquake waves, that below the mantle the core of the Earth is liquid, and in 1937, Inge Lehmann determined that within the liquid outer core there is a solid inner core. By the 1960s, earth science had developed to the point where a comprehensive theory of the causation of seismic events had come together in the now well-established theory of plate tectonics. Seismic waves are elastic waves that propagate in solid or fluid materials. There are two types of body waves: pressure waves or primary waves (P-waves) and shear or secondary waves (S-waves). S-waves are transverse waves that move perpendicular to the direction of propagation and travel more slowly than P-waves; therefore, they appear later than P-waves on a seismogram. Fluids cannot support this perpendicular motion, so S-waves only travel in solids. The two main surface wave types are Rayleigh waves, which have some compressional motion, and Love waves, which do not. Rayleigh waves result from the interaction of vertically polarized P- and S-waves that satisfy the free-surface conditions.
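Because S-waves lag P-waves, the S−P arrival-time gap on a seismogram gives a rough epicentral distance from a single station. A sketch under simplifying assumptions: straight-line travel at constant speeds, with typical crustal velocities that are illustrative values, not from the text:

```python
# S-waves are slower than P-waves, so the S−P arrival gap grows
# with distance. With constant speeds vp and vs (km/s):
#   gap = d/vs − d/vp   →   d = gap / (1/vs − 1/vp)
def distance_km(sp_gap_s, vp=6.0, vs=3.5):
    return sp_gap_s / (1 / vs - 1 / vp)

print(round(distance_km(10.0), 1))  # a 10 s S−P gap → 84.0 km
```

Locating the epicentre itself requires at least three stations, each contributing a distance circle whose intersection fixes the position.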
9.
Animal communication
–
Animal communication is the transfer of information from one animal or a group of animals to one or more other animals that affects the current or future behaviour of the receivers. Information may be sent intentionally, as in a courtship display, or unintentionally, and may be transferred to an audience of several receivers. Animal communication is a rapidly growing area of study in disciplines including animal behaviour, sociobiology, neurobiology and animal cognition, and many aspects of behaviour, such as symbolic name use, emotional expression and learning, are being understood in new ways. When the information from the sender changes the behaviour of a receiver, the information is referred to as a signal. Signalling theory predicts that for a signal to be maintained in the population, signal production by senders and the perception and subsequent response of receivers must coevolve. Signals often involve multiple mechanisms, e.g. both visual and auditory, and for a signal to be understood the behaviour of both sender and receiver require careful study. A notable example is the presentation of a parent herring gull's bill to its chick as a signal for feeding: like many gulls, the herring gull has a brightly coloured bill, yellow with a red spot on the lower mandible near the tip, and the complete signal involves this distinctive feature, the red-spotted bill. While all primates use some form of gesture, Frans de Waal tested the hypothesis that gestures evolve into language by studying the gestures of bonobos and chimps. Facial gestures also play an important role in animal communication; often a facial gesture is a signal of emotion. Dogs, for example, express anger through snarling and showing their teeth; in alarm their ears perk up, and in fear the ears flatten while the dogs expose their teeth slightly and squint their eyes.
Social animals also coordinate their communication by monitoring each other's head and eye orientation. Such gaze-following behaviour has long been recognized as an important component of communication during human development, and it has recently received much attention in animals, which may, for example, reposition themselves to follow a gaze cue when faced with a barrier blocking their view. Color change can be separated into changes that occur during growth and development and those triggered by mood, social context, or abiotic factors such as temperature; the latter are seen in many taxa. Some cephalopods, such as the octopus and the cuttlefish, have specialized skin cells that can change the apparent colour, opacity, and reflectiveness of their skin. In addition to their use for camouflage, rapid changes in skin color are used while hunting and in communication. Cuttlefish may display two different signals simultaneously from opposite sides of their body: when a male cuttlefish courts a female in the presence of other males, he displays a male pattern facing the female and a female pattern facing away, to deceive the rival males.
10.
Optical spectrometer
–
An optical spectrometer is an instrument used to measure properties of light over a specific portion of the electromagnetic spectrum, typically used in spectroscopic analysis to identify materials. The variable measured is most often the light's intensity, but it could also, for instance, be the polarization state. A spectrometer is used in spectroscopy for producing spectral lines and measuring their wavelengths and intensities; spectrometers may also operate over a range of non-optical wavelengths. If the instrument is designed to measure the spectrum in absolute units rather than relative units, it is typically called a spectrophotometer; the majority of spectrophotometers are used in spectral regions near the visible spectrum. In general, any particular instrument will operate over a small portion of this total range because of the different techniques used to measure different portions of the spectrum. Below optical frequencies, the spectrum analyzer is a closely related electronic device. Spectrometers are used in many fields; for example, they are used in astronomy to analyze the radiation from astronomical objects and deduce chemical composition. The spectrometer uses a prism or a grating to spread the light from a distant object into a spectrum, and this allows astronomers to detect many of the chemical elements by their characteristic spectral fingerprints. If the object is glowing by itself, it will show spectral lines caused by the glowing gas itself; these lines are named for the elements which cause them, such as the hydrogen alpha and beta lines. Chemical compounds may also be identified by absorption; typically these are dark bands in specific locations in the spectrum caused by energy being absorbed as light from other objects passes through a gas cloud. Much of our knowledge of the chemical makeup of the universe comes from spectra. Spectroscopes are often used in astronomy and some branches of chemistry; early spectroscopes were simply prisms with graduations marking wavelengths of light.
Modern spectroscopes generally use a diffraction grating, a movable slit, and some kind of photodetector; Joseph von Fraunhofer went on to invent the first diffraction spectroscope. Gustav Robert Kirchhoff and Robert Bunsen discovered the application of spectroscopes to chemical analysis, and their work also enabled a chemical explanation of stellar spectra, including the Fraunhofer lines. When a material is heated to incandescence it emits light that is characteristic of the atomic makeup of the material; particular light frequencies give rise to sharply defined bands on the scale which can be thought of as fingerprints. In the original spectroscope design in the early 19th century, light entered a slit and a collimating lens transformed the light into a thin beam of parallel rays. The light then passed through a prism that refracted the beam into a spectrum, because different wavelengths were refracted by different amounts due to dispersion. This image was then viewed through a tube with a scale that was transposed upon the spectral image, enabling its direct measurement.
11.
Band-pass filter
–
A band-pass filter is a device that passes frequencies within a certain range and rejects frequencies outside that range. An example of an analogue electronic band-pass filter is an RLC circuit; such filters can also be created by combining a low-pass filter with a high-pass filter. Bandpass is an adjective that describes a type of filter or filtering process; it is to be distinguished from passband, which refers to the actual portion of the spectrum affected. Hence, one might say "a dual bandpass filter has two passbands." A bandpass signal is a signal containing a band of frequencies not adjacent to zero frequency. An ideal bandpass filter would have a completely flat passband and would completely attenuate all frequencies outside the passband; additionally, the transition out of the passband would have brickwall characteristics. In practice, no bandpass filter is ideal: frequencies just outside the intended passband are attenuated but not fully rejected. This is known as the filter roll-off, and it is usually expressed in dB of attenuation per octave or decade of frequency. Generally, the design of a filter seeks to make the roll-off as narrow as possible; often, this is achieved at the expense of pass-band or stop-band ripple. The bandwidth of the filter is simply the difference between the upper and lower cutoff frequencies. Optical band-pass filters are common in photography and theatre lighting work; these filters take the form of a transparent coloured film or sheet. A band-pass filter can be characterized by its Q factor, the inverse of the fractional bandwidth: a high-Q filter will have a narrow passband and a low-Q filter will have a wide passband. These are respectively referred to as narrow-band and wide-band filters. Bandpass filters are widely used in wireless transmitters and receivers. The main function of such a filter in a transmitter is to limit the bandwidth of the output signal to the band allocated for the transmission.
This prevents the transmitter from interfering with other stations. In a receiver, a bandpass filter allows signals within a selected range of frequencies to be heard or decoded, while preventing signals at unwanted frequencies from getting through; a bandpass filter also optimizes the signal-to-noise ratio and sensitivity of a receiver. Outside of electronics and signal processing, one example of the use of band-pass filters is in the atmospheric sciences, where it is common to band-pass filter recent meteorological data with a period range of, for example, 3 to 10 days. In neuroscience, visual cortical simple cells were first shown by David Hubel and Torsten Wiesel to have response properties that resemble Gabor filters. In astronomy, band-pass filters are used to admit only a single portion of the light spectrum into an instrument.
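The Q factor and the idealized "brickwall" passband discussed above can be illustrated with a frequency-domain mask. This is a toy brickwall filter for illustration only; practical designs have finite roll-off and trade it against ripple, as the text notes:

```python
import numpy as np

# Toy brickwall band-pass: zero every FFT bin outside [f_lo, f_hi].
def bandpass(x, fs, f_lo, f_hi):
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < f_lo) | (f > f_hi)] = 0
    return np.fft.irfft(X, n=len(x))

fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 200 * t)
y = bandpass(x, fs, 40, 60)  # keeps only the 50 Hz component

# Q factor = center frequency / bandwidth
f0, bw = 50.0, 60 - 40
print(f0 / bw)  # → 2.5
```

A passband of 40–60 Hz around a 50 Hz centre gives a fractional bandwidth of 0.4, hence Q = 2.5 — a fairly wide-band filter by the text's terminology.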
12.
Fourier transform
–
The Fourier transform decomposes a function of time into the frequencies that make it up, in a way similar to how a musical chord can be expressed as the frequencies of its constituent notes. The Fourier transform of a function of time is called the frequency-domain representation of the original signal; the term Fourier transform refers to both the frequency-domain representation and the mathematical operation that associates the frequency-domain representation to a function of time. The Fourier transform is not limited to functions of time, but in order to have a unified language, the domain of the original function is commonly referred to as the time domain. Linear operations performed in one domain have corresponding operations in the other domain, which are sometimes easier to perform: the operation of differentiation in the time domain corresponds to multiplication by the frequency, and convolution in the time domain corresponds to ordinary multiplication in the frequency domain. Concretely, this means that any linear time-invariant system, such as a filter applied to a signal, can be expressed relatively simply as an operation on frequencies; after performing the desired operations, the result can be transformed back to the time domain. Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain, and vice versa. The Fourier transform of a Gaussian function is another Gaussian function; Joseph Fourier introduced the transform in his study of heat transfer, where Gaussian functions appear as solutions of the heat equation. The Fourier transform can also be generalized to functions of several variables on Euclidean space. In general, functions to which Fourier methods are applicable are complex-valued; the Fourier series, a related tool, is routinely employed to handle periodic functions, and the fast Fourier transform (FFT) is an algorithm for computing the discrete Fourier transform (DFT). The Fourier transform of the function f is traditionally denoted by adding a circumflex, f̂. There are several conventions for defining the Fourier transform of an integrable function f : ℝ → ℂ.
Here we will use the definition f̂(ξ) = ∫−∞^∞ f(x) e^(−2πixξ) dx. When the independent variable x represents time, the transform variable ξ represents frequency. Under suitable conditions, f is determined by f̂ via the inverse transform f(x) = ∫−∞^∞ f̂(ξ) e^(2πiξx) dξ; the functions f and f̂ often are referred to as a Fourier integral pair or Fourier transform pair. For other common conventions and notations, including using the angular frequency ω instead of the frequency ξ, see Other conventions. The Fourier transform on Euclidean space is treated separately, in which the variable x often represents position and ξ momentum. Many other characterizations of the Fourier transform exist; for example, by the Stone–von Neumann theorem, the Fourier transform is the unique unitary intertwiner for the symplectic and Euclidean Schrödinger representations of the Heisenberg group. In 1822, Joseph Fourier showed that some functions could be written as an infinite sum of harmonics.
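The transform pair and its invertibility can be checked in the discrete setting with NumPy's FFT — a discrete analogue of the integral definitions above, not the continuous transform itself:

```python
import numpy as np

# NumPy's FFT computes X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N),
# the discrete counterpart of the integral transform; the inverse
# FFT recovers the original samples exactly (up to rounding).
x = np.random.default_rng(0).normal(size=128)
X = np.fft.fft(x)
x_back = np.fft.ifft(X).real

print(np.allclose(x, x_back))  # → True

# Parseval's relation: total power agrees in both domains
# (up to the 1/N factor in this FFT convention).
print(np.allclose(np.sum(x**2), np.sum(np.abs(X)**2) / len(x)))  # → True
```

The second check is the discrete form of the unitarity that, in the continuous setting, makes the transform an isometry on square-integrable functions.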
13.
Time
–
Time is the indefinite continued progress of existence and events that occur in apparently irreversible succession from the past through the present to the future. Time is often referred to as the fourth dimension, along with the three spatial dimensions. Time has long been an important subject of study in religion, philosophy, and science; diverse fields such as business, industry, sports, the sciences, and the performing arts all incorporate some notion of time into their respective measuring systems. Two contrasting viewpoints on time divide prominent philosophers. One view is that time is part of the fundamental structure of the universe—a dimension independent of events, in which events occur in sequence. Isaac Newton subscribed to this realist view, and hence it is referred to as Newtonian time. The second view, in the tradition of Gottfried Leibniz and Immanuel Kant, holds that time is neither an event nor a thing, and thus is not itself measurable. Time in physics is unambiguously operationally defined as what a clock reads. Time is one of the seven fundamental physical quantities in both the International System of Units and the International System of Quantities. Time is used to define other quantities—such as velocity—so defining time in terms of such quantities would result in circularity of definition. The operational definition leaves aside the question whether there is something called time, apart from the counting activity just mentioned, that flows. Investigations of a single continuum called spacetime bring questions about space into questions about time, questions that have their roots in the works of early students of natural philosophy. Furthermore, it may be that there is a subjective component to time. Temporal measurement has occupied scientists and technologists, and was a prime motivation in navigation and astronomy. 
Periodic events and periodic motion have long served as standards for units of time; examples include the apparent motion of the sun across the sky, the phases of the moon, the swing of a pendulum, and the beat of a heart. Currently, the international unit of time, the second, is defined by measuring the electronic transition frequency of caesium atoms. Time is also of significant social importance, having economic value as well as personal value, due to an awareness of the limited time in each day. In day-to-day life, the clock is consulted for periods less than a day whereas the calendar is consulted for periods longer than a day; increasingly, personal electronic devices display both calendars and clocks simultaneously. The number that marks the occurrence of an event as to hour or date is obtained by counting from a fiducial epoch—a central reference point. Artifacts from the Paleolithic suggest that the moon was used to reckon time as early as 6,000 years ago. Lunar calendars were among the first to appear, with years of either 12 or 13 lunar months; without intercalation to add days or months to some years, seasons quickly drift in a calendar based solely on twelve lunar months
14.
Frequency
–
Frequency is the number of occurrences of a repeating event per unit time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency. The period is the duration of time of one cycle in a repeating event; for example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period—the time interval between beats—is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, and radio waves. In physics and engineering disciplines, such as optics, acoustics, and radio, frequency is usually denoted by a Latin letter f or by the Greek letter ν (nu). For a periodic motion, the relation between the frequency and the period T is given by f = 1/T. The SI unit of frequency is the hertz, named after the German physicist Heinrich Hertz; a previous name for this unit was cycles per second. The SI unit for period is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period rather than frequency, while short and fast waves, like audio and radio, are usually described by their frequency instead of period. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes: y = sin(θ) = sin(kx), where dθ/dx = k is the wavenumber. In the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength. Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave: f = v/λ. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ. 
When waves from a monochromatic source travel from one medium to another, their frequency remains the same—only their wavelength and speed change. Frequency can be measured by counting the number of occurrences of the event within a specific time period and dividing the count by the length of the period; for example, if 71 events occur within 15 seconds, the frequency is 71/15 ≈ 4.7 Hz. This method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2 Tm), or a fractional error of Δf/f = 1/(2 f Tm), where Tm is the timing interval and f is the measured frequency. This error decreases with frequency, so it is a problem at low frequencies where the number of counts N is small. An older method of measuring the frequency of rotating or vibrating objects is to use a stroboscope
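The counting method and its gating error can be illustrated with a short Python sketch (the function name counted_frequency is illustrative, not a standard API):

```python
# Frequency by counting: f = N / Tm, with an average gating error of
# half a count over the timing interval Tm, i.e. 1/(2*Tm)
def counted_frequency(n_events, t_interval):
    f = n_events / t_interval
    gating_error = 1 / (2 * t_interval)  # average half-count error, in Hz
    return f, gating_error

f, err = counted_frequency(71, 15)
print(round(f, 2), round(err, 4))  # 4.73 0.0333
```

A longer timing interval Tm reduces the gating error, which is why low-frequency measurements use long gates.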
15.
Amplitude
–
The amplitude of a periodic variable is a measure of its change over a single period. There are various definitions of amplitude, which are all functions of the magnitude of the differences between the variable's extreme values. In older texts the phase of a periodic function is sometimes called the amplitude. Peak-to-peak amplitude is the change between peak and trough; with appropriate circuitry, peak-to-peak amplitudes of electric oscillations can be measured by meters or by viewing the waveform on an oscilloscope. Peak-to-peak is a straightforward measurement on an oscilloscope, the peaks of the waveform being easily identified and measured against the graticule. This remains a common way of specifying amplitude, but sometimes other measures of amplitude are more appropriate. Peak amplitude is used in audio system measurements, telecommunications and other areas where the measurand is a signal that swings above and below a reference value but is not sinusoidal. If the reference is zero, this is the maximum absolute value of the signal; if the reference is a mean value, the peak amplitude is the maximum absolute value of the difference from that reference. Semi-amplitude means half the peak-to-peak amplitude; some scientists use amplitude or peak amplitude to mean semi-amplitude. Semi-amplitude is the most widely used measure of orbital wobble in astronomy. Root mean square (RMS) amplitude is the square root of the time average of the squared signal, i.e. the RMS of the AC waveform. For complicated waveforms, especially non-repeating signals like noise, the RMS amplitude is used because it is both unambiguous and has physical significance. For example, the power transmitted by an acoustic or electromagnetic wave or by an electrical signal is proportional to the square of the RMS amplitude. For alternating current electric power, the universal practice is to specify RMS values of a sinusoidal waveform. One property of root mean square voltages and currents is that they produce the same heating effect as direct current in a given resistance. The peak-to-peak value is used, for example, when choosing rectifiers for power supplies, or when estimating the maximum voltage that insulation must withstand. 
Some common voltmeters are calibrated for RMS amplitude, but respond to the average value of a rectified waveform. Many digital voltmeters and all moving coil meters are in this category; the RMS calibration is only correct for a sine wave input, since the ratio between peak, average and RMS values depends on waveform. If the wave shape being measured is greatly different from a sine wave, the relationship between RMS and average value changes and the reading is in error. True RMS-responding meters were used in radio frequency measurements, where instruments measured the heating effect in a resistor to measure current. The advent of microprocessor-controlled meters capable of calculating RMS by sampling the waveform has made true RMS measurement commonplace
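The relationships among peak-to-peak, RMS, and rectified-average amplitude for a sine wave can be checked numerically. A minimal Python sketch, assuming a unit-amplitude sinusoid sampled over one full period:

```python
import math

N = 100000  # samples over exactly one period
A = 1.0     # amplitude
samples = [A * math.sin(2 * math.pi * n / N) for n in range(N)]

peak_to_peak = max(samples) - min(samples)
rms = math.sqrt(sum(s * s for s in samples) / N)
avg_rectified = sum(abs(s) for s in samples) / N

print(round(peak_to_peak, 3))        # 2.0
print(round(rms, 3))                 # 0.707, i.e. A / sqrt(2)
print(round(rms / avg_rectified, 3)) # 1.111, the sine-wave form factor
```

The last ratio (pi / (2*sqrt(2)) ≈ 1.111) is exactly the sine-wave correction built into average-responding, RMS-calibrated meters, which is why they misread non-sinusoidal waveforms.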
16.
Brightness
–
Brightness is an attribute of visual perception in which a source appears to be radiating or reflecting light. In other words, brightness is the perception elicited by the luminance of a visual target, and it is not necessarily proportional to luminance. It is a subjective attribute/property of an object being observed and one of the color appearance parameters of color appearance models. Brightness is an absolute term and should not be confused with lightness. The adjective bright derives from an Old English beorht with the same meaning via metathesis giving Middle English briht. The word is from a Common Germanic *berhtaz, ultimately from a PIE root with a closely related meaning, *bhereg- "white, bright". Brightness was formerly used as a synonym for the photometric term luminance. As defined by the US Federal Glossary of Telecommunication Terms, brightness should now be used only for non-quantitative references to physiological sensations and perceptions of light; a given target luminance can elicit different perceptions of brightness in different contexts (see, for example, White's illusion). With regard to stars, brightness is quantified as apparent magnitude. Brightness is, at least in some respects, the antonym of darkness. The United States Federal Trade Commission has assigned a meaning to brightness when applied to lamps: when appearing on light bulb packages, brightness means luminous flux, the total amount of light coming from a source, such as a lighting device. Luminance, the former meaning of brightness, is the amount of light per solid angle coming from an area. The term brightness is also used in discussions of sound timbres.
17.
Logarithm
–
In mathematics, the logarithm is the inverse operation to exponentiation. That means the logarithm of a number is the exponent to which another fixed number, the base, must be raised to produce that number. In simple cases the logarithm counts factors in repeated multiplication; for example, the base 10 logarithm of 1000 is 3. The logarithm of x to base b, denoted logb(x), is the unique real number y such that b^y = x. For example, log2(64) = 6, as 2^6 = 64. The logarithm to base 10 is called the common logarithm and has many applications in science and engineering. The natural logarithm has the number e as its base; its use is widespread in mathematics and physics. The binary logarithm uses base 2 and is used in computer science. Logarithms were introduced by John Napier in the early 17th century as a means to simplify calculations, and they were rapidly adopted by navigators, scientists, engineers, and others to perform computations more easily, using slide rules and logarithm tables. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century. Logarithmic scales reduce wide-ranging quantities to tiny scopes; for example, the decibel is a unit quantifying signal power log-ratios and amplitude log-ratios. In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae and in measurements of the complexity of algorithms; they describe musical intervals, appear in formulas counting prime numbers, inform some models in psychophysics, and can aid in forensic accounting. In the same way as the logarithm reverses exponentiation, the complex logarithm is the inverse function of the exponential function applied to complex numbers. The discrete logarithm is another variant; it has uses in public-key cryptography. The idea of logarithms is to reverse the operation of exponentiation, that is, raising a number to a power. For example, the third power of 2 is 8, because 8 is the product of three factors of 2: 2^3 = 2 × 2 × 2 = 8. 
It follows that the logarithm of 8 with respect to base 2 is 3: log2(8) = 3. The third power of some number b is the product of three factors equal to b. More generally, raising b to the n-th power, where n is a natural number, is done by multiplying n factors equal to b. The n-th power of b is written b^n, so that b^n = b × b × ⋯ × b (n factors). Exponentiation may be extended to b^y, where b is a positive number and the exponent y is any real number. For example, b^−1 is the reciprocal of b, that is, 1/b. The logarithm of a positive real number x with respect to base b, a positive real number not equal to 1, is the exponent by which b must be raised to yield x
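The inverse relationship between exponentiation and the logarithm can be demonstrated in a few lines of Python (math.log10 and math.log2 are the common and binary logarithms):

```python
import math

# The logarithm inverts exponentiation: if b**y == x, then log_b(x) == y
assert math.log10(1000) == 3.0  # 10**3 == 1000
assert math.log2(64) == 6.0     # 2**6 == 64

# Round-trip: exponentiate, then take the logarithm back
b, y = 2, 13
x = b ** y
print(math.log2(x))  # 13.0
```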
18.
Decibel
–
The decibel (dB) is a logarithmic unit used to express the ratio of two values of a physical quantity. One of these values is often a reference value, in which case the decibel is used to express the level of the other value relative to this reference. When used in this way, the decibel symbol is often qualified with a suffix that indicates the reference quantity that has been used or some other property of the quantity being measured. For example, dBm indicates a power level referenced to one milliwatt. There are two different scales used when expressing a ratio in decibels, depending on the nature of the quantities. When expressing power quantities, the number of decibels is ten times the logarithm to base 10 of the ratio of the two power quantities; that is, a change in power by a factor of 10 corresponds to a 10 dB change in level. When expressing field quantities, a change in amplitude by a factor of 10 corresponds to a 20 dB change in level, because power is proportional to the square of the field amplitude. The decibel scales differ by this factor of two so that comparisons can be made between related power and field quantities when they are expressed in decibels. The definition of the decibel is based on the measurement of power in telephony of the early 20th century in the Bell System in the United States. One decibel is one tenth of one bel, named in honor of Alexander Graham Bell. Today, the decibel is used for a wide variety of measurements in science and engineering, most prominently in acoustics, electronics, and control theory. In electronics, the gains of amplifiers, attenuation of signals, and signal-to-noise ratios are often expressed in decibels. The decibel originates from methods used to quantify signal loss in telegraph and telephone circuits. The unit for loss was originally Miles of Standard Cable (MSC); the standard telephone cable implied was a cable having uniformly distributed resistance of 88 ohms per loop mile and uniformly distributed shunt capacitance of 0.054 microfarad per mile. 
1 TU was defined such that the number of TUs was ten times the base-10 logarithm of the ratio of measured power to a reference power level. The definition was conveniently chosen such that 1 TU approximated 1 MSC. In 1928, the Bell system renamed the TU the decibel, being one tenth of a newly defined unit for the base-10 logarithm of the power ratio. That unit was named the bel, in honor of the telecommunications pioneer Alexander Graham Bell. The bel is seldom used, as the decibel was the proposed working unit. The decibel is recognized by international bodies such as the International Electrotechnical Commission (IEC). The term field quantity is deprecated by ISO 80000-1, which favors root-power quantity. In spite of their widespread use, suffixes are not recognized by the IEC or ISO. The ISO Standard 80000-3:2006 defines the decibel as one-tenth of a bel: 1 dB = 0.1 B
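The two decibel scales described above reduce to two one-line formulas. A minimal Python sketch (the helper names power_db and amplitude_db are illustrative):

```python
import math

def power_db(p, p_ref):
    """Level in dB of power p relative to reference power p_ref."""
    return 10 * math.log10(p / p_ref)

def amplitude_db(a, a_ref):
    """Level in dB of a field (root-power) quantity relative to a_ref."""
    return 20 * math.log10(a / a_ref)

print(power_db(10, 1))          # 10.0 -> a factor of 10 in power is 10 dB
print(amplitude_db(10, 1))      # 20.0 -> a factor of 10 in amplitude is 20 dB
print(power_db(0.001, 0.001))   # 0.0  -> 1 mW referenced to 1 mW is 0 dBm
```

The factor of 20 for field quantities follows from power being proportional to the square of the amplitude: 10·log10(a²/a_ref²) = 20·log10(a/a_ref).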
19.
Frequency modulation
–
In telecommunications and signal processing, frequency modulation (FM) is the encoding of information in a carrier wave by varying the instantaneous frequency of the wave. This contrasts with amplitude modulation, in which the amplitude of the carrier wave varies while its frequency remains constant. When the information is digital, the modulation technique is known as frequency-shift keying (FSK); FSK is widely used in modems and fax modems, and can also be used to send Morse code. Frequency modulation is used for FM radio broadcasting, and for this reason most music is broadcast over FM radio. Frequency modulation has a close relationship with phase modulation; phase modulation is often used as an intermediate step to achieve frequency modulation. Mathematically both of these are considered a special case of quadrature amplitude modulation. While most of the energy of the signal is contained within fc ± fΔ, the frequency spectrum of an actual FM signal has components extending infinitely, although their amplitude decreases and higher-order components are often neglected in practical design problems. Mathematically, a baseband modulating signal may be approximated by a sinusoidal continuous wave signal with a frequency fm; this method is also named single-tone modulation. As in other modulation systems, the modulation index indicates by how much the modulated variable varies around its unmodulated level, i.e. the maximum deviation of the instantaneous frequency from the carrier frequency. For a sine wave modulation, the modulation index is the ratio of the peak frequency deviation of the carrier wave to the frequency of the modulating sine wave. If h ≪ 1, the modulation is called narrowband FM; sometimes a modulation index h < 0.3 rad is considered narrowband FM, otherwise wideband FM. In the case of digital modulation, the carrier fc is never transmitted; rather, one of two frequencies is transmitted, either fc + Δf or fc − Δf, depending on the binary state 0 or 1 of the modulation signal. 
If h ≫ 1, the modulation is called wideband FM. If the frequency deviation is held constant and the modulation frequency increased, the spacing between spectral components increases. The carrier and sideband amplitudes are illustrated for different modulation indices of FM signals; for particular values of the modulation index, the carrier amplitude becomes zero and all the signal power is in the sidebands. Since the sidebands are on both sides of the carrier, their count is doubled, and then multiplied by the modulating frequency to find the bandwidth. For example, 3 kHz deviation modulated by a 2.2 kHz audio tone produces a modulation index of 1.36. Suppose that we limit ourselves to only those sidebands that have a relative amplitude of at least 0.01
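The modulation-index arithmetic from the example above can be sketched in Python; the bandwidth estimate uses Carson's rule, 2(fΔ + fm), a standard approximation that is not derived in this text:

```python
f_dev = 3000.0  # peak frequency deviation, Hz
f_mod = 2200.0  # modulating tone frequency, Hz

h = f_dev / f_mod                 # modulation index
carson_bw = 2 * (f_dev + f_mod)   # Carson's rule bandwidth estimate, Hz

print(round(h, 2))  # 1.36
print(carson_bw)    # 10400.0
```

Carson's rule captures roughly 98% of the signal power; counting individual Bessel-function sidebands above a chosen amplitude threshold, as the text describes, gives a finer estimate.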
20.
Sinusoidal
–
A sine wave or sinusoid is a mathematical curve that describes a smooth repetitive oscillation. It is named after the sine function, of which it is the graph. It occurs often in pure and applied mathematics, as well as physics, engineering, signal processing and many other fields. Its most basic form as a function of time t is y(t) = A sin(2πft + φ) = A sin(ωt + φ), where A is the amplitude; f is the ordinary frequency, the number of oscillations that occur each second of time; ω = 2πf is the angular frequency, the rate of change of the function argument in units of radians per second; and φ is the phase. When φ is non-zero, the entire waveform appears to be shifted in time by the amount φ/ω seconds; a negative value represents a delay, and a positive value represents an advance. The sine wave is important in physics because it retains its shape when added to another sine wave of the same frequency and arbitrary phase. It is the only periodic waveform that has this property, which leads to its importance in Fourier analysis and makes it acoustically unique. The wavenumber is related to the angular frequency by k = ω/v = 2πf/v = 2π/λ, where λ is the wavelength, f is the frequency, and v is the linear speed. This equation gives a wave for a single dimension; thus the generalized equation gives the displacement of the wave at a position x at time t along a single line. This could, for example, be considered the value of a wave along a wire. In two or three spatial dimensions, the same equation describes a travelling plane wave if position x and wavenumber k are interpreted as vectors, and their product as a dot product. For more complex waves, such as the height of a water wave in a pond after a stone has been dropped in, more complex equations are needed. This wave pattern occurs often in nature, including wind waves and sound waves. A cosine wave is said to be sinusoidal, because cos(x) = sin(x + π/2), which is also a sine wave with a phase-shift of π/2 radians. 
Because of this head start, it is often said that the cosine function leads the sine function, or the sine lags the cosine. The human ear can recognize single sine waves as sounding clear because sine waves are representations of a single frequency with no harmonics. Presence of higher harmonics in addition to the fundamental causes variation in the timbre. On the other hand, if the sound contains aperiodic waves along with sine waves, then the sound will be perceived as noisy, as noise is characterized as being aperiodic or having a non-repetitive pattern. In 1822, French mathematician Joseph Fourier discovered that sinusoidal waves can be used as simple building blocks to describe and approximate any periodic waveform
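The basic form y(t) = A sin(2πft + φ) and the quarter-cycle lead of the cosine translate directly into code. A minimal Python sketch (the function name sinusoid is illustrative):

```python
import math

def sinusoid(t, amplitude=1.0, freq=1.0, phase=0.0):
    """y(t) = A * sin(2*pi*f*t + phase)."""
    return amplitude * math.sin(2 * math.pi * freq * t + phase)

# At a quarter period (t = 1/(4f)) a zero-phase sine reaches its peak A
print(sinusoid(0.25, amplitude=1.0, freq=1.0))  # 1.0

# cos(x) == sin(x + pi/2): the cosine leads the sine by a quarter cycle
x = 0.3
print(abs(math.cos(x) - math.sin(x + math.pi / 2)) < 1e-12)  # True
```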
21.
PAL
–
Phase Alternating Line (PAL) is a colour encoding system for analogue television, used in broadcast television systems in most countries broadcasting at 625 lines / 50 fields per second. Other common colour encoding systems are NTSC and SECAM. All the countries using PAL are currently in the process of conversion, or have already converted, to digital standards such as DVB, ISDB or DTMB. This page primarily discusses the PAL colour encoding system; the articles on broadcast television systems and analogue television further describe frame rates, image resolution and audio modulation. To overcome NTSC's shortcomings, alternative standards were devised, resulting in the development of the PAL standard. The goal was to provide a colour TV standard for the European picture frequency of 50 fields per second, and to find a way to eliminate the problems with NTSC. PAL was developed by Walter Bruch at Telefunken in Hannover, Germany, with important input from Dr. Kruse. The format was patented by Telefunken in 1962, citing Bruch as inventor, and unveiled to members of the European Broadcasting Union on 3 January 1963. When asked why the system was named PAL and not Bruch, the inventor answered that a Bruch system would not have sold very well. The first broadcasts began in the United Kingdom in June 1967; the one BBC channel initially using the broadcast standard was BBC2, which had been the first UK TV service to introduce 625 lines in 1964. The Telefunken PALcolor 708T was the first commercial PAL TV set, followed by the Loewe-Farbfernseher S920 and F900. Telefunken was later bought by the French electronics manufacturer Thomson; Thomson also bought the Compagnie Générale de Télévision, where Henri de France developed SECAM, the first European standard for colour television. 
The term PAL was often used informally and somewhat imprecisely to refer to the 625-line/50 Hz television system in general; accordingly, DVDs were labelled as PAL or NTSC even though technically the discs do not carry either PAL or NTSC composite signal. CCIR 625/50 and EIA 525/60 are the proper names for these scanning standards; PAL and NTSC are colour encoding systems. Both the PAL and the NTSC system use a quadrature amplitude modulated subcarrier carrying the chrominance information, added to the video signal to form a composite video baseband signal. The frequency of this subcarrier is 4.43361875 MHz for PAL and NTSC4.43. The SECAM system, on the other hand, uses a frequency modulation scheme on its two line-alternate colour subcarriers, 4.25000 and 4.40625 MHz. Early PAL receivers relied on the eye to do the phase-error cancelling; however, this resulted in a comb-like effect, known as Hanover bars, on larger phase errors. The overall effect is that phase errors result in saturation changes, which are less objectionable than the equivalent hue changes of NTSC. In any case, NTSC, PAL, and SECAM all have chrominance bandwidth reduced greatly compared to the luminance signal. The 4.43361875 MHz frequency of the colour carrier is a result of 283.75 colour clock cycles per line plus a 25 Hz offset to avoid interference. Since the line frequency is 15625 Hz, the carrier frequency calculates as follows: 4.43361875 MHz = 283.75 × 15625 Hz + 25 Hz
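The subcarrier arithmetic above is easy to verify. A minimal Python sketch:

```python
line_freq = 15625          # Hz: 625 lines per frame * 25 frames per second
cycles_per_line = 283.75   # colour clock cycles per line
offset = 25                # Hz offset to avoid interference patterns

subcarrier = cycles_per_line * line_freq + offset
print(subcarrier)  # 4433618.75, i.e. 4.43361875 MHz
```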
22.
Digital signal processing
–
Digital signal processing (DSP) is the use of digital processing, such as by computers, to perform a wide variety of signal processing operations. The signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. Digital signal processing and analog signal processing are subfields of signal processing. Digital signal processing can involve linear or nonlinear operations; nonlinear signal processing is closely related to nonlinear system identification and can be implemented in the time, frequency, or spatio-temporal domains. DSP is applicable to both streaming data and static (stored) data. The increasing use of computers has resulted in the increased use of, and need for, digital signal processing. To digitally analyze and manipulate an analog signal, it must be digitized with an analog-to-digital converter. Sampling is usually carried out in two stages, discretization and quantization. Discretization means that the signal is divided into equal intervals of time, each represented by a single measurement of amplitude. Quantization means each amplitude measurement is approximated by a value from a finite set; rounding real numbers to integers is an example. The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency component of the signal. In practice, the sampling frequency is often significantly higher than twice that required by the signal's limited bandwidth. Theoretical DSP analyses and derivations are typically performed on discrete-time signal models with no amplitude inaccuracies, whereas numerical methods require a quantized signal, such as those produced by an analog-to-digital converter. The processed result might be a frequency spectrum or a set of statistics, but often it is another quantized signal that is converted back to analog form by a digital-to-analog converter. In DSP, engineers usually study digital signals in one of the following domains: time domain, spatial domain, or frequency domain. 
They choose the domain in which to process a signal by making an informed assumption as to which domain best represents the essential characteristics of the signal. The most common processing approach in the time or space domain is enhancement of the input signal through a method called filtering. Digital filtering generally consists of some linear transformation of a number of surrounding samples around the current sample of the input or output signal. There are various ways to characterize filters; for example, a linear filter is a linear transformation of input samples; a causal filter uses only previous samples of the input or output signals; and a non-causal filter can usually be changed into a causal filter by adding a delay to it
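As a sketch of the filtering idea, the following illustrative Python implements a simple causal linear filter, a moving average that combines each input sample with the previous ones (zeros are assumed before the signal starts):

```python
def moving_average(signal, n=3):
    """Causal FIR filter: each output sample is the mean of the current
    and previous n-1 input samples (zero-padded before the signal starts)."""
    padded = [0.0] * (n - 1) + list(signal)
    return [sum(padded[i:i + n]) / n for i in range(len(signal))]

print(moving_average([3.0, 3.0, 9.0, 3.0], n=3))
# [1.0, 2.0, 5.0, 5.0]
```

The filter is linear (scaling or summing inputs scales or sums outputs) and causal (output i depends only on inputs up to i), matching the characterizations in the text.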
23.
Sampling (signal processing)
–
In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave (a continuous signal) to a sequence of samples (a discrete-time signal). A sample is a value or set of values at a point in time and/or space. A sampler is a subsystem or operation that extracts samples from a continuous signal; a theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points. Sampling can be done for functions varying in space, time, or any other dimension. For a function s(t) sampled at a constant interval T, the sampled function is given by the sequence s(nT), for integer values of n. The sampling frequency or sampling rate, fs, is the number of samples obtained in one second (fs = 1/T). Reconstructing a continuous function from samples is done by interpolation algorithms. The Whittaker–Shannon interpolation formula is mathematically equivalent to an ideal lowpass filter whose input is a sequence of Dirac delta functions that are modulated (multiplied) by the sample values. When the time interval between adjacent samples is a constant, the sequence of delta functions is called a Dirac comb. Mathematically, the modulated Dirac comb is equivalent to the product of the comb function with s(t). That purely mathematical abstraction is sometimes referred to as impulse sampling. Most sampled signals are not simply stored and reconstructed, but the fidelity of a theoretical reconstruction is a customary measure of the effectiveness of sampling. That fidelity is reduced when s(t) contains frequency components whose periodicity is smaller than 2 samples; the quantity ½ cycle/sample × fs samples/sec = fs/2 cycles/sec is known as the Nyquist frequency of the sampler. Therefore, s(t) is usually the output of a lowpass filter, functionally known as an anti-aliasing filter. Without an anti-aliasing filter, frequencies higher than the Nyquist frequency will influence the samples in a way that is misinterpreted by the interpolation process. In practice, the continuous signal is sampled using an analog-to-digital converter. 
This results in deviations from the theoretically perfect reconstruction, collectively referred to as distortion. Various types of distortion can occur, including: Aliasing, some amount of which is inevitable because only theoretical, infinitely long functions can have no frequency content above the Nyquist frequency; aliasing can be made arbitrarily small by using a sufficiently large order of the anti-aliasing filter. Aperture error, which results from the fact that the sample is obtained as a time average within a sampling region; in a capacitor-based sample-and-hold circuit, aperture error is introduced because the capacitor cannot instantly change voltage, thus requiring the sample to have non-zero width. Jitter, or deviation from the precise sample timing intervals. Noise, including thermal sensor noise, analog circuit noise, etc
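Aliasing itself can be demonstrated numerically: a sinusoid above the Nyquist frequency produces exactly the same samples as one below it. A minimal Python sketch, assuming a 10 Hz sampling rate:

```python
import math

fs = 10  # sampling rate, Hz; the Nyquist frequency is fs/2 = 5 Hz
n = range(20)

# A 7 Hz sinusoid sampled at 10 Hz is indistinguishable from a -3 Hz one:
# sin(2*pi*7*k/fs) == sin(2*pi*(7 - fs)*k/fs) at every sample index k
s7 = [math.sin(2 * math.pi * 7 * k / fs) for k in n]
s3 = [math.sin(2 * math.pi * -3 * k / fs) for k in n]

print(all(abs(a - b) < 1e-9 for a, b in zip(s7, s3)))  # True
```

No interpolation algorithm can tell these two signals apart from the samples alone, which is why the anti-aliasing filter must remove content above fs/2 before sampling.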
24.
Time series
–
A time series is a series of data points indexed in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time; thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average. Time series are very frequently plotted via line charts. Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. Time series data have a natural temporal ordering. This makes time series analysis distinct from cross-sectional studies, in which there is no natural ordering of the observations. Time series analysis is also distinct from spatial data analysis, where the observations typically relate to geographical locations. A stochastic model for a time series will generally reflect the fact that observations close together in time will be more closely related than observations further apart. Methods for time series analysis may be divided into two classes: frequency-domain methods and time-domain methods. The former include spectral analysis and wavelet analysis; the latter include auto-correlation and cross-correlation analysis. In the time domain, correlation and analysis can be made in a filter-like manner using scaled correlation. Additionally, time series analysis techniques may be divided into parametric and non-parametric methods. The parametric approaches assume that the underlying stationary stochastic process has a certain structure which can be described using a small number of parameters; in these approaches, the task is to estimate the parameters of the model that describes the stochastic process. 
By contrast, non-parametric approaches explicitly estimate the covariance or the spectrum of the process without assuming that the process has any particular structure. Methods of time series analysis may also be divided into linear and non-linear, and univariate and multivariate. A time series is one type of panel data. Panel data is the general class, a multidimensional data set, whereas a time series data set is a one-dimensional panel. A data set may exhibit characteristics of both panel data and time series data. One way to tell is to ask what makes one data record unique from the other records. If the answer is the time data field, then this is a time series data set candidate. If determining a unique record requires a time data field and an additional identifier which is unrelated to time, then the data set is a panel data candidate. If the differentiation lies on the non-time identifier alone, then the data set is a cross-sectional data set candidate
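A basic time-domain tool mentioned above is auto-correlation. A minimal Python sketch of a naive sample autocorrelation estimator (the function name is illustrative):

```python
def autocorrelation(x, lag):
    """Sample autocorrelation of a series x at a given lag (naive estimator)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

# A strictly alternating series is strongly anti-correlated at lag 1
x = [1.0, -1.0] * 10
print(autocorrelation(x, 1))  # -0.95
```

Values near +1 indicate that observations close together in time move together; values near -1 indicate they move oppositely, as the stochastic-model discussion above suggests.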
25.
Window function
–
In signal processing, a window function is a mathematical function that is zero-valued outside of some chosen interval. For instance, a function that is constant inside the interval and zero elsewhere is called a rectangular window. In typical applications, the window functions used are non-negative, smooth, bell-shaped curves, though rectangle, triangle, and other functions can also be used. Applications of window functions include spectral analysis/modification/resynthesis and the design of finite impulse response filters, as well as beamforming and antenna design. The Fourier transform of the function cos ωt is zero except at frequency ±ω. However, many other functions and waveforms do not have convenient closed-form transforms. Alternatively, one might be interested in their spectral content only during a certain time period. In either case, the Fourier transform can be applied on one or more finite intervals of the waveform; in general, the transform is applied to the product of the waveform and a window function. Any window affects the spectral estimate computed by this method. Windowing of a simple waveform like cos ωt causes its Fourier transform to develop non-zero values at frequencies other than ω, an effect known as leakage. The leakage tends to be worst near ω and least at frequencies farthest from ω. If the waveform under analysis comprises two sinusoids of different frequencies, leakage can interfere with the ability to distinguish them spectrally. If their frequencies are dissimilar and one component is weaker, then leakage from the stronger component can obscure the weaker one's presence. But if the frequencies are similar, leakage can render them unresolvable even when the sinusoids are of equal strength. The rectangular window has excellent resolution characteristics for sinusoids of comparable strength, but it is a poor choice for sinusoids of disparate amplitudes; this characteristic is sometimes described as low dynamic range. 
At the other extreme of dynamic range are the windows with the poorest resolution and sensitivity; the poor sensitivity arises because noise produces a stronger response with high-dynamic-range windows than with high-resolution windows. Therefore, high-dynamic-range windows are most often justified in wideband applications. In between the extremes are moderate windows, such as Hamming and Hann. They are commonly used in narrowband applications, such as analyzing the spectrum of a telephone channel. In summary, spectral analysis involves a trade-off between resolving comparable-strength components with similar frequencies and resolving disparate-strength components with dissimilar frequencies. That trade-off occurs when the window function is chosen. When the input waveform is time-sampled, instead of continuous, the analysis is usually done by applying a window function and then computing a discrete Fourier transform (DFT), but the DFT provides only a sparse sampling of the actual discrete-time Fourier transform (DTFT) spectrum. Figure 1 shows a portion of the DTFT for a rectangularly-windowed sinusoid; the actual frequency of the sinusoid is indicated as 0 on the horizontal axis.
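The leakage behavior described above can be observed numerically. The following minimal sketch in plain Python compares the DFT magnitude, far away from the sinusoid's frequency, under a rectangular window and under a Hann window; the signal length, the off-bin frequency, and the bin ranges are arbitrary choices for this illustration.

```python
import math, cmath

def dft_mag(x, k):
    """Magnitude of the k-th DFT bin of a sequence x."""
    n_pts = len(x)
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_pts)
                   for n in range(n_pts)))

N = 64
f = 10.5  # frequency halfway between DFT bins, so leakage is pronounced
rect = [math.cos(2 * math.pi * f * n / N) for n in range(N)]
# the same samples multiplied by a (periodic) Hann window
hann = [v * (0.5 - 0.5 * math.cos(2 * math.pi * n / N))
        for n, v in enumerate(rect)]

# total leakage magnitude in a band of bins far from the sinusoid's frequency
far_rect = sum(dft_mag(rect, k) for k in range(25, 32))
far_hann = sum(dft_mag(hann, k) for k in range(25, 32))
```

Far from ω, the Hann window's leakage is far below the rectangular window's, at the cost of a wider main lobe, which is the resolution/dynamic-range trade-off in miniature.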
26.
Short-time Fourier transform
–
In practice, the procedure for computing STFTs is to divide a longer time signal into shorter segments of equal length and then compute the Fourier transform separately on each shorter segment. This reveals the Fourier spectrum on each shorter segment; one then usually plots the changing spectra as a function of time. Simply put, in the continuous-time case, the function to be transformed is multiplied by a window function which is nonzero for only a short period of time, and the Fourier transform of the resulting signal is taken as the window is slid along the time axis. X(τ, ω) is essentially the Fourier transform of x(t)w(t − τ), a complex function representing the phase and magnitude of the signal over time and frequency. Often phase unwrapping is employed along either or both the time axis, τ, and frequency axis, ω, to suppress any jump discontinuity of the phase result of the STFT. The time index τ is normally considered to be slow time. In the discrete-time case, the data to be transformed could be broken up into chunks or frames, which usually overlap each other. Each chunk is Fourier transformed, and the complex result is added to a matrix. This can be expressed as

STFT{x[n]}(m, ω) ≡ X(m, ω) = Σ (n = −∞ to ∞) x[n] w[n − m] e^(−jωn),

with signal x[n] and window w[n]. In this case, m is discrete and ω is continuous, but in most typical applications the STFT is performed on a computer using the Fast Fourier Transform, so both variables are discrete and quantized. If only a small number of ω are desired, or if the STFT is desired to be evaluated for every shift m of the window, then the STFT may be more efficiently evaluated using a sliding DFT algorithm. The STFT is invertible; that is, the original signal can be recovered from the transform by the inverse STFT. The most widely accepted way of inverting the STFT is the overlap-add (OLA) method, which makes for a versatile signal processing method referred to as the overlap-and-add-with-modifications method. Given the width and definition of the window function w(t), we initially require the area of the window function to be scaled so that

∫ (−∞ to ∞) w(τ) dτ = 1.

It easily follows that ∫ (−∞ to ∞) w(t − τ) dτ = 1 for all t, and hence x(t) = x(t) ∫ (−∞ to ∞) w(t − τ) dτ = ∫ (−∞ to ∞) x(t) w(t − τ) dτ. 
The continuous Fourier transform is

X(ω) = ∫ (−∞ to ∞) x(t) e^(−jωt) dt.

Substituting x(t) from above:

X(ω) = ∫ (−∞ to ∞) [∫ (−∞ to ∞) x(t) w(t − τ) dτ] e^(−jωt) dt = ∫ (−∞ to ∞) ∫ (−∞ to ∞) x(t) w(t − τ) e^(−jωt) dτ dt.

Swapping the order of integration:

X(ω) = ∫ (−∞ to ∞) ∫ (−∞ to ∞) x(t) w(t − τ) e^(−jωt) dt dτ = ∫ (−∞ to ∞) X(τ, ω) dτ.
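The chunked discrete-time procedure described above can be sketched directly. This is a toy implementation in plain Python; the window length, hop size, and the use of a Hann window are illustrative choices, and a practical implementation would use an FFT rather than a direct DFT.

```python
import math, cmath

def stft(x, win_len, hop):
    """Discrete STFT: slide a Hann window along x and DFT each frame."""
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / win_len) for n in range(win_len)]
    frames = []
    for start in range(0, len(x) - win_len + 1, hop):
        seg = [x[start + n] * win[n] for n in range(win_len)]
        # direct DFT of the windowed frame (FFT would be used in practice)
        frames.append([sum(seg[n] * cmath.exp(-2j * math.pi * k * n / win_len)
                           for n in range(win_len)) for k in range(win_len)])
    return frames

# a signal whose frequency jumps halfway: DFT bin 4, then bin 8 (window = 32)
sig = [math.cos(2 * math.pi * 4 * n / 32) for n in range(128)] + \
      [math.cos(2 * math.pi * 8 * n / 32) for n in range(128)]
spec = stft(sig, 32, 16)
```

Plotting the magnitudes of `spec` over time would show the spectral peak moving from bin 4 to bin 8, which is precisely the "changing spectra as a function of time" that the STFT reveals.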
27.
Great tit
–
The great tit is a passerine bird in the tit family Paridae. Until 2005 this species was lumped with numerous other subspecies; the great tit remains the most widespread species in the genus Parus. The great tit is a distinctive bird with a black head and neck, prominent white cheeks, olive upperparts and yellow underparts, with some variation amongst the numerous subspecies. It is predominantly insectivorous in the summer, but will consume a wider range of food items in the winter months. Like all tits it is a cavity nester, usually nesting in a hole in a tree. The female lays around 12 eggs and incubates them alone, although both parents raise the chicks. In most years the pair will raise two broods. The nests may be raided by woodpeckers, squirrels and weasels and infested with fleas, and adults may be hunted by sparrowhawks. The great tit has adapted well to changes in the environment and is a common and familiar bird in urban parks. The great tit is also an important study species in ornithology. The great tit was originally described under its current binomial name by Linnaeus in his 18th-century work Systema Naturae. Its scientific name is derived from the Latin parus, "tit", and maior, "larger". The great tit was formerly treated as ranging from Britain to Japan and south to the islands of Indonesia, with 36 described subspecies ascribed to four main species groups. The three bokharensis subspecies were often treated as a separate species, Parus bokharensis, the Turkestan tit. The divergence between the bokharensis and major groups was estimated to have occurred about half a million years ago. The study also examined hybrids between representatives of the major and minor groups in the Amur Valley, where the two meet; hybrids were rare, suggesting that there were some reproductive barriers between the two groups. 
The study recommended that the two eastern groups be split out as new species, the cinereous tit and the Japanese tit; this taxonomy has been followed by some authorities, for example the IOC World Bird List. The nominate subspecies of the great tit is the most widespread, its range stretching from the Iberian Peninsula to the Amur Valley and from Scandinavia to the Middle East. The other subspecies have much more restricted distributions, four being restricted to islands. The dominance of a single, morphologically uniform subspecies over such a large area suggests that the nominate race rapidly recolonised a large area after the last glacial epoch. This hypothesis is supported by genetic studies which suggest a geologically recent genetic bottleneck followed by a rapid population expansion. The genus Parus once held most of the species of tit in the family Paridae. The great tit was retained in Parus, which, along with Cyanistes, comprises a lineage of tits known as the non-hoarders, with reference to the hoarding behaviour of members of the other clade. The genus Parus is still the largest in the family. Other than those species formerly considered to be subspecies, the great tit's closest relatives are the white-naped and green-backed tits of southern Asia.
28.
Speech synthesis
–
Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech computer or speech synthesizer. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations like phonetic transcriptions into speech. Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units: a system that stores phones or diphones provides the largest output range, while for specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract. The quality of a speech synthesizer is judged by its similarity to the human voice and by its ability to be understood clearly. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written words on a home computer. Many computer operating systems have included speech synthesizers since the early 1990s. A text-to-speech system is composed of two parts: a front-end and a back-end. The front-end has two major tasks. First, it converts raw text containing symbols like numbers and abbreviations into the equivalent of written-out words; this process is called text normalization, pre-processing, or tokenization. The front-end then assigns phonetic transcriptions to each word, and divides and marks the text into prosodic units, like phrases and clauses. The process of assigning phonetic transcriptions to words is called text-to-phoneme or grapheme-to-phoneme conversion. Phonetic transcriptions and prosody information together make up the symbolic linguistic representation that is output by the front-end. The back-end, often referred to as the synthesizer, then converts the symbolic linguistic representation into sound. In certain systems, this includes the computation of the target prosody. 
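The two front-end tasks just described can be sketched as a toy pipeline. Everything concrete here, the abbreviation table, the number table, and the small ARPABET-like lexicon, is a hypothetical stand-in invented for illustration; real systems use far richer normalization rules and pronunciation dictionaries.

```python
import re

# Toy tables for text normalization (hypothetical, for illustration only)
NUMBERS = {"2": "two", "3": "three", "10": "ten"}
ABBREVIATIONS = {"Dr.": "Doctor", "St.": "Street"}

def normalize(text):
    """Front-end task 1: expand digits and known abbreviations into words."""
    for abbr, full in ABBREVIATIONS.items():
        text = text.replace(abbr, full)
    return re.sub(r"\d+", lambda m: NUMBERS.get(m.group(), m.group()), text)

# Toy grapheme-to-phoneme lexicon (ARPABET-like symbols)
LEXICON = {"ten": "T EH N", "cats": "K AE T S"}

def to_phonemes(text):
    """Front-end task 2: look each normalized word up; unknowns pass through."""
    return [LEXICON.get(w.lower(), w) for w in normalize(text).split()]
```

A back-end would then turn this phoneme sequence, plus prosody information, into audio.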
Long before the invention of electronic signal processing, some people tried to build machines to emulate human speech. Some early legends of the existence of "Brazen Heads" involved Pope Silvester II and Albertus Magnus. There followed the bellows-operated acoustic-mechanical speech machine of Wolfgang von Kempelen of Pressburg, Hungary, described in a 1791 paper. This machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. In 1837, Charles Wheatstone produced a speaking machine based on von Kempelen's design, and in 1846, Joseph Faber exhibited the Euphonia. In 1923 Paget resurrected Wheatstone's design. In the 1930s Bell Labs developed the vocoder, which automatically analyzed speech into its fundamental tones and resonances. From his work on the vocoder, Homer Dudley developed a keyboard-operated voice-synthesizer called The Voder. Dr. Franklin S. Cooper and his colleagues at Haskins Laboratories built the Pattern Playback in the late 1940s and completed it in 1950.
29.
Electronic music
–
In general, a distinction can be made between sound produced using electromechanical means and that produced using electronic technology. Examples of electromechanical sound-producing devices include the telharmonium and the Hammond organ; purely electronic sound production can be achieved using devices such as the theremin, sound synthesizer, and computer. During the 1920s and 1930s, electronic instruments were introduced and the first compositions for electronic instruments were written. Musique concrète, created in Paris in 1948, was based on editing together recorded fragments of natural and industrial sounds, while music produced solely from electronic generators was first produced in Germany in 1953. Electronic music was also created in Japan and the United States beginning in the 1950s. An important new development was the advent of computers for the purpose of composing music; algorithmic composition was first demonstrated in Australia in 1951. In America and Europe, live electronics were pioneered in the early 1960s. During the 1970s to early 1980s, the monophonic Minimoog became the most widely used synthesizer in both popular and electronic art music. In the 1980s, electronic music became dominant in popular music, with a greater reliance on synthesizers and the adoption of programmable drum machines. Electronically produced music became prevalent in the popular domain by the 1990s. Contemporary electronic music includes many varieties and ranges from experimental art music to popular forms such as electronic dance music. Today, pop electronic music is most recognizable in its 4/4 form and is more connected with mainstream culture than preceding forms were. At the turn of the 20th century, experimentation with emerging electronics led to the first electronic musical instruments. These initial inventions were not sold, but were instead used in demonstrations and public performances. 
The audiences were presented with reproductions of existing music instead of new compositions for the instruments. While some were considered novelties and produced simple tones, the Telharmonium accurately synthesized the sound of orchestral instruments. It achieved viable public interest and made commercial progress into streaming music through telephone networks. Critics of musical conventions at the time saw promise in these developments: Ferruccio Busoni encouraged the composition of microtonal music allowed for by electronic instruments, and predicted the use of machines in future music, writing the influential Sketch of a New Esthetic of Music. Futurists such as Francesco Balilla Pratella and Luigi Russolo began composing music with acoustic noise to evoke the sound of machinery, and predicted expansions in timbre allowed for by electronics in the influential manifesto The Art of Noises. Developments of the vacuum tube led to electronic instruments that were smaller, amplified, and more practical for performance. In particular, the theremin, ondes Martenot and trautonium were commercially produced by the early 1930s. From the late 1920s, the increased practicality of electronic instruments influenced composers such as Joseph Schillinger to adopt them.
30.
Steganography
–
Steganography is the practice of concealing a file, message, image, or video within another file, message, image, or video. The word steganography combines the Greek words steganos, meaning covered, concealed, or protected, and graphein, meaning writing. The first recorded use of the term was in 1499 by Johannes Trithemius in his Steganographia, a treatise on cryptography and steganography disguised as a book on magic. Generally, the hidden messages appear to be something else: images, articles, shopping lists. For example, the hidden message may be in invisible ink between the visible lines of a private letter. Some implementations of steganography that lack a shared secret are forms of security through obscurity. The advantage of steganography over cryptography alone is that the intended secret message does not attract attention to itself as an object of scrutiny. Plainly visible encrypted messages, no matter how unbreakable, arouse interest, and may in themselves be incriminating in countries where encryption is illegal. Steganography includes the concealment of information within computer files. In digital steganography, electronic communications may include steganographic coding inside of a transport layer, such as a document file or image file. Media files are ideal for steganographic transmission because of their large size. The first recorded uses of steganography can be traced back to 440 BC, when Herodotus mentions two examples in his Histories. Demaratus sent a warning about a forthcoming attack to Greece by writing it directly on the wooden backing of a wax tablet before applying its beeswax surface. Wax tablets were in common use then as reusable writing surfaces, sometimes used for shorthand. In his work Polygraphiae, Johannes Trithemius developed his so-called Ave-Maria-Cipher that can hide information in a Latin praise of God: "Auctor Sapientissimus Conseruans Angelica Deferat Nobis Charitas Potentissimi Creatoris", for example, contains the concealed word VICIPEDIA. 
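Digital steganography of the kind described above can be illustrated with least-significant-bit (LSB) embedding, one common and simple technique. The sketch below hides message bytes in the low bits of cover bytes; the function names are ours, and a real system would embed in media samples (pixels, audio) rather than arbitrary bytes, precisely because media files are large and tolerate tiny changes.

```python
def embed(cover, message):
    """Hide message bytes in the least significant bits of cover bytes."""
    bits = [(byte >> (7 - i)) & 1 for byte in message for i in range(8)]
    assert len(bits) <= len(cover), "cover too small for message"
    out = bytearray(cover)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract(stego, n_bytes):
    """Recover n_bytes of hidden message from the low bits of the stego bytes."""
    bits = [b & 1 for b in stego[:n_bytes * 8]]
    return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[j*8:(j+1)*8]))
                 for j in range(n_bytes))

cover = bytes(range(200))
stego = embed(cover, b"hi")
```

Each cover byte changes by at most 1, so the carrier looks essentially unchanged, which is the point: the message does not present itself as an object of scrutiny.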
Steganography has been widely used, including in recent historical times. Known examples include: hidden messages within a wax tablet, as in ancient Greece, where people wrote messages on the wood and then covered it with wax bearing an innocent covering message; hidden messages on a messenger's body, also used in ancient Greece, as when Herodotus tells the story of a message tattooed on the shaved head of a slave of Histiaeus, hidden by the hair that afterwards grew over it (the message allegedly carried a warning to Greece about Persian invasion plans); and messages written on envelopes in the area covered by postage stamps. In the early days of the printing press, it was common to mix different typefaces on a printed page due to the printer not having enough copies of some letters in one typeface. Because of this, a message could be hidden using two different typefaces, such as normal and italic. During and after World War II, espionage agents used photographically produced microdots to send information back and forth. World War II microdots were embedded in the paper and covered with an adhesive, such as collodion.
31.
Audio timescale-pitch modification
–
Time stretching is the process of changing the speed or duration of an audio signal without affecting its pitch. Pitch scaling or pitch shifting is the opposite: the process of changing the pitch without affecting the speed. Similar methods can change speed, pitch, or both at once, in a time-varying way. These processes are used, for instance, to match the pitches and tempos of two pre-recorded clips for mixing. They are also used to create effects such as increasing the range of an instrument. The simplest way to change the duration or pitch of an audio clip is to resample it. This is a mathematical operation that effectively rebuilds a continuous waveform from its samples and then samples that waveform again at a different rate. When the new samples are played at the original sampling frequency, the audio clip sounds faster or slower. Unfortunately, the frequencies in the sample are always scaled at the same rate as the speed. In other words, slowing down the recording lowers the pitch, and speeding it up raises the pitch. This is analogous to speeding up or slowing down an analogue recording, like a phonograph record or tape, creating the Chipmunk effect. Time-scale modification (TSM) procedures aim to preserve an audio signal's pitch while stretching or compressing its duration. Given an original discrete-time audio signal, this strategy's first step is to split the signal into short analysis frames of fixed length. The analysis frames are spaced by a fixed number of samples, called the analysis hopsize H_a ∈ N. To achieve the actual time-scale modification, the frames are then temporally relocated to have a synthesis hopsize H_s ∈ N. This frame relocation results in a modification of the duration by a stretching factor of α = H_s / H_a. However, simply superimposing the unmodified analysis frames typically results in undesired artifacts such as phase discontinuities or amplitude fluctuations. To prevent such artifacts, the analysis frames are adapted to form synthesis frames prior to the reconstruction of the time-scale modified output signal. The strategy of how to derive the synthesis frames from the analysis frames is a key difference among different TSM procedures. 
One way of stretching the length of a signal without affecting the pitch is to build a phase vocoder, after Flanagan and Golden. Recent improvements allow better quality results at all compression/expansion ratios, but a residual smearing effect still remains. Another method for time stretching relies on a spectral model of the signal. In this method, peaks are identified in frames using the STFT of the signal, and sinusoidal tracks are formed by linking peaks in adjacent frames. The tracks are then re-synthesized at a new time scale. This method can yield good results on both polyphonic and percussive material, especially when the signal is separated into sub-bands.
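A minimal overlap-add (OLA) sketch of the frame-based strategy described above, assuming Hann-windowed frames and a normalization by the summed window weights. It deliberately omits the phase adaptation that full TSM procedures (such as the phase vocoder) perform, so on tonal material it exhibits exactly the phase-discontinuity artifacts the text mentions; the frame length and hopsizes are arbitrary illustrative values.

```python
import math

def ola_stretch(x, alpha, frame_len=256, syn_hop=128):
    """Naive OLA time-scale modification: analysis hop Ha ≈ Hs/alpha, frames
    relocated to the synthesis hop Hs, output ≈ alpha times the input length."""
    ana_hop = int(round(syn_hop / alpha))
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / frame_len)
           for n in range(frame_len)]
    n_frames = (len(x) - frame_len) // ana_hop + 1
    out = [0.0] * (syn_hop * (n_frames - 1) + frame_len)
    norm = [0.0] * len(out)
    for f in range(n_frames):
        a, s = f * ana_hop, f * syn_hop   # analysis and synthesis positions
        for n in range(frame_len):
            out[s + n] += x[a + n] * win[n]
            norm[s + n] += win[n]
    # divide by the accumulated window weight to flatten amplitude fluctuations
    return [o / m if m > 1e-9 else 0.0 for o, m in zip(out, norm)]

# stretch one second of a 440 Hz tone (8 kHz sample rate) to ~1.5x duration
x = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
y = ola_stretch(x, 1.5)
```

Replacing the plain copy of each analysis frame with a phase-adapted synthesis frame is precisely where the different TSM procedures diverge.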
32.
Scattering parameters
–
Scattering parameters or S-parameters describe the electrical behavior of linear electrical networks when undergoing various steady-state stimuli by electrical signals. The parameters are useful for several branches of electrical engineering, including electronics and communication systems design. The S-parameters are members of a family of similar parameters, other examples being Y-parameters, Z-parameters, and H-parameters. They differ from these in the sense that S-parameters do not use open-circuit or short-circuit conditions to characterize a linear network; instead, matched loads are used. These terminations are much easier to use at high signal frequencies than open-circuit and short-circuit terminations. Moreover, the quantities are measured in terms of power. Many electrical properties of networks of components may be expressed using S-parameters, such as gain, return loss, voltage standing wave ratio (VSWR), reflection coefficient and amplifier stability. In this context, reflection is equivalent to the wave meeting an impedance differing from the line's characteristic impedance. S-parameters change with the measurement frequency, so frequency must be specified for any S-parameter measurements stated. S-parameters are readily represented in matrix form and obey the rules of matrix algebra. The first published description of S-parameters was in the thesis of Vitold Belevitch in 1945; the name used by Belevitch was "repartition matrix", and he limited consideration to lumped-element networks. The term scattering matrix was used by physicist and engineer Robert Henry Dicke in 1947, who independently developed the idea during wartime work on radar. A network is characterized by a matrix of complex numbers called its S-parameter matrix. For the S-parameter definition, it is understood that a network may contain any components provided that the entire network behaves linearly with incident small signals. An electrical network to be described by S-parameters may have any number of ports; ports are the points at which electrical signals either enter or exit the network. 
Ports are usually pairs of terminals with the requirement that the current into one terminal is equal to the current leaving the other. S-parameters are used at frequencies where the ports are often coaxial or waveguide connections. The S-parameter matrix describing an N-port network will be square, of dimension N. At the test frequency, each element or S-parameter is represented by a unitless complex number that represents magnitude and angle, i.e. amplitude and phase. The complex number may be expressed either in rectangular form or, more commonly, in polar form. The S-parameter magnitude may be expressed in linear form or logarithmic form; when expressed in logarithmic form, magnitude has the unit of decibels. The S-parameter angle is most frequently expressed in degrees but occasionally in radians. Any S-parameter may be displayed graphically on a polar diagram by a dot for one frequency, or a locus for a range of frequencies.
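Two routine S-parameter calculations mentioned above, the logarithmic (decibel) magnitude and the VSWR derived from the reflection coefficient S11, can be sketched as follows; the example value of S11 is arbitrary.

```python
import cmath, math

def s_to_db(s):
    """Magnitude of a complex S-parameter in logarithmic form (decibels)."""
    return 20 * math.log10(abs(s))

def vswr(s11):
    """Voltage standing wave ratio from the reflection coefficient S11."""
    rho = abs(s11)
    return (1 + rho) / (1 - rho)

# an S11 of magnitude 0.2 at an angle of -45 degrees (polar form)
s11 = cmath.rect(0.2, math.radians(-45))
```

For this S11, the return loss magnitude is about 14 dB and the VSWR is 1.5, the kind of figures routinely read off a network analyzer.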
33.
United States Geological Survey
–
The United States Geological Survey (USGS) is a scientific agency of the United States government. The scientists of the USGS study the landscape of the United States, its natural resources, and the natural hazards that threaten it. The organization has four major science disciplines, concerning biology, geography, geology, and hydrology. The USGS is a fact-finding research organization with no regulatory responsibility. The USGS is a bureau of the United States Department of the Interior; it employs approximately 8,670 people and is headquartered in Reston, Virginia. The USGS also has major offices near Lakewood, Colorado, at the Denver Federal Center. The current motto of the USGS, in use since August 1997, is "science for a changing world". The agency's previous slogan, adopted on the occasion of its hundredth anniversary, was "Earth Science in the Public Service". Prompted by a report from the National Academy of Sciences, the USGS was created by a last-minute amendment to an act of Congress. It was charged with the classification of the public lands, and examination of the geological structure, mineral resources, and products of the national domain. This task was driven by the need to inventory the vast lands added to the United States by the Louisiana Purchase in 1803. The legislation also provided that the Hayden, Powell, and Wheeler surveys be discontinued as of June 30, 1879. Clarence King, the first director of the USGS, assembled the new organization from disparate regional survey agencies. After a short tenure, King was succeeded in the director's chair by John Wesley Powell. Administratively, the USGS is divided into a Headquarters unit and six Regional Units. Other specific programs include the Earthquake Hazards Program, which monitors earthquake activity worldwide. The National Earthquake Information Center in Golden, Colorado, on the campus of the Colorado School of Mines, detects the location and magnitude of earthquakes worldwide. The USGS also runs or supports several regional monitoring networks in the United States under the umbrella of the Advanced National Seismic System. 
The USGS informs authorities, emergency responders, and the media about earthquakes. It also maintains long-term archives of earthquake data for scientific and engineering research, and conducts and supports research on long-term seismic hazards; USGS has released the UCERF California earthquake forecast. The USGS National Geomagnetism Program monitors the Earth's magnetic field at magnetic observatories and distributes magnetometer data in real time. The USGS operates the streamgaging network for the United States, with over 7,400 streamgages; real-time streamflow data are available online. Since 1962, the Astrogeology Research Program has been involved in global, lunar, and planetary exploration and mapping. USGS operates a number of water-related programs, notably the National Streamflow Information Program. USGS water data is available from their National Water Information System database.
34.
Phase (waves)
–
Phase is the position of a point in time on a waveform cycle. A complete cycle is defined as the interval required for the waveform to return to its initial value. The graphic to the right shows how one cycle constitutes 360° of phase. The graphic also shows how phase is sometimes expressed in radians, where one radian of phase equals approximately 57.3°. Phase can also be an expression of relative displacement between two corresponding features of two waveforms having the same frequency. In sinusoidal functions or in waves, "phase" has two different, but closely related, meanings. One is the initial angle of a sinusoidal function at its origin and is sometimes called phase offset or phase difference. Another usage is the fraction of the wave cycle that has elapsed relative to the origin. Phase shift is any change that occurs in the phase of one quantity. The symbol φ is sometimes referred to as a phase shift or phase offset because it represents a shift from zero phase. For infinitely long sinusoids, a change in φ is the same as a shift in time. If x(t) = A·cos(2πft + φ) is delayed by 1/4 of its cycle, it becomes

x(t − T/4) = A·cos(2πf(t − T/4) + φ) = A·cos(2πft + φ − π/2),

whose phase is now φ − π/2. It has been shifted by π/2 radians. Phase difference is the difference, expressed in degrees or time, between two waves having the same frequency and referenced to the same point in time. Two oscillators that have the same frequency and no phase difference are said to be in phase. Two oscillators that have the same frequency and different phases have a phase difference. The amount by which such oscillators are out of phase with each other can be expressed in degrees from 0° to 360°. If the phase difference is 180 degrees, then the two oscillators are said to be in antiphase. If two interacting waves meet at a point where they are in antiphase, then destructive interference will occur. 
It is common for waves of electromagnetic, acoustic or other energy to become superposed in their transmission medium. When that happens, the phase difference determines whether they reinforce or weaken each other. Complete cancellation is possible for waves with equal amplitudes. Time is sometimes used (instead of angle) to express position within the cycle of an oscillation. A phase difference is analogous to two athletes running around a track at the same speed and direction but starting at different positions on the track. They pass a point at different instants in time, but the time difference between them is a constant, the same for every pass, since they are at the same speed and in the same direction. If they were at different speeds, the time difference is undefined.
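The phase difference between two same-frequency oscillators can be estimated numerically from the fundamental DFT bin of each sampled waveform; the sketch below does this in plain Python. The helper names and the wrap-to-(−π, π] convention are choices made for this example.

```python
import math, cmath

def phase_difference(x, y):
    """Phase of y relative to x for two sampled same-frequency sinusoids,
    taken from the fundamental DFT bin of each sequence."""
    n = len(x)
    def fundamental(sig):
        return sum(v * cmath.exp(-2j * math.pi * t / n)
                   for t, v in enumerate(sig))
    d = cmath.phase(fundamental(y)) - cmath.phase(fundamental(x))
    return (d + math.pi) % (2 * math.pi) - math.pi  # wrap into (-pi, pi]

n = 64
x = [math.cos(2 * math.pi * t / n) for t in range(n)]
# the same oscillator delayed by a quarter cycle (phase shifted by -pi/2)
y = [math.cos(2 * math.pi * t / n - math.pi / 2) for t in range(n)]
```

The quarter-cycle delay comes out as a phase difference of −π/2 radians, matching the worked x(t − T/4) example above.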
35.
Haskins Laboratories
–
Haskins Laboratories is an independent 501(c)(3) non-profit corporation, founded in 1935 and located in New Haven, Connecticut since 1970. It is a multidisciplinary and international community of researchers which conducts research on spoken and written language. A guiding perspective of their research is to view speech and language as biological processes. Haskins Laboratories is equipped, in-house, with a suite of tools and capabilities to advance its mission of research into language: Magnetic Resonance Imaging (Haskins has access to MRI scanners through agreements with the University of Connecticut, and on-site HL has a GNU/Linux computer cluster dedicated to analysis of MRI data), Near Infrared Spectroscopy (HL has a TechEn CW6 8x8 system), and ultrasound sonogram equipment. Scores of researchers have contributed to scientific breakthroughs at Haskins Laboratories since its founding. All of them are indebted to the work and leadership of Caryl Parker Haskins, Franklin S. Cooper, Alvin Liberman, and Seymour Hutner. This history focuses on the program of the main division of Haskins Laboratories that, since the 1940s, has been best known for its work in the areas of speech and language. Caryl Haskins and Franklin S. Cooper established Haskins Laboratories in 1935; it was originally affiliated with Harvard University, MIT, and Union College in Schenectady, NY. Caryl Haskins conducted research in microbiology, radiation physics, and other fields in Cambridge, MA. In 1939 the Laboratories moved its center to New York City, where Seymour Hutner joined the staff to set up a program in microbiology and genetics. The descendant of this program is now part of Pace University in New York. The U.S. Office of Scientific Research and Development, under Vannevar Bush, asked Haskins Laboratories to evaluate and develop technologies for assisting blinded World War II veterans. 
Experimental psychologist Alvin Liberman joined the Laboratories to assist in developing a sound alphabet to represent the letters in a text for use in a reading machine for the blind. Luigi Provasoli joined the Laboratories to set up a program in marine biology; the program in marine biology moved to Yale University in 1970. Franklin S. Cooper invented the Pattern Playback, a machine that converts pictures of the acoustic patterns of speech back into sound. With this device, Alvin Liberman, Cooper, and Pierre Delattre discovered the acoustic cues for the perception of phonetic segments. Liberman, aided by Frances Ingemann and others, organized the results of the work on speech cues into a groundbreaking set of rules for speech synthesis by the Pattern Playback. Leigh Lisker and Arthur Abramson looked for simplification at the level of articulatory action in the voicing of certain contrasting consonants. They showed that many properties of voicing contrasts arise from variations in voice onset time, the relative phasing of the onset of vocal cord vibration.
36.
Dual (mathematics)
–
Such involutions sometimes have fixed points, so that the dual of A is A itself. For example, Desargues' theorem is self-dual in this sense under the standard duality in projective geometry. Many mathematical dualities between objects of two types correspond to pairings: bilinear functions from an object of one type and another object of the second type to some family of scalars. From a category theory viewpoint, duality can also be seen as a functor: this functor assigns to each space its dual space, and the pullback construction assigns to each arrow f: V → W its dual f∗: W∗ → V∗. In the words of Michael Atiyah, "Duality in mathematics is not a theorem, but a 'principle'." The following list of examples shows the common features of many dualities, but also indicates that the precise meaning of duality may vary from case to case. A simple, maybe the most simple, duality arises from considering subsets of a fixed set S. To any subset A ⊆ S, the complement Ac consists of all those elements in S which are not contained in A. It is again a subset of S. Taking the complement has the following properties. Applying it twice gives back the original set, i.e. (Ac)c = A; this is referred to by saying that the operation of taking the complement is an involution. An inclusion of sets A ⊆ B is turned into an inclusion in the opposite direction, Bc ⊆ Ac. Given two subsets A and B of S, A is contained in Bc if and only if B is contained in Ac. This duality appears in topology as a duality between open and closed subsets of some fixed topological space X: a subset U of X is closed if and only if its complement in X is open. Because of this, many theorems about closed sets are dual to theorems about open sets. For example, any union of open sets is open, so dually, any intersection of closed sets is closed. The interior of a set is the largest open set contained in it, and the closure of the set is the smallest closed set that contains it. Because of the duality, the complement of the interior of any set U is equal to the closure of the complement of U. 
Given a set C of points in the plane R2, the dual cone is the set C∗ of points x satisfying ⟨x, c⟩ ≥ 0 for all points c in C. Unlike for the complement of sets mentioned above, it is not in general true that applying the dual cone construction twice gives back the original set C. Instead, C∗∗ is the smallest cone containing C, which may be bigger than C. Therefore this duality is weaker than the one above, in that applying the operation twice gives back a possibly bigger set. The other two properties carry over without change: it is still true that an inclusion C ⊆ D is turned into an inclusion in the opposite direction, D∗ ⊆ C∗; and given two subsets C and D of the plane, C is contained in D∗ if and only if D is contained in C∗. A very important example of a duality arises in linear algebra by associating to any vector space V its dual vector space V∗. Its elements are the k-linear maps φ: V → k. The three properties of the dual cone carry over to this type of duality by replacing subsets of R2 by vector spaces and inclusions of such subsets by linear maps. That is, applying the operation of taking the dual vector space twice gives another vector space V∗∗, and there is always a natural map V → V∗∗.
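The weaker, cone-enlarging behavior of this duality can be seen numerically. The sketch below (with an illustrative two-point set C and a sampled approximation of C∗ on the unit circle) checks that every point of C lies in C∗∗, and that C∗∗ also contains a point outside C:

```python
import numpy as np

# Dual cone in R^2: C* = { x : <x, c> >= 0 for every c in C }.
def in_dual(x, points):
    """Approximate membership test: x is in the dual cone of the given points."""
    return all(np.dot(x, p) >= -1e-9 for p in points)

# An illustrative generating set (not itself a cone).
C = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]

# Sample directions on the unit circle and keep those lying in C*.
angles = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
C_star = [y for a in angles
          if in_dual(y := np.array([np.cos(a), np.sin(a)]), C)]

# Applying the construction twice never loses points: every c in C lies in C**.
assert all(in_dual(c, C_star) for c in C)

# But C** can be strictly bigger than C: (2, 0.5) lies in the cone
# spanned by C, hence in C**, although it is not a point of C.
assert in_dual(np.array([2.0, 0.5]), C_star)
```

Testing against a dense sample of C∗ only approximates membership in C∗∗, but it is enough to exhibit the "possibly bigger set" behavior described above.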
37.
Instantaneous frequency
–
Instantaneous phase and instantaneous frequency are important concepts in signal processing that occur in the context of the representation and analysis of time-varying functions. The instantaneous phase of a complex-valued function s(t) is the real-valued function ϕ(t) = arg(s(t)). For a real-valued function s(t), it is determined from the function's analytic representation sa(t): ϕ(t) = arg(sa(t)). When ϕ is constrained to its principal value, either the interval (−π, π] or [0, 2π), it is called wrapped phase. Otherwise it is called unwrapped phase, which is a continuous function of argument t. Unless otherwise indicated, the continuous form should be inferred.

For s(t) = A cos(ωt + θ), where ω > 0: sa(t) = A e^{j(ωt + θ)} and ϕ(t) = ωt + θ. In this simple example, the constant θ is also commonly referred to as phase or phase offset; ϕ(t) is a function of time, but θ is not. In the next example, we also see that the phase offset of a real-valued sinusoid is ambiguous unless a reference is specified. For s(t) = A sin(ωt) = A cos(ωt − π/2), where ω > 0: sa(t) = A e^{j(ωt − π/2)} and ϕ(t) = ωt − π/2. In both examples the local maxima of s(t) correspond to ϕ(t) = 2πN for integer values of N; this has applications in the field of computer vision.

Instantaneous angular frequency is defined as ω(t) = dϕ/dt. If ϕ is wrapped, discontinuities in ϕ(t) will result in Dirac delta impulses in ω(t). This instantaneous frequency can be derived directly from the real and imaginary parts of sa(t), instead of the complex arg, without concern of phase unwrapping: ϕ(t) = arg(sa(t)) = atan2(Im(sa(t)), Re(sa(t))) + 2m1π = arctan(Im(sa(t))/Re(sa(t))) + m2π, for integers m1 and m2. Discontinuities can then be removed by adding 2π whenever Δϕ ≤ −π, and subtracting 2π whenever Δϕ > π. That allows ϕ to accumulate without limit and produces an unwrapped instantaneous phase. An equivalent formulation that replaces the modulo-2π operation with a complex multiplication is ϕ[n] = ϕ[n−1] + arg(sa[n] sa∗[n−1]), where the asterisk denotes complex conjugate. The discrete-time instantaneous frequency is simply the advancement of phase for that sample: ω[n] = arg(sa[n] sa∗[n−1]). A vector-average phase can be obtained as the arg of a sum of the complex numbers without concern about wrap-around.
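The unwrapping procedure described above is what standard numerical tools implement. A rough Python sketch, using SciPy's Hilbert transform for the analytic representation (the test tone, sample rate, and interior margin are illustrative choices):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                      # sample rate in Hz (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
f0 = 50.0                        # frequency of the test tone, in Hz
s = np.cos(2 * np.pi * f0 * t)   # real-valued signal s(t)

s_a = hilbert(s)                 # analytic representation s_a(t)
wrapped = np.angle(s_a)          # wrapped phase in (-pi, pi]
phi = np.unwrap(wrapped)         # unwrapped instantaneous phase

# Discrete-time instantaneous frequency: phase advance per sample, in Hz.
inst_freq = np.diff(phi) * fs / (2 * np.pi)

# Equivalent wrap-free form: omega[n] = arg(s_a[n] * conj(s_a[n-1])).
omega = np.angle(s_a[1:] * np.conj(s_a[:-1]))
assert np.allclose(omega[100:-100] * fs / (2 * np.pi),
                   inst_freq[100:-100], atol=1e-6)

# Away from the edge effects of the finite-length Hilbert transform,
# the estimate recovers the tone's frequency.
assert abs(np.mean(inst_freq[100:-100]) - f0) < 0.5
```

The complex-conjugate product form avoids explicit unwrapping entirely, since `np.angle` of the product already yields the per-sample phase advance within (−π, π].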
38.
Heisenberg uncertainty principle
–
The formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard later that year and by Hermann Weyl in 1928. Heisenberg offered such an observer effect at the quantum level as a physical explanation of quantum uncertainty. Thus, the uncertainty principle actually states a fundamental property of quantum systems. Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their research program. These include, for example, tests of number–phase uncertainty relations in superconducting or quantum-optics systems. Applications dependent on the uncertainty principle for their operation include extremely low-noise technology such as that required in gravitational-wave interferometers. The uncertainty principle is not readily apparent on the scales of everyday experience, so it is helpful to demonstrate how it applies to more easily understood physical situations. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave-mechanics picture of the uncertainty principle is more visually intuitive: a nonzero function and its Fourier transform cannot both be sharply localized. In matrix mechanics, the mathematical formulation of quantum mechanics, any pair of non-commuting self-adjoint operators representing observables is subject to similar uncertainty limits. An eigenstate of an observable represents the state of the wavefunction for a certain measurement value; for example, if a measurement of an observable A is performed, then the system is in a particular eigenstate Ψ of that observable. According to the de Broglie hypothesis, every object in the universe is a wave, and the position of the particle is described by a wave function Ψ.
The time-independent wave function of a plane wave of wavenumber k0 or momentum p0 is ψ(x) ∝ e^{i k0 x} = e^{i p0 x / ℏ}. In the case of the plane wave, |ψ|² is a uniform distribution; in other words, the position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet. The figures to the right show how, with the addition of many plane waves, the wave packet can become more localized. In mathematical terms, we say that ϕ(p) is the Fourier transform of ψ(x) and that x and p are conjugate variables. Adding together all of these plane waves comes at a cost, namely the momentum has become less precise. One way to quantify the precision of the position and momentum is the standard deviation σ. Since |ψ|² is a probability density function for position, we calculate its standard deviation. The precision of the position is improved, i.e. reduced σx, by using many plane waves, thereby weakening the precision of the momentum, i.e. increased σp. Another way of stating this is that σx and σp have an inverse relationship or are at least bounded from below.
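The tradeoff between σx and σp can be illustrated numerically. The sketch below builds a Gaussian wave packet (an assumed test state, with illustrative grid parameters and units where ℏ = 1), obtains the momentum distribution via the Fourier transform, and checks the Kennard bound σx σp ≥ ℏ/2, which a Gaussian saturates:

```python
import numpy as np

hbar = 1.0
N = 4096
x = np.linspace(-40.0, 40.0, N)
dx = x[1] - x[0]

sigma = 2.0                                   # width of the test packet
psi = np.exp(-x**2 / (4 * sigma**2))          # Gaussian wave function
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize |psi|^2 to 1

# Standard deviation of position from the probability density |psi|^2
# (the mean is 0 by symmetry).
prob_x = np.abs(psi)**2
sigma_x = np.sqrt(np.sum(x**2 * prob_x) * dx)

# Momentum-space density via the Fourier transform; p = hbar * k.
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = 2 * np.pi / (N * dx)
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= np.sum(prob_p) * dk                 # normalized density in k
sigma_p = hbar * np.sqrt(np.sum(k**2 * prob_p) * dk)

# Kennard bound: sigma_x * sigma_p >= hbar / 2, with equality for a Gaussian.
assert sigma_x * sigma_p >= hbar / 2 - 1e-6
assert abs(sigma_x * sigma_p - hbar / 2) < 1e-3
```

Narrowing the packet (smaller `sigma`) shrinks σx while the computed σp grows in proportion, which is the inverse relationship stated above.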