1.
Bandwidth (signal processing)
–
Bandwidth is the difference between the upper and lower frequencies in a continuous set of frequencies. It is typically measured in hertz and may refer either to passband bandwidth or to baseband bandwidth. Passband bandwidth is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter or a communication channel. In the case of a low-pass filter or baseband signal, the bandwidth is equal to its upper cutoff frequency. A key characteristic of bandwidth is that any band of a given width can carry the same amount of information, regardless of where that band is located in the frequency spectrum. For example, a 3 kHz band can carry a telephone conversation whether that band is at baseband or modulated to some higher frequency. Bandwidth is a key concept in many telecommunications applications. In radio communications, for example, bandwidth is the frequency range occupied by a modulated carrier signal, and an FM radio receiver's tuner spans a limited range of frequencies. A government agency may apportion the regionally available bandwidth to broadcast license holders so that their signals do not mutually interfere; in this arrangement, each transmitter owns a slice of bandwidth. Different applications use different precise definitions, which also differ between signals and systems. One definition of bandwidth, for a system, could be the range of frequencies over which the system produces a specified level of performance. A less strict and more practically useful definition refers to the frequencies beyond which the frequency response is small. Here "small" could mean less than 3 dB below the maximum value, or more rarely 10 dB below, or it could mean below a certain absolute value. As with any definition of the width of a function, many definitions are suitable for different purposes. In some contexts, the signal bandwidth in hertz refers to the frequency range in which the signal's spectral density is nonzero or above a small threshold value. 
That definition is used in calculations of the lowest sampling rate that will satisfy the sampling theorem. The threshold value is often defined relative to the maximum value, and is most commonly the 3 dB point, that is, the point where the spectral density is half its maximum value. The word bandwidth applies to signals as described above, but it can also apply to systems. To say that a system has a certain bandwidth means that the system can process signals of that bandwidth, or that the system reduces the bandwidth of a white noise input to that bandwidth. If the maximum gain is 0 dB, the 3 dB bandwidth is the range of frequencies where the gain is more than −3 dB. This is also the range of frequencies where the amplitude gain is above 70.7% of its maximum value.
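The −3 dB convention described above can be made concrete with a first-order filter. The sketch below uses illustrative component values (not taken from the text) to show that at the cutoff frequency of an RC low-pass filter the gain falls to 1/√2 ≈ 70.7% of its maximum, i.e. −3 dB:

```python
import math

def rc_lowpass_gain(f, r, c):
    """Magnitude response |H(f)| of a first-order RC low-pass filter."""
    fc = 1.0 / (2.0 * math.pi * r * c)  # cutoff frequency in Hz
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

# Hypothetical component values: 1 kOhm and 100 nF give fc ~ 1.59 kHz.
r, c = 1e3, 100e-9
fc = 1.0 / (2.0 * math.pi * r * c)

# At the cutoff frequency the gain is 1/sqrt(2) ~ 0.707, i.e. about -3 dB:
gain_db = 20.0 * math.log10(rc_lowpass_gain(fc, r, c))
print(f"cutoff ~ {fc:.0f} Hz, gain there ~ {gain_db:.2f} dB")
```

The −3 dB bandwidth of this filter is therefore simply its cutoff frequency, matching the statement that a baseband system's bandwidth equals its upper cutoff.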
2.
Physics
–
Physics is the natural science that involves the study of matter and its motion and behavior through space and time, along with related concepts such as energy and force. One of the most fundamental scientific disciplines, the main goal of physics is to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Physics intersects with many interdisciplinary areas of research, such as biophysics and quantum chemistry, and the boundaries of physics are not rigidly defined. New ideas in physics often explain the fundamental mechanisms of other sciences while opening new avenues of research in areas such as mathematics. Physics also makes significant contributions through advances in new technologies that arise from theoretical breakthroughs; the United Nations named 2005 the World Year of Physics. Astronomy is the oldest of the natural sciences. The stars and planets were often a target of worship, believed to represent the gods. While the explanations for these phenomena were often unscientific and lacking in evidence, the observations themselves were careful; according to Asger Aaboe, the origins of Western astronomy can be found in Mesopotamia, and all Western efforts in the exact sciences are descended from late Babylonian astronomy. The most notable innovations were in the field of optics and vision, which came from the works of many scientists like Ibn Sahl, Al-Kindi, Ibn al-Haytham, Al-Farisi and Avicenna. The most notable work was The Book of Optics, written by Ibn al-Haytham, in which he was not only the first to disprove the ancient Greek idea about vision, but also came up with a new theory. In the book, he was also the first to study the phenomenon of the pinhole camera. Many later European scholars and fellow polymaths, from Robert Grosseteste and Leonardo da Vinci to René Descartes, Johannes Kepler and Isaac Newton, were in his debt. 
Indeed, the influence of Ibn al-Haytham's Optics ranks alongside that of Newton's work of the same title; the translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build devices like those Ibn al-Haytham had built, and from these came such important inventions as eyeglasses, magnifying glasses and telescopes. Physics became a separate science when early modern Europeans used experimental and quantitative methods to discover what are now considered to be the laws of physics. Newton also developed calculus, the mathematical study of change, which provided new mathematical methods for solving physical problems. The discovery of new laws in thermodynamics, chemistry and electromagnetics resulted from greater research efforts during the Industrial Revolution as energy needs increased. However, inaccuracies in classical mechanics for very small objects and very high velocities led to the development of modern physics in the 20th century. Modern physics began in the early 20th century with the work of Max Planck in quantum theory and Albert Einstein's theory of relativity; both of these theories came about due to inaccuracies in classical mechanics in certain situations. Quantum mechanics would come to be pioneered by Werner Heisenberg, Erwin Schrödinger and others; from this early work, and work in related fields, the Standard Model of particle physics was derived. Areas of mathematics in general are important to this field, such as the study of probabilities. In many ways, physics stems from ancient Greek philosophy
3.
Engineering
–
The term engineering is derived from the Latin ingenium, meaning "cleverness", and ingeniare, meaning "to contrive, devise". Engineering has existed since ancient times as humans devised fundamental inventions such as the wedge, lever and wheel; each of these inventions is essentially consistent with the modern definition of engineering. The term engineering is derived from the word engineer, which itself dates back to 1390, when an engineer originally referred to a constructor of military engines. In this context, now obsolete, an engine referred to a military machine. Notable examples of the obsolete usage which have survived to the present day are military engineering corps. The word engine itself is of even older origin, ultimately deriving from the Latin ingenium, meaning innate quality, especially mental power, hence a clever invention. The earliest civil engineer known by name is Imhotep; as one of the officials of the Pharaoh Djosèr, he probably designed and supervised the construction of the Pyramid of Djoser at Saqqara in Egypt around 2630–2611 BC. Ancient Greece developed machines in both civilian and military domains; the Antikythera mechanism, the first known mechanical computer, and the mechanical inventions of Archimedes are examples of early mechanical engineering. In the Middle Ages, the trebuchet was developed. The first steam engine was built in 1698 by Thomas Savery, and the development of this device gave rise to the Industrial Revolution in the coming decades. With the rise of engineering as a profession in the 18th century, the term came to be applied more narrowly; similarly, in addition to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering. The inventions of Thomas Newcomen and the Scottish engineer James Watt gave rise to modern mechanical engineering. The development of specialized machines and machine tools during the Industrial Revolution led to the rapid growth of mechanical engineering both in its birthplace Britain and abroad. 
John Smeaton was the first self-proclaimed civil engineer and is often regarded as the father of civil engineering. He was an English civil engineer responsible for the design of bridges, canals and harbours, and he was also a capable mechanical engineer and an eminent physicist. Smeaton designed the third Eddystone Lighthouse, where he pioneered the use of hydraulic lime; his lighthouse remained in use until 1877, when it was dismantled and partially rebuilt at Plymouth Hoe, where it is known as Smeaton's Tower. The United States census of 1850 listed the occupation of engineer for the first time, with a count of 2,000; there were fewer than 50 engineering graduates in the U.S. before 1865. In 1870 there were a dozen U.S. mechanical engineering graduates, and in 1890 there were 6,000 engineers in civil, mining, mechanical and electrical specialties. There was no chair of applied mechanism and applied mechanics established at Cambridge until 1875. The theoretical work of James Maxwell and Heinrich Hertz in the late 19th century gave rise to the field of electronics
4.
Underdamped
–
If a frictional force proportional to the velocity is also present, the harmonic oscillator is described as a damped oscillator. Depending on the friction coefficient, the system can either oscillate with a frequency lower than in the undamped case (underdamped), or decay to the equilibrium position without oscillating (overdamped). The boundary solution between an underdamped oscillator and an overdamped oscillator occurs at a particular value of the friction coefficient and is called critically damped. If an external time-dependent force is present, the oscillator is described as a driven oscillator. Mechanical examples include pendulums, masses connected to springs, and acoustical systems; other analogous systems include electrical harmonic oscillators such as RLC circuits. The harmonic oscillator model is important in physics, because any mass subject to a restoring force in stable equilibrium acts as a harmonic oscillator for small vibrations. Harmonic oscillators occur widely in nature and are exploited in many man-made devices, such as clocks. They are the source of virtually all sinusoidal vibrations and waves. A simple harmonic oscillator is an oscillator that is neither driven nor damped. It consists of a mass m, which experiences a single force, F, which pulls the mass in the direction of the point x = 0 and depends only on the mass's position x. The balance of forces for the system is F = ma = m d²x/dt² = mẍ = −kx. Solving this differential equation, we find that the motion is described by the function x(t) = A cos(ωt + φ). The motion is periodic, repeating itself in a sinusoidal fashion with constant amplitude A. The position at a time t also depends on the phase φ. The period and frequency are determined by the mass m and the force constant k. The velocity and acceleration of a simple harmonic oscillator oscillate with the same frequency as the position, but with shifted phases. The velocity is maximum for zero displacement, while the acceleration is in the direction opposite to the displacement. 
The potential energy stored in a harmonic oscillator at position x is U = ½kx². In real oscillators, friction, or damping, slows the motion of the system; due to the frictional force, the velocity decreases in proportion to the acting frictional force. While simple harmonic motion oscillates with only the restoring force acting on the system, a damped oscillator experiences friction in addition to the restoring force
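The underdamped behavior described above can be sketched numerically. The closed-form solution below assumes illustrative values for m, k and the friction coefficient b (none taken from the text) and a mass released from rest at x0; it shows the damped oscillation frequency ω_d = sqrt(ω0² − γ²) is lower than the undamped ω0:

```python
import math

def underdamped_position(t, m, k, b, x0):
    """Displacement of an underdamped mass-spring system released from rest
    at x0. Valid only in the underdamped regime, b**2 < 4*m*k."""
    gamma = b / (2.0 * m)             # decay rate
    w0 = math.sqrt(k / m)             # undamped natural frequency
    wd = math.sqrt(w0**2 - gamma**2)  # damped frequency, lower than w0
    # Initial conditions x(0) = x0, v(0) = 0:
    return x0 * math.exp(-gamma * t) * (
        math.cos(wd * t) + (gamma / wd) * math.sin(wd * t)
    )

m, k, b, x0 = 1.0, 4.0, 0.4, 1.0  # illustrative values: gamma = 0.2, w0 = 2.0
print(underdamped_position(0.0, m, k, b, x0))  # starts at x0
```

The exponential factor exp(−γt) is the damping: the envelope of the oscillation shrinks steadily while the sinusoidal factor repeats at ω_d.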
5.
Resonator
–
The oscillations in a resonator can be either electromagnetic or mechanical. Resonators are used to generate waves of specific frequencies or to select specific frequencies from a signal. Musical instruments use acoustic resonators that produce sound waves of specific tones; another example is the quartz crystals used in electronic devices such as radio transmitters and quartz watches to produce oscillations of very precise frequency. A cavity resonator is one in which waves exist in a hollow space inside the device. Acoustic cavity resonators, in which sound is produced by air vibrating in a cavity with one opening, are known as Helmholtz resonators. A physical system can have as many resonant frequencies as it has degrees of freedom; systems with one degree of freedom, such as a mass on a spring, pendulums, balance wheels and LC tuned circuits, have one resonant frequency. Systems with two degrees of freedom, such as coupled pendulums and resonant transformers, can have two resonant frequencies, and a crystal lattice composed of N atoms bound together can have N resonant frequencies. As the number of coupled harmonic oscillators grows, the time it takes to transfer energy from one to the next becomes significant, and the vibrations begin to travel through the coupled harmonic oscillators in waves, from one oscillator to the next. The term resonator is most often used for an object in which vibrations travel as waves, at an approximately constant velocity, bouncing back and forth between the sides of the resonator. Such resonators can have millions of resonant frequencies, although only a few may be used in practical resonators. The oppositely moving waves interfere with each other, and at the resonant frequencies they reinforce each other to create a pattern of standing waves in the resonator. If the distance between the sides is d, the length of a round trip is 2d. 
To cause resonance, the phase of a sinusoidal wave after a round trip must be equal to its initial phase, so the waves self-reinforce. The above analysis assumes the medium inside the resonator is homogeneous, so the waves travel at a constant speed; when it is not, the resonant frequencies are no longer evenly spaced multiples of a fundamental, and they are then called overtones instead of harmonics. There may be several such series of resonant frequencies in a single resonator. An electrical circuit composed of discrete components can act as a resonator when both an inductor and a capacitor are included. Oscillations are limited by the inclusion of resistance, either via a specific resistor component or via the inherent resistance of the other components; such resonant circuits are also called RLC circuits after the circuit symbols for the components. A distributed-parameter resonator has capacitance, inductance and resistance that cannot be isolated into separate lumped capacitors, inductors and resistors. An example of this, much used in filtering, is the helical resonator. A single-layer coil that is used as a secondary or tertiary winding in a Tesla coil or magnifying transmitter is also a distributed resonator
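The round-trip condition above pins down the resonant frequencies of the homogeneous case directly: a round trip of 2d must contain a whole number of wavelengths, giving f_n = n·v/(2d). A minimal sketch (illustrative tube length and sound speed, not from the text):

```python
def standing_wave_frequencies(d, v, n_modes=5):
    """Resonant frequencies of a resonator of length d with reflecting ends.
    A round trip is 2*d, so resonance requires an integer number of
    wavelengths per round trip: f_n = n * v / (2 * d)."""
    return [n * v / (2.0 * d) for n in range(1, n_modes + 1)]

# Illustrative: sound (v ~ 343 m/s) in a 0.5 m tube.
print(standing_wave_frequencies(0.5, 343.0, 3))  # [343.0, 686.0, 1029.0] Hz
```

In this homogeneous case the series is harmonic: every resonance is an integer multiple of the 343 Hz fundamental.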
6.
Damping
–
If a frictional force proportional to the velocity is also present, the harmonic oscillator is described as a damped oscillator. Depending on the friction coefficient, the system can either oscillate with a frequency lower than in the undamped case (underdamped), or decay to the equilibrium position without oscillating (overdamped). The boundary solution between an underdamped oscillator and an overdamped oscillator occurs at a particular value of the friction coefficient and is called critically damped. If an external time-dependent force is present, the oscillator is described as a driven oscillator. Mechanical examples include pendulums, masses connected to springs, and acoustical systems; other analogous systems include electrical harmonic oscillators such as RLC circuits. The harmonic oscillator model is important in physics, because any mass subject to a restoring force in stable equilibrium acts as a harmonic oscillator for small vibrations. Harmonic oscillators occur widely in nature and are exploited in many man-made devices, such as clocks. They are the source of virtually all sinusoidal vibrations and waves. A simple harmonic oscillator is an oscillator that is neither driven nor damped. It consists of a mass m, which experiences a single force, F, which pulls the mass in the direction of the point x = 0 and depends only on the mass's position x. The balance of forces for the system is F = ma = m d²x/dt² = mẍ = −kx. Solving this differential equation, we find that the motion is described by the function x(t) = A cos(ωt + φ). The motion is periodic, repeating itself in a sinusoidal fashion with constant amplitude A. The position at a time t also depends on the phase φ. The period and frequency are determined by the mass m and the force constant k. The velocity and acceleration of a simple harmonic oscillator oscillate with the same frequency as the position, but with shifted phases. The velocity is maximum for zero displacement, while the acceleration is in the direction opposite to the displacement. 
The potential energy stored in a harmonic oscillator at position x is U = ½kx². In real oscillators, friction, or damping, slows the motion of the system; due to the frictional force, the velocity decreases in proportion to the acting frictional force. While simple harmonic motion oscillates with only the restoring force acting on the system, a damped oscillator experiences friction in addition to the restoring force
7.
Resonance
–
In physics, resonance is a phenomenon in which a vibrating system or external force drives another system to oscillate with greater amplitude at a specific preferential frequency. Frequencies at which the response amplitude is a relative maximum are known as the system's resonant frequencies or resonance frequencies. At resonant frequencies, small periodic driving forces have the ability to produce large-amplitude oscillations. Resonance occurs when a system is able to store and easily transfer energy between two or more different storage modes. However, there are some losses from cycle to cycle, called damping. When damping is small, the resonant frequency is approximately equal to the natural frequency of the system, which is the frequency of unforced vibrations. Some systems have multiple, distinct resonant frequencies. Resonant systems can be used to generate vibrations of a specific frequency, or to pick out specific frequencies from a complex vibration containing many frequencies. Resonance occurs widely in nature and is exploited in many man-made devices; it is the mechanism by which virtually all sine waves and vibrations are generated. Many sounds we hear, such as when hard objects of metal or glass are struck, are caused by brief resonant vibrations in the object. Light and other short-wavelength electromagnetic radiation is produced by resonance on an atomic scale, such as electrons in atoms. A familiar example is a playground swing, which acts as a pendulum. Pushing a person in a swing in time with the natural interval of the swing makes the swing go higher and higher, because the energy the swing absorbs is maximized when the pushes match the swing's natural oscillations. Resonance may cause violent swaying motions and even catastrophic failure in improperly constructed structures including bridges, buildings and trains, so avoiding resonance disasters is a major concern in every building, tower and bridge construction project. 
As a countermeasure, shock mounts can be installed to absorb resonant frequencies; the Taipei 101 building relies on a 660-tonne pendulum, a tuned mass damper, to cancel resonance. Furthermore, the structure is designed to resonate at a frequency that does not typically occur. Buildings in seismic zones are often constructed to take into account the oscillating frequencies of expected ground motion. Clocks keep time by mechanical resonance in a balance wheel or pendulum. The cadence of runners has been hypothesized to be energetically favorable due to resonance between the elastic energy stored in the lower limb and the mass of the runner. Acoustic resonance is a branch of mechanical resonance that is concerned with mechanical vibrations across the frequency range of human hearing. Like mechanical resonance, acoustic resonance can result in catastrophic failure of the object at resonance. The classic example of this is breaking a glass with sound at the precise resonant frequency of the glass
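The claim that small periodic forces produce large oscillations near the resonant frequency can be sketched with the standard steady-state amplitude of a damped, driven oscillator. The symbols ω0 (natural frequency), γ (damping rate) and F0/m (drive strength) and their values are illustrative assumptions, not taken from the text:

```python
import math

def driven_amplitude(w, w0, gamma, f0_over_m=1.0):
    """Steady-state amplitude of a damped oscillator driven at angular
    frequency w: A(w) = (F0/m) / sqrt((w0^2 - w^2)^2 + (2*gamma*w)^2)."""
    return f0_over_m / math.sqrt((w0**2 - w**2)**2 + (2.0 * gamma * w)**2)

w0, gamma = 10.0, 0.5  # lightly damped: gamma << w0

# Driving at the natural frequency yields a far larger response than
# driving well below it, with the same force amplitude:
peak = driven_amplitude(w0, w0, gamma)
off = driven_amplitude(0.1, w0, gamma)
print(peak / off)  # roughly 10x for these values
```

For small γ the peak sits almost exactly at ω0, matching the statement that the resonant frequency approximately equals the natural frequency when damping is small.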
8.
Harmonic oscillator
–
If a frictional force proportional to the velocity is also present, the harmonic oscillator is described as a damped oscillator. Depending on the friction coefficient, the system can either oscillate with a frequency lower than in the undamped case (underdamped), or decay to the equilibrium position without oscillating (overdamped). The boundary solution between an underdamped oscillator and an overdamped oscillator occurs at a particular value of the friction coefficient and is called critically damped. If an external time-dependent force is present, the oscillator is described as a driven oscillator. Mechanical examples include pendulums, masses connected to springs, and acoustical systems; other analogous systems include electrical harmonic oscillators such as RLC circuits. The harmonic oscillator model is important in physics, because any mass subject to a restoring force in stable equilibrium acts as a harmonic oscillator for small vibrations. Harmonic oscillators occur widely in nature and are exploited in many man-made devices, such as clocks. They are the source of virtually all sinusoidal vibrations and waves. A simple harmonic oscillator is an oscillator that is neither driven nor damped. It consists of a mass m, which experiences a single force, F, which pulls the mass in the direction of the point x = 0 and depends only on the mass's position x. The balance of forces for the system is F = ma = m d²x/dt² = mẍ = −kx. Solving this differential equation, we find that the motion is described by the function x(t) = A cos(ωt + φ). The motion is periodic, repeating itself in a sinusoidal fashion with constant amplitude A. The position at a time t also depends on the phase φ. The period and frequency are determined by the mass m and the force constant k. The velocity and acceleration of a simple harmonic oscillator oscillate with the same frequency as the position, but with shifted phases. The velocity is maximum for zero displacement, while the acceleration is in the direction opposite to the displacement. 
The potential energy stored in a harmonic oscillator at position x is U = ½kx². In real oscillators, friction, or damping, slows the motion of the system; due to the frictional force, the velocity decreases in proportion to the acting frictional force. While simple harmonic motion oscillates with only the restoring force acting on the system, a damped oscillator experiences friction in addition to the restoring force
9.
Sine wave
–
A sine wave or sinusoid is a mathematical curve that describes a smooth repetitive oscillation. It is named after the sine function, of which it is the graph. It occurs often in pure and applied mathematics, as well as physics, engineering, signal processing and many other fields. Its most basic form as a function of time t is y(t) = A sin(2πft + φ) = A sin(ωt + φ), where: A = the amplitude; f = the ordinary frequency, the number of oscillations that occur each second of time; ω = 2πf, the angular frequency, the rate of change of the function argument in units of radians per second; and φ = the phase. When φ is non-zero, the entire waveform appears to be shifted in time by the amount φ/ω seconds; a negative value represents a delay, and a positive value represents an advance. The sine wave is important in physics because it retains its shape when added to another sine wave of the same frequency and arbitrary phase. It is the only periodic waveform that has this property, and this property leads to its importance in Fourier analysis and makes it acoustically unique. The wavenumber k is related to the angular frequency by k = ω/v = 2πf/v = 2π/λ, where λ is the wavelength, f is the frequency and v is the linear speed. This equation gives a sine wave for a single dimension; thus the generalized equation given above gives the displacement of the wave at a position x at time t along a single line. This could, for example, be considered the value of a wave along a wire. In two or three spatial dimensions, the same equation describes a travelling plane wave if position x and wavenumber k are interpreted as vectors, and their product as a dot product. More complex waves, such as the height of a water wave in a pond after a stone has been dropped in, require more complex equations. The sinusoidal wave pattern occurs often in nature, including in wind waves and sound waves. A cosine wave is also said to be sinusoidal, because cos(x) = sin(x + π/2), which is a sine wave with a phase-shift of π/2 radians. 
Because of this head start, it is often said that the cosine function leads the sine function, or the sine lags the cosine. The human ear can recognize single sine waves as sounding clear because sine waves are representations of a single frequency with no harmonics. The presence of higher harmonics in addition to the fundamental causes variation in the timbre. On the other hand, if the sound contains aperiodic waves along with sine waves, the sound will be perceived as noisy, as noise is characterized as being aperiodic or having a non-repetitive pattern. In 1822, French mathematician Joseph Fourier discovered that sinusoidal waves can be used as simple building blocks to describe and approximate any periodic waveform
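The shape-preservation property stated above, that two sine waves of the same frequency sum to another sine wave of that frequency, can be verified numerically by phasor addition. The amplitudes, phases and sample time below are arbitrary illustrative values:

```python
import math

def sinusoid(t, amp, freq, phase):
    """y(t) = amp * sin(2*pi*freq*t + phase)."""
    return amp * math.sin(2.0 * math.pi * freq * t + phase)

# Two sine waves of the SAME frequency, different amplitude and phase:
a1, p1 = 1.0, 0.0
a2, p2 = 0.7, math.pi / 3

# Their sum is a sine wave of the same frequency whose amplitude and
# phase come from adding the two as phasors (2-D vectors):
re = a1 * math.cos(p1) + a2 * math.cos(p2)
im = a1 * math.sin(p1) + a2 * math.sin(p2)
a3, p3 = math.hypot(re, im), math.atan2(im, re)

t, f = 0.123, 5.0
lhs = sinusoid(t, a1, f, p1) + sinusoid(t, a2, f, p2)
rhs = sinusoid(t, a3, f, p3)
print(abs(lhs - rhs))  # essentially zero at any t
```

This is exactly why sinusoids are the natural building blocks of Fourier analysis: the frequency is preserved under addition, only amplitude and phase change.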
10.
RLC circuit
–
An RLC circuit is an electrical circuit consisting of a resistor (R), an inductor (L) and a capacitor (C), connected in series or in parallel. The name of the circuit is derived from the letters that are used to denote these constituent components. The circuit forms a harmonic oscillator for current and resonates in a similar way to an LC circuit. Introducing the resistor increases the decay of these oscillations, which is known as damping; the resistor also reduces the peak resonant frequency. Some resistance is unavoidable in real circuits even if a resistor is not specifically included as a component; an ideal, pure LC circuit exists only in the domain of superconductivity. RLC circuits have many applications as oscillator circuits; radio receivers and television sets use them for tuning to select a narrow frequency range from ambient radio waves, and in this role the circuit is often referred to as a tuned circuit. An RLC circuit can be used as a band-pass filter, band-stop filter, low-pass filter or high-pass filter. The tuning application, for instance, is an example of band-pass filtering. The RLC filter is described as a second-order circuit, meaning that any voltage or current in the circuit can be described by a second-order differential equation in circuit analysis. The three circuit elements, R, L and C, can be combined in a number of different topologies. All three elements in series or all three elements in parallel are the simplest in concept and the most straightforward to analyse. There are, however, other arrangements, some of practical importance in real circuits. One issue often encountered is the need to take into account inductor resistance: inductors are typically constructed from coils of wire, the resistance of which is not usually desirable, but it often has a significant effect on the circuit. An important property of this circuit is its ability to resonate at a specific frequency, the resonance frequency. Frequencies are measured in units of hertz. 
In this article, however, angular frequency, ω0, is used, which is mathematically more convenient. This is measured in radians per second, and the two are related by a simple proportion, ω0 = 2πf0. Resonance occurs because energy is stored in two different ways: in an electric field as the capacitor is charged, and in a magnetic field as current flows through the inductor. Energy can be transferred from one to the other within the circuit. A mechanical analogy is a weight suspended on a spring, which will oscillate up and down when released. The mechanical property answering to the resistor in the circuit is friction in the spring-weight system; friction will slowly bring any oscillation to a halt if there is no external force driving it. Likewise, the resistance in an RLC circuit will damp the oscillation. The resonance frequency is defined as the frequency at which the impedance of the circuit is at a minimum
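The resonance and damping relationships above are easy to compute for the series topology. The sketch below uses illustrative component values (not from the text) and the standard series-circuit formulas ω0 = 1/√(LC) and Q = ω0·L/R:

```python
import math

def series_rlc(r, l, c):
    """Resonant frequency, Q factor and -3 dB bandwidth of a series RLC
    circuit (standard formulas for the series topology)."""
    w0 = 1.0 / math.sqrt(l * c)   # resonant angular frequency, rad/s
    f0 = w0 / (2.0 * math.pi)     # resonant frequency, Hz (w0 = 2*pi*f0)
    q = w0 * l / r                # quality factor: larger R -> more damping
    bandwidth = f0 / q            # -3 dB bandwidth, Hz
    return f0, q, bandwidth

# Illustrative values: 10 Ohm, 1 mH, 100 nF.
f0, q, bw = series_rlc(10.0, 1e-3, 100e-9)
print(f"f0 ~ {f0:.0f} Hz, Q ~ {q:.1f}, bandwidth ~ {bw:.0f} Hz")
```

Note how the resistor enters only through Q: increasing R lowers Q and widens the −3 dB bandwidth, which is the damping behavior the text describes.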
11.
Atomic clock
–
The principle of operation of an atomic clock is based not on nuclear physics but on atomic physics: it uses the microwave signal that electrons in atoms emit when they change energy levels. Early atomic clocks were based on masers at room temperature. Currently, the most accurate atomic clocks first cool the atoms to near absolute zero temperature by slowing them with lasers and probing them in atomic fountains in a microwave-filled cavity. An example of this is the NIST-F1 atomic clock, one of the primary time and frequency standards. The accuracy of an atomic clock depends on two factors. The first factor is the temperature of the sample atoms: colder atoms move much more slowly, allowing longer probe times. The second factor is the frequency and intrinsic width of the electronic transition: higher frequencies and narrower lines increase the precision. National standards agencies in many countries maintain a network of atomic clocks which are intercompared, and these clocks collectively define a continuous and stable time scale, International Atomic Time (TAI). For civil time, another time scale is disseminated, Coordinated Universal Time (UTC). UTC is derived from TAI, but approximately synchronised, by using leap seconds, to UT1, which is based on the actual rotation of the Earth with respect to solar time. The idea of using atomic transitions to measure time was suggested by Lord Kelvin in 1879. Magnetic resonance, developed in the 1930s by Isidor Rabi, became the practical method for doing this. In 1945, Rabi first publicly suggested that atomic beam magnetic resonance might be used as the basis of a clock. The first atomic clock was an ammonia maser device built in 1949 at the U.S. National Bureau of Standards. It was less accurate than existing quartz clocks, but served to demonstrate the concept. Calibration of the caesium standard atomic clock was carried out by the use of the astronomical time scale ephemeris time (ET). 
This led to the internationally agreed definition of the SI second being based on atomic time. Equality of the ET second with the SI second has been verified to within 1 part in 10¹⁰; the SI second thus inherits the effect of decisions by the original designers of the ephemeris time scale in determining the length of the ET second. Since the beginning of development in the 1950s, atomic clocks have been based on the transitions in hydrogen-1 and caesium-133. The first commercial atomic clock was the Atomichron, manufactured by the National Company; more than 50 were sold between 1956 and 1960. This bulky and expensive instrument was subsequently replaced by much smaller rack-mountable devices, such as the Hewlett-Packard model 5060 caesium frequency standard. In August 2004, NIST scientists demonstrated a chip-scale atomic clock; according to the researchers, the clock was believed to be one-hundredth the size of any other. It requires no more than 125 mW, making it suitable for battery-driven applications, and this technology became available commercially in 2011. Ion trap experimental optical clocks are more precise than the current caesium standard. As of March 2017, NASA planned to deploy the Deep Space Atomic Clock, a miniaturized, ultra-precise mercury-ion atomic clock, into outer space
12.
Superconducting radio frequency
–
Superconducting radio frequency (SRF) science and technology involves the application of electrical superconductors to radio frequency devices. The ultra-low electrical resistivity of a superconducting material allows an RF resonator to obtain an extremely high quality factor, Q. For example, it is commonplace for a 1.3 GHz niobium SRF resonant cavity at 1.8 kelvin to obtain a quality factor of Q = 5×10¹⁰. Such a very high-Q resonator stores energy with very low loss and narrow bandwidth. These properties can be exploited for a variety of applications, including the construction of particle accelerator structures. The most common application of superconducting RF is in particle accelerators. Accelerators typically use resonant RF cavities formed from or coated with superconducting materials. Electromagnetic fields are excited in the cavity by coupling in an RF source with an antenna. When the RF frequency fed by the antenna is the same as that of a cavity mode, the resonant fields build to high amplitudes. Charged particles passing through apertures in the cavity are then accelerated by the electric fields. The resonant frequency driven in SRF cavities typically ranges from 200 MHz to 3 GHz, depending on the particle species to be accelerated. The most common technology for such SRF cavities is to form thin-walled shell components from high-purity niobium sheets by stamping. These shell components are then welded together to form cavities. Several such finished products are pictured below, and a simplified diagram of the key elements of an SRF cavity setup is shown below. The cavity is immersed in a liquid helium bath. Pumping removes helium vapor boil-off and controls the bath temperature. The helium vessel is often pumped to a pressure below helium's superfluid lambda point to take advantage of the superfluid's thermal properties. 
Because superfluid helium has very high thermal conductivity, it makes an excellent coolant. In addition, superfluid helium boils only at free surfaces, preventing the formation of bubbles on the surface of the cavity, which would cause mechanical perturbations. An antenna is needed in the setup to couple RF power to the cavity fields. The cold portions of the setup need to be extremely well insulated, which is best accomplished by a vacuum vessel surrounding the helium vessel and all ancillary cold components. Because of the full SRF cavity containment system, including the vacuum vessel, entry into superconducting RF technology can incur more complexity, expense, and time than normal-conducting RF cavity strategies. A vexing aspect of SRF is the as-yet elusive ability to produce high-Q cavities in high-volume production. Nevertheless, for many applications the capabilities of SRF cavities provide the solution for a host of demanding performance requirements. Several extensive treatments of SRF physics and technology are available, many of them free of charge. A large variety of RF cavities are used in particle accelerators
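The quality factor quoted above implies an extraordinarily narrow resonance. A minimal sketch (Python) of the standard half-power bandwidth relation Δf = f0/Q, applied to the 1.3 GHz, Q = 5×10^10 cavity figures from the text:

```python
# Half-power (-3 dB) bandwidth of a resonator: delta_f = f0 / Q.
def half_power_bandwidth(f0_hz: float, q: float) -> float:
    """Return the -3 dB bandwidth in Hz for center frequency f0_hz and quality factor q."""
    return f0_hz / q

f0 = 1.3e9   # 1.3 GHz niobium SRF cavity, as quoted in the text
q = 5e10     # quality factor quoted in the text
print(f"bandwidth = {half_power_bandwidth(f0, q):.3f} Hz")   # a small fraction of a hertz
```

By the same relation, a normal-conducting cavity with a Q of a few tens of thousands would have a bandwidth tens of kilohertz wide, which is why SRF cavities are prized for low-loss, narrow-band operation.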
13.
Optical cavity
–
An optical cavity, resonating cavity or optical resonator is an arrangement of mirrors that forms a standing wave cavity resonator for light waves. Optical cavities are a major component of lasers, surrounding the gain medium. They are also used in optical parametric oscillators and some interferometers. Light confined in the cavity reflects multiple times, producing standing waves for certain resonance frequencies. Different resonator types are distinguished by the focal lengths of the two mirrors and the distance between them. The geometry must be chosen so that the beam remains stable, i.e. so that the size of the beam does not continually grow with multiple reflections. Resonator types are also designed to meet other criteria such as minimum beam waist or having no focal point inside the cavity. Optical cavities are designed to have a large Q factor: a beam will reflect a very large number of times with little attenuation. Therefore, the frequency line width of the beam is very small compared to the frequency of the laser. In general, radiation patterns which are reproduced on every round-trip of the light through the resonator are the most stable; the basic, or fundamental, transverse mode of a resonator is a Gaussian beam. The most common types of optical cavities consist of two facing plane or spherical mirrors. The simplest of these is the plane-parallel or Fabry–Pérot cavity, consisting of two opposing flat mirrors. This arrangement is very sensitive to mirror alignment; however, the problem is much reduced for very short cavities with a small mirror separation distance. Plane-parallel resonators are therefore commonly used in microchip and microcavity lasers. In these cases, rather than using separate mirrors, a reflective optical coating may be directly applied to the laser medium itself. The plane-parallel resonator is also the basis of the Fabry–Pérot interferometer. For a resonator with two mirrors with radii of curvature R1 and R2, there are a number of common cavity configurations.
If the two radii of curvature are each equal to half the cavity length, a concentric or spherical resonator results. This type of cavity produces a diffraction-limited beam waist in the centre of the cavity, with large beam diameters at the mirrors, filling the whole mirror aperture. Similar to this is the hemispherical cavity, with one plane mirror and one mirror of radius equal to the cavity length. A common and important design is the confocal resonator, with mirrors of equal radii of curvature equal to the cavity length. This design produces the smallest possible beam diameter at the cavity mirrors for a given cavity length. A concave-convex cavity has one convex mirror with a negative radius of curvature. A transparent dielectric sphere, such as a liquid droplet, also forms an optical cavity
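The stability requirement mentioned above is commonly expressed with the g-parameters g_i = 1 − L/R_i: a two-mirror cavity is stable when 0 ≤ g1·g2 ≤ 1. This criterion is standard but not derived in the text; a minimal sketch (Python, with illustrative radii):

```python
def is_stable(L: float, R1: float, R2: float) -> bool:
    """Two-mirror cavity stability test: 0 <= g1*g2 <= 1, with g_i = 1 - L/R_i."""
    g1 = 1.0 - L / R1
    g2 = 1.0 - L / R2
    return 0.0 <= g1 * g2 <= 1.0

L = 1.0
print(is_stable(L, L, L))              # confocal (R1 = R2 = L): g1 = g2 = 0, stable
print(is_stable(L, L / 2, L / 2))      # concentric (R = L/2): g1*g2 = 1, marginally stable
print(is_stable(L, 0.4 * L, 0.4 * L))  # mirrors too strongly curved: unstable
```

The confocal and concentric designs discussed above sit at g1·g2 = 0 and g1·g2 = 1 respectively, i.e. at the interior and the edge of the stability region.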
14.
Damping ratio
–
In engineering, the damping ratio is a dimensionless measure describing how oscillations in a system decay after a disturbance. Many systems exhibit oscillatory behavior when they are disturbed from their position of static equilibrium. A mass suspended from a spring, for example, might, if pulled and released, bounce up and down. On each bounce, the system tries to return to its equilibrium position, but overshoots it. Sometimes losses damp the system and cause the oscillations to gradually decay in amplitude towards zero, or attenuate. The damping ratio is a measure describing how rapidly the oscillations decay from one bounce to the next. If the spring–mass system were completely lossless, the mass would oscillate indefinitely, with each bounce of equal height to the last; this hypothetical case is called undamped. If the system contained high losses, for example if the spring–mass experiment were conducted in a viscous fluid, the mass could slowly return to its rest position without ever overshooting; this case is called overdamped. Commonly, the mass tends to overshoot its starting position, and then return, overshooting again; with each overshoot, some energy in the system is dissipated, and the oscillations die towards zero; this case is called underdamped. Between the overdamped and underdamped cases, there exists a level of damping at which the system will just fail to overshoot; this case is called critical damping. The key difference between critical damping and overdamping is that, in critical damping, the system returns to equilibrium in the minimum amount of time. The damping ratio is a dimensionless parameter, usually denoted by ζ, and is particularly important in the study of control theory. It is also important in the harmonic oscillator. The damping ratio provides a mathematical means of expressing the level of damping in a system relative to critical damping. The governing differential equation can be solved with the approach x(t) = C e^(st), where C and s are both complex constants.
That approach assumes a solution that is oscillatory and/or decaying exponentially. Using it in the ODE gives a condition on the frequency of the damped oscillations, s = −ωn (ζ ± i√(1 − ζ²)). Undamped: the case ζ = 0 corresponds to the simple harmonic oscillator. Underdamped: if s is a complex number, then the solution is a decaying exponential combined with an oscillatory portion that looks like exp(i ωd t), where ωd = ωn√(1 − ζ²). This case occurs for ζ < 1 and is referred to as underdamped. Overdamped: if s is a real number, then the solution is simply a decaying exponential with no oscillation. This case occurs for ζ > 1 and is referred to as overdamped. Critically damped: the case ζ = 1 is the border between the overdamped and underdamped cases, and is referred to as critically damped. This turns out to be a desirable outcome in many cases where engineering design of a damped oscillator is required. The factors Q, damping ratio ζ, and exponential decay rate α are related such that ζ = 1/(2Q) = α/ω0. A lower damping ratio implies a lower decay rate, and so very underdamped systems oscillate for long times
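For a mass–spring–damper with mass m, damping coefficient c and stiffness k, the standard expression for the damping ratio is ζ = c / (2√(km)). A minimal sketch (Python) of computing ζ and classifying the regimes described above; the m, c, k values are illustrative:

```python
import math

def damping_ratio(m: float, c: float, k: float) -> float:
    """zeta = c / (2*sqrt(k*m)); the denominator is the critical damping coefficient."""
    return c / (2.0 * math.sqrt(k * m))

def classify(zeta: float) -> str:
    """Name the damping regime for a given damping ratio."""
    if zeta == 0:
        return "undamped"
    if zeta < 1:
        return "underdamped"
    if zeta == 1:
        return "critically damped"
    return "overdamped"

zeta = damping_ratio(m=1.0, c=2.0, k=25.0)   # critical damping would require c = 10
print(zeta, classify(zeta))                  # 0.2 underdamped
```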
15.
Western Electric
–
Western Electric Company was an American electrical engineering and manufacturing company that served as the primary supplier to AT&T from 1881 to 1996. The company was responsible for many technological innovations and seminal developments in industrial management. It also served as the purchasing agent for the member companies of the Bell System. In 1856, George Shawk purchased an engineering business in Cleveland. On December 31, 1869, he became partners with Enos M. Barton and, later the same year, sold his share to the inventor Elisha Gray. In 1872, Barton and Gray moved the business to Clinton Street, Chicago, Illinois. In 1875, Gray sold his interests to Western Union, including the caveat that he had filed against Alexander Graham Bell's patent application for the telephone. Western Electric was the first company to join in a Japanese joint venture with foreign capital; in 1899, it invested in a 54% share of the Nippon Electric Company, Ltd. Western Electric's representative in Japan was Walter Tenney Carleton. In 1901, Western Electric secretly purchased a controlling interest in a principal competitor, the Kellogg Switchboard & Supply Company, but was later forced by a lawsuit to sell. On July 24, 1915, employees of the Hawthorne Works boarded the SS Eastland in downtown Chicago for a company picnic; the ship rolled over at the dock and over 800 people died. In 1920, Alice Heacock Seidel was the first of Western Electric's female employees to be given permission to stay on after she had married. This set a precedent in the company, which previously had not allowed married women in its employ. Miss Heacock had worked for Western Electric for sixteen years before her marriage; if the women at the top were permitted to remain after marriage, then all women would expect the same privilege. How far and how fast the policy was expanded is shown by the fact that a few years later women were given maternity leaves with no loss of time on their service records.
In 1925, ITT purchased the Bell Telephone Manufacturing Company of Brussels, Belgium, which manufactured rotary system switching equipment under the Western Electric brand. Early on, Western Electric also managed an electrical equipment distribution business. Bell Telephone Laboratories was half-owned by Western Electric, the other half belonging to AT&T. Western Electric used various logos during its existence; starting in 1914 it used an image of AT&T's statue Spirit of Communication. In 1915, Western Electric Manufacturing was incorporated in New York, New York, as a wholly owned subsidiary of AT&T, under the name Western Electric Company. AT&T and Bell System companies were rumored to employ small armies of inspectors to check household line voltage levels to determine if non-leased phones were in use by consumers. Western Electric telephones were owned not by end customers but by the local Bell System telephone companies, all of which were subsidiaries of AT&T. Each phone was leased from the phone company on a monthly basis by customers, who generally paid for their phone as part of the recurring lease fees
16.
Full width at half maximum
–
Full width at half maximum (FWHM) is the width of a spectrum curve measured between those points on the y-axis which are half the maximum amplitude. Half width at half maximum (HWHM) is half of the FWHM. FWHM is applied to such phenomena as the duration of pulse waveforms and the spectral width of sources used for optical communications, and to the resolution of spectrometers. The term full duration at half maximum (FDHM) is preferred when the independent variable is time. In signal processing terms, this is at most −3 dB of attenuation, called the half power point. If the considered function is the density of a normal distribution with standard deviation σ, then FWHM = 2√(2 ln 2) σ ≈ 2.355 σ. The width does not depend on the expected value x0; it is invariant under translations. In spectroscopy, half the width at half maximum, HWHM, is in common use. For example, a Lorentzian/Cauchy distribution of height 1/(πγ) can be defined by f(x) = 1/(πγ [1 + (x/γ)²]), with FWHM = 2γ. Another important distribution function, related to solitons in optics, is the hyperbolic secant, f(t) = sech(t/X). Any translating element was omitted, since it does not affect the FWHM. For this impulse we have FWHM = 2 arsech(1/2) X = 2 ln(2 + √3) X ≈ 2.634 X, where arsech is the inverse hyperbolic secant. See also: Gaussian function, cutoff frequency. This article incorporates public domain material from the General Services Administration document Federal Standard 1037C
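For a Gaussian with standard deviation σ, the standard relation is FWHM = 2√(2 ln 2) σ ≈ 2.355 σ. A minimal numerical check (Python):

```python
import math

def gaussian_fwhm(sigma: float) -> float:
    """FWHM of a Gaussian with standard deviation sigma: 2*sqrt(2*ln 2)*sigma."""
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

sigma = 3.0
fwhm = gaussian_fwhm(sigma)
print(round(fwhm / sigma, 4))   # 2.3548
# sanity check: exp(-x^2 / (2 sigma^2)) really is at half its maximum at x = FWHM/2
x = fwhm / 2.0
print(round(math.exp(-x ** 2 / (2.0 * sigma ** 2)), 6))   # 0.5
```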
17.
Inductor
–
An inductor, also called a coil or reactor, is a passive two-terminal electrical component that stores electrical energy in a magnetic field when electric current is flowing through it. An inductor typically consists of a conductor, such as a wire, wound into a coil. When the current flowing through an inductor changes, the time-varying magnetic field induces a voltage in the conductor. According to Lenz's law, the direction of the induced electromotive force opposes the change in current that created it; as a result, inductors oppose any changes in current through them. An inductor is characterized by its inductance, which is the ratio of the voltage to the rate of change of current. In the International System of Units, the unit of inductance is the henry (H). Inductors have values that typically range from 1 µH to 1 H. Many inductors have a magnetic core made of iron or ferrite inside the coil, which serves to increase the magnetic field and thus the inductance. Along with capacitors and resistors, inductors are one of the three passive linear circuit elements that make up electronic circuits. Inductors are widely used in alternating current (AC) electronic equipment, particularly in radio equipment. They are used to block AC while allowing DC to pass; they are also used in electronic filters to separate signals of different frequencies, and in combination with capacitors to make tuned circuits, used to tune radio and TV receivers. An electric current flowing through a conductor generates a magnetic field surrounding it. Any change in current, and therefore in the magnetic flux through the cross-section of the inductor, creates an opposing electromotive force in the conductor. An inductor is a component consisting of a wire or other conductor shaped to increase the magnetic flux through the circuit. Winding the wire into a coil increases the number of times the magnetic flux lines link the circuit, increasing the field and thus the inductance; the more turns, the higher the inductance.
The inductance also depends on the shape of the coil and the separation of the turns. By adding a magnetic core made of a ferromagnetic material like iron inside the coil, the magnetizing field from the coil will induce magnetization in the material, increasing the magnetic flux. The high permeability of a ferromagnetic core can increase the inductance of a coil by a factor of several thousand over what it would be without it. Any change in the current through an inductor creates a changing flux, inducing a voltage across the inductor. For example, an inductor with an inductance of 1 henry produces an EMF of 1 volt when the current through the inductor changes at the rate of 1 ampere per second. This is usually taken to be the constitutive relation (defining equation) of the inductor. The dual of the inductor is the capacitor, which stores energy in an electric field rather than a magnetic field. Its current–voltage relation is obtained by exchanging current and voltage in the inductor equations. The polarity of the induced voltage is given by Lenz's law, which states that it will be such as to oppose the change in current
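The defining relation v = L·dI/dt from the paragraph above can be sketched directly (Python; the 1 H, 1 A/s case is the one in the text, while the 10 µH figure is an illustrative assumption):

```python
def induced_emf(inductance_h: float, di_dt: float) -> float:
    """EMF across an inductor: v = L * dI/dt (henries times amperes per second)."""
    return inductance_h * di_dt

print(induced_emf(1.0, 1.0))               # 1.0 V: the 1 H, 1 A/s example from the text
print(round(induced_emf(10e-6, 2e3), 6))   # 0.02 V for a 10 uH coil ramping at 2000 A/s
```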
18.
Capacitor
–
A capacitor is a passive two-terminal electrical component that stores electrical energy in an electric field. The effect of a capacitor is known as capacitance; a capacitor was therefore historically first known as an electric condenser. The physical form and construction of practical capacitors vary widely, and many capacitor types are in common use. Most capacitors contain at least two electrical conductors, often in the form of plates or surfaces separated by a dielectric medium. A conductor may be a foil, thin film, or sintered bead of metal. The nonconducting dielectric acts to increase the capacitor's charge capacity. Materials commonly used as dielectrics include glass, ceramic, plastic film, paper, and mica. Capacitors are widely used as parts of electrical circuits in many common electrical devices. Unlike a resistor, an ideal capacitor does not dissipate energy. No current actually flows through the dielectric; instead, the effect is a displacement of charge through the source circuit. If the condition is maintained sufficiently long, this displacement current through the battery ceases. However, if a time-varying voltage is applied across the leads of the capacitor, a displacement current can flow. Capacitance is defined as the ratio of the electric charge on each conductor to the potential difference between them. The unit of capacitance in the International System of Units is the farad (F). Capacitance values of typical capacitors for use in general electronics range from about 1 pF to about 1 mF. The capacitance of a capacitor is proportional to the surface area of the plates and inversely related to the separation between them. In practice, the dielectric between the plates passes a small amount of leakage current, and it has an electric field strength limit, known as the breakdown voltage. The conductors and leads introduce an undesired inductance and resistance. Capacitors are widely used in electronic circuits for blocking direct current while allowing alternating current to pass.
In analog filter networks, they smooth the output of power supplies; in resonant circuits they tune radios to particular frequencies. In electric power systems, they stabilize voltage and power flow. The property of energy storage in capacitors was exploited as dynamic memory in early digital computers. Von Kleist's hand and the water acted as conductors, and the jar as a dielectric. Von Kleist found that touching the wire resulted in a powerful spark, much more painful than that obtained from an electrostatic machine
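The definition of capacitance above gives Q = C·V; the stored energy E = ½CV² is a standard companion relation (not stated in the text). A minimal sketch (Python; the 100 µF and 12 V figures are illustrative):

```python
def charge(capacitance_f: float, voltage_v: float) -> float:
    """Charge on each plate: Q = C * V."""
    return capacitance_f * voltage_v

def energy(capacitance_f: float, voltage_v: float) -> float:
    """Energy stored in the electric field: E = C * V^2 / 2."""
    return 0.5 * capacitance_f * voltage_v ** 2

C, V = 100e-6, 12.0                 # illustrative: a 100 uF capacitor charged to 12 V
print(round(charge(C, V), 6))       # 0.0012 coulombs
print(round(energy(C, V), 6))       # 0.0072 joules
```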
19.
Resistor
–
A resistor is a passive two-terminal electrical component that implements electrical resistance as a circuit element. In electronic circuits, resistors are used to reduce current flow, adjust signal levels, divide voltages, bias active elements, and terminate transmission lines, among other uses. High-power resistors that can dissipate many watts of power as heat may be used as part of motor controls or in power distribution systems. Fixed resistors have resistances that change only slightly with temperature, time or operating voltage. Variable resistors can be used to adjust circuit elements, or as sensing devices for heat, light, humidity, or force. Resistors are common elements of electrical networks and electronic circuits and are ubiquitous in electronic equipment. Practical resistors as discrete components can be composed of various compounds; resistors are also implemented within integrated circuits. The electrical function of a resistor is specified by its resistance; the nominal value of the resistance falls within the manufacturing tolerance, indicated on the component. Two typical schematic diagram symbols are as follows. The notation to state a resistor's value in a circuit diagram varies. One common scheme is the letter and digit code for resistance values following IEC 60062. It avoids using a decimal separator and replaces the decimal separator with a letter loosely associated with SI prefixes corresponding with the part's resistance. For example, 8K2 as a part marking code, in a circuit diagram or in a bill of materials, indicates a resistor value of 8.2 kΩ. Additional zeros imply a tighter tolerance, for example 15M0 for three significant digits. When the value can be expressed without the need for a prefix, an R is used instead of the decimal separator; for example, 1R2 indicates 1.2 Ω, and 18R indicates 18 Ω. Ohm's law relates the current through a resistor to the voltage across it: for example, if a 300 ohm resistor is attached across the terminals of a 12 volt battery, then a current of 12 / 300 = 0.04 amperes flows through that resistor.
Practical resistors also have some inductance and capacitance, which affect the relation between voltage and current in alternating current circuits. The ohm is the SI unit of electrical resistance, named after Georg Simon Ohm; an ohm is equivalent to a volt per ampere. Since resistors are specified and manufactured over a very large range of values, the derived units of milliohm, kilohm, and megohm are also in common usage. The total resistance of resistors connected in series is the sum of their individual resistance values: Req = R1 + R2 + ⋯ + Rn. The total resistance of resistors connected in parallel is the reciprocal of the sum of the reciprocals of the individual resistors: 1/Req = 1/R1 + 1/R2 + ⋯ + 1/Rn. For example, a 10 ohm resistor connected in parallel with a 5 ohm resistor produces a combined resistance of about 3.33 ohms. A resistor network that is a combination of parallel and series connections can be broken up into smaller parts that are either one or the other
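The series and parallel rules above, together with the 12 V across 300 Ω example, can be sketched as (Python):

```python
def series(*resistances: float) -> float:
    """Total resistance of resistors in series: the sum of the individual values."""
    return sum(resistances)

def parallel(*resistances: float) -> float:
    """Total resistance in parallel: reciprocal of the sum of reciprocals."""
    return 1.0 / sum(1.0 / r for r in resistances)

print(series(10, 5))              # 15 ohms
print(round(parallel(10, 5), 2))  # 3.33 ohms: the 10-ohm-parallel-5-ohm example
print(12 / 300)                   # 0.04 A through 300 ohms across a 12 V battery
```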
20.
Potential energy
–
In physics, potential energy is the energy possessed by a body by virtue of its position relative to others, stresses within itself, its electric charge, and other factors. The unit for energy in the International System of Units is the joule (J). The term potential energy was introduced by the 19th-century Scottish engineer and physicist William Rankine, although it has links to the Greek philosopher Aristotle's concept of potentiality. Potential energy is associated with forces that act on a body in such a way that the total work done by these forces on the body depends only on the initial and final positions of the body in space. These forces, which are called potential forces, can be represented at every point in space by vectors expressed as gradients of a scalar function called the potential. Potential energy is the energy of an object by virtue of its position relative to other objects. Potential energy is often associated with restoring forces such as a spring or the force of gravity. The action of stretching the spring or lifting the mass is performed by an external force that works against the force field of the potential. This work is stored in the force field, and is said to be stored as potential energy. If the external force is removed, the force field acts on the body to perform the work as it moves the body back to the initial position. Suppose a ball of mass m is at height h; if the acceleration of free fall is g, the weight of the ball is mg, and its gravitational potential energy is mgh. There are various types of potential energy, each associated with a particular type of force. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of mutual positions of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their mutual positions.
Forces derivable from a potential are also called conservative forces. The work done by a conservative force is W = −ΔU, where ΔU is the change in the potential energy associated with the force. The negative sign provides the convention that work done against a force field increases potential energy. Common notations for potential energy are U, V, and Ep. Potential energy is closely linked with forces; in this case, the force can be defined as the negative of the vector gradient of the potential field. If the work done by a force is independent of the path, then the work is evaluated from the start and end points of the trajectory of the point of application
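The mgh example and the force-as-negative-gradient statement can be sketched together (Python; the mass and height are illustrative values, with g = 9.81 m/s²):

```python
G = 9.81  # m/s^2, standard gravitational acceleration near the Earth's surface

def potential(m: float, h: float) -> float:
    """Gravitational potential energy near the surface: U = m*g*h."""
    return m * G * h

def force(m: float, h: float, dh: float = 1e-6) -> float:
    """Numerical negative gradient of U with respect to height; should equal -m*g."""
    return -(potential(m, h + dh) - potential(m, h - dh)) / (2.0 * dh)

print(round(potential(2.0, 5.0), 2))  # 98.1 J for a 2 kg ball at 5 m
print(round(force(2.0, 5.0), 2))      # -19.62 N, i.e. the weight m*g acting downward
```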
21.
Kinetic energy
–
In physics, the kinetic energy of an object is the energy that it possesses due to its motion. It is defined as the work needed to accelerate a body of a given mass from rest to its stated velocity. Having gained this energy during its acceleration, the body maintains this kinetic energy unless its speed changes; the same amount of work is done by the body in decelerating from its current speed to a state of rest. In classical mechanics, the kinetic energy of a non-rotating object of mass m traveling at a speed v is (1/2)mv². In relativistic mechanics, this is a good approximation only when v is much less than the speed of light. The standard unit of kinetic energy is the joule. The adjective kinetic has its roots in the Greek word κίνησις kinesis, meaning motion. The dichotomy between kinetic energy and potential energy can be traced back to Aristotle's concepts of actuality and potentiality. The principle in classical mechanics that E ∝ mv² was first developed by Gottfried Leibniz and Johann Bernoulli. Willem 's Gravesande of the Netherlands provided experimental evidence of this relationship by dropping weights from different heights into a block of clay; Émilie du Châtelet recognized the implications of the experiment and published an explanation. The terms kinetic energy and work in their present scientific meanings date back to the mid-19th century. Early understandings of these ideas can be attributed to Gaspard-Gustave Coriolis, who in 1829 published the paper titled Du Calcul de l'Effet des Machines outlining the mathematics of kinetic energy. William Thomson, later Lord Kelvin, is given the credit for coining the term kinetic energy. Energy occurs in many forms, including chemical energy, thermal energy, electromagnetic radiation, gravitational energy, electric energy, elastic energy, nuclear energy, and rest energy. These can be categorized in two classes: potential energy and kinetic energy.
Kinetic energy is the movement energy of an object; kinetic energy can be transferred between objects and transformed into other kinds of energy. Kinetic energy may be best understood by examples that demonstrate how it is transformed to and from other forms of energy. For example, a cyclist uses chemical energy provided by food to accelerate a bicycle to a chosen speed. On a level surface, this speed can be maintained without further work, except to overcome air resistance. The chemical energy has been converted into kinetic energy, the energy of motion, but the process is not completely efficient and produces heat within the cyclist. The kinetic energy in the moving cyclist and the bicycle can be converted to other forms. For example, the cyclist could encounter a hill just high enough to coast up, so that the bicycle comes to a complete halt at the top. The kinetic energy has now largely been converted to gravitational potential energy that can be released by freewheeling down the other side of the hill. Since the bicycle lost some of its energy to friction, it never regains all of its speed without additional pedaling. The energy is not destroyed; it has only been converted to another form by friction
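The classical formula (1/2)mv² and the coasting-uphill example can be sketched as (Python; the cyclist-plus-bicycle mass and speed are illustrative assumptions):

```python
G = 9.81  # m/s^2

def kinetic_energy(m: float, v: float) -> float:
    """Classical kinetic energy: (1/2) * m * v^2."""
    return 0.5 * m * v ** 2

m, v = 80.0, 10.0            # illustrative mass (kg) and speed (m/s)
print(kinetic_energy(m, v))  # 4000.0 J
# coasting uphill, that energy buys a height h = v^2 / (2 g), ignoring friction
print(round(v ** 2 / (2.0 * G), 1))   # 5.1 m
```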
22.
Conservative force
–
A conservative force is a force with the property that the total work done in moving a particle between two points is independent of the path taken. Equivalently, if a particle travels in a closed loop, the net work done by a conservative force is zero. A conservative force depends only on the position of the object. If a force is conservative, it is possible to assign a numerical value for the potential at any point; when an object moves from one location to another, the force changes the potential energy of the object by an amount that does not depend on the path taken. If the force is not conservative, then defining a scalar potential is not possible. Gravitational force is an example of a conservative force, while frictional force is an example of a non-conservative force. Other examples of conservative forces are the force in an elastic spring, the electrostatic force between electric charges, and the magnetic force between magnetic poles; the last two are called central forces, as they act along the line joining the centres of two charged/magnetized bodies. Thus, all central forces are conservative forces. Informally, a conservative force can be thought of as a force that conserves mechanical energy. Suppose a particle starts at point A, and there is a force F acting on it. Then the particle is moved around by other forces, and eventually ends up at A again. Though the particle may still be moving, at the instant when it passes point A again, if the net work done by F to this point is 0, then F passes the closed path test. Any force that passes the closed path test for all possible closed paths is classified as a conservative force. The gravitational force, spring force, magnetic force and electric force are examples of conservative forces, while friction and air drag are classical examples of non-conservative forces. For non-conservative forces, the mechanical energy that is lost has to go somewhere else. Usually the energy is turned into heat, for example the heat generated by friction. In addition to heat, friction also often produces some sound energy.
The water drag on a moving boat converts the boat's mechanical energy into not only heat and sound energy, but also the energy of the waves it generates. These and other energy losses are irreversible because of the second law of thermodynamics. A direct consequence of the closed path test is that the work done by a conservative force on a particle moving between any two points does not depend on the path taken by the particle. This is illustrated in the figure to the right: the work done by the gravitational force on an object depends only on its change in height, because the gravitational force is conservative. The work done by a conservative force is equal to the negative of the change in potential energy during that process
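Path independence can be checked numerically for the uniform gravitational force F = (0, −mg): the work along any path between two points depends only on the change in height. A minimal sketch (Python; the two sample paths are illustrative):

```python
def work(path, n: int = 1000, m: float = 1.0, g: float = 9.81) -> float:
    """Line integral of F . dr along path(t), t in [0, 1], with F = (0, -m*g)."""
    total = 0.0
    _, y0 = path(0.0)
    for i in range(1, n + 1):
        _, y1 = path(i / n)
        total += -m * g * (y1 - y0)   # only the vertical displacement contributes
        y0 = y1
    return total

straight = lambda t: (t, t)      # straight line from (0, 0) to (1, 1)
curved = lambda t: (t, t * t)    # parabolic detour to the same endpoint
print(round(work(straight), 3), round(work(curved), 3))   # -9.81 -9.81
```

Both paths give the same work, −mgΔh, which is exactly the statement that a scalar potential U = mgh exists for this force.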
23.
AC power
–
Power in an electric circuit is the rate of flow of energy past a given point of the circuit. In alternating current circuits, energy storage elements such as inductors and capacitors may result in periodic reversals of the direction of energy flow. The portion of power that, averaged over a complete cycle of the AC waveform, results in net transfer of energy in one direction is known as active power. The portion of power due to stored energy, which returns to the source in each cycle, is known as reactive power. In a simple alternating current circuit consisting of a source and a linear load, both the current and voltage are sinusoidal. If the load is purely resistive, the two quantities reverse their polarity at the same time; at every instant the product of voltage and current is positive or zero, and in this case only active power is transferred. If the load is purely reactive, then the voltage and current are 90 degrees out of phase and there is no net energy flow over each half cycle. In this case, only reactive power flows: there is no net transfer of energy to the load; however, electrical power does flow along the wires and returns by flowing in reverse along the same wires. During its travels both from the source to the reactive load and back to the power source, this purely reactive power flow loses energy to the line resistance. Practical loads have resistance as well as inductance and/or capacitance. Power engineers analyse the apparent power as the magnitude of the vector sum of active and reactive power. Apparent power is the product of the rms values of voltage and current. Conductors, transformers and generators must be sized to carry the total current, not just the current that does useful work. Another consequence is that adding the apparent power for two loads will not accurately give the total power unless they have the same phase difference between current and voltage.
Conventionally, capacitors are treated as if they generate reactive power and inductors as if they consume it. If a capacitor and an inductor are placed in parallel, then the currents flowing through the capacitor and the inductor tend to cancel rather than add. The result of this is that capacitive and inductive circuit elements tend to cancel each other out. Current lagging voltage and current leading voltage are both denoted in the diagram to the right. In the diagram, P is the active power, Q is the reactive power, and S is the complex power. Reactive power does not do any work, so it is represented as the imaginary axis of the vector diagram. Active power does do work, so it is the real axis. The unit for all forms of power is the watt, but this unit is generally reserved for active power
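Active, reactive and apparent power follow from the rms voltage, rms current and the phase angle φ between them: S = V·I, P = S·cos φ, Q = S·sin φ. A minimal sketch (Python; the 230 V, 10 A, 60° figures are illustrative assumptions):

```python
import math

def powers(v_rms: float, i_rms: float, phi_deg: float):
    """Return (active P in W, reactive Q in var, apparent S in VA) for phase angle phi."""
    s = v_rms * i_rms
    phi = math.radians(phi_deg)
    return s * math.cos(phi), s * math.sin(phi), s

p, q, s = powers(230.0, 10.0, 60.0)
print(round(p, 1), round(q, 1), s)   # 1150.0 1991.9 2300.0
```

Note that P² + Q² = S², which is the vector-sum relation the paragraph above describes.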
24.
Linear time-invariant theory
–
Linear time-invariant (LTI) theory investigates the response of a linear and time-invariant system to an arbitrary input signal. Such systems are also called linear translation-invariant, to give the theory the most general reach. In the case of generic discrete-time (sampled) systems, linear shift-invariant is the corresponding term. A good example of LTI systems are electrical circuits that can be made up of resistors, capacitors, and inductors. The defining properties of any LTI system are linearity and time invariance. Linearity means that the relationship between the input and the output is a linear map: a superposition of inputs produces the corresponding superposition of outputs, and it follows that this can be extended to an arbitrary number of terms, in particular to weighted sums in which the coefficients cω are scalars and the xω are inputs. Time invariance means that whether we apply an input to the system now or T seconds from now, the output will be identical except for a time delay of T seconds. That is, if the output due to input x(t) is y(t), then the output due to input x(t − T) is y(t − T); hence, the system is time invariant because the output does not depend on the particular time the input is applied. The fundamental result in LTI system theory is that any LTI system can be characterized entirely by a single function called the system's impulse response. The output of the system is simply the convolution of the input to the system with the system's impulse response; this method of analysis is called the time domain point of view. The same result is true of discrete-time linear shift-invariant systems, in which the signals are discrete-time samples. Equivalently, any LTI system can be characterized in the frequency domain by the system's transfer function, which is the Laplace transform of the system's impulse response. As a result of the properties of these transforms, the output of the system in the frequency domain is the product of the transfer function and the transform of the input. In other words, convolution in the time domain is equivalent to multiplication in the frequency domain. For all LTI systems, the eigenfunctions, and the basis functions of the transforms, are complex exponentials: an input A e^(st) produces an output B e^(st) for some complex constants A and B, and the ratio B/A is the transfer function at frequency s.
LTI systems cannot produce frequency components that are not in the input. LTI system theory is good at describing many important systems; most LTI systems are considered easy to analyze, at least compared to the time-varying and/or nonlinear case. Any system that can be modeled as a linear homogeneous differential equation with constant coefficients is an LTI system. Examples of such systems are electrical circuits made up of resistors, inductors, and capacitors (RLC circuits). Ideal spring–mass–damper systems are also LTI systems, and are mathematically equivalent to RLC circuits
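The two defining properties can be checked directly once the impulse response is known. The sketch below uses a hypothetical 3-tap impulse response and verifies linearity and time invariance by computing outputs as convolutions:

```python
def convolve(x, h):
    """Discrete convolution y[n] = sum over k of x[k] * h[n-k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

h = [0.25, 0.5, 0.25]               # impulse response of an assumed LTI system
x = [0.0, 4.0, 8.0, 4.0, 0.0]       # arbitrary input signal
y = convolve(x, h)

# Linearity: scaling the input scales the output identically.
assert convolve([2 * s for s in x], h) == [2 * s for s in y]
# Time invariance: delaying the input by one sample delays the output by one.
assert convolve([0.0] + x, h)[1:] == y
print(y)   # → [0.0, 1.0, 4.0, 6.0, 4.0, 1.0, 0.0]
```

The specific coefficients are illustrative; any fixed h would satisfy the same checks, which is exactly the content of the characterization theorem above.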
25.
Exponential decay
–
A quantity is subject to exponential decay if it decreases at a rate proportional to its current value. Symbolically, this process can be expressed by the differential equation dN/dt = −λN, where N is the quantity and λ is a positive rate called the exponential decay constant. The solution to this equation is N(t) = N0 e^(−λt), where N(t) is the quantity at time t, and N0 = N(0) is the initial quantity, i.e. the quantity at time t = 0. If the decaying quantity, N(t), is the number of discrete elements in a certain set, it is possible to compute the average length of time that an element remains in the set. This is called the mean lifetime, τ, and it can be shown that it relates to the decay rate, λ, as τ = 1/λ. For example, if the initial population of the assembly, N(0), is 1000, then the population at time τ is 368. A very similar equation will be seen below, which arises when the base of the exponential is chosen to be 2; in that case the scaling time is the half-life. A more intuitive characteristic of exponential decay for many people is the time required for the decaying quantity to fall to one half of its initial value. This time is called the half-life, and often denoted by the symbol t1/2. The half-life can be written in terms of the decay constant, or the mean lifetime, as t1/2 = ln(2)/λ = τ ln 2. When this expression is inserted for τ in the exponential equation above, and ln 2 is absorbed into the base, the amount of material left is 1/2 raised to the number of half-lives that have passed. Thus, after 3 half-lives there will be 1/2³ = 1/8 of the original material left. Therefore, the mean lifetime τ is equal to the half-life divided by the natural log of 2, or τ = t1/2 / ln 2. E.g. polonium-210 has a half-life of 138 days, and a mean lifetime of 138/ln 2 ≈ 199 days. The equation that describes exponential decay is dN/dt = −λN or, by rearranging, dN/N = −λ dt. This is the form of the equation that is most commonly used to describe exponential decay. Any one of decay constant, mean lifetime, or half-life is sufficient to characterise the decay. The notation λ for the decay constant is a remnant of the usual notation for an eigenvalue; in this case, λ is the eigenvalue of the negative of the differentiation operator with N(t) as the corresponding eigenfunction. The units of the decay constant are s−1
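These relations can be checked with a short numerical sketch, using polonium-210's 138-day half-life as the example figure:

```python
import math

def remaining(n0, half_life, t):
    """Quantity left after time t: N(t) = N0 * (1/2)**(t / half_life)."""
    return n0 * 0.5 ** (t / half_life)

n0, t_half = 1000.0, 138.0       # e.g. polonium-210's 138-day half-life
print(remaining(n0, t_half, 3 * t_half))   # → 125.0 (1/8 left after 3 half-lives)

# The same decay written with the decay constant and the mean lifetime.
lam = math.log(2) / t_half        # decay constant, lambda = ln(2) / t_half
tau = 1.0 / lam                   # mean lifetime, tau = t_half / ln(2)
assert math.isclose(n0 * math.exp(-lam * 3 * t_half), 125.0)
assert math.isclose(tau, t_half / math.log(2))
```

Either parameterisation (half-life, decay constant, or mean lifetime) fully determines the curve, as the assertions confirm.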
26.
Asymptote
–
In analytic geometry, an asymptote of a curve is a line such that the distance between the curve and the line approaches zero as one or both of the x or y coordinates tends to infinity. Some sources include the requirement that the curve may not cross the line infinitely often; in some contexts, such as algebraic geometry, an asymptote is defined as a line which is tangent to a curve at infinity. The word asymptote is derived from the Greek ἀσύμπτωτος, which means "not falling together", from ἀ- (not) + σύν (together) + πτωτ-ός (fallen). The term was introduced by Apollonius of Perga in his work on conic sections. There are potentially three kinds of asymptotes: horizontal, vertical and oblique asymptotes. For curves given by the graph of a function y = f(x), vertical asymptotes are vertical lines near which the function grows without bound. Asymptotes convey information about the behavior of curves in the large; the study of asymptotes of functions, construed in a broad sense, forms a part of the subject of asymptotic analysis. The idea that a curve may come arbitrarily close to a line without actually becoming the same may seem to counter everyday experience. The representations of a line and a curve as marks on a piece of paper or as pixels on a screen have a positive width, so if they were to be extended far enough they would seem to merge. But these are physical representations of the corresponding mathematical entities; the line and the curve are idealized concepts whose width is 0. Therefore, the understanding of the idea of an asymptote requires an effort of reason rather than experience. Consider the graph of the function f(x) = 1/x shown to the right. The coordinates of the points on the curve are of the form (x, 1/x), where x is a number other than 0. No matter how large x becomes, its reciprocal 1/x is never 0, so the curve never actually touches the x-axis; similarly, as x approaches 0 the curve extends farther and farther upward as it comes closer and closer to the y-axis.
Thus, both the x and y-axes are asymptotes of the curve. These ideas are part of the basis of the concept of a limit in mathematics, and this connection is explained more fully below. The asymptotes most commonly encountered in the study of calculus are of curves of the form y = f(x); these can be computed using limits and classified into horizontal, vertical and oblique asymptotes depending on their orientation. Horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞; as the name indicates, they are parallel to the x-axis. Vertical asymptotes are vertical lines near which the function grows without bound. Oblique asymptotes are diagonal lines such that the difference between the curve and the line approaches 0 as x tends to +∞ or −∞; more general types of asymptotes can be defined in this case. Only open curves that have some infinite branch can have an asymptote; no closed curve can have an asymptote
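The oblique case can be illustrated with a small numerical sketch (the curve below is an example of my choosing, not from the text): f(x) = (x² + 1)/x = x + 1/x has the line y = x as an oblique asymptote, and the gap between them shrinks toward zero without ever reaching it.

```python
def f(x):
    """A curve with an oblique asymptote: f(x) = (x*x + 1)/x = x + 1/x."""
    return (x * x + 1.0) / x

# The vertical gap between the curve and the line y = x is 1/x: it shrinks
# toward 0 as x grows, but is never exactly 0, so curve and line never meet.
gaps = [f(x) - x for x in (1.0, 10.0, 100.0, 1000.0)]
print(gaps[0])   # → 1.0
assert all(g > 0 for g in gaps)
assert gaps == sorted(gaps, reverse=True)
```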
27.
Impulse response
–
In signal processing, the impulse response, or impulse response function, of a dynamic system is its output when presented with a brief input signal, called an impulse. More generally, an impulse response is the reaction of any dynamic system in response to some external change. In both cases, the impulse response describes the reaction of the system as a function of time. In all these cases, the dynamic system and its impulse response may be actual physical objects, or may be mathematical systems of equations describing such objects. Since the impulse function contains all frequencies, the impulse response defines the response of a linear time-invariant system for all frequencies. Mathematically, how the impulse is described depends on whether the system is modeled in discrete or continuous time. The impulse can be modeled as a Dirac delta function for continuous-time systems, or as the Kronecker delta for discrete-time systems. The Dirac delta represents the limiting case of a pulse made very short in time while maintaining its area or integral. While this is impossible in any real system, it is a useful idealisation. In Fourier analysis theory, such an impulse comprises equal portions of all possible excitation frequencies, which makes it a convenient test probe. Any system in a large class known as linear, time-invariant (LTI) is completely characterized by its impulse response. That is, for any input, the output can be calculated in terms of the input and the impulse response. The impulse response of a linear transformation is the image of Dirac's delta function under the transformation, analogous to the fundamental solution of a partial differential operator. It is usually easier to analyze systems using transfer functions as opposed to impulse responses; the transfer function is the Laplace transform of the impulse response. The Laplace transform of a system's output may be determined by the multiplication of the transfer function with the input's Laplace transform in the complex plane.
An inverse Laplace transform of this result will yield the output in the time domain. To determine an output directly in the time domain requires the convolution of the input with the impulse response; when the transfer function and the Laplace transform of the input are known, this convolution may be more complicated than the alternative of multiplying two functions in the frequency domain. The impulse response, considered as a Green's function, can be thought of as an influence function: how a point of input influences output. In practical systems, it is not possible to produce a perfect impulse to serve as input for testing; therefore, a brief pulse is sometimes used as an approximation. Provided that the pulse is short compared to the impulse response, the result will be close to the true, theoretical, impulse response. An application that demonstrates this idea was the development of impulse response loudspeaker testing in the 1970s. Loudspeakers suffer from phase inaccuracy, a defect unlike other measured properties such as frequency response. Impulse response analysis is a facet of radar, ultrasound imaging, and many areas of digital signal processing. An interesting example would be broadband internet connections: DSL/broadband services use adaptive equalisation techniques to help compensate for signal distortion and interference introduced by the copper phone lines used to deliver the service. In control theory the impulse response is the response of a system to a Dirac delta input. In acoustic and audio applications, impulse responses enable the acoustic characteristics of a location, such as a concert hall, to be captured
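The idea of probing a system with an impulse can be sketched in discrete time. The system below is an assumed example (a first-order recursive filter), not one from the text; the point is that the measured impulse response then predicts the response to any other input by convolution:

```python
def system(x):
    """An assumed example LTI system: a leaky accumulator y[n] = 0.5*y[n-1] + x[n]."""
    y = []
    for n, sample in enumerate(x):
        y.append((0.5 * y[n - 1] if n > 0 else 0.0) + sample)
    return y

def convolve_trunc(x, h):
    """Convolution of x with h, truncated to len(x) samples."""
    return [sum(x[k] * h[n - k] for k in range(n + 1) if n - k < len(h))
            for n in range(len(x))]

# Probe the system with a unit impulse to measure its impulse response.
impulse = [1.0] + [0.0] * 7
h = system(impulse)                     # h[n] = 0.5**n
print(h[:4])                            # → [1.0, 0.5, 0.25, 0.125]

# The response to any other input is the convolution of that input with h.
x = [1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0]
assert system(x) == convolve_trunc(x, h)
```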
28.
Low-pass filter
–
A low-pass filter is a filter that passes signals with a frequency lower than a certain cutoff frequency and attenuates signals with frequencies higher than the cutoff frequency. The exact frequency response of the filter depends on the filter design. The filter is sometimes called a high-cut filter, or treble-cut filter in audio applications. A low-pass filter is the complement of a high-pass filter. Low-pass filters provide a smoother form of a signal, removing the short-term fluctuations and leaving the longer-term trend. Filter designers will often use the low-pass form as a prototype filter, that is, a filter with unity bandwidth and impedance; the desired filter is obtained from the prototype by scaling for the desired bandwidth and impedance and transforming into the desired bandform. Examples of low-pass filters occur in acoustics, optics and electronics. A stiff physical barrier tends to reflect higher sound frequencies, and so acts as a low-pass filter for transmitting sound; when music is playing in another room, the low notes are easily heard. An optical filter with the same function can correctly be called a low-pass filter, but conventionally is called a longpass filter, to avoid confusion. In an electronic low-pass RC filter for voltage signals, high frequencies in the input are attenuated while the filter has little attenuation below its cutoff frequency; for current signals, a similar circuit, using a resistor and capacitor in parallel, works in a similar manner. Electronic low-pass filters are used on inputs to subwoofers and other types of loudspeakers. Radio transmitters use low-pass filters to block harmonic emissions that might interfere with other communications. The tone knob on many electric guitars is a low-pass filter used to reduce the amount of treble in the sound. An integrator is another time constant low-pass filter. Telephone lines fitted with DSL splitters use low-pass and high-pass filters to separate DSL and POTS signals sharing the same pair of wires. Low-pass filters also play a significant role in the sculpting of sound created by analogue synthesisers. The transition region present in practical filters does not exist in an ideal filter.
An ideal low-pass filter completely eliminates all frequencies above the cutoff while passing those below unchanged; the filter would therefore need to have infinite delay, or knowledge of the infinite future and past, to perform the convolution. It is effectively realizable for pre-recorded digital signals by assuming extensions of zero into the past and future, or more typically by making the signal repetitive and using Fourier analysis. The delay of a practical approximation is manifested as phase shift, and greater accuracy in approximation requires a longer delay. An ideal low-pass filter results in ringing artifacts via the Gibbs phenomenon. These can be reduced or worsened by choice of windowing function; for example, simple truncation causes severe ringing artifacts in signal reconstruction, and to reduce these artifacts one uses window functions which drop off more smoothly at the edges. The Whittaker–Shannon interpolation formula describes how to use a perfect low-pass filter to reconstruct a signal from a sampled digital signal. Real digital-to-analog converters use real filter approximations. There are many different types of filter circuits, with different responses to changing frequency
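A minimal sketch of a first-order low-pass filter in discrete time, assuming an exponential-smoothing implementation of the RC circuit and illustrative numbers (10 kHz sample rate, 50 Hz cutoff):

```python
import math

def rc_lowpass(x, dt, cutoff_hz):
    """First-order RC low-pass filter, simulated with the exponential-
    smoothing recurrence y[n] = y[n-1] + alpha*(x[n] - y[n-1])."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)   # time constant R*C for this cutoff
    alpha = dt / (rc + dt)
    y, out = 0.0, []
    for sample in x:
        y += alpha * (sample - y)
        out.append(y)
    return out

# A 5 Hz tone passes a 50 Hz filter almost unchanged; a 500 Hz tone is
# strongly attenuated (assumed 10 kHz sample rate, 0.2 s of signal).
dt = 1e-4
t = [n * dt for n in range(2000)]
low = rc_lowpass([math.sin(2 * math.pi * 5 * u) for u in t], dt, 50.0)
high = rc_lowpass([math.sin(2 * math.pi * 500 * u) for u in t], dt, 50.0)
amp = lambda v: max(abs(s) for s in v[1000:])   # steady-state amplitude
print(amp(low) > 0.9, amp(high) < 0.2)   # → True True
```

This first-order response rolls off gently; the sharper filters discussed later in the document (Butterworth, Bessel) trade component count for steeper transitions.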
29.
Heaviside step function
–
The Heaviside step function, or the unit step function, usually denoted by H or θ, is a discontinuous function whose value is zero for negative argument and one for positive argument. It is an example of the general class of step functions. It is named after Oliver Heaviside, who developed the operational calculus as a tool in the analysis of telegraphic communications and represented the function as 1. The Heaviside function is the integral of the Dirac delta function; this is sometimes written as H(x) = ∫ from −∞ to x of δ(s) ds, although this expansion may not hold (or even make sense) for x = 0. In this context, the Heaviside function is the cumulative distribution function of a random variable which is almost surely 0. In operational calculus, useful answers seldom depend on which value is used for H(0); however, the choice may have some important consequences in functional analysis and game theory, where more general forms of continuity are considered. Some common choices can be seen below. Unlike the continuous case, the definition of H[0] is significant for the discrete-time unit step. The discrete-time unit impulse is the first difference of the discrete-time step, δ[n] = H[n] − H[n − 1], and the step function is the cumulative summation of the Kronecker delta: H[n] = Σ from k = −∞ to n of δ[k], where δ[k] = δ_k,0 is the discrete unit impulse function. If we take H(0) = 1/2, equality holds in the limit for the logistic approximation H(x) = lim as k → ∞ of (1/2)(1 + tanh kx) = lim as k → ∞ of 1/(1 + e^(−2kx)). There are many other smooth, analytic approximations to the step function. These limits hold pointwise and in the sense of distributions; in general, however, pointwise convergence need not imply distributional convergence, and vice versa distributional convergence need not imply pointwise convergence. Such approximations can be taken as cumulative distribution functions of common probability distributions: the logistic, Cauchy and normal distributions. Since H is usually used in integration, and the value of a function at a single point does not affect its integral, it rarely matters what particular value is chosen of H(0).
Indeed when H is considered as a distribution or an element of L∞ it does not even make sense to talk of a value at zero; if using some analytic approximation, then whatever happens to be the relevant limit at zero is often used. There exist various reasons for choosing a particular value. H(0) = 1/2 is often used since the graph then has rotational symmetry; put another way, H − 1/2 is then an odd function. In this case the relation with the sign function holds for all x: H(x) = 1/2 + 1/2 sgn(x). H(0) = 1 is used when H needs to be right-continuous; for instance cumulative distribution functions are usually taken to be right continuous, as are functions integrated against in Lebesgue–Stieltjes integration. In this case H is the indicator function of a closed semi-infinite interval [0, ∞). The corresponding probability distribution is the degenerate distribution. H(0) = 0 is used when H needs to be left-continuous; in this case H is the indicator function of an open semi-infinite interval (0, ∞)
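The conventions for H(0), and a smooth logistic approximation, can be sketched as follows (the function names are my own, hypothetical ones):

```python
import math

def heaviside(x, h0=0.5):
    """Unit step with a selectable convention for the value at zero."""
    return 0.0 if x < 0 else 1.0 if x > 0 else h0

def logistic_step(x, k):
    """Smooth approximation: H(x) ~= 1/(1 + exp(-2*k*x)); larger k, sharper step."""
    return 1.0 / (1.0 + math.exp(-2.0 * k * x))

print(heaviside(-2.0), heaviside(2.0), heaviside(0.0))   # → 0.0 1.0 0.5
# With H(0) = 1/2, the identity H(x) = 1/2 + 1/2*sgn(x) holds for nonzero x too.
for x in (-3.0, 3.0):
    assert heaviside(x) == 0.5 + 0.5 * math.copysign(1.0, x)
# Right-continuous variant used for cumulative distribution functions:
print(heaviside(0.0, h0=1.0))   # → 1.0
```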
30.
Critically damped
–
If a frictional force proportional to the velocity is also present, the harmonic oscillator is described as a damped oscillator. Depending on the friction coefficient, the system can oscillate with a frequency lower than in the non-damped case and an amplitude decreasing with time (underdamped oscillator), or decay to the equilibrium position, without oscillations (overdamped oscillator). The boundary solution between an underdamped oscillator and an overdamped oscillator occurs at a particular value of the friction coefficient and is called critically damped. If an external time-dependent force is present, the oscillator is described as a driven oscillator. Mechanical examples include pendulums, masses connected to springs, and acoustical systems. Other analogous systems include electrical harmonic oscillators such as RLC circuits. The harmonic oscillator model is important in physics, because any mass subject to a force in stable equilibrium acts as a harmonic oscillator for small vibrations. Harmonic oscillators occur widely in nature and are exploited in many devices, such as clocks. They are the source of virtually all sinusoidal vibrations and waves. A simple harmonic oscillator is an oscillator that is neither driven nor damped. It consists of a mass m, which experiences a single force, F, which pulls the mass in the direction of the point x = 0 and depends only on the mass's position x. Balance of forces for the system is F = ma = m d²x/dt² = m ẍ = −kx. Solving this differential equation, we find that the motion is described by the function x(t) = A cos(ωt − φ), with ω = √(k/m). The motion is periodic, repeating itself in a sinusoidal fashion with constant amplitude, A. The position at a given time t also depends on the phase, φ. The period and frequency are determined by the size of the mass m and the force constant k. The velocity and acceleration of a simple harmonic oscillator oscillate with the same frequency as the position but with shifted phases: the velocity is maximal for zero displacement, while the acceleration is in the opposite direction to the displacement.
The potential energy stored in a simple harmonic oscillator at position x is U = (1/2)kx². In real oscillators, friction, or damping, slows the motion of the system; due to frictional force, the velocity decreases in proportion to the acting frictional force. While simple harmonic motion oscillates with only the restoring force acting on the system, damped harmonic motion experiences friction as well
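A rough numerical sketch of the damping regimes, integrating the damped-oscillator equation with a simple semi-implicit Euler step (the step size and damping ratios are illustrative choices, not from the text):

```python
def damped_response(zeta, omega0=1.0, t_end=20.0, dt=0.001):
    """Integrate x'' + 2*zeta*omega0*x' + omega0**2*x = 0 from x=1, v=0
    with a semi-implicit Euler step (step size chosen for illustration)."""
    x, v, xs = 1.0, 0.0, []
    for _ in range(int(t_end / dt)):
        a = -2.0 * zeta * omega0 * v - omega0 ** 2 * x
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

under = damped_response(0.1)     # zeta < 1: oscillates with decaying amplitude
critical = damped_response(1.0)  # zeta = 1: fastest return without oscillating
print(min(under) < -0.5)         # → True (swings well past equilibrium)
print(min(critical) > -1e-3)     # → True (no swing below equilibrium)
```

Here zeta is the dimensionless damping ratio; zeta > 1 would give the overdamped case, which also returns without oscillating but more slowly than critical damping.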
31.
Negative feedback
–
Whereas positive feedback tends to lead to instability via exponential growth, oscillation or chaotic behavior, negative feedback generally promotes stability. Negative feedback tends to promote a settling to equilibrium, and reduces the effects of perturbations. Negative feedback loops in which just the right amount of correction is applied with optimum timing can be very stable, accurate, and responsive. General negative feedback systems are studied in control systems engineering. In the invisible hand of the market metaphor of economic theory, reactions to price movements provide a feedback mechanism to match supply and demand. In centrifugal governors, negative feedback is used to maintain a near-constant speed of an engine. In a steering engine, power assistance is applied to the rudder with a feedback loop, to maintain the direction set by the steersman. In servomechanisms, the speed or position of an output, as determined by a sensor, is compared to a set value, and any error is reduced by negative feedback to the input. In analog computing, feedback around operational amplifiers is used to generate mathematical functions such as addition, subtraction, integration, differentiation, and logarithm. In a phase-locked loop, feedback is used to maintain a generated alternating waveform in a constant phase relationship to a reference signal; in many implementations the generated waveform is the output, but when used as a demodulator in an FM radio receiver, the error feedback voltage serves as the demodulated output signal. If there is a frequency divider between the generated waveform and the phase comparator, the device acts as a frequency multiplier. In organisms, feedback enables various measures to be maintained within a range by homeostatic processes. Negative feedback as a technique may be seen in the refinements of the water clock introduced by Ktesibios of Alexandria in the 3rd century BCE.
Self-regulating mechanisms have existed since antiquity, and were used to maintain a constant level in the reservoirs of water clocks as early as 200 BCE; negative feedback was implemented in the 17th century. The term "feedback" was well established by the 1920s, in reference to a means of boosting the gain of an electronic amplifier. Friis and Jensen described this action as "positive feedback" and made passing mention of a contrasting "negative feed-back action" in 1924. Karl Küpfmüller published papers on an automatic gain control system. Nyquist and Bode built on Black's work to develop a theory of amplifier stability. Early researchers in the area of cybernetics subsequently generalized the idea of negative feedback to cover any goal-seeking or purposeful behavior, arguing that all purposeful behavior may be considered to require negative feed-back. For understanding the general principles of dynamic systems, however, the concept of feedback is inadequate in itself; what is important is that complex systems, richly cross-connected internally, have complex behaviors. To reduce confusion, later authors have suggested alternative terms such as degenerative, self-correcting, balancing, or discrepancy-reducing in place of "negative". In many physical and biological systems, qualitatively different influences can oppose each other; for example, in biochemistry, one set of chemicals drives the system in a given direction, whereas another set of chemicals drives it in an opposing direction. If one or both of these influences are non-linear, one or more equilibrium points result. In biology, this process is often referred to as homeostasis, whereas in mechanics the more common term is equilibrium
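The corrective action of a servomechanism-style negative feedback loop can be sketched with a minimal proportional controller (all numbers are illustrative):

```python
def regulate(setpoint, gain, steps=200):
    """A minimal negative-feedback loop: the error (setpoint - output)
    is measured and fed back to drive the output toward the set value."""
    output = 0.0
    for _ in range(steps):
        error = setpoint - output        # sensor: deviation from the set value
        output += gain * error           # apply a proportional correction
    return output

print(round(regulate(100.0, 0.1), 3))   # → 100.0 (settles at the set value)

# With too much correction (gain > 2) each step overshoots harder than the
# last, illustrating that the amount and timing of correction set stability.
print(abs(regulate(100.0, 2.5, steps=20)) > 1e3)   # → True
```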
32.
Butterworth filter
–
The Butterworth filter is a type of signal processing filter designed to have as flat a frequency response as possible in the passband. It is also referred to as a maximally flat magnitude filter. It was first described in 1930 by the British engineer and physicist Stephen Butterworth in his paper entitled On the Theory of Filter Amplifiers. Butterworth had a reputation for solving mathematical problems. At the time, filter design required a considerable amount of designer experience due to limitations of the theory then in use. The filter was not in common use for over 30 years after its publication. Butterworth stated that an ideal electrical filter should not only reject the unwanted frequencies but should also have uniform sensitivity for the wanted frequencies. Such an ideal filter cannot be achieved, but Butterworth showed that successively closer approximations were obtained with increasing numbers of filter elements of the right values. At the time, filters generated substantial ripple in the passband. At ω = 1, the amplitude response of this type of filter in the passband is 1/√2 ≈ 0.707, which is half power or −3 dB. Butterworth only dealt with filters with an even number of poles in his paper. He may have been unaware that such filters could be designed with an odd number of poles. He built his higher order filters from 2-pole filters separated by vacuum tube amplifiers. His plot of the frequency response of 2, 4, 6, 8, and 10 pole filters is shown as A, B, C, D, and E in his original graph. In 1930, low-loss core materials such as molypermalloy had not been discovered and air-cored audio inductors were rather lossy. Butterworth discovered that it was possible to adjust the component values of the filter to compensate for the winding resistance of the inductors. He used coil forms of 1.25″ diameter and 3″ length with plug-in terminals; associated capacitors and resistors were contained inside the wound coil form. The coil formed part of the load resistor.
Two poles were used per vacuum tube and RC coupling was used to the grid of the following tube. Butterworth also showed that his basic low-pass filter could be modified to give low-pass, high-pass, band-pass and band-stop functionality. The frequency response of the Butterworth filter is maximally flat in the passband; when viewed on a logarithmic Bode plot, the response slopes off linearly towards negative infinity. A first-order filter's response rolls off at −6 dB per octave, a second-order filter decreases at −12 dB per octave, a third-order at −18 dB, and so on. Butterworth filters have a monotonically changing magnitude function with ω, unlike other filter types that have non-monotonic ripple in the passband and/or the stopband
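A short sketch of the flat passband and order-dependent roll-off, assuming the standard Butterworth magnitude formula |H(jω)| = 1/√(1 + ω^(2n)) with the cutoff normalized to ω = 1:

```python
import math

def butterworth_gain(omega, n):
    """n-th order Butterworth low-pass magnitude, |H(jw)| = 1/sqrt(1 + w**(2n)),
    with the cutoff normalized to w = 1."""
    return 1.0 / math.sqrt(1.0 + omega ** (2 * n))

# Half power (-3 dB) at the cutoff, for every order:
print(round(butterworth_gain(1.0, 4), 4))   # → 0.7071

# The roll-off above cutoff approaches -6n dB per octave as frequency grows.
for n in (1, 2, 3):
    octave_db = 20 * math.log10(butterworth_gain(4.0, n) / butterworth_gain(2.0, n))
    print(n, round(octave_db, 1))
```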
33.
Bessel filter
–
In electronics and signal processing, a Bessel filter is a type of analog linear filter with a maximally flat group/phase delay, which preserves the wave shape of filtered signals in the passband. Bessel filters are often used in audio crossover systems. The filter's name is a reference to German mathematician Friedrich Bessel; the filters are also called Bessel–Thomson filters in recognition of W. E. Thomson, who worked out how to apply Bessel functions to filter design in 1949. The Bessel filter is similar to the Gaussian filter. While the time-domain step response of the Gaussian filter has zero overshoot, the Bessel filter has a small amount of overshoot. A Bessel low-pass filter is characterized by a transfer function built from a reverse Bessel polynomial θn(s), and the filter has a low-frequency group delay of 1/ω0. Since θn(0) is indeterminate by the definition of reverse Bessel polynomials, it is defined there by a limit. The transfer function for a third-order Bessel low-pass filter, normalized to have unit group delay, is H(s) = 15/(s³ + 6s² + 15s + 15). The roots of the denominator polynomial, the filter's poles, include a real pole at s = −2.3222. The numerator 15 is chosen to give a gain of 1 at DC. The gain is then G(ω) = |H(jω)| = 15/√(ω⁶ + 6ω⁴ + 45ω² + 225). The phase is φ(ω) = arg H(jω) = −arctan((15ω − ω³)/(15 − 6ω²)). The group delay is D(ω) = −dφ/dω = (6ω⁴ + 45ω² + 225)/(ω⁶ + 6ω⁴ + 45ω² + 225). The Taylor series expansion of the group delay is D(ω) = 1 − ω⁶/225 + ω⁸/1125 + ⋯. Note that the two terms in ω² and ω⁴ are zero, resulting in a very flat group delay at ω = 0. This is the greatest number of terms that can be set to zero, since there are a total of four coefficients in the third-order Bessel polynomial, requiring four equations in order to be defined. One equation specifies that the gain be unity at ω = 0, leaving the remaining freedom to zero out terms of the series expansion. The digital equivalent is the Thiran filter, also an all-pole low-pass filter with maximally flat group delay, which can also be transformed into an allpass filter, to implement fractional delays.
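The third-order gain and group-delay formulas above can be evaluated directly; a minimal sketch:

```python
import math

def bessel3_gain(w):
    """Gain of the unit-delay third-order Bessel low-pass:
    G(w) = 15 / sqrt(w**6 + 6*w**4 + 45*w**2 + 225)."""
    return 15.0 / math.sqrt(w ** 6 + 6 * w ** 4 + 45 * w ** 2 + 225)

def bessel3_group_delay(w):
    """D(w) = (6*w**4 + 45*w**2 + 225) / (w**6 + 6*w**4 + 45*w**2 + 225)."""
    num = 6 * w ** 4 + 45 * w ** 2 + 225
    return num / (w ** 6 + num)

print(bessel3_gain(0.0))           # → 1.0 (unity gain at DC)
print(bessel3_group_delay(0.0))    # → 1.0 (unit group delay at DC)
# The delay stays very flat near w = 0, consistent with D ~ 1 - w**6/225:
print(round(bessel3_group_delay(0.5), 4))   # → 0.9999
```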
34.
Oscillation
–
Oscillation is the repetitive variation, typically in time, of some measure about a central value or between two or more different states. The term vibration is used to describe mechanical oscillation. Familiar examples of oscillation include a swinging pendulum and alternating current power. The simplest mechanical oscillating system is a weight attached to a linear spring subject to only weight and tension. Such a system may be approximated on an air table or ice surface. The system is in an equilibrium state when the spring is static. If the system is displaced from the equilibrium, there is a net restoring force on the mass, tending to bring it back to equilibrium. However, in moving the mass back to the equilibrium position, it has acquired momentum which keeps it moving beyond that position. If a constant force such as gravity is added to the system, the point of equilibrium is shifted. The time taken for an oscillation to occur is often referred to as the oscillatory period. All real-world oscillator systems are thermodynamically irreversible; this means there are dissipative processes such as friction or electrical resistance which continually convert some of the energy stored in the oscillator into heat in the environment. Thus, oscillations tend to decay with time unless there is some net source of energy into the system. The simplest description of this decay process can be illustrated by oscillation decay of the harmonic oscillator. In addition, a system may be subject to some external force; in this case the oscillation is said to be driven. Some systems can be excited by energy transfer from the environment. This transfer typically occurs where systems are embedded in some fluid flow; for an aircraft wing in an airflow, for example, at sufficiently large displacements the stiffness of the wing dominates to provide the restoring force that enables an oscillation. The harmonic oscillator and the systems it models have a single degree of freedom. More complicated systems have more degrees of freedom, for example two masses and three springs.
In such cases, the behavior of each variable influences that of the others, and this leads to a coupling of the oscillations of the individual degrees of freedom. For example, two pendulum clocks mounted on a common wall will tend to synchronise. This phenomenon was first observed by Christiaan Huygens in 1665. More special cases are the coupled oscillators where energy alternates between two forms of oscillation
35.
Amplitude
–
The amplitude of a periodic variable is a measure of its change over a single period. There are various definitions of amplitude, which are all functions of the magnitude of the difference between the variable's extreme values. In older texts the phase is sometimes called the amplitude. Peak-to-peak amplitude is the change between peak (highest value) and trough (lowest value). With appropriate circuitry, peak-to-peak amplitudes of electric oscillations can be measured by meters or by viewing the waveform on an oscilloscope. Peak-to-peak is a straightforward measurement on an oscilloscope, the peaks of the waveform being easily identified and measured against the graticule. This remains a common way of specifying amplitude, but sometimes other measures of amplitude are more appropriate. Peak amplitude is used in audio system measurements, telecommunications and other areas where the measurand is a signal that swings above and below a reference value but is not sinusoidal. If the reference is zero, this is the maximum absolute value of the signal; if the reference is a mean value, the peak amplitude is the maximum absolute value of the difference from that reference. Semi-amplitude means half the peak-to-peak amplitude. Some scientists use amplitude or peak amplitude to mean semi-amplitude, that is, half the peak-to-peak amplitude; it is the most widely used measure of orbital wobble in astronomy. Root mean square (RMS) amplitude is the RMS of the AC waveform (with no DC component). For complicated waveforms, especially non-repeating signals like noise, the RMS amplitude is used because it is both unambiguous and has physical significance. For example, the power transmitted by an acoustic or electromagnetic wave or by an electrical signal is proportional to the square of the RMS amplitude. For alternating current electric power, the practice is to specify RMS values of a sinusoidal waveform. One property of root mean square voltages and currents is that they produce the same heating effect as direct current in a given resistance. The peak-to-peak value is used, for example, when choosing rectifiers for power supplies, or when estimating the maximum voltage that insulation must withstand.
Some common voltmeters are calibrated for RMS amplitude, but respond to the average value of a rectified waveform. Many digital voltmeters and all moving coil meters are in this category. The RMS calibration is only correct for a sine wave input, since the ratio between peak, average and RMS values is dependent on waveform; if the wave shape being measured is greatly different from a sine wave, the relationship between RMS and average value changes. True RMS-responding meters were used in radio frequency measurements, where instruments measured the heating effect in a resistor to measure current. The advent of microprocessor-controlled meters capable of calculating RMS by sampling the waveform has made true RMS measurement commonplace
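The different amplitude measures can be sketched numerically; the sine/square comparison below also shows why an average-responding, RMS-calibrated meter misreads a non-sinusoidal wave (the sampled waveforms are illustrative):

```python
import math

def measures(samples):
    """Peak, peak-to-peak, RMS, and rectified-average amplitude of a waveform."""
    peak = max(abs(s) for s in samples)
    peak_to_peak = max(samples) - min(samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    avg_rect = sum(abs(s) for s in samples) / len(samples)
    return peak, peak_to_peak, rms, avg_rect

n = 10000
sine = [math.sin(2 * math.pi * k / n) for k in range(n)]
square = [1.0 if s >= 0 else -1.0 for s in sine]

# The RMS-to-average ratio ("form factor") differs by waveform, which is why
# a meter calibrated for sine-wave RMS misreads a square wave.
_, _, rms_sin, avg_sin = measures(sine)
_, _, rms_sq, avg_sq = measures(square)
print(round(rms_sin / avg_sin, 3))   # → 1.111 (sine: pi / (2*sqrt(2)))
print(round(rms_sq / avg_sq, 3))     # → 1.0   (square wave)
```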
36.
Frequency
–
Frequency is the number of occurrences of a repeating event per unit time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency. The period is the duration of time of one cycle in a repeating event; for example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period, the time interval between beats, is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, and radio waves. For cyclical processes, such as rotation, oscillations, or waves, frequency is defined as a number of cycles per unit time. In physics and engineering disciplines, such as optics, acoustics, and radio, frequency is usually denoted by a Latin letter f or by the Greek letter ν (nu). For a simple harmonic motion, the relation between the frequency and the period T is given by f = 1/T. The SI unit of frequency is the hertz, named after the German physicist Heinrich Hertz; a previous name for this unit was cycles per second. The SI unit for period is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period; short and fast waves, like audio and radio, are usually described by their frequency instead of period. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes: for a wave y = sin(θ(x)) = sin(kx), the wavenumber is k = dθ/dx. In the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength. Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave: f = v/λ. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ.
When waves from a monochromatic source travel from one medium to another, their frequency remains the same; only their wavelength and speed change. Frequency can be measured by counting the number of occurrences of the event within a specific time period and dividing the count by the length of the period; for example, if 71 events occur within 15 seconds the frequency is 71/15 ≈ 4.73 Hz. This method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2Tm), or a fractional error of Δf/f = 1/(2fTm), where Tm is the timing interval and f is the measured frequency. This error decreases with frequency, so it is a problem at low frequencies where the number of counts N is small. An older method of measuring the frequency of rotating or vibrating objects is to use a stroboscope
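A small sketch of frequency measurement by counting, including the gating error (worst case one whole count over the gate interval; the example frequencies are illustrative):

```python
import math

# Frequency by counting events: f = N / T.
print(round(71 / 15, 2))   # → 4.73 (Hz, for 71 events in 15 seconds)

def gating_error(f_true, gate_s):
    """Error from counting only whole cycles in the gate interval: at worst
    one count, i.e. delta_f = 1/gate_s (on average half that)."""
    counts = math.floor(f_true * gate_s)
    return abs(counts / gate_s - f_true)

# A longer timing interval Tm shrinks the error bound 1/Tm, and hence the
# fractional error 1/(f*Tm), matching the formulas in the text.
assert gating_error(4.75, 15.0) <= 1 / 15.0
assert gating_error(4.75, 150.0) <= 1 / 150.0
```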