1.
Electronic filter
–
Electronic filters are circuits which perform signal processing functions, specifically to remove unwanted frequency components from a signal, to enhance wanted ones, or both. Electronic filters can be passive or active, analog or digital, and high-pass, low-pass, band-pass or band-stop; see the article on linear filters for details of their design and analysis. The oldest forms of filter are passive analog linear filters, constructed using only resistors and capacitors or resistors and inductors. These are known as RC and RL single-pole filters respectively. More complex multipole LC filters have also existed for many years, and their operation is well understood. Hybrid filters are also possible, typically involving a combination of analog amplifiers with mechanical resonators or delay lines. Other devices, such as CCD delay lines, have also been used as discrete-time filters. With the availability of digital signal processing, active digital filters have become common. Passive implementations of filters are based on combinations of resistors, inductors and capacitors. These types are known as passive filters because they do not depend upon an external power supply and do not contain active components such as transistors. Inductors block high-frequency signals and conduct low-frequency signals, while capacitors do the reverse; the inductors and capacitors are the reactive elements of the filter. The number of elements determines the order of the filter; in this context, an LC tuned circuit used in a band-pass or band-stop filter is considered a single element even though it consists of two components. At high frequencies, the inductors sometimes consist of single loops or strips of metal, and these inductive or capacitive pieces of metal are called stubs. The simplest passive filters, RC and RL filters, include only one reactive element, except for the hybrid LC filter, which is characterized by inductance and capacitance integrated in one element. An L filter consists of two reactive elements, one in series and one in parallel.
Three-element filters can have a T or π topology, and in either geometry the components can be chosen to be symmetric or not, depending on the required frequency characteristics. The high-pass T filter in the illustration has a very low impedance at high frequencies and a very high impedance at low frequencies. That means that it can be inserted in a transmission line, resulting in the high frequencies being passed and low frequencies being reflected. Likewise, for the illustrated low-pass π filter, the circuit can be connected to a transmission line, transmitting low frequencies and reflecting high frequencies.
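As a concrete illustration of the simplest single-pole case, the cutoff frequency and magnitude response of an RC low-pass filter follow directly from the component values. The sketch below is illustrative only, and the component values in it are hypothetical:

```python
import math

def rc_lowpass_cutoff(R, C):
    """-3 dB cutoff frequency (Hz) of a single-pole RC low-pass filter."""
    return 1.0 / (2.0 * math.pi * R * C)

def rc_lowpass_gain(f, R, C):
    """Magnitude response |H(f)| = 1/sqrt(1 + (f/fc)^2) of the same filter."""
    fc = rc_lowpass_cutoff(R, C)
    return 1.0 / math.sqrt(1.0 + (f / fc) ** 2)

# Hypothetical example: R = 1 kOhm, C = 159 nF gives fc close to 1 kHz.
fc = rc_lowpass_cutoff(1e3, 159e-9)
g_at_cutoff = rc_lowpass_gain(fc, 1e3, 159e-9)
```

At f = fc the gain is 1/√2 (−3 dB), the conventional half-power point.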
2.
Network synthesis filters
–
Network synthesis is a method of designing signal processing filters. It has produced several important classes of filter, including the Butterworth filter, the Chebyshev filter and the elliptic filter. It was originally intended to be applied to the design of passive linear analogue filters, but its results can also be applied to implementations in active filters and digital filters. The essence of the method is to obtain the component values of the filter from a given rational function representing the desired transfer function. The method can be viewed as the inverse problem of network analysis. Network analysis starts with a network and, by applying the various electric circuit theorems, predicts the response of the network; network synthesis, on the other hand, starts with a desired response and its methods produce a network that outputs, or approximates to, that response. Network synthesis was originally intended to produce filters of the kind formerly described as wave filters, that is, filters whose purpose is to pass waves of certain wavelengths while rejecting waves of other wavelengths. Network synthesis starts out with a specification for the transfer function of the filter, H(s), as a function of complex frequency, s. In a digital implementation of a filter, H(s) can be implemented directly. The advantages of the method are best understood by comparing it to the filter design methodology that was used before it, the image method. The image method considers the characteristics of an individual filter section in an infinite chain of identical sections. The filters produced by this method suffer from inaccuracies due to the theoretical termination impedance not generally being equal to the actual termination impedance; with network synthesis filters, the terminations are included in the design from the start. The image method also requires a certain amount of experience on the part of the designer. The designer must first decide how many sections, and of what type, should be used.
This may not be what is required, and there can be a number of iterations. The network synthesis method, on the other hand, starts out with the required function and generates as output the sections needed to build the corresponding filter. In general, the sections of a network synthesis filter are of identical topology. Both methods make use of low-pass prototype filters, followed by frequency transformations and impedance scaling to arrive at the final desired filter. The class of a filter refers to the class of polynomials from which the filter is mathematically derived; the order of the filter is the number of filter elements present in the filter's ladder implementation. Generally speaking, the higher the order of the filter, the steeper the cut-off transition between passband and stopband. Filters are often named after the mathematician or mathematics on which they are based rather than after the discoverer or inventor of the filter. Butterworth filters are described as maximally flat, meaning that the response in the frequency domain is the smoothest possible curve of any class of filter of the equivalent order. The Butterworth class of filter was first described in a 1930 paper by the British engineer Stephen Butterworth, after whom it is named; the filter response is described by Butterworth polynomials, also due to Butterworth.
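Since a network synthesis design starts from a rational transfer function H(s), its magnitude response can be evaluated directly. A minimal sketch, using the standard normalized second-order Butterworth denominator s² + √2·s + 1 as an illustrative example (not a function taken from the text above):

```python
import cmath, math

def h_magnitude(num, den, w):
    """|H(jw)| for H(s) = num(s)/den(s), with num and den given as
    coefficient lists in descending powers of s (Horner evaluation)."""
    s = 1j * w
    def poly(coeffs):
        acc = 0
        for c in coeffs:
            acc = acc * s + c
        return acc
    return abs(poly(num) / poly(den))

# Normalized 2nd-order Butterworth low-pass: H(s) = 1/(s^2 + sqrt(2)*s + 1)
num = [1.0]
den = [1.0, math.sqrt(2.0), 1.0]
g_dc = h_magnitude(num, den, 0.0)   # gain at DC
g_cut = h_magnitude(num, den, 1.0)  # gain at the normalized cutoff
```

At the cutoff ω = 1 the gain falls to 1/√2, as expected for a Butterworth response.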
3.
Butterworth filter
–
The Butterworth filter is a type of signal processing filter designed to have as flat a frequency response as possible in the passband. It is also referred to as a maximally flat magnitude filter. It was first described in 1930 by the British engineer and physicist Stephen Butterworth in his paper entitled "On the Theory of Filter Amplifiers". Butterworth had a reputation for solving difficult mathematical problems. At the time, filter design required a considerable amount of designer experience due to limitations of the theory then in use. The filter was not in common use for over 30 years after its publication. Butterworth stated that "an ideal electrical filter should not only completely reject the unwanted frequencies but should also have uniform sensitivity for the wanted frequencies". Such an ideal filter cannot be achieved, but Butterworth showed that successively closer approximations were obtained with increasing numbers of elements of the right values. At the time, filters generated substantial ripple in the passband. For ω = 1, the amplitude response of this type of filter in the passband is 1/√2 ≈ 0.707, which is half power or −3 dB. Butterworth only dealt with filters with an even number of poles in his paper. He may have been unaware that such filters could be designed with an odd number of poles. He built his higher-order filters from 2-pole filters separated by vacuum tube amplifiers. His plot of the frequency response of 2-, 4-, 6-, 8-, and 10-pole filters is shown as A, B, C, D in his graph. In 1930, low-loss core materials such as molypermalloy had not been discovered and air-cored audio inductors were rather lossy. Butterworth discovered that it was possible to adjust the component values of the filter to compensate for the winding resistance of the inductors. He used coil forms of 1.25″ diameter and 3″ length with plug-in terminals; associated capacitors and resistors were contained inside the wound coil form. The coil formed part of the load resistor.
Two poles were used per vacuum tube, and RC coupling was used to the grid of the following tube. Butterworth also showed that his basic low-pass filter could be modified to give low-pass, high-pass, band-pass and band-stop functionality. The frequency response of the Butterworth filter is maximally flat in the passband; when viewed on a logarithmic Bode plot, the response slopes off linearly towards negative infinity. A first-order filter's response rolls off at −6 dB per octave (−20 dB per decade), a second-order filter's at −12 dB per octave, a third-order's at −18 dB, and so on. Butterworth filters have a monotonically changing magnitude function with ω, unlike other filter types that have non-monotonic ripple in the passband and/or the stopband.
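The maximally flat magnitude and the −6n dB per octave asymptotic roll-off can be checked numerically from the normalized Butterworth gain |H(jω)| = 1/√(1 + ω^(2n)). A small sketch:

```python
import math

def butterworth_gain(w, n):
    """Magnitude response of an nth-order normalized Butterworth low-pass."""
    return 1.0 / math.sqrt(1.0 + w ** (2 * n))

def gain_db(w, n):
    """Gain in decibels at normalized angular frequency w."""
    return 20.0 * math.log10(butterworth_gain(w, n))

n = 3
# Far above cutoff the response drops by about 6n dB per octave:
drop_per_octave = gain_db(256.0, n) - gain_db(512.0, n)
g_cut = butterworth_gain(1.0, n)   # 1/sqrt(2) at the cutoff, for any order
```

For n = 3 the asymptotic drop is 20·log10(2³) ≈ 18.06 dB per octave, matching the text.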
4.
Chebyshev filter
–
Chebyshev filters are analog or digital filters having a steeper roll-off than Butterworth filters, at the price of passband ripple (type I) or stopband ripple (type II). Chebyshev filters have the property that they minimize the error between the idealized and the actual filter characteristic over the range of the filter, but with ripples in the passband. This type of filter is named after Pafnuty Chebyshev because its mathematical characteristics are derived from Chebyshev polynomials. Because of the passband ripple inherent in Chebyshev filters, the ones that have a smoother response in the passband but a more irregular response in the stopband are preferred for some applications. Type I Chebyshev filters are the most common types of Chebyshev filters. The passband exhibits equiripple behavior, with the ripple determined by the ripple factor ε. In the passband, the Chebyshev polynomial alternates between −1 and 1, so the filter gain alternates between maxima at G = 1 and minima at G = 1/√(1 + ε²). The ripple factor ε is thus related to the passband ripple δ expressed in decibels. At the cutoff frequency ω0 the gain again has the value 1/√(1 + ε²) but continues to drop into the stopband as the frequency increases. This behavior is shown in the diagram on the right. The 3 dB frequency ωH is related to ω0 by ωH = ω0 cosh((1/n) cosh⁻¹(1/ε)). The order of a Chebyshev filter is equal to the number of reactive components needed to realize the filter using analog electronics. An even steeper roll-off can be obtained if ripple is allowed in the stopband; however, this results in less suppression in the passband. The result is called an elliptic filter, also known as a Cauer filter. For simplicity, it is assumed that the cutoff frequency is equal to unity. The poles of the gain function of the Chebyshev filter are the zeroes of the denominator of the gain function. Using the complex frequency s, these occur when 1 + ε² Tn²(−js) = 0. Defining −js = cos(θ) and using the trigonometric definition of the Chebyshev polynomials yields 1 + ε² Tn²(cos(θ)) = 1 + ε² cos²(nθ) = 0.
Solving for θ gives θ = (1/n) arccos(±j/ε) + mπ/n, where the multiple values of the arc cosine function are made explicit using the integer index m. The poles of the Chebyshev gain function are then s_pm = j cos(θ) = j cos((1/n) arccos(±j/ε) + mπ/n). Using the properties of the trigonometric and hyperbolic functions, this may be written in explicitly complex form: s_pm = −sinh((1/n) arsinh(1/ε)) sin(θm) + j cosh((1/n) arsinh(1/ε)) cos(θm), where m = 1, 2, …, n and θm = (π/2)·(2m − 1)/n. The above expression yields the poles of the gain G.
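The closed-form pole locations can be computed directly; the poles of a type I Chebyshev filter lie on an ellipse with semi-axes sinh(a) and cosh(a), where a = (1/n)·arsinh(1/ε). A sketch under those formulas, with an arbitrary illustrative order and ripple factor:

```python
import math

def chebyshev1_poles(n, eps):
    """Left-half-plane poles of a normalized type I Chebyshev low-pass filter:
    s_pm = -sinh(a)*sin(theta_m) + j*cosh(a)*cos(theta_m),
    a = asinh(1/eps)/n, theta_m = pi*(2m-1)/(2n)."""
    a = math.asinh(1.0 / eps) / n
    poles = []
    for m in range(1, n + 1):
        theta = math.pi * (2 * m - 1) / (2 * n)
        poles.append(complex(-math.sinh(a) * math.sin(theta),
                             math.cosh(a) * math.cos(theta)))
    return poles

# Illustrative values: 4th order, ripple factor 0.5
poles = chebyshev1_poles(4, 0.5)
a0 = math.asinh(2.0) / 4
# All poles should satisfy the ellipse equation (Re/sinh a)^2 + (Im/cosh a)^2 = 1
ellipse_err = max(abs((p.real / math.sinh(a0)) ** 2 +
                      (p.imag / math.cosh(a0)) ** 2 - 1.0) for p in poles)
```

The negative real parts confirm that only stable (left-half-plane) poles are kept.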
5.
Elliptic filter
–
An elliptic filter is a signal processing filter with equalized (equiripple) behavior in both the passband and the stopband. Alternatively, one may give up the ability to adjust independently the passband and stopband ripple. As the ripple in the stopband approaches zero, the filter becomes a type I Chebyshev filter; as the ripple in the passband approaches zero, the filter becomes a type II Chebyshev filter; and finally, as both ripple values approach zero, the filter becomes a Butterworth filter. In the passband, the elliptic rational function varies between zero and unity. The gain of the passband therefore will vary between 1 and 1/√(1 + ε²). The poles of the gain of an elliptic filter may be derived in a manner very similar to the derivation of the poles of the gain of a type I Chebyshev filter. For simplicity, assume that the cutoff frequency is equal to unity. The poles of the gain of the elliptic filter will be the zeroes of the denominator of the gain. Solving for w gives w = (K/(nKn)) cd⁻¹(j/ε, 1/Ln) + mK/n, where the multiple values of the inverse cd function are made explicit using the integer index m. ζn is expressible for all n in terms of Jacobi elliptic functions, or algebraically for some orders, especially orders 1, 2, and 3. The nesting property of the elliptic rational functions can be used to build up higher-order expressions for ζn: ζm·n(ξ, ε) = ζm(Ln, ζn(ξ, ε)), where Lm = Rm(ξ, ξ). Elliptic filters are specified by requiring a particular value for the passband ripple, stopband ripple, and the sharpness of the cutoff. This will generally specify a minimum value of the filter order which must be used. Another design consideration is the sensitivity of the gain function to the values of the electronic components used to build the filter. This sensitivity is inversely proportional to the quality factor (Q-factor) of the poles of the transfer function of the filter. The Q-factor of a pole is defined as Q = −|s_pm| / (2 Re(s_pm)) = −1 / (2 cos(arg(s_pm))) and is a measure of the influence of the pole on the gain function. For such filters, as the order increases, the ripple in both bands will decrease and the rate of cutoff will increase.
An image of the absolute value of the gain will look very much like the image in the previous section. The poles will not be evenly spaced and there will be zeroes on the ω axis, unlike the Butterworth filter, whose poles are arranged in an evenly spaced circle with no zeroes.
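The Q-factor definition quoted above, Q = −|s| / (2 Re s), is easy to evaluate for any pole. A short sketch with an illustrative (hypothetical) pole value:

```python
import math, cmath

def pole_q(s):
    """Q-factor of a complex pole s (Re(s) < 0 for a stable filter):
    Q = -|s| / (2*Re(s)) = -1 / (2*cos(arg(s)))."""
    return -abs(s) / (2.0 * s.real)

# Hypothetical pole on the unit circle at 120 degrees:
s = complex(-0.5, math.sqrt(3) / 2)
q = pole_q(s)                                   # 1.0 for this pole
alt = -1.0 / (2.0 * math.cos(cmath.phase(s)))   # equivalent angular form
```

Poles closer to the jω axis (small |Re s|) give large Q, i.e. higher component sensitivity, consistent with the discussion above.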
6.
Bessel filter
–
In electronics and signal processing, a Bessel filter is a type of analog linear filter with a maximally flat group/phase delay, which preserves the wave shape of filtered signals in the passband. Bessel filters are often used in audio crossover systems. The filter's name is a reference to the German mathematician Friedrich Bessel, whose mathematical work the filter is based on. The filters are also called Bessel–Thomson filters in recognition of W. E. Thomson, who worked out how to apply Bessel functions to filter design in 1949. The Bessel filter is similar to the Gaussian filter. While the time-domain step response of the Gaussian filter has zero overshoot, the Bessel filter has a small amount of overshoot. A Bessel low-pass filter is characterized by a transfer function built from a reverse Bessel polynomial θn(s), and has a group delay of 1/ω0. For example, the transfer function for a third-order Bessel low-pass filter, normalized to have unit group delay, is H(s) = 15 / (s³ + 6s² + 15s + 15). The roots of the denominator polynomial, the filter's poles, include a real pole at s = −2.3222. The numerator 15 is chosen to give a gain of 1 at DC. The gain is then G(ω) = |H(jω)| = 15 / √(ω⁶ + 6ω⁴ + 45ω² + 225). The phase is φ(ω) = arg H(jω) = −arctan((15ω − ω³)/(15 − 6ω²)), and the group delay is D(ω) = −dφ/dω = (6ω⁴ + 45ω² + 225) / (ω⁶ + 6ω⁴ + 45ω² + 225). The Taylor series expansion of the delay is D(ω) = 1 − ω⁶/225 + ω⁸/1125 + ⋯. Note that the two terms in ω² and ω⁴ are zero, resulting in a very flat group delay at ω = 0. This is the greatest number of terms that can be set to zero, since there are a total of four coefficients in the third-order Bessel polynomial, requiring four equations in order for them to be defined. One equation specifies that the gain be unity at ω = 0. The digital equivalent is the Thiran filter, also an all-pole low-pass filter with maximally flat group delay, which can also be transformed into an allpass filter, to implement fractional delays.
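The third-order expressions above (the gain and group delay of H(s) = 15/(s³ + 6s² + 15s + 15)) can be checked numerically. The sketch below evaluates the closed forms and confirms the very flat delay near ω = 0:

```python
import math

def bessel3_gain(w):
    """Gain of the unit-delay third-order Bessel low-pass filter:
    G(w) = 15 / sqrt(w^6 + 6w^4 + 45w^2 + 225)."""
    return 15.0 / math.sqrt(w**6 + 6*w**4 + 45*w**2 + 225)

def bessel3_delay(w):
    """Group delay D(w) = (6w^4 + 45w^2 + 225)/(w^6 + 6w^4 + 45w^2 + 225)."""
    return (6*w**4 + 45*w**2 + 225) / (w**6 + 6*w**4 + 45*w**2 + 225)

d0 = bessel3_delay(0.0)       # unit group delay at DC
g0 = bessel3_gain(0.0)        # unit gain at DC
d_half = bessel3_delay(0.5)
# Leading terms of the Taylor series quoted in the text:
taylor = 1 - 0.5**6 / 225 + 0.5**8 / 1125
```

Even at half the normalizing frequency, the exact delay agrees with the truncated series to well under one part per million, illustrating the "maximally flat delay" property.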
7.
Gaussian filter
–
In electronics and signal processing, a Gaussian filter is a filter whose impulse response is a Gaussian function. Gaussian filters have the property of having no overshoot to a step function input while minimizing the rise and fall time; this behavior is closely connected to the fact that the Gaussian filter has the minimum possible group delay. It is considered the ideal time domain filter, just as the sinc is the ideal frequency domain filter. These properties are important in areas such as oscilloscopes and digital telecommunication systems. The Gaussian function is defined for x ∈ (−∞, ∞) and would therefore require an infinite window length. However, since it decays rapidly, it is often reasonable to truncate the filter window and implement the filter directly for narrow windows. In other cases, the truncation may introduce significant errors; better results can be achieved by instead using a different window function, see scale space implementation for details. The filter function is said to be the kernel of an integral transform. Most commonly, the discrete equivalent is the sampled Gaussian kernel that is produced by sampling points from the continuous Gaussian. An alternate method is to use the discrete Gaussian kernel, which has superior characteristics for some purposes; unlike the sampled Gaussian kernel, the discrete Gaussian kernel is the solution to the discrete diffusion equation. Since the Fourier transform of the Gaussian function yields a Gaussian function, the filter can also be applied by multiplication in the frequency domain. This is the standard procedure for applying an arbitrary finite impulse response filter, with the only difference that the Fourier transform of the filter window is explicitly known. Due to the central limit theorem, the Gaussian can be approximated by several runs of a very simple filter such as the moving average. In the discrete case, the standard deviations in the time and frequency domains are related by σ · σf = N/(2π), where N is the number of samples. Borrowing the terms from statistics, the standard deviation of a filter can be interpreted as a measure of its size.
The cut-off frequency of a Gaussian filter might be defined by the standard deviation in the frequency domain, yielding fc = σf = 1/(2πσ). If σ is measured in samples, the cut-off frequency (in physical units) can be calculated with fc = Fs/(2πσ), where Fs is the sampling rate. The response value of the Gaussian filter at this cut-off frequency equals exp(−0.5) ≈ 0.607. For c = √2 this constant equals approximately 0.8326, and these values are quite close to 1. A simple moving average corresponds to a uniform probability distribution, and thus a filter of width n has standard deviation √((n² − 1)/12). Thus the application of m successive moving averages with sizes n1, …, nm yields a standard deviation of σ = √((n1² + ⋯ + nm² − m)/12). A Gaussian kernel requires 6σ − 1 values; e.g. for a σ of 3, it needs a kernel of length 17. A running mean filter of 5 points will have a sigma of √2.
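The kernel-length rule of thumb (6σ − 1 samples) and the moving-average standard deviation formula can be sketched as follows; the truncation radius chosen here is an assumption made to match the 6σ − 1 rule quoted above, with σ taken to be an integer:

```python
import math

def sampled_gaussian_kernel(sigma):
    """Sampled, normalized Gaussian kernel truncated to 6*sigma - 1 points
    (sigma assumed to be a positive integer here)."""
    half = 3 * sigma - 1
    vals = [math.exp(-(x * x) / (2.0 * sigma * sigma))
            for x in range(-half, half + 1)]
    total = sum(vals)
    return [v / total for v in vals]

def moving_average_sigma(*sizes):
    """Std deviation of m successive moving averages of sizes n1..nm:
    sigma = sqrt((n1^2 + ... + nm^2 - m)/12)."""
    m = len(sizes)
    return math.sqrt((sum(n * n for n in sizes) - m) / 12.0)

k = sampled_gaussian_kernel(3)   # 17-point kernel for sigma = 3
s5 = moving_average_sigma(5)     # sqrt(2) for a single 5-point running mean
```

Normalizing the truncated kernel to unit sum keeps the DC gain of the filter at exactly 1.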
8.
Optimum "L" filter
–
The Optimum L filter was proposed by Athanasios Papoulis in 1958. It has the maximum roll-off rate for a given filter order while maintaining a monotonic frequency response. The filter design is based on Legendre polynomials, which is the reason for its alternate name, the Legendre–Papoulis filter.
9.
Composite image filter
–
A composite image filter is an electronic filter consisting of multiple image filter sections of two or more different types. The image method of filter design determines the properties of filter sections by calculating the properties they have in an infinite chain of such sections. In this, the analysis parallels transmission line theory, on which it is based. Filters designed by this method are called image parameter filters, or just image filters. An important parameter of image filters is their image impedance, the impedance of an infinite chain of identical sections. The basic sections are arranged into a ladder network of several sections. In its simplest form, the filter can consist entirely of identical sections. However, it is more usual to use a composite filter of two or three different types of section, to improve different parameters best addressed by a particular type. The parameters most frequently considered are stopband rejection, steepness of the filter skirt, and impedance matching to the filter terminations. Image filters are linear filters and are invariably also passive in implementation. The image method of designing filters originated at AT&T, who were interested in developing filtering that could be used with the multiplexing of many telephone channels on to a single cable. The researchers involved in this work and their contributions are briefly listed below. John Carson invented single sideband modulation for the purpose of multiplexing telephone channels, and it was the need to recover these signals that gave rise to the need for advanced filtering techniques. He also pioneered the use of the operational calculus to analyse these signals. George Campbell worked on filtering from 1910 onwards and invented the constant k filter. This can be seen as a continuation of his work on loading coils on transmission lines, a concept invented by Oliver Heaviside. Heaviside, incidentally, also invented the operational calculus used by Carson. Otto Zobel provided a theoretical basis for Campbell's filters.
In 1920 Zobel invented the m-derived filter, and he also published composite designs incorporating both constant k and m-derived sections. The image analysis starts with a calculation of the input and output impedances of an infinite chain of identical sections; this can be shown to be equivalent to the performance of a single section terminated in its image impedances. The image method, therefore, relies on each section being terminated with the correct image impedance. This is easy enough to do with the interior sections of a multiple-section filter. However, the end sections are a problem: they will usually be terminated with fixed resistances that the filter cannot match perfectly except at one specific frequency.
10.
Constant k filter
–
Constant k filters, also k-type filters, are a type of electronic filter designed using the image method. They are the original and simplest filters produced by this methodology. Historically, they are the first filters that could approach the ideal filter frequency response to within any prescribed limit with the addition of a sufficient number of sections. However, they are rarely considered for a modern design, the principles behind them having been superseded by other methodologies which are more accurate in their prediction of filter response. Constant k filters were invented by George Campbell. He published his work in 1922, but had clearly invented the filters some time before, as his colleague at AT&T Co, Otto Zobel, was already making improvements to the design at this time. Campbell's filters were far superior to the simpler single-element circuits that had been used previously. Campbell called his filters electric wave filters, but this term later came to mean any filter that passes waves of some frequencies but not others. Many new forms of wave filter were subsequently invented; an early variation was the m-derived filter by Zobel, who coined the term constant k for the Campbell filter in order to distinguish the two. It was only necessary to add more filter sections until the desired response was obtained. The filters were designed by Campbell for the purpose of separating multiplexed telephone channels on transmission lines, and the design techniques he used have largely been superseded. However, the ladder topology used by Campbell with the constant k is still in use today with implementations of modern filter designs such as the Tchebyscheff filter. Campbell gave constant k designs for low-pass, high-pass and band-pass filters; band-stop and multiple-band filters are also possible. Some of the impedance terms and section terms used in this article are pictured in the diagram below. Image theory defines quantities in terms of an infinite cascade of two-port sections, and in the case of the filters being discussed, an infinite ladder network of L-sections.
Here L should not be confused with the inductance L; in electronic filter topology, L refers to the shape of the filter section, which resembles an inverted letter L. The sections of the hypothetical infinite filter are made of series elements having impedance 2Z and shunt elements with admittance 2Y. The factor of two is introduced for mathematical convenience, since it is usual to work in terms of half-sections, where it disappears. The image impedance of the input and output port of a section will generally not be the same. However, a mid-series section (one cut at the midpoint of a series element) will have the same image impedance on both ports due to symmetry. This image impedance is designated ZiT due to the T topology of a mid-series section; likewise, the image impedance of a mid-shunt section is designated ZiΠ due to the Π topology. Half of such a T or Π section is called a half-section; there are thus two variant ways of using a half-section.
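For the constant-k low-pass T-section, the standard design equations give the element values from the nominal impedance R0 = √(L/C) and the cutoff ωc = 2/√(LC), and the mid-series image impedance inside the passband is ZiT = R0·√(1 − (ω/ωc)²). A sketch of those relations, with hypothetical telephone-band values:

```python
import math

def constant_k_lowpass(r0, fc):
    """Element values of a constant-k low-pass T-section with nominal
    impedance r0 = sqrt(L/C) and cutoff angular frequency wc = 2/sqrt(L*C)."""
    wc = 2.0 * math.pi * fc
    L = 2.0 * r0 / wc       # total series inductance (split as L/2 + L/2)
    C = 2.0 / (r0 * wc)     # shunt capacitance
    return L, C

def image_impedance_t(w, r0, fc):
    """Mid-series image impedance ZiT inside the passband (w < wc)."""
    wc = 2.0 * math.pi * fc
    return r0 * math.sqrt(1.0 - (w / wc) ** 2)

# Hypothetical values: 600-ohm line, 3.4 kHz cutoff.
L, C = constant_k_lowpass(600.0, 3400.0)
zi_dc = image_impedance_t(0.0, 600.0, 3400.0)
```

Note how ZiT equals R0 only at DC and falls to zero at cutoff; this frequency dependence is exactly the termination-matching problem the composite designs address.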
11.
M-derived filter
–
Parts of this article or section rely on the reader's knowledge of the complex impedance representation of capacitors and inductors and on knowledge of the frequency domain representation of signals. m-derived filters, or m-type filters, are a type of electronic filter designed using the image method. They were invented by Otto Zobel in the early 1920s. This filter type was intended for use with telephone multiplexing and was an improvement on the existing constant k type filter. The main problem being addressed was the need to achieve a better match of the filter into the terminating impedances. In general, all filters designed by the image method fail to give an exact match. The m-type filter section has a further advantage in that there is a rapid transition from the cut-off frequency of the passband to a pole of attenuation just inside the stopband. Despite these advantages, there is a drawback with m-type filters: at frequencies past the pole of attenuation, the response starts to rise again. Zobel's invention pre-dated George Campbell's publication of his constant k-type design in 1922, on which the m-derived filter is based. Zobel published the image analysis theory of m-type filters in 1923. Once popular, m-type filters, and image parameter designed filters in general, are now rarely designed, having been superseded by more advanced network synthesis methods. The building block of m-derived filters, as with all image filters, is the L network, called a half-section and composed of a series impedance Z and a shunt admittance Y. The m-derived filter is a derivative of the constant k filter. The starting point of the design is the values of Z and Y derived from the constant k prototype, which are related by k² = Z/Y, where k is the nominal impedance of the filter. The designer now multiplies Z and Y by a constant m. There are two different kinds of m-derived section, series and shunt. For the series section, from the general formula for image impedance, the additional impedance required can be shown to be ((1 − m²)/m)·Z.
To obtain the m-derived shunt half section, an admittance is added to 1/mZ to make the image impedance ZiΠ the same as the image impedance of the original half section. The additional admittance required can be shown to be ((1 − m²)/m)·Y. The general arrangements of these circuits are shown in the diagrams to the right, along with a specific example of a low-pass section. A consequence of this design is that the m-derived half section will match a k-type section on one side only. Also, an m-type section of one value of m will not match another m-type section of another value of m, except on the sides which offer the Zi of the k-type.
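For a low-pass m-derived section, the pole of attenuation sits just above cutoff at f∞ = fc/√(1 − m²), which is the usual way a designer chooses m. A sketch of that relation (the specific frequencies are hypothetical):

```python
import math

def m_from_pole(fc, f_inf):
    """Choose m so that an m-derived low-pass section has its pole of
    attenuation at f_inf, given cutoff fc: m = sqrt(1 - (fc/f_inf)**2)."""
    return math.sqrt(1.0 - (fc / f_inf) ** 2)

def pole_from_m(fc, m):
    """Inverse relation: f_inf = fc / sqrt(1 - m**2)."""
    return fc / math.sqrt(1.0 - m * m)

# Hypothetical: cutoff 1 kHz with the pole of attenuation at 1.25 kHz.
m = m_from_pole(1000.0, 1250.0)      # gives m = 0.6
f_inf = pole_from_m(1000.0, m)
```

Smaller values of m push the pole of attenuation closer to cutoff, giving a sharper skirt at the cost of the poorer far-stopband rejection described above.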
12.
General mn-type image filter
–
These filters are electrical wave filters designed using the image method. They are an invention of Otto Zobel at AT&T Corp and are a generalisation of the m-type filter, in that a transform is applied that modifies the transfer function while keeping the image impedance unchanged. For filters that have only one stopband, there is no distinction with the m-type filter. However, for a filter that has multiple stopbands, there is the possibility that the form of the transfer function in each stopband can be different. For instance, it may be required to filter one band with the sharpest possible cut-off. If the form is identical at each transition from passband to stopband, the filter will be the same as an m-type filter; if they are different, then the general case described here pertains. The k-type filter acts as a prototype for producing the general mn-type designs. Another feature of m-type filters that also applies in the general case is that a half section will have the original k-type image impedance on one side only; the other port will present a new image impedance. The two transformations have equivalent transfer functions but different image impedances and circuit topology. Parts of this article or section rely on the reader's knowledge of the complex impedance representation of capacitors and inductors. When the coefficients mn are all equal, this reduces to the expression for an m-type filter. A result of this relationship is that the N antiresonators in Zmn will transform into 2N resonators in Ymn. The coefficients mn can be adjusted by the designer to set the frequency of one of the two poles of attenuation, ω∞, in each stopband; the second pole of attenuation is dependent and cannot be set separately. In the case of a filter with a stopband extending to zero frequency, one resonator is lost, and the resonators in Ymn are reduced to 2N−1.
Similarly, for a filter with a stopband extending to infinity, one antiresonator will reduce to a single capacitor; in a filter where both conditions pertain, the number of resonators will be 2N−2. For these end stopbands, there is only one pole of attenuation in each. These forms are the maximum allowable complexity while maintaining invariance of bandform and one image impedance. The two resonators reduce to an inductor and a capacitor respectively, and the number of antiresonators reduces to two. This was a popular topology for multi-section band-pass filters due to its low component count. Many other such reduced forms are possible by setting one of the poles of attenuation to correspond to one of the critical frequencies of the various classes of basic filter.
13.
Zobel network
–
For the wave filter invented by Zobel and sometimes named after him, see m-derived filters. Zobel networks are a type of filter section based on the image impedance design principle. They are named after Otto Zobel of Bell Labs, who published a paper on image filters in 1923. The distinguishing feature of Zobel networks is that the input impedance is fixed in the design, independently of the transfer function. This characteristic is achieved at the expense of a higher component count compared to other types of filter sections. The input impedance would normally be specified to be constant and purely resistive; for this reason, Zobel networks are also known as constant resistance networks. However, any impedance achievable with discrete components is possible. As analogue technology has given way to digital, they are now little used. When used to cancel out the reactive portion of loudspeaker impedance, only half the network is implemented as fixed components; this network is more akin to the power factor correction circuits used in electrical power distribution, hence the association with Boucherot's name. A common circuit form of Zobel networks is in the form of a bridged T, and this term is often used to mean a Zobel network, sometimes incorrectly when the circuit implementation is, in fact, something other than a bridged T. Parts of this article or section rely on the reader's knowledge of the complex impedance representation of capacitors and inductors. The basis of a Zobel network is a bridge circuit, as shown in the circuit to the right. The bridging impedance ZB is across the balance points and hence has no potential across it. Consequently, it will draw no current and its value makes no difference to the function of the circuit; however, its value is often chosen to be Z0 for reasons which will become clear in the discussion of bridged T circuits. If the Z0 in the right of the bridge is taken to be the output load, then a transfer function of Vin/Vo can be calculated for the section.
Only the right-hand branch needs to be considered in this calculation. The reason for this can be seen by considering that there is no current flow through ZB: none of the current flowing through the left-hand branch is going to flow into the load, so the left-hand branch cannot possibly affect the output. It certainly affects the input impedance, but not the transfer function. If we also set ZB = Z0, then the circuit to the right results.
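A common practical use of the constant-resistance idea is the loudspeaker Zobel (Boucherot) cell mentioned above: for a driver modelled as resistance Re in series with voice-coil inductance Le, the compensating series RC branch uses R = Re and C = Le/Re², which makes the parallel combination purely resistive at every frequency. A sketch, with hypothetical driver values:

```python
import cmath

def zobel_for_driver(re_ohms, le_henries):
    """Series RC Zobel cell that cancels a driver's voice-coil inductance:
    R = Re, C = Le / Re^2."""
    return re_ohms, le_henries / (re_ohms ** 2)

def combined_impedance(re_ohms, le_henries, w):
    """Impedance of the driver (Re + jwLe) in parallel with its Zobel cell."""
    r, c = zobel_for_driver(re_ohms, le_henries)
    z_driver = re_ohms + 1j * w * le_henries
    z_zobel = r + 1.0 / (1j * w * c)
    return z_driver * z_zobel / (z_driver + z_zobel)

# Hypothetical 8-ohm driver with 0.5 mH voice-coil inductance:
r, c = zobel_for_driver(8.0, 0.5e-3)
z = combined_impedance(8.0, 0.5e-3, 2 * 3.141592653589793 * 1000.0)
```

A short algebraic check shows why: with R = Re and C = Le/Re², the product of the branch impedances equals Re times their sum, so the parallel combination is exactly Re at any ω.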
14.
Lattice phase equaliser
–
A lattice phase equaliser or lattice filter is an example of an all-pass filter. That is, the attenuation of the filter is constant at all frequencies. The topology of a lattice filter, also called an X-section, is identical to bridge topology. The lattice phase equaliser was invented by Otto Zobel, using a filter topology proposed by George Campbell. Phase distortion on a line does not have a serious effect on the quality of the sound unless it is very large. The same is true of the phase distortion on each leg of a stereo pair of lines. However, the differential phase between legs has a very dramatic effect on the stereo image. This is because the formation of the stereo image in the brain relies on the phase difference information from the two ears. A phase difference translates to a delay, which in turn can be interpreted as a direction the sound came from. Consequently, landlines used by broadcasters for stereo transmissions are equalised to very tight differential phase specifications. Another property of the lattice filter is that it is an intrinsically balanced topology. This is useful when used with landlines, which invariably use a balanced format. Many other types of filter section are intrinsically unbalanced and have to be transformed into a balanced implementation in these applications, which increases the component count; this is not required in the case of lattice filters. Parts of this article or section rely on the reader's knowledge of the complex impedance representation of capacitors and inductors and on knowledge of the frequency domain representation of signals. The impedances in the two arms are related by Z/R0 = R0/Z′. Such a network provides phase correction for the high end of the band: at low frequencies the phase shift is 0°, but as the frequency increases the phase shift approaches 180°. It can be seen qualitatively that this is so by replacing the inductors with open circuits and the capacitors with short circuits, which is what they become at high frequency.
At high frequency the lattice filter becomes a cross-connected network and will produce 180° of phase shift. A 180° phase shift is the same as an inversion in the frequency domain. At an angular frequency of ω = 1 rad/s the phase shift is exactly 90°, and this is the midpoint of the filter's transfer function. The prototype section can be scaled and transformed to the desired frequency and impedance. A filter which is in-phase at low frequencies can be obtained from the prototype with simple scaling factors; however, it can be seen that due to the lattice topology this is also equivalent to a crossover on the output of the corresponding low-frequency in-phase section. This second method may not only make calculation easier, but it is also a useful property where lines are being equalised on a temporary basis.
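The 0°/90°/180° behaviour described above can be checked numerically. Below is a minimal sketch (not from the original article) using the normalised first-order all-pass prototype H(s) = (1 − s)/(1 + s) evaluated on the jω axis; the function name and this particular prototype form are assumptions for illustration.

```python
import cmath
import math

def allpass_response(w):
    """Evaluate the normalised first-order all-pass prototype
    H(s) = (1 - s) / (1 + s) at s = j*w; return (|H|, phase in degrees)."""
    H = (1 - 1j * w) / (1 + 1j * w)
    return abs(H), math.degrees(cmath.phase(H))

for w in (0.01, 1.0, 100.0):
    mag, ph = allpass_response(w)
    print(f"w = {w:>6}: |H| = {mag:.3f}, phase = {ph:8.2f} deg")
```

The magnitude stays at 1 at every frequency (the all-pass property), while the phase shift runs from near 0° at low frequency through 90° at ω = 1 rad/s towards 180° at high frequency, matching the qualitative argument above.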
15.
Bridged T delay equaliser
–
The bridged-T delay equaliser is an electrical all-pass filter circuit utilising bridged-T topology whose purpose is to insert a constant delay at all frequencies in the signal path. It is a member of the class of image filters. The network is used when it is required that two or more signals are matched to each other on some form of timing criterion. Delay is added to all signals so that the total delay is matched to the signal which already has the longest delay. This ensures that cuts between sources do not result in disruption at the receivers. Another application occurs when stereophonic sound is connected by landline, for instance from an outside broadcast to the studio centre. It is important that delay is equalised between the two channels, as a difference will destroy the stereo image. When the landlines are long and the two channels arrive by substantially different routes, it can require many filter sections to fully equalise the delay. The operation is best explained in terms of the phase shift the network introduces. At low frequencies L is a low impedance and C is a high impedance. Transformer action between the two halves of L, which had been steadily becoming more significant as the frequency increased, now becomes dominant. The winding of the coil is such that the secondary winding produces a voltage in antiphase with the primary. That is, at resonance the phase shift is now 180°. As the frequency continues to increase, the phase delay also continues to increase, and the input and output start to come back into phase as a whole cycle of delay is approached. At high frequencies L and L′ approach open-circuit and C approaches short-circuit. The relationship between phase shift and time delay at angular frequency ω is given by the simple relation φ = ωTD. It is required that TD is constant at all frequencies over the band of operation.
φ must therefore be kept linearly proportional to ω. With suitable choice of parameters, the network phase shift can be made linear up to about 180° of phase shift. The four component values of the network provide four degrees of freedom in the design. It is required from image theory that the L/C branch and the L′/C′ branch are the dual of each other, which provides two parameters for calculating component values. Equivalently, every pole, sp, in the s-domain left half-plane must have a matching zero. A third parameter is set by choosing a resonant frequency; this is set to the maximum frequency the network is required to operate at: ω0 = 1/√(LC) = 1/√(L′C′). There is one remaining degree of freedom that the designer can use to maximally linearise the phase/frequency response, and this parameter is usually stated as the L/C ratio. As stated above, it is not practical to linearise the phase response above 180°; a delay equaliser designed to this specification can therefore insert a delay of 33 μs. In reality, the delay that might be required to equalise may be many hundreds of microseconds, so a chain of many sections in tandem will be required. For television purposes, a maximum frequency of 6 MHz might be chosen, which corresponds to a delay of 83 ns.
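The relation φ = ωTD can be used to check the figures quoted above: a section linear up to 180° (half a cycle) gives a maximum delay of half the period of the maximum operating frequency. A quick sketch follows; the function name is illustrative, and the 15 kHz audio bandwidth used below is an assumption that happens to be consistent with the 33 μs figure.

```python
def max_section_delay(f_max_hz):
    """Phase shift phi = omega * TD; a section linear up to phi = 180 deg
    (pi radians) gives TD = pi / (2 * pi * f_max) = 1 / (2 * f_max)."""
    return 1.0 / (2.0 * f_max_hz)

print(max_section_delay(6e6) * 1e9)   # television, 6 MHz: ~83 ns
print(max_section_delay(15e3) * 1e6)  # an assumed 15 kHz audio band: ~33 us
```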
16.
Mm'-type filter
–
Mm'-type filters, also called double-m-derived filters, are a type of electronic filter designed using the image method. They were patented by Otto Zobel in 1932. The filter has a similar transfer function to the m-type, having the same advantage of rapid cut-off, but the input impedance remains much more nearly constant if suitable parameters are chosen. In fact, the performance is better for the mm'-type if like-for-like impedance matching is compared rather than like-for-like transfer function. It also has the same drawback of a rising response in the stopband as the m-type. However, its disadvantage is its much increased complexity, which is the chief reason its use never became widespread; it was only intended to be used as the end sections of composite filters. An incidental advantage of the mm'-type is that it has two independent parameters that the designer can adjust, and this allows two different design criteria to be independently optimised. The mm'-type filter was an extension of Zobel's previous m-type filter. Zobel's m-type is arrived at by applying the m-derivation process to the k-type filter. Completely analogously, the mm'-type is arrived at by applying the m-derivation process to the m-type filter; the value of m used in the second transformation is designated m' to distinguish it from m, hence the naming of the filter, mm'-type. However, this filter is not a member of the class of general mn-type image filters, which are a generalisation of m-type filters; rather, it is a repeated application of the m-derivation process, and for those general filters the arbitrary parameters are usually designated m1, m2. The importance of the mm'-type lies in its impedance properties. Some of the terms and section terms used in image design theory are pictured in the diagram below.
As always, image theory defines quantities in terms of an infinite cascade of two-port sections. The sections of the hypothetical infinite filter are made up of series elements of 2Z; the factor of two is introduced since it is normal to work in terms of half-sections, where it disappears. The image impedances of the input and output ports of a section, Zi1 and Zi2, will generally not be the same. However, a symmetrical section will have the same image impedance on both ports due to symmetry.
17.
RL circuit
–
A resistor–inductor circuit, or RL filter or RL network, is an electric circuit composed of resistors and inductors driven by a voltage or current source. A first-order RL circuit is composed of one resistor and one inductor and is the simplest type of RL circuit; it is also one of the simplest analogue infinite impulse response electronic filters. It consists of a resistor and an inductor, either in series driven by a voltage source or in parallel driven by a current source. The fundamental passive linear circuit elements are the resistor, capacitor and inductor. These circuits exhibit important types of behaviour that are fundamental to analogue electronics; in particular, they are able to act as passive filters. This article considers the RL circuit in both series and parallel forms, as shown in the diagrams. In practice, however, capacitors are preferred to inductors since they can be more easily manufactured and are generally physically smaller. Both RC and RL circuits form a single-pole filter; whether the reactive element is in series with the load or in parallel with the load dictates whether the filter is low-pass or high-pass. Frequently RL circuits are used in DC power supplies for RF amplifiers. The complex impedance ZL of an inductor with inductance L is ZL = Ls. The complex frequency s is a complex number, s = σ + jω, where j represents the imaginary unit, j² = −1, and σ is the exponential decay constant. From Euler's formula, the real parts of these eigenfunctions are exponentially decaying sinusoids. Sinusoidal steady state is a special case in which the input voltage consists of a pure sinusoid; as a result, σ = 0 and the evaluation of s becomes s = jω. Since the circuit is in series, the current is the same everywhere: I = Vin / (R + Ls).
The transfer function to the current is H_I = I / Vin = 1 / (R + Ls). The transfer functions have a single pole located at s = −R/L. In addition, the transfer function for the inductor voltage has a zero located at the origin. The impulse response represents the response of the circuit to an input consisting of an impulse, or Dirac delta function. Similarly, the impulse response for the resistor voltage is hR(t) = (R/L) e^(−tR/L) u(t) = (1/τ) e^(−t/τ) u(t), where u(t) is the unit step function and τ = L/R is the time constant. The zero-input response (ZIR) is so called because it requires no input. The ZIR of an RL circuit is I(t) = I(0) e^(−(R/L)t) = I(0) e^(−t/τ). Analysis of the transfer functions will show which frequencies the circuits pass and reject.
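The pole location and the zero-input response above can be illustrated with a short numerical sketch; the component values here are assumptions for illustration only.

```python
import math

R, L = 100.0, 0.1    # assumed example values: 100 ohm, 100 mH
tau = L / R          # time constant tau = L/R = 1 ms
pole = -R / L        # the single pole of H_I(s) = 1/(R + L*s)

def zir_current(I0, t):
    """Zero-input response: inductor current decays as I(t) = I0 * exp(-t/tau)."""
    return I0 * math.exp(-t / tau)

print(f"pole at s = {pole:.0f} rad/s, tau = {tau * 1e3:.1f} ms")
print(f"after one time constant: I/I0 = {zir_current(1.0, tau):.3f}")  # ~0.368
```

After one time constant the stored current has fallen to 1/e of its initial value, which is the familiar exponential decay of the ZIR.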
18.
LC circuit
–
An LC circuit can act as an electrical resonator, an electrical analogue of a tuning fork, storing energy oscillating at the circuit's resonant frequency. LC circuits are used either for generating signals at a particular frequency or for picking out a signal at a particular frequency from a more complex signal; they are key components in many electronic devices, particularly radio equipment, used in circuits such as oscillators, filters, tuners and frequency mixers. An LC circuit is an idealized model, since it assumes there is no dissipation of energy due to resistance. Any practical implementation of an LC circuit will always include loss resulting from small but non-zero resistance within the components. The purpose of an LC circuit is usually to oscillate with minimal damping, so the resistance is made as low as possible. While no practical circuit is without losses, it is instructive to study this ideal form of the circuit to gain understanding. For a circuit model incorporating resistance, see RLC circuit. The two-element LC circuit described above is the simplest type of inductor-capacitor network. It is also referred to as a second-order LC circuit to distinguish it from more complicated LC networks with more inductors and capacitors. Such LC networks with more than two reactances may have more than one resonant frequency. The order of the network is the order of the rational function describing the network in the complex frequency variable s; generally, the order is equal to the number of L and C elements in the circuit. An LC circuit, oscillating at its natural resonant frequency, can store electrical energy. A capacitor stores energy in the electric field between its plates, depending on the voltage across it, and an inductor stores energy in its magnetic field, depending on the current through it. If a charged capacitor is connected across an inductor, current will start to flow through the inductor, building up a magnetic field around it.
Eventually all the charge on the capacitor will be gone and the voltage across it will reach zero. However, the current will continue, because inductors resist changes in current. The current will begin to charge the capacitor with a voltage of opposite polarity to its original charge. Due to Faraday's law, the EMF which drives the current is caused by a decrease in the magnetic field; when the magnetic field is completely dissipated the current will stop and the charge will again be stored in the capacitor, with the opposite polarity to before. Then the cycle will begin again, with the current flowing in the opposite direction through the inductor. The charge flows back and forth between the plates of the capacitor, through the inductor; the energy oscillates back and forth between the capacitor and the inductor until internal resistance makes the oscillations die out. In most applications the tuned circuit is part of a larger circuit which applies alternating current to it. If the frequency of the applied current is the circuit's natural resonant frequency, resonance will occur.
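The frequency at which this energy exchange repeats follows directly from the element values; below is a brief sketch, with example component values assumed for illustration.

```python
import math

def lc_resonant_frequency(L, C):
    """Natural resonant frequency of an ideal LC circuit:
    f0 = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# e.g. a 1 mH inductor with a 1 nF capacitor (assumed values)
print(f"{lc_resonant_frequency(1e-3, 1e-9) / 1e3:.1f} kHz")  # ~159.2 kHz
```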
19.
RLC circuit
–
An RLC circuit is an electrical circuit consisting of a resistor, an inductor, and a capacitor, connected in series or in parallel. The name of the circuit is derived from the letters that are used to denote the constituent components. The circuit forms a harmonic oscillator for current and resonates in a similar way to an LC circuit. Introducing the resistor increases the decay of these oscillations, which is known as damping. The resistor also reduces the peak resonant frequency. Some resistance is unavoidable in real circuits even if a resistor is not specifically included as a component; an ideal, pure LC circuit exists only in the domain of superconductivity. RLC circuits have many applications as oscillator circuits. Radio receivers and television sets use them for tuning to select a narrow frequency range from ambient radio waves; in this role the circuit is often referred to as a tuned circuit. An RLC circuit can be used as a band-pass filter, band-stop filter, low-pass filter or high-pass filter. The tuning application, for instance, is an example of band-pass filtering. The RLC filter is described as a second-order circuit, meaning that any voltage or current in the circuit can be described by a second-order differential equation in circuit analysis. The three circuit elements, R, L and C, can be combined in a number of different topologies. All three elements in series or all three elements in parallel are the simplest in concept and the most straightforward to analyse. There are, however, other arrangements, some of practical importance in real circuits. One issue often encountered is the need to take into account inductor resistance: inductors are typically constructed from coils of wire, the resistance of which is not usually desirable, but it often has a significant effect on the circuit. An important property of this circuit is its ability to resonate at a specific frequency, the resonance frequency, f0. Frequencies are measured in units of hertz.
In this article, however, angular frequency, ω0, is used, as it is more mathematically convenient. This is measured in radians per second, and the two are related by a simple proportion: ω0 = 2πf0. Resonance occurs because energy is stored in two different ways: in an electric field as the capacitor is charged and in a magnetic field as current flows through the inductor. Energy can be transferred from one to the other within the circuit. A mechanical analogy is a weight suspended on a spring, which will oscillate up and down when released. The mechanical property corresponding to the resistor in the circuit is friction in the spring–weight system; friction will slowly bring any oscillation to a halt if there is no external force driving it. Likewise, the resistance in an RLC circuit will damp the oscillation. The resonance frequency is defined as the frequency at which the impedance of the circuit is at a minimum.
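For a series RLC circuit, the minimum-impedance definition above can be verified directly: at ω0 the inductive and capacitive reactances cancel, leaving only R. A sketch with assumed component values:

```python
import math

def series_rlc_impedance(R, L, C, w):
    """Magnitude of the series RLC impedance:
    |Z| = sqrt(R^2 + (w*L - 1/(w*C))^2)."""
    return math.hypot(R, w * L - 1.0 / (w * C))

R, L, C = 10.0, 1e-3, 1e-6      # assumed example values
w0 = 1.0 / math.sqrt(L * C)     # resonant angular frequency, omega0 = 2*pi*f0
print(series_rlc_impedance(R, L, C, w0))        # minimum: equals R
print(series_rlc_impedance(R, L, C, 0.5 * w0))  # larger away from resonance
```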
20.
Electrical network
–
An electrical network is an interconnection of electrical components, or a model of such an interconnection consisting of electrical elements. An electrical circuit is a network consisting of a closed loop, giving a return path for the current. Linear electrical networks, a special type consisting only of sources, linear lumped elements, and linear distributed elements, have the property that signals are linearly superimposable. They are thus more easily analyzed, using powerful frequency domain methods such as Laplace transforms, to determine DC response and AC response. A resistive circuit is a circuit containing only resistors and ideal current and voltage sources. Analysis of resistive circuits is less complicated than analysis of circuits containing capacitors and inductors. If the sources are constant sources, the result is a DC circuit. A network that contains active components is known as an electronic circuit. Such networks are generally nonlinear and require more complex design and analysis tools. An active network is a network that contains an active source, either a voltage source or a current source. A passive network is a network that does not contain an active source. A network is linear if its signals obey the principle of superposition; otherwise it is non-linear. Sources can be classified as independent sources and dependent sources. An ideal independent source maintains the same voltage or current regardless of the other elements present in the circuit; its value is either constant or sinusoidal, and the strength of the voltage or current is not changed by any variation in the connected network. A dependent source depends upon a particular element of the circuit for delivering the power, voltage or current, depending upon the type of source it is. A number of electrical laws apply to all electrical networks.
These include: Kirchhoff's current law, which states that the sum of all currents entering a node is equal to the sum of all currents leaving the node; Kirchhoff's voltage law, which states that the directed sum of the electrical potential differences around a loop must be zero; Ohm's law, which states that the voltage across a resistor is equal to the product of the resistance and the current flowing through it; Norton's theorem, which states that any network of voltage or current sources and resistors is electrically equivalent to an ideal current source in parallel with a single resistor; and Thévenin's theorem, which states that any network of voltage or current sources and resistors is electrically equivalent to a voltage source in series with a single resistor. Other more complex laws may be needed if the network contains nonlinear or reactive components; non-linear self-regenerative heterodyning systems can be approximated. Applying these laws results in a set of equations that can be solved either algebraically or numerically. To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents in the circuit. Simple linear circuits can be analyzed by hand using complex number theory. In more complex cases the circuit may be analyzed with specialized programs or estimation techniques such as the piecewise-linear model. More complex circuits can be analyzed numerically with software such as SPICE or GNUCAP. Once the steady state solution is found, the operating points of each element in the circuit are known.
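Thévenin's theorem, as stated above, can be illustrated on the simplest case: an ideal source feeding a two-resistor divider. The sketch below is illustrative; the function name and values are assumptions.

```python
def thevenin_divider(Vs, R1, R2):
    """Thevenin equivalent of an ideal source Vs feeding a series resistor R1
    with R2 shunted across the output terminals."""
    Vth = Vs * R2 / (R1 + R2)   # open-circuit voltage (voltage divider)
    Rth = R1 * R2 / (R1 + R2)   # source replaced by a short: R1 parallel R2
    return Vth, Rth

Vth, Rth = thevenin_divider(12.0, 300.0, 600.0)
print(Vth, Rth)  # 8.0 V and 200.0 ohm
```

Any load attached to the original divider behaves exactly as if it were attached to the 8 V source in series with 200 Ω, which is the content of the theorem.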
21.
Resistor
–
A resistor is a passive two-terminal electrical component that implements electrical resistance as a circuit element. In electronic circuits, resistors are used to reduce current flow, adjust signal levels, divide voltages, bias active elements, and terminate transmission lines, among other uses. High-power resistors that can dissipate many watts of power as heat may be used as part of motor controls or in power distribution systems. Fixed resistors have resistances that change only slightly with temperature, time or operating voltage. Variable resistors can be used to adjust circuit elements, or as sensing devices for heat, light, humidity or force. Resistors are common elements of electrical networks and electronic circuits and are ubiquitous in electronic equipment. Practical resistors as discrete components can be composed of various compounds; resistors are also implemented within integrated circuits. The electrical function of a resistor is specified by its resistance; the nominal value of the resistance falls within the manufacturing tolerance, indicated on the component. Two typical schematic diagram symbols are in use. The notation to state a resistor's value in a circuit diagram varies; one common scheme is the letter and digit code for resistance values following IEC 60062. It avoids using a decimal separator and replaces it with a letter loosely associated with the SI prefix corresponding with the part's resistance. For example, 8K2 as a part marking code, in a circuit diagram or in a bill of materials, indicates a resistor value of 8.2 kΩ. Additional zeros imply a tighter tolerance, for example 15M0 for three significant digits. When the value can be expressed without the need for a prefix, an R is used instead of the decimal separator: for example, 1R2 indicates 1.2 Ω, and 18R indicates 18 Ω. The behaviour of an ideal resistor is given by Ohm's law: for example, if a 300 ohm resistor is attached across the terminals of a 12 volt battery, then a current of 12 / 300 = 0.04 amperes flows through that resistor.
Practical resistors also have some inductance and capacitance which affect the relation between voltage and current in alternating current circuits. The ohm is the SI unit of electrical resistance, named after Georg Simon Ohm; an ohm is equivalent to a volt per ampere. Since resistors are specified and manufactured over a very large range of values, the derived units of milliohm, kilohm, and megohm are also in common usage. The total resistance of resistors connected in series is the sum of their individual resistance values: Req = R1 + R2 + ⋯ + Rn. The total resistance of resistors connected in parallel is the reciprocal of the sum of the reciprocals of the individual resistors: 1/Req = 1/R1 + 1/R2 + ⋯ + 1/Rn. For example, a 10 ohm resistor connected in parallel with a 5 ohm resistor gives a combined resistance of (10 × 5)/(10 + 5) ≈ 3.3 ohms. A resistor network that is a combination of parallel and series connections can be broken up into smaller parts that are either one or the other.
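The series and parallel formulas above translate directly into code; a small sketch (the helper names are illustrative):

```python
def series(*resistors):
    """Req = R1 + R2 + ... + Rn"""
    return sum(resistors)

def parallel(*resistors):
    """1/Req = 1/R1 + 1/R2 + ... + 1/Rn"""
    return 1.0 / sum(1.0 / r for r in resistors)

print(series(10, 5))    # 15 ohms
print(parallel(10, 5))  # ~3.33 ohms, the 10 ohm / 5 ohm example above
```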
22.
Capacitor
–
A capacitor is a passive two-terminal electrical component that stores electrical energy in an electric field. The effect of a capacitor is known as capacitance; a capacitor was historically first known as an electric condenser. The physical form and construction of practical capacitors vary widely and many types are in common use. Most capacitors contain at least two electrical conductors, often in the form of plates or surfaces separated by a dielectric medium. A conductor may be a foil, thin film, or sintered bead of metal. The nonconducting dielectric acts to increase the capacitor's charge capacity. Materials commonly used as dielectrics include glass, ceramic, plastic film, paper and mica. Capacitors are widely used as parts of electrical circuits in many common electrical devices. Unlike a resistor, an ideal capacitor does not dissipate energy. No current actually flows through the dielectric; instead, the effect is a displacement of charge through the source circuit. If the condition is maintained sufficiently long, this displacement current through the battery ceases. However, if a time-varying voltage is applied across the leads of the capacitor, a displacement current can flow. Capacitance is defined as the ratio of the charge on each conductor to the potential difference between them. The unit of capacitance in the International System of Units is the farad. Capacitance values of typical capacitors for use in general electronics range from about 1 pF to about 1 mF. The capacitance of a capacitor is proportional to the area of the plates. In practice, the dielectric between the plates passes a small amount of leakage current, and it has an electric field strength limit, known as the breakdown voltage. The conductors and leads introduce an undesired inductance and resistance. Capacitors are widely used in electronic circuits for blocking direct current while allowing alternating current to pass.
In analog filter networks, they smooth the output of power supplies; in resonant circuits they tune radios to particular frequencies. In electric power systems, they stabilize voltage and power flow. The property of energy storage in capacitors was exploited as dynamic memory in early digital computers. In the earliest experiments with what became the Leyden jar, von Kleist's hand and the water acted as conductors, and the jar as a dielectric; von Kleist found that touching the wire resulted in a powerful spark, much more painful than that obtained from an electrostatic machine.
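The defining ratio C = Q/V above, together with the energy stored in the electric field, can be sketched numerically; the example values below are assumptions.

```python
def capacitor_charge(C, V):
    """Capacitance is defined as C = Q / V, so the stored charge is Q = C * V."""
    return C * V

def capacitor_energy(C, V):
    """Energy stored in the electric field: E = 0.5 * C * V**2."""
    return 0.5 * C * V ** 2

# a 1 uF capacitor charged to 5 V (assumed values)
print(capacitor_charge(1e-6, 5.0))  # 5e-06 coulombs
print(capacitor_energy(1e-6, 5.0))  # 1.25e-05 joules
```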
23.
Voltage source
–
A voltage source is a two-terminal device which can maintain a fixed voltage. An ideal voltage source can maintain the fixed voltage independent of the load resistance or the output current. However, a real voltage source cannot supply unlimited current. A voltage source is the dual of a current source. An ideal voltage source is a two-terminal device that maintains a fixed voltage drop across its terminals. It is often used as an abstraction that simplifies the analysis of real electric circuits. If the voltage across a voltage source can be specified independently of any other variable in a circuit, it is called an independent voltage source. Conversely, if the voltage across a voltage source is determined by some other voltage or current in a circuit, it is called a dependent or controlled voltage source. A mathematical model of an amplifier will include dependent voltage sources whose magnitude is governed by some fixed relation to an input signal. The internal resistance of an ideal voltage source is zero; it is able to supply or absorb any amount of current. The current through an ideal voltage source is completely determined by the external circuit. When connected to an open circuit, there is zero current. When connected to a load resistance, the current through the source approaches infinity as the load resistance approaches zero. Thus, an ideal voltage source can supply unlimited power. No real voltage source is ideal; all have an effective internal resistance. However, the internal resistance of a real voltage source is effectively modeled in linear circuit analysis by combining a non-zero resistance in series with an ideal voltage source. Most sources of electrical energy are modeled as voltage sources. An ideal voltage source provides no energy when it is loaded by an open circuit. Such a theoretical device would have a zero ohm output impedance in series with the source. A real-world voltage source has a low, but non-zero, internal resistance and output impedance.
Conversely, a current source provides a constant current, as long as the load connected to the source terminals has sufficiently low impedance.
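The series-resistance model of a real voltage source described above can be sketched as follows; the 9 V source and load values are illustrative assumptions.

```python
def terminal_voltage(Vs, Rint, Rload):
    """Real source modelled as an ideal source Vs in series with internal
    resistance Rint: the load sees Vs * Rload / (Rint + Rload)."""
    return Vs * Rload / (Rint + Rload)

print(terminal_voltage(9.0, 1.0, 1e9))  # ~9 V: nearly open circuit
print(terminal_voltage(9.0, 1.0, 8.0))  # 8.0 V: the voltage sags under load
```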
24.
Current source
–
A current source is an electronic circuit that delivers or absorbs an electric current which is independent of the voltage across it. A current source is the dual of a voltage source. The term constant-current sink is sometimes used for sources fed from a negative voltage supply. Figure 1 shows the symbol for an ideal current source. An independent current source delivers a constant current; a dependent current source delivers a current which is proportional to some other voltage or current in the circuit. An ideal current source generates a current that is independent of the voltage changes across it. An ideal current source is a model, which real devices can approach very closely. If the current through a current source can be specified independently of any other variable in a circuit, it is called an independent current source. Conversely, if the current through a current source is determined by some other voltage or current in a circuit, it is called a dependent or controlled current source. Symbols for these sources are shown in Figure 2. The internal resistance of an ideal current source is infinite. An independent current source with zero current is identical to an open circuit. The voltage across an ideal current source is completely determined by the circuit it is connected to. When connected to a short circuit, there is zero voltage. When connected to a load resistance, the voltage across the source approaches infinity as the load resistance approaches infinity. Thus, an ideal current source, if such a thing existed in reality, could supply unlimited power. No physical current source is ideal; for example, no physical current source can operate when applied to an open circuit. There are two characteristics that define a current source in real life: one is its internal resistance and the other is its compliance voltage. The compliance voltage is the maximum voltage that the current source can supply to a load.
Over a given range, it is possible for some types of real current sources to exhibit nearly infinite internal resistance
25.
Fuel cell
–
A fuel cell is a device that converts the chemical energy from a fuel into electricity through a chemical reaction of positively charged hydrogen ions with oxygen or another oxidizing agent. Fuel cells can produce electricity continuously for as long as these inputs are supplied. The first fuel cells were invented in 1838. The first commercial use of fuel cells came more than a century later, in NASA space programs, to generate power for satellites. Since then, fuel cells have been used in many other applications. Fuel cells are used for primary and backup power for commercial, industrial and residential buildings; they are also used to power fuel cell vehicles, including forklifts, automobiles, buses, boats, motorcycles and submarines. There are many types of fuel cells, but they all consist of an anode, a cathode, and an electrolyte. The anode and cathode contain catalysts that cause the fuel to undergo reactions that generate positively charged hydrogen ions and electrons. The hydrogen ions are drawn through the electrolyte after the reaction; at the same time, electrons are drawn from the anode to the cathode through an external circuit, producing direct current electricity. At the cathode, hydrogen ions, electrons, and oxygen react to form water. Individual fuel cells produce relatively small electrical potentials, about 0.7 volts, so cells are stacked, or placed in series, to create sufficient voltage to meet an application's requirements. In addition to electricity, fuel cells produce water, heat and, depending on the fuel source, very small amounts of nitrogen dioxide. The energy efficiency of a fuel cell is generally between 40–60%, or up to 85% in cogeneration if waste heat is captured for use. The fuel cell market is growing, and in 2013 Pike Research estimated that the fuel cell market will reach 50 GW by 2020.
The first references to hydrogen fuel cells appeared in 1838, when William Grove used a combination of sheet iron, copper and porcelain plates, and a solution of sulphate of copper and dilute acid. In a letter written in December 1838 but published in June 1839, current generated from hydrogen and oxygen dissolved in water was also discussed. Grove later sketched his design, in 1842, in the same journal; the fuel cell he made used similar materials to today's phosphoric-acid fuel cell. In 1939, British engineer Francis Thomas Bacon successfully developed a 5 kW stationary fuel cell. Later, at General Electric, W. Thomas Grubb and Leonard Niedrach developed a design that became known as the Grubb-Niedrach fuel cell. GE went on to develop this technology with NASA and McDonnell Aircraft, and this was the first commercial use of a fuel cell.
26.
High-pass filter
–
A high-pass filter is an electronic filter that passes signals with a frequency higher than a certain cutoff frequency and attenuates signals with frequencies lower than the cutoff frequency. The amount of attenuation for each frequency depends on the filter design. A high-pass filter is usually modeled as a linear time-invariant system. It is sometimes called a low-cut filter or bass-cut filter. High-pass filters have many uses, such as blocking DC from circuitry sensitive to non-zero average voltages or from radio frequency devices. They can also be used in conjunction with a low-pass filter to produce a bandpass filter. Figure 2 shows an active electronic implementation of a first-order high-pass filter using an operational amplifier; in this configuration, high-frequency signals are inverted and amplified by a factor of R2/R1. Discrete-time high-pass filters can also be designed. Discrete-time filter design is beyond the scope of this article; however, a simple example comes from the conversion of the continuous-time high-pass filter above to a discrete-time realization. That is, the continuous-time behavior can be discretized. In the RC high-pass circuit the output voltage appears across the resistor, so Vout = I R = RC d(Vin − Vout)/dt. This equation can be discretized. For simplicity, assume that samples of the input and output are taken at evenly spaced points in time separated by ΔT time. Let the samples of Vin be represented by the sequence (x1, …, xn), and let the samples of Vout be represented at the same points in time by (y1, …, yn). The expression for the parameter α yields the equivalent time constant RC in terms of the sampling period ΔT and α: RC = α ΔT / (1 − α). If α = 0.5, then the RC time constant is equal to the sampling period. If α ≪ 0.5, then RC is significantly smaller than the sampling interval, and RC ≈ α ΔT. The filter recurrence relation provides a way to determine the output samples in terms of the input samples. In particular, a large α implies that the output will decay very slowly but will also be strongly influenced by even small changes in input.
By the relationship between parameter α and time constant RC above, a large α corresponds to a large RC; hence, this case corresponds to a high-pass filter with a very narrow stop band, because the filter is excited by changes and tends to hold its prior output values for a long time. However, a constant input will always decay to zero, as would be expected of a filter with a large RC. A small α implies that the output will decay quickly and will require large changes in the input to cause the output to change much. By the relationship between parameter α and time constant RC above, a small α corresponds to a small RC; hence, this case corresponds to a high-pass filter with a very wide stop band. Because the filter requires large changes and tends to forget its prior output values, it can only pass relatively high frequencies. High-pass filters are also used in loudspeakers as part of a crossover to direct high frequencies to a tweeter while attenuating bass signals which could interfere with, or damage, the speaker
27.
Low-pass filter
–
A low-pass filter is a filter that passes signals with a frequency lower than a certain cutoff frequency and attenuates signals with frequencies higher than the cutoff frequency. The exact frequency response of the filter depends on the filter design. The filter is sometimes called a high-cut filter, or treble-cut filter in audio applications. A low-pass filter is the complement of a high-pass filter. Low-pass filters provide a smoother form of a signal, removing the short-term fluctuations and leaving the longer-term trend. Filter designers will often use the low-pass form as a prototype filter, that is, a filter with unity bandwidth and impedance; the desired filter is obtained from the prototype by scaling for the desired bandwidth and impedance and transforming into the desired bandform. Examples of low-pass filters occur in acoustics, optics and electronics. A stiff physical barrier tends to reflect higher sound frequencies, and so acts as a low-pass filter for transmitting sound; when music is playing in another room, the low notes are easily heard while the high notes are attenuated. An optical filter with the same function can correctly be called a low-pass filter, but conventionally is called a longpass filter, to avoid confusion. For current signals, a similar circuit using a resistor and capacitor in parallel works in a similar way. Electronic low-pass filters are used on inputs to subwoofers and other types of loudspeakers; radio transmitters use low-pass filters to block harmonic emissions that might interfere with other communications. The tone knob on many electric guitars is a low-pass filter used to reduce the amount of treble in the sound. An integrator is another time constant low-pass filter. Telephone lines fitted with DSL splitters use low-pass and high-pass filters to separate DSL and POTS signals sharing the same pair of wires. Low-pass filters also play a significant role in the sculpting of sound created by analogue and virtual analogue synthesisers. The transition region present in practical filters does not exist in an ideal filter. 
An ideal filter would therefore need to have infinite delay, or knowledge of the future and past. It is effectively realizable for pre-recorded digital signals by assuming extensions of zero into the past and future, or more typically by making the signal repetitive. Real filters for real-time applications approximate the ideal filter by delaying the signal for a short period, and this delay is manifested as phase shift. Greater accuracy in approximation requires a longer delay. An ideal low-pass filter results in ringing artifacts via the Gibbs phenomenon. These can be reduced or worsened by choice of windowing function; for example, simple truncation causes severe ringing artifacts in signal reconstruction, and to reduce these artifacts one uses window functions which drop off more smoothly at the edges. The Whittaker–Shannon interpolation formula describes how to use a perfect low-pass filter to reconstruct a continuous signal from a sampled digital signal. Real digital-to-analog converters use real filter approximations. There are many different types of filter circuits, with different responses to changing frequency
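The RC low-pass filter discussed above has a standard discrete-time counterpart, y[i] = y[i−1] + α(x[i] − y[i−1]) with α = ΔT/(RC + ΔT). The sketch below (names are illustrative) shows the smoothing behavior:

```python
def lowpass(x, dt, rc):
    """First-order discrete-time low-pass filter (exponential smoothing):
    y[i] = y[i-1] + alpha * (x[i] - y[i-1]), alpha = dT / (RC + dT)."""
    alpha = dt / (rc + dt)
    y = [x[0]]  # initialize output with the first input sample
    for i in range(1, len(x)):
        y.append(y[-1] + alpha * (x[i] - y[-1]))
    return y

# A step input is passed through, but smoothed: the output climbs
# gradually toward the new level instead of jumping.
step = lowpass([0.0] * 5 + [1.0] * 95, dt=1.0, rc=1.0)
```

The short-term fluctuation (the instantaneous jump) is removed while the longer-term trend (the new level) is preserved, which is exactly the smoothing role described in the text.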
28.
Band-pass filter
–
A band-pass filter is a device that passes frequencies within a certain range and rejects frequencies outside that range. An example of an analogue electronic band-pass filter is an RLC circuit; these filters can also be created by combining a low-pass filter with a high-pass filter. Bandpass is an adjective that describes a type of filter or filtering process; it is to be distinguished from passband, which refers to the actual portion of the spectrum affected. Hence, one might say "A dual bandpass filter has two passbands". A bandpass signal is a signal containing a band of frequencies not adjacent to zero frequency. An ideal bandpass filter would have a completely flat passband and would completely attenuate all frequencies outside the passband; additionally, the transition out of the passband would have brickwall characteristics. In practice, no bandpass filter is ideal: frequencies just outside the intended passband are attenuated, but not completely rejected. This is known as the filter roll-off, and it is usually expressed in dB of attenuation per octave or decade of frequency. Generally, the design of a filter seeks to make the roll-off as narrow as possible; often, this is achieved at the expense of pass-band or stop-band ripple. The bandwidth of the filter is simply the difference between the upper and lower cutoff frequencies. Optical band-pass filters are common in photography and theatre lighting work; these filters take the form of a transparent coloured film or sheet. A band-pass filter can be characterized by its Q factor. The Q-factor is the inverse of the fractional bandwidth: a high-Q filter will have a narrow passband and a low-Q filter will have a wide passband. These are respectively referred to as narrow-band and wide-band filters. Bandpass filters are widely used in wireless transmitters and receivers. The main function of such a filter in a transmitter is to limit the bandwidth of the output signal to the band allocated for the transmission. 
This prevents the transmitter from interfering with other stations. In a receiver, a bandpass filter allows signals within a selected range of frequencies to be heard or decoded, while preventing signals at unwanted frequencies from getting through. A bandpass filter also optimizes the signal-to-noise ratio and sensitivity of a receiver. Outside of electronics and signal processing, one example of the use of band-pass filters is in the atmospheric sciences: it is common to band-pass filter recent meteorological data with a period range of, for example, 3 to 10 days. In neuroscience, visual cortical simple cells were first shown by David Hubel and Torsten Wiesel to have response properties that resemble Gabor filters, which are band-pass. In astronomy, band-pass filters are used to allow only a single portion of the light spectrum into an instrument
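As a small numeric illustration of the Q-factor relation described above (the function name is my own), Q can be computed as the ratio of center frequency to bandwidth, here taking the geometric mean of the two cutoff frequencies as the center, a common convention for band-pass filters:

```python
import math

def q_factor(f_low, f_high):
    """Q = center frequency / bandwidth, using the geometric mean
    of the lower and upper cutoff frequencies as the center."""
    f_center = math.sqrt(f_low * f_high)
    return f_center / (f_high - f_low)

# A passband from 90 Hz to 110 Hz is fairly narrow-band: Q is about 5.
q = q_factor(90.0, 110.0)
```

A wider passband with the same center, say 50 Hz to 200 Hz, would give Q = 100/150 ≈ 0.67, a wide-band filter in the sense used in the text.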
29.
Band-stop filter
–
In signal processing, a band-stop filter or band-rejection filter is a filter that passes most frequencies unaltered, but attenuates those in a specific range to very low levels. It is the opposite of a band-pass filter. A notch filter is a band-stop filter with a narrow stopband. Other names include band limit filter, T-notch filter, and band-elimination filter. Typically, the width of the stopband is 1 to 2 decades; however, in the audio band, a notch filter has high and low frequencies that may be only semitones apart. Anti-hum filter: for countries using 60 Hz power lines, low frequency 59 Hz, middle frequency 60 Hz, high frequency 61 Hz. This means that the filter passes all frequencies, except for the range of 59–61 Hz; this would be used to filter out the mains hum from the 60 Hz power line. For countries where power transmission is at 50 Hz, the filter would have a 49–51 Hz range. Non-linearities of power amplifiers: when measuring the non-linearities of power amplifiers, a very narrow notch filter can be very useful to avoid the carrier frequency. Use of the filter may ensure that the maximum input power of a spectrum analyser used to detect spurious content will not be exceeded. Wave trap: a notch filter, usually a simple LC circuit, is used to remove a specific interfering frequency. This is a technique used with radio receivers that are so close to a transmitter that it swamps all other signals; the wave trap is used to remove, or greatly reduce, the signal from the local transmitter. Optical notch filters rely on destructive interference
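A minimal digital notch can be built by placing a pair of zeros on the unit circle at the offending frequency. The FIR sketch below is my own illustration (not a design from the article): it nulls a 60 Hz tone sampled at 1 kHz, the anti-hum case described above:

```python
import math

def fir_notch(x, f0, fs):
    """Second-order FIR notch: y[n] = x[n] - 2*cos(w0)*x[n-1] + x[n-2].
    Its zeros sit at e^{+/- j*w0}, so a sinusoid at f0 is cancelled exactly."""
    w0 = 2 * math.pi * f0 / fs
    b1 = -2 * math.cos(w0)
    return [x[n] + b1 * x[n - 1] + x[n - 2] for n in range(2, len(x))]

fs, f0 = 1000.0, 60.0
tone = [math.sin(2 * math.pi * f0 * n / fs) for n in range(200)]
residue = fir_notch(tone, f0, fs)  # essentially zero after the notch
```

A practical notch (such as the LC wave trap in the text) would also need a flat response away from the notch; an IIR design adds poles just inside the unit circle for that. This two-tap version shows only the zero-placement idea.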
30.
Lumped element model
–
The lumped element model simplifies the description of spatially distributed physical systems into a topology of discrete entities. It is useful in electrical systems, mechanical multibody systems, heat transfer, acoustics, etc. The lumped matter discipline is a set of imposed assumptions in electrical engineering that provides the foundation for the lumped circuit abstraction used in network analysis: the change of the magnetic flux in time outside a conductor is zero; the change of the charge in time inside conducting elements is zero; and the signal timescales of interest are much larger than the propagation delay of electromagnetic waves across the lumped element. The first two assumptions result in Kirchhoff's circuit laws when applied to Maxwell's equations and are applicable when the circuit is in steady state. The third assumption is the basis of the lumped element model used in network analysis. Less severe assumptions result in the distributed element model, while still not requiring the direct application of the full Maxwell equations. The lumped element model treats a circuit as idealized components joined by a network of perfectly conducting wires, and is valid whenever Lc ≪ λ, where Lc denotes the circuit's characteristic length and λ denotes the circuit's operating wavelength. Another way of viewing the validity of the lumped element model is to note that this model ignores the finite time it takes signals to propagate around a circuit. Whenever this propagation time is not significant to the application, the lumped element model can be used; this is the case when the propagation time is much less than the period of the signal involved. The exact point at which the lumped element model can no longer be used depends to an extent on how accurately the signal needs to be known in a given application. Real-world components exhibit non-ideal characteristics which are, in reality, distributed elements but are represented to a first-order approximation by lumped elements. 
A wire-wound resistor, for example, has significant inductance as well as resistance distributed along its length. A lumped capacitance model, also called lumped system analysis, reduces a thermal system to a number of discrete "lumps" and assumes that the temperature difference inside each lump is negligible. This approximation is useful to simplify otherwise complex differential heat equations. It was developed as a mathematical analog of electrical capacitance, although it also includes thermal analogs of electrical resistance. The method of approximation then suitably reduces one aspect of the transient conduction system (the spatial variation of temperature within the object) to a mathematically tractable form: a rising uniform temperature within the object or part of a system. An early-discovered example of a lumped-capacitance system which exhibits mathematically simple behavior due to such physical simplifications is a system which conforms to Newton's law of cooling. This law simply states that the temperature of a hot object progresses toward the temperature of its environment in an exponential fashion. Objects follow this law only if the rate of heat conduction within them is much larger than the rate of heat flow into or out of them; this in turn leads to simple exponential heating or cooling behavior. To determine the number of lumps, the Biot number, a dimensionless parameter of the system, is used
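Under the lumped assumption, Newton's law of cooling integrates to T(t) = T_env + (T0 − T_env)·e^(−t/τ). The sketch below, with purely illustrative numbers, checks the exponential behavior:

```python
import math

def lumped_temperature(t, t_env, t0, tau):
    """Newton's law of cooling under the lumped-capacitance assumption:
    the excess temperature decays exponentially with time constant tau."""
    return t_env + (t0 - t_env) * math.exp(-t / tau)

# A 90 degC object cooling in a 20 degC room with tau = 300 s:
# after one time constant, the 70 degC excess has fallen to 70/e ~ 25.75 degC.
after_one_tau = lumped_temperature(300.0, 20.0, 90.0, 300.0)
```

The model is only trustworthy when the Biot number hL/k is small (a common rule of thumb is below about 0.1), i.e. when conduction inside the object is much faster than heat exchange with the environment, as the text states.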
31.
Inductor
–
An inductor, also called a coil or reactor, is a passive two-terminal electrical component that stores electrical energy in a magnetic field when electric current is flowing through it. An inductor typically consists of a conductor, such as a wire, wound into a coil. When the current flowing through an inductor changes, the magnetic field induces a voltage in the conductor. According to Lenz's law, the direction of the induced electromotive force opposes the change in current that created it; as a result, inductors oppose any changes in current through them. An inductor is characterized by its inductance, which is the ratio of the voltage to the rate of change of current. In the International System of Units, the unit of inductance is the henry. Inductors have values that typically range from 1 µH to 1 H. Many inductors have a core made of iron or ferrite inside the coil. Along with capacitors and resistors, inductors are one of the three passive linear circuit elements that make up electronic circuits. Inductors are widely used in alternating current electronic equipment, particularly in radio equipment. They are used to block AC while allowing DC to pass; they are also used in electronic filters to separate signals of different frequencies, and in combination with capacitors to make tuned circuits, used to tune radio and TV receivers. An electric current flowing through a conductor generates a magnetic field surrounding it; any change in current, and therefore in the magnetic flux through the cross-section of the inductor, creates an opposing electromotive force in the conductor. An inductor is a component consisting of a wire or other conductor shaped to increase the magnetic flux through the circuit. Winding the wire into a coil increases the number of times the magnetic flux lines link the circuit, increasing the field and thus the inductance; the more turns, the higher the inductance. 
The inductance also depends on the shape of the coil and the separation of the turns. By adding a magnetic core made of a ferromagnetic material like iron inside the coil, the magnetizing field from the coil will induce magnetization in the material, increasing the magnetic flux. The high permeability of a ferromagnetic core can increase the inductance of a coil by a factor of several thousand over what it would be without it. Any change in the current through an inductor creates a changing flux, inducing a voltage across the inductor. For example, an inductor with an inductance of 1 henry produces an EMF of 1 volt when the current through the inductor changes at the rate of 1 ampere per second. This is usually taken to be the defining relation of the inductor. The dual of the inductor is the capacitor, which stores energy in an electric field rather than a magnetic field; its current-voltage relation is obtained by exchanging current and voltage in the inductor equations. The polarity of the induced voltage is given by Lenz's law, which states that it will be such as to oppose the change in current
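The defining relation v = L·di/dt can be checked numerically. The helper below is an illustrative sketch (not from the article) that estimates the EMF from sampled current:

```python
def inductor_voltage(L, i_samples, dt):
    """v = L * di/dt, with di/dt estimated by a finite difference
    between consecutive current samples."""
    return [L * (i_samples[k + 1] - i_samples[k]) / dt
            for k in range(len(i_samples) - 1)]

# 1 H inductor with current ramping at 1 A/s, sampled every 0.1 s:
# the induced voltage is 1 V throughout, matching the text's example.
ramp = [0.1 * k for k in range(11)]        # 0.0, 0.1, ..., 1.0 A
v = inductor_voltage(1.0, ramp, dt=0.1)
```
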
32.
Series and parallel circuits
–
Components of an electrical circuit or electronic circuit can be connected in many different ways. The two simplest of these are called series and parallel and occur frequently. Components connected in series are connected along a single path, so the same current flows through all of the components. Components connected in parallel are connected across the same two nodes, so the same voltage is applied to each component. A circuit composed solely of components connected in series is known as a series circuit; likewise, one connected completely in parallel is known as a parallel circuit. In a series circuit, the current through each of the components is the same, and the voltage across the circuit is the sum of the voltages across each component. In a parallel circuit, the voltage across each of the components is the same, and the total current is the sum of the currents through each component. Consider a very simple circuit consisting of four light bulbs and one 6 V battery. If a wire joins the battery to one bulb, to the next bulb, to the next bulb, to the next bulb, then back to the battery, in one continuous loop, the bulbs are said to be in series. If each bulb is wired to the battery in a separate loop, the bulbs are said to be in parallel. If the four light bulbs are connected in series, the same current flows through all of them, and the voltage drop is 1.5 V across each bulb. If the light bulbs are connected in parallel, the currents through the light bulbs combine to form the current in the battery, while the voltage drop is 6 V across each bulb. In a series circuit, every device must function for the circuit to be complete; one bulb burning out in a series circuit breaks the circuit. In parallel circuits, each light bulb has its own circuit, so all but one light could be burned out and the last one would still function. Series circuits are sometimes called current-coupled or daisy chain-coupled. The current in a series circuit goes through every component in the circuit; therefore, all of the components in a series connection carry the same current. There is only one path in a series circuit in which the current can flow. For example, if one of the light bulbs in an older-style string of Christmas tree lights burns out or is removed, the entire string stops working until the bulb is replaced. 
In a series circuit, the current is the same for all elements: I = I_1 = I_2 = ⋯ = I_n. The total resistance of resistors in series is equal to the sum of their individual resistances: R_total = R_1 + R_2 + ⋯ + R_n. Electrical conductance presents a reciprocal quantity to resistance; the total conductance of a series circuit of pure resistors therefore satisfies 1/G_total = 1/G_1 + 1/G_2 + ⋯ + 1/G_n. For the special case of two resistors in series, the total conductance is equal to G_total = G_1 G_2 / (G_1 + G_2)
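The series-resistance formula above, and its parallel counterpart (conductances add in parallel), are easy to mirror in code; the two helpers below are an illustrative sketch:

```python
def series_resistance(resistances):
    """R_total = R1 + R2 + ... + Rn"""
    return sum(resistances)

def parallel_resistance(resistances):
    """1/R_total = 1/R1 + 1/R2 + ... + 1/Rn
    (equivalently: conductances add in parallel)."""
    return 1.0 / sum(1.0 / r for r in resistances)

r_series = series_resistance([2.0, 3.0])      # 5 ohms
r_parallel = parallel_resistance([2.0, 3.0])  # 1.2 ohms
```

Note that two equal resistors in parallel give half the individual resistance, a handy sanity check on the reciprocal formula.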
33.
Kirchhoff's circuit laws
–
See also Kirchhoff's laws for other laws named after Gustav Kirchhoff. Kirchhoff's circuit laws are two equalities that deal with the current and potential difference in the lumped element model of electrical circuits. They were first described in 1845 by German physicist Gustav Kirchhoff. This generalized the work of Georg Ohm and preceded the work of Maxwell. Widely used in electrical engineering, they are also called Kirchhoff's rules or simply Kirchhoff's laws. Both of Kirchhoff's laws can be understood as corollaries of the Maxwell equations in the low-frequency limit. They are accurate for DC circuits, and for AC circuits at frequencies where the wavelengths of electromagnetic radiation are very large compared to the circuits. Kirchhoff's current law (KCL) is also called Kirchhoff's first law, Kirchhoff's point rule, or Kirchhoff's junction rule: at any node in a circuit, the sum of currents flowing into that node is equal to the sum of currents flowing out of it. This formula is valid for complex currents: ∑_{k=1}^{n} Ĩ_k = 0. The law is based on the conservation of charge, whereby the charge is the product of the current and the time. A matrix version of Kirchhoff's current law is the basis of most circuit simulation software; Kirchhoff's current law combined with Ohm's law is used in nodal analysis. KCL is applicable to any lumped network irrespective of the nature of the network, whether unilateral or bilateral, active or passive. Kirchhoff's voltage law (KVL) is also called Kirchhoff's second law, Kirchhoff's loop rule, and Kirchhoff's second rule. Similarly to KCL, it can be stated as ∑_{k=1}^{n} V_k = 0, where n is the total number of voltages measured. The voltages may also be complex: ∑_{k=1}^{n} Ṽ_k = 0. This law is based on the conservation of energy, whereby voltage is defined as the energy per unit charge. The total amount of energy gained per unit charge must be equal to the amount of energy lost per unit charge, as energy and charge are both conserved. In the low-frequency limit, the voltage drop around any loop is zero. 
This includes imaginary loops arranged arbitrarily in space, not limited to the loops delineated by the circuit elements. In the low-frequency limit, this is a corollary of Faraday's law of induction, and it has practical application in situations involving static electricity. KCL and KVL both depend on the lumped element model being applicable to the circuit in question; when the model is not applicable, the laws do not apply. KCL, in its usual form, is dependent on the assumption that current flows only in conductors, and that whenever current flows into one end of a conductor it immediately flows out the other end. This is not a safe assumption for high-frequency AC circuits, where the lumped element model is no longer applicable. It is often possible to improve the applicability of KCL by considering parasitic capacitances distributed along the conductors; significant violations of KCL can occur even at 60 Hz, which is not a very high frequency. In other words, KCL is valid only if the total electric charge, Q, remains constant in the region being considered. In practical cases this is always so when KCL is applied at a geometric point; when investigating a finite region, however, it is possible that the charge density within the region may change
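As a small worked example of KCL combined with Ohm's law, as used in nodal analysis (component values are illustrative): in a two-resistor voltage divider, the node voltage is fixed by requiring the current into the middle node to equal the current out of it.

```python
# Voltage divider: Vs -- R1 -- node V -- R2 -- ground.
# KCL at the node: (Vs - V)/R1 = V/R2  =>  V = Vs * R2 / (R1 + R2)
Vs, R1, R2 = 10.0, 1000.0, 2000.0
V = Vs * R2 / (R1 + R2)

i_in = (Vs - V) / R1    # current flowing into the node through R1
i_out = V / R2          # current flowing out of the node through R2
balance = i_in - i_out  # KCL says this must be zero
```

Writing one such balance equation per node and solving the resulting linear system is exactly the matrix form of KCL that circuit simulators use.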
34.
Linear differential equation
–
In mathematics, linear differential equations are differential equations having solutions which can be added together in particular linear combinations to form further solutions. They equate to 0 a polynomial that is linear in the value and various derivatives of a variable; linear differential equations can be ordinary or partial. The solutions to homogeneous linear differential equations form a vector space. For a function dependent on time, we may write the equation more expressly as Ly = f(t), and it is convenient to rewrite this equation in operator form, with L ≡ D^n + A_1 D^{n−1} + ⋯ + A_n, where D is the differential operator d/dt and the A_i are given functions. Such an equation is said to have order n, the index of the highest derivative of y that is involved. A typical simple example is the linear differential equation used to model radioactive decay: let N(t) denote the number of radioactive atoms remaining in some sample of material at time t. The case where f = 0 is called a homogeneous equation, and its solutions are called complementary functions; it is particularly important to the solution of the general case. When the A_i are numbers, the equation is said to have constant coefficients. The exponential function e^{zx} is one of the few functions to keep its shape after differentiation, allowing the sum of its derivatives to cancel out to zero. Division by e^{zx} gives the nth-order polynomial F(z) = z^n + A_1 z^{n−1} + ⋯ + A_n = 0. This algebraic equation F(z) = 0 is the characteristic equation considered later by Gaspard Monge and Augustin-Louis Cauchy. Formally, the terms y^{(k)} of the differential equation are replaced by z^k. Solving the polynomial gives n values of z: z_1, …, z_n. Substitution of any of those values for z into e^{zx} gives a solution e^{z_i x}. Since homogeneous linear differential equations obey the superposition principle, any linear combination of these functions also satisfies the differential equation. 
When these roots are all distinct, we have n distinct solutions to the differential equation. It can be shown, by applying the Vandermonde determinant, that these are linearly independent, and together they form a basis of the space of all solutions of the differential equation. The preceding gave a solution for the case when all zeros are distinct. For the general case, if z is a zero of F having multiplicity m, then, for k ∈ {0, 1, …, m − 1}, y = x^k e^{zx} is a solution of the ordinary differential equation. Applying this to all roots gives a collection of n distinct and linearly independent functions; as before, these functions make up a basis of the solution space. If the coefficients A_i of the equation are real, then real-valued solutions are generally preferable. A case that involves complex roots can be solved with the aid of Euler's formula. In the n = 2 case y″ + a y′ + b y = 0, the characteristic equation is of the form z² + a z + b = 0. In the case of a repeated root z, the solution is given by y = (C_1 + C_2 x) e^{zx}
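The characteristic-equation recipe for the n = 2 case can be sketched in code (names are illustrative); the roots of z² + az + b = 0 follow from the quadratic formula, with complex roots handled by `cmath`:

```python
import cmath

def characteristic_roots(a, b):
    """Roots of z^2 + a*z + b = 0, the characteristic equation
    of y'' + a*y' + b*y = 0."""
    disc = cmath.sqrt(a * a - 4 * b)
    return (-a + disc) / 2, (-a - disc) / 2

# Damped oscillator y'' + 2y' + 5y = 0  ->  z = -1 +/- 2j, so by Euler's
# formula the real solutions are e^{-x} (C1*cos 2x + C2*sin 2x).
z1, z2 = characteristic_roots(2, 5)
```

Distinct roots give the basis {e^{z1 x}, e^{z2 x}}; a repeated root z would instead give {e^{zx}, x e^{zx}}, as described in the text.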
35.
Exponential decay
–
A quantity is subject to exponential decay if it decreases at a rate proportional to its current value. Symbolically, this process can be expressed by the differential equation dN/dt = −λN, where λ > 0 is the decay constant. The solution to this equation is N(t) = N_0 e^{−λt}, where N(t) is the quantity at time t, and N_0 = N(0) is the initial quantity, i.e. the quantity at time t = 0. If the decaying quantity, N(t), is the number of discrete elements in a certain set, it is possible to compute the average length of time that an element remains in the set. This is called the mean lifetime, τ, and it can be shown that it relates to the decay rate as τ = 1/λ. For example, if the initial population of the assembly, N(0), is 1000, then the population at time τ, N(τ), is 368. A very similar equation will be seen below, which arises when the base of the exponential is chosen to be 2; in that case the scaling time is the half-life. A more intuitive characteristic of exponential decay for many people is the time required for the decaying quantity to fall to one half of its initial value. This time is called the half-life, and often denoted by the symbol t_{1/2}. The half-life can be written in terms of the decay constant, or the mean lifetime, as t_{1/2} = ln(2)/λ = τ ln(2). When this expression is inserted for τ in the exponential equation above, and ln 2 is absorbed into the base, the amount left is (1/2) raised to the number of half-lives that have passed; thus, after 3 half-lives there will be 1/2³ = 1/8 of the original material left. Therefore, the mean lifetime τ is equal to the half-life divided by the natural log of 2, or τ = t_{1/2}/ln(2) ≈ 1.44 t_{1/2}. For example, polonium-210 has a half-life of 138 days, and a mean lifetime of about 200 days. The equation that describes exponential decay is dN/dt = −λN or, by rearranging, dN/N = −λ dt. This is the form of the equation that is most commonly used to describe exponential decay. Any one of decay constant, mean lifetime, or half-life is sufficient to characterise the decay. The notation λ for the decay constant is a remnant of the usual notation for an eigenvalue; in this case, λ is the eigenvalue of the negative of the differentiation operator with N(t) as the corresponding eigenfunction. The units of the decay constant are s⁻¹
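The relations above are easy to verify numerically. The sketch below uses the polonium-210 half-life from the text and checks that exactly half the quantity remains after one half-life and 1/e of it after one mean lifetime:

```python
import math

def decay(n0, decay_const, t):
    """Exponential decay: N(t) = N0 * exp(-lambda * t)."""
    return n0 * math.exp(-decay_const * t)

half_life = 138.0                   # days (polonium-210, from the text)
lam = math.log(2) / half_life       # t_1/2 = ln(2)/lambda  =>  lambda
tau = 1.0 / lam                     # mean lifetime, about 199 days

remaining = decay(1000.0, lam, half_life)   # 500 after one half-life
after_tau = decay(1000.0, lam, tau)         # 1000/e, about 368, after tau
```
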
36.
Complex impedance
–
Electrical impedance is the measure of the opposition that a circuit presents to a current when a voltage is applied. In quantitative terms, it is the complex ratio of the voltage to the current in an alternating current circuit. Impedance extends the concept of resistance to AC circuits, and possesses both magnitude and phase, unlike resistance, which has only magnitude. When a circuit is driven with direct current, there is no distinction between impedance and resistance; the latter can be thought of as impedance with zero phase angle. The opposition caused by the two reactive effects (inductance and capacitance) is collectively referred to as reactance and forms the imaginary part of complex impedance, whereas resistance forms the real part. The symbol for impedance is usually Z, and it may be represented by writing its magnitude and phase; however, Cartesian complex number representation is often more powerful for circuit analysis purposes. The term impedance was coined by Oliver Heaviside in July 1886; Arthur Kennelly was the first to represent impedance with complex numbers, in 1893. Impedance is defined as the frequency-domain ratio of the voltage to the current. In other words, it is the voltage–current ratio for a single complex exponential at a particular frequency ω. In general, impedance will be a complex number, with the same units as resistance. For a sinusoidal current or voltage input, the polar form of the complex impedance relates the amplitude and phase of the voltage and current. In particular: the magnitude of the impedance is the ratio of the voltage amplitude to the current amplitude, and the phase of the impedance is the phase shift by which the current lags the voltage. The reciprocal of impedance is admittance. Impedance is represented as a complex quantity Z, and the term complex impedance may be used interchangeably. The symbol j is the imaginary unit, and is used instead of i in this context to avoid confusion with the symbol for electric current. In Cartesian form, impedance is defined as Z = R + jX, where the real part of impedance is the resistance R and the imaginary part is the reactance X. 
Where it is needed to add or subtract impedances, the Cartesian form is more convenient; but when quantities are multiplied or divided, the calculation becomes simpler if the polar form is used. A circuit calculation, such as finding the total impedance of two impedances in parallel, may require conversion between forms several times during the calculation. Conversion between the forms follows the normal conversion rules of complex numbers
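Python's built-in complex type makes these calculations direct. The sketch below (component values are illustrative) computes the impedance of a series RLC branch, Z = R + j(ωL − 1/(ωC)):

```python
import math

def series_rlc_impedance(r, l, c, omega):
    """Z = R + j*(omega*L - 1/(omega*C)). Python writes the imaginary
    unit as j, matching electrical-engineering convention."""
    return complex(r, omega * l - 1.0 / (omega * c))

R, L, C = 50.0, 1e-3, 1e-6              # ohms, henries, farads
w0 = 1.0 / math.sqrt(L * C)             # resonant angular frequency
z = series_rlc_impedance(R, L, C, w0)   # purely resistive at resonance
```

At resonance the inductive and capacitive reactances cancel, so the Cartesian form has zero imaginary part and the polar magnitude |Z| equals R; `abs(z)` and `cmath.phase(z)` give the polar form whenever it is needed.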
37.
Ohm
–
The ohm is the SI derived unit of electrical resistance, named after German physicist Georg Simon Ohm. The definition of the ohm has been revised several times; today the definition of the ohm is expressed in terms of the quantum Hall effect. In many cases the resistance of a conductor in ohms is approximately constant within a certain range of voltages, temperatures, and other parameters. In alternating current circuits, electrical impedance is also measured in ohms. The siemens is the SI derived unit of electric conductance and admittance; also known as the mho, it is the reciprocal of resistance in ohms. The power dissipated by a resistor may be calculated from its resistance and the voltage or current involved. Non-linear resistors have a value that may vary depending on the applied voltage. The rapid rise of electrotechnology in the last half of the 19th century created a demand for a rational, coherent, and consistent system of units; telegraphers and other early users of electricity in the 19th century needed a practical standard unit of measurement for resistance. Two different methods of establishing a system of units can be chosen: various artifacts, such as a length of wire or a standard cell, could be specified as producing defined quantities for resistance and voltage, or units could be defined in terms of other units of the system. The latter method ensures coherence with the units of energy; defining a unit for resistance that is coherent with units of energy and time in effect also requires defining units for potential and current. The absolute-units system, for example, related magnetic and electrostatic quantities to metric base units of mass, time, and length. These units had the advantage of simplifying the equations used in the solution of electromagnetic problems; however, the CGS units turned out to have impractical sizes for practical measurements. Various artifact standards were proposed as the definition of the unit of resistance. 
In 1860 Werner Siemens published a suggestion for a reproducible resistance standard in Poggendorff's Annalen der Physik und Chemie. He proposed a column of pure mercury, of one square millimetre cross section, one metre long: the Siemens mercury unit. However, this unit was not coherent with other units. One proposal was to devise a unit based on a mercury column that would be coherent, in effect adjusting the length to make the resistance one ohm. Not all users of units had the resources to carry out experiments to the required precision. The BAAS in 1861 appointed a committee including Maxwell and Thomson to report upon standards of electrical resistance. In the third report of the committee, in 1864, the resistance unit is referred to as "B.A. unit, or Ohmad"; by 1867 the unit is referred to as simply the ohm. The B.A. ohm was intended to be 10⁹ CGS units, but owing to an error in calculations the definition was 1.3% too small. The error was significant for preparation of working standards. On September 21, 1881 the Congrès internationale d'électriciens defined a practical unit of ohm for resistance, based on CGS units, using a mercury column at zero degrees Celsius
38.
Farad
–
The farad is the SI derived unit of electrical capacitance, the ability of a body to store an electrical charge. It is named after the English physicist Michael Faraday. One farad is defined as the capacitance across which, when charged with one coulomb, there is a potential difference of one volt; equally, one farad can be described as the capacitance which stores a one-coulomb charge across a potential difference of one volt. The relationship between capacitance, charge and potential difference is linear; for example, if the potential difference across a capacitor is halved, the quantity of charge it stores is also halved. For most applications, the farad is an impractically large unit of capacitance. Most electrical and electronic applications are covered by the following SI prefixes: 1 mF = 1000 μF = 1,000,000 nF; 1 μF = 0.000001 F = 1000 nF = 1,000,000 pF; 1 nF = 0.001 μF = 1000 pF. In 1881, at the International Congress of Electricians in Paris, the name farad was officially used for the unit of electrical capacitance. A capacitor consists of two conducting surfaces, frequently referred to as plates, separated by an insulating layer usually referred to as a dielectric. The original capacitor was the Leyden jar, developed in the 18th century; it is the accumulation of electric charge on the plates that results in capacitance. Values of capacitors are specified in farads, microfarads, nanofarads and picofarads. The millifarad is rarely used in practice, while the nanofarad is uncommon in North America. The size of commercially available capacitors ranges from around 0.1 pF to 5000 F supercapacitors. Capacitance values of 1 pF or lower can be achieved by twisting two short lengths of insulated wire together. The capacitance of the Earth's ionosphere with respect to the ground is calculated to be about 1 F. The picofarad is sometimes pronounced as "puff" or "pic", as in "a ten-puff capacitor". 
Similarly, "mic" is sometimes used informally to signify microfarads. If the Greek letter μ is not available, the notation uF is often used as a substitute for μF in electronics literature. A micro-microfarad, an obsolete unit sometimes found in texts prior to 1960, and on capacitor packages even more recently, is the equivalent of a picofarad; similarly, mmf or MMFD represented picofarads. The reciprocal of capacitance is called electrical elastance, the unit of which is the daraf. The abfarad is an obsolete CGS unit of capacitance equal to 10⁹ farads. The statfarad is a rarely used CGS unit equivalent to the capacitance of a capacitor with a charge of 1 statcoulomb across a potential difference of 1 statvolt; it is approximately 1.1126 picofarads
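The defining relation C = Q/V (one farad stores one coulomb per volt) and its linearity can be illustrated with a short sketch (the numbers are illustrative):

```python
def capacitance(charge_coulombs, voltage_volts):
    """C = Q / V: farads are coulombs per volt."""
    return charge_coulombs / voltage_volts

one_farad = capacitance(1.0, 1.0)     # 1 C at 1 V  ->  1 F
micro = capacitance(1e-6, 1.0)        # 1 uC at 1 V ->  1 uF

# Linearity: halving the voltage across a fixed capacitor
# halves the stored charge, Q = C * V.
q_full = 4.7e-6 * 5.0                 # a 4.7 uF capacitor at 5 V
q_half = 4.7e-6 * 2.5                 # the same capacitor at 2.5 V
```
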
39.
Complex number
–
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying the equation i² = −1. In this expression, a is the real part and b is the imaginary part of the complex number. If z = a + bi, then ℜ(z) = a and ℑ(z) = b. Complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. The complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose imaginary part is zero is a real number. In this way, the complex numbers are a field extension of the ordinary real numbers. As well as their use within mathematics, complex numbers have practical applications in many fields, including physics, chemistry, biology, economics and electrical engineering. The Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers; he called them "fictitious" during his attempts to find solutions to cubic equations in the 16th century. Complex numbers allow solutions to certain equations that have no solutions in real numbers. For example, the equation x² = −9 has no real solution; complex numbers provide a solution to this problem. The idea is to extend the real numbers with the imaginary unit i, where i² = −1. According to the fundamental theorem of algebra, all polynomial equations with real or complex coefficients in a single variable have a solution in complex numbers. A complex number is a number of the form a + bi; for example, −3.5 + 2i is a complex number. The real number a is called the real part of the complex number a + bi, and the real number b is its imaginary part. By this convention the imaginary part does not include the unit: it is b, not bi. The real part of a complex number z is denoted by Re(z) or ℜ(z), and the imaginary part by Im(z) or ℑ(z). For example, Re(−3.5 + 2i) = −3.5 and Im(−3.5 + 2i) = 2. Hence, in terms of its real and imaginary parts, a complex number z is equal to Re(z) + Im(z)·i.
This expression is known as the Cartesian form of z. A real number a can be regarded as a complex number a + 0i, whose imaginary part is 0
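The definitions above map directly onto Python's built-in complex type, which writes the imaginary unit i as j. A minimal sketch, not part of the article:

```python
# Python's complex type: -3.5 + 2i is written -3.5 + 2j.
z = -3.5 + 2j

assert z.real == -3.5              # Re(z), the real part
assert z.imag == 2.0               # Im(z); by convention it excludes the unit
assert 1j ** 2 == -1               # the defining property: i**2 == -1
assert z == z.real + z.imag * 1j   # Cartesian form: z = Re(z) + Im(z)*i
```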
40.
Imaginary unit
–
The imaginary unit or unit imaginary number (i) is a solution to the quadratic equation x² + 1 = 0. The term "imaginary" is used because there is no real number having a negative square. There are two square roots of −1, namely i and −i, just as there are two complex square roots of every real number other than zero, which has one double square root. In contexts where i is ambiguous or problematic, j or the Greek ι is sometimes used; in the disciplines of electrical engineering and control systems engineering, the imaginary unit is normally denoted by j instead of i, because i is commonly used to denote electric current. For the history of the imaginary unit, see Complex number § History. The imaginary number i is defined solely by the property that its square is −1; with i defined this way, it follows directly from algebra that i and −i are both square roots of −1. In polar form, i is represented as 1·e^(iπ/2), having an absolute value of 1. In the complex plane, i is the point located one unit from the origin along the imaginary axis. More precisely, once a solution i of the equation has been fixed, the value −i, which is distinct from i, is also a solution. Since the equation x² + 1 = 0 is the definition of i, it appears that the definition is ambiguous. However, no ambiguity results as long as one or other of the solutions is chosen and labelled as i; this is because, although −i and i are not quantitatively equivalent, there is no algebraic difference between i and −i. Both imaginary numbers have equal claim to being the number whose square is −1. The issue can be a subtle one; see also Complex conjugate and Galois group. A more precise explanation is to say that the automorphism group of the special orthogonal group SO(2) has exactly two elements: the identity and the automorphism which exchanges CW and CCW rotations. All these ambiguities can be resolved by adopting a rigorous definition of complex number, for example as the ordered pair (0, 1) in the usual construction of the complex numbers as two-dimensional vectors.
The imaginary unit is sometimes written √−1 in advanced mathematics contexts; however, great care needs to be taken when manipulating formulas involving radicals, since the radical sign notation is reserved for the principal square root function. Naively applying its calculation rules to negative arguments produces false results; for example, the chain 1/i = 1/√−1 = √(1/−1) = √(−1/1) = √−1 = i is invalid, since in fact 1/i = −i. The calculation rules √a · √b = √(a·b) and √a / √b = √(a/b) are only valid for real, non-negative values of a and b. These problems are avoided by writing and manipulating expressions like i√7, rather than √−7. For a more thorough discussion, see Square root and Branch point
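The failure of the rule √a·√b = √(a·b) for negative arguments can be demonstrated numerically with Python's cmath module, which computes the principal complex square root. A hedged sketch, not from the article:

```python
import cmath
import math

# The principal complex square root of -1 is i.
assert cmath.sqrt(-1) == 1j

# The rule sqrt(a)*sqrt(b) == sqrt(a*b) holds for real, non-negative a, b...
assert math.sqrt(4) * math.sqrt(9) == math.sqrt(36)

# ...but fails once a negative argument is involved:
lhs = cmath.sqrt(-1) * cmath.sqrt(-1)   # i * i = -1
rhs = cmath.sqrt((-1) * (-1))           # sqrt(1) = 1
assert lhs == -1
assert rhs == 1
assert lhs != rhs   # naive use of the rule would wrongly equate these
```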
41.
Neper
–
The neper (symbol: Np) is a logarithmic unit for ratios of measurements of physical field and power quantities, such as gain and loss of electronic signals. The unit's name is derived from the name of John Napier, the inventor of logarithms. As is the case for the decibel and bel, the neper is a unit of the International System of Quantities (ISQ), but not part of the International System of Units. Like the decibel, the neper is a unit in a logarithmic scale; while the bel uses the base-10 logarithm to compute ratios, the neper uses the natural logarithm. The value of a ratio in nepers is given by L_Np = ln(x1/x2) = ln x1 − ln x2, where x1 and x2 are the values of interest, and ln is the natural logarithm. When the values are quadratic in the amplitude (e.g. power), they are first linearised by taking the square root before the logarithm is taken. In the ISQ, the neper is defined as 1 Np = 1. The neper is defined in terms of ratios of field quantities; a power ratio 10 log r dB is equivalent to a field-quantity ratio 20 log r dB, since power is proportional to the square of the amplitude. Hence the neper and decibel are related via 1 Np = (20 log₁₀ e) dB ≈ 8.685889638 dB and 1 dB = (1/(20 log₁₀ e)) Np ≈ 0.115129254 Np. The decibel and the neper thus have a fixed ratio to each other. Like the decibel, the neper is a dimensionless unit; the International Telecommunication Union recognizes both units. The neper is a linear unit of relative difference, meaning that in nepers, relative differences add. This property is shared with logarithmic units in other bases, such as the bel. The centineper can thus be used as a linear replacement for percentage differences. The linear approximation for small differences, (1 + δ)(1 + ε) = 1 + δ + ε + δε ≈ 1 + δ + ε, is widely used; however, it is only approximate, with the error increasing for large percentage changes. See also: conversion of level gain and loss; neper, decibel, and bel; calculating transmission line losses
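The neper-to-decibel relation and the additivity of ratios in nepers can be checked with a few lines of Python. An illustrative sketch; the function names are mine:

```python
import math

NP_TO_DB = 20 * math.log10(math.e)   # 1 Np = (20 log10 e) dB

def ratio_in_nepers(x1: float, x2: float) -> float:
    """L_Np = ln(x1/x2) for field (amplitude) quantities."""
    return math.log(x1 / x2)

def nepers_to_decibels(l_np: float) -> float:
    return l_np * NP_TO_DB

# 1 Np is approximately 8.685889638 dB.
assert math.isclose(NP_TO_DB, 8.685889638, rel_tol=1e-9)

# An amplitude ratio of e is exactly 1 Np.
assert math.isclose(ratio_in_nepers(math.e, 1.0), 1.0)

# Nepers are linear: relative differences add.
assert math.isclose(ratio_in_nepers(4, 2) + ratio_in_nepers(2, 1),
                    ratio_in_nepers(4, 1))
```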
42.
Sine wave
–
A sine wave or sinusoid is a mathematical curve that describes a smooth repetitive oscillation. It is named after the sine function, of which it is the graph. It occurs often in pure and applied mathematics, as well as physics, engineering, signal processing and many other fields. Its most basic form as a function of time t is y(t) = A sin(2πft + φ) = A sin(ωt + φ), where: A = the amplitude, the peak deviation of the function from zero; f = the ordinary frequency, the number of oscillations (cycles) that occur each second of time; ω = 2πf, the angular frequency, the rate of change of the function argument in units of radians per second; φ = the phase. When φ is non-zero, the entire waveform appears to be shifted in time by the amount φ/ω seconds; a negative value represents a delay, and a positive value represents an advance. The sine wave is important in physics because it retains its shape when added to another sine wave of the same frequency and arbitrary phase. It is the only periodic waveform that has this property, and this property leads to its importance in Fourier analysis and makes it acoustically unique. The wavenumber k is related to the angular frequency by k = ω/v = 2πf/v = 2π/λ, where λ is the wavelength, f is the frequency, and v is the linear speed. This equation gives a sine wave for a single dimension; thus the generalized equation gives the displacement of the wave at a position x at time t along a single line. This could, for example, be considered the value of a wave along a wire. In two or three spatial dimensions, the same equation describes a travelling plane wave if position x and wavenumber k are interpreted as vectors, and their product as a dot product. For more complex waves, such as the height of a water wave in a pond after a stone has been dropped in, more complex equations are needed. This wave pattern occurs often in nature, including in wind waves and sound waves. A cosine wave is said to be sinusoidal, because cos(x) = sin(x + π/2), which is also a sine wave with a phase-shift of π/2 radians. 
Because of this head start, it is often said that the cosine function leads the sine function, or the sine lags the cosine. The human ear can recognize single sine waves as sounding clear because sine waves are representations of a single frequency with no harmonics. The presence of higher harmonics in addition to the fundamental causes variation in the timbre; on the other hand, if the sound contains aperiodic waves along with sine waves, then the sound will be perceived as noisy, as noise is characterized as being aperiodic or having a non-repetitive pattern. In 1822, the French mathematician Joseph Fourier discovered that sinusoidal waves can be used as simple building blocks to describe and approximate any periodic waveform
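The basic form y(t) = A sin(2πft + φ), and the fact that a cosine is a sine advanced by π/2, can be verified directly. A small sketch, not from the article:

```python
import math

def sine_wave(t: float, amplitude: float = 1.0,
              freq: float = 1.0, phase: float = 0.0) -> float:
    """y(t) = A * sin(2*pi*f*t + phi)."""
    return amplitude * math.sin(2 * math.pi * freq * t + phase)

# cos(x) = sin(x + pi/2): the cosine "leads" the sine by pi/2 radians.
for t in (0.0, 0.1, 0.25, 0.7):
    x = 2 * math.pi * t
    assert math.isclose(math.cos(x), sine_wave(t, phase=math.pi / 2),
                        abs_tol=1e-12)

# Adding two sine waves of the same frequency gives another sine wave of
# that frequency: sin(x) + sin(x + phi) = 2*cos(phi/2)*sin(x + phi/2).
phi = 0.8
for t in (0.0, 0.3, 0.6):
    x = 2 * math.pi * t
    lhs = math.sin(x) + math.sin(x + phi)
    rhs = 2 * math.cos(phi / 2) * math.sin(x + phi / 2)
    assert math.isclose(lhs, rhs, abs_tol=1e-12)
```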
43.
Voltage divider
–
In electronics, a voltage divider is a passive linear circuit that produces an output voltage (Vout) that is a fraction of its input voltage (Vin). Voltage division is the result of distributing the input voltage among the components of the divider. A simple example of a voltage divider is two resistors connected in series, with the input voltage applied across the resistor pair and the output voltage emerging from the connection between them. Resistor voltage dividers are used to create reference voltages, or to reduce the magnitude of a voltage so it can be measured. In electric power transmission, a voltage divider is used for measurement of high voltage. A voltage divider referenced to ground is created by connecting two electrical impedances in series, as shown in Figure 1. The input voltage is applied across the series impedances Z1 and Z2, and the output is the voltage across Z2. Z1 and Z2 may be composed of any combination of elements such as resistors, inductors and capacitors. A resistive divider is the special case where both impedances, Z1 and Z2, are purely resistive; in that case, using resistors alone, it is not possible to either invert the voltage or increase Vout above Vin. Consider a divider consisting of a resistor and capacitor as shown in Figure 3. The product τ = RC is called the time constant of the circuit. The ratio then depends on frequency, in this case decreasing as frequency increases; this circuit is, in fact, a basic low-pass filter. The ratio is a complex number, and actually contains both the amplitude and phase shift information of the filter. To extract just the amplitude ratio, calculate the magnitude of the ratio, that is, |Vout/Vin| = 1/√(1 + (ωRC)²). Inductive dividers split AC input according to inductance: Vout = (L2/(L1 + L2))·Vin. The above equation is for non-interacting inductors; inductive dividers split DC input according to the resistance of the elements, as for the resistive divider above. 
Capacitive dividers do not pass DC input. For an AC input, a simple capacitive equation is Vout = (C1/(C1 + C2))·Vin. Any leakage current in the capacitive elements requires use of the generalized expression with two impedances. By selection of parallel R and C elements in the proper proportions, the same division ratio can be maintained over a wide range of frequencies; this is the principle applied in compensated oscilloscope probes to increase measurement bandwidth. The output voltage of a voltage divider will vary according to the current it is supplying to its external electrical load. The effective source impedance coming from a divider of Z1 and Z2, as above, will be Z1 in parallel with Z2. To obtain a sufficiently stable output voltage, the output current must either be stable or limited to an appropriately small percentage of the divider's input current. Voltage regulators are used in lieu of passive voltage dividers when it is necessary to accommodate high or fluctuating load currents. Voltage dividers are used for adjusting the level of a signal and for bias of active devices in amplifiers; a Wheatstone bridge and a multimeter both include voltage dividers
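The resistive division ratio and the RC low-pass magnitude formula above can be sketched numerically. An illustrative example, not from the article; component values and names are mine:

```python
import math

def resistive_divider(vin: float, r1: float, r2: float) -> float:
    """Vout = Vin * R2 / (R1 + R2) for an unloaded resistive divider."""
    return vin * r2 / (r1 + r2)

def rc_lowpass_magnitude(freq_hz: float, r: float, c: float) -> float:
    """|Vout/Vin| = 1 / sqrt(1 + (w*R*C)**2), with w = 2*pi*f."""
    w = 2 * math.pi * freq_hz
    return 1 / math.sqrt(1 + (w * r * c) ** 2)

# Equal resistors halve the input voltage.
assert resistive_divider(10.0, 1000.0, 1000.0) == 5.0

# At the corner frequency f = 1/(2*pi*R*C), where w*R*C = 1,
# the magnitude falls to 1/sqrt(2) of the input.
r, c = 1e3, 1e-6          # 1 kOhm, 1 uF  ->  tau = RC = 1 ms
fc = 1 / (2 * math.pi * r * c)
assert math.isclose(rc_lowpass_magnitude(fc, r, c), 1 / math.sqrt(2))
```

The magnitude decreasing with frequency is what makes the RC divider a basic low-pass filter, as described above.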