The decibel (symbol: dB) is a unit of measurement used to express the ratio of one value of a power or field quantity to another on a logarithmic scale, the logarithmic quantity being called the power level or field level, respectively. It can also be used to express an absolute value; in that case, it expresses the ratio of a value to a fixed reference value indicated by a suffix. For example, if the reference value is 1 volt the suffix is "V" (as in dBV), and if the reference value is one milliwatt the suffix is "m" (as in dBm). Two different scales are used when expressing a ratio in decibels, depending on the nature of the quantities: power and field. When expressing a power ratio, the number of decibels is ten times its logarithm to base 10, so a change in power by a factor of 10 corresponds to a 10 dB change in level. When expressing field quantities, a change in amplitude by a factor of 10 corresponds to a 20 dB change in level. The two scales differ by a factor of two so that the related power and field levels change by the same number of decibels in, for example, resistive loads.
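The two scales can be sketched in a few lines of Python (the function names are my own, for illustration):

```python
import math

def db_from_power_ratio(p, p_ref):
    """Power level in decibels: 10 times the base-10 log of the power ratio."""
    return 10 * math.log10(p / p_ref)

def db_from_field_ratio(f, f_ref):
    """Field (amplitude) level in decibels: 20 times the base-10 log of the ratio."""
    return 20 * math.log10(f / f_ref)

# A factor of 10 in power is a 10 dB change in level ...
print(db_from_power_ratio(10.0, 1.0))   # 10.0
# ... while a factor of 10 in amplitude is a 20 dB change.
print(db_from_field_ratio(10.0, 1.0))   # 20.0
# In a resistive load P is proportional to V^2, so a 10x voltage increase
# (20 dB as a field ratio) is a 100x power increase (also 20 dB):
print(db_from_power_ratio(100.0, 1.0))  # 20.0
```

With a reference of one milliwatt, `db_from_power_ratio(p, 0.001)` gives a level in dBm.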
The definition of the decibel is based on the measurement of power in telephony of the early 20th century in the Bell System in the United States. One decibel is one tenth of one bel, named in honor of Alexander Graham Bell. Today, the decibel is used for a wide variety of measurements in science and engineering, most prominently in acoustics and control theory. In electronics, the gains of amplifiers, attenuation of signals, and signal-to-noise ratios are expressed in decibels. In the International System of Quantities, the decibel is defined as a unit of measurement for quantities of type level or level difference, which are defined as the logarithm of the ratio of power- or field-type quantities. The decibel originates from methods used to quantify signal loss in telegraph and telephone circuits; the original unit for loss was the Mile of Standard Cable (MSC). 1 MSC corresponded to the loss of power over a one-mile length of standard telephone cable at a frequency of 5000 radians per second (approximately 795.8 Hz), and roughly matched the smallest attenuation detectable to an average listener.
The standard telephone cable implied was "a cable having uniformly distributed resistance of 88 ohms per loop-mile and uniformly distributed shunt capacitance of 0.054 microfarads per mile". In 1924, Bell Telephone Laboratories received a favorable response to a new unit definition among members of the International Advisory Committee on Long Distance Telephony in Europe and replaced the MSC with the Transmission Unit (TU). 1 TU was defined such that the number of TUs was ten times the base-10 logarithm of the ratio of measured power to a reference power. The definition was conveniently chosen such that 1 TU approximated 1 MSC. In 1928, the Bell system renamed the TU the decibel, one tenth of a newly defined unit for the base-10 logarithm of the power ratio, which was named the bel in honor of the telecommunications pioneer Alexander Graham Bell. The bel is seldom used, as the decibel was the proposed working unit. The naming and early definition of the decibel is described in the NBS Standards Yearbook of 1931: Since the earliest days of the telephone, the need for a unit in which to measure the transmission efficiency of telephone facilities has been recognized.
The introduction of cable in 1896 afforded a stable basis for a convenient unit and the "mile of standard cable" came into general use shortly thereafter. This unit was employed up to 1923 when a new unit was adopted as being more suitable for modern telephone work; the new transmission unit is widely used among the foreign telephone organizations and recently it was termed the "decibel" at the suggestion of the International Advisory Committee on Long Distance Telephony. The decibel may be defined by the statement that two amounts of power differ by 1 decibel when they are in the ratio of 10^0.1, and any two amounts of power differ by N decibels when they are in the ratio of 10^(0.1 N). The number of transmission units expressing the ratio of any two powers is therefore ten times the common logarithm of that ratio; this method of designating the gain or loss of power in telephone circuits permits direct addition or subtraction of the units expressing the efficiency of different parts of the circuit... In 1954, J. W. Horton argued that the use of the decibel as a unit for quantities other than transmission loss led to confusion, and suggested the name "logit" for "standard magnitudes which combine by addition".
In April 2003, the International Committee for Weights and Measures considered a recommendation for the inclusion of the decibel in the International System of Units, but decided against the proposal. However, the decibel is recognized by other international bodies such as the International Electrotechnical Commission (IEC) and the International Organization for Standardization (ISO). The IEC permits the use of the decibel with field quantities as well as power, and this recommendation is followed by many national standards bodies, such as NIST, which justifies the use of the decibel for voltage ratios. The term field quantity is deprecated by ISO 80000-1. In spite of their widespread use, suffixes are not recognized by the IEC or ISO. ISO 80000-3 describes definitions for units of space and time; the decibel for use in acoustics is defined in ISO 80000-8. The major difference from the article below is that for acoustics the decibel has no
In radio-frequency engineering, a transmission line is a specialized cable or other structure designed to conduct alternating current of radio frequency, that is, currents with a frequency high enough that their wave nature must be taken into account. Transmission lines are used for purposes such as connecting radio transmitters and receivers with their antennas, distributing cable television signals, trunklines routing calls between telephone switching centres, computer network connections and high-speed computer data buses. This article covers two-conductor transmission line such as parallel line, coaxial cable and microstrip. Some sources refer to waveguide, dielectric waveguide and optical fibre as transmission line; however, these lines require different analytical techniques and so are not covered by this article. Ordinary electrical cables suffice to carry low-frequency alternating current, such as mains power, which reverses direction 100 to 120 times per second, and audio signals. However, they cannot be used to carry currents in the radio frequency range, above about 30 kHz, because the energy tends to radiate off the cable as radio waves, causing power losses.
Radio frequency currents also tend to reflect from discontinuities in the cable such as connectors and joints, and travel back down the cable toward the source. These reflections act as bottlenecks, preventing the signal power from reaching the destination. Transmission lines use specialized construction, and impedance matching, to carry electromagnetic signals with minimal reflections and power losses. The distinguishing feature of most transmission lines is that they have uniform cross-sectional dimensions along their length, giving them a uniform impedance, called the characteristic impedance, to prevent reflections. Types of transmission line include parallel line, coaxial cable, and planar transmission lines such as stripline and microstrip. The higher the frequency of electromagnetic waves moving through a given cable or medium, the shorter the wavelength of the waves. Transmission lines become necessary when the transmitted frequency's wavelength is sufficiently short that the length of the cable becomes a significant part of a wavelength. At microwave frequencies and above, power losses in transmission lines become excessive, and waveguides are used instead, which function as "pipes" to confine and guide the electromagnetic waves.
Some sources define waveguides as a type of transmission line. At still higher frequencies, in the terahertz and visible ranges, waveguides in turn become lossy, and optical methods, such as lenses and mirrors, are used to guide electromagnetic waves. The theory of sound wave propagation is similar mathematically to that of electromagnetic waves, so techniques from transmission line theory are also used to build structures to conduct acoustic waves. Mathematical analysis of the behaviour of electrical transmission lines grew out of the work of James Clerk Maxwell, Lord Kelvin and Oliver Heaviside. In 1855 Lord Kelvin formulated a diffusion model of the current in a submarine cable; the model correctly predicted the poor performance of the 1858 trans-Atlantic submarine telegraph cable. In 1885 Heaviside published the first papers that described his analysis of propagation in cables and the modern form of the telegrapher's equations. In many electric circuits, the length of the wires connecting the components can for the most part be ignored; that is, the voltage on the wire at a given time can be assumed to be the same at all points.
However, when the voltage changes in a time interval comparable to the time it takes for the signal to travel down the wire, the length becomes important and the wire must be treated as a transmission line. Stated another way, the length of the wire is important when the signal includes frequency components with corresponding wavelengths comparable to or less than the length of the wire. A common rule of thumb is that the cable or wire should be treated as a transmission line if the length is greater than 1/10 of the wavelength. At this length the phase delay and the interference of any reflections on the line become important and can lead to unpredictable behaviour in systems which have not been designed using transmission line theory. For the purposes of analysis, an electrical transmission line can be modelled as a two-port network, as follows: in the simplest case, the network is assumed to be linear, and the two ports are assumed to be interchangeable. If the transmission line is uniform along its length, its behaviour is described by a single parameter called the characteristic impedance, symbol Z0.
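The 1/10-wavelength rule of thumb can be sketched as follows; the helper name and the velocity-factor parameter are illustrative assumptions (a real cable's velocity factor is below 1):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def needs_transmission_line_analysis(cable_length_m, frequency_hz,
                                     velocity_factor=1.0):
    """Rule of thumb: treat the cable as a transmission line when its length
    exceeds 1/10 of the signal wavelength inside the cable."""
    wavelength_m = velocity_factor * C / frequency_hz
    return cable_length_m > wavelength_m / 10

# A 2 m cable carrying a 100 MHz signal (wavelength about 3 m):
print(needs_transmission_line_analysis(2.0, 100e6))  # True
# The same cable at 50 Hz mains frequency (wavelength about 6000 km):
print(needs_transmission_line_analysis(2.0, 50.0))   # False
```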
This is the ratio of the complex voltage of a given wave to the complex current of the same wave at any point on the line. Typical values of Z0 are 50 or 75 ohms for a coaxial cable, about 100 ohms for a twisted pair of wires, and about 300 ohms for a common type of untwisted pair used in radio transmission. When sending power down a transmission line, it is desirable that as much power as possible be absorbed by the load and as little as possible be reflected back to the source. This can be ensured by making the load impedance equal to Z0, in which case the transmission line is said to be matched. Some of the power fed into a transmission line is lost because of its resistance. This effect is called resistive loss. At high frequencies, another effect cal
In physics, power is the rate of doing work or of transferring heat, i.e. the amount of energy transferred or converted per unit time. Having no direction, it is a scalar quantity. In the International System of Units, the unit of power is the joule per second, known as the watt in honour of James Watt, the eighteenth-century developer of the condenser steam engine. Another common and traditional measure is horsepower. Being the rate of work, the equation for power can be written power = work / time. As a physical concept, power requires both a change in the physical system and a specified time in which the change occurs; this is distinct from the concept of work, which is only measured in terms of a net change in the state of the physical system. The same amount of work is done when carrying a load up a flight of stairs whether the person carrying it walks or runs, but more power is needed for running because the work is done in a shorter amount of time. The output power of an electric motor is the product of the torque that the motor generates and the angular velocity of its output shaft.
The power involved in moving a vehicle is the product of the traction force of the wheels and the velocity of the vehicle. The rate at which a light bulb converts electrical energy into light and heat is measured in watts: the higher the wattage, the more power, or equivalently the more electrical energy is used per unit time. The dimension of power is energy divided by time. The SI unit of power is the watt, equal to one joule per second. Other units of power include ergs per second, metric horsepower, and foot-pounds per minute. One horsepower is equivalent to 33,000 foot-pounds per minute, or the power required to lift 550 pounds by one foot in one second, and is equivalent to about 746 watts. Other units include dBm, a logarithmic measure relative to a reference of 1 milliwatt. Power, as a function of time, is the rate at which work is done, so it can be expressed by the equation P = dW/dt, where P is power, W is work and t is time. Because work is a force F applied over a distance x, W = F · x for a constant force, and power can be rewritten as P = dW/dt = d(F · x)/dt = F · dx/dt = F · v. In fact, this is valid for any force, as a consequence of applying the fundamental theorem of calculus.
As a simple example, burning one kilogram of coal releases much more energy than does detonating a kilogram of TNT, but because the TNT reaction releases energy much more quickly, it delivers far more power than the coal. If ΔW is the amount of work performed during a period of time of duration Δt, the average power P_avg over that period is given by the formula P_avg = ΔW / Δt; it is the average amount of energy converted per unit of time. The average power is simply called "power" when the context makes it clear. The instantaneous power is the limiting value of the average power as the time interval Δt approaches zero: P = lim(Δt→0) P_avg = lim(Δt→0) ΔW/Δt = dW/dt. In the case of constant power P, the amount of work performed during a period of duration t is given by W = P t. In the context of energy conversion, it is more customary to use the symbol E rather than W. Power in mechanical systems is the combination of forces and movement. In particular, power is the product of a force on an object and the object's velocity, or the product of a torque on a shaft and the shaft's angular velocity.
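The stair-climbing example can be made concrete with a short numerical sketch (the mass, height and times here are illustrative assumptions, not from the text):

```python
g = 9.81                         # gravitational acceleration, m/s^2
mass_kg, height_m = 100.0, 4.0   # carrying a 100 kg load up 4 m of stairs
work_j = mass_kg * g * height_m  # W = m*g*h, the same however fast we climb

def average_power(delta_w, delta_t):
    """P_avg = dW / dt over a finite interval: energy converted per unit time."""
    return delta_w / delta_t

print(average_power(work_j, 10.0))  # walking, 10 s: about 392 W
print(average_power(work_j, 2.0))   # running, 2 s: about 1962 W
# Same work in both cases, five times the power when the time is one fifth.
```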
Mechanical power is also described as the time derivative of work. In mechanics, the work done by a force F on an object that travels along a curve C is given by the line integral W_C = ∫_C F · v dt = ∫_C F · dx, where x defines the path C and v is the velocity along this path. If the force F is derivable from a potential applying the gradi
Frequency is the number of occurrences of a repeating event per unit of time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency and angular frequency. The period is the duration of time of one cycle in a repeating event, so the period is the reciprocal of the frequency. For example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period, the time interval between beats, is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, radio waves and light. For cyclical processes, such as rotation, oscillations, or waves, frequency is defined as a number of cycles per unit time. In physics and engineering disciplines, such as optics and radio, frequency is denoted by a Latin letter f or by the Greek letter ν (nu). The relation between the frequency f and the period T of a repeating event or oscillation is given by f = 1/T.
The SI derived unit of frequency is the hertz, named after the German physicist Heinrich Hertz. One hertz means that an event repeats once per second; if a TV has a refresh rate of 1 hertz, the TV's screen will change its picture once a second. A previous name for this unit was cycles per second. The SI unit for period is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm; 60 rpm equals one hertz. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period rather than frequency. Short and fast waves, like audio and radio, are usually described by their frequency instead of period. Angular frequency, denoted by the Greek letter ω, is defined as the rate of change of angular displacement θ, or the rate of change of the phase of a sinusoidal waveform, that is, the rate of change of the argument of the sine function: y(t) = sin(θ(t)) = sin(ωt) = sin(2πft), so that dθ/dt = ω = 2πf. Angular frequency is measured in radians per second but, for discrete-time signals, can also be expressed as radians per sampling interval, a dimensionless quantity.
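The relations f = 1/T, ω = 2πf and the rpm conversion above can be sketched as follows (the helper names are my own):

```python
import math

def period_from_frequency(f_hz):
    """T = 1 / f."""
    return 1.0 / f_hz

def angular_frequency(f_hz):
    """omega = 2*pi*f, in radians per second."""
    return 2 * math.pi * f_hz

def rpm_to_hertz(rpm):
    """60 revolutions per minute is one revolution per second, i.e. 1 Hz."""
    return rpm / 60.0

print(period_from_frequency(120 / 60.0))  # a 120-per-minute heartbeat: 0.5 s
print(rpm_to_hertz(60))                   # 1.0 Hz
print(angular_frequency(1.0))             # about 6.283 rad/s
```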
Angular frequency is larger than regular frequency by a factor of 2π. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes, e.g. y(x) = sin(θ(x)) = sin(kx), with dθ/dx = k. The wavenumber k is the spatial frequency analogue of angular temporal frequency and is measured in radians per meter. In the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength λ. Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave: f = v/λ. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ. When waves from a monochromatic source travel from one medium to another, their frequency remains the same; only their wavelength and speed change. Measurement of frequency can be done in the following ways. Counting: calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period and dividing the count by the length of the period.
For example, if 71 events occur within 15 seconds, the frequency is f = 71 / 15 s ≈ 4.73 Hz. If the number of counts is not large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time. The latter method introduces a random error into the count of between zero and one count, so on average half a count; this is called gating error and causes an average error in the calculated frequency of Δf = 1/(2T), where T is the gating interval.
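The counting method and its gating error can be sketched numerically (the function names are illustrative):

```python
def measured_frequency(count, gate_time_s):
    """Frequency estimate: events counted in a gate divided by the gate time."""
    return count / gate_time_s

def gating_error_hz(gate_time_s):
    """Average frequency error from counting whole events: 1 / (2 T)."""
    return 1.0 / (2.0 * gate_time_s)

# 71 events in a 15 s gate, as in the example above:
print(measured_frequency(71, 15.0))  # about 4.733 Hz
print(gating_error_hz(15.0))         # about 0.033 Hz average error
# A longer gate shrinks the error: a 150 s gate gives a tenth of it.
```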
Electrical impedance is the measure of the opposition that a circuit presents to a current when a voltage is applied. The term complex impedance may be used interchangeably. Quantitatively, the impedance of a two-terminal circuit element is the ratio of the complex representation of a sinusoidal voltage between its terminals to the complex representation of the current flowing through it. In general, it depends upon the frequency of the sinusoidal voltage. Impedance extends the concept of resistance to AC circuits, and possesses both magnitude and phase, unlike resistance, which has only magnitude. When a circuit is driven with direct current, there is no distinction between impedance and resistance. The notion of impedance is useful for performing AC analysis of electrical networks, because it allows relating sinusoidal voltages and currents by a simple linear law. In multiple-port networks, the two-terminal definition of impedance is inadequate, but the complex voltages at the ports and the currents flowing through them are still linearly related by the impedance matrix.
Impedance is a complex number, with the same units as resistance. Its symbol is Z, and it may be represented by writing its magnitude and phase in the polar form |Z|∠θ. However, the Cartesian complex number representation is often more powerful for circuit analysis purposes. The reciprocal of impedance is admittance, whose SI unit is the siemens, formerly called the mho. Instruments used to measure the electrical impedance are called impedance analyzers. The term impedance was coined by Oliver Heaviside in July 1886, and Arthur Kennelly was the first to represent impedance with complex numbers, in 1893. In addition to resistance as seen in DC circuits, impedance in AC circuits includes the effects of the induction of voltages in conductors by magnetic fields (inductance) and the electrostatic storage of charge induced by voltages between conductors (capacitance). The impedance caused by these two effects is collectively referred to as reactance and forms the imaginary part of complex impedance, whereas resistance forms the real part. Impedance is defined as the frequency-domain ratio of the voltage to the current.
In other words, it is the voltage–current ratio for a single complex exponential at a particular frequency ω. For a sinusoidal current or voltage input, the polar form of the complex impedance relates the amplitude and phase of the voltage and current. In particular, the magnitude of the complex impedance is the ratio of the voltage amplitude to the current amplitude. The impedance of a two-terminal circuit element is represented as a complex quantity Z. The polar form conveniently captures both magnitude and phase characteristics as Z = |Z| e^(j arg(Z)), where the magnitude |Z| represents the ratio of the voltage difference amplitude to the current amplitude, while the argument arg(Z) gives the phase difference between voltage and current; j is the imaginary unit, used instead of i in this context to avoid confusion with the symbol for electric current. In Cartesian form, impedance is defined as Z = R + jX, where the real part of impedance is the resistance R and the imaginary part is the reactance X.
Where it is needed to add or subtract impedances, the Cartesian form is more convenient, while multiplication and division are simpler in polar form. A circuit calculation, such as finding the total impedance of two impedances in parallel, may require conversion between forms several times during the calculation. Conversion between the forms follows the normal conversion rules of complex numbers. To simplify calculations, sinusoidal voltage and current waves are represented as complex-valued functions of time, denoted V and I: V = |V| e^(j(ωt + φ_V)), I = |I| e^(j(ωt + φ_I)). The impedance of a bipolar circuit is defined as the ratio of these quantities: Z = V / I = (|V| / |I|) e^(j(φ_V − φ_I)).
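Because Python's built-in complex numbers follow exactly these conversion rules, the magnitude and phase relations can be sketched directly (the component values are illustrative assumptions):

```python
import cmath

# A series resistor and inductor driven at angular frequency omega has
# impedance Z = R + j*omega*L, with reactance X = omega*L as the imaginary part.
R, L = 50.0, 1e-6            # 50 ohm resistor, 1 microhenry inductor
omega = 2 * cmath.pi * 10e6  # 10 MHz drive

Z = complex(R, omega * L)    # Cartesian form: Z = R + jX
print(abs(Z))                # |Z|: ratio of voltage amplitude to current amplitude
print(cmath.phase(Z))        # arg(Z): phase of voltage relative to current

# Phasor Ohm's law: V = Z * I for the complex amplitudes.
I = 0.1                      # 100 mA current phasor with zero phase
V = Z * I
print(abs(V), cmath.phase(V))  # voltage amplitude and its phase lead
```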
In communication systems, signal processing and electrical engineering, a signal is a function that "conveys information about the behavior or attributes of some phenomenon". In its most common usage, in electronics and telecommunication, this is a time-varying voltage, current or electromagnetic wave used to carry information. A signal may also be defined as an "observable change in a quantifiable entity". In the physical world, any quantity exhibiting variation in time or variation in space is a signal that might provide information on the status of a physical system, or convey a message between observers, among other possibilities. The IEEE Transactions on Signal Processing states that the term "signal" includes audio, speech, communication, sonar, radar and musical signals. In a later effort to redefine the term, anything that is only a function of space, such as an image, is excluded from the category of signals; it is also stated that a signal may or may not contain any information. In nature, signals can take the form of any action by one organism able to be perceived by other organisms, ranging from the release of chemicals by plants to alert nearby plants of the same type to a predator, to sounds or motions made by animals to alert other animals of the presence of danger or of food.
Signaling occurs in organisms all the way down to the cellular level, with cell signaling. Signaling theory, in evolutionary biology, proposes that a substantial driver for evolution is the ability of animals to communicate with each other by developing ways of signaling. In human engineering, signals are provided by a sensor, and often the original form of a signal is converted to another form of energy using a transducer. For example, a microphone converts an acoustic signal to a voltage waveform, and a speaker does the reverse. The formal study of the information content of signals is the field of information theory. The information in a signal is usually accompanied by noise. The term noise means an undesirable random disturbance, but is often extended to include unwanted signals conflicting with the desired signal. The prevention of noise is covered in part under the heading of signal integrity. The separation of desired signals from a background is the field of signal recovery, one branch of which is estimation theory, a probabilistic approach to suppressing random disturbances.
Engineering disciplines such as electrical engineering have led the way in the design and implementation of systems involving transmission and manipulation of information. In the latter half of the 20th century, electrical engineering itself separated into several disciplines, specialising in the design and analysis of systems that manipulate physical signals. Definitions specific to sub-fields are common. For example, in information theory, a signal is a codified message, that is, the sequence of states in a communication channel that encodes a message. In the context of signal processing, signals are analog and digital representations of analog physical quantities. In terms of their spatial distributions, signals may be categorized as point source signals and distributed source signals. In a communication system, a transmitter encodes a message to create a signal, carried to a receiver by the communications channel. For example, the words "Mary had a little lamb" might be the message spoken into a telephone.
The telephone transmitter converts the sounds into an electrical signal. The signal is transmitted to the receiving telephone by wires. In telephone networks, signaling, for example common-channel signaling, refers to phone number and other digital control information rather than the actual voice signal. Signals can be categorized in various ways. The most common distinction is between the discrete and continuous spaces that the functions are defined over, for example discrete and continuous time domains. Discrete-time signals are often referred to as time series in other fields. Continuous-time signals are often referred to as continuous signals. A second important distinction is between discrete-valued and continuous-valued signals. In digital signal processing, a digital signal may be defined as a sequence of discrete values associated with an underlying continuous-valued physical process. In digital electronics, digital signals are the continuous-time waveform signals in a digital system, representing a bit-stream. Another important property of a signal is its information content.
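The two distinctions above, discrete versus continuous time and discrete versus continuous values, can be sketched with a toy sampler and quantizer (all names and parameters here are my own, for illustration):

```python
import math

def sample(analog, t0, dt, n):
    """Discrete-time signal: the analog signal's values at n instants t0 + k*dt."""
    return [analog(t0 + k * dt) for k in range(n)]

def quantize(values, levels=5, lo=-1.0, hi=1.0):
    """Digital signal: map each value to the nearest of a finite set of levels."""
    step = (hi - lo) / (levels - 1)
    return [round((v - lo) / step) * step + lo for v in values]

analog = lambda t: math.sin(2 * math.pi * 5 * t)  # a 5 Hz continuous tone
discrete_time = sample(analog, 0.0, 0.01, 10)     # sampled at 100 Hz
digital = quantize(discrete_time)                 # now discrete in value too
print(digital)  # every entry is one of -1.0, -0.5, 0.0, 0.5, 1.0
```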
Two main types of signals encountered in practice are analog and digital. The figure shows a digital signal that results from approximating an analog signal by its values at particular time instants. Digital signals are quantized, while analog signals are continuous. An analog signal is any continuous signal for which the time-varying feature of the signal is a representation of some other time-varying quantity, i.e. analogous to another time-varying signal. For example, in an analog audio signal, the instantaneous voltage of the signal varies continuously with the pressure of the sound waves. It differs from a digital signal, in which the continuous quantity is a representation of a sequence of discrete values which can only take on one of a finite number of values. The term analog signal usually refers to electrical signals. An analog signal uses some property of the medium to convey the signal's information. For ex