Digital television is the transmission of television signals, including the sound channel, using digital encoding, in contrast to the earlier television technology, analog television, in which the video and audio are carried by analog signals. It is an innovative advance that represents the first significant evolution in television technology since color television in the 1950s. Digital TV can transmit in a new image format, HDTV, with greater resolution than analog TV and a wide-screen aspect ratio similar to recent movies, in contrast to the narrower screen of analog TV, and it makes more economical use of scarce radio spectrum space. A transition from analog to digital broadcasting began around 2006 in some countries; many industrial countries have now completed the changeover, while other countries are in various stages of adaptation. Different digital television broadcasting standards have been adopted in different parts of the world. Digital Video Broadcasting – Terrestrial (DVB-T) has been adopted in Europe, Asia, Africa and Australia, in total about 60 countries.
The Advanced Television System Committee (ATSC) standard uses eight-level vestigial sideband (8VSB) for terrestrial broadcasting. This standard has been adopted by six countries: the United States, Canada, Mexico, South Korea, the Dominican Republic and Honduras. Integrated Services Digital Broadcasting (ISDB) is a system designed to provide good reception to fixed receivers and also portable or mobile receivers; it utilizes two-dimensional interleaving. It supports hierarchical transmission of up to three layers and uses MPEG-2 video and Advanced Audio Coding. This standard has been adopted in Japan and the Philippines. ISDB-T International is an adaptation of this standard, using H.264/MPEG-4 AVC, that has been adopted in most of South America and is also being embraced by Portuguese-speaking African countries. Digital Terrestrial Multimedia Broadcasting (DTMB) adopts time-domain synchronous OFDM technology with a pseudo-random signal frame to serve as the guard interval of the OFDM block and the training symbol. The DTMB standard has been adopted in the People's Republic of China, including Hong Kong and Macau.
Digital Multimedia Broadcasting (DMB) is a digital radio transmission technology developed in South Korea as part of the national IT project for sending multimedia such as TV, radio and datacasting to mobile devices such as mobile phones, laptops and GPS navigation systems. Digital TV's roots have been tied closely to the availability of inexpensive, high-performance computers; it wasn't until the 1990s that digital TV became a real possibility. In the mid-1980s, as Japanese consumer electronics firms forged ahead with the development of HDTV technology, and as the MUSE analog format proposed by Japan's public broadcaster NHK was considered as a worldwide standard, Japanese advancements were seen as pacesetters that threatened to eclipse U.S. electronics companies. Until June 1990, the Japanese MUSE standard, based on an analog system, was the front-runner among the more than 23 different technical concepts under consideration. Then an American company, General Instrument, demonstrated the feasibility of a digital television signal. This breakthrough was of such significance that the FCC was persuaded to delay its decision on an ATV standard until a digitally based standard could be developed.
In March 1990, when it became clear that a digital standard was feasible, the FCC made a number of critical decisions. First, the Commission declared that the new ATV standard must be more than an enhanced analog signal; it had to be able to provide a genuine HDTV signal with at least twice the resolution of existing television images. To ensure that viewers who did not wish to buy a new digital television set could continue to receive conventional television broadcasts, it dictated that the new ATV standard must be capable of being "simulcast" on different channels. The new ATV standard also allowed the new DTV signal to be based on entirely new design principles. Although incompatible with the existing NTSC standard, the new DTV standard would be able to incorporate many improvements. The final standard adopted by the FCC did not require a single standard for scanning formats, aspect ratios, or lines of resolution. This outcome resulted from a dispute between the consumer electronics industry and the computer industry over which of the two scanning processes, interlaced or progressive, is superior.
Interlaced scanning, used in televisions worldwide, scans even-numbered lines first, then odd-numbered ones. Progressive scanning, the format used in computers, scans lines in sequence, from top to bottom. The computer industry argued that progressive scanning is superior because it does not "flicker" in the manner of interlaced scanning. It also argued that progressive scanning enables easier connections with the Internet and is more cheaply converted to interlaced formats than vice versa. The film industry supported progressive scanning because it offers a more efficient means of converting filmed programming into digital formats. For their part, the consumer electronics industry and broadcasters argued that interlaced scanning was the only technology that could transmit the highest quality pictures then feasible, i.e. 1,080 lines per picture and 1,920 pixels per line. Broadcasters also favored interlaced scanning because their vast archive of interlaced programming is not readily compatible with a progressive format.
Analog television or analogue television is the original television technology that uses analog signals to transmit video and audio. In an analog television broadcast, the brightness and sound are represented by rapid variations of either the amplitude, frequency or phase of the signal. Analog signals vary over a continuous range of possible values, which means that electronic noise and interference become reproduced by the receiver; thus with analog, a moderately weak signal becomes subject to interference. In contrast, a moderately weak digital signal and a strong digital signal transmit equal picture quality. Analog television can be distributed over a cable network using cable converters. All broadcast television systems used analog signals before the arrival of digital television. Motivated by the lower bandwidth requirements of compressed digital signals, since the 2000s a digital television transition has been proceeding in most countries of the world, with different deadlines for the cessation of analog broadcasts. The earliest systems of analog television were mechanical television systems, which used spinning disks with patterns of holes punched into the disc to scan an image.
A similar disk reconstructed the image at the receiver. Synchronization of the receiver disc rotation was handled through sync pulses broadcast with the image information. However, these mechanical systems were slow, the images were dim and flickered, and the image resolution was very low. Camera systems used similar spinning discs and required intensely bright illumination of the subject for the light detector to work. Analog television did not really begin as an industry until the development of the cathode-ray tube (CRT), which uses a focused electron beam to trace lines across a phosphor-coated surface. The electron beam could be swept across the screen much faster than any mechanical disc system, allowing for more closely spaced scan lines and much higher image resolution. Also, far less maintenance was required of an all-electronic system compared to a spinning disc system. All-electronic systems became popular with households after the Second World War. Broadcasters of analog television encode their signal using different systems.
The official systems of transmission are named: A, B, C, D, E, F, G, H, I, K, K1, L, M and N. These systems determine the number of scan lines, frame rate, channel width, video bandwidth, video-audio separation, and so on. The colors in those systems are encoded with one of three color coding schemes: NTSC, PAL, or SECAM, and then use RF modulation to modulate this signal onto a very high frequency (VHF) or ultra high frequency (UHF) carrier. Each frame of a television image is composed of lines drawn on the screen. The lines are of varying brightness; the whole set of lines is drawn quickly enough that the human eye perceives it as one image. The next sequential frame is displayed, allowing the depiction of motion. The analog television signal contains timing and synchronization information so that the receiver can reconstruct a two-dimensional moving image from a one-dimensional time-varying signal. The first commercial television systems were black-and-white. A practical television system needs to take luminance, chrominance and audio signals, and broadcast them over a radio transmission. The transmission system must include a means of television channel selection.
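As a worked illustration of how these system parameters combine: the horizontal scan rate is simply lines per frame times frames per second. The sketch below uses the commonly cited values for system M (NTSC countries) and system B/G (PAL countries); the helper function is illustrative, not from the text above.

```python
def line_frequency_hz(lines_per_frame, frames_per_second):
    """Horizontal scan rate: every line of every frame is traced each second."""
    return lines_per_frame * frames_per_second

# System M: 525 lines at 30/1.001 (~29.97) frames per second
print(round(line_frequency_hz(525, 30 / 1.001), 1))  # 15734.3 Hz
# System B/G: 625 lines at 25 frames per second
print(line_frequency_hz(625, 25))                    # 15625 Hz
```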
Analog broadcast television systems come in a variety of frame rates and resolutions. Further differences exist in the modulation of the audio carrier. The monochrome combinations still existing in the 1950s were standardized by the International Telecommunication Union (ITU) as capital letters A through N. When color television was introduced, the hue and saturation information was added to the monochrome signals in a way that black-and-white televisions ignore. In this way backwards compatibility was achieved; that concept is true for all analog television standards. There were three standards for the way the additional color information could be encoded and transmitted. The first was the American NTSC color television system. The European/Australian PAL and the French and former Soviet Union SECAM standards were developed later and attempt to cure certain defects of the NTSC system. PAL's color encoding is similar to the NTSC system. SECAM, though, uses a different modulation approach than PAL or NTSC. In principle, all three color encoding systems can be combined with any scan line/frame rate combination.
Therefore, in order to describe a given signal completely, it's necessary to quote the color system and the broadcast standard as a capital letter. For example, the United States, Canada and South Korea use NTSC-M, Japan uses NTSC-J, the UK uses PAL-I, France uses SECAM-L, much of Western Europe and Australia use PAL-B/G, most of Eastern Europe uses SECAM-D/K or PAL-D/K, and so on. However, not all of these possible combinations actually exist. NTSC is only used with system M, though there were experiments with NTSC-A in the UK and NTSC-N in part of South America. PAL is used with a variety of 625-line standards but also with the North American 525-line standard, accordingly named PAL-M.
In communications and electronic engineering, an intermediate frequency (IF) is a frequency to which a carrier wave is shifted as an intermediate step in transmission or reception. The intermediate frequency is created by mixing the carrier signal with a local oscillator signal in a process called heterodyning, resulting in a signal at the difference or beat frequency. Intermediate frequencies are used in superheterodyne radio receivers, in which an incoming signal is shifted to an IF for amplification before final detection is done. Conversion to an intermediate frequency is useful for several reasons. When several stages of filters are used, they can all be set to a fixed frequency, which makes them easier to build and to tune. Lower-frequency transistors have higher gains, so fewer stages are required. It's also easier to make sharply selective filters at lower fixed frequencies. There may be several such stages of intermediate frequency in a superheterodyne receiver. Intermediate frequencies are used for three general reasons.
At high frequencies, signal processing circuitry performs poorly. Active devices such as transistors cannot deliver much amplification. Ordinary circuits using capacitors and inductors must be replaced with cumbersome high-frequency techniques such as striplines and waveguides. So a high-frequency signal is converted to a lower IF for more convenient processing. For example, in satellite dishes, the microwave downlink signal received by the dish is converted to a much lower IF at the dish, to allow an inexpensive coaxial cable to carry the signal to the receiver inside the building. Bringing the signal in at the original microwave frequency would require an expensive waveguide. A second reason, in receivers that can be tuned to different frequencies, is to convert the various different frequencies of the stations to a common frequency for processing. It is difficult to build multistage amplifiers and detectors that can have all stages track in tuning different frequencies, but it is comparatively easy to build tunable oscillators.
Superheterodyne receivers tune in different frequencies by adjusting the frequency of the local oscillator on the input stage; all processing after that is done at the same fixed frequency, the IF. Without using an IF, all the complicated filters and detectors in a radio or television would have to be tuned in unison each time the frequency was changed, as was necessary in the early tuned radio frequency (TRF) receivers. A more important advantage is that the bandwidth of a filter is proportional to its center frequency. In receivers like the TRF in which the filtering is done at the incoming RF frequency, as the receiver is tuned to higher frequencies its bandwidth increases. The main reason for using an intermediate frequency is to improve frequency selectivity. In communication circuits, a common task is to separate out or extract signals or components of a signal that are close together in frequency; this is called filtering. Some examples are picking up a radio station among several that are close in frequency, or extracting the chrominance subcarrier from a TV signal.
With all known filtering techniques, the filter's bandwidth increases proportionately with the frequency. So a narrower bandwidth and more selectivity can be achieved by converting the signal to a lower IF and performing the filtering at that frequency. FM and television broadcasting with their narrow channel widths, as well as more modern telecommunications services such as cell phones and cable television, would be impossible without using frequency conversion. The most commonly used intermediate frequencies for broadcast receivers are around 455 kHz for AM receivers and 10.7 MHz for FM receivers. In special-purpose receivers other frequencies can be used. A dual-conversion receiver may have two intermediate frequencies, a higher first one to improve image rejection and a second, lower one for the desired selectivity. A first intermediate frequency may even be higher than the input signal, so that all undesired responses can be filtered out by a fixed-tuned RF stage. In a digital receiver, the analog-to-digital converter operates at low sampling rates, so the input RF must be mixed down to an IF to be processed.
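This trade-off can be made concrete with a rough numerical sketch. Assuming a simple resonant filter whose bandwidth is its center frequency divided by its quality factor Q (a standard approximation; the helper function is illustrative), the same Q yields a far narrower passband at the 455 kHz AM IF than at a typical RF carrier frequency:

```python
def filter_bandwidth_hz(center_freq_hz, q_factor):
    """Approximate bandwidth of a resonant filter: BW = f0 / Q."""
    return center_freq_hz / q_factor

# The same Q = 100 filter, applied at an RF carrier and at the standard AM IF:
print(filter_bandwidth_hz(100e6, 100))  # 1000000.0 Hz -- far too wide for 10 kHz AM channels
print(filter_bandwidth_hz(455e3, 100))  # 4550.0 Hz -- narrow enough for channel selectivity
```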
Intermediate frequency tends to be in a lower frequency range than the transmitted RF frequency. However, the choice of IF depends mostly on the available components, such as mixers, filters and others, that can operate at lower frequency. There are other factors involved in deciding the IF, because a lower IF is more susceptible to noise and a higher IF can cause clock jitter. Modern satellite television receivers use several intermediate frequencies. The 500 television channels of a typical system are transmitted from the satellite to subscribers in the Ku microwave band, in two subbands of 10.7–11.7 and 11.7–12.75 GHz. The downlink signal is received by a satellite dish. In the box at the focus of the dish, called a low-noise block downconverter (LNB), each block of frequencies is converted to the IF range of 950–2150 MHz by two fixed-frequency local oscillators at 9.75 and 10.6 GHz. One of the two blocks is selected by a control signal from the set-top box inside, which switches on one of the local oscillators.
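The arithmetic of this block downconversion can be checked directly: heterodyning shifts each band down to the difference frequency, so both Ku subbands land in the quoted 950–2150 MHz IF range. A minimal sketch (the helper name is illustrative):

```python
def downconvert_mhz(rf_mhz, lo_mhz):
    """Heterodyne mixing: the IF is the difference frequency |RF - LO|."""
    return abs(rf_mhz - lo_mhz)

# Low band (10700-11700 MHz) mixed with the 9750 MHz local oscillator:
print(downconvert_mhz(10700, 9750), downconvert_mhz(11700, 9750))    # 950 1950
# High band (11700-12750 MHz) mixed with the 10600 MHz local oscillator:
print(downconvert_mhz(11700, 10600), downconvert_mhz(12750, 10600))  # 1100 2150
```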
This IF is carried into the building to the television receiver on a coaxial cable. At the set-top box, the signal is converted to a lower IF of 480 MHz for filtering, by a variable-frequency oscillator. This is sent through a 30 MHz bandpass filter, which selects the signal from one of the transponders on the satellite, each of which carries several channels. Further
Digital signal processing
Digital signal processing (DSP) is the use of digital processing, such as by computers or more specialized digital signal processors, to perform a wide variety of signal processing operations. The signals processed in this manner are a sequence of numbers that represent samples of a continuous variable in a domain such as time, space, or frequency. Digital signal processing and analog signal processing are subfields of signal processing. DSP applications include audio and speech processing, sonar and other sensor array processing, spectral density estimation, statistical signal processing, digital image processing, signal processing for telecommunications, control systems, and biomedical engineering, among others. DSP can involve linear or nonlinear operations. Nonlinear signal processing is related to nonlinear system identification and can be implemented in the time and spatio-temporal domains. The application of digital computation to signal processing allows for many advantages over analog processing in many applications, such as error detection and correction in transmission as well as data compression.
DSP is applicable to both streaming data and static data. To digitally analyze and manipulate an analog signal, it must be digitized with an analog-to-digital converter (ADC). Sampling is carried out in two stages: discretization and quantization. Discretization means that the signal is divided into equal intervals of time, and each interval is represented by a single measurement of amplitude. Quantization means that each amplitude measurement is approximated by a value from a finite set; rounding real numbers to integers is an example. The Nyquist–Shannon sampling theorem states that a signal can be exactly reconstructed from its samples if the sampling frequency is greater than twice the highest frequency component in the signal. In practice, the sampling frequency is often significantly higher than this minimum. Theoretical DSP analyses and derivations are performed on discrete-time signal models with no amplitude inaccuracies, "created" by the abstract process of sampling. Numerical methods require a quantized signal, such as those produced by an ADC. The processed result might be a set of statistics, but more often it is another quantized signal, converted back to analog form by a digital-to-analog converter (DAC).
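The two sampling stages can be sketched in a few lines of Python (the helper name and parameter values are illustrative, not from any particular library):

```python
import math

def sample_and_quantize(signal, fs_hz, n_samples, step):
    """Discretize in time at rate fs_hz, then round each amplitude to a multiple of step."""
    samples = [signal(k / fs_hz) for k in range(n_samples)]  # discretization
    return [round(s / step) * step for s in samples]         # quantization

# A 100 Hz sine sampled at 8 kHz (well above the 200 Hz minimum required by
# the sampling theorem), with amplitudes quantized to steps of 0.125:
digital = sample_and_quantize(lambda t: math.sin(2 * math.pi * 100 * t),
                              fs_hz=8000, n_samples=80, step=0.125)
print(digital[0], digital[20])  # 0.0 1.0 -- the sine's peak lands exactly on a level
```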
In DSP, engineers study digital signals in one of the following domains: time domain, spatial domain, frequency domain, and wavelet domains. They choose the domain in which to process a signal by making an informed assumption as to which domain best represents the essential characteristics of the signal and the processing to be applied to it. A sequence of samples from a measuring device produces a temporal or spatial domain representation, whereas a discrete Fourier transform produces the frequency domain representation. The most common processing approach in the time or space domain is enhancement of the input signal through a method called filtering. Digital filtering consists of some linear transformation of a number of surrounding samples around the current sample of the input or output signal. There are various ways to characterize filters. Linear filters satisfy the superposition principle, i.e. if an input is a weighted linear combination of different signals, the output is a similarly weighted linear combination of the corresponding output signals.
A causal filter uses only previous samples of the input or output signals, while a non-causal filter uses future input samples. A non-causal filter can be changed into a causal filter by adding a delay to it. A time-invariant filter has constant properties over time. A stable filter produces an output that converges to a constant value with time, or remains bounded within a finite interval. An unstable filter can produce an output that grows without bounds, even with bounded or zero input. A finite impulse response (FIR) filter uses only the input signals, while an infinite impulse response (IIR) filter uses both the input signal and previous samples of the output signal. FIR filters are always stable. A filter can be represented by a block diagram, which can then be used to derive a sample processing algorithm to implement the filter with hardware instructions. A filter may also be described as a difference equation, a collection of zeros and poles, or an impulse response or step response. The output of a linear digital filter to any given input may be calculated by convolving the input signal with the impulse response.
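As a minimal sketch of that last point, an FIR filter's output is the convolution of the input with its finite impulse response. The function below is illustrative, not from any library:

```python
def fir_filter(x, h):
    """Convolve input sequence x with impulse response h (causal FIR filter)."""
    y = []
    for n in range(len(x)):
        # y[n] = sum_k h[k] * x[n-k], using only present and past input samples
        y.append(sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0))
    return y

# A 3-tap moving-average FIR smooths the edge of a step input:
print(fir_filter([0, 0, 3, 3, 3, 3], [1/3, 1/3, 1/3]))
# [0.0, 0.0, 1.0, 2.0, 3.0, 3.0]
```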
Signals are converted from the time or space domain to the frequency domain through use of the Fourier transform. The Fourier transform converts the time or space information into a magnitude and phase component for each frequency. In some applications, how the phase varies with frequency can be a significant consideration. Where phase is unimportant, the Fourier transform is often converted to the power spectrum, which is the magnitude of each frequency component squared. The most common purpose for analysis of signals in the frequency domain is analysis of signal properties. The engineer can study the spectrum to determine which frequencies are present in the input signal and which are missing. Frequency domain analysis is also called spectrum or spectral analysis. Filtering in non-realtime work can be achieved in the frequency domain by transforming the signal, applying the filter, and converting back to the time domain; this can be an efficient implementation and can g
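A tiny discrete Fourier transform illustrates the idea. This naive O(n²) version (an illustrative sketch, not a library call) shows a pure tone's energy concentrated in a single frequency bin:

```python
import cmath
import math

def dft_magnitudes(x):
    """Magnitude of each DFT bin of a real or complex sequence x."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
            for k in range(n)]

# 32 samples of a cosine completing exactly 4 cycles: the spectrum peaks at
# bin 4 (and its mirror image, bin 28); every other bin is near zero.
x = [math.cos(2 * math.pi * 4 * t / 32) for t in range(32)]
mags = dft_magnitudes(x)
print(max(range(16), key=lambda k: mags[k]))  # 4
```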
In communication systems, signal processing, and electrical engineering, a signal is a function that "conveys information about the behavior or attributes of some phenomenon". In its most common usage, in electronics and telecommunication, this is a time-varying voltage, current or electromagnetic wave used to carry information. A signal may also be defined as an "observable change in a quantifiable entity". In the physical world, any quantity exhibiting variation in time or variation in space is a signal that might provide information on the status of a physical system, or convey a message between observers, among other possibilities. The IEEE Transactions on Signal Processing states that the term "signal" includes audio, speech, communication, sonar, radar and musical signals. In one effort to redefine a signal, anything that is only a function of space, such as an image, is excluded from the category of signals. Also, it is stated that a signal may or may not contain any information. In nature, signals can take the form of any action by one organism able to be perceived by other organisms, ranging from the release of chemicals by plants to alert nearby plants of the same type of a predator, to sounds or motions made by animals to alert other animals of the presence of danger or of food.
Signaling occurs in organisms all the way down to the cellular level, with cell signaling. Signaling theory, in evolutionary biology, proposes that a substantial driver for evolution is the ability of animals to communicate with each other by developing ways of signaling. In human engineering, signals are typically provided by a sensor, and often the original form of a signal is converted to another form of energy using a transducer. For example, a microphone converts an acoustic signal to a voltage waveform, and a speaker does the reverse. The formal study of the information content of signals is the field of information theory. The information in a signal is usually accompanied by noise. The term noise usually means an undesirable random disturbance, but is often extended to include unwanted signals conflicting with the desired signal. The prevention of noise is covered in part under the heading of signal integrity. The separation of desired signals from a background is the field of signal recovery, one branch of which is estimation theory, a probabilistic approach to suppressing random disturbances.
Engineering disciplines such as electrical engineering have led the way in the design and implementation of systems involving transmission and manipulation of information. In the latter half of the 20th century, electrical engineering itself separated into several disciplines, specialising in the design and analysis of systems that manipulate physical signals. Definitions specific to sub-fields are common. For example, in information theory, a signal is a codified message, that is, the sequence of states in a communication channel that encodes a message. In the context of signal processing, signals are analog and digital representations of analog physical quantities. In terms of their spatial distributions, signals may be categorized as point source signals and distributed source signals. In a communication system, a transmitter encodes a message to create a signal, carried to a receiver by the communications channel. For example, the words "Mary had a little lamb" might be the message spoken into a telephone.
The telephone transmitter converts the sounds into an electrical signal. The signal is then transmitted to the receiving telephone by wires. In telephone networks, signaling, for example common-channel signaling, refers to phone number and other digital control information rather than the actual voice signal. Signals can be categorized in various ways. The most common distinction is between discrete and continuous spaces that the functions are defined over, for example discrete and continuous time domains. Discrete-time signals are often referred to as time series in other fields. Continuous-time signals are often referred to as continuous signals. A second important distinction is between discrete-valued and continuous-valued signals. In digital signal processing, a digital signal may be defined as a sequence of discrete values associated with an underlying continuous-valued physical process. In digital electronics, digital signals are the continuous-time waveform signals in a digital system, representing a bit-stream. Another important property of a signal is its information content.
Two main types of signals encountered in practice are analog and digital. The figure shows a digital signal that results from approximating an analog signal by its values at particular time instants. Digital signals are quantized, while analog signals are continuous. An analog signal is any continuous signal for which the time-varying feature of the signal is a representation of some other time-varying quantity, i.e. analogous to another time-varying signal. For example, in an analog audio signal, the instantaneous voltage of the signal varies continuously with the pressure of the sound waves. It differs from a digital signal, in which the continuous quantity is a representation of a sequence of discrete values which can only take on one of a finite number of values. The term analog signal usually refers to electrical signals. An analog signal uses some property of the medium to convey the signal's information. For ex
In telecommunications and signal processing, frequency modulation (FM) is the encoding of information in a carrier wave by varying the instantaneous frequency of the wave. In analog frequency modulation, such as FM radio broadcasting of an audio signal representing voice or music, the instantaneous frequency deviation, the difference between the frequency of the carrier and its center frequency, is proportional to the modulating signal. Digital data can be encoded and transmitted via FM by shifting the carrier's frequency among a predefined set of frequencies representing digits; for example, one frequency can represent a binary 1 and a second can represent binary 0. This modulation technique is known as frequency-shift keying (FSK). FSK is used in modems such as fax modems, and can be used to send Morse code. Radioteletype also uses FSK. Frequency modulation is widely used for FM radio broadcasting. It is also used in telemetry, seismic prospecting, monitoring newborns for seizures via EEG, two-way radio systems, music synthesis, magnetic tape-recording systems and some video-transmission systems.
In radio transmission, an advantage of frequency modulation is that it has a larger signal-to-noise ratio and therefore rejects radio frequency interference better than an equal-power amplitude modulation signal. For this reason, most music is broadcast over FM radio. Frequency modulation and phase modulation are the two complementary principal methods of angle modulation; these methods contrast with amplitude modulation, in which the amplitude of the carrier wave varies while the frequency and phase remain constant. If the information to be transmitted (the baseband signal) is x_m(t) and the sinusoidal carrier is x_c(t) = A_c cos(2π f_c t), where f_c is the carrier's base frequency and A_c is the carrier's amplitude, the modulator combines the carrier with the baseband data signal to get the transmitted signal:

  y(t) = A_c cos(2π ∫_0^t f(τ) dτ)
       = A_c cos(2π ∫_0^t [f_c + f_Δ x_m(τ)] dτ)
       = A_c cos(2π f_c t + 2π f_Δ ∫_0^t x_m(τ) dτ)

where f_Δ = K_f A_m, K_f being the sensitivity of the frequency modulator and A_m being the amplitude of the modulating signal or baseband signal. In this equation, f(τ) is the instantaneous frequency of the oscillator and f_Δ is the frequency deviation, which represents the maximum shift away from f_c in one direction, assuming x_m is limited to the range ±1.
While most of the energy of the signal is contained within f_c ± f_Δ, it can be shown by Fourier analysis that a wider range of frequencies is required to precisely represent an FM signal. The frequency spectrum of an actual FM signal has components extending infinitely, although their amplitude decreases and higher-order components are often neglected in practical design problems. Mathematically, a baseband modulating signal may be approximated by a sinusoidal continuous wave signal with a frequency f_m; this method is called single-tone modulation. The integral of such a signal, x_m(t) = A_m cos(2π f_m t), is:

  ∫_0^t x_m(τ) dτ = (A_m / (2π f_m)) sin(2π f_m t)
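Substituting this integral into the transmitted-signal expression y(t) = A_c cos(2π f_c t + 2π f_Δ ∫_0^t x_m(τ) dτ) gives the standard single-tone FM form (written in LaTeX):

```latex
y(t) = A_c \cos\!\Big(2\pi f_c t
        + 2\pi f_\Delta \cdot \frac{A_m}{2\pi f_m}\sin(2\pi f_m t)\Big)
     = A_c \cos\!\Big(2\pi f_c t + \frac{f_\Delta A_m}{f_m}\sin(2\pi f_m t)\Big)
```

The dimensionless coefficient of the sine term, the ratio of peak frequency deviation to modulating frequency, is the modulation index, which governs how the signal's energy spreads into its sidebands.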
Electronic filters are circuits which perform signal processing functions to remove unwanted frequency components from a signal, to enhance wanted ones, or both. Electronic filters can be passive or active; analog or digital; high-pass, low-pass, band-pass, band-stop, or all-pass; discrete-time or continuous-time; linear or non-linear; infinite impulse response (IIR) or finite impulse response (FIR). The most common types of electronic filters are linear filters, regardless of other aspects of their design. See the article on linear filters for details on their design and analysis. The oldest forms of electronic filters are passive analog linear filters, constructed using only resistors and capacitors or resistors and inductors. These are known as RC and RL single-pole filters respectively. However, these simple filters have limited uses. Multipole LC filters provide greater control of response form and transition bands. The first of these filters was the constant k filter, invented by George Campbell in 1910. Campbell's filter was a ladder network based on transmission line theory.
Together with improved filters by Otto Zobel and others, these filters are known as image parameter filters. A major step forward was taken by Wilhelm Cauer, who founded the field of network synthesis around the time of World War II. Cauer's theory allowed filters to be constructed that precisely followed some prescribed frequency function. Passive implementations of linear filters are based on combinations of resistors, inductors and capacitors. These types are collectively known as passive filters, because they do not depend upon an external power supply and/or they do not contain active components such as transistors. Inductors block high-frequency signals and conduct low-frequency signals, while capacitors do the reverse. A filter in which the signal passes through an inductor, or in which a capacitor provides a path to ground, presents less attenuation to low-frequency signals than high-frequency signals and is therefore a low-pass filter. If the signal passes through a capacitor, or has a path to ground through an inductor, the filter presents less attenuation to high-frequency signals than low-frequency signals and is therefore a high-pass filter.
Resistors on their own have no frequency-selective properties, but are added to inductors and capacitors to determine the time-constants of the circuit, and therefore the frequencies to which it responds. The inductors and capacitors are the reactive elements of the filter, and the number of elements determines the order of the filter. In this context, an LC tuned circuit being used in a band-pass or band-stop filter is considered a single element even though it consists of two components. At high frequencies, the inductors sometimes consist of single loops or strips of sheet metal, and the capacitors consist of adjacent strips of metal; these inductive or capacitive pieces of metal are called stubs. The simplest passive filters, RC and RL filters, include only one reactive element, except for the hybrid LC filter, which is characterized by inductance and capacitance integrated in one element. An L filter consists of two reactive elements, one in series and one in parallel. Three-element filters can have a 'T' or 'π' topology and in either geometry, a low-pass, high-pass, band-pass, or band-stop characteristic is possible.
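The time-constant point can be made concrete: for a single-pole RC filter, the corner frequency is set by the product R·C, a standard result. The helper names below are illustrative:

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """-3 dB corner of a single-pole RC filter: f_c = 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def rc_lowpass_gain(f_hz, fc_hz):
    """Magnitude response of the RC low-pass: |H| = 1 / sqrt(1 + (f/fc)^2)."""
    return 1.0 / math.sqrt(1.0 + (f_hz / fc_hz) ** 2)

fc = rc_cutoff_hz(1e3, 100e-9)            # 1 kOhm with 100 nF
print(round(fc))                          # 1592 Hz corner frequency
print(round(rc_lowpass_gain(fc, fc), 3))  # 0.707, i.e. -3 dB at the corner
```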
The components can be chosen symmetric or not, depending on the required frequency characteristics. The high-pass T filter in the illustration has a low impedance at high frequencies and a high impedance at low frequencies; that means that it can be inserted in a transmission line, resulting in the high frequencies being passed and low frequencies being reflected. Likewise, for the illustrated low-pass π filter, the circuit can be connected to a transmission line, transmitting low frequencies and reflecting high frequencies. Using m-derived filter sections with correct termination impedances, the input impedance can be reasonably constant in the pass band. Multiple-element filters are usually constructed as a ladder network; these can be seen as a continuation of the L, T and π designs of filters. More elements are needed when it is desired to improve some parameter of the filter such as stop-band rejection or slope of transition from pass-band to stop-band. Active filters are implemented using a combination of passive and active components, and require an outside power source.
Operational amplifiers are frequently used in active filter designs. These can have high Q factor and can achieve resonance without the use of inductors. However, their upper frequency limit is limited by the bandwidth of the amplifiers. There are many filter technologies other than lumped-component electronics; these include digital filters, crystal filters, mechanical filters, surface acoustic wave (SAW) filters, bulk acoustic wave (BAW) filters, garnet filters, and atomic filters. See Filter for further analysis. The transfer function H(s) of a filter is the ratio of the output signal Y(s) to the input signal X(s) as a function of the complex frequency s:

  H(s) = Y(s) / X(s), with s = σ + jω.

The transfer function of all linear time-invariant filters, when constructed of lumped components, will be the ratio of two polynomials in s, i.e. a rational function of s.
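As a small illustration of working with such a transfer function, the sketch below takes a one-pole low-pass H(s) = ω_c / (s + ω_c) (an assumed example, not a circuit from the text) and evaluates it on the imaginary axis s = jω to read off the frequency response:

```python
import math

def h_lowpass(s, omega_c):
    """One-pole low-pass transfer function H(s) = omega_c / (s + omega_c)."""
    return omega_c / (s + omega_c)

omega_c = 2 * math.pi * 1000  # a 1 kHz corner frequency, chosen for illustration

# Evaluate on the imaginary axis s = j*omega to get the frequency response:
print(abs(h_lowpass(0j, omega_c)))                      # 1.0 -- DC passes unattenuated
print(round(abs(h_lowpass(1j * omega_c, omega_c)), 3))  # 0.707 -- -3 dB at the corner
```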