1.
Digital video compression
–
In signal processing, data compression, source coding, or bit-rate reduction involves encoding information using fewer bits than the original representation. Compression can be lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy; no information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. The process of reducing the size of a data file is referred to as data compression; in the context of data transmission, it is called source coding, in opposition to channel coding. Compression is useful because it reduces the resources required to store and transmit data, but computational resources are consumed in the compression process and, usually, in the reversal of the process, so data compression is subject to a space–time complexity trade-off. Lossless data compression algorithms usually exploit statistical redundancy to represent data without losing any information, so that the process is reversible. Lossless compression is possible because most real-world data exhibits statistical redundancy. For example, an image may have areas of color that do not change over several pixels; instead of coding "red pixel, red pixel, ..." the data may be encoded as "279 red pixels". This is a basic example of run-length encoding, and there are many schemes to reduce file size by eliminating redundancy. The Lempel–Ziv (LZ) compression methods are among the most popular algorithms for lossless storage. DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, but compression can be slow. DEFLATE is used in PKZIP, Gzip, and PNG; LZW is used in GIF images. LZ methods use a table-based compression model in which table entries are substituted for repeated strings of data. For most LZ methods, this table is generated dynamically from earlier data in the input, and the table itself is often Huffman encoded. Current LZ-based coding schemes that perform well are Brotli and LZX; LZX is used in Microsoft's CAB format. The best modern lossless compressors use probabilistic models, such as prediction by partial matching. The Burrows–Wheeler transform can also be viewed as a form of statistical modelling. The basic task of grammar-based codes is constructing a context-free grammar deriving a single string; Sequitur and Re-Pair are practical grammar compression algorithms for which software is publicly available. In a further refinement of probabilistic modelling, arithmetic coding is a more modern coding technique that uses mathematical calculations to produce a string of encoded bits from a series of input data symbols.
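The run-length encoding idea mentioned above can be illustrated with a short Python sketch; the function names and the list-of-tuples output format are illustrative choices, not any standard library's API.

```python
# A minimal sketch of run-length encoding, the redundancy-elimination idea
# described above. Output format (list of (symbol, count) pairs) is an
# illustrative choice.

def rle_encode(data):
    """Collapse runs of repeated symbols into (symbol, count) pairs."""
    encoded = []
    for symbol in data:
        if encoded and encoded[-1][0] == symbol:
            encoded[-1] = (symbol, encoded[-1][1] + 1)
        else:
            encoded.append((symbol, 1))
    return encoded

def rle_decode(encoded):
    """Reverse the encoding losslessly."""
    return [symbol for symbol, count in encoded for _ in range(count)]

row = ["red"] * 279 + ["blue"] * 3
packed = rle_encode(row)           # [('red', 279), ('blue', 3)]
assert rle_decode(packed) == row   # no information is lost
```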
2.
High definition video
–
High-definition video is video of higher resolution and quality than standard-definition. While there is no standardized meaning for high-definition, generally any video image with more than 480 horizontal lines (North America) or 576 horizontal lines (Europe) is considered high-definition; 480 scan lines is generally the minimum, even though the majority of systems greatly exceed that. Images of standard resolution captured at rates faster than normal by a high-speed camera may be considered high-definition in some contexts. Some television series shot on video are made to look as if they have been shot on film. The first electronic scanning format, 405 lines, was the first high-definition television system, from 1939. Europe and the US tried 605 and 441 lines until, in 1941, the FCC mandated 525 for the US. In wartime France, René Barthélemy tested higher resolutions, up to 1,042 lines; in late 1949, official French transmissions finally began with 819 lines. In 1984, however, this standard was abandoned in favour of 625-line colour on the TF1 network. Modern HD specifications date to the early 1980s, when Japanese engineers developed the HighVision 1,125-line interlaced TV standard that ran at 60 frames per second. The Sony HDVS system was presented at a meeting of television engineers in Algiers in April 1981. HighVision video is still usable for HDTV video interchange, but there is almost no modern equipment available to perform this function. Attempts at implementing HighVision as a 6 MHz broadcast channel were mostly unsuccessful, and all attempts at using this format for terrestrial TV transmission were abandoned by the mid-1990s. Europe developed HD-MAC, a member of the MAC family of hybrid analogue/digital video standards; however, it never took off as a terrestrial video transmission format. HD-MAC was never designated for video interchange except by the European Broadcasting Union. In essence, the end of the 1980s was a death knell for most analog high-definition technologies that had developed up to that time. In the end, the DVB standards of resolutions were adopted in Europe, while in the United States the FCC officially adopted the ATSC transmission standard in 1996, with the first broadcasts on October 28, 1998. In the early 2000s, it looked as if DVB would be the video standard far into the future. High-definition video is defined threefold: by the number of lines in the vertical display resolution, by the scanning system, and by the number of frames or fields per second. High-definition television resolution is 1,080 or 720 lines; in contrast, regular digital television is 480 lines (NTSC regions) or 576 lines (PAL/SECAM regions). However, since HD is broadcast digitally, its introduction sometimes coincides with the introduction of DTV. Additionally, current DVD quality is not high-definition, although the high-definition disc systems Blu-ray Disc and HD DVD are. The scanning system is either progressive or interlaced: progressive scanning (for example 720p or 1080p) redraws the entire image frame when refreshing each image, while interlaced scanning yields greater apparent image resolution if the subject is not moving.
3.
Interlaced video
–
Interlaced video is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured at two different times; this enhances motion perception for the viewer and reduces flicker by taking advantage of the phi phenomenon. It effectively doubles the temporal resolution as compared with non-interlaced footage. Interlaced signals require a display that is capable of showing the individual fields in a sequential order; CRT displays and ALiS plasma displays are made for displaying interlaced signals. Interlaced scan refers to one of the two common methods for painting a video image on an electronic display screen by scanning or displaying each line or row of pixels. This technique uses two fields to create a frame: one field contains all odd-numbered lines in the image, the other contains all even-numbered lines. In a Phase Alternating Line (PAL)-based television set display, for example, the two sets of 25 fields work together to create a full frame every 1/25 of a second, while interlacing creates a new half frame every 1/50 of a second. To display interlaced video on progressive displays, playback applies deinterlacing to the video signal. The European Broadcasting Union has argued against interlaced video in production; it recommends 720p at 50 fps as the current production format and is working with the industry to introduce 1080p at 50 fps as a future-proof production standard. 1080p50 offers higher vertical resolution and better quality at lower bitrates. Despite the arguments against it, television standards organizations continue to support interlacing; it is still included in digital video formats such as DV and DVB. Progressive scan captures, transmits, and displays an image in a path similar to text on a page: line by line, from top to bottom. The interlaced scan pattern in a CRT display also completes such a scan, but in two passes: the first pass displays the first and all odd-numbered lines, from the top left corner to the bottom right corner, and the second pass displays the second and all even-numbered lines. This scan of alternate lines is called interlacing. A field is an image that contains only half of the lines needed to make a complete picture; persistence of vision makes the eye perceive the two fields as a continuous image. In the days of CRT displays, the afterglow of the display's phosphor aided this effect. Interlacing provides full vertical detail with the same bandwidth that would be required for a full progressive scan, but with twice the perceived frame rate and refresh rate. To prevent flicker, all analog broadcast television systems used interlacing. Format identifiers like 576i50 and 720p50 specify the frame rate for progressive scan formats, but for interlaced formats they typically specify the field rate. This can lead to confusion, because industry-standard SMPTE timecode formats always deal with frame rate, not field rate.
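To make the two-fields-per-frame idea concrete, here is a minimal Python/NumPy sketch of "weave" deinterlacing, the simplest way to rebuild a frame from two fields. Array shapes and names are illustrative assumptions, and real deinterlacers must also deal with the motion between the two capture times.

```python
# A minimal sketch of how two interlaced fields combine into one frame
# ("weave" deinterlacing). Shapes are illustrative (288-line fields -> 576 lines).
import numpy as np

def weave(odd_field, even_field):
    """Interleave an odd-line field and an even-line field into a full frame."""
    height = odd_field.shape[0] + even_field.shape[0]
    frame = np.empty((height, odd_field.shape[1]), dtype=odd_field.dtype)
    frame[0::2] = odd_field    # lines 1, 3, 5, ... of the picture
    frame[1::2] = even_field   # lines 2, 4, 6, ...
    return frame

# 576i: two 288-line fields captured 1/50 s apart form one 576-line frame.
field_a = np.random.randint(0, 256, (288, 720), dtype=np.uint8)
field_b = np.random.randint(0, 256, (288, 720), dtype=np.uint8)
print(weave(field_a, field_b).shape)   # (576, 720)
```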
4.
Single-sideband modulation
–
Amplitude modulation produces an output signal that has twice the bandwidth of the original baseband signal. Single-sideband modulation (SSB) avoids this bandwidth doubling, and the power wasted on a carrier, at the cost of increased device complexity. Radio transmitters work by mixing a radio frequency signal of a specific frequency, the carrier wave, with the signal to be broadcast; the resulting signal has a spectrum with twice the bandwidth of the original input signal. In conventional AM radio, this signal is sent to the radio frequency amplifier. Due to the nature of the process, the quality of the resulting signal can be defined by the difference between the maximum and minimum signal energy. Normally the maximum signal energy will be the carrier itself, perhaps twice as powerful as the mixed signals. SSB takes advantage of the fact that the entire original signal is encoded in either one of these sidebands: it is not necessary to broadcast the entire mixed signal, since a receiver can extract the entire signal from either the upper or the lower sideband. This means that the amplifier can be used more efficiently. A transmitter can choose to amplify only the upper or the lower sideband; by doing so, the amplifier only has to work effectively on one half of the bandwidth. As a result, SSB transmissions use the available amplifier energy more efficiently, providing longer-range transmission with little or no additional cost. The first U.S. patent for SSB modulation was applied for on December 1, 1915 by John Renshaw Carson, and the U.S. Navy experimented with SSB over its radio circuits before World War I. SSB first entered service on January 7, 1927 on the longwave transatlantic public radiotelephone circuit between New York and London. The high-power SSB transmitters were located at Rocky Point, New York and Rugby, England; the receivers were in very quiet locations in Houlton, Maine and Cupar, Scotland. SSB was also used over long-distance telephone lines, as part of a technique known as frequency-division multiplexing (FDM). FDM was pioneered by telephone companies in the 1930s; it enabled many voice channels to be sent down a single physical circuit, for example in L-carrier, and SSB allowed channels to be spaced just 4,000 Hz apart. Amateur radio operators began serious experimentation with SSB after World War II, and the Strategic Air Command established SSB as the standard for its aircraft in 1957. It has become a de facto standard for voice radio transmissions since then.
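A hedged sketch of how a single sideband can be generated digitally with the phasing (Hilbert-transform) method; the tone, carrier, and sample-rate values are illustrative assumptions, not figures from the historical systems described above.

```python
# A minimal sketch of SSB generation: the analytic signal of the message
# cancels one sideband. Frequencies and names here are illustrative only.
import numpy as np
from scipy.signal import hilbert

fs = 48_000                                  # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)
message = np.cos(2 * np.pi * 1_000 * t)      # 1 kHz baseband tone
carrier_hz = 10_000

analytic = hilbert(message)                  # message + j * (Hilbert transform)
m, m_hat = analytic.real, analytic.imag

# Upper sideband: m(t)cos(wt) - m_hat(t)sin(wt); flip the sign for the lower sideband.
usb = m * np.cos(2 * np.pi * carrier_hz * t) - m_hat * np.sin(2 * np.pi * carrier_hz * t)
# The spectrum of `usb` has energy near 11 kHz only (not 9 kHz), so it occupies
# half the bandwidth of ordinary AM and contains no carrier.
```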
5.
Super high frequency
–
Super high frequency (SHF) is the ITU designation for radio frequencies in the range between 3 GHz and 30 GHz. This band of frequencies is known as the centimetre band or centimetre wave, as the wavelengths range from one to ten centimetres. These frequencies fall within the microwave band, so radio waves with these frequencies are called microwaves. This frequency range is used for most radar transmitters, wireless LANs, satellite communication, and microwave relay links, and wireless USB technology is anticipated to use approximately one-third of this spectrum. Frequencies in the SHF range are often referred to by their IEEE radar band designations, S, C, X, Ku, K, or Ka band, or by similar NATO or EU designations. Microwaves propagate entirely by line of sight; groundwave and ionospheric reflection do not occur. Although in some cases they can penetrate building walls enough for useful reception, unobstructed rights of way cleared to the first Fresnel zone are usually required. Wavelengths are small enough at microwave frequencies that the antenna can be much larger than a wavelength, allowing highly directional, high-gain antennas to be built. Therefore, microwaves are used in point-to-point terrestrial communications links limited by the visual horizon, and such high-gain antennas allow frequency reuse by nearby transmitters. The small wavelength of SHF waves also causes strong reflections from objects the size of automobiles, aircraft, and ships. Thus, the narrow beamwidths possible with high-gain antennas and the low atmospheric attenuation as compared with higher frequencies make SHF the main range of frequencies used in radar. Attenuation and scattering by moisture in the atmosphere increase with frequency. Small amounts of energy are randomly scattered by water vapor molecules in the troposphere; this is exploited in tropospheric scatter (troposcatter) communication systems operating at a few GHz. A powerful microwave beam is aimed just above the horizon; as it passes through the troposphere, some of the microwaves are scattered back to Earth to a receiver beyond the horizon. Distances of 300 km can be achieved, and these systems are mainly used for military communication. The wavelengths of SHF waves are small enough that they can be focused into narrow beams by high-gain antennas from a meter to five meters in diameter. Directive antennas at SHF frequencies are mostly aperture antennas, such as parabolic (dish) antennas, dielectric lens antennas, and slot antennas. Large parabolic antennas can produce very narrow beams of a few degrees or less; for omnidirectional applications like wireless devices and cellphones, small dipoles or monopoles are used.
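A quick arithmetic check of the centimetre-band claim, using wavelength = c / f at the two band edges quoted above.

```python
# Wavelength at the ITU SHF band edges given in the text.
c = 299_792_458.0                      # speed of light, m/s

for f_hz in (3e9, 30e9):
    wavelength_cm = c / f_hz * 100
    print(f"{f_hz/1e9:>4.0f} GHz -> {wavelength_cm:.1f} cm")
# 3 GHz -> ~10 cm, 30 GHz -> ~1 cm, i.e. one to ten centimetres.
```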
6.
Extremely high frequency
–
Extremely high frequency (EHF) is the International Telecommunication Union designation for the band of radio frequencies in the electromagnetic spectrum from 30 to 300 gigahertz. It lies between the super high frequency band and the far infrared band, which is also referred to as the terahertz gap. Radio waves in this band have wavelengths from ten to one millimetre, giving it the name millimetre band or millimetre wave. Millimetre-length electromagnetic waves were first investigated in the 1890s by Indian scientist Jagadish Chandra Bose. Compared to lower bands, radio waves in this band have high atmospheric attenuation; therefore, they have a short range and can only be used for terrestrial communication over about a kilometer. Absorption by humidity in the atmosphere is significant except in desert environments; however, the short propagation range allows smaller frequency reuse distances than lower frequencies. The short wavelength allows modest-size antennas to have a narrow beam width. Millimeter waves propagate solely by line-of-sight paths; they are not reflected by the ionosphere, nor do they travel along the Earth as ground waves as lower-frequency radio waves do. At typical power densities they are blocked by building walls and suffer significant attenuation passing through foliage. The high free-space loss and atmospheric absorption limit useful propagation to a few kilometers; thus they are useful for densely packed communications networks such as personal area networks that improve spectrum utilization through frequency reuse. They show optical propagation characteristics and can be reflected and focused by small metal surfaces around 1 ft in diameter. At millimeter wavelengths, surfaces appear rougher, so diffuse reflection increases. Multipath propagation, particularly reflection from indoor walls and surfaces, causes serious fading, and Doppler shift of frequency can be significant even at pedestrian speeds. In portable devices, shadowing due to the body is a problem. Since the waves penetrate clothing and their small wavelength allows them to reflect from small metal objects, they are used in millimeter wave scanners for security scanning. This band is used in radio astronomy and remote sensing; ground-based radio astronomy is limited to high-altitude sites such as Kitt Peak. Satellite-based remote sensing near 60 GHz can determine temperature in the atmosphere by measuring radiation emitted from oxygen molecules that is a function of temperature and pressure. There is an ITU non-exclusive passive frequency allocation at 57–59 GHz, and this part of the band is commonly used over flat terrain. The 71–76, 81–86 and 92–95 GHz bands are used for point-to-point high-bandwidth communication links. These higher frequencies do not suffer from oxygen absorption, but require a license in the US from the Federal Communications Commission.
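The short useful range follows in part from free-space path loss growing with frequency; the sketch below evaluates the standard free-space path loss formula at a few frequencies. The 1 km distance is an illustrative assumption, and real millimetre-wave links lose additional power to the oxygen, humidity, and rain absorption described above.

```python
# Free-space path loss in dB: 20*log10(4*pi*d*f/c). Values are illustrative.
import math

def fspl_db(distance_m, freq_hz):
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

for f in (3e9, 30e9, 300e9):
    print(f"{f/1e9:>5.0f} GHz over 1 km: {fspl_db(1_000, f):.1f} dB")
# Each tenfold increase in frequency adds 20 dB of path loss at a fixed distance.
```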
7.
Optical fiber
–
An optical fiber or optical fibre is a flexible, transparent fiber made by drawing glass or plastic to a diameter slightly thicker than that of a human hair. Fibers are also used for illumination, and are wrapped in bundles so that they may be used to carry images, thus allowing viewing in confined spaces, as in the case of a fiberscope. Specially designed fibers are used for a variety of other applications, some of them being fiber optic sensors. Optical fibers typically include a transparent core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection, which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers; multi-mode fibers generally have a wider core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 1,000 meters. Being able to join optical fibers with low loss is important in fiber-optic communication. This is more complex than joining electrical wire or cable and involves careful cleaving of the fibers and precise alignment of the cores. For applications that demand a permanent connection, a fusion splice is common; in this technique, an electric arc is used to melt the ends of the fibers together. Another common technique is a mechanical splice, where the ends of the fibers are held in contact by mechanical force. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors. The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber optics. The term was coined by Indian physicist Narinder Singh Kapany, who is acknowledged as the father of fiber optics. Guiding of light by refraction, the principle that makes fiber optics possible, was first demonstrated by Daniel Colladon; John Tyndall included a demonstration of it in his public lectures in London 12 years later. Tyndall noted that when a ray passes from water to air it is bent from the perpendicular, and that if the angle which the ray in water encloses with the perpendicular to the surface is greater than 48 degrees, the ray will not quit the water at all: it is totally reflected at the surface. The angle which marks the limit where total reflection begins is called the limiting angle of the medium; for water this angle is 48°27′, and for flint glass it is 38°41′. Unpigmented human hairs have also been shown to act as an optical fiber. Practical applications, such as close internal illumination during dentistry, appeared early in the twentieth century. Image transmission through tubes was demonstrated independently by the radio experimenter Clarence Hansell and the television pioneer John Logie Baird in the 1920s. The principle was first used for medical examinations by Heinrich Lamm in the following decade.
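The limiting (critical) angle quoted from Tyndall's lectures follows from Snell's law; here is a minimal sketch, with round refractive-index values chosen for illustration rather than taken from any particular fiber datasheet.

```python
# Critical angle for total internal reflection: arcsin(n_outside / n_core).
import math

def critical_angle_deg(n_core, n_outside):
    """Angle from the surface normal beyond which light is totally reflected."""
    return math.degrees(math.asin(n_outside / n_core))

print(f"water/air : {critical_angle_deg(1.333, 1.0):.1f} deg")   # ~48.6, near the 48 deg 27' quoted above
print(f"core/clad : {critical_angle_deg(1.48, 1.46):.1f} deg")   # illustrative silica core vs cladding
```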
8.
Frequency modulation
–
In telecommunications and signal processing, frequency modulation (FM) is the encoding of information in a carrier wave by varying the instantaneous frequency of the wave. This contrasts with amplitude modulation, in which the amplitude of the wave varies. In digital applications, the corresponding modulation technique is known as frequency-shift keying (FSK); FSK is widely used in modems and fax modems, and can also be used to send Morse code. Frequency modulation is used for FM radio broadcasting; because FM is more resistant to noise and interference than AM, most music is broadcast over FM radio. Frequency modulation has a close relationship with phase modulation; phase modulation is often used as an intermediate step to achieve frequency modulation. Mathematically, both of these can be considered a special case of quadrature amplitude modulation. While most of the energy of the signal is contained within fc ± fΔ, the frequency spectrum of an actual FM signal has components extending infinitely, although their amplitude decreases and higher-order components are often neglected in practical design problems. Mathematically, a baseband modulating signal may be approximated by a sinusoidal continuous wave signal with a frequency fm; this method is also named single-tone modulation. As in other modulation systems, the modulation index indicates by how much the modulated variable varies around its unmodulated level; for FM, it relates to the maximum deviation of the frequency from the carrier frequency. For sine-wave modulation, the modulation index is seen to be the ratio of the peak frequency deviation of the carrier wave to the frequency of the modulating sine wave, h = fΔ / fm. If h ≪ 1, the modulation is called narrowband FM; sometimes a modulation index of h < 0.3 rad is taken as narrowband FM, and anything larger as wideband FM. In the case of digital modulation, the carrier fc is never transmitted; rather, one of two frequencies is transmitted, either fc + Δf or fc − Δf, depending on the binary state 0 or 1 of the modulating signal. If h ≫ 1, the modulation is called wideband FM. If the frequency deviation is held constant and the modulation frequency is increased, the spacing between spectral components increases. The carrier and sideband amplitudes are illustrated for different modulation indices of FM signals; for particular values of the modulation index, the carrier amplitude becomes zero and all the signal power is in the sidebands. Since the sidebands are on both sides of the carrier, their count is doubled and then multiplied by the modulating frequency to find the bandwidth. For example, 3 kHz deviation modulated by a 2.2 kHz audio tone produces an index of 1.36. Suppose, further, that we limit ourselves to only those sidebands that have a relative amplitude of at least 0.01.
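A minimal sketch of the single-tone FM arithmetic above, using the standard Bessel-function description of carrier and sideband amplitudes to count significant sidebands. The 3 kHz / 2.2 kHz figures reproduce the example in the text, and the 0.01 amplitude cutoff matches the criterion just stated; the exact sideband count is an illustration, not a broadcast-planning rule.

```python
# Single-tone FM: index and significant sidebands via Bessel functions J_n(h).
from scipy.special import jv   # Bessel function of the first kind

f_dev = 3_000.0        # peak frequency deviation, Hz
f_mod = 2_200.0        # modulating tone frequency, Hz
h = f_dev / f_mod      # modulation index, ~1.36

# Carrier amplitude is J_0(h); the n-th sideband pair has amplitude J_n(h).
significant = [n for n in range(1, 20) if abs(jv(n, h)) >= 0.01]
bandwidth = 2 * max(significant) * f_mod   # pairs on both sides of the carrier

print(f"index h = {h:.2f}, significant sideband pairs = {max(significant)}")
print(f"approximate occupied bandwidth = {bandwidth/1000:.1f} kHz")
# Carson's rule gives a comparable estimate: 2*(f_dev + f_mod) = 10.4 kHz.
```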
9.
Companding
–
In telecommunication and signal processing, companding is a method of mitigating the detrimental effects of a channel with limited dynamic range. The name is a combination of the words compressing and expanding. The use of companding allows signals with a large dynamic range to be transmitted over facilities that have a smaller dynamic range capability. Companding is employed in telephony and other applications such as professional wireless microphones. The dynamic range of a signal is compressed before transmission and is expanded to its original value at the receiver. The electronic circuit that does this is called a compander and works by compressing or expanding the dynamic range of an analog electronic signal such as sound recorded by a microphone. One variety is a triplet of amplifiers: a logarithmic amplifier, followed by a variable-gain linear amplifier and an exponential amplifier. Such a triplet has the property that its output voltage is proportional to the input voltage raised to an adjustable power. This type of quantization is used in telephony systems. The two most popular compander functions used for telecommunications are the A-law and μ-law functions. Companding is used in digital telephony systems, compressing before input to an analog-to-digital converter and then expanding after a digital-to-analog converter. This is equivalent to using a non-linear ADC, as in a T-carrier telephone system that implements A-law or μ-law companding. This method is also used in digital file formats for a better signal-to-noise ratio at lower bit rates; this is effectively a form of audio data compression. Professional wireless microphones do this since the dynamic range of the microphone audio signal itself is larger than the dynamic range provided by radio transmission. Companding also reduces the noise and crosstalk levels at the receiver. Companders are used in concert audio systems and in some noise reduction schemes such as dbx and Dolby NR. The use of companding in a picture transmission system was patented by A. B. Clark. In 1942, Clark and his team completed the SIGSALY secure voice system, which included the first use of companding in a PCM system. In 1953, B. Smith showed that a nonlinear DAC could be complemented by the inverse nonlinearity in a successive-approximation ADC configuration. In 1970, H. Kaneko developed the uniform description of segment companding laws that had by then been adopted in digital telephony. In the 1980s, many equipment manufacturers used companding when compressing the library waveform data in their digital synthesizers.
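As a concrete illustration of the μ-law companding function named above, here is a minimal Python sketch of compression before quantization and expansion afterwards; the μ = 255 constant and the crude 8-bit-style quantizer are illustrative assumptions, not a full telephony implementation.

```python
# Mu-law companding: compress, quantize coarsely, then expand.
import numpy as np

MU = 255.0

def mu_compress(x):
    """Map a signal in [-1, 1] onto a compressed [-1, 1] range."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_expand(y):
    """Inverse of mu_compress."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

signal = np.array([-0.5, -0.01, 0.0, 0.001, 0.25, 1.0])
quantized = np.round(mu_compress(signal) * 127) / 127     # coarse 8-bit-style quantizer
restored = mu_expand(quantized)
# Small amplitudes keep proportionally more resolution than they would under
# uniform quantization, which is the point of companding.
print(np.round(restored, 4))
```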
10.
Luminance
–
Luminance is a photometric measure of the luminous intensity per unit area of light travelling in a given direction. It describes the amount of light that passes through, is emitted from, or is reflected from a particular area, and falls within a given solid angle. The SI unit for luminance is the candela per square metre (cd/m2); a non-SI term for the same unit is the nit. The CGS unit of luminance is the stilb, which is equal to one candela per square centimetre, or 10 kcd/m2. Luminance is often used to characterize emission or reflection from flat, diffuse surfaces. The luminance indicates how much power will be detected by an eye looking at the surface from a particular angle of view; luminance is thus an indicator of how bright the surface will appear. In this case, the solid angle of interest is the solid angle subtended by the eye's pupil. Luminance is used in the video industry to characterize the brightness of displays. A typical computer display emits between 50 and 300 cd/m2; the sun has a luminance of about 1.6×10^9 cd/m2 at noon. Luminance is invariant in geometric optics; this means that for an ideal optical system, the luminance at the output is the same as the input luminance. For real, passive optical systems, the output luminance is at most equal to the input. As an example, if one uses a lens to form an image that is smaller than the source object, the luminous power is concentrated into a smaller area, meaning that the illuminance is higher at the image. The light at the image plane, however, fills a larger solid angle, so the luminance comes out to be the same, assuming there is no loss at the lens. The image can never be brighter than the source. If light travels through a lossless medium, the luminance does not change along a given light ray. In the case of a perfectly diffuse reflector the luminance is isotropic, and the relationship is simply Lv = Ev R / π, where Ev is the illuminance and R the reflectance. A variety of units have been used for luminance besides the candela per square metre. One candela per square metre is equal to 10^−4 stilb, π apostilbs, π×10^−4 lambert, and 0.292 foot-lambert. Retinal damage can occur when the eye is exposed to high luminance; damage can occur due to local heating of the retina. Photochemical effects can also cause damage, especially at short wavelengths.
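A minimal numeric illustration of the diffuse-reflector relation Lv = Ev R / π quoted above; the illuminance and reflectance values are illustrative assumptions.

```python
# Luminance of a perfectly diffuse (Lambertian) reflector: L = E * R / pi.
import math

def diffuse_luminance(illuminance_lux, reflectance):
    """Luminance in cd/m^2 of a perfectly diffuse surface."""
    return illuminance_lux * reflectance / math.pi

paper = diffuse_luminance(500.0, 0.8)             # e.g. office lighting on white paper
print(f"{paper:.0f} cd/m^2")                      # ~127 cd/m^2
print(f"in foot-lamberts: {paper * 0.292:.0f}")   # using the conversion factor listed above
```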
11.
Chrominance
–
Chrominance is the signal used in video systems to convey the color information of the picture, separately from the accompanying luma signal. Chrominance is usually represented as two color-difference components, U = B′ − Y′ and V = R′ − Y′; each of these difference components may have scale factors and offsets applied to it, as specified by the applicable video standard. In digital-video and still-image color spaces such as Y′CbCr, the luma and chrominance components are digital sample values. Separating RGB color signals into luma and chrominance allows the bandwidth of each to be determined separately. Typically, the chrominance bandwidth is reduced in analog composite video by reducing the bandwidth of a modulated color subcarrier. The idea of transmitting a color signal with distinct luma and chrominance components originated with Georges Valensi; previous schemes for color television systems had been incompatible with existing monochrome receivers. In analog television, chrominance is encoded into a video signal using a subcarrier frequency. Depending on the standard, the chrominance subcarrier may be either quadrature-amplitude-modulated or frequency-modulated. In the PAL system, the subcarrier is 4.43 MHz above the video carrier. The NTSC and PAL standards are the most commonly used, although there are other standards that employ different subcarrier frequencies; for example, PAL-M uses a 3.58 MHz subcarrier. The presence of chrominance in a video signal is indicated by a color burst signal transmitted on the back porch, just after horizontal synchronization and before each line of video starts. If the color burst signal were visible on a television screen, it would appear as a vertical strip of a very dark olive color. In NTSC and PAL, hue is represented by a phase shift of the chrominance signal relative to the color burst, while saturation is determined by the amplitude of the subcarrier. In SECAM, the color-difference signals are transmitted alternately, and phase does not matter. Chrominance is represented by the U-V color plane in PAL and SECAM video signals, and by the I-Q color plane in NTSC. Digital video and digital photography systems sometimes use a luma/chroma decomposition for improved compression; on decompression, the Y′CbCr space is rotated back to RGB.
12.
Chroma subsampling
–
Chroma subsampling is the practice of encoding images with less resolution for chroma (color) information than for luma (brightness) information. It is used in many video encoding schemes, both analog and digital, and also in JPEG encoding. Digital signals are often compressed to save transmission time and reduce file size. In compressed images, for example, the 4:2:2 Y′CbCr scheme requires two-thirds the bandwidth of RGB, and this reduction results in almost no visual difference as perceived by the viewer, because the human visual system is less sensitive to the position and motion of color than to luminance. At normal viewing distances, there is no perceptible loss incurred by sampling the color detail at a lower rate. In video systems, this is achieved through the use of color difference components: the signal is divided into a luma component and two color difference components. In human vision there are three channels for color detection, and for many color systems three channels are sufficient for representing most colors, for example red, green, and blue, or magenta, yellow, and cyan; but there are other ways to represent the color. In many video systems, the three channels are luminance and two chroma channels. In video, the luma and chroma components are formed as a weighted sum of gamma-corrected RGB components instead of linear RGB components. As a result, luma must be distinguished from luminance. Indeed, similar bleeding can occur also with gamma = 1, whence reversing the order of operations between gamma correction and forming the weighted sum can make no difference; the chroma can influence the luma specifically at the pixels where the subsampling put no chroma. The subsampling scheme is commonly expressed as a ratio J:a:b (or J:a:b:Alpha). The parts are: J, the horizontal sampling reference (width of the conceptual region); a, the number of chrominance samples in the first row of J pixels; b, the number of changes of chrominance samples between the first and second row of J pixels; and Alpha, the horizontal factor for the alpha channel, which may be omitted if the alpha component is not present and is equal to J when present. This notation is not valid for all combinations and has exceptions; the mapping examples given are only theoretical and for illustration, and they do not indicate any chroma filtering. To calculate the required bandwidth factor relative to 4:4:4, one sums all the factors and divides the result by 12. In 4:4:4, each of the three Y′CbCr components has the same sample rate, so there is no chroma subsampling. This scheme is used in high-end film scanners and cinematic post-production. Note that 4:4:4 may instead refer to the RGB color space; formats such as HDCAM SR can record 4:4:4 RGB over dual-link HD-SDI.
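The bandwidth-factor rule stated above (sum the parts of the J:a:b ratio and divide by 12) can be checked with a few lines of Python; the helper function is an illustrative sketch, not part of any video API.

```python
# Bandwidth of a J:a:b chroma subsampling scheme relative to 4:4:4.
def bandwidth_factor(j, a, b):
    """Fraction of 4:4:4 bandwidth used by a J:a:b scheme (sum / 12)."""
    return (j + a + b) / 12

for j, a, b in [(4, 4, 4), (4, 2, 2), (4, 2, 0), (4, 1, 1)]:
    print(f"{j}:{a}:{b} -> {bandwidth_factor(j, a, b):.3f} of 4:4:4")
# 4:2:2 gives 8/12 = 2/3, matching the "two-thirds the bandwidth" figure above.
```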
13.
YUV
–
YUV is a color space typically used as part of a color image pipeline. The scope of the terms Y′UV, YUV, YCbCr, YPbPr, etc. is sometimes ambiguous; today, the term YUV is commonly used in the computer industry to describe file formats that are encoded using YCbCr. The Y′UV model defines a color space in terms of one luma component (Y′) and two chrominance components (U and V). The Y′UV color model is used in the PAL and SECAM composite color video standards; previous black-and-white systems used only luma information. The YPbPr color model used in analog component video, and its digital version YCbCr used in digital video, are more or less derived from it, and the Y′IQ color space used in the analog NTSC television broadcasting system is related to it. As for etymology, Y, Y′, U, and V are not abbreviations. The use of the letter Y for luminance can be traced back to the choice of XYZ primaries; this lends itself naturally to the usage of the same letter in luma, which approximates a perceptually uniform correlate of luminance. Likewise, U and V were chosen to differentiate the U and V axes from those in other spaces; see the equations below or compare the historical development of the math. Y′UV was invented when engineers wanted color television in a black-and-white infrastructure: they needed a signal transmission method that was compatible with black-and-white TV while being able to add color. The luma component already existed as the black-and-white signal, and the UV representation of chrominance was chosen over straight R and B signals because U and V are color difference signals. This meant that in a black-and-white scene the U and V signals would be zero; if R and B had been used, these would have non-zero values even in a B&W scene, requiring all three data-carrying signals. In addition, black-and-white receivers could take the Y′ signal and ignore the color signals, making Y′UV backward-compatible with all existing black-and-white equipment. It was necessary to assign a narrower bandwidth to the chrominance channel because there was no additional bandwidth available. Y′UV signals are created from an RGB source: weighted values of R, G, and B are summed to produce Y′, and U and V are computed as scaled differences between Y′ and the B and R values. BT.601 defines the following constants: WR = 0.299, WG = 1 − WR − WB = 0.587, WB = 0.114, Umax = 0.436, Vmax = 0.615. The resulting ranges of Y′, U, and V are [0, 1], [−Umax, Umax], and [−Vmax, Vmax], respectively. Substituting values for the constants gives the BT.601 conversion: Y′ = 0.299 R + 0.587 G + 0.114 B, U = 0.492 (B − Y′), and V = 0.877 (R − Y′).
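A minimal sketch of the BT.601 R′G′B′ to Y′UV conversion using the constants listed above; it assumes gamma-corrected input in the range [0, 1].

```python
# BT.601 RGB -> Y'UV using the constants W_R, W_G, W_B, U_max, V_max above.
import numpy as np

W_R, W_B = 0.299, 0.114
W_G = 1.0 - W_R - W_B            # 0.587
U_MAX, V_MAX = 0.436, 0.615

def rgb_to_yuv(r, g, b):
    y = W_R * r + W_G * g + W_B * b
    u = U_MAX * (b - y) / (1.0 - W_B)
    v = V_MAX * (r - y) / (1.0 - W_R)
    return y, u, v

print(np.round(rgb_to_yuv(1.0, 1.0, 1.0), 3))   # white: U = V = 0, as in a B&W scene
print(np.round(rgb_to_yuv(1.0, 0.0, 0.0), 3))   # pure red: large positive V
```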
14.
Sampling (signal processing)
–
In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave to a sequence of samples. A sample is a value or set of values at a point in time and/or space, and a sampler is a subsystem or operation that extracts samples from a continuous signal. A theoretical ideal sampler produces samples equivalent to the value of the continuous signal at the desired points. Sampling can be done for functions varying in space, time, or any other dimension. For a function of time, the sampled function is given by the sequence s(nT), for integer values of n, where T is the sampling interval. The sampling frequency or sampling rate, fs, is the number of samples obtained in one second, thus fs = 1/T. Reconstructing a continuous function from samples is done by interpolation algorithms. The Whittaker–Shannon interpolation formula is mathematically equivalent to an ideal lowpass filter whose input is a sequence of Dirac delta functions that are modulated by the sample values. When the time interval between adjacent samples is a constant, the sequence of delta functions is called a Dirac comb. Mathematically, the modulated Dirac comb is equivalent to the product of the comb function with s(t); that purely mathematical abstraction is sometimes referred to as impulse sampling. Most sampled signals are not simply stored and reconstructed, but the fidelity of a reconstruction is a customary measure of the effectiveness of sampling. That fidelity is reduced when s(t) contains frequency components whose periodicity is smaller than 2 samples; the quantity ½ cycle/sample × fs samples/second = fs/2 cycles/second is known as the Nyquist frequency of the sampler. Therefore, s(t) is usually the output of a lowpass filter, functionally known as an anti-aliasing filter. Without an anti-aliasing filter, frequencies higher than the Nyquist frequency will influence the samples in a way that is misinterpreted by the interpolation process. In practice, the continuous signal is sampled using an analog-to-digital converter (ADC). This results in deviations from the theoretically perfect reconstruction, collectively referred to as distortion. Various types of distortion can occur, including: aliasing (some amount of aliasing is inevitable because only theoretical, infinitely long functions can have no frequency content above the Nyquist frequency; aliasing can be made arbitrarily small by using a sufficiently large order of the anti-aliasing filter); aperture error, which results from the fact that the sample is obtained as a time average within a sampling region (in a capacitor-based sample-and-hold circuit, aperture error is introduced because the capacitor cannot instantly change voltage, thus requiring the sample to have non-zero width); jitter, or deviation from the precise sample timing intervals; and noise, including thermal sensor noise, analog circuit noise, etc.
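A minimal sketch of sampling and aliasing: without an anti-aliasing filter, a tone above the Nyquist frequency fs/2 produces exactly the same samples as a lower-frequency alias. The frequencies used are illustrative assumptions.

```python
# A 700 Hz tone sampled at 1 kHz is indistinguishable from a 300 Hz alias.
import numpy as np

fs = 1_000.0                         # sampling rate, Hz; Nyquist frequency = 500 Hz
n = np.arange(32)
t = n / fs                           # sample instants nT, with T = 1/fs

tone_700 = np.sin(2 * np.pi * 700 * t)    # above the Nyquist frequency
alias_300 = np.sin(2 * np.pi * 300 * t)   # its alias: |700 - fs| = 300 Hz

print(np.allclose(tone_700, -alias_300))  # True: the samples coincide
```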
15.
Compositing
–
Compositing is the combining of visual elements from separate sources into single images, often to create the illusion that all those elements are parts of the same scene. Live-action shooting for compositing is variously called chroma key, blue screen, or green screen. Today most, though not all, compositing is achieved through digital image manipulation; pre-digital compositing techniques, however, go back as far as the films of Georges Méliès in the late 19th century. All compositing involves the replacement of selected parts of an image with other material, usually from another image. In the digital method of compositing, software commands designate a narrowly defined color as the part of an image to be replaced; the software then replaces every pixel within the designated color range with a pixel from another image. In television studios, blue or green screens may back news-readers to allow the compositing of stories behind them; in other cases, presenters may be completely within compositing backgrounds that are replaced with entire virtual sets executed in computer graphics programs. Virtual sets are also used in motion-picture filmmaking, some of which is photographed entirely in blue or green screen environments, as for example in Sky Captain and the World of Tomorrow. More commonly, composited backgrounds are combined with sets, both full-size and models, and with vehicles, furniture, and other objects that enhance the reality of the composited visuals. That way, subjects recorded in modest areas can be placed in large virtual vistas. Most common of all, perhaps, are set extensions: digital additions to actual performing environments. Digital compositing is a form of matting, one of four basic compositing methods; the others are physical compositing, multiple exposure, and background projection. In physical compositing the separate parts of the image are placed together in the photographic frame, and the components are aligned so that they give the appearance of a single image. The most common physical compositing elements are partial models and glass paintings. Partial models are typically used as set extensions such as ceilings or the upper stories of buildings. The model, built to match the set but on a much smaller scale, is hung in front of the camera. Models are often large, because they must be placed far enough from the camera so that both they and the set far beyond them are in sharp focus. Glass shots are made by positioning a large pane of glass so that it fills the camera frame. The entire scene is painted on the glass, except for the area revealing the background where the action is to take place; photographed through the glass, the action is composited with the painted area. A classic example of a glass shot is the approach to Ashley Wilkes' plantation in Gone with the Wind: the plantation and fields are all painted, while the road is left clear for the photographed action. A variant uses the opposite technique: most of the area is clear, except for individual elements affixed to the glass.
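The pixel-replacement step described above can be sketched in a few lines of NumPy; the key color, distance threshold, and tiny test images are illustrative assumptions, and production keyers also handle soft edges, color spill, and partial transparency.

```python
# A minimal chroma-key sketch: pixels close to the key color are replaced by
# the corresponding background pixels.
import numpy as np

def chroma_key(foreground, background, key_rgb=(0, 255, 0), threshold=80):
    """Replace near-key pixels of `foreground` with pixels from `background`."""
    fg = foreground.astype(np.int32)
    distance = np.linalg.norm(fg - np.array(key_rgb), axis=-1)
    mask = distance < threshold                 # True where the green screen shows
    out = foreground.copy()
    out[mask] = background[mask]
    return out

fg = np.zeros((4, 4, 3), dtype=np.uint8); fg[:] = (0, 255, 0)   # all green screen
fg[1, 1] = (200, 50, 50)                                        # one "subject" pixel
bg = np.full((4, 4, 3), 128, dtype=np.uint8)                    # gray background
print(chroma_key(fg, bg)[1, 1], chroma_key(fg, bg)[0, 0])       # subject kept, key replaced
```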
16.
Dbx (noise reduction)
–
Dbx is a family of noise reduction systems developed by the company of the same name. The most common implementations are dbx Type I and dbx Type II for analog recording and, less commonly, vinyl LPs. A separate implementation, known as dbx-TV, is part of the MTS system used to provide stereo sound to North American television. The company, dbx, Inc., was also involved with Dynamic Noise Reduction systems. The original dbx Type I and Type II systems were based on so-called linear decibel companding, compressing the signal on recording and expanding it on playback. It was invented by David E. Blackmer of dbx, Inc. in 1971. Dbx marketed the PPA-1 Silencer, a decoder that could be used with non-dbx players such as the Sony Walkman; a version of this chip also contained a Dolby B-compatible noise reduction decoder, described as dbx Type B noise reduction. Dbx Type I and Type II are types of companding noise reduction. Companding noise reduction works by first compressing the source material's dynamic range in anticipation of its being recorded on a relatively noisy medium. Upon playback, the encoded material, now contaminated with noise, is passed through an expander which restores the original dynamic range of the source material; the contaminating signal is heavily attenuated and/or masked by the expansion process. Because dbx Type I and Type II are broadband companders, they are susceptible to audible noise modulation. To deal with this, both Type I and II use very strong high-frequency pre-emphasis of the audio signal in both the recording path and the control signal path. The dbx Type II disc setting on consumer dbx decoders adds an additional 1–3 dB of low-frequency roll-off in both the signal path and control path; this protects the system from audible mistracking due to record warps. Dbx Type I was widely adopted in professional recording, particularly with what is referred to in the industry as semi-pro formats such as half-inch 8-track and one-inch 16-track. Tascam incorporated dbx Type II in its Portastudio four-track cassette recorders, and Tascam's Portastudio family of 4-track cassette recorders became a standard for home hobbyists. Undecoded dbx playback also exhibited large amounts of dynamic error, with audio levels going up and down. Dbx was also used on vinyl records, which were known as dbx discs. While the earliest release is from 1971, their numbers peaked between 1977 and around 1982; Billboard noted in August 1981 that the total number of releases with dbx encoding was expected to approach 200 albums. When employed on LPs, the dbx Type II system reduced the audibility of dust and scratches, reducing them to tiny pops and clicks, and dbx-encoded LPs had, in theory, a dynamic range of up to 120 dB. In addition, dbx LPs were produced using only the original master tapes, with no copies being used. Most were released in limited quantities with premium pricing. Yet another similar noise reduction system, called CX, was introduced by CBS Laboratories in 1981. Undecoded playback of CX discs was less objectionable than undecoded playback of dbx-encoded material; CX did, however, find widespread use in LaserDiscs and stereo SelectaVision CED video discs.
17.
Cathode ray tube
–
The cathode ray tube (CRT) is a vacuum tube that contains one or more electron guns and a phosphorescent screen, and is used to display images. It modulates, accelerates, and deflects electron beams onto the screen to create the images; the images may represent electrical waveforms, pictures, radar targets, or others. CRTs have also been used as memory devices, in which case the visible light emitted from the fluorescent material is not intended to have significant meaning to a visual observer. In television sets and computer monitors, the front area of the tube is scanned repetitively and systematically in a fixed pattern called a raster. An image is produced by controlling the intensity of each of the three electron beams, one for each additive primary color, with a video signal as a reference. A CRT is constructed from a glass envelope which is large, deep, and fairly heavy. The interior of a CRT is evacuated to approximately 0.01 Pa to 133 nPa, the evacuation being necessary to facilitate the free flight of electrons from the gun to the tube's face. The fact that it is evacuated makes handling an intact CRT potentially dangerous, due to the risk of breaking the tube and causing a violent implosion that can hurl shards of glass at great velocity. As a matter of safety, the face is made of thick lead glass so as to be highly shatter-resistant and to block most X-ray emissions. Flat panel displays can also be made in large sizes; whereas 38 to 40 inches was about the largest size of a CRT television, flat panels are available in 60-inch and larger sizes. Cathode rays were discovered by Johann Hittorf in 1869 in primitive Crookes tubes; he observed that some unknown rays were emitted from the cathode which could cast shadows on the glowing wall of the tube, indicating the rays were traveling in straight lines. In 1890, Arthur Schuster demonstrated that cathode rays could be deflected by electric fields. The earliest version of the CRT was known as the Braun tube, invented by the German physicist Ferdinand Braun in 1897; it was a diode, a modification of the Crookes tube with a phosphor-coated screen. In 1907, Russian scientist Boris Rosing used a CRT in the receiving end of an experimental video signal to form a picture. He managed to display simple geometric shapes onto the screen, which marked the first time that CRT technology was used for what is now known as television. The first cathode ray tube to use a hot cathode was developed by John B. Johnson and Harry Weiner Weinhart of Western Electric. The term kinescope was coined by inventor Vladimir K. Zworykin in 1929; RCA was granted a trademark for the term in 1932, and it released the term to the public domain in 1950. The first commercially made electronic television sets with cathode ray tubes were manufactured by Telefunken in Germany in 1934. In oscilloscope CRTs, electrostatic deflection is used, rather than the magnetic deflection commonly used with television and other large CRTs.
18.
Kell factor
–
The Kell factor, named after RCA engineer Raymond D. Kell, is the ratio of the effective vertical resolution of a scanned image to the total number of active scan lines, and its value was originally given as about 0.7. It was later revised to 0.85, but can go higher than 0.9. From a different perspective, the Kell factor defines the effective resolution of a discrete display device, since the full resolution cannot be used without degrading the viewing experience. The actual sampled resolution will depend on the spot size and intensity distribution. For electron gun scanning systems, the spot usually has a Gaussian intensity distribution; for CCDs, the distribution is somewhat rectangular and is also affected by the sampling grid and inter-pixel spacing. The Kell factor is sometimes stated to exist to account for the effects of interlacing. To understand how the distortion comes about, consider a linear process from sampling to display. In discrete displays the signal is not low-pass filtered, since the display takes discrete values as input. The proximity of the highest frequency of the signal to the lowest frequency of the first repeat spectrum induces a beat frequency pattern; the pattern seen on screen can at times be similar to a Moiré pattern. The Kell factor is the reduction necessary in signal bandwidth such that no beat frequency is perceived by the viewer. A 625-line analog television picture is divided into 576 visible lines from top to bottom. Suppose a card featuring horizontal black and white stripes is placed in front of the camera. The effective vertical resolution of the TV system is equal to the largest number of stripes that can be resolved within the picture height; since it is unlikely the stripes will line up perfectly with the lines on the camera's sensor, the number is slightly less than 576. Using a Kell factor of 0.7, the number can be determined to be 0.7 × 576 = 403.2 lines of resolution. The Kell factor can also be used to determine the horizontal resolution that is required to match the vertical resolution attained by a given number of scan lines; for 576i at 50 Hz, the corresponding figure follows from its 4:3 aspect ratio. The Kell factor applies equally to digital devices: using a Kell factor of 0.9, a 1080p HDTV video system using a CCD camera has an effective vertical resolution of about 0.9 × 1080 = 972 lines.
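The effective-resolution arithmetic above reduces to a one-line product; the sketch below simply reproduces the two worked figures.

```python
# Effective vertical resolution = Kell factor * active scan lines.
def effective_lines(kell_factor, active_lines):
    return kell_factor * active_lines

print(effective_lines(0.7, 576))    # 403.2, the 576i example above
print(effective_lines(0.9, 1080))   # 972.0, the 1080p CCD example
```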
19.
Integrated Services Digital Broadcasting
–
Integrated Services Digital Broadcasting (ISDB) is a Japanese standard for digital television and digital radio used by the country's radio and television networks. ISDB replaced the NTSC-J analog television system and the previously used MUSE Hi-Vision analogue HDTV system in Japan. Digital Terrestrial Television Broadcasting services using ISDB-T started in Japan in December 2003, and in Brazil in December 2007 as a trial. Since then, many countries have adopted ISDB over other digital broadcasting standards. The standards can be obtained for free at the website of the Japanese organization DiBEG and at ARIB. The core standards of ISDB are ISDB-S, ISDB-T, ISDB-C and the 2.6 GHz band mobile broadcasting system; ISDB-T and ISDB-Tsb are for mobile reception in TV bands. 1seg is the name of an ISDB-T service for reception on cell phones and laptop computers. The concept was named for its similarity to ISDN, because both allow multiple channels of data to be transmitted together; this is also much like another digital radio system, Eureka 147, which calls each group of stations on a transmitter an ensemble. ISDB-T operates on unused TV channels, an approach taken by other countries for TV. The various flavors of ISDB differ mainly in the modulations used: the 12 GHz band ISDB-S uses PSK modulation, the 2.6 GHz band digital sound broadcasting uses CDM, and ISDB-T uses COFDM with PSK/QAM. Besides audio and video transmission, ISDB also defines data connections, with the internet as a return channel over several media. This is used, for example, for interactive interfaces like data broadcasting. The ISDB specification describes many interfaces, most importantly the Common Interface for the Conditional Access System (CAS). While ISDB allows various kinds of CAS implementations, in Japan a CAS called B-CAS is used, and ISDB defines the common scrambling algorithm system, called MULTI2, required for scrambling television. The ISDB CAS system in Japan is operated by a company named B-CAS; the Japanese ISDB signal is always encrypted by the B-CAS system, even for free television programs, which is why it is commonly called a "pay-per-view system without charge". An interface for mobile reception is under consideration. Since all digital systems carry digital data content, a DVD or high-definition recorder could easily copy content losslessly; Hollywood requested copy protection, and this was the reason for RMP being mandated. The content has three modes: "copy once", "copy free" and "copy never". "Copy never" programming may only be timeshifted and cannot be permanently stored. There are two types of ISDB receiver: television and set-top box. The aspect ratio of an ISDB-receiving television set is 16:9; televisions fulfilling these specs are called Hi-Vision TV. There are three TV types, cathode ray tube, plasma display panel, and liquid crystal display, with LCD being the most popular Hi-Vision TV on the Japanese market nowadays.
20.
Popular Science
–
Popular Science is an American bi-monthly magazine carrying popular science content, that is, articles for the general reader on science and technology subjects. Popular Science has won over 58 awards, including the American Society of Magazine Editors awards for its excellence in both 2003 and 2004. With roots beginning in 1872, PopSci has been translated into over 30 languages and is distributed to at least 45 countries. The Popular Science Monthly, as the publication was called, was founded in May 1872 by Edward L. Youmans to disseminate scientific knowledge to the educated layman. Youmans had previously worked as an editor for the weekly Appletons' Journal, and early issues were mostly reprints of English periodicals. William Jay Youmans, Edward's brother, helped found Popular Science Monthly in 1872 and was an editor as well; he became editor-in-chief on Edward's death in 1887. The publisher, D. Appleton & Company, was forced for economic reasons to sell the journal in 1900. James McKeen Cattell became the editor in 1900 and the publisher in 1901. Cattell had a background in academics and continued publishing articles for educated readers, but by 1915 the readership was declining and publishing a science journal was a financial challenge. The existing journal would continue the tradition as Scientific Monthly, and existing subscribers would remain subscribed under the new name; Scientific Monthly was published until 1958, when it was absorbed into Science. The Modern Publishing Company acquired the Popular Science Monthly name. This company had purchased Electrician and Mechanic magazine in 1914 and over the next two years merged several magazines together into a science magazine for a general audience. The October 1915 issue was titled Popular Science Monthly and World's Advance; the volume number was that of Popular Science, but the content was that of World's Advance. The new editor was Waldemar Kaempffert, an editor of Scientific American. The change in Popular Science Monthly was dramatic. The old version was a scholarly journal that had eight to ten articles in a 100-page issue, with ten to twenty photographs or illustrations; the new version had hundreds of short, easy-to-read articles with hundreds of illustrations. Editor Kaempffert was writing for the craftsman and hobbyist who wanted to know something about the world of science. The circulation doubled in the first year. From the mid-1930s to the 1960s, the magazine featured the fictional stories of Gus Wilson's Model Garage, centered on car problems. An annual review of changes to the new model-year cars ran in 1940 and 1941, and it continued until the mid-1970s, when the magazine reverted to publishing the new models over multiple issues as information became available.
21.
European Broadcasting Union
–
The European Broadcasting Union (EBU) is an alliance of public service media entities, established on 12 February 1950. As of 2015, the organisation comprises 73 active members in 56 countries. Most EU states are part of this organisation, and therefore the EBU has been subject to supranational legislation and regulation. It also hosted debates between candidates for the European Commission presidency for the 2014 parliamentary elections, but it is unrelated to the EU institutions themselves. It is best known for producing the Eurovision Song Contest, and it is a member of the International Music Council. Members of the EBU are radio and television companies, most of which are government-owned public service broadcasters or privately owned stations with public service missions. Active Members come from as far north as Iceland and as far south as Egypt, and from Ireland in the west to Azerbaijan in the east; Associate Members are from countries and territories beyond Europe, such as Canada, Japan, Mexico, India, and Hong Kong. Associate Members from the United States include ABC, CBS, NBC, the Corporation for Public Broadcasting, Time Warner, and the only individual station, Chicago-based classical music station WFMT. Active Members are those paying EBU members that meet all technical criteria for full membership. Syria is an example of a country within the European Broadcasting Area not complying with all technical criteria for full membership, and thus it is currently only granted Associate Membership. The EBU's highest-profile production is the Eurovision Song Contest, organised by its Eurovision Network. The countries represented in the EBU also co-operate to create documentaries and children's programming, and most EBU broadcasters have a deal to carry the Olympics. Another annually recurring event broadcast across Europe through the EBU is the Vienna New Year's Concert. The theme music played before EBU broadcasts is Marc-Antoine Charpentier's Prelude to Te Deum, well known to Europeans as it is played before and after the Eurovision Song Contest. The EBU was a successor to the International Broadcasting Union (IBU), which was founded in 1925 and had its administrative headquarters in Geneva and technical office in Brussels. It fostered programming exchanges between members and mediated disputes between members that were mostly concerned with frequency and interference issues. It was in effect taken over by Nazi Germany during the Second World War. After the war, France proposed that it would have four votes with the inclusion of its North African colonies, while Great Britain felt it would have insufficient influence with just one vote. On 27 June 1946 the alternative International Broadcasting Organisation (IBO) was founded with 26 members. The following day the IBU met in General Assembly; an attempt was made to dissolve it but failed, though 18 of its 28 members left to join the IBO. For a period of time in the late 1940s both the IBU and IBO vied for the role of organising frequencies, but Britain decided to be involved in neither, and the BBC attempted but failed to find suitable working arrangements with them. However, for practical purposes the IBO rented the IBU technical centre in Brussels. In August 1949 a meeting took place in Stresa, Italy, but it resulted in disagreement between delegates on how to resolve the problems. One proposal was for the European Broadcasting Area to be replaced by one that would exclude Eastern Europe. After Stresa, a consensus emerged among the Western Europeans to form a new organisation, and the BBC proposed it be based in London.
22.
B-MAC
–
B-MAC is a form of analog video encoding, specifically a type of Multiplexed Analogue Components (MAC) encoding. MAC encoding was designed in the mid-1980s for use with Direct Broadcast Satellite systems; other analog video encoding systems include NTSC, PAL and SECAM. Unlike the frequency-division multiplexing (FDM) method used in those systems, MAC encoding uses a time-division multiplexing (TDM) method. B-MAC was a proprietary MAC encoding used by Scientific-Atlanta for encrypting broadcast video services; the full name was Multiple Analogue Component, Type B. B-MAC uses teletext-style non-return-to-zero signaling with a capacity of 1.625 Mbit/s; the video and audio/data signals are therefore combined at baseband. Both PAL and NTSC versions of B-MAC were developed and used. The system was used in Australia for TVRO until 2000, and B-MAC was also used for broadcasts of the American Forces Radio and Television Service. B-MAC has not been used for DTH applications since Primestar switched to a digital delivery system in the mid-1990s. MAC transmits luminance and chrominance data separately in time rather than separately in frequency. Audio, in a format similar to NICAM, was transmitted digitally rather than as an FM subcarrier. The MAC standard included a standard scrambling system, EuroCrypt, a precursor to the standard DVB-CSA encryption system. MAC technology was eventually replaced by the digital DVB-S and DVB-T standards.
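To make the time-division idea concrete, the sketch below lays out one 64 µs scan line as time-compressed chrominance and luminance followed by a digital data burst. The 52 µs active-line figure and the 3:1 and 3:2 compression ratios are commonly cited D2-MAC-style values used here purely for illustration; they are assumptions, not figures taken from the B-MAC specification.

```python
# Illustrative sketch of time-division multiplexing within one MAC scan line.
# The active-line duration and compression ratios below are assumptions for
# demonstration only; they are not the published B-MAC line timings.

LINE_US = 64.0     # one 625-line scan line lasts 64 microseconds
ACTIVE_US = 52.0   # approximate active (visible) portion of a composite line

def mac_line_layout(chroma_compression=3.0, luma_compression=1.5):
    """Return an illustrative time budget for one MAC line: chrominance and
    luminance are each time-compressed and sent one after the other, and the
    remainder of the line carries the digital audio/data burst."""
    chroma_us = ACTIVE_US / chroma_compression   # ~17 us of sped-up chrominance
    luma_us = ACTIVE_US / luma_compression       # ~35 us of sped-up luminance
    data_us = LINE_US - chroma_us - luma_us      # what is left over for sound/data
    return {"chroma_us": chroma_us, "luma_us": luma_us, "data_us": data_us}

print(mac_line_layout())
# {'chroma_us': 17.33..., 'luma_us': 34.66..., 'data_us': 12.0}
```

The point of the sketch is simply that, unlike composite FDM systems, nothing is stacked in frequency: each component gets its own slice of the line.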
23.
HD-MAC
–
HD-MAC was a broadcast television standard proposed by the European Commission in 1986, as part of the Eureka 95 project. It was an attempt by the EEC to provide high-definition television in Europe. It was a mix of an analogue video signal multiplexed with digital sound. The video signal was encoded with a modified D2-MAC encoder, so HD-MAC could be decoded by standard D2-MAC receivers, but in that mode only 625 lines were shown and certain artifacts were visible. To decode the signal in full resolution required a specific HD-MAC tuner. Lines are transmitted in the natural sequence: 1, 2, 3, 4, and so on. Thus, there are two fields in a frame, resulting in a field frequency of 25 × 2 = 50 Hz. The visible part of the signal provided by an HD-MAC receiver was 1152i/25. The amount of information is multiplied by 4, considering the encoder started its operations from a 1440×1152i/25 sampling grid. Work on the HD-MAC specification started officially in May 1986. The purpose was to react against a Japanese proposal, supported by the US; besides preservation of the European electronic industry, there was also a need to produce a standard that would be compliant with the 50 Hz field frequency systems. In truth, the precisely 60 Hz of the Japanese proposal was also worrying the US, and this apparently minor difference had the potential for a lot of trouble. In September 1988, the Japanese performed the first high-definition broadcasts of the Olympic Games. In that same month of September, Europe showed for the first time a credible alternative, namely a complete HD-MAC broadcasting chain, at IBC88 in Brighton. This show included the first progressive scan HD video camera prototypes. For the 1992 Winter Olympics and 1992 Summer Olympics, a public demonstration of HD-MAC broadcasting took place. 60 HD-MAC receivers for the Albertville games and 700 for the Barcelona games were set up in Eurosites to show the capabilities of the standard. 1152-line CRT video projectors were used to create an image a few meters wide, and there were some Thomson Space system 16/9 CRT TV sets as well; the project sometimes used rear-projection televisions. In addition, some 80,000 viewers with D2-MAC receivers were able to watch the channel. It is estimated that 350,000 people across Europe were able to see the demonstration of European HDTV. This project was financed by the EEC. The PAL-converted signal was used by mainstream broadcasters such as SWR, BR and 3sat. Because UHF spare bandwidth was very scarce, HD-MAC was usable de facto only by cable and satellite providers; however, the standard never became popular among broadcasters.
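The factor of four quoted above follows from simple pixel arithmetic, assuming the standard-definition D2-MAC base grid of 720 × 576 (that base grid is the assumption here; the source gives only the HD sampling grid):

\[
\frac{1440 \times 1152}{720 \times 576} = \frac{1\,658\,880}{414\,720} = 4
\]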
24.
ATSC standards
–
Advanced Television Systems Committee standards are a set of standards for digital television transmission over terrestrial, cable, and satellite networks. It is largely a replacement for the analog NTSC standard and, like that standard, is used mostly in the United States; other former users of NTSC, like Japan, have not used ATSC during their digital television transition. The standard is now administered by the Advanced Television Systems Committee. The standard includes a number of patented elements, and licensing is required for devices that use these parts of the standard; key among these is the 8VSB modulation system used for over-the-air broadcasts. ATSC includes two primary high definition video formats, 1080i and 720p. It also includes standard-definition formats, although initially only HDTV services were launched in the digital format. The high-definition television standards defined by the ATSC produce wide screen 16:9 images up to 1920×1080 pixels in size – more than six times the display resolution of the earlier standard. However, many different image sizes are also supported; the reduced bandwidth requirements of lower-resolution images allow up to six standard-definition subchannels to be broadcast on a single 6 MHz TV channel. ATSC standards are marked A/x and can be downloaded for free from the ATSC's website at ATSC.org. ATSC Standard A/72 was approved in 2008 and introduces H.264/AVC video coding to the ATSC system. ATSC supports 5.1-channel surround sound using the Dolby Digital AC-3 format, and numerous auxiliary datacasting services can also be provided. Many aspects of ATSC are patented, including elements of the MPEG video coding and the AC-3 audio coding. The cost of patent licensing, estimated at up to $50 per digital TV receiver, has prompted complaints by manufacturers. As with other systems, ATSC depends on numerous interwoven standards. Broadcasters who used ATSC and wanted to retain an analog signal were temporarily forced to broadcast on two separate channels, as the ATSC system requires the use of an entire separate channel. Channel numbers in ATSC do not correspond to RF frequency ranges. There is also a standard for distributed transmission systems, a form of single-frequency network which allows for the synchronised operation of multiple on-channel booster stations. Dolby Digital AC-3 is used as the audio codec; it was standardized as A/52 by the ATSC. It allows the transport of up to five channels of sound with a sixth channel for low-frequency effects. In contrast, Japanese ISDB HDTV broadcasts use MPEG's Advanced Audio Coding as the audio codec. MPEG-2 audio was a contender for the ATSC standard during the DTV Grand Alliance shootout, but lost out to Dolby AC-3. The Grand Alliance issued a statement finding the MPEG-2 system to be equivalent to Dolby. Later, a story emerged that MIT had entered into an agreement with Dolby whereby the university would be awarded a sum of money if the MPEG-2 system was rejected; Dolby also offered an incentive for Zenith to switch their vote. The ATSC system supports a number of different display resolutions, aspect ratios, and frame rates; the formats are distinguished by resolution, form of scanning, and frame rate. For transport, ATSC uses the MPEG systems specification, known as an MPEG transport stream, to encapsulate data, subject to certain constraints.
ATSC uses 188-byte MPEG transport stream packets to carry data. Before decoding of audio and video takes place, the receiver must demodulate and apply error correction to the signal.
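As a concrete illustration of that container format, the sketch below parses the 4-byte header of a single 188-byte transport stream packet. The field layout follows the MPEG-2 systems specification (ISO/IEC 13818-1); the example packet and the helper name are illustrative only.

```python
# Minimal sketch of parsing the 4-byte header of a 188-byte MPEG transport
# stream packet, the container used by ATSC (field layout per ISO/IEC 13818-1).

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_ts_header(packet: bytes) -> dict:
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid 188-byte transport stream packet")
    b1, b2, b3 = packet[1], packet[2], packet[3]
    return {
        "transport_error": bool(b1 & 0x80),     # flagged when error correction fails
        "payload_unit_start": bool(b1 & 0x40),  # start of a PES packet or section
        "pid": ((b1 & 0x1F) << 8) | b2,         # 13-bit packet identifier
        "scrambling": (b3 >> 6) & 0x03,
        "adaptation_field": (b3 >> 4) & 0x03,
        "continuity_counter": b3 & 0x0F,
    }

# Example: a null packet (PID 0x1FFF) with an all-0xFF payload.
null_packet = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes([0xFF] * 184)
print(parse_ts_header(null_packet))
```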
25.
LaserDisc
–
LaserDisc is a home video format and the first commercial optical disc storage medium, initially licensed, sold and marketed as MCA DiscoVision in North America in 1978. It was not a popular format in Europe and Australia when first released, but was popular in parts of Asia during the 1990s. Its superior video and audio quality made it a popular choice among videophiles. The technologies and concepts behind LaserDisc were the foundation for later optical disc formats, including Compact Disc and DVD. Optical video recording technology, using a transparent disc, was invented by David Paul Gregg and James Russell in 1958. The Gregg patents were purchased by MCA in 1968; by 1969, Philips had developed a videodisc in reflective mode, which has advantages over the transparent mode. MCA and Philips then decided to combine their efforts and first publicly demonstrated the video disc in 1972. LaserDisc was first available on the market, in Atlanta, Georgia, on December 15, 1978; Philips produced the players while MCA produced the discs. The Philips-MCA cooperation was not successful and was discontinued after a few years; several of the scientists responsible for the early research founded Optical Disc Corporation. In 1979, the Museum of Science and Industry in Chicago opened its Newspaper exhibit, which used interactive LaserDiscs to allow visitors to search for the front page of any Chicago Tribune newspaper. This was a very early example of public access to electronically stored information in a museum. The first LaserDisc title marketed in North America was the MCA DiscoVision release of Jaws in 1978; the last title released in North America was Paramount's Bringing Out the Dead in 2000. The last Japanese released movie was the Hong Kong film Tokyo Raiders from Golden Harvest; a dozen or so more titles continued to be released in Japan until the end of 2001. Production of LaserDisc players continued until January 14, 2009, when Pioneer stopped making them. It was estimated that in 1998, LaserDisc players were in approximately 2% of U.S. households. By comparison, in 1999, players were in 10% of Japanese households. LaserDisc was released on June 10, 1981 in Japan, and a total of 3.6 million LaserDisc players were sold there; a total of 16.8 million LaserDisc players were sold worldwide. By the early 2000s, LaserDisc was completely replaced by DVD in the North American retail marketplace, as neither players nor software were then produced; players were still exported to North America from Japan until the end of 2001. The format has retained some popularity among American collectors, and to a greater degree in Japan, where the format was better supported and more prevalent during its life. In Europe, LaserDisc always remained an obscure format. It was chosen by the British Broadcasting Corporation for the BBC Domesday Project in the mid-1980s, a school-based project to commemorate 900 years since the original Domesday Book in England. From 1991 up until the early 2000s, the BBC also used LaserDisc technology to play out the channel idents. The standard home video LaserDisc was 30 cm in diameter and made up of two single-sided aluminum discs layered in plastic. Although appearing similar to compact discs or DVDs, LaserDiscs used analog video stored in the composite domain, with analog FM stereo sound.
27.
W-VHS
–
W-VHS is an HDTV analog recording videocassette format created by JVC. The format was introduced in 1994 for use with Japan's Hi-Vision. In Japan, the letter W is often used as shorthand for the English word double. The recording medium of W-VHS is a ½-inch metallic magnetic tape stored in a cartridge the same size as VHS. The tape can be used to store a 1035i high-definition signal or 480i signals, including a second channel of 480i analog signals. Audio is stored in the VHS Hi-Fi or S-VHS Digital Audio formats. W-VHS VCRs were among the only devices consumers could use to record a standard or high definition video signal via an analog Y/Pb/Pr component interface; very few devices with this capability exist, possibly due to content copyright restrictions. W-VHS has also been used for medical imaging, professional previewing, and broadcasting. Currently, it is difficult to find either W-VHS VCRs or tapes. Since W-VHS tapes are harder to find, users have turned to the similar Digital-S (D-9) tape; while D-9 tapes are still not that easy to find, they are more available than W-VHS tapes in certain regions, and JVC Professional even recommends the use of them for W-VHS. The running time between W-VHS and Digital-S is not the same: a Digital-S tape with a length of 64 min runs approximately 105 min when used with W-VHS.
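The running-time figures quoted above imply a fixed ratio, obtained simply by dividing the two numbers (nothing beyond the source figures is assumed):

\[
\frac{105\ \text{min}}{64\ \text{min}} \approx 1.64
\]

so a Digital-S cassette lasts roughly 1.6 times as long when recorded as W-VHS, consistent with W-VHS moving the same tape at a lower speed.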
28.
Analog high-definition television system
–
Analog high-definition television was an analog video broadcast television system developed in the 1930s to replace early experimental systems with as few as 12 lines. On 2 November 1936 the BBC began transmitting the world's first public regular analog television service from the Victorian Alexandra Palace in north London. It therefore claims to be the birthplace of television broadcasting as we know it today. Most patents were expiring by the end of World War II, leaving no worldwide standard for television. The standards introduced in the early 1950s stayed for half a century. When the UK introduced 405-line television broadcasting in 1936, it was described as high definition television; by today's standards it most certainly was not even approaching high definition. The description merely referred to its definition in comparison to the early 30-line experimental system broadcast in the 1920s. When Europe resumed TV transmissions after WWII, most countries standardized on a 576i television system. The two exceptions were the British 405-line system, which had already been introduced in 1936, and the French 819-line system. During the 1940s Barthélemy reached 1015 lines and even 1042 lines. On November 20, 1948, François Mitterrand, the then Secretary of State for Information, decreed a broadcast standard of 819 lines; broadcasting began at the end of 1949 in this definition. It was used only in France by TF1, and in Monaco by Tele Monte Carlo. However, the theoretical picture quality far exceeded the capabilities of the equipment of its time, and each 819-line channel occupied a wide 14 MHz of VHF bandwidth. By comparison, the modern 720p standard is 1280×720 pixels, of which the 4:3 portion would be 960×720 pixels. The System E implementation provided very good quality, but with an uneconomical use of bandwidth. In addition, an adapted 819-line system known as System F was used in Belgium and Luxembourg; it allowed French 819-line programming to be broadcast on the 7 MHz VHF channels used in those countries, with a substantial cost in horizontal resolution. It was discontinued in Belgium in February 1968, and in Luxembourg in September 1971. TMC in Monaco was the last broadcaster to transmit 819-line television, closing down its System E transmitter in 1985. When switching to 625 lines, most gapfillers did not change UHF channel; they were switched to 625 lines in June 1981. Japan had the earliest working HDTV system, with design efforts going back to 1979. The Japanese system, developed by NHK Science & Technology Research Laboratories in the 1980s, employed filtering tricks to reduce the original source signal to decrease bandwidth utilization. MUSE was marketed as Hi-Vision by NHK. Japanese broadcast engineers rejected conventional vestigial sideband broadcasting; it was decided early on that MUSE would be a satellite broadcast format, as Japan economically supports satellite broadcasting. In the typical setup, three picture elements on a line were actually derived from three separate scans. Stationary images were transmitted at full resolution; whole-camera pans, however, would result in a loss of 50% of horizontal resolution.
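The 960×720 figure quoted for the 4:3 portion of a 720p raster is simple arithmetic on the 720-line height:

\[
720 \times \frac{4}{3} = 960 \quad\Rightarrow\quad 960 \times 720\ \text{pixels}
\]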
29.
SECAM
–
SECAM, also written SÉCAM, is an analogue colour television system first used in France. It was one of three major colour television standards, the others being the European PAL and North American NTSC. Development of SECAM began in 1956 by a team led by Henri de France working at Compagnie Française de Télévision. The first SECAM broadcast was made in France in 1967, making it the first such standard to go live in Europe. The system was also selected as the standard for colour in the Soviet Union, which began broadcasts shortly after the French. The standard spread from these two countries to many client states and former colonies, and SECAM remained a major standard into the 2000s. It is in the process of being phased out and replaced by DVB. Work on SECAM began in 1956, and the technology was ready by the end of the 1950s. A version of SECAM for the French 819-line television standard was devised and tested, but not introduced. The first proposed system was called SECAM I in 1961, followed by studies to improve compatibility; these improvements were called SECAM II and SECAM III, with the latter being presented at the 1965 CCIR General Assembly in Vienna. Further improvements were SECAM III A followed by SECAM III B, the system adopted for general use in 1967. Soviet technicians were involved in the development of the standard, and created their own incompatible variant called NIR or SECAM IV; the team was working in Moscow's Telecentrum under the direction of Professor Shmakov. The NIR designation comes from the name of the Nautchno-Issledovatelskiy Institut Radio. Two standards were developed: non-linear NIR, in which a process analogous to gamma correction is used, and linear NIR or SECAM IV, which omits this process. SECAM was inaugurated in France on 1 October 1967, on la deuxième chaîne (the second channel). A group of four suited men, a presenter and three contributors to the system's development, were shown standing in a studio; following a count from 10, at 2:15 pm the black-and-white image switched to color. In 1967, CLT of Lebanon became the third television station in the world, after the Soviet Union and France, to broadcast in color utilizing the French SECAM technology. The first color television sets cost 5,000 francs. Color TV was not very popular initially: only about 1,500 people watched the inaugural program in color. A year later, only 200,000 sets had been sold of an expected million; this pattern was similar to the earlier slow build-up of color television popularity in the US. SECAM was later adopted by former French and Belgian colonies, Greece, the Soviet Union and Eastern bloc countries, and Middle Eastern countries. However, with the fall of communism, and following a period when multi-standard TV sets became a commodity, many of those countries switched to PAL. Other countries, notably the United Kingdom and Italy, briefly experimented with SECAM before opting for PAL. Since the late 2000s, SECAM has been in the process of being phased out. Some have argued that the primary motivation for the development of SECAM in France was to protect French television equipment manufacturers.
30.
NTSC
–
The first NTSC standard was developed in 1941 and had no provision for color. In 1953 a second NTSC standard was adopted, which allowed for color television broadcasting compatible with the existing stock of black-and-white receivers. NTSC was the first widely adopted broadcast color system and remained dominant until 1997. North America, parts of Central America, and South Korea are adopting or have adopted the ATSC standards, while other countries are adopting or have adopted other standards instead of ATSC. After nearly 70 years, the majority of over-the-air NTSC transmissions in the United States ceased on January 1, 2010; the majority of NTSC transmissions ended in Japan on July 24, 2011, with the Japanese prefectures of Iwate, Miyagi, and Fukushima ending the next year. In March 1941, the committee issued a standard for black-and-white television that built upon a 1936 recommendation made by the Radio Manufacturers Association. Technical advancements of the vestigial side band technique allowed for the opportunity to increase the image resolution. The NTSC selected 525 scan lines as a compromise between RCA's 441-scan-line standard and Philco's and DuMont's desire to increase the number of lines to between 605 and 800. The standard recommended a frame rate of 30 frames per second; other standards in the final recommendation were an aspect ratio of 4:3, and frequency modulation for the sound signal. In January 1950, the committee was reconstituted to standardize color television, and in December 1953 it unanimously approved what is now called the NTSC color television standard. The compatible color standard retained full backward compatibility with existing black-and-white television sets: color information was added to the black-and-white image by introducing a color subcarrier of precisely 315/88 MHz. These changes amounted to 0.1 percent and were tolerated by existing television receivers. The FCC had briefly approved a different color standard, developed by CBS, starting in October 1950. However, this standard was incompatible with black-and-white broadcasts; it used a rotating color wheel, reduced the number of scan lines from 525 to 405, and increased the field rate from 60 to 144, but had an effective frame rate of only 24 frames per second. CBS rescinded its system in March 1953, and the FCC replaced it on December 17, 1953, with the NTSC color standard. Later that year, the improved TK-41 became the standard camera used throughout much of the 1960s. The NTSC standard has been adopted by many countries, including most of the Americas. With the advent of digital television, analog broadcasts are being phased out. Most US NTSC broadcasters were required by the FCC to shut down their analog transmitters in 2009; low-power stations, Class A stations and translators were required to shut down by 2015. NTSC color encoding is used with the System M television signal. Each frame is composed of two fields, each consisting of 262.5 scan lines, for a total of 525 scan lines.
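The 315/88 MHz figure and the 0.1 percent rate change fit together; as a worked check, using the standard NTSC relationship of 455/2 colour subcarrier cycles per scan line:

\[
f_{sc} = \frac{315}{88}\ \text{MHz} \approx 3.579545\ \text{MHz},\qquad
f_{H} = \frac{2 f_{sc}}{455} \approx 15{,}734.27\ \text{Hz},\qquad
\frac{f_{H}}{525} \approx 29.97\ \text{frames per second}
\]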
31.
PAL
–
Phase Alternating Line (PAL) is a colour encoding system for analogue television, used in broadcast television systems in most countries broadcasting at 625 lines per frame and 50 fields per second. Other common colour encoding systems are NTSC and SECAM. All the countries using PAL are currently in the process of conversion, or have already converted, to DVB, ISDB or DTMB. This page primarily discusses the PAL colour encoding system; the articles on broadcast television systems and analogue television further describe frame rates, image resolution and audio modulation. To overcome NTSC's shortcomings, alternative standards were devised, resulting in the development of PAL. The goal was to provide a colour TV standard for the European picture frequency of 50 fields per second, and to find a way to eliminate the problems with NTSC. PAL was developed by Walter Bruch at Telefunken in Hannover, Germany, with important input from Dr. Kruse. The format was patented by Telefunken in 1962, citing Bruch as inventor, and unveiled to members of the European Broadcasting Union on 3 January 1963. When asked why the system was named PAL and not Bruch, the inventor answered that a Bruch system would not have sold very well. The first broadcasts began in the United Kingdom in June 1967; the one BBC channel initially using the broadcast standard was BBC2, which had been the first UK TV service to introduce 625 lines in 1964. The Telefunken PALcolor 708T was the first commercial PAL TV set; it was followed by the Loewe-Farbfernseher S920 and F900. Telefunken was later bought by the French electronics manufacturer Thomson; Thomson also bought the Compagnie Générale de Télévision, where Henri de France developed SECAM, the first European standard for colour television. The term PAL was often used informally and somewhat imprecisely to refer to the 625-line/50 Hz television system in general; accordingly, DVDs were labelled as PAL or NTSC even though technically the discs do not carry either a PAL or an NTSC composite signal. CCIR 625/50 and EIA 525/60 are the proper names for these scanning standards; PAL and NTSC refer only to the colour encoding. Both the PAL and the NTSC system use a quadrature amplitude modulated subcarrier carrying the chrominance information added to the video signal to form a composite video baseband signal. The frequency of this subcarrier is 4.43361875 MHz for PAL and NTSC 4.43. The SECAM system, on the other hand, uses a frequency modulation scheme on its two line-alternate colour subcarriers, 4.25000 and 4.40625 MHz. Early PAL receivers relied on the eye to do that cancelling; the effect is that phase errors result in saturation changes, which are less objectionable than the equivalent hue changes of NTSC. In any case, NTSC, PAL, and SECAM all have chrominance bandwidth reduced greatly compared to the luminance signal. The 4.43361875 MHz frequency of the colour carrier is a result of 283.75 colour clock cycles per line plus a 25 Hz offset to avoid interferences. Since the line frequency is 15625 Hz, the carrier frequency is calculated as follows: 4.43361875 MHz = 283.75 × 15625 Hz + 25 Hz.
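The arithmetic in that last equation is easy to re-check; the short sketch below recomputes the line frequency from the 625-line/25-frame raster and then the subcarrier frequency, purely as a verification of the figures already quoted:

```python
# Quick check of the PAL subcarrier arithmetic quoted above.
lines_per_frame = 625
frames_per_second = 25
line_freq_hz = lines_per_frame * frames_per_second   # 15 625 Hz

# 283.75 colour clock cycles per line, plus the 25 Hz interference-avoiding offset
subcarrier_hz = 283.75 * line_freq_hz + 25

print(line_freq_hz)    # 15625
print(subcarrier_hz)   # 4433618.75  ->  4.43361875 MHz
```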
32.
ISDB
–
The Integrated Services Digital Broadcasting (ISDB) is a Japanese standard for digital television and digital radio used by the country's radio and television networks. ISDB replaced the NTSC-J analog television system and the previously used MUSE Hi-Vision analogue HDTV system in Japan. Digital Terrestrial Television Broadcasting services using ISDB-T started in Japan in December 2003 and in Brazil in December 2007 as a trial. Since then, many countries have adopted ISDB over other digital broadcasting standards. The standards can be obtained for free at the Japanese organization DiBEG website and at ARIB. The core standards of ISDB are ISDB-S, ISDB-T, ISDB-C and the 2.6 GHz band mobile broadcasting system; ISDB-T and ISDB-Tsb are for mobile reception in TV bands. 1seg is the name of an ISDB-T service for reception on cell phones and laptop computers. The concept was named for its similarity to ISDN, because both allow multiple channels of data to be transmitted together. This is also much like another digital radio system, Eureka 147, which calls each group of stations on a transmitter an ensemble. ISDB-T operates on unused TV channels, an approach also taken by other countries for TV. The various flavors of ISDB differ mainly in the modulations used: the 12 GHz band ISDB-S uses PSK modulation, the 2.6 GHz band digital sound broadcasting uses CDM, and ISDB-T uses COFDM with PSK/QAM. Besides audio and video transmission, ISDB also defines data connections with the internet as a return channel over several media. This is used, for example, for interactive interfaces like data broadcasting. The ISDB specification describes a lot of interfaces, but most importantly the Common Interface for the Conditional Access System (CAS). While ISDB has examples of implementing various kinds of CAS, in Japan a CAS called B-CAS is used; ISDB defines the common scrambling algorithm system called MULTI2 required for scrambling television. The ISDB CAS system in Japan is operated by a company named B-CAS, and the Japanese ISDB signal is always encrypted by the B-CAS system even if it is a free television program. That is why it is commonly called a pay-per-view system without charge. An interface for mobile reception is under consideration. Since all digital systems carry digital data content, a DVD or high-definition recorder could easily copy content losslessly; Hollywood requested copy protection, and this was the reason for RMP being mandated. The content has three modes, 'copy once', 'copy free' and 'copy never'; 'copy never' programming may only be timeshifted and cannot be permanently stored. There are two types of ISDB receiver, television and set-top box. The aspect ratio of an ISDB-receiving television set is 16:9; televisions fulfilling these specs are called Hi-Vision TVs. There are three TV types: cathode ray tube, plasma display panel and liquid crystal display, with LCD being the most popular Hi-Vision TV on the Japanese market nowadays.
33.
Video
–
Video is an electronic medium for the recording, copying, playback, broadcasting, and display of moving visual media. Video systems vary greatly in display resolution and refresh rate. Video can be carried on a variety of media, including radio broadcast, tapes, DVDs and computer files. Video was originally exclusively a live technology. Charles Ginsburg led an Ampex research team developing one of the first practical video tape recorders. In 1951 the first video tape recorder captured live images from television cameras by converting the cameras' electrical impulses and saving the information onto magnetic video tape. Video recorders were sold for $50,000 in 1956; however, prices gradually dropped over the years, and in 1971 Sony began selling videocassette recorder decks and tapes into the consumer market. The use of digital techniques in video created digital video, which allowed higher quality and, eventually, much lower cost than earlier analog technology. After the invention of the DVD in 1997 and Blu-ray Disc in 2006, sales of videotape and recording equipment plummeted. The advent of digital broadcasting and the subsequent digital television transition are in the process of relegating analog video to the status of a legacy technology in most parts of the world. PAL and SECAM standards specify 25 frames per second, while NTSC standards specify 29.97 frames per second. Film is shot at the slower frame rate of 24 frames per second, which slightly complicates the process of transferring a cinematic motion picture to video. The minimum frame rate to achieve a comfortable illusion of a moving image is about sixteen frames per second. Video can be interlaced or progressive. Analog display devices reproduce each frame in the same way, effectively doubling the frame rate as far as perceptible overall flicker is concerned. NTSC, PAL and SECAM are interlaced formats; abbreviated video resolution specifications often include an i to indicate interlacing. For example, the PAL video format is specified as 576i50, where 576 indicates the total number of horizontal scan lines, i indicates interlacing, and 50 indicates 50 fields per second. In progressive scan systems, each refresh period updates all scan lines in each frame in sequence. When displaying a natively progressive broadcast or recorded signal, the result is optimum spatial resolution of both the stationary and moving parts of the image. Deinterlacing cannot, however, produce video quality that is equivalent to true progressive scan source material. Aspect ratio describes the proportions of video screens and video picture elements. All popular video formats are rectilinear, and so can be described by a ratio between width and height. The screen aspect ratio of a traditional television screen is 4:3, or about 1.33:1. High definition televisions use an aspect ratio of 16:9. The aspect ratio of a full 35 mm film frame with soundtrack is 1.375:1. Therefore, a 720 by 480 pixel NTSC DV image displays with the 4:3 aspect ratio if the pixels are thin. The popularity of viewing video on mobile phones has led to the growth of vertical video.
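The thin-pixel remark can be made concrete with a little arithmetic: comparing the intended display aspect ratio with the stored pixel grid yields the pixel aspect ratio (the variable names below are just for illustration):

```python
# Worked example of the non-square-pixel point above: a 720x480 NTSC DV frame
# shown at a 4:3 display aspect ratio implies "thin" (narrower-than-tall) pixels.
width, height = 720, 480
display_aspect = 4 / 3
storage_aspect = width / height                 # 1.5
pixel_aspect = display_aspect / storage_aspect  # ~0.889
print(round(storage_aspect, 3), round(pixel_aspect, 3))
```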
34.
Analog television
–
Analog television or analogue television is the original television technology that uses analog signals to transmit video and audio. In an analog television broadcast, the brightness, colors and sound are represented by variations of the amplitude, frequency or phase of the signal. Analog signals vary over a continuous range of possible values, which means that electronic noise and interference may be introduced; so with analog, a weak signal becomes snowy and subject to interference. In contrast, a moderately weak digital signal and a very strong digital signal transmit equal picture quality. Analog television may be wireless or can be distributed over a cable network using cable converters. All broadcast television systems preceding digital television used analog signals. Analog television around the world has been in the process of shutting down since the late 2000s. The earliest systems were mechanical systems which used spinning disks with patterns of holes punched into the disc to scan an image; a similar disk reconstructed the image at the receiver. Synchronization of the receiver disc rotation was handled through sync pulses broadcast with the image information. However, these mechanical systems were slow, the images were dim and flickered severely, and camera systems used similar spinning discs and required intensely bright illumination of the subject for the light detector to work. Analog television did not really begin as an industry until the development of the cathode-ray tube, whose electron beam could be swept across the screen much faster than any mechanical disc system, allowing for more closely spaced scan lines and much higher image resolution. Also, far less maintenance was required of an all-electronic system compared to a spinning disc system. All-electronic systems became popular with households after the Second World War. Broadcasters using analog television systems encode their signal using different systems; the official systems of transmission are named A, B, C, D, E, F, G, H, I, K, K1, L, M and N. These systems determine the number of lines, channel width, vision bandwidth and vision-sound separation, among other parameters. Each frame of a television image is composed of lines drawn on the screen. The lines are of varying brightness; the whole set of lines is drawn quickly enough that the human eye perceives it as one image. The next sequential frame is displayed, allowing the depiction of motion. The analog television signal contains timing and synchronization information, so that the receiver can reconstruct a two-dimensional moving image from a one-dimensional time-varying signal. The first commercial systems were black-and-white; the beginning of color television was in the 1950s. A practical television system needs to take luminance, chrominance, synchronization, and audio signals and broadcast them over a radio transmission; the transmission system must include a means of television channel selection.
35.
CCIR System M
–
Japan uses System J, which is nearly identical to System M. The systems were given their letter designations in the ITU identification scheme adopted in Stockholm in 1961. Both System M and System J display 525 lines of video at 30 frames per second using 6 MHz spacing between channel numbers, and are used for both VHF and UHF channels. Currently, Systems M and J are being replaced by digital broadcasting in regions such as the Americas, Japan, South Korea and Taiwan. Strictly speaking, System M does not designate how color is transmitted. In NTSC-M and Japan's NTSC-J, the frame rate is offset slightly, becoming 30/1.001 frames per second. PAL-M signals remain at 30 frames per second instead of slowing down to 29.97 like NTSC. NTSC is the dominant color system used with System M, so much so that System M is often referred to simply as NTSC.
36.
NTSC-J
–
NTSC-J is the discontinued analog television system and video display standard for the region of Japan, which ceased operations in 44 of the country's 47 prefectures on July 24, 2011. Analog broadcasting ended on March 31, 2012 in the three prefectures devastated by the 2011 Tohoku earthquake. While NTSC-M is an official standard, 'J' is more a colloquial indicator used in marketing but not an official term. It is based on regular NTSC, but is slightly different: the black level and blanking level of the NTSC-J signal are identical, as they are in PAL, another video standard, while in American NTSC the black level is slightly higher than the blanking level. Because of the way this appears in the waveform, the black level is also called pedestal. Since the difference is small, a slight change of the brightness setting is all that is required to enjoy the other variant of NTSC on any set as it is supposed to be. NTSC-J also uses a white reference of 9300 K instead of the usual NTSC standard of 6500 K. The over-the-air RF frequencies in use in Japan do not match those of the US NTSC standard, and the encoding of the stereo subcarrier also differs between NTSC-M/MTS and the Japanese broadcast standards. The term NTSC-J is also used to distinguish regions in console video games: NTSC-J is used as the name of the video gaming region of Japan, South East Asia, Taiwan, Hong Kong, Macau and South Korea. Most games designated as part of this region will not run on hardware designated as part of the NTSC-US, PAL or NTSC-C regions, mostly due to regional lockout. China received its own region, NTSC-C, due to fears of an influx of illegal copies flooding out of China.
37.
PAL-M
–
PAL-M is the analog TV system used in Brazil since February 19, 1972. At that time, Brazil was the first South American country to broadcast in colour. Colour TV broadcasts began in September 1972, when the TV networks Globo, Tupi and Bandeirantes transmitted the Caxias do Sul Grape Festival. The transition from black and white to colour was not complete until 1977; two years later, in 1979, colour broadcasting nationwide in Brazil was commonplace. PAL-M signals are identical to North American NTSC signals, except for the encoding of the colour carrier; therefore, PAL-M will display in monochrome with sound on an NTSC set and vice versa. PAL-M is incompatible with 625-line based versions of PAL because its frame rate, scan line count and colour subcarrier frequency differ. It will therefore give a rolling and/or squashed monochrome picture with no sound on a native European PAL television. PAL-M being unique to one country, the need to convert it to or from other standards often arises. Conversion to or from NTSC is easy, as only the colour encoding needs to be changed; the frame rate and scan lines can remain untouched. Conversion to or from PAL/625-line/25 frame/s and SECAM/625/25 signals involves changing the frame rates as well as the scan lines; this is achieved using complicated circuitry involving a frame store. The fact that the colour encoding of PAL-M and PAL/625/25 is the same does not help. However, some special VHS video recorders are available which can allow viewers the flexibility of enjoying PAL-M recordings using a standard PAL colour TV, or even through multi-system TV sets; video recorders like the Panasonic NV-W1E, AG-W2, AG-W3, NV-J700AM, Aiwa HV-MX100 and HV-MX1U offer this capability. The PAL colour system can also be applied to an NTSC-like 525-line picture to form what is often known as PAL-60. This non-standard signal is used in European domestic VCRs. It is not identical to PAL-M and is incompatible with it; because the colour subcarrier is at a different frequency, it will display in monochrome on PAL-M sets. Before SBTVD, from 1999 to 2000, the ABERT/SET group in Brazil carried out system comparison tests of DTV under the supervision of the CPqD foundation; the comparison tests were done under the direction of a work group of SET and ABERT. Originally, Brazil, along with Argentina, Paraguay and Uruguay, planned to adopt the DVB-T system; however, the ABERT/SET group selected ISDB-T as the best system among ATSC, DVB-T and ISDB-T. The outdoor coverage field-test results in the Brazilian digital television tests showed that ISDB-T was the most robust system under Brazilian conditions.
38.
CCIR System B
–
CCIR System B was the 625-line analog broadcast television system which, at its peak, was used in more countries than any other. It is being replaced across Western Europe and parts of Asia by digital broadcasting. The system was developed for the VHF band. Some of the important specs are listed below. A frame is the total picture, and the frame rate is the number of pictures displayed in one second, 25 in this system. Each frame is scanned twice, interleaving odd and even lines; each scan is known as a field, so the field rate is twice the frame rate, or 50 Hz. In each frame there are 625 lines, so the line rate is 625 times the frame frequency, or 625 × 25 = 15625 Hz. The video bandwidth is 5.0 MHz, and the video signal modulates the carrier by amplitude modulation, but a portion of the lower side band is suppressed; this technique is known as vestigial side band modulation. The polarity of modulation is negative, meaning that an increase in the instantaneous brightness of the video signal results in a decrease in RF power and vice versa; specifically, the sync pulses result in maximum power from the transmitter. The primary audio signal is frequency modulated, with a pre-emphasis time constant of τ = 50 μs. The separation between the primary audio FM carrier and the video carrier is 5.5 MHz. In specifications, other parameters such as vestigial sideband characteristics are sometimes also given. System B has variously been used with both the PAL and SECAM colour systems. It could have been used with a 625-line variant of the NTSC color system, but apart from possible technical tests in the 1950s this was never done. When used with PAL, the colour subcarrier is 4.43361875 MHz; on the low-frequency side, the full 1.3 MHz sideband is radiated. When used with SECAM, the R line's carrier is at 4.40625 MHz, deviating from +350±18 kHz to −506±25 kHz, and the B line's carrier is at 4.250 MHz, deviating from +506±25 kHz to −350±18 kHz. Neither colour encoding system has any effect on the bandwidth of System B as a whole. Enhancements have been made to the specification of System B's audio capabilities over the years; the introduction of Zweiton in the 1970s allowed for stereo sound or twin monophonic audio tracks.
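The figures above can be recomputed mechanically; the short sketch below derives the field rate, line rate and line period from the 625-line/25-frame raster and restates the quoted bandwidth figures. Nothing here goes beyond the numbers already given in the text.

```python
# Quick recomputation of the CCIR System B figures quoted above.
frames_per_second = 25
fields_per_frame = 2          # interlaced: odd lines, then even lines
lines_per_frame = 625

field_rate_hz = frames_per_second * fields_per_frame   # 50 Hz
line_rate_hz = lines_per_frame * frames_per_second     # 15 625 Hz
line_period_us = 1e6 / line_rate_hz                    # 64 microseconds per line

video_bandwidth_mhz = 5.0
sound_offset_mhz = 5.5        # FM sound carrier sits 5.5 MHz from the vision carrier

print(field_rate_hz, line_rate_hz, round(line_period_us, 1))
# 50 15625 64.0
```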
39.
CCIR System G
–
CCIR System G is an analog broadcast television system used in many countries. There are several systems in use, and the letter G is assigned to the European UHF system, which is also used in the majority of Asian and African countries. Some of the important specs are listed below. A frame is the total picture, and the frame rate is the number of pictures displayed in one second, but each frame is actually scanned twice, interleaving odd and even lines. Each scan is known as a field, so the field rate is twice the frame rate. In each frame there are 625 lines, so the line rate is 625 times the frame frequency, or 625 × 25 = 15625 Hz. The RF parameters of the signal are exactly the same as those for System B, which is used on the 7.0 MHz wide channels of the VHF bands; the only difference is the width of the guard band between the channels, which on System G is 1.0 MHz wider than for System B. A few countries use a variant of System G which is known as System H. System H is similar to System G but the lower side band is 500 kHz wider; this makes better use of the 8.0 MHz channels of the UHF bands by reducing the width of the guard band by 500 kHz to the still perfectly generous value of 650 kHz.
40.
CCIR System H
–
CCIR System H is an analog broadcast television system primarily used in Belgium, the Balkans and Malta on the UHF bands. Some of the important specs are listed below. A frame is the total picture, and the frame rate is the number of pictures displayed in one second, but each frame is actually scanned twice, interleaving odd and even lines. Each scan is known as a field, so the field rate is twice the frame rate. In each frame there are 625 lines, so the line rate is 625 times the frame frequency, or 625 × 25 = 15625 Hz. The RF parameters of the signal are almost the same as those for System B, which is used on the 7.0 MHz wide channels of the VHF bands. The only difference to the RF spectrum of the signal is that the vestigial sideband is 500 kHz wider, at 1.25 MHz; due to this and the extra width of the channel allocations at UHF, the width of the guard band between the channels is 650 kHz. Many countries use a variant of System H which is known as System G. System G is similar to System H, but the lower side band is 500 kHz narrower; this makes poor use of the 8.0 MHz channels of the UHF bands by merely increasing the width of the guard band by 500 kHz to 1.15 MHz. The advantage is that the RF spectrum of System G is the same as System B.
41.
CCIR System I
–
CCIR System I is an analog broadcast television system. The UK started its own 625-line television service in 1964, also using System I, and since then System I has been adopted for use by Hong Kong, Macau, the Falkland Islands and South Africa. The Republic of Ireland has extended its use of System I onto the UHF bands. As of late 2012, analog television is no longer transmitted in either the UK or the Republic of Ireland; South Africa expects to discontinue System I in 2013, and Hong Kong by 2015. Some of the important specs are listed below. A frame is the total picture, and the frame rate is the number of pictures displayed in one second, but each frame is scanned twice, interleaving odd and even lines. Each scan is known as a field, so the field rate is twice the frame rate. In each frame there are 625 lines, so the line rate is 625 times the frame frequency, or 625 × 25 = 15625 Hz. The total RF bandwidth of System I was about 7.4 MHz. In specifications, other parameters such as vestigial sideband characteristics and the gamma of the display device are sometimes also given. System I has only been used with the PAL colour system; apart from possible technical tests in the 1960s, no other colour system has ever been used with it officially. When used with PAL, the colour subcarrier is 4.43361875 MHz, and the sidebands of the PAL signal have to be truncated on the high-frequency side at +1.066 MHz; on the low-frequency side, the full 1.3 MHz sideband width is radiated. The sound carrier was later moved slightly, from 6.0 MHz to 5.9996 MHz; this is such a slight frequency shift that no alterations needed to be made to existing System I television sets when the change was made. No colour encoding system has any effect on the bandwidth of System I as a whole. Enhancements have been made to the specification of System I's audio capabilities over the years; starting in the late 1980s and early 1990s it became possible to add a digital signal carrying NICAM sound, and good channel planning means that under normal situations no ill effects are seen or heard. The NICAM system used with System I adds a 700 kHz wide digital signal. VHF Band I was already discontinued for TV broadcasting well before Ireland's digital switchover and is no longer used for TV broadcasting. UHF takeup in Ireland was slower than in the UK; a written answer in the Dáil Éireann shows that even by mid-1988 Ireland was only transmitting on UHF from four main transmitters and 11 relays. Some channel numbers officially do not exist, lying between UHF Band IV and Band V, and were supposed to be reserved for radio astronomy, although they saw use from 1997 until the finish of analog TV in the UK in 2012; other channels were allocated but never used in the UK.