Composite video
Composite video is an analog video transmission format that carries standard-definition video, typically at 480i or 576i resolution, on a single channel. Video information is encoded on one channel, unlike the higher-quality S-Video (two channels) and component video (three or more channels). In all of these video formats, audio is carried on a separate connection. Composite video is known by the initials CVBS, for composite video baseband signal or color, video and sync, or is referred to as SD video for the standard-definition television signal it conveys. There are three dominant variants of composite video: NTSC, PAL, and SECAM. A composite video signal combines, on one wire, the video information required to recreate a color picture as well as line and frame synchronization pulses. The color video signal is a linear combination of the luminance of the picture and a modulated subcarrier that carries the chrominance, or color information, a combination of hue and saturation. Details of the combining process vary among the NTSC, PAL, and SECAM systems.
The frequency spectrum of the modulated color signal overlaps that of the baseband signal; separation relies on the fact that frequency components of the baseband signal tend to cluster near harmonics of the horizontal scanning rate, while the color carrier is selected to be an odd multiple of half the horizontal scanning rate. In other words, the combination of luma and chroma is a frequency-division technique, but it is much more complex than typical frequency-division multiplexing systems such as the one used to multiplex analog radio stations on the AM and FM bands. A gated and filtered signal derived from the color subcarrier, called the burst or colorburst, is added to the horizontal blanking interval of each line as a synchronizing signal and amplitude reference for the chrominance signals; the burst signal is inverted in phase relative to the reference subcarrier. Composite video can be directed to any broadcast channel simply by modulating a suitable RF carrier wave with it. Most home analog video equipment records a signal in composite format: LaserDiscs store a true composite signal, while consumer videotape formats and lesser commercial and industrial tape formats use modified composite signals.
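As a concrete illustration of that interleaving, the short sketch below reproduces the standard NTSC numbers (the 4.5 MHz sound-carrier offset and the 455/2 subcarrier relationship come from the NTSC specification; the variable names are only illustrative):

```python
# Illustrative check of NTSC frequency interleaving (figures from the NTSC spec).
f_sound = 4.5e6              # aural carrier offset above the visual carrier, Hz
f_h = f_sound / 286          # horizontal line rate, ~15,734.27 Hz
f_sc = (455 / 2) * f_h       # color subcarrier: an odd multiple (455) of half the line rate
print(f"line rate        : {f_h:,.2f} Hz")
print(f"color subcarrier : {f_sc:,.0f} Hz")   # ~3,579,545 Hz, i.e. about 3.58 MHz
# Because 455 is odd, the chroma sidebands fall halfway between the luma harmonics
# of the line rate, which is what allows a comb filter to separate the two signals.
```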
On playback, these devices give the user the option of outputting the baseband signal or modulating it onto a VHF or UHF frequency compatible with a TV tuner. The professional television production uncompressed digital videocassette format known as D-2 directly records and reproduces standard NTSC composite video signals, using PCM encoding of the analog signal on the magnetic tape. In home applications, the composite video signal is carried on an RCA connector, typically yellow, accompanied by red and white connectors for the right and left audio channels respectively. BNC connectors and higher-quality coaxial cable are used in professional television studios and post-production applications. BNC connectors were used for composite video connections on early home VCRs, accompanied by either phono connectors or a 5-pin DIN connector for audio; the BNC connector, in turn, postdated the PL-259 connector which featured on first-generation VCRs. In Europe, SCART connections are used instead of RCA jacks, so, where available, RGB is used instead of composite video with computers, video game consoles, and DVD players. Video cables are low in capacitance; typical values run from 52 pF/m for an HDPE-foamed dielectric precision video cable to 69 pF/m for a solid PE dielectric cable.
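As a rough plausibility check of those capacitance figures, the sketch below assumes an ideal lossless 75-ohm coaxial line, for which capacitance per unit length is the square root of the dielectric constant divided by the product of the speed of light and the characteristic impedance; the dielectric constants are typical textbook values, not measurements of any particular cable:

```python
# Rough check of video-coax capacitance per metre for an ideal 75-ohm line.
# C' = sqrt(eps_r) / (c * Z0) for a lossless transmission line.
c = 299_792_458        # speed of light in vacuum, m/s
Z0 = 75.0              # characteristic impedance of video coax, ohms
for name, eps_r in (("foamed HDPE dielectric", 1.5), ("solid PE dielectric", 2.25)):
    C = (eps_r ** 0.5) / (c * Z0)
    print(f"{name}: about {C * 1e12:.0f} pF/m")
# Prints roughly 54 pF/m and 67 pF/m, consistent with the 52-69 pF/m range above.
```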
Some devices that connect to a TV, such as VCRs, older video game consoles, and home computers of the 1980s, output a composite signal. This may be converted to RF with an external box known as an RF modulator, which generates the proper carrier. Sometimes this modulator was built into the product, and sometimes it was an external unit powered by the computer or by an independent power supply. In the United States, using an external RF modulator frees the manufacturer from obtaining FCC approval for each variation of a device. Through the early 1980s, electronics that output a television channel signal were required to meet the same shielding requirements as broadcast television equipment, forcing manufacturers such as Apple to omit an RF modulator and Texas Instruments to offer their RF modulator as an external unit, which they had certified by the FCC without mentioning that they planned to sell it with a computer.
In Europe, while most countries used the same broadcast standard, there were different modulation standards; using an external modulator allowed manufacturers to make a single product and sell it in different countries by changing only the modulator. Video game consoles, on the other hand, were less of an issue with FCC approval because the circuitry was inexpensive enough to allow for channel 3/4 outputs. Modern devices with analog outputs have omitted channel 3 and 4 outputs in favor of composite and S-video outputs, or have switched to using HDMI or other digital outputs.
Home theater PC
A home theater PC (HTPC) or media center computer is a convergence device that combines some or all of the capabilities of a personal computer with a software application that supports video and audio playback, and sometimes video recording functionality. In recent years, other types of consumer electronics, including gaming systems and dedicated media devices, have crossed over to manage video and music content; the term "media center" refers to specialized application software designed to run on standard personal computers. An HTPC and other convergence devices integrate components of a home theater into a unit co-located with a home entertainment system. An HTPC system typically has a remote control, and its software interface uses a 10-foot user interface design so that it can be comfortably viewed at typical television viewing distances. An HTPC can be purchased pre-configured with the hardware and software needed to add video programming or music to the PC, or enthusiasts can piece together a system out of discrete components as part of a software-based HTPC.
Since 2007, digital media player and smart TV software has been incorporated into consumer electronics through software or hardware changes, including video game consoles, Blu-ray players, networked media players, and set-top boxes. The increased availability of specialized devices, coupled with paid and free digital online content, now offers alternatives to multipurpose personal computers. The HTPC as a concept is the product of several technology innovations, including high-powered home computers, digital media, and the shift from standard-resolution CRTs to high-definition monitors and large-screen televisions. Integrating televisions and personal computers dates back to the late 1980s with tuner cards that could be added to Commodore Amiga PCs via the Video Toaster; this adaptation allowed a small video window to appear on the screen with broadcast or cable content. Apple Computer developed the Macintosh TV in late 1993, which included a tuner card built into a Macintosh LC 520 chassis, but withdrew it from the market after only 10,000 units had shipped.
In 1996, Gateway Computer unveiled the Destination computer, which included a tuner card and video card. The unit cost $4,000 and integrated television viewing and computer functions on one color monitor. The Destination was called a "PC-TV Combo", but by December the term "home-theater PC" had appeared in mainstream media: "The home theater PC will be a combination entertainment and information appliance". By 2000, DVD players had become ubiquitous and consumers were seeking ways to improve the picture; the value of using a computer instead of a stand-alone DVD player drove more usage of the PC as a home media device. In particular, the desire for progressive-scan DVD playback with better video fidelity led some consumers to consider their computers instead of expensive DVD players; as DVD players dropped in price, so did PCs and their related video processing and storage capabilities. In 2000, DVD decryption software using the DeCSS algorithm let DVD owners consolidate their DVD video libraries on hard drives.
Innovations like TiVo and ReplayTV allowed viewers to store and time-shift broadcast content using specially designed computers; ReplayTV, for instance, ran on a VxWorks platform. Incorporating these capabilities into PCs was well within the ability of a computer hobbyist willing to build and program such a system, and key benefits of these DIY projects included more features. Advancements in hardware identified another weak link: the absence of media management software to make it easy to display and control the video from a distance. By 2002, major software developments facilitated media management, hardware integration, and content presentation. MythTV provided an open-source solution using Linux; the concept was to combine a digital tuner with digital video recording, program guides, and computer capabilities under a 10-foot user interface. XBMC was another free and open-source software project; it started by repurposing the Xbox as a home theater PC but has since been ported to Windows and Macintosh operating systems in various forms, including Boxee and Plex.
Mainstream commercial software packages included Microsoft's Windows XP Media Center Edition and Apple's Front Row software, bundled with Mac OS X. By early 2006, commercial examples of this integration included the Mac mini, which had the Apple Remote, 5.1 digital audio, and an updated Front Row interface that would play shared media. Because of these features and the Mini's small form factor, consumers began using the Mini as a Mac-based home theater PC. As digital cable and satellite became the norm, HTPC software became more dependent on external decoder boxes and the subscription costs that came with them. For instance, MythTV is capable of capturing unencrypted HDTV streams, such as those broadcast over the air or carried on cable using a QAM tuner. However, most U.S. cable and satellite set-top boxes provide only encrypted HD streams for "non-basic" content, which can be decoded only by OpenCable-approved hardware or software. In September 2009, OEM restrictions were lifted for CableCARD devices, opening up the possibility for HTPC integration.
The advent of digital HDTV displays helped complete the value and ease of use of an HTPC system. Digital projectors, plasma, and LCD displays came pre-configured to accept computer video outputs including VGA, DVI, and component video. Furthermore, both the computers and the displays could include video scalers to better conform the image to the screen format and resolution. Computers also began to include HDMI ports that carry both audio and video signals.
Video CD
Video CD (VCD) is a home video format and the first format for distributing films on standard 120 mm optical discs. The format was widely adopted in Southeast Asia, where it superseded the VHS and Betamax systems until DVD became affordable in the region in the late 2000s. The format is a standard digital format for storing video on a compact disc. VCDs are playable in dedicated VCD players as well as in most DVD players, personal computers, and some video game consoles. However, they may not play in some Blu-ray Disc players and video game consoles such as the Sony PlayStation 3 and 4, due to a lack of backward compatibility with the older MPEG-1 format. The Video CD standard was created in 1993 by Sony, Philips, Matsushita, and JVC and is referred to as the White Book standard. Although they have been superseded by other media, VCDs continue to be retailed as a low-cost video format. LaserDisc was first available on the market, in Atlanta, Georgia, on December 15, 1978; this 30 cm disc could hold an hour of analog video on each side.
The LaserDisc provided picture quality nearly double that of VHS tape and analog audio quality far superior to VHS. Philips teamed up with Sony to develop a new type of disc, the compact disc, or CD. Introduced in 1982 in Japan, the CD is about 120 mm in diameter and is single-sided; the format was designed to store digitized sound and proved to be a success in the music industry. A few years later, Philips decided to give CDs the ability to carry video, utilizing the same technology as its LaserDisc counterpart; this led to the creation of CD Video in 1987. However, the disc's small size limited the amount of analog video it could store. Therefore, CD-V distribution was limited to music videos, and the format was discontinued by 1991. By the early 1990s, engineers were able to digitize and compress video signals, greatly improving storage efficiency; because this new format could hold 74 or 80 minutes of audio and video on a 650 or 700 MB disc, releasing movies on compact discs became a reality. The extra capacity was obtained by sacrificing error correction.
This format was named Video CD, or VCD. VCD enjoyed a brief period of success, with a few major feature films being released in the format; however, the introduction of the CD-R disc and associated recorders stopped the release of feature films in its tracks, because the VCD format had no means of preventing unauthorized copies from being made. VCDs are still being released in several countries in Asia, but now with copy protection. The development of more sophisticated, higher-capacity optical disc formats yielded the DVD, released only a few years later with a copy-protection mechanism. DVD players use lasers of shorter wavelength than those used for CDs, allowing the recorded pits to be smaller so that more information can be stored; the DVD was so successful that it pushed VHS out of the video market once suitable recorders became available. VCDs made considerable inroads into developing nations, where they are still in use today due to their cheaper manufacturing and retail costs.
Video CDs comply with the CD-i Bridge format and are authored using tracks in CD-ROM XA mode. The first track of a VCD is in CD-ROM XA Mode 2 Form 1 and stores metadata and menu information inside an ISO 9660 filesystem; this track may also contain other non-essential files and is what operating systems show when loading the disc. This track can be absent from a VCD, which would still work but would not be properly displayed on computers. The rest of the tracks are in CD-ROM XA Mode 2 Form 2 and contain video and audio multiplexed in an MPEG program stream container, although CD audio tracks are also allowed. Using Mode 2 Form 2 allows roughly 800 megabytes of VCD data to be stored on one 80-minute CD; this is achieved by sacrificing the error-correction redundancy present in Mode 1, on the assumption that small errors in the video and audio streams pass unnoticed. This, combined with the net bitrate of VCD video and audio, means that exactly 80 minutes of VCD content can be stored on an 80-minute CD, 74 minutes on a 74-minute CD, and so on.
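A back-of-the-envelope calculation, assuming the standard CD figures of 75 sectors per second with 2,048 user bytes per Mode 1 sector and 2,324 user bytes per Mode 2 Form 2 sector, shows where those capacity and running-time figures come from:

```python
# Capacity of an 80-minute CD under the two sector layouts mentioned above.
SECTORS_PER_SECOND = 75
sectors = 80 * 60 * SECTORS_PER_SECOND           # 360,000 sectors on an 80-minute disc
print(sectors * 2048 / 1e6)    # ~737 MB of payload in Mode 1 (with error correction)
print(sectors * 2324 / 1e6)    # ~837 MB of payload in Mode 2 Form 2 (ECC sacrificed),
                               # the basis of the roughly 800 MB figure quoted above
# Payload rate of Form 2 sectors versus the VCD programme stream:
print(SECTORS_PER_SECOND * 2324 * 8 / 1e3)       # ~1,394 kbit/s of payload per second
print(1150 + 224)                                # ~1,374 kbit/s of video + audio, leaving
                                                 # room for multiplexing overhead, so an
                                                 # 80-minute disc holds ~80 minutes of VCD
```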
This was done in part to ensure compatibility with existing CD drive technology, specifically the earliest "1x" speed CD drives.
Video specifications
Codec: MPEG-1
Resolution: NTSC: 352×240; PAL/SECAM: 352×288
Aspect ratio: NTSC: 4:3; PAL/SECAM: 4:3
Framerate: NTSC: 29.97 or 23.976 frames per second; PAL/SECAM: 25 frames per second
Bitrate: 1,150 kilobits per second
Rate control: constant bitrate
Although many DVD video players support playback of VCDs, VCD video is only compatible with the DVD-Video standard if encoded at 29.97 frames per second or 25 frames per second. The 352×240 and 352×288 resolutions were chosen because they are half the horizontal and vertical resolution of NTSC and PAL video respectively; this compares with an analog VHS tape's roughly 330 lines horizontally by 480 (NTSC) or 576 (PAL) lines vertically.
Audio specifications
Codec: MPEG-1 Audio Layer II
Sample frequency: 44,100 hertz
Output: dual channel, stereo, or Dolby Surround
Bitrate: 224 kilobits per second
Rate control: constant bitrate
Mobile phone
A mobile phone, cell phone, cellphone, or hand phone, sometimes shortened to mobile, cell, or just phone, is a portable telephone that can make and receive calls over a radio frequency link while the user is moving within a telephone service area. The radio frequency link establishes a connection to the switching systems of a mobile phone operator, which provides access to the public switched telephone network. Modern mobile telephone services use a cellular network architecture, and therefore mobile telephones are called cellular telephones or cell phones in North America. In addition to telephony, 2000s-era mobile phones support a variety of other services, such as text messaging, MMS, Internet access, short-range wireless communications, business applications, video games, and digital photography. Mobile phones offering only those capabilities are known as feature phones. The first handheld mobile phone was demonstrated by John F. Mitchell and Martin Cooper of Motorola in 1973, using a handset weighing about 2 kilograms.
In 1979, Nippon Telegraph and Telephone launched the world's first cellular network in Japan. In 1983, the DynaTAC 8000x was the first commercially available handheld mobile phone. From 1983 to 2014, worldwide mobile phone subscriptions grew to over seven billion, enough to provide one for every person on Earth. In the first quarter of 2016, the top smartphone developers worldwide were Samsung and Huawei, and smartphone sales represented 78 percent of total mobile phone sales. For feature phones, as of 2016, the largest makers were Samsung and Alcatel. A handheld mobile radio telephone service was envisioned in the early stages of radio engineering. In 1917, Finnish inventor Eric Tigerstedt filed a patent for a "pocket-size folding telephone with a thin carbon microphone". Early predecessors of cellular phones included analog radio communications from trains. The race to create portable telephone devices began after World War II, with developments taking place in many countries. The advances in mobile telephony have been traced in successive "generations", starting with the early zeroth-generation (0G) services, such as Bell System's Mobile Telephone Service and its successor, the Improved Mobile Telephone Service.
These 0G systems were not cellular, supported few simultaneous calls, and were expensive. The first handheld cellular mobile phone was demonstrated by John F. Mitchell and Martin Cooper of Motorola in 1973, using a handset weighing 2 kilograms. The first commercial automated analog cellular network was launched in Japan by Nippon Telegraph and Telephone in 1979. This was followed in 1981 by the simultaneous launch of the Nordic Mobile Telephone system in Denmark, Finland, Norway, and Sweden. Several other countries then followed in the early to mid-1980s. These first-generation (1G) systems could support far more simultaneous calls but still used analog cellular technology. In 1983, the DynaTAC 8000x was the first commercially available handheld mobile phone. In 1991, the second-generation (2G) digital cellular technology was launched in Finland by Radiolinja on the GSM standard; this sparked competition in the sector as the new operators challenged the incumbent 1G network operators. Ten years later, in 2001, the third generation (3G) was launched in Japan by NTT DoCoMo on the WCDMA standard.
This was followed by 3.5G, 3G+, or turbo 3G enhancements based on the high-speed packet access (HSPA) family, allowing UMTS networks to have higher data transfer speeds and capacity. By 2009, it had become clear that, at some point, 3G networks would be overwhelmed by the growth of bandwidth-intensive applications such as streaming media, so the industry began looking to data-optimized fourth-generation (4G) technologies, with the promise of speed improvements of up to ten-fold over existing 3G technologies. The first two commercially available technologies billed as 4G were the WiMAX standard, offered in North America by Sprint, and the LTE standard, first offered in Scandinavia by TeliaSonera. 5G is a technology and term used in research papers and projects to denote the next major phase in mobile telecommunication standards beyond the 4G/IMT-Advanced standards. The term 5G is not yet used in any specification or official document made public by telecommunication companies or standardization bodies such as 3GPP, WiMAX Forum, or ITU-R.
New standards beyond 4G are being developed by standardization bodies, but at this time they are seen as falling under the 4G umbrella rather than constituting a new mobile generation. Smartphones have a number of distinguishing features; the International Telecommunication Union measures those with an Internet connection, which it calls active mobile-broadband subscriptions. In the developed world, smartphones have now overtaken the usage of earlier mobile systems; in the developing world, however, they account for only around 50% of mobile telephony. Feature phone is a term used as a retronym to describe mobile phones that are limited in capabilities in contrast to a modern smartphone. Feature phones provide voice calling and text messaging functionality, in addition to basic multimedia and Internet capabilities and other services offered by the user's wireless service provider. A feature phone has additional functions over and above a basic mobile phone, which is capable only of voice calling and text messaging. Feature phones and basic mobile phones tend to use proprietary, custom-designed software and user interfaces.
By contrast, smartphones use a mobile operating system that shares common traits across devices. There are Orthodox Jewish religious re
Raw image format
A camera raw image file contains minimally processed data from the image sensor of a digital camera, a motion picture film scanner, or another image scanner. Raw files are so named because they are not yet processed and therefore are not ready to be printed or edited with a bitmap graphics editor. The image is processed by a raw converter in a wide-gamut internal color space where precise adjustments can be made before conversion to a "positive" file format such as TIFF or JPEG for storage, printing, or further manipulation; this conversion encodes the image in a device-dependent color space. There are dozens, if not hundreds, of raw formats in use by different models of digital equipment. Raw image files are sometimes described as "digital negatives", and the process of converting a raw image file into a viewable format is sometimes called "developing" a raw image, by analogy with the film development process used to convert photographic film into viewable prints. The selection of the final image rendering is part of the process of white balancing and color grading.
Like a photographic negative, a raw digital image may have a wider dynamic range or color gamut than the eventual final image format, and it preserves most of the information of the captured image. The purpose of raw image formats is to save, with minimum loss of information, the data obtained from the sensor and the conditions surrounding the capture of the image. Raw image formats are intended to capture the radiometric characteristics of the scene, that is, physical information about the light intensity and color of the scene, at the best of the camera sensor's performance. Most raw image file formats store information sensed according to the geometry of the sensor's individual photo-receptive elements rather than points in the expected final image: sensors with hexagonal element displacement, for example, record information for each of their hexagonally displaced cells, which decoding software will transform into a rectangular geometry during "digital developing". Raw files contain the information required to produce a viewable image from the camera's sensor data.
The structure of raw files follows a common pattern:
A short file header, which typically contains an indicator of the byte-ordering of the file, a file identifier, and an offset into the main file data.
Camera sensor metadata, required to interpret the sensor image data, including the size of the sensor, the attributes of the CFA, and its color profile.
Image metadata, required for inclusion in any CMS environment or database. This includes the exposure settings, camera/scanner/lens model, date of shoot/scan, authoring information, and other data; some raw files contain a standardized metadata section with data in Exif format.
An image thumbnail.
Most raw files also contain a full-size JPEG conversion of the image, used to preview the file on the camera's LCD panel.
In the case of motion picture film scans, either the timecode, keycode, or frame number in the file sequence, which represents the frame's place in a scanned reel and allows the file to be ordered in a frame sequence.
The sensor image data.
Many raw file formats, including IIQ, 3FR, DCR, K25, KDC, CRW, CR2, CR3, ERF, MEF, MOS, NEF, ORF, PEF, RW2, ARW, SRF, and SR2, are based on the TIFF file format.
These files may deviate from the TIFF standard in a number of ways, including the use of a non-standard file header, the inclusion of additional image tags, and the encryption of some of the tagged data. Panasonic's raw converter corrects geometric distortion and chromatic aberration on cameras such as the LX3, with the necessary correction information included in the raw file. Phase One's raw converter, Capture One, offers corrections for geometric distortion, chromatic aberration, purple fringing, and keystoning, emulating in software the shift capability of tilt-shift lenses and specially designed hardware, on most raw files from over 100 different cameras; the same holds for Canon's DPP application, at least for the more expensive cameras such as all EOS DSLRs and the G<n> series of compact cameras. DNG, the Adobe digital negative format, is an extension of the TIFF 6.0 format; it is compatible with TIFF/EP and uses various open formats and/or standards, including Exif metadata, XMP metadata, IPTC metadata, CIE XYZ coordinates, ICC profiles, and JPEG.
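As a small illustration of the TIFF-style header mentioned above, the sketch below reads the byte-order marker, the magic number, and the offset of the first image file directory; the file name is hypothetical, and the sketch assumes a classic TIFF-based raw such as a DNG:

```python
# Minimal sketch: read the short header common to TIFF-based raw formats.
import struct

def read_tiff_header(path):
    with open(path, "rb") as f:
        byte_order = f.read(2)                   # b"II" = little-endian, b"MM" = big-endian
        endian = "<" if byte_order == b"II" else ">"
        magic, ifd_offset = struct.unpack(endian + "HI", f.read(6))
        return byte_order, magic, ifd_offset     # magic is 42 for classic TIFF files

# Hypothetical usage:
# print(read_tiff_header("IMG_0001.DNG"))
```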
In digital photography, the raw file plays the role that photographic film plays in film photography. Raw files thus contain the full-resolution data as read out from each of the camera's image sensor pixels. The camera's sensor is almost invariably overlaid with a color filter array, typically a Bayer filter, consisting of a mosaic of 2×2 blocks of red, green, blue, and (second) green filters. One variation on the Bayer filter is the RGBE filter of the Sony Cyber-shot DSC-F828, which exchanged the green in the RG rows with "emerald". Other sensors, such as the Foveon X3 sensor, capture information directly in RGB form; these RGB raw data still need to be processed to make an image file, because the raw RGB values correspond to the responses of the sensors, not to a standard color space like sRGB. These data do not need to be demosaiced, however. Flatbed and film scanner sensors are straight narrow RGB or RGBI (where "I" stand
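To make the "digital developing" step concrete, the sketch below performs the simplest possible bilinear demosaicing of an RGGB Bayer mosaic; the RGGB layout, the use of NumPy/SciPy, and the normalized-convolution trick are assumptions of this illustration, and real raw converters add white balance, noise reduction, and far more sophisticated edge-aware interpolation:

```python
# Illustrative bilinear demosaic of an RGGB Bayer mosaic (2D array) into H x W x 3 RGB.
import numpy as np
from scipy.signal import convolve2d

def demosaic_rggb(mosaic):
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True                               # red photosites
    masks[0::2, 1::2, 1] = True; masks[1::2, 0::2, 1] = True  # green photosites
    masks[1::2, 1::2, 2] = True                               # blue photosites
    k = np.array([[0.25, 0.5, 0.25],
                  [0.5,  1.0, 0.5 ],
                  [0.25, 0.5, 0.25]])
    for c in range(3):
        vals = convolve2d(np.where(masks[..., c], mosaic, 0.0), k, mode="same")
        wgt = convolve2d(masks[..., c].astype(float), k, mode="same")
        # keep measured samples, interpolate missing ones from their neighbours
        rgb[..., c] = np.where(masks[..., c], mosaic, vals / np.maximum(wgt, 1e-9))
    return rgb
```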
Image scaling
In computer graphics and digital imaging, image scaling refers to the resizing of a digital image. In video technology, the magnification of digital material is known as upscaling or resolution enhancement. When scaling a vector graphic image, the graphic primitives that make up the image can be scaled using geometric transformations with no loss of image quality. When scaling a raster graphics image, a new image with a higher or lower number of pixels must be generated; in the case of decreasing the pixel count, this results in a visible quality loss. From the standpoint of digital signal processing, the scaling of raster graphics is a two-dimensional example of sample-rate conversion, the conversion of a discrete signal from one sampling rate to another. Image scaling can be interpreted as a form of image resampling or image reconstruction from the viewpoint of the Nyquist sampling theorem. According to the theorem, downsampling to a smaller image from a higher-resolution original can only be carried out after applying a suitable 2D anti-aliasing filter to prevent aliasing artifacts.
The image is thereby reduced to the information that can be carried by the smaller image. In the case of upsampling, a reconstruction filter takes the place of the anti-aliasing filter. A more sophisticated approach to upscaling treats the problem as an inverse problem, solving the question of generating a plausible image which, when scaled down, would look like the input image. A variety of techniques have been applied for this, including optimization techniques with regularization terms and the use of machine learning from examples. An image's size can be changed in several ways.
Nearest-neighbor interpolation
One of the simpler ways of increasing image size is nearest-neighbor interpolation, in which every output pixel takes the value of the nearest input pixel; for upscaling this means multiple output pixels of the same color. This can preserve sharp details in pixel art, but introduces jaggedness in previously smooth images. "Nearest" in nearest-neighbor does not have to be the mathematical nearest: one common implementation is to always round towards zero. Rounding this way produces fewer artifacts and is faster to calculate.
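A minimal sketch of that approach, assuming a NumPy array image and the round-towards-zero index mapping just described (the function and variable names are illustrative):

```python
# Nearest-neighbour scaling with round-towards-zero index mapping.
import numpy as np

def scale_nearest(img, new_h, new_w):
    h, w = img.shape[:2]
    rows = np.arange(new_h) * h // new_h     # integer division rounds towards zero
    cols = np.arange(new_w) * w // new_w
    return img[rows[:, None], cols]

# Upscaling a 2x2 pixel-art tile to 8x8 simply repeats each source pixel in 4x4 blocks.
```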
Bilinear and bicubic algorithms
Bilinear interpolation works by interpolating pixel color values, introducing a continuous transition into the output even where the original material has discrete transitions. Although this is desirable for continuous-tone images, this algorithm reduces contrast in a way that may be undesirable for line art. Bicubic interpolation yields better results, with only a small increase in computational complexity.
Sinc and Lanczos resampling
Sinc resampling in theory provides the best possible reconstruction for a bandlimited signal. In practice, the assumptions behind sinc resampling are not met by real-world digital images. Lanczos resampling, an approximation to the sinc method, yields better results, and bicubic interpolation can be regarded as a computationally efficient approximation to Lanczos resampling.
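A rough sketch of the bilinear interpolation described above, for a single-channel image (plain interpolation with no anti-aliasing, so it is intended for modest enlargements; the names are illustrative):

```python
# Bilinear scaling of a 2D (grayscale) NumPy array to a new height and width.
import numpy as np

def scale_bilinear(img, new_h, new_w):
    h, w = img.shape
    ys = np.linspace(0, h - 1, new_h)         # fractional source coordinates
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]                   # vertical interpolation weights
    wx = (xs - x0)[None, :]                   # horizontal interpolation weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bottom = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bottom * wy
```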
Box sampling
One weakness of bilinear, bicubic, and related algorithms is that they sample a specific number of pixels; when downscaling below a certain threshold, such as more than twice for all bi-sampling algorithms, the algorithms will sample non-adjacent pixels, which both loses data and causes rough results. The trivial solution to this issue is box sampling, which is to consider the target pixel as a box on the original image and sample all pixels inside the box, as the sketch below illustrates. This ensures that all input pixels contribute to the output; the major weakness of this algorithm is that it is hard to optimize.
Mipmap
Another solution to the downscaling problem of bi-sampling scaling is mipmaps. A mipmap is a prescaled set of downscaled copies; when downscaling, the nearest larger mipmap is used as the origin to ensure no scaling below the useful threshold of bilinear scaling. This algorithm is fast and easy to optimize, and it is standard in many frameworks such as OpenGL. The cost is using more image memory, one third more in the standard implementation.
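A minimal sketch of the box sampling described above, restricted to integer reduction factors and equal-weight averaging (both simplifications of this illustration), in which every input pixel contributes to exactly one output pixel:

```python
# Box-sampling (area-average) downscale by an integer factor.
import numpy as np

def downscale_box(img, factor):
    h, w = img.shape[:2]
    h2, w2 = h // factor, w // factor
    img = img[:h2 * factor, :w2 * factor]                # crop to a multiple of the factor
    blocks = img.reshape(h2, factor, w2, factor, -1)     # group pixels into factor x factor boxes
    return blocks.mean(axis=(1, 3)).squeeze()            # average each box into one output pixel
```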
Fourier-transform methods
Simple interpolation based on the Fourier transform pads the frequency domain with zero components. Besides the good conservation of detail, notable drawbacks are ringing and the circular bleeding of content from the left border to the right border.
Edge-directed interpolation
Edge-directed interpolation algorithms aim to preserve edges in the image after scaling, unlike other algorithms, which can introduce staircase artifacts. Examples of algorithms for this task include New Edge-Directed Interpolation, Edge-Guided Image Interpolation, Iterative Curvature-Based Interpolation, and Directional Cubic Convolution Interpolation (DCCI). A 2013 analysis found that DCCI had the best scores in SSIM on a series of test images.
Hqx
For magnifying computer graphics with low resolution and/or few colors, better results can be achieved by hqx or other pixel-art scaling algorithms; these maintain a high level of detail.
Vectorization
Vector extraction, or vectorization, offers another approach. Vectorization first creates a resolution-independent vector representation of the graphic to be scaled; the resolution-independent version is then rendered as a raster image at the desired resolution. This technique is used by Adobe Illustrator, Live Trace, and Inkscape. Scalable Vector Graphics are well suited to simple geometric images, while photographs do not fare well with vectorization due to their complexity.
Deep convolutional neural networks
This method uses machine learning for more detailed images such as photographs and complex artwork.
Programs tha
Digital cinema
Digital cinema refers to the use of digital technology to distribute or project motion pictures, as opposed to the historical use of reels of motion picture film such as 35 mm film. Whereas film reels have to be shipped to movie theaters, a digital movie can be distributed to cinemas in a number of ways: over the Internet or dedicated satellite links, or by sending hard drives or optical discs such as Blu-ray discs. Digital movies are projected using a digital video projector instead of a film projector. Digital cinema is distinct from high-definition television and does not use traditional television or other traditional high-definition video standards, aspect ratios, or frame rates. In digital cinema, resolutions are represented by the horizontal pixel count, usually 2K (2048×1080) or 4K (4096×2160). As digital-cinema technology improved in the early 2010s, most theaters across the world converted to digital video projection. Digital media playback of high-resolution 2K files has at least a 20-year history; early video data storage units fed custom frame-buffer systems with large memories.
In early digital video units, content was restricted to several minutes of material, and transfer of content between remote locations had limited capacity; it was not until the late 1990s that feature-length films could be sent over the "wire". On October 23, 1998, Digital Light Processing (DLP) projector technology was publicly demonstrated with the release of The Last Broadcast, the first feature-length movie shot and distributed digitally. In conjunction with Texas Instruments, the movie was publicly screened in five theaters across the United States. In the United States, on June 18, 1999, Texas Instruments' DLP Cinema projector technology was publicly demonstrated on two screens in Los Angeles and New York for the release of Lucasfilm's Star Wars Episode I: The Phantom Menace. In Europe, on February 2, 2000, Texas Instruments' DLP Cinema projector technology was publicly demonstrated, by Philippe Binant, on one screen in Paris for the release of Toy Story 2. On January 19, 2000, the Society of Motion Picture and Television Engineers, in the United States, initiated the first standards group dedicated to developing digital cinema.
By December 2000, there were 15 digital cinema screens in the United States and Canada, 11 in Western Europe, 4 in Asia, and 1 in South America. Digital Cinema Initiatives (DCI) was formed in March 2002 as a joint project of many motion picture studios to develop a system specification for digital cinema. In April 2004, in cooperation with the American Society of Cinematographers, DCI created standard evaluation material for testing of 2K and 4K playback and compression technologies. DCI selected JPEG 2000 as the basis for compression in the system the same year. In China, in June 2005, an e-cinema system called "dMs" was established and was used in over 15,000 screens spread across China's 30 provinces; dMs estimated that the system would expand to 40,000 screens in 2009. In 2005, the UK Film Council's Digital Screen Network was launched in the UK by Arts Alliance Media, creating a chain of 250 2K digital cinema systems; the roll-out was completed in 2006 and was the first mass roll-out in Europe. AccessIT/Christie Digital also started a roll-out in the United States and Canada.
By mid-2006, about 400 theaters were equipped with 2K digital projectors, with the number increasing every month. Several digital 3D films surfaced in 2006, and several prominent filmmakers committed to making their next productions in stereo 3D. VUE West End was one of the first 3D digital cinemas, along with Odeon Printworks Manchester and VUE Cheshire Oaks, with RealD equipment installed; all sites were supported at the time by Arts Alliance Media. In August 2006, the Malayalam digital movie Moonnamathoral, produced by Benzy Martin, was distributed via satellite to cinemas, thus becoming the first Indian digital cinema release; this was done by Emil and Eric Digital Films, a company based at Thrissur, using the end-to-end digital cinema system developed by Singapore-based DG2L Technologies. In January 2007, Guru became the first Indian movie mastered in the DCI-compliant JPEG 2000 Interop format and the first Indian film to be previewed digitally, internationally, at the Elgin Winter Garden in Toronto; the film was digitally mastered at Real Image Media Technologies in India.
In 2007, the UK became home to Europe's first DCI-compliant digital multiplex cinemas. By March 2007, with the release of Disney's Meet the Robinsons, about 600 screens had been equipped with digital projectors. In June 2007, Arts Alliance Media announced the first European commercial digital cinema Virtual Print Fee agreements. In March 2009, AMC Theatres announced that it had closed a $315 million deal with Sony to replace all of its movie projectors with 4K digital projectors starting in the second quarter of 2009. In January 2011, the total number of digital screens worldwide was 36,242, up from 16,339 at the end of 2009, a growth rate of 121.8 percent during the year. There were 10,083 digital screens in Europe as a whole, 16,522 in the United States and Canada, and 7,703 in Asia. Worldwide progress was slower in some territories, such as Latin America and Africa; as of 31 M