Tseng Labs ET4000
The Tseng Labs ET4000 was a popular line of SVGA graphics controller chips of the early 1990s, found in many 386/486 and compatible systems, with some models, notably the ET4000/W32 series, offering graphics acceleration. Offering above-average host interface throughput at a moderate price, the ET4000 chipset family was well regarded for its performance and was integrated into many companies' lineups, notably Hercules' Dynamite series, the Diamond Stealth 32 and several SpeedStar cards, as well as many generic boards. The ET4000AX was a major advancement over Tseng Labs' earlier ET3000 SVGA chipset, featuring a new 16-bit host interface controller with deep FIFO buffering and caching capabilities, and an enhanced, variable-width memory interface supporting up to 1 MB of memory over a 16-bit VRAM or 32-bit DRAM data bus. The FIFO buffers and cache functions improved host interface throughput, giving better redraw performance than the ET3000 and most of its contemporaries.
The interface controller supported IBM's MCA bus in addition to the 8- or 16-bit ISA bus. With some additional external logic, the ET4000AX could also be used on the emerging VESA Local Bus standard, albeit with a 16-bit host bus width. Neither the ET4000AX nor its succeeding family members offered an integrated RAMDAC, which hampered the line's cost/performance competitiveness. Hardware acceleration, via dedicated BitBLT hardware and a hardware cursor sprite, was introduced with the ET4000/W32. The W32 offered improved local bus support along with further increased host interface performance, but by the time PCI Windows accelerators became commonplace, high host throughput was no longer a distinguishing feature. As a mid-priced Windows accelerator, the W32 benchmarked favorably against competing mid-range S3 and ATI chips. Configured with 32-bit asynchronous EDO/FPM DRAM, the W32 could sustain a transfer speed of roughly 56 MB/s; the /W32i revision added an interleaved 32-bit memory bus to improve memory throughput.
The W32 family supports a maximum of 4 MB of video memory, though most boards featuring the chip offer a maximum expansion of 2 MB or less. The W32p model added support for the PCI bus, although earlier revisions of that chip had design problems that caused sub-optimal or unreliable operation in some PCI implementations; VLB implementations were unaffected.
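As a rough, illustrative check on the transfer figure quoted above (the memory cycle time here is an assumption chosen only to show the arithmetic, not a published specification): a 32-bit bus moves 4 bytes per access, so a sustained page-mode cycle of about 70 ns gives

    4\,\text{bytes} / 70\,\text{ns} \approx 57\,\text{MB/s},

which is consistent with the ~56 MB/s cited for the W32; interleaving the memory into two banks, as on the /W32i, raises throughput further by overlapping accesses.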
Graphics processing unit
A graphics processing unit (GPU) is a specialized electronic circuit designed to manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. GPUs are used in embedded systems, mobile phones, personal computers and game consoles. Modern GPUs are efficient at manipulating computer graphics and image processing; their parallel structure makes them more efficient than general-purpose CPUs for algorithms that process large blocks of data in parallel. In a personal computer, a GPU can be present on a video card or embedded on the motherboard, and in certain CPUs it is embedded on the CPU die. The term GPU has been in use since at least the 1980s. It was popularized by Nvidia in 1999, which marketed the GeForce 256 as "the world's first GPU", presenting it as a "single-chip processor with integrated transform, triangle setup/clipping, rendering engines". Rival ATI Technologies coined the term "visual processing unit", or VPU, with the release of the Radeon 9700 in 2002.
Arcade system boards have used specialized graphics chips since the 1970s. In early video game hardware, RAM for frame buffers was expensive, so video chips composited data together as the display was being scanned out on the monitor. Fujitsu's MB14241 video shifter was used to accelerate the drawing of sprite graphics for various 1970s arcade games from Taito and Midway, such as Gun Fight, Sea Wolf and Space Invaders. The Namco Galaxian arcade system of 1979 used specialized graphics hardware supporting RGB color, multi-colored sprites and tilemap backgrounds. The Galaxian hardware was widely used during the golden age of arcade video games by game companies such as Namco, Gremlin, Konami, Nichibutsu and Taito. In the home market, the Atari 2600 of 1977 used a video shifter called the Television Interface Adaptor. The Atari 8-bit computers had ANTIC, a video processor which interpreted instructions describing a "display list"—the way the scan lines map to specific bitmapped or character modes and where the memory is stored.
6502 machine code subroutines could be triggered on scan lines by setting a bit on a display list instruction. ANTIC also supported smooth vertical and horizontal scrolling independent of the CPU. The NEC µPD7220 was one of the first implementations of a graphics display controller as a single Large Scale Integration (LSI) integrated circuit chip, enabling the design of low-cost, high-performance video graphics cards such as those from Number Nine Visual Technology, and it became one of the best-known graphics display controllers of the 1980s. The Williams Electronics arcade games Robotron: 2084, Joust and Bubbles, all released in 1982, contain custom blitter chips for operating on 16-color bitmaps. In 1985, the Commodore Amiga featured a custom graphics chip with a blitter unit accelerating bitmap manipulation, line drawing and area fill functions. It also included a coprocessor with its own primitive instruction set, capable of manipulating graphics hardware registers in sync with the video beam or driving the blitter. In 1986, Texas Instruments released the TMS34010, the first microprocessor with on-chip graphics capabilities.
It could run general-purpose code, but it had a graphics-oriented instruction set. In 1990–1992, this chip became the basis of the Texas Instruments Graphics Architecture Windows accelerator cards. In 1987, the IBM 8514 graphics system was released as one of the first video cards for IBM PC compatibles to implement fixed-function 2D primitives in hardware. The same year, Sharp released the X68000, which used a custom graphics chipset that was powerful for a home computer at the time, with a 65,536-color palette and hardware support for sprites and multiple playfields; it also served as a development machine for Capcom's CP System arcade board. Fujitsu competed with the FM Towns computer, released in 1989 with support for a full 16,777,216-color palette. In 1988, the first dedicated polygonal 3D graphics boards were introduced in arcades with the Namco System 21 and Taito Air System. In 1991, S3 Graphics introduced the S3 86C911, which its designers named after the Porsche 911 as an indication of the performance increase it promised.
The 86C911 spawned a host of imitators: by 1995, all major PC graphics chip makers had added 2D acceleration support to their chips. By this time, fixed-function Windows accelerators had surpassed expensive general-purpose graphics coprocessors in Windows performance, and those coprocessors faded from the PC market. Throughout the 1990s, 2D GUI acceleration continued to evolve; as manufacturing capabilities improved, so did the level of integration of graphics chips. Additional application programming interfaces arrived for a variety of tasks, such as Microsoft's WinG graphics library for Windows 3.x and its DirectDraw interface for hardware acceleration of 2D games under Windows 95 and later. In the early and mid-1990s, real-time 3D graphics became common in arcade and console games, which led to increasing public demand for hardware-accelerated 3D graphics. Early examples of mass-market 3D graphics hardware can be found in arcade system boards such as the Sega Model 1, Namco System 22 and Sega Model 2, and in fifth-generation video game consoles such as the Saturn, PlayStation and Nintendo 64.
Arcade systems such as the Sega Model 2 and the Namco Magic Edge Hornet Simulator of 1993 were capable of hardware transform and lighting (T&L) years before the feature appeared in consumer graphics hardware.
NEC
NEC Corporation is a Japanese multinational provider of information technology services and products, headquartered in Minato, Tokyo, Japan. It provides IT and network solutions to business enterprises, communications services providers and government agencies, and it has been the biggest PC vendor in Japan since the 1980s. The company was known as the Nippon Electric Company, Limited, before rebranding in 1983 as NEC. NEC was the world's fourth-largest PC manufacturer by 1990. Its NEC Semiconductors business unit was the worldwide semiconductor sales leader between 1985 and 1990, the second largest in 1995, one of the top three in 2000, and one of the top 10 in 2006; it remained one of the top 20 semiconductor sales leaders before merging with Renesas Electronics. NEC is a member of the Sumitomo Group. NEC was #463 on the 2017 Fortune 500 list. Kunihiko Iwadare and Takeshiro Maeda established Nippon Electric Limited Partnership on August 31, 1898, using facilities that they had bought from Miyoshi Electrical Manufacturing Company.
Iwadare acted as the representative partner. Western Electric, which had an interest in the Japanese phone market, was represented by Walter Tenney Carleton. Carleton was responsible for the renovation of the Miyoshi facilities, and it was agreed that the partnership would be reorganized as a joint-stock company when the treaty would allow it. On July 17, 1899, the revised treaty between Japan and the United States went into effect, and Nippon Electric Company, Limited was organized the same day with Western Electric Company, becoming the first Japanese joint venture with foreign capital. Iwadare was named managing director. Ernest Clement and Carleton were named as directors. Maeda and Mototeru Fujii were assigned to be auditors. Iwadare and Carleton handled the overall management. The company started with the production and maintenance of telephones and switches. NEC modernized its production facilities with the construction of the Mita Plant in 1901 at Mita Shikokumachi; it was completed in December 1902. The Japanese Ministry of Communications adopted a new technology in 1903: the common battery switchboard, supplied by NEC.
The common battery switchboards powered the subscriber phone, eliminating the need for a permanent magnet generator in each subscriber's phone. The switchboards were initially imported, but were manufactured locally by 1909. NEC started exporting telephone sets to China in 1904. In 1905, Iwadare visited Western Electric in the U.S. to see its production control. On his return to Japan, he discontinued the "oyakata" system of sub-contracting and replaced it with a new system in which managers and employees were all direct employees of the company. Inefficiency was removed from the production process, and the company paid higher salaries with incentives for efficiency. New accounting and cost controls were put in place, and time clocks were installed. Between 1899 and 1907 the number of telephone subscribers in Japan rose from 35,000 to 95,000. NEC entered the China market in 1908 with the implementation of the telegraph treaty between Japan and China, and it entered the Korean market, setting up an office in Seoul in January 1908.
During the period from 1907 to 1912, sales rose from 1.6 million yen to 2 million yen. The expansion of the Japanese phone service had been a key part of NEC's success during this period, but this expansion was about to take a pause. The Ministry of Communications delayed a third expansion plan of the phone service in March 1913, despite having 120,000 potential telephone subscribers waiting for phone installations. NEC's sales fell sixty percent between 1912 and 1915. During the interim, Iwadare started importing appliances, including electric fans, kitchen appliances, washing machines and vacuum cleaners. Electric fans had never been seen in Japan before; the imports were intended to prop up company sales. In 1916, the government resumed the delayed telephone-expansion plan, adding 75,000 subscribers and 326,000 kilometers of new toll lines. Thanks to this third expansion plan, NEC expanded at a time when much of the rest of Japanese industry contracted. In 1919, NEC started its first association with Sumitomo, engaging Sumitomo Densen Seizosho to manufacture cables.
As part of the venture, NEC provided cable manufacturing equipment to Sumitomo Densen, and rights to Western Electric's duplex cable patents were transferred to Sumitomo Densen. The Great Kantō earthquake struck Japan in 1923. Some 140,000 people were killed and 3.4 million were left homeless. Four of NEC's factories were destroyed, killing 105 of NEC's workers. Thirteen of Tokyo's telephone offices were destroyed by fire, and telephone and telegraph service was interrupted by damage to telephone cables. In response, the Ministry of Communications accelerated major programs to install automatic telephone switching systems and enter radio broadcasting. The first automatic switching systems were the Strowger-type model made by Automatic Telephone Manufacturing Co. (ATM) in the United Kingdom, and NEC participated in the installation of the automatic switching systems, becoming the general sales agent for ATM. NEC developed its own Strowger-type automatic switching system, a first in Japan. One of the plants leveled during the Kantō earthquake, the Mita Plant, was chosen to support expanding production.
A new three-story steel-reinforced concrete building was built, starting in 1925, modeled after the Western Electric Hawthorne Works. NEC started its radio communications business in 1924. Japan's first radio broadcaster, Radio Tokyo, was founded in 1924 and started broadcasting in 1925. NEC imported the broadcasting equipment from Western Electric. The expansion of radio broadcasting into Osaka and Nagoya marked the emergence of
Central processing unit
A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic and input/output operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit, distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. The form and implementation of CPUs have changed over the course of their history, but their fundamental operation remains much the same. The principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching and execution of instructions by directing the coordinated operations of the ALU, registers and other components.
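As a concrete illustration of that fetch-and-execute cycle, the following is a minimal sketch in C of a toy accumulator machine; the opcodes and memory layout are invented for the example and do not correspond to any real instruction set.

    #include <stdint.h>
    #include <stdio.h>

    enum { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };      /* hypothetical opcodes */

    typedef struct { uint8_t op; uint8_t addr; } Instr;

    int main(void) {
        uint8_t mem[16] = { 5, 7, 0 };                /* data memory: compute 5 + 7 into mem[2] */
        Instr prog[] = { {OP_LOAD, 0}, {OP_ADD, 1}, {OP_STORE, 2}, {OP_HALT, 0} };
        uint8_t acc = 0;                              /* accumulator register */
        unsigned pc = 0;                              /* program counter */

        for (;;) {
            Instr in = prog[pc++];                    /* control unit: fetch the next instruction */
            switch (in.op) {                          /* control unit: decode */
            case OP_LOAD:  acc = mem[in.addr];        break;   /* register <- memory */
            case OP_ADD:   acc = acc + mem[in.addr];  break;   /* ALU operation on register and operand */
            case OP_STORE: mem[in.addr] = acc;        break;   /* memory <- register */
            case OP_HALT:  printf("result = %u\n", mem[2]); return 0;
            }
        }
    }

A real control unit performs the same fetch-decode-execute sequence in hardware, with the ALU and register file as separate functional blocks.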
Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces and other components of a computer. Some computers employ a multi-core processor, a single chip containing two or more CPUs called "cores". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". Since the term "CPU" is defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer had been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was omitted so that ENIAC could be finished sooner.
On June 30, 1945, before ENIAC was completed, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types, and the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, namely the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit.
The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design, using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both.
Most modern CPUs are von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications. Early computers used relays and vacuum tubes as switching elements, and the overall speed of a system depends on the speed of its switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages they afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were common at this time, limited largely by the speed of the switching devices from which they were built.
Micro Channel architecture
Micro Channel architecture, or the Micro Channel bus, was a proprietary 16- or 32-bit parallel computer bus introduced by IBM in 1987, used on PS/2 and other computers until the mid-1990s. Its name is commonly abbreviated as "MCA", although not by IBM. In IBM products, it superseded the ISA bus and was itself subsequently superseded by the PCI bus architecture. The development of Micro Channel was driven by both technical and business pressures. The IBM AT bus, which became known as the Industry Standard Architecture (ISA) bus, had a number of technical design limitations, including a slow bus speed, a limited number of interrupts fixed in hardware, a limited number of I/O device addresses fixed in hardware, hardwired and complex configuration with no conflict resolution, and deep links to the architecture of the 80x86 chip family. In addition, it suffered from other problems: poor grounding and power distribution, and undocumented bus interface standards that varied between manufacturers. These limitations became more serious as the range of tasks and peripherals, and the number of manufacturers of IBM PC-compatibles, grew.
IBM was also investigating the use of RISC processors in desktop machines and could, in theory, save considerable money if a single well-documented bus could be used across its entire computer lineup. It was thought that by creating a new standard, IBM would regain control of standards via the required licensing. As patents can take three years or more to be granted, however, only those relating to ISA could be licensed when Micro Channel was announced; patents on important Micro Channel features, such as Plug and Play automatic configuration, were not granted to IBM until after PCI had replaced Micro Channel in the marketplace. The Micro Channel architecture was designed by engineer Chet Heath. Many of the Micro Channel cards that were developed used the CHIPS P82C612 MCA interface controller. The Micro Channel was primarily a 32-bit bus, but the system also supported a 16-bit mode designed to lower the cost of connectors and logic in Intel-based machines like the IBM PS/2. The situation was never that simple, however, as both the 32-bit and 16-bit versions had a number of additional optional connectors for memory cards, which resulted in a huge number of physically incompatible cards for bus-attached memory.
In time, memory moved to the CPU's local bus. On the upside, signal quality was improved, as Micro Channel added ground and power pins and arranged the pins to minimize interference. Another connector extension was included for graphics cards; this extension was used for analog output from the video card, routed through the system board to the system's own monitor output. The advantage of this was that Micro Channel system boards could have a basic VGA or MCGA graphics system on board, and higher-level graphics cards could share the same monitor port. The add-on cards were thus able to be free of 'legacy' VGA modes, making use of the on-board graphics system when needed, allowing a single system board connector for graphics that could be upgraded. Micro Channel cards featured a unique 16-bit software-readable ID, which formed the basis of an early plug and play system: the BIOS and/or OS could read the IDs, compare them against a list of known cards, and perform automatic system configuration to suit. This led to boot failures when an older BIOS did not recognize a newer card, causing an error at startup.
In turn, this required IBM to release updated Reference Disks on a regular basis. A complete list of known IDs is available. Accompanying these reference disks were ADF files, which were read by the setup program and provided configuration information for the card. An ADF was a simple text file containing information about the card's memory addressing and interrupts. Although MCA cards cost nearly double the price of comparable non-MCA cards, the marketing stressed that it was simple for any user to upgrade or add more cards to their PC, thus saving the considerable expense of a technician. In this critical area, however, Micro Channel architecture's biggest advantage was also its greatest disadvantage, and one of the major reasons for its demise. To add a new card, the user plugged in the MCA card and inserted a customized floppy disk to blend the new card into the original hardware automatically, rather than bringing in an expensive trained technician who could manually make all the needed changes. All choices for interrupts and other settings were made automatically: the PC read the old configuration from the floppy disk, made the necessary changes in software, and wrote the new configuration back to the floppy disk.
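As a schematic sketch of the matching step described above (the structures, IDs and the slot-reading function below are invented for illustration; the real mechanism uses the Micro Channel setup registers and the ADF text format, which this does not reproduce), setup software conceptually reads each slot's 16-bit ID and looks it up in a table of known adapter descriptions:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical description of a known adapter, as a parsed ADF might provide it. */
    typedef struct { uint16_t id; const char *name; uint8_t irq; uint16_t io_base; } AdapterInfo;

    static const AdapterInfo known_cards[] = {
        { 0x6FEA, "Example Ethernet Adapter", 9,  0x300 },   /* fictitious entries */
        { 0x8FDA, "Example SCSI Adapter",     11, 0x340 },
    };

    /* Placeholder for reading a slot's 16-bit software-readable ID. */
    static uint16_t read_slot_id(int slot) { return slot == 1 ? 0x6FEA : 0xFFFF; }

    int main(void) {
        for (int slot = 1; slot <= 8; slot++) {
            uint16_t id = read_slot_id(slot);
            if (id == 0xFFFF) continue;                      /* treat as an empty slot */
            int found = 0;
            for (size_t i = 0; i < sizeof known_cards / sizeof known_cards[0]; i++) {
                if (known_cards[i].id == id) {               /* known card: apply its settings */
                    printf("slot %d: %s -> IRQ %u, I/O 0x%03X\n", slot,
                           known_cards[i].name, known_cards[i].irq, known_cards[i].io_base);
                    found = 1;
                    break;
                }
            }
            if (!found)                                      /* unknown ID: the boot-failure case above */
                printf("slot %d: unrecognized card ID 0x%04X\n", slot, id);
        }
        return 0;
    }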
In practice, this meant that the user had to keep that same floppy disk matched to that PC. For a small company with a few PCs this was practical, but for large organizations with hundreds or thousands of PCs, permanently matching each PC with its own floppy disk was logistically unlikely or impossible. Without the original, updated floppy disk, no changes could be made to the PC's cards. After this experience repeated itself thousands of times, business leaders realized that their dream scenario for upgrade simplicity did not work in the corporate world, and they sought a better process. The basic data rate of the Micro Channel was increased from ISA's 8 MHz to 10 MHz. This may have been a modest increase in terms of clock rate, but the greater bus width, coupled with a dedicated bus controller that utilized burst-mode transfers, gave Micro Channel considerably higher real-world throughput than ISA.
Blitter
A blitter is a circuit, sometimes implemented as a coprocessor or a logic block on a microprocessor, dedicated to the rapid movement and modification of data within a computer's memory. A blitter can copy large quantities of data from one memory area to another quickly and in parallel with the CPU, freeing up the CPU's more complex capabilities for other operations. A typical use for a blitter is the movement of a bitmap, such as windows and fonts in a graphical user interface or images and backgrounds in a 2D video game. The name comes from the bit blit operation of the 1973 Xerox Alto, which stands for bit-block transfer. A blit operation is more than a memory copy, because it can involve data that is not byte-aligned, handling of transparent pixels, and various ways of combining the source and destination data. Blitters have largely been superseded by programmable graphics processing units. In early computers with raster-graphics output, the frame buffer was held in main memory and updated via software running on the CPU.
For many simple graphics routines, like compositing a smaller image into a larger one or drawing a filled rectangle, large amounts of memory had to be manipulated, and many CPU cycles were spent fetching and decoding instructions for short repetitive loops of load/store instructions. For CPUs without caches, the bus requirement for instructions was as significant as that for data. Further, since a single byte held between 2 and 8 pixels, the data was usually not aligned for the CPU, so extra shifting and masking operations were required. 1973: The Xerox Alto, where the term bit blit originated, has a bit block transfer instruction implemented in microcode, making it much faster than the same operation written in software on the CPU. The microcode was implemented by Dan Ingalls. 1982: The Robotron: 2084 arcade game from Williams Electronics includes two blitter chips which allow the game to have up to 80 moving objects. Performance was measured at 910 KB/second. The blitter operates on 4-bit pixels, where color 0 is transparent, allowing for non-rectangular shapes.
Williams used the same blitters in other games of the period, including Sinistar and Joust. 1984: The MS-DOS compatible Mindset contains a custom VLSI chip to move rectangular sections of a bitmap. The hardware handles transparency and eight modes for combining the source and destination data. The Mindset was claimed to have graphics up to 50x faster than PCs of the time, but the system was not successful. 1985: Commodore's Amiga launches with a blitter co-processor. The first US patent filing to use the term blitter was "Personal computer apparatus for block transfer of bit-mapped image data," assigned to Commodore-Amiga, Inc.; the Amiga's blitter performs a 4-operand boolean operation. 1986: The TMS34010 is a general-purpose 32-bit processor with additional blitter-like instructions for manipulating bitmap data. It is optimized for cases that take extra processing on the CPU, such as handling transparent pixels, working with non-byte-aligned data, and converting between bit depths. The TMS34010 served as both CPU and GPU for a number of arcade games of the late 1980s and early 1990s, including Hard Drivin', Smash TV, Mortal Kombat and NBA Jam, and it also found use in PC graphics accelerator boards in the 1990s.
1986: The Intel 82786 is a programmable graphics processor which includes a BIT_BLT instruction to move rectangular sections of bitmaps. 1987: The IBM 8514/A display adapter, introduced with the IBM Personal System/2 computers in April 1987, includes bit block transfer hardware. 1987: Atari Corporation introduces a blitter co-processor, stylized as the BLiTTER chip, on the Atari Mega ST 2 computer; it was later supported on most machines in the line. 1989: The short-lived Atari Transputer Workstation contains blitter hardware as part of its "Blossom" video system. 1993: The Jaguar, the last console produced by Atari Corporation, has blitter hardware as part of the custom "Tom" chip. To use a blitter, a computer program puts information into certain registers describing what memory transfer needs to be completed and the logical operations to perform on the data, then triggers the blitter to begin operating. The CPU is free for other processing while the blitter is working, though the blit running in parallel consumes memory bandwidth.
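Sketched in C, driving such a blitter might look roughly like the following; the register layout, base address and bit meanings are hypothetical and do not correspond to any particular chip.

    #include <stdint.h>

    /* Hypothetical memory-mapped blitter registers. */
    typedef struct {
        volatile uint32_t src;       /* source address */
        volatile uint32_t dst;       /* destination address */
        volatile uint16_t width;     /* pixels per row */
        volatile uint16_t height;    /* number of rows */
        volatile uint8_t  op;        /* logical operation to apply */
        volatile uint8_t  ctrl;      /* bit 0: start, bit 1: busy */
    } BlitterRegs;

    #define BLITTER ((BlitterRegs *)0xD0000000u)     /* made-up register base address */

    void blit_copy(uint32_t src, uint32_t dst, uint16_t w, uint16_t h) {
        BLITTER->src    = src;
        BLITTER->dst    = dst;
        BLITTER->width  = w;
        BLITTER->height = h;
        BLITTER->op     = 0;                         /* 0 = plain copy in this fictional chip */
        BLITTER->ctrl   = 1;                         /* start the transfer */
        /* The CPU could now do other work; here it simply waits for completion. */
        while (BLITTER->ctrl & 2)
            ;                                        /* poll the busy bit */
    }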
To copy data with transparent portions—such as sprites—a color can be designated to be ignored during the blit. On other systems, a second 1-bit-per-pixel image may be used as a "mask" to indicate which pixels to transfer and which to leave untouched; the mask operates like a stencil. The logical operation is destination := (destination AND NOT mask) OR sprite. Hardware sprites, by contrast, are small bitmaps that can be positioned independently and composited together with the background on-the-fly by the video chip, so no actual modification of the frame buffer occurs. Sprite systems are more efficient for moving graphics, typically requiring about 1/3 the memory cycles, because only image data—not CPU instructions—needs to be fetched, with the subsequent compositing happening on-chip. The downsides of sprites are a limit on the number of moving graphics per scanline, which can range from three to eight (or higher for 16-bit arcade hardware and consoles), and the inability to update a permanent bitmap.
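In software terms, the mask operation above can be sketched as follows (a minimal illustration; a hardware blitter performs the same combination word by word as it copies). Sprite pixels outside the mask are assumed to be zero, as the formula implies.

    #include <stddef.h>
    #include <stdint.h>

    /* destination := (destination AND NOT mask) OR sprite, one 16-bit word at a time.
       Where a mask bit is 1 the sprite pixel is written; where it is 0 the
       existing background pixel is preserved. */
    void masked_blit(uint16_t *dest, const uint16_t *sprite,
                     const uint16_t *mask, size_t words) {
        for (size_t i = 0; i < words; i++)
            dest[i] = (dest[i] & ~mask[i]) | sprite[i];
    }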
Interlaced video
Interlaced video is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. The interlaced signal contains two fields of a video frame captured at two different times; this enhances motion perception for the viewer and reduces flicker by taking advantage of the phi phenomenon. It effectively doubles the time resolution compared to non-interlaced footage. Interlaced signals require a display natively capable of showing the individual fields in sequential order; CRT displays and ALiS plasma displays are made for displaying interlaced signals. Interlaced scan refers to one of two common methods for "painting" a video image on an electronic display screen (the other being progressive scan) by scanning or displaying each line or row of pixels. This technique uses two fields to create a frame: one field contains all the odd-numbered lines in the image, and the other contains all the even-numbered lines. A Phase Alternating Line (PAL)-based television set display, for example, scans 50 fields every second. The two sets of 25 fields work together to create a full frame every 1/25 of a second, but with interlacing a new half-frame is created every 1/50 of a second.
To display interlaced video on progressive scan displays, playback applies deinterlacing to the video signal. The European Broadcasting Union has argued against the use of interlaced video in broadcasting; it recommends 720p 50 as the current production format and is working with the industry to introduce 1080p 50 as a future-proof production standard, which offers higher vertical resolution, better quality at lower bitrates, and easier conversion to other formats such as 720p 50 and 1080i 50. The main argument is that no matter how complex the deinterlacing algorithm may be, the artifacts in the interlaced signal cannot be completely eliminated, because some information is lost between frames. Despite the arguments against it, television standards organizations continue to support interlacing, and it is still included in digital video transmission formats such as DV, DVB and ATSC. New video compression standards like High Efficiency Video Coding are optimized for progressive scan video, but sometimes do support interlaced video.
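In its simplest form, called weaving, the deinterlacing mentioned above just interleaves the two fields back into one frame. The sketch below assumes one byte per pixel and each field stored as consecutive rows; real deinterlacers are more elaborate, because weaving produces combing artifacts whenever there is motion between the two field capture times.

    #include <stddef.h>
    #include <stdint.h>

    /* Weave two fields into one progressive frame.
       odd_field[]  holds lines 1, 3, 5, ... of the picture,
       even_field[] holds lines 2, 4, 6, ...,
       each stored as consecutive rows of `width` bytes. */
    void weave(uint8_t *frame, const uint8_t *odd_field, const uint8_t *even_field,
               size_t width, size_t frame_lines) {
        for (size_t line = 0; line < frame_lines; line++) {
            const uint8_t *src = (line % 2 == 0)
                ? odd_field  + (line / 2) * width    /* frame row 0 is picture line 1 (odd) */
                : even_field + (line / 2) * width;
            for (size_t x = 0; x < width; x++)
                frame[line * width + x] = src[x];
        }
    }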
Progressive scan captures and displays an image in a path similar to text on a page—line by line, top to bottom. The interlaced scan pattern in a CRT display also completes such a scan, but in two passes. The first pass displays the first and all odd-numbered lines, from the top left corner to the bottom right corner. The second pass displays the second and all even-numbered lines, filling in the gaps in the first scan; this scan of alternate lines is called interlacing. A field is an image that contains only half of the lines needed to make a complete picture. Persistence of vision makes the eye perceive the two fields as a continuous image. In the days of CRT displays, the afterglow of the display's phosphor aided this effect. Interlacing provides full vertical detail with the same bandwidth that would be required for a full progressive scan, but with twice the perceived frame rate and refresh rate. To prevent flicker, all analog broadcast television systems used interlacing. Format identifiers like 576i 50 and 720p 50 specify the frame rate for progressive scan formats, but for interlaced formats they specify the field rate.
This can lead to confusion, because industry-standard SMPTE timecode formats always deal with frame rate, not field rate. To avoid confusion, SMPTE and EBU always use frame rate to specify interlaced formats, e.g. 480i 60 is 480i/30, 576i 50 is 576i/25, and 1080i 50 is 1080i/25. This convention assumes that one complete frame in an interlaced signal consists of two fields in sequence. One of the most important factors in analog television is signal bandwidth, measured in megahertz; the greater the bandwidth, the more expensive and complex the entire production and broadcasting chain. This includes cameras, storage systems, broadcast systems—and reception systems: terrestrial, satellite and end-user displays. For a fixed bandwidth, interlace provides a video signal with twice the display refresh rate for a given line count. The higher refresh rate improves the appearance of an object in motion, because it updates its position on the display more often, and when an object is stationary, human vision combines information from multiple similar half-frames to produce the same perceived resolution as that provided by a progressive full frame.
This technique is only useful, though, if the source material is available at the higher refresh rate: cinema movies are recorded at 24 fps and therefore do not benefit from interlacing, a solution which reduces the maximum video bandwidth to 5 MHz without reducing the effective picture scan rate of 60 Hz. Given a fixed bandwidth and high refresh rate, interlaced video can also provide a higher spatial resolution than progressive scan. For instance, 1920×1080 pixel interlaced HDTV with a 60 Hz field rate has a similar bandwidth to 1280×720 pixel progressive scan HDTV with a 60 Hz frame rate, but achieves approximately twice the spatial resolution for low-motion scenes. However, the bandwidth benefits only apply to an uncompressed digital video signal. With digital video compression, as used in all current digital TV standards, interlacing introduces additional inefficiencies.
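A rough uncompressed pixel-rate comparison (ignoring blanking intervals) illustrates the point:

    1920 \times 1080 \times 30 \approx 62.2\,\text{Mpixel/s} \quad (1080\text{i at a 60 Hz field rate, i.e. 30 full frames/s})
    1280 \times 720 \times 60 \approx 55.3\,\text{Mpixel/s} \quad (720\text{p at a 60 Hz frame rate})

so the two formats carry similar raw data rates, while the interlaced format delivers roughly twice the pixels per full frame for low-motion scenes.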