Super Video Graphics Array or Ultra Video Graphics Array, almost always abbreviated to Super VGA, Ultra VGA, or just SVGA or UVGA, is a broad term that covers a wide range of computer display standards. It was an extension of the VGA standard first released by IBM in 1987. Unlike VGA, a purely IBM-defined standard, Super VGA was never formally defined; the closest thing to an "official" definition was in the VBE extensions defined by the Video Electronics Standards Association (VESA), an open consortium set up to promote interoperability and define standards. That document contained a footnote stating that "The term 'Super VGA' is used in this document for a graphics display controller implementing any superset of the standard IBM VGA display adapter." When used as a resolution specification, in contrast to VGA or XGA for example, the term SVGA refers to a resolution of 800×600 pixels. Though Super VGA cards appeared in the same year as VGA, it was not until 1989 that a standard for programming Super VGA modes was defined by VESA.
In its first version, the VBE defined support for a maximum resolution of 800 × 600 with 4-bit pixels, so each pixel could be any of 16 different colors. It was later extended to 1024 × 768 with 8-bit pixels, and well beyond that in the following years. Although the number of colors is defined in the VBE specification, this is irrelevant when referring to Super VGA monitors, as the interface between the video card and the VGA or Super VGA monitor uses simple analog voltages to indicate the desired color. In consequence, so far as the monitor is concerned, there is no theoretical limit to the number of different colors that can be displayed; this applies to any Super VGA monitor. While the output of a VGA or Super VGA video card is analog, the internal calculations the card performs in order to arrive at these output voltages are digital. To increase the number of colors a Super VGA display system can reproduce, no change at all is needed for the monitor, but the video card needs to handle much larger numbers and may well need to be redesigned from scratch.
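To make the scale of that change concrete: the framebuffer memory a video card needs grows with both resolution and color depth (bytes = width × height × bits per pixel ÷ 8). A minimal Python sketch using the modes named above; the helper function is purely illustrative:

    # Framebuffer memory required for a display mode: width * height * bpp / 8.
    def framebuffer_bytes(width, height, bpp):
        return width * height * bpp // 8

    modes = [(640, 480, 4),    # VGA's high-resolution mode, 16 colors
             (800, 600, 4),    # first VBE maximum: 16 colors
             (1024, 768, 8)]   # later VBE extension: 256 colors
    for w, h, bpp in modes:
        print(f"{w}x{h} at {bpp} bpp: {framebuffer_bytes(w, h, bpp) / 1024:.0f} KiB")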
As a result, the leading graphics chip vendors were producing parts for high-color video cards within just a few months of Super VGA's introduction. On paper, the original Super VGA was to be succeeded by Super XGA, but in practice the industry soon abandoned the attempt to provide a unique name for each higher display standard, and all display systems made between the late 1990s and the early 2000s are classed as Super VGA. Monitor manufacturers sometimes advertise their products as XGA or Super XGA, but in practice this means little, since all Super VGA monitors manufactured since the 1990s have been capable of at least XGA and considerably higher performance. SVGA uses the same DE-15 connector as the original VGA standard; see Digital Visual Interface for a common non-analog connection used for SVGA and other resolutions. Some of the early Super VGA manufacturers were Ahead Technologies (not related to Nero AG), Amdek, AST Research, Inc., ATI Technologies, Chips and Technologies, Cirrus Logic, Compaq, Everex, Genoa Systems, Orchid Technology, Western Digital's Paradise Inc.,
Sigma Designs, STB Systems, Video Seven (maker of the V-RAM VGA), Willow, and Trident Microsystems. SVGA supported resolutions such as 1280×800 and higher.
NEC Corporation is a Japanese multinational provider of information technology services and products, headquartered in Minato, Japan. It provides IT and network solutions to business enterprises, communications services providers, and government agencies, and has been the biggest PC vendor in Japan since the 1980s. The company was known as the Nippon Electric Company before rebranding in 1983 as NEC. NEC was the world's fourth largest PC manufacturer by 1990. Its NEC Semiconductors business unit was the worldwide semiconductor sales leader between 1985 and 1990, the second largest in 1995, one of the top three in 2000, and one of the top 10 in 2006; it remained one of the top 20 semiconductor sales leaders before merging with Renesas Electronics. NEC is a member of the Sumitomo Group, and was #463 on the 2017 Fortune 500 list. Kunihiko Iwadare and Takeshiro Maeda established Nippon Electric Limited Partnership on August 31, 1898, using facilities that they had bought from Miyoshi Electrical Manufacturing Company.
Iwadare acted as the representative partner. Western Electric, which had an interest in the Japanese phone market, was represented by Walter Tenney Carleton. Carleton was also responsible for the renovation of the Miyoshi facilities, and it was agreed that the partnership would be reorganized as a joint-stock company when the treaty would allow it. On July 17, 1899, the revised treaty between Japan and the United States went into effect, and Nippon Electric Company, Limited was organized the same day with Western Electric Company, becoming the first Japanese joint venture with foreign capital. Iwadare was named managing director. Ernest Clement and Carleton were named as directors, and Maeda and Mototeru Fujii were assigned to be auditors. Iwadare and Carleton handled the overall management. The company started with the production and maintenance of telephones and switches. NEC modernized its production facilities with the construction of the Mita Plant in 1901 at Mita Shikokumachi; it was completed in December 1902. The Japanese Ministry of Communications adopted a new technology in 1903: the common battery switchboard, supplied by NEC.
The common battery switchboards powered the subscriber phone, eliminating the need for a permanent magnet generator in each subscriber's phone. The switchboards were initially imported, but were being manufactured locally by 1909. NEC started exporting telephone sets to China in 1904. In 1905, Iwadare visited Western Electric in the U.S. to see their production control. On his return to Japan, he discontinued the "oyakata" system of sub-contracting and replaced it with a new system in which managers and employees were all direct employees of the company. Inefficiency was removed from the production process, and the company paid higher salaries with incentives for efficiency. New accounting and cost controls were put in place, and time clocks were installed. Between 1899 and 1907, the number of telephone subscribers in Japan rose from 35,000 to 95,000. NEC entered the China market in 1908 with the implementation of the telegraph treaty between Japan and China, and also entered the Korean market, setting up an office in Seoul in January 1908.
During the period of 1907 to 1912, sales rose from 1.6 million yen to 2 million yen. The expansion of the Japanese phone service had been a key part of NEC's success during this period, but this expansion was about to take a pause. The Ministry of Communications delayed a third expansion plan of the phone service in March 1913, despite having 120,000 potential telephone subscribers waiting for phone installations. NEC's sales fell sixty percent between 1912 and 1915. During the interim, Iwadare started importing appliances, including electric fans, kitchen appliances, washing machines, and vacuum cleaners. Electric fans had never been seen in Japan before; the imports were intended to prop up company sales. In 1916, the government resumed the delayed telephone-expansion plan, adding 75,000 subscribers and 326,000 kilometers of new toll lines. Thanks to this third expansion plan, NEC expanded at a time when much of the rest of Japanese industry contracted. In 1919, NEC started its first association with Sumitomo, engaging Sumitomo Densen Seizosho to manufacture cables.
As part of the venture, NEC provided cable manufacturing equipment to Sumitomo Densen, and rights to Western Electric's duplex cable patents were also transferred to Sumitomo Densen. The Great Kantō earthquake struck Japan in 1923; 140,000 people were killed and 3.4 million were left homeless. Four of NEC's factories were destroyed, killing 105 of NEC's workers. Thirteen of Tokyo's telephone offices were destroyed by fire, and telephone and telegraph service was interrupted by damage to telephone cables. In response, the Ministry of Communications accelerated major programs to install automatic telephone switching systems and to enter radio broadcasting. The first automatic switching systems were the Strowger-type model made by Automatic Telephone Manufacturing Co. (ATM) in the United Kingdom. NEC participated in the installation of the automatic switching systems, becoming the general sales agent for ATM. NEC then developed its own Strowger-type automatic switching system, a first in Japan. One of the plants leveled during the Kanto earthquake, the Mita Plant, was chosen to support expanding production.
A new three-story steel-reinforced concrete building was built there, starting in 1925, modeled after the Western Electric Hawthorne Works. NEC started its radio communications business in 1924. Japan's first radio broadcaster, Radio Tokyo, was founded in 1924 and started broadcasting in 1925. NEC imported the broadcasting equipment from Western Electric. The expansion of radio broadcasting into Osaka and Nagoya marked the emergence of radio as a mass medium in Japan.
Frequency is the number of occurrences of a repeating event per unit of time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency and angular frequency. The period is the duration of time of one cycle in a repeating event, so the period is the reciprocal of the frequency. For example: if a newborn baby's heart beats at a frequency of 120 times a minute, its period, the time interval between beats, is half a second. Frequency is an important parameter used in science and engineering to specify the rate of oscillatory and vibratory phenomena, such as mechanical vibrations, audio signals, radio waves, and light. For cyclical processes, such as rotation, oscillations, or waves, frequency is defined as a number of cycles per unit time. In physics and engineering disciplines, such as optics and radio, frequency is usually denoted by a Latin letter f or by the Greek letter ν (nu). The relation between the frequency f and the period T of a repeating event or oscillation is given by f = 1/T.
The SI derived unit of frequency is the hertz, named after the German physicist Heinrich Hertz. One hertz means that an event repeats once per second; if a TV has a refresh rate of 1 hertz, the TV's screen will change its picture once a second. A previous name for this unit was cycles per second; the SI unit for period is the second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated r/min or rpm; 60 rpm equals one hertz. As a matter of convenience, longer and slower waves, such as ocean surface waves, tend to be described by wave period rather than frequency, while short and fast waves, like audio and radio, are described by their frequency instead of period. Angular frequency, denoted by the Greek letter ω, is defined as the rate of change of angular displacement, θ, or the rate of change of the phase of a sinusoidal waveform, or as the rate of change of the argument to the sine function: y(t) = sin(θ(t)) = sin(ωt) = sin(2πft), so that dθ/dt = ω = 2πf. Angular frequency is measured in radians per second but, for discrete-time signals, can be expressed as radians per sampling interval, a dimensionless quantity.
Angular frequency is larger than regular frequency by a factor of 2π. Spatial frequency is analogous to temporal frequency, but the time axis is replaced by one or more spatial displacement axes, e.g.: y(t) = sin(θ(t, x)) = sin(ωt + kx), so that dθ/dx = k. Wavenumber, k, is the spatial frequency analogue of angular temporal frequency and is measured in radians per meter. In the case of more than one spatial dimension, wavenumber is a vector quantity. For periodic waves in nondispersive media, frequency has an inverse relationship to the wavelength, λ. Even in dispersive media, the frequency f of a sinusoidal wave is equal to the phase velocity v of the wave divided by the wavelength λ of the wave: f = v/λ. In the special case of electromagnetic waves moving through a vacuum, v = c, where c is the speed of light in a vacuum, and this expression becomes f = c/λ. When waves from a monochromatic source travel from one medium to another, their frequency remains the same; only their wavelength and speed change.
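As a worked example of f = c/λ, here is a quick Python check for visible light; the 500 nm wavelength is an arbitrary choice:

    # Frequency of an electromagnetic wave in a vacuum: f = c / wavelength.
    c = 299_792_458        # speed of light in a vacuum, m/s
    wavelength = 500e-9    # 500 nm (green light, arbitrary example)
    print(f"f = {c / wavelength:.2e} Hz")   # about 6.0e14 Hz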
Measurement of frequency can be done in the following ways. Calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period, then dividing the count by the length of the time period. For example, if 71 events occur within 15 seconds, the frequency is f = 71/(15 s) ≈ 4.73 Hz. If the number of counts is not large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time. The latter method introduces a random error into the count of between zero and one count, so on average half a count; this is called gating error and causes an average error in the calculated frequency of Δf = 1/(2T), where T is the timing interval.
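Both measurement strategies, and the gating-error estimate, can be sketched in a few lines of Python; the measured values below are made up for illustration:

    # Method 1: count events within a fixed gate time.
    events, gate_time = 71, 15.0               # 71 events in 15 s
    f_gated = events / gate_time               # about 4.73 Hz
    delta_f = 1 / (2 * gate_time)              # average gating error, 1/(2T)

    # Method 2: time a predetermined number of occurrences (better for low counts).
    n_events, elapsed = 71, 15.02              # hypothetical measured interval
    f_timed = n_events / elapsed

    print(f"gated: {f_gated:.3f} Hz +/- {delta_f:.4f} Hz")
    print(f"timed: {f_timed:.3f} Hz")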
A power-on self-test (POST) is a process performed by firmware or software routines after a computer or other digital electronic device is powered on. This article deals with POSTs on personal computers, but many other embedded systems, such as those in major appliances, communications equipment, or medical equipment, have self-test routines which are automatically invoked at power-on. The results of the POST may be displayed on a panel that is part of the device, output to an external device, or stored for future retrieval by a diagnostic tool. Since a self-test might detect that the system's usual human-readable display is non-functional, an indicator lamp or a speaker may be provided to show error codes as a sequence of flashes or beeps. In addition to running tests, the POST process may set the initial state of the device from firmware. In the case of a computer, the POST routines are part of a device's pre-boot sequence. In IBM PC compatible computers, the main duties of POST are handled by the BIOS, which may hand some of these duties to other programs designed to initialize specific peripheral devices, notably for video and SCSI initialization.
These other duty-specific programs are known collectively as option ROMs or individually as the video BIOS, SCSI BIOS, and so on. The principal duties of the main BIOS during POST are as follows:

- verify CPU registers
- verify the integrity of the BIOS code itself
- verify some basic components like DMA and the interrupt controller
- find and verify system main memory
- initialize the BIOS
- pass control to other specialized extension BIOSes
- identify and select which devices are available for booting

The functions above are served by the POST in all BIOS versions back to the first. In later BIOS versions, POST will also:

- discover and catalog all system buses and devices
- provide a user interface for the system's configuration
- construct whatever system environment is required by the target operating system

The BIOS begins its POST when the CPU is reset. The first memory location the CPU tries to execute is known as the reset vector. In the case of a hard reboot, the northbridge will direct this code fetch to the BIOS located on the system flash memory.
For a warm boot, the BIOS will be located in the proper place in RAM and the northbridge will direct the reset vector call to the RAM. During the POST flow of a contemporary BIOS, one of the first things the BIOS should do is determine the reason it is executing. For a cold boot, for example, it may need to execute all of its functionality; if the system supports power saving or quick boot methods, the BIOS may be able to circumvent the standard POST device discovery and program the devices from a preloaded system device table. The POST flow for the PC has developed from a simple, straightforward process to one that is complex and convoluted. During the POST, the BIOS must integrate a plethora of competing and mutually exclusive standards and initiatives for the matrix of hardware and OSes the PC is expected to support, although at most only simple memory tests and the setup screen are displayed. In earlier BIOSes, up to around the turn of the millennium, the POST would perform a thorough test of all devices, including a complete memory test.
This design by IBM was modeled after their larger systems, which would perform a complete hardware test as part of their cold-start process. As the PC platform evolved into more of a commodity consumer device, the mainframe- and minicomputer-inspired high-reliability features such as parity memory and the thorough memory test in every POST were dropped from most models. The exponential growth of PC memory sizes, driven by the exponential drop in memory prices, was a factor in this, as the duration of a memory test using a given CPU is directly proportional to the memory size. The original IBM PC could be equipped with as little as 16 KB of RAM and typically had between 64 and 640 KB. Beginning with the IBM XT, a memory count was displayed during POST instead of a blank screen. A modern PC with a bus rate of around 1 GHz and a 32-bit bus might be 2000x or 5000x faster, but it might have more than 3 GB of memory, 5000x more. With people being more concerned with boot times now than in the 1980s, the 30 to 60 second memory test adds undesirable delay for a benefit of confidence that most users do not consider worth the cost.
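The proportionality is easy to see with back-of-the-envelope numbers; the test throughputs below are invented solely to illustrate the scaling, not measured figures:

    # Memory-test duration is proportional to memory size at a given test rate.
    def test_seconds(memory_bytes, bytes_per_second):
        return memory_bytes / bytes_per_second

    # Hypothetical throughputs for the two eras discussed above.
    print(f"640 KB at 100 KB/s: {test_seconds(640 * 1024, 100_000):.1f} s")
    print(f"3 GB at 100 MB/s: {test_seconds(3 * 1024**3, 100_000_000):.1f} s")

With these assumed rates the original PC's full test takes about 6.5 seconds, while the 3 GB machine needs about 32 seconds, in line with the delay described above.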
Most clone PC BIOSes allowed the user to skip the POST RAM check by pressing a key, and more modern machines performed no RAM test at all unless it was enabled via the BIOS setup. In addition, modern DRAM is considerably more reliable than the DRAM of the 1980s. As part of the starting sequence, the POST routines may display a prompt to the user for a key press to access built-in setup functions of the BIOS. This allows the user to set various options particular to the motherboard before the operating system is loaded. If no key is pressed, the POST will proceed on to the boot sequence required to load the installed operating system. The original IBM BIOS made POST diagnostic information available.
A microprocessor is a computer processor that incorporates the functions of a central processing unit on a single integrated circuit, or at most a few integrated circuits. The microprocessor is a multipurpose, clock-driven, register-based digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results as output. Microprocessors contain sequential digital logic and operate on symbols represented in the binary number system. The integration of a whole CPU onto a single or a few integrated circuits greatly reduced the cost of processing power. Integrated circuit processors are produced in large numbers by automated processes, resulting in a low unit price. Single-chip processors also increase reliability because there are many fewer electrical connections that could fail. As microprocessor designs improve, the cost of manufacturing a chip stays roughly the same, according to Rock's law. Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits.
Microprocessors combined this into a few large-scale ICs. Continued increases in microprocessor capacity have since rendered other forms of computers almost completely obsolete, with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers. The complexity of an integrated circuit is bounded by physical limitations on the number of transistors that can be put onto one chip, the number of package terminations that can connect the processor to other parts of the system, the number of interconnections it is possible to make on the chip, and the heat that the chip can dissipate. Advancing technology makes more powerful chips feasible to manufacture. A minimal hypothetical microprocessor might include only an arithmetic logic unit (ALU) and a control logic section. The ALU performs addition and operations such as AND or OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation.
The control logic retrieves instruction codes from memory and initiates the sequence of operations required for the ALU to carry out the instruction. A single operation code might affect many individual data paths and other elements of the processor. As integrated circuit technology advanced, it became feasible to manufacture more and more complex processors on a single chip. The size of data objects became larger, and additional features were added to the processor architecture. Floating-point arithmetic, for example, was not available on 8-bit microprocessors, but had to be carried out in software. Integration of the floating-point unit, first as a separate integrated circuit and then as part of the same microprocessor chip, sped up floating-point calculations. Physical limitations of integrated circuits once made such practices as a bit-slice approach necessary: instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each data word. While this required extra logic to handle, for example, carry and overflow within each slice, the result was a system that could handle, for example, 32-bit words using integrated circuits with a capacity for only four bits each.
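A rough sketch of the bit-slice idea, with Python standing in for the hardware: each 4-bit slice adds one nibble of its operands, and the carry out of each slice feeds the carry in of the next, so eight 4-bit slices together handle a 32-bit word.

    # Add two 32-bit words using eight 4-bit slices with carry propagation.
    def slice_add(a, b, carry_in):
        total = a + b + carry_in
        return total & 0xF, total >> 4       # 4-bit sum, carry out

    def add32(x, y):
        result, carry = 0, 0
        for i in range(8):                   # eight 4-bit slices per 32-bit word
            a = (x >> (4 * i)) & 0xF
            b = (y >> (4 * i)) & 0xF
            nibble, carry = slice_add(a, b, carry)
            result |= nibble << (4 * i)
        return result, carry                 # final carry indicates overflow

    print(hex(add32(0x89ABCDEF, 0x12345678)[0]))   # 0x9be02467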
The ability to put large numbers of transistors on one chip makes it feasible to integrate memory on the same die as the processor. This CPU cache has the advantage of faster access than off-chip memory and increases the processing speed of the system for many applications. Processor clock frequency has increased more rapidly than external memory speed, so cache memory is necessary if the processor is not to be delayed by slower external memory. A microprocessor is a general-purpose entity, and several specialized processing devices have followed it. A digital signal processor is specialized for signal processing. Graphics processing units are processors designed for realtime rendering of images. Other specialized units exist for video processing and machine vision. Microcontrollers integrate a microprocessor with peripheral devices in embedded systems. Systems on chip integrate one or more microprocessor or microcontroller cores. Microprocessors can be selected for differing applications based on their word size, a measure of their complexity.
Longer word sizes allow each clock cycle of a processor to carry out more computation, but correspond to physically larger integrated circuit dies with higher standby and operating power consumption. 4-, 8- or 12-bit processors are integrated into microcontrollers operating embedded systems. Where a system is expected to handle larger volumes of data or require a more flexible user interface, 16-, 32- or 64-bit processors are used. An 8- or 16-bit processor may be selected over a 32-bit processor for system on a chip or microcontroller applications that require low-power electronics, or are part of a mixed-signal integrated circuit with noise-sensitive on-chip analog electronics such as high-resolution analog-to-digital converters, or both. Running 32-bit arithmetic on an 8-bit chip could end up using more power, as the chip must execute software with multiple instructions, as sketched below. Thousands of items that were traditionally not computer-related incorporate microprocessors.
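To illustrate the multiple-instruction point: a single 32-bit addition on an 8-bit chip decomposes into a loop of byte-wide add-with-carry operations, roughly as below, with Python standing in for the per-byte machine instructions; a 32-bit chip would do the same work in one instruction.

    # 32-bit addition emulated with four 8-bit add-with-carry steps.
    def add32_on_8bit_cpu(x, y):
        result, carry = 0, 0
        for i in range(4):                   # one byte-wide add per iteration
            s = ((x >> (8 * i)) & 0xFF) + ((y >> (8 * i)) & 0xFF) + carry
            result |= (s & 0xFF) << (8 * i)
            carry = s >> 8
        return result

    print(add32_on_8bit_cpu(100_000, 250_000))   # 350000, via 4 byte-wide adds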
A computer monitor is an output device that displays information in pictorial form. A monitor comprises the display device, circuitry, and power supply. The display device in modern monitors is a thin film transistor liquid crystal display with LED backlighting, which has replaced cold-cathode fluorescent lamp backlighting. Older monitors used a cathode ray tube (CRT). Monitors are connected to the computer via VGA, Digital Visual Interface, HDMI, DisplayPort, low-voltage differential signaling, or other proprietary connectors and signals. Originally, computer monitors were used for data processing while television receivers were used for entertainment. From the 1980s onwards, computers have been used for both data processing and entertainment, while televisions have implemented some computer functionality. The common aspect ratio of televisions, and of computer monitors, has changed from 4:3 to 16:10, and then to 16:9. Modern computer monitors are largely interchangeable with conventional television sets. However, as computer monitors do not include components such as a television tuner and speakers, it may not be possible to use a computer monitor as a television without external components.
Early electronic computers were fitted with a panel of light bulbs where the state of each particular bulb would indicate the on/off state of a particular register bit inside the computer. This allowed the engineers operating the computer to monitor the internal state of the machine, so this panel of lights came to be known as the 'monitor'. As early monitors were only capable of displaying a limited amount of information and were transient, they were rarely considered for program output. Instead, a line printer was the primary output device, while the monitor was limited to keeping track of the program's operation. As technology developed, engineers realized that the output of a CRT display was more flexible than a panel of light bulbs and that, by giving control of what was displayed to the program itself, the monitor itself became a powerful output device in its own right. Computer monitors were formerly known as visual display units, but this term had fallen out of use by the 1990s. Multiple technologies have been used for computer monitors.
Until the 21st century most computer monitors used cathode ray tubes, but these have since been superseded by LCD monitors. The first computer monitors used cathode ray tubes. Prior to the advent of home computers in the late 1970s, it was common for a video display terminal using a CRT to be physically integrated with a keyboard and other components of the system in a single large chassis. The display was monochrome and far less sharp and detailed than on a modern flat-panel monitor, necessitating the use of large text and limiting the amount of information that could be displayed at one time. High-resolution CRT displays were developed for specialized military and scientific applications, but they were far too costly for general use. Some of the earliest home computers were limited to monochrome CRT displays, but color display capability was a standard feature of the pioneering Apple II, introduced in 1977, and a specialty of the more graphically sophisticated Atari 800, introduced in 1979. Either computer could be connected to the antenna terminals of an ordinary color TV set or used with a purpose-made CRT color monitor for optimum resolution and color quality.
Lagging several years behind, in 1981 IBM introduced the Color Graphics Adapter, which could display four colors with a resolution of 320×200 pixels, or 640×200 pixels with two colors. In 1984 IBM introduced the Enhanced Graphics Adapter, which was capable of producing 16 colors at a resolution of 640×350. By the end of the 1980s, color CRT monitors that could display 1024×768 pixels were available and affordable. During the following decade, maximum display resolutions increased and prices continued to fall. CRT technology remained dominant in the PC monitor market into the new millennium because it was cheaper to produce and offered viewing angles close to 180 degrees. CRTs still offer some image quality advantages over LCDs, but improvements to the latter have made them much less obvious. The dynamic range of early LCD panels was poor, and although text and other motionless graphics were sharper than on a CRT, an LCD characteristic known as pixel lag caused moving graphics to appear noticeably smeared and blurry.
There are multiple technologies that have been used to implement liquid crystal displays. Throughout the 1990s, the primary use of LCD technology as computer monitors was in laptops, where the lower power consumption, lighter weight, and smaller physical size of LCDs justified the higher price versus a CRT. The same laptop would be offered with an assortment of display options at increasing price points: monochrome, passive color, or active matrix color. As volume and manufacturing capability improved, the monochrome and passive color technologies were dropped from most product lines. TFT-LCD is a variant of LCD that is now the dominant technology used for computer monitors. The first standalone LCDs appeared in the mid-1990s, selling for high prices. As prices declined over a period of years, they became more popular, and by 1997 were competing with CRT monitors. Among the first desktop LCD computer monitors were the Eizo L66 in the mid-1990s, the Apple Studio Display in 1998, and the Apple Cinema Display in 1999. In 2003, TFT-LCDs outsold CRTs for the first time, becoming the primary technology used for computer monitors.
The main advantages of LCDs over CRT displays are that LCDs consume less power, take up much less space, and are considerably lighter.
The Color Graphics Adapter, also called the Color/Graphics Adapter or IBM Color/Graphics Monitor Adapter, introduced in 1981, was IBM's first graphics card and first color display card for the IBM PC. For this reason, it also became that computer's first color computer display standard. The standard IBM CGA graphics card was equipped with 16 kilobytes of video memory and could be connected either to a dedicated direct-drive CRT monitor using a 4-bit digital RGBI interface, such as the IBM 5153 color display, or to an NTSC-compatible television or composite video monitor via an RCA connector. The RCA connector provided only baseband video, so connecting the CGA card to a standard television set required a separate RF modulator unless the TV had an RCA jack, though the RF modulator route was sometimes more practical anyway, since one could also hook up an antenna to it. Built around the Motorola MC6845 display controller, the CGA card featured several graphics and text modes.
The highest display resolution of any mode was 640×200, and the highest color depth supported was 4-bit. CGA supports the following modes.

Graphics modes:
- 320×200 in 4 colors from a 16-color hardware palette, with a pixel aspect ratio of 1:1.2
- 640×200 in 2 colors, with a pixel aspect ratio of 1:2.4

Text modes:
- 40×25 with an 8×8 pixel font
- 80×25 with an 8×8 pixel font

Extended graphics modes:
- 160×100 16-color mode
- artifact colors using an NTSC monitor

IBM intended that CGA be compatible with a home television set: the 40×25 text and 320×200 graphics modes are usable with a television, while the 80×25 text and 640×200 graphics modes are intended for a monitor. Despite varying bit depths among the CGA graphics modes, CGA processes colors in its palette in four bits, yielding 2^4 = 16 different colors. The four color bits are arranged according to the RGBI color model: the lower three bits represent the red, green, and blue color components, and the fourth is an intensity bit that brightens all three. In graphics modes, colors are set per-pixel. These four bits are passed on unmodified to the DE-9 connector at the back of the card, leaving all color processing to the RGBI monitor connected to it.
With respect to the RGBI color model described above, the monitor would use the following formula to process the digital four-bit color number into analog voltages ranging from 0.0 to 1.0:

    red   := 2/3 × (colorNumber & 4)/4 + 1/3 × (colorNumber & 8)/8
    green := 2/3 × (colorNumber & 2)/2 + 1/3 × (colorNumber & 8)/8
    blue  := 2/3 × (colorNumber & 1)/1 + 1/3 × (colorNumber & 8)/8

Color 6 is treated differently. For the composite output, these four-bit color numbers are encoded by the CGA's onboard hardware into an NTSC-compatible signal fed to the card's RCA output jack. For cost reasons, this is not done using an RGB-to-YIQ converter as called for by the NTSC standard, but by a series of flip-flops and delay lines; the hues seen are therefore lacking in purity. The relative luminances of the colors produced by the composite color-generating circuit differ between CGA revisions: they are identical for colors 1-6 and 9-14 on early CGAs produced until 1983, and different on later CGAs due to the addition of extra resistors. As noted, however, this method only works on NTSC television sets; PAL TVs do not display the colors as expected when connected to the composite output, as PAL's superior color separation prevents artifacting from occurring.
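A sketch of that conversion in Python. The brown special case reflects the commonly described behavior of RGBI monitors such as the IBM 5153, which reduce the green component of color 6 from 2/3 to 1/3, turning dark yellow into brown; treat the exact factor as an assumption rather than part of the formula above.

    # Convert a 4-bit CGA RGBI color number into analog RGB levels in [0.0, 1.0].
    def rgbi_to_analog(n):
        intensity = (n & 8) >> 3
        red   = 2/3 * ((n & 4) >> 2) + 1/3 * intensity
        green = 2/3 * ((n & 2) >> 1) + 1/3 * intensity
        blue  = 2/3 * (n & 1)        + 1/3 * intensity
        if n == 6:                  # monitor-side special case: dark yellow -> brown
            green = 1/3
        return red, green, blue

    for n in range(16):
        print(n, tuple(round(c, 2) for c in rgbi_to_analog(n)))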
When the CGA was introduced in 1981, IBM did not yet offer an RGBI monitor; instead, customers were supposed to use the RCA output with an RF modulator to connect the CGA to their television set. The IBM 5153 Personal Computer Color Display would not be introduced until March 1983. Owing to the lack of available RGBI monitors in 1981 and 1982, many users used simpler RGB monitors, reducing the number of available colors to eight and displaying both colors 6 and 14 as yellow. This is relevant insofar as an application or game programmer who used either of these configurations would have expected color 6 to look dark yellow instead of brown. CGA offers four BIOS text modes. In the 40×25 mode, characters are displayed in up to 16 colors, and each character is a pattern of 8×8 dots. The effective screen resolution in this mode is 320×200 pixels, though individual pixels cannot be addressed independently. The choice of patterns for any location is thus limited to one of the 256 available characters, the patterns for which are stored in a ROM chip on the card itself.
The display font in text mode is therefore fixed in the card's ROM and cannot be redefined by software.
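The card's 16 KB of video memory and the text-mode geometry described above check out numerically; a quick Python verification:

    # Both CGA graphics modes fill (almost exactly) the 16 KB framebuffer.
    for w, h, bpp in [(320, 200, 2), (640, 200, 1)]:
        print(f"{w}x{h} at {bpp} bpp: {w * h * bpp // 8} bytes")   # 16000 bytes each

    # 40x25 text with an 8x8 font yields the 320x200 effective resolution.
    cols, rows, cell = 40, 25, 8
    print(f"{cols * cell}x{rows * cell} effective pixels")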