The PlayStation is a home video game console developed and marketed by Sony Computer Entertainment. The console was released on 3 December 1994 in Japan, 9 September 1995 in North America, 29 September 1995 in Europe, and 15 November 1995 in Australia, and was the first of the PlayStation lineup of home video game consoles. It competed with the Nintendo 64 and the Sega Saturn as part of the fifth generation of video game consoles. The PlayStation was the first "computer entertainment platform" to ship 100 million units, a milestone it reached 9 years and 6 months after its initial launch. In July 2000, a redesigned, slim version called the PS one was released, replacing the original grey console and named to avoid confusion with its successor, the PlayStation 2; the PlayStation 2, backwards compatible with the PlayStation's DualShock controller and games, was announced in 1999 and launched in 2000. The last PS one units were sold in late 2006 to early 2007, shortly after it was discontinued, for a total of 102 million units shipped since the original console's launch 11 years earlier.
Games for the PlayStation continued to sell until Sony ceased production of both the PlayStation and PlayStation games on 23 March 2006, over 11 years after the console's release and less than a year before the debut of the PlayStation 3. On 19 September 2018, Sony unveiled the PlayStation Classic to mark the 24th anniversary of the original console. The new console, a miniature recreation of the original PlayStation preloaded with 20 titles released on the original console, was released on 3 December 2018, the exact date the original console had been released in Japan in 1994. The inception of what would become the PlayStation dates back to 1986 and a joint venture between Nintendo and Sony. Nintendo had produced floppy disk technology to complement cartridges, in the form of the Family Computer Disk System, and wanted to continue this complementary storage strategy for the Super Famicom. Nintendo approached Sony to develop a CD-ROM add-on, tentatively titled the "Play Station" or "SNES-CD". A contract was signed, and work began.
Nintendo's choice of Sony was due to a prior dealing: Ken Kutaragi, the person who would later be dubbed "The Father of the PlayStation", had sold Nintendo on using the Sony SPC-700 processor as the eight-channel ADPCM sound set in the Super Famicom/SNES console through an impressive demonstration of the processor's capabilities. Kutaragi was nearly fired by Sony because he had been working with Nintendo on the side without Sony's knowledge; it was then-CEO Norio Ohga who recognised the potential both in Kutaragi's chip and in working with Nintendo on the project, and who kept Kutaragi on at Sony. It was not until Nintendo cancelled the project that Sony decided to develop its own console. Sony planned to develop a Super NES-compatible, Sony-branded console, but one which would be more of a home entertainment system, playing both Super NES cartridges and a new CD format which Sony would design; this was also to be the format used in SNES-CDs, giving a large degree of control to Sony despite Nintendo's leading position in the video gaming market.
The product, dubbed the "Play Station", was to be announced at the May 1991 Consumer Electronics Show. However, when Nintendo's Hiroshi Yamauchi read the original 1988 contract between Sony and Nintendo, he realised that the earlier agreement handed Sony complete control over any and all titles written on the SNES CD-ROM format. Yamauchi decided that the contract was unacceptable and secretly cancelled all plans for the joint Nintendo-Sony SNES CD attachment. Instead of announcing a partnership between Sony and Nintendo, at 9 am on the day of the CES, Nintendo chairman Howard Lincoln stepped onto the stage and revealed that Nintendo was now allied with Philips, and that Nintendo was planning to abandon all the previous work Nintendo and Sony had accomplished. Lincoln and Minoru Arakawa had, unbeknownst to Sony, flown to Philips' global headquarters in the Netherlands and formed an alliance of a decidedly different nature, one that would give Nintendo total control over its licenses on Philips machines.
After the collapse of the joint Nintendo project, Sony considered allying itself with Sega to produce a stand-alone console. The Sega CEO at the time, Tom Kalinske, took the proposal to Sega's board of directors in Tokyo, who promptly vetoed the idea. Kalinske, in a 2013 interview, recalled them saying, "That's a stupid idea, Sony doesn't know how to make hardware. They don't know how to make software either. Why would we want to do this?" This prompted Sony to halt its research, but the company ultimately decided to use what it had developed so far with both Nintendo and Sega to make a complete console based upon the Super Famicom. As a result, Nintendo filed a lawsuit claiming breach of contract and attempted, in US federal court, to obtain an injunction against the release of what had been christened the "Play Station", on the grounds that Nintendo owned the name. The federal judge presiding over the case denied the injunction, and in October 1991 the first incarnation of the brand-new game system was revealed.
However, it is theorised that only 200 or so of these machines were produced. By the end of 1992, Sony and Nintendo reached a deal whereby the "Play Station" would still have a port for SNES games, but Nintendo would own the rights and receive the bulk of the profits from the games, and the SNES would continue to use the Sony-designed audio chip. However, Sony decided in early 1993 to begin reworking the "Play Station" concept to target a new generation of hardware and software.
LSI Corporation
LSI Corporation was an American company based in San Jose, California, which designed semiconductors and software that accelerate storage and networking in data centers, mobile networks and client computing. On May 6, 2014, LSI Corporation was acquired by Avago Technologies for $6.6 billion; LSI stockholders had voted in favor of the proposal in April 2014, merging the company into its parent while continuing with the LSI brand. In 1981, Wilfred Corrigan, Bill O'Meara, Rob Walker and Mitchell "Mick" Bohn founded the company, under the name LSI Logic, in Milpitas, California. Wilfred Corrigan served as CEO from 1981 until 2005. LSI was funded by venture capitalists, including Sequoia Capital with $6 million; in March 1982, a second round of financing brought in another $16 million. LSI Logic went public on Nasdaq under the symbol LSI in May 1983, with the largest IPO to date at $153 million. In 1985, the firm entered into a joint venture with Kawasaki Steel, Japan's third largest steel manufacturer, to build a $100 million wafer fabrication plant in Tsukuba, Japan.
In 1987, SEMATECH was incorporated as a result of the 1984 National Cooperative Research Act, which reduced potential antitrust liabilities of research joint ventures. SEMATECH is a development consortium to advance semiconductor and chip manufacturing; LSI Logic was among its 14 founding members, but withdrew from SEMATECH in January 1992. In July 1991, LSI entered into an agreement with Sanyo Electric of Japan to make a set of chips that translate an HDTV signal into a television image. LSI Logic started developing its CoreWare technology in 1992. In 1993, Sony Computer Entertainment chose LSI Logic as its ASIC partner, charged with fitting the PlayStation CPU on a single chip; LSI's CoreWare could do it. Sony worked with LSI's engineers to develop the graphics engine, DMA controller, and I/O and bus controllers. In 1995, LSI Logic acquired the remaining 45% of the shares in its Canadian subsidiary. In 1997, Mint Technology, an engineering services company, was acquired by LSI. In August 1998, it bought Symbios Logic from Hyundai Electronics for $760 million in cash.
In February 1999, LSI acquired Seeq Technology, adding physical-layer Ethernet technology to LSI's product line. In May 2000, LSI acquired IntraServer for $70 million, with expectations of adding IntraServer's expanding customer base to LSI's own. In November 2000, LSI acquired Syntax Systems; in August 2001 the groups merged to become LSI Logic Storage Systems, later Engenio Information Technologies. In March 2001, LSI acquired C-Cube for $878 million in stock; in that same quarter, LSI introduced a flexible process technology. In September 2001, LSI acquired a RAID adapter division from American Megatrends in a $221 million cash transaction; included in this deal, LSI received AMI's MegaRAID software intellectual property, host bus adapter products and 200 RAID employees. In January 2002, LSI and Storage Technology Corporation entered an alliance making StorageTek the distributor of their co-branded storage products. In August 2002, LSI acquired Mylex from IBM. In November 2003, LSI sold its Tsukuba, Japan facility.
The Engenio division of LSI filed for its own IPO in 2004, but withdrew, citing adverse market conditions after the burst of the dot-com bubble. In 2005, Abhi Talwalkar joined the company as president and CEO and was appointed to the board of directors. Talwalkar, an executive at Intel Corporation before joining LSI, began a program of acquisitions and divestitures. In October 2005, LSI Logic opened a semiconductor design and engineering development center at the Dubai Silicon Oasis Microelectronics Innovation Center. In 2006, LSI Logic sold its Oregon design and manufacturing facility to ON Semiconductor. In October of that same year, it agreed to an all-stock merger with Agere Systems worth about $4 billion. In March 2007, LSI acquired SiliconStor Inc., a provider of semiconductor solutions for enterprise storage networks, for $55 million in cash. In April 2007, LSI completed its merger with Agere Systems Inc., whose operations became LSI's Mobility Products Group, and rebranded the firm LSI Corporation. In July 2007, Magnum Semiconductor Inc., a spin-off of Cirrus Logic Inc., acquired LSI's consumer products business and 13 percent of LSI's workforce.
The acquired product lines included architectures named DoMiNo and Zevio, evolutions of the C-Cube Microsystems technology. In August 2007, LSI signed an agreement with STATS ChipPAC Ltd to sell its Pathumthani, Thailand semiconductor assembly and test operations for $100 million. In October 2007, LSI acquired Tarari, a maker of silicon and software, for $85 million in cash; Tarari's products were integrated into LSI's NSPG organization. In October 2007, LSI also completed the sale of its Mobility Division to Infineon Technologies AG for $450 million in cash, with 700 LSI employees transferring to Infineon in the deal. In April 2009, LSI bought the 3ware RAID adapter business of Applied Micro Circuits Corporation. In July 2009, LSI agreed to acquire ONStor, a NAS vendor, for $25 million, and placed ONStor in its Engenio storage division. In March 2011, LSI announced the sale of its Engenio external storage systems business to NetApp for $480 million in cash; the sale of the Engenio division, which generated revenues of $705 million in 2010, was completed in May.
In January 2012, LSI completed the acquisition of SandForce, which produced flash memory controllers. That April, LSI started producing its own PCIe cards for data center servers, using SandForce's flash controller chips, under its new Nytro product line.
Central processing unit
A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic and input/output operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit, distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. The form and implementation of CPUs have changed over the course of their history, but their fundamental operation remains unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching and execution of instructions by directing the coordinated operations of the ALU, registers and other components.
Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces and other components of a computer. Some computers employ a multi-core processor, a single chip containing two or more CPUs called "cores". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". Since the term "CPU" is defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer had been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was omitted so that the machine could be finished sooner.
On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types; significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities; this standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit.
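To make the stored-program idea concrete, here is a minimal Python sketch of a hypothetical toy machine (the LOAD/ADD/STORE/HALT instruction names are invented for this example and belong to no real instruction set). Instructions and data share one memory, as in a von Neumann design, so re-programming the machine means nothing more than writing different values into that memory:

```python
# A toy stored-program machine: instructions and data live in one memory.
# Purely illustrative; not a model of EDVAC or any real instruction set.

def run(memory):
    pc = 0        # program counter: index of the next instruction
    acc = 0       # a single accumulator register
    while True:
        op, arg = memory[pc]       # fetch
        pc += 1
        if op == "LOAD":           # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Cells 0-3 hold the program; cells 4-6 hold data, in the same memory.
mem = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 2, 3, 0]
print(run(mem)[6])   # prints 5; editing mem is all it takes to re-program
```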
The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines; modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes even in toys. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design, using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both.
Most modern CPUs are primarily von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications. In early computers, relays and vacuum tubes were used as switching elements, and the overall speed of a system is dependent on the speed of its switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed only rarely. In the end, tube-based CPUs became dominant because the significant speed advantages they afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs: clock signal frequencies ranging from 100 kHz to 4 MHz were common at this time, limited largely by the speed of the switching devices they were built with.
Interrupt
In system programming, an interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler to deal with the event. This interruption is temporary; after the interrupt handler finishes, the processor resumes normal activities. There are two types of interrupts: hardware interrupts and software interrupts. Hardware interrupts are used by devices to communicate that they require attention from the operating system. Internally, hardware interrupts are implemented using electronic alerting signals that are sent to the processor from an external device, which is either a part of the computer itself, such as a disk controller, or an external peripheral. For example, pressing a key on the keyboard or moving the mouse triggers hardware interrupts that cause the processor to read the keystroke or mouse position.
Unlike the software type, hardware interrupts are asynchronous and can occur in the middle of instruction execution, requiring additional care in programming. The act of initiating a hardware interrupt is referred to as an interrupt request (IRQ). A software interrupt is caused either by an exceptional condition in the processor itself, or by a special instruction in the instruction set which causes an interrupt when it is executed. The former is often called a trap or exception and is used for errors or events occurring during program execution that are exceptional enough that they cannot be handled within the program itself. For example, a divide-by-zero exception will be thrown if the processor's arithmetic logic unit is commanded to divide a number by zero, as this instruction is in error and impossible to complete. The operating system will catch this exception and can decide what to do about it, for example aborting the process and displaying an error message. Software interrupt instructions can function similarly to subroutine calls and are used for a variety of purposes, such as requesting services from device drivers, like interrupts sent to and from a disk controller to request reading or writing of data to and from the disk.
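The dispatch of such a trap to an operating-system handler can be sketched in a few lines; this is a toy Python model with an invented trap table and handler names, not any real OS interface:

```python
# Toy model: a trap table maps exception numbers to handlers, roughly as
# an OS registers handlers for processor exceptions. Names are invented.

DIVIDE_ERROR = 0
trap_table = {}

def register_trap(vector, handler):
    trap_table[vector] = handler

def execute_div(a, b):
    if b == 0:
        # The "processor" raises a trap; the registered handler decides
        # what to do (here it aborts the operation with a message).
        return trap_table[DIVIDE_ERROR](a, b)
    return a // b

register_trap(DIVIDE_ERROR, lambda a, b: print(f"trap: {a}/0 aborted"))
execute_div(10, 0)    # handled by the trap handler instead of crashing
```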
Each interrupt has its own interrupt handler. The number of hardware interrupts is limited by the number of interrupt request lines to the processor, but there may be hundreds of different software interrupts. Interrupts are a commonly used technique for computer multitasking, especially in real-time computing; such a system is said to be interrupt-driven. Interrupts are similar to signals, the difference being that signals are used for inter-process communication, mediated by the kernel and handled by processes, while interrupts are mediated by the processor and handled by the kernel; the kernel may pass an interrupt on as a signal to a process. Hardware interrupts were introduced as an optimization, eliminating unproductive waiting time in polling loops that wait for external events. The first system to use this approach was the DYSEAC, completed in 1954, although earlier systems provided error trap functions. Interrupts may be implemented in hardware as a distinct system with control lines, or they may be integrated into the memory subsystem.
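The interrupt-to-signal relationship just described can be observed directly with Python's standard signal module on a Unix-like system: the kernel delivers the keyboard interrupt to the process as SIGINT, and the process's registered handler runs before normal execution resumes.

```python
# The kernel passes an interrupt-like event to a process as a signal;
# the registered handler runs, then the interrupted flow continues.
import signal

def on_interrupt(signum, frame):
    print(f"caught signal {signum}; handler done, execution resumes")

signal.signal(signal.SIGINT, on_interrupt)   # register the handler
print("waiting; press Ctrl-C to deliver SIGINT")
signal.pause()   # Unix-only: block until any signal arrives
```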
If implemented in hardware, an interrupt controller circuit such as the IBM PC's Programmable Interrupt Controller (PIC) may be connected between the interrupting device and the processor's interrupt pin to multiplex several sources of interrupt onto the one or two CPU lines typically available. If implemented as part of the memory controller, interrupts are mapped into the system's memory address space. Interrupts can be categorized into the following types:
Maskable interrupt: a hardware interrupt that may be ignored by setting a bit in an interrupt mask register's bit-mask.
Non-maskable interrupt (NMI): a hardware interrupt that lacks an associated bit-mask, so that it can never be ignored. NMIs are used for the highest-priority tasks such as timers, especially watchdog timers.
Inter-processor interrupt (IPI): a special case of interrupt, generated by one processor to interrupt another processor in a multiprocessor system.
Software interrupt: an interrupt generated within a processor by executing an instruction. Software interrupts are often used to implement system calls, because they result in a subroutine call with a CPU ring level change.
Spurious interrupt: a hardware interrupt that is unwanted. Spurious interrupts are typically generated by system conditions such as electrical interference on an interrupt line or through incorrectly designed hardware.
Processors typically have an internal interrupt mask which allows software to ignore all external hardware interrupts while it is set; setting or clearing this mask may be faster than accessing an interrupt mask register in a PIC or disabling interrupts in the device itself (a minimal sketch of such a bit-mask appears below). In some cases, such as the x86 architecture, disabling and enabling interrupts on the processor itself acts as a memory barrier. An interrupt that leaves the machine in a well-defined state is called a precise interrupt. Such an interrupt has four properties: the program counter (PC) is saved in a known place; all instructions before the one pointed to by the PC have fully executed; no instruction beyond the one pointed to by the PC has been executed, or any such instructions are undone before handling the interrupt; and the execution state of the instruction pointed to by the PC is known.
An interrupt that does not meet these requirements is called an imprecise interrupt.
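As an illustration of the interrupt mask described above, here is a minimal Python sketch of a bit-mask register; the IRQ numbers and function names are hypothetical, chosen only for the example:

```python
# Minimal bit-mask sketch: setting a bit masks (ignores) that interrupt
# line; clearing it allows delivery again. All names are invented.

IRQ_TIMER, IRQ_KEYBOARD, IRQ_DISK = 0, 1, 2
mask = 0

def disable(irq):
    global mask
    mask |= (1 << irq)          # set the bit: interrupt is masked

def enable(irq):
    global mask
    mask &= ~(1 << irq)         # clear the bit: interrupt is delivered

def deliver(irq):
    return "masked, ignored" if mask & (1 << irq) else "delivered"

disable(IRQ_KEYBOARD)
print(deliver(IRQ_KEYBOARD))    # masked, ignored
print(deliver(IRQ_TIMER))       # delivered
```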
Discrete cosine transform
A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. DCTs are important to numerous applications in science and engineering, from lossy compression of audio and images to spectral methods for the numerical solution of partial differential equations. The use of cosine rather than sine functions is critical for compression, since it turns out that fewer cosine functions are needed to approximate a typical signal, whereas for differential equations the cosines express a particular choice of boundary conditions. In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. The DCTs are related to Fourier series coefficients of a periodically and symmetrically extended sequence, whereas DFTs are related to Fourier series coefficients of a periodically extended sequence. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry, whereas in some variants the input and/or output data are shifted by half a sample.
There are eight standard DCT variants. The most common variant of the discrete cosine transform is the type-II DCT, often called simply "the DCT"; its inverse, the type-III DCT, is correspondingly called "the inverse DCT" or "the IDCT". Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data. Multidimensional DCTs have been developed to extend the concept of the DCT to multidimensional (MD) signals, and there are several algorithms to compute an MD DCT. A variety of fast algorithms have been developed to reduce the computational complexity of implementing the DCT. The DCT, and in particular the DCT-II, is often used in signal and image processing, especially for lossy compression, because it has a strong "energy compaction" property: in typical applications, most of the signal information tends to be concentrated in a few low-frequency components of the DCT. For strongly correlated Markov processes, the DCT can approach the compaction efficiency of the Karhunen-Loève transform.
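For concreteness, the type-II DCT of a real sequence x_0, ..., x_{N-1} is conventionally defined (here in unnormalized form) as:

```latex
X_k = \sum_{n=0}^{N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right)k\right],
\qquad k = 0, \ldots, N-1.
```

In this convention, the type-III transform is its inverse up to an overall scale factor of 2/N.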
As explained below, this energy compaction stems from the boundary conditions implicit in the cosine functions. A related transform, the modified discrete cosine transform, or MDCT, is used in AAC, Vorbis, WMA and MP3 audio compression. DCTs are also widely employed in solving partial differential equations by spectral methods, where the different variants of the DCT correspond to different even/odd boundary conditions at the two ends of the array. DCTs are closely related to Chebyshev polynomials, and fast DCT algorithms are used in Chebyshev approximation of arbitrary functions by series of Chebyshev polynomials, for example in Clenshaw–Curtis quadrature. The DCT is used in JPEG image compression and in MJPEG, MPEG, DV and Theora video compression. There, the two-dimensional DCT-II of N × N blocks is computed and the results are quantized and entropy coded. In this case, N is typically 8 and the DCT-II formula is applied to each row and column of the block. The result is an 8 × 8 transform coefficient array in which the (0,0) element (top-left) is the DC (zero-frequency) component and entries with increasing vertical and horizontal index values represent higher vertical and horizontal spatial frequencies.
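The JPEG-style usage just described can be sketched with a naive 1-D DCT-II applied to each row and then each column of an 8 × 8 block. NumPy is assumed to be available; a real codec would use a fast, factored algorithm rather than this direct sum:

```python
# Naive 2-D DCT-II of an 8x8 block, row transform followed by column
# transform, as in JPEG. Illustrative only; real codecs use fast DCTs.
import numpy as np

def dct_1d(x):
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k))
                     for k in range(N)])

def dct_2d(block):
    rows = np.apply_along_axis(dct_1d, 1, block)   # each row first
    return np.apply_along_axis(dct_1d, 0, rows)    # then each column

block = np.ones((8, 8))            # a flat block: all energy is DC
coeffs = dct_2d(block)
print(round(coeffs[0, 0], 1))      # 64.0 -- the (0,0) DC component
ac = coeffs.copy(); ac[0, 0] = 0
print(np.abs(ac).max() < 1e-9)     # True: a flat block has no AC energy
```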
Multidimensional DCTs have several applications; the 3-D DCT-II has several new applications such as hyperspectral imaging coding systems, variable temporal length 3-D DCT coding, video coding algorithms, adaptive video coding and 3-D compression. Due to enhancements in hardware and the introduction of several fast algorithms, the use of M-D DCTs is increasing. The DCT-IV has gained popularity for its applications in the fast implementation of real-valued polyphase filter banks, lapped orthogonal transforms and cosine-modulated wavelet bases. Like any Fourier-related transform, discrete cosine transforms express a function or a signal in terms of a sum of sinusoids with different frequencies and amplitudes. Like the discrete Fourier transform, a DCT operates on a function at a finite number of discrete data points. The obvious distinction between a DCT and a DFT is that the former uses only cosine functions, while the latter uses both cosines and sines. However, this visible difference is a consequence of a deeper distinction: a DCT implies different boundary conditions from the DFT or other related transforms.
The Fourier-related transforms that operate on a function over a finite domain, such as the DFT or DCT or a Fourier series, can be thought of as implicitly defining an extension of that function outside the domain. That is, once you write a function f as a sum of sinusoids, you can evaluate that sum at any x, even for x where the original f was not specified. The DFT, like the Fourier series, implies a periodic extension of the original function. A DCT, like a cosine transform, implies an even extension of the original function. However, because DCTs operate on finite, discrete sequences, two issues arise that do not apply to the continuous cosine transform.
Dynamic random-access memory
Dynamic random-access memory (DRAM) is a type of random-access semiconductor memory that stores each bit of data in a separate tiny capacitor within an integrated circuit. The capacitor can be either charged or discharged, and these two states represent the two values of a bit. The electric charge on the capacitors slowly leaks off, so without intervention the data on the chip would soon be lost. To prevent this, DRAM requires an external memory refresh circuit which periodically rewrites the data in the capacitors, restoring them to their original charge. This refresh process is the defining characteristic of dynamic random-access memory, in contrast to static random-access memory (SRAM), which does not require data to be refreshed. Unlike flash memory, DRAM is volatile memory, since it loses its data when power is removed; however, DRAM does exhibit limited data remanence. DRAM is used in digital electronics where low-cost and high-capacity memory is required. One of the largest applications for DRAM is the main memory in modern computers and graphics cards; it is also used in many portable devices and video game consoles.
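The leak-and-refresh behaviour can be caricatured in a few lines of Python; the decay rate, read threshold and refresh interval below are arbitrary illustrative numbers, not real DRAM characteristics:

```python
# Toy model of DRAM refresh: the cell's charge decays each tick, and a
# periodic refresh rewrites the stored value before it becomes unreadable.
THRESHOLD = 0.5    # below this charge, a stored 1 can no longer be read

class Cell:
    def __init__(self, bit):
        self.bit = bit
        self.charge = 1.0 if bit else 0.0

    def leak(self, factor=0.8):    # charge leaks off the capacitor
        self.charge *= factor

    def refresh(self):             # rewrite the cell at full charge
        self.charge = 1.0 if self.bit else 0.0

    def read(self):
        return 1 if self.charge > THRESHOLD else 0

cell = Cell(1)
for tick in range(10):
    cell.leak()
    if tick % 3 == 2:              # refresh every few ticks
        cell.refresh()
print(cell.read())   # 1 -- without the refresh calls this would read 0
```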
In contrast, SRAM, which is faster and more expensive than DRAM, is typically used where speed is of greater concern than cost and size, such as the cache memories in processors. Due to its need of a system to perform refreshing, DRAM has more complicated circuitry and timing requirements than SRAM, but it is much more widely used. The advantage of DRAM is the structural simplicity of its memory cells: only one transistor and a capacitor are required per bit, compared to four or six transistors in SRAM. This allows DRAM to reach very high densities, making DRAM much cheaper per bit, and the transistors and capacitors used are extremely small. Due to the dynamic nature of its memory cells, DRAM consumes relatively large amounts of power, and there are different ways of managing the power consumption. DRAM had a 47% increase in price-per-bit in 2017, the largest jump in 30 years since the 45% jump of 1988, though in recent years the price has been going down. The cryptanalytic machine code-named "Aquarius" used at Bletchley Park during World War II incorporated a hard-wired dynamic memory.
Paper tape was read and the characters on it "were remembered in a dynamic store.... The store used a large bank of capacitors, which were either charged or not, a charged capacitor representing cross and an uncharged capacitor dot. Since the charge leaked away, a periodic pulse was applied to top up those still charged". In 1964, Arnold Farber and Eugene Schlig, working for IBM, created a hard-wired memory cell using a transistor gate and tunnel diode latch; they later replaced the latch with two transistors and two resistors, a configuration that became known as the Farber-Schlig cell. In 1965, Benjamin Agusta and his team at IBM created a 16-bit silicon memory chip based on the Farber-Schlig cell, with 80 transistors, 64 resistors and 4 diodes. In 1966, DRAM was invented by Dr. Robert Dennard at the IBM Thomas J. Watson Research Center; he was granted U.S. patent number 3,387,286 in 1968. Capacitors had also been used for earlier memory schemes, such as the drum of the Atanasoff–Berry Computer, the Williams tube and the Selectron tube.
The Toshiba "Toscal" BC-1411 electronic calculator, introduced in November 1966, used a form of DRAM built from discrete components. The first DRAM was introduced in 1969 by Advanced Memory system, Inc of Sunnyvale, CA; this 1000 bit chip was sold to Honeywell, Wang Computer, others. In 1969 Honeywell asked Intel to make a DRAM using a three-transistor cell; this became the Intel 1102 in early 1970. However, the 1102 had many problems, prompting Intel to begin work on their own improved design, in secrecy to avoid conflict with Honeywell; this became the first commercially available DRAM, the Intel 1103, in October 1970, despite initial problems with low yield until the fifth revision of the masks. The 1103 was laid out by Pat Earhart; the masks were cut by Judy Garcia. The first DRAM with multiplexed row and column address lines was the Mostek MK4096 4 Kbit DRAM designed by Robert Proebsting and introduced in 1973; this addressing scheme uses the same address pins to receive the low half and the high half of the address of the memory cell being referenced, switching between the two halves on alternating bus cycles.
This was a radical advance, effectively halving the number of address lines required, which enabled the part to fit into packages with fewer pins, a cost advantage that grew with every jump in memory size. The MK4096 proved to be a robust design for customer applications, and at the 16 Kbit density the cost advantage increased. However, as density increased to 64 Kbit in the early 1980s, Mostek and other US manufacturers were overtaken by Japanese DRAM manufacturers dumping DRAMs on the US market. DRAM is arranged in a rectangular array of charge storage cells, each consisting of one capacitor and one transistor per data bit, and some DRAM matrices are many thousands of cells in width. The long horizontal lines connecting each row are known as word-lines. Each column of cells is composed of two bit-lines, each connected to every other storage cell in the column; they are known as the "+" and "−" bit lines. A sense amplifier is essentially a pair of cross-connected inverters between the bit-lines.
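The multiplexed addressing introduced with the MK4096 amounts to sending the two halves of an address over the same pins on successive cycles (conventionally strobed by the row and column address strobes, RAS and CAS). A minimal sketch, using the 12 address bits of a 4 Kbit part:

```python
# Splitting a 12-bit address into row and column halves that share the
# same 6 pins on alternating cycles, as in the MK4096's scheme.
ROW_BITS = 6
COL_BITS = 6   # 6 + 6 = 12 address bits -> 4096 (4 Kbit) cells

def split_address(addr):
    row = addr >> COL_BITS               # high half: row-select cycle
    col = addr & ((1 << COL_BITS) - 1)   # low half: column-select cycle
    return row, col

print(split_address(0b101100_011010))    # (44, 26): 12 bits over 6 pins
```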
RGB color model
The RGB color model is an additive color model in which red, green and blue light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors: red, green and blue. The main purpose of the RGB color model is for the sensing and display of images in electronic systems, such as televisions and computers, though it has also been used in conventional photography. Before the electronic age, the RGB color model already had a solid theory behind it, based in human perception of colors. RGB is a device-dependent color model: different devices detect or reproduce a given RGB value differently, since the color elements and their response to the individual R, G and B levels vary from manufacturer to manufacturer, or even in the same device over time; thus an RGB value does not define the same color across devices without some kind of color management. Typical RGB input devices are color TV and video cameras, image scanners and digital cameras. Typical RGB output devices are TV sets of various technologies, mobile phone displays, video projectors, multicolor LED displays and large screens such as the JumboTron.
Color printers, on the other hand, are subtractive color devices. This article discusses concepts common to all the different color spaces that use the RGB color model, which are used in one implementation or another in color image-producing technology. To form a color with RGB, three light beams (one red, one green and one blue) must be superimposed. Each of the three beams is called a component of that color, and each of them can have an arbitrary intensity, from fully off to fully on, in the mixture. The RGB color model is additive in the sense that the three light beams are added together, and their light spectra add, wavelength for wavelength, to make the final color's spectrum. This is opposite to the subtractive color model, which applies to paints, inks and other substances whose color depends on reflecting the light under which we see them. Because of these additive properties, the three colours together create white; this is in stark contrast to physical colours, such as dyes, which create black when mixed. Zero intensity for each component gives the darkest color, and full intensity of each gives a white.
When the intensities of all the components are the same, the result is a shade of gray, darker or lighter depending on the intensity. When the intensities are different, the result is a colorized hue, more or less saturated depending on the difference between the strongest and weakest of the intensities of the primary colors employed. When one of the components has the strongest intensity, the color is a hue near this primary color, and when two components have the same strongest intensity, the color is a hue of a secondary color. A secondary color is formed by the sum of two primary colors of equal intensity: cyan is green+blue, magenta is red+blue, and yellow is red+green; every secondary color is the complement of one primary color. The RGB color model itself does not define what is meant by red, green and blue colorimetrically, so the results of mixing them are not specified as absolute, but relative to the primary colors. When the exact chromaticities of the red, green and blue primaries are defined, the color model then becomes an absolute color space, such as sRGB or Adobe RGB.
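A minimal sketch of this additive mixing, assuming the common 8-bit-per-channel convention (components from 0 to 255, which the article has not formally introduced):

```python
# Additive mixing: channel intensities add (clamped to the maximum).
# Equal channels give grays; pairwise full-intensity sums give the
# secondary colors, and all three together give white.
def add_rgb(c1, c2):
    return tuple(min(a + b, 255) for a, b in zip(c1, c2))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(add_rgb(GREEN, BLUE))                 # (0, 255, 255)   cyan
print(add_rgb(RED, BLUE))                   # (255, 0, 255)   magenta
print(add_rgb(RED, GREEN))                  # (255, 255, 0)   yellow
print(add_rgb(add_rgb(RED, GREEN), BLUE))   # (255, 255, 255) white
gray = (128, 128, 128)                      # equal intensities: mid gray
```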
The choice of primary colors is related to the physiology of the human eye. The three kinds of light-sensitive photoreceptor cells in the normal human eye respond most to yellow, green and violet light. The difference in the signals received from the three kinds allows the brain to differentiate a wide gamut of different colors, while being most sensitive to yellowish-green light and to differences between hues in the green-to-orange region. As an example, suppose that light in the orange range of wavelengths enters the eye and strikes the retina. Light of these wavelengths would activate both the medium and long wavelength cones of the retina, but not equally; the long-wavelength cells will respond more. The difference in the response can be detected by the brain, and this difference is the basis of our perception of orange. Thus, the orange appearance of an object results from light from the object entering our eye and stimulating the different cones simultaneously but to different degrees. Use of the three primary colors is not sufficient to reproduce all colors.
The RGB color model is based on the Young–Helmholtz theory of trichromatic color vision, developed by Thomas Young and Hermann von Helmholtz in the early to mid nineteenth century, and on James Clerk Maxwell's colour triangle, which elaborated that theory.