Reduced instruction set computer
A reduced instruction set computer, or RISC, is one whose instruction set architecture allows it to have fewer cycles per instruction than a complex instruction set computer. Various suggestions have been made regarding a precise definition of RISC, but the general concept is that such a computer has a small set of simple and general instructions rather than a large set of complex and specialized ones. Another common RISC trait is a load/store architecture, in which memory is accessed through specific instructions rather than as a part of most instructions. Although a number of computers from the 1960s and '70s have been identified as forerunners of RISC, the modern concept dates to the 1980s. In particular, two projects at Stanford University and the University of California, Berkeley are most associated with the popularization of the concept. Stanford's MIPS would go on to be commercialized as the successful MIPS architecture, while Berkeley's RISC gave its name to the entire concept and was commercialized as the SPARC.
Another success from this era was IBM's effort that eventually led to the IBM POWER instruction set architecture, PowerPC, and Power ISA. As these projects matured, a wide variety of similar designs flourished in the late 1980s and early 1990s, representing a major force in the Unix workstation market as well as in embedded processors in laser printers and similar products; the many varieties of RISC designs include ARC, Alpha, Am29000, ARM, Atmel AVR, Blackfin, i860, i960, M88000, MIPS, PA-RISC, Power ISA, RISC-V, SuperH, and SPARC. In the 21st century, the use of ARM architecture processors in smartphones and tablet computers such as the iPad and Android devices provided a wide user base for RISC-based systems. RISC processors are also used in supercomputers such as Summit, which, as of November 2018, is the world's fastest supercomputer as ranked by the TOP500 project. Alan Turing's 1946 Automatic Computing Engine design had many of the characteristics of a RISC architecture, and a number of systems going back to the 1960s have been credited as the first RISC architecture based on their use of the load/store approach.
The term RISC was coined by David Patterson of the Berkeley RISC project, although somewhat similar concepts had appeared before. The CDC 6600, designed by Seymour Cray in 1964, used a load/store architecture with only two addressing modes and 74 operation codes, with the basic clock cycle being 10 times faster than the memory access time. Due to its optimized load/store architecture, Jack Dongarra says that the CDC 6600 can be considered a forerunner of modern RISC systems, although a number of other technical barriers needed to be overcome for the development of a modern RISC system. Michael J. Flynn views the first RISC system as the IBM 801 design, begun in 1975 by John Cocke and completed in 1980. The 801 was produced in single-chip form as the IBM ROMP in 1981, the name standing for 'Research OPD Micro Processor'. As the name implies, this CPU was designed for "mini" tasks; it was used in the IBM RT PC in 1986, which turned out to be a commercial failure, but the 801 inspired several research projects, including new ones at IBM that would lead to the IBM POWER instruction set architecture.
The most public RISC designs were the results of university research programs run with funding from the DARPA VLSI Program. The VLSI Program, practically unknown today, led to a huge number of advances in chip design and computer graphics. The Berkeley RISC project started in 1980 under the direction of David Patterson and Carlo H. Sequin. Berkeley RISC was based on gaining performance through the use of pipelining and an aggressive use of a technique known as register windowing. In a traditional CPU, one has a small number of registers, and a program can use any register at any time. In a CPU with register windows, there is a huge number of registers, e.g. 128, but programs can only use a small number of them, e.g. eight, at any one time. A program that limits itself to eight registers per procedure can make fast procedure calls: the call moves the window "down" by eight, to the set of eight registers used by that procedure, and the return moves the window back. The Berkeley RISC project delivered the RISC-I processor in 1982.
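The window-sliding mechanism described above can be sketched in a few lines. This is a simplified illustration, not a model of any real chip: the class name and sizes are hypothetical, and real SPARC-style windows additionally overlap so that adjacent procedures can pass parameters through shared registers.

```python
# Illustrative sketch of register windowing: a large physical register
# file of which each procedure sees only a small window. Names and sizes
# are hypothetical; real designs overlap windows for parameter passing.
class RegisterFile:
    def __init__(self, total=128, window=8):
        self.regs = [0] * total      # the large physical register file
        self.window = window         # registers visible per procedure
        self.base = 0                # starting index of the current window

    def call(self):
        # A procedure call just slides the window "down" by one window
        # size; nothing needs to be saved to memory.
        self.base += self.window

    def ret(self):
        # Returning slides the window back, restoring the caller's view.
        self.base -= self.window

    def read(self, r):
        return self.regs[self.base + r]

    def write(self, r, value):
        self.regs[self.base + r] = value


rf = RegisterFile()
rf.write(0, 42)    # caller stores a value in its r0
rf.call()          # callee gets a fresh window...
rf.write(0, 7)     # ...so its r0 is a different physical register
rf.ret()
print(rf.read(0))  # caller's r0 is intact: prints 42
```

The point of the technique is visible in the last lines: the call/return pair costs only an index adjustment, whereas a conventional design would have to spill and reload registers to and from memory.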
Consisting of only 44,420 transistors, RISC-I had just 32 instructions, yet it outperformed any other single-chip design. It was followed by the 40,760-transistor, 39-instruction RISC-II in 1983, which ran over three times as fast as RISC-I. The MIPS project grew out of a graduate course taught by John L. Hennessy at Stanford University in 1981; it resulted in a functioning system in 1983 and could run simple programs by 1984. The MIPS approach emphasized an aggressive clock cycle and the use of the pipeline, making sure it could be run as "full" as possible. The MIPS system was followed by the MIPS-X, and in 1984 Hennessy and his colleagues formed MIPS Computer Systems; the commercial venture resulted in a new architecture, also called MIPS, and the R2000 microprocessor in 1985. In the early 1980s, significant uncertainties surrounded the RISC concept and it was unclear whether it could have a commercial future, but by the mid-1980s the concepts had matured enough to be seen as commercially viable. In 1986 Hewlett-Packard started using an early implementation of its PA-RISC in some of its computers.
In the meantime, the Berkeley RISC effort had become so well known that it became the name for the entire concept, and in 1987 Sun Microsystems began shipping systems with the SPARC processor.
S-Video is a signaling standard for standard-definition video (480i or 576i). By separating the brightness and color signals, it achieves better image quality than composite video, but has lower color resolution than component video. Standard analog television signals go through several processing steps on their way to being broadcast, each of which discards information and lowers the quality of the resulting images. The image is captured in RGB form and processed into three signals known as YPbPr. The first of these signals is called Y, created from all three original signals based on a formula that produces the overall brightness of the image, or luma. This signal matches a traditional black-and-white television signal, and the Y/C method of encoding was key to offering backward compatibility. Once the Y signal is produced, it is subtracted from the blue signal to produce Pb and from the red signal to produce Pr. To recover the original RGB information for display, the signals are mixed with the Y to produce the original blue and red, and the sum of those is then mixed with the Y to recover the green.
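The encode/decode round trip described above can be sketched numerically. As an assumption for illustration, the luma weights below are the Rec. 601 values (0.299, 0.587, 0.114); the simple unscaled differences B − Y and R − Y stand in for Pb and Pr, whereas real standards also scale them.

```python
# Sketch of the Y/Pb/Pr derivation described above. The luma weights are
# the Rec. 601 values, used here as an illustrative assumption; the
# color-difference signals are left unscaled for simplicity.
KR, KG, KB = 0.299, 0.587, 0.114   # weights sum to 1

def encode(r, g, b):
    y = KR * r + KG * g + KB * b   # overall brightness (luma)
    pb = b - y                     # blue color-difference
    pr = r - y                     # red color-difference
    return y, pb, pr

def decode(y, pb, pr):
    b = y + pb                     # recover blue
    r = y + pr                     # recover red
    g = (y - KR * r - KB * b) / KG # green falls out of the luma formula
    return r, g, b

r, g, b = decode(*encode(0.25, 0.50, 0.75))
print(round(r, 6), round(g, 6), round(b, 6))   # → 0.25 0.5 0.75
```

Note that green never needs its own signal: because the three weights sum to 1, it can always be reconstructed from Y and the two color differences.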
A signal with three components is no easier to broadcast than the original three-signal RGB, so additional processing is required. The first step is to combine the Pb and Pr to form the C signal, for chrominance; the phase and amplitude of C represent the two original signals. This signal is then bandwidth-limited to comply with requirements for broadcasting, and the resulting Y and C signals are mixed together to produce composite video. To play back composite video, the Y and C signals must be separated, and this is difficult to do without adding artifacts. Each of these steps is subject to unavoidable loss of quality. To retain quality in the final image, it is desirable to eliminate as many of the encoding/decoding steps as possible. S-Video is an approach to this problem: it eliminates the final mixing of C with Y and the subsequent separation at playback time. The S-Video cable carries video using two synchronized signal-and-ground pairs, termed Y and C. Y is the luma signal, which carries the luminance – or black-and-white portion – of the picture, including synchronization pulses.
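The claim that "the phase and amplitude of C represent the two original signals" is quadrature modulation, and can be demonstrated numerically. The subcarrier here is an arbitrary sampled sine wave, not an NTSC or PAL carrier; the sample count is a made-up round number for illustration.

```python
import math

# Illustrative quadrature-modulation sketch: two color-difference values
# share one chroma waveform C as the amplitude and phase of a subcarrier.
# The sample count is arbitrary, not tied to any broadcast standard.
N = 1000                     # samples per subcarrier cycle

def modulate(pb, pr):
    # C(t) = Pb*sin(wt) + Pr*cos(wt): one waveform carries both values.
    return [pb * math.sin(2 * math.pi * k / N) +
            pr * math.cos(2 * math.pi * k / N) for k in range(N)]

def demodulate(c):
    # Synchronous detection: multiply by the reference carriers and
    # average over a whole cycle; the cross terms average to zero.
    pb = 2 * sum(c[k] * math.sin(2 * math.pi * k / N) for k in range(N)) / N
    pr = 2 * sum(c[k] * math.cos(2 * math.pi * k / N) for k in range(N)) / N
    return pb, pr

pb, pr = demodulate(modulate(0.3, -0.2))
print(round(pb, 6), round(pr, 6))   # → 0.3 -0.2
```

This also shows why playback separation is delicate: recovering Pb and Pr requires a reference carrier exactly in phase with the one used at the encoder, which is what the color burst in a real signal provides.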
C is the chroma signal, which carries the color information of the video. The luminance signal carries horizontal and vertical sync pulses in the same way as a composite video signal. Luma is a signal carrying luminance after gamma correction, and is termed "Y" because of the similarity to the lower-case Greek letter gamma. In composite video, the luma and chroma signals co-exist on different frequencies. To achieve this, the luminance signal must be low-pass filtered. As S-Video maintains the two as separate signals, such detrimental low-pass filtering of the luminance is unnecessary, although the chrominance signal still has limited bandwidth relative to component video. Compared with component video, which carries the identical luminance signal but separates the color-difference signals into Cb/Pb and Cr/Pr, the color resolution of S-Video is limited by the chrominance modulation on a subcarrier frequency of 3.58 or 4.43 megahertz, depending on the standard. This difference is meaningless on home videotape systems, as the chrominance is already severely constrained by both VHS and Betamax.
Carrying the color information as one signal means that the color has to be encoded in some way, in accordance with NTSC, PAL, or SECAM, depending on the applicable local standard. As a result, S-Video suffers from low color resolution: NTSC S-Video color resolution is 120 lines horizontal, versus 250 lines horizontal for the Rec. 601-encoded signal of a DVD, or 30 lines horizontal for standard VCRs. In many European Union countries, S-Video was less common because of the dominance of SCART connectors, which are present on most existing televisions. It is possible for a player to output S-Video over SCART, but many televisions' SCART connectors are not wired to accept it, in which case the display shows only a monochrome image. It is sometimes possible to modify the SCART adapter cable to make it work. Some game consoles also supported S-Video; early consoles came with RF adapters and the uncommon composite video on the classic RCA-type video jack. Instead of S-Video, consoles like the GameCube had RGB output. In the US and some other NTSC countries, S-Video was provided on some video equipment, including most televisions and game consoles.
The primary exceptions were Beta VCRs. The European preference for RGB video stems from the fact that the RGB quality of most retro computers and consoles is better than S-Video. The Atari 800 introduced separate chroma/luma output in late 1979, with the signals carried on a 5-pin, 180-degree DIN connector socket; Atari did not, however, sell a monitor for its 8-bit computer line. The Commodore 64, released in 1982, offers separate chroma and luma signals using a different connector. Although Commodore Business Machines did not use the term "S-Video", as the standard did not formally exist until 1987, a simple adapter connects the computer's "LCA" 8-pin DIN socket to an S-Video display, or an S-Video device to the Commodore 1702 monitor's LCA jacks. The four-pin mini-DIN connector is the most common of several S-Video connector types. The same mini-DIN connector is used in the Apple Desktop Bus for Macintosh computers, and the two cable types can be interchanged. Other connector variants include seven-pin locking "dub" connectors used on many professional S-VHS machines, and dual "Y" and "C" BNC connectors used fo
JPEG is a commonly used method of lossy compression for digital images, particularly for those images produced by digital photography. The degree of compression can be adjusted, allowing a selectable tradeoff between storage size and image quality. JPEG typically achieves 10:1 compression with little perceptible loss in image quality. JPEG compression is used in a number of image file formats. JPEG/Exif is the most common image format used by digital cameras and other photographic image capture devices; these format variations are often not distinguished and are simply called JPEG. The term "JPEG" is an initialism for the Joint Photographic Experts Group, which created the standard. The MIME media type for JPEG is image/jpeg, except in older Internet Explorer versions, which provide a MIME type of image/pjpeg when uploading JPEG images. JPEG files have a filename extension of .jpg or .jpeg. JPEG/JFIF supports a maximum image size of 65,535×65,535 pixels, hence up to 4 gigapixels for an aspect ratio of 1:1. "JPEG" stands for Joint Photographic Experts Group, the name of the committee that created the JPEG standard and other still picture coding standards.
The "Joint" stood for ISO TC97 WG8 and CCITT SGVIII. In 1987, ISO TC 97 became ISO/IEC JTC 1 and, in 1992, CCITT became ITU-T. On the JTC 1 side, JPEG is one of two sub-groups of ISO/IEC Joint Technical Committee 1, Subcommittee 29, Working Group 1 – titled Coding of still pictures. On the ITU-T side, ITU-T SG16 is the respective body. The original JPEG Group was organized in 1986, issuing the first JPEG standard in 1992, which was approved in September 1992 as ITU-T Recommendation T.81 and, in 1994, as ISO/IEC 10918-1. The JPEG standard specifies the codec, which defines how an image is compressed into a stream of bytes and decompressed back into an image, but not the file format used to contain that stream; the Exif and JFIF standards define the commonly used file formats for interchange of JPEG-compressed images. JPEG standards are formally named Information technology – Digital compression and coding of continuous-tone still images. ISO/IEC 10918 consists of several parts; separately, Ecma International TR/98 specifies the JPEG File Interchange Format.
The JPEG compression algorithm operates at its best on photographs and paintings of realistic scenes with smooth variations of tone and color. For web usage, where reducing the amount of data used for an image is important for responsive presentation, JPEG's compression benefits make it popular. JPEG/Exif is also the most common format saved by digital cameras. However, JPEG is not well suited for line drawings and other textual or iconic graphics, where the sharp contrasts between adjacent pixels can cause noticeable artifacts; such images are better saved in a lossless graphics format such as TIFF, GIF, PNG, or a raw image format. The JPEG standard includes a lossless coding mode, but the typical use of JPEG is as a lossy compression method, which reduces image fidelity and makes it inappropriate for exact reproduction of imaging data. JPEG is also not well suited to files that will undergo multiple edits, as some image quality is lost each time the image is recompressed, particularly if the image is cropped or shifted, or if encoding parameters are changed – see digital generation loss for details.
To prevent image information loss during sequential and repetitive editing, the first edit can be saved in a lossless format, subsequently edited in that format, and finally published as JPEG for distribution. JPEG uses a lossy form of compression based on the discrete cosine transform (DCT); this mathematical operation converts the source image from the spatial domain into the frequency domain. A perceptual model based loosely on the human psychovisual system discards high-frequency information, i.e. sharp transitions in intensity and color hue. In the transform domain, the process of reducing information is called quantization. In simpler terms, quantization is a method for optimally reducing a large number scale into a smaller one, and the transform domain is a convenient representation of the image because the high-frequency coefficients, which contribute less to the overall picture than other coefficients, are characteristically small values with high compressibility. The quantized coefficients are then sequenced and losslessly packed into the output bitstream.
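The transform-and-quantize steps described above can be sketched on a single 8×8 block. This is a toy illustration, not the JPEG codec: the quantization step size below is made up, whereas real JPEG uses standard per-frequency tables scaled by the quality setting, plus level shifting and entropy coding.

```python
import math

# Toy sketch of JPEG's transform-and-quantize pipeline on one 8x8 block.
# The uniform quantization step is an invented value for illustration;
# real JPEG uses standard per-coefficient tables.
N = 8

def alpha(u):
    # Normalization factors of the orthonormal DCT-II
    return math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)

def dct2(block):
    # 2-D DCT-II: spatial samples -> frequency-domain coefficients
    return [[alpha(u) * alpha(v) * sum(
                block[i][j]
                * math.cos((2 * i + 1) * u * math.pi / (2 * N))
                * math.cos((2 * j + 1) * v * math.pi / (2 * N))
                for i in range(N) for j in range(N))
             for v in range(N)] for u in range(N)]

def quantize(coeffs, step=16):
    # Coarse uniform quantizer: the small high-frequency coefficients
    # collapse to zero, which is where the compression comes from.
    return [[round(c / step) for c in row] for row in coeffs]

# A smooth horizontal ramp: the kind of content JPEG handles well.
block = [[8 * j for j in range(N)] for i in range(N)]
q = quantize(dct2(block))
zeros = sum(row.count(0) for row in q)
print(zeros)   # most of the 64 quantized coefficients are zero
```

For this smooth block almost all of the energy lands in a handful of low-frequency coefficients, so after quantization the block is dominated by zeros that a run-length/entropy coder can pack very compactly.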
Nearly all software implementations of JPEG permit user control over the compression ratio, allowing the user to trade off picture quality for smaller file size. In embedded applications, the parameters are often fixed for the application. The compression method is lossy, meaning that some original image information is lost and cannot be restored, possibly affecting image quality. There is an optional lossless mode defined in the JPEG standard; however, this mode is not widely supported in products. There is also an interlaced progressive JPEG format, in which data is compressed in multiple passes of progressively higher detail; this is ideal for large images that will be displayed while downloading over a slow connection, allowing a reasonable preview after receiving only a portion of the data. However, support for progressive JPEGs is not universal; when progressive JPEGs are received by programs that do not support them (such
The Philips CD-i is an interactive multimedia CD player developed and marketed by the Dutch company Philips, which supported it from December 1991 to late 1998. It was created to provide more functionality than an audio CD player or game console, but at a lower price than a personal computer with a CD-ROM drive; the cost savings were due to the lack of a floppy drive, keyboard, and monitor, and to less operating system software. "CD-i" also refers to the multimedia Compact Disc standard used by the CD-i console, known as the Green Book, co-developed by Philips and Sony. In addition to games, multimedia reference titles were produced, such as interactive encyclopedias and museum tours, which were popular before public Internet access was widespread. The CD-i was one of the earliest game systems to implement Internet features, including subscriptions, web browsing, downloading, e-mail, and online play. This was facilitated by an additional hardware modem and "CD-Online" disc, which Philips released in Britain in 1995 for $150 US.
Development of the CD-i format began in 1984, and it was first publicly announced in 1986. The first Philips CD-i player, released in 1991 and priced around US$1,000, was capable of playing interactive CD-i discs, Audio CDs, CD+G, Karaoke CDs, Photo CDs, and Video CDs, though the latter required an optional "Digital Video Card" to provide MPEG-1 decoding. Philips licensed the CD-i format to other manufacturers, and there were CD-i players made by Sony under the "Intelligent Discman" brand. Philips marketed the CD-i as a "home entertainment system" in Europe, but more as a games and educational machine in the U.S. The CD-i was abandoned by 1996 and was a commercial failure, estimated to have lost Philips as much as one billion U.S. dollars in the American market. Philips at first marketed the CD-i as a family entertainment product and avoided mentioning video games so as not to compete against game consoles. Early software releases focused on educational and self-improvement titles, with only a few games, many of them adaptations of board games such as Connect Four.
However, the system was handily beaten in the market for multimedia devices by cheap low-end PCs, and games became its best-selling software. By 1993 Philips encouraged MS-DOS and console developers to create games, introduced a $250 peripheral with more memory and support for full-motion video, and added to new consoles a second controller port for multiplayer games. The attempts to develop a foothold in the games market were unsuccessful, as the system was designed as a multimedia player and thus was under-powered compared to other gaming platforms on the market in most respects. CD-i games notably included entries in popular Nintendo franchises, although those games were not developed by Nintendo. A Mario game and three Legend of Zelda games were released: Zelda: The Wand of Gamelon, Link: The Faces of Evil, and Zelda's Adventure. Nintendo and Philips had established an agreement to co-develop a CD-ROM enhancement for the Super Nintendo Entertainment System after licensing disagreements with Nintendo's previous partner, Sony.
While Philips and Nintendo never released such a CD-ROM add-on, Philips was still contractually allowed to continue using Nintendo characters. Applications were developed using authoring software produced by OptImage, including OptImage's Balboa runtime environment and MediaMogul. A second company that produced authoring software was Script Systems. Philips released several versions of popular TV game shows for the CD-i, including versions of Jeopardy!, Name That Tune, and two versions of The Joker's Wild; all of these CD-i games in North America had Charlie O'Donnell as announcer. The Netherlands released its version of Lingo on the CD-i in 1994. In 1993, American musician Todd Rundgren created the first music-only interactive CD, No World Order, for the CD-i; this application allows the user to arrange the whole album in their own personal way with over 15,000 points of customization. The CD-i also has a series of learning games targeted at children from infancy to adolescence. Those intended for a younger audience included Busytown, The Berenstain Bears, and various others with vivid cartoon-like settings accompanied by music and logic puzzles.
Although extensively marketed by Philips, notably via infomercials, consumer interest in CD-i titles remained low. By 1994, sales of CD-i systems had begun to slow, and in 1998 the product line was dropped. Philips had by then already sold its gaming subsidiary, Philips Media BV, to the French publisher Infogrames in 1997. The Dutch eurodance duo 2 Unlimited released a CD-i compilation album in 1994 called "Beyond Limits", which contains standard CD tracks as well as CD-i-exclusive media on the disc. A large number of full-motion video titles such as Dragon's Lair and Mad Dog McCree appeared on the system. One of these, Burn:Cycle, is considered one of the stronger CD-i titles and was later ported to PC. The February 1994 issue of Electronic Gaming Monthly remarked that the CD-i's full-motion video capabilities were its strongest point, but that nearly all of its best software required the MPEG upgrade card. By mid-1996 the U.S. market for CD-i software had dried up and Philips had given up on releasing titles there, but it continued to publish CD-i games in Europe, where the console still held some popularity.
YCbCr, Y′CbCr, or Y Pb/Cb Pr/Cr, also written as YCBCR or Y′CBCR, is a family of color spaces used as a part of the color image pipeline in video and digital photography systems. Y′ is the luma component and CB and CR are the blue-difference and red-difference chroma components. Y′ is distinguished from Y, luminance; the prime indicates that light intensity is nonlinearly encoded based on gamma-corrected RGB primaries. Y′CbCr color spaces are defined by a mathematical coordinate transformation from an associated RGB color space. If the underlying RGB color space is absolute, the Y′CbCr color space is an absolute color space as well. Cathode ray tube displays are driven by red, green, and blue voltage signals, but these RGB signals are not efficient as a representation for storage and transmission, since they have a lot of redundancy. YCbCr and Y′CbCr are a practical approximation to color processing and perceptual uniformity, where the primary colors are processed into perceptually meaningful information.
By doing this, subsequent image/video processing and storage can perform operations and introduce errors in perceptually meaningful ways. Y′CbCr is used to separate out a luma signal that can be stored with high resolution or transmitted at high bandwidth, and two chroma components that can be bandwidth-reduced, compressed, or otherwise treated separately for improved system efficiency. One practical example is decreasing the bandwidth or resolution allocated to "color" compared to "black and white", since humans are more sensitive to the black-and-white information; this is called chroma subsampling. YCbCr is sometimes abbreviated to YCC. Y′CbCr is called YPbPr when used for analog component video, although the term Y′CbCr is commonly used for both systems, with or without the prime. Y′CbCr is often confused with the YUV color space, and the terms YCbCr and YUV are used interchangeably, leading to some confusion; the main difference is that YUV is analog and YCbCr is digital. YPbPr signals are created from the corresponding gamma-adjusted RGB source using three defined constants KR, KG, and KB as follows:

Y′ = KR·R′ + KG·G′ + KB·B′
PB = (1/2)·(B′ − Y′)/(1 − KB)
PR = (1/2)·(R′ − Y′)/(1 − KR)

where KR, KG, and KB are ordinarily derived from the definition of the corresponding RGB space and are required to satisfy KR + KG + KB = 1.
Here, the prime (′) symbols denote gamma correction. The resulting luma value will have a nominal range from 0 to 1, and the chroma values will have a nominal range from −0.5 to +0.5. The reverse conversion process can be derived by inverting the above equations. When representing the signals in digital form, the results are scaled and rounded, and offsets are added. For example, the scaling and offset applied to the Y′ component per specification results in the value of 16 for black and the value of 235 for white when using an 8-bit representation; the standard has 8-bit digitized versions of CB and CR scaled to a different range of 16 to 240. Rescaling by the fraction (235 − 16)/(240 − 16) = 219/224 is sometimes required when doing color matrixing or processing in YCbCr space, resulting in quantization distortions when the subsequent processing is not performed using higher bit depths. The scaling that results in the use of a smaller range of digital values than what might appear to be desirable for representation of the nominal range of the input data allows for some "overshoot" and "undershoot" during processing without necessitating undesirable clipping.
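The conversion plus the 8-bit scaling and offsets described above can be combined into a short sketch. As an assumption for illustration, the constants below are the BT.601 values (KR = 0.299, KB = 0.114); the function name is hypothetical.

```python
# Sketch of the studio-swing 8-bit quantization described above, using
# the BT.601 constants (KR = 0.299, KB = 0.114) as an assumed example.
KR, KB = 0.299, 0.114
KG = 1 - KR - KB          # the three constants must sum to 1

def rgb_to_ycbcr8(r, g, b):
    # r, g, b are gamma-corrected values in the nominal range 0..1
    y  = KR * r + KG * g + KB * b       # luma, nominal 0..1
    pb = 0.5 * (b - y) / (1 - KB)       # chroma, nominal -0.5..+0.5
    pr = 0.5 * (r - y) / (1 - KR)
    # Scale and offset into the 8-bit ranges: Y' spans 16..235,
    # Cb/Cr span 16..240 and are centered on 128.
    return (round(16 + 219 * y),
            round(128 + 224 * pb),
            round(128 + 224 * pr))

print(rgb_to_ycbcr8(0, 0, 0))   # black → (16, 128, 128)
print(rgb_to_ycbcr8(1, 1, 1))   # white → (235, 128, 128)
```

The black and white endpoints land exactly on the 16 and 235 code values mentioned above, with the chroma components at their neutral midpoint of 128.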
This "head-room" and "toe-room" can also be used for extension of the nominal color gamut, as specified by xvYCC. The value 235 accommodates a maximum black-to-white overshoot of 255 − 235 = 20, or 20/219 ≈ 9.1%, larger than the theoretical maximum overshoot of about 8.9% of the maximum step. The toe-room is smaller, allowing only 16/219 ≈ 7.3% overshoot, less than t
A motherboard is the main printed circuit board in general-purpose computers and other expandable systems. It holds, and allows communication between, many of the crucial electronic components of a system, such as the central processing unit and memory, and provides connectors for other peripherals. Unlike a backplane, a motherboard contains significant sub-systems such as the central processor, the chipset's input/output and memory controllers, interface connectors, and other components integrated for general-purpose use and applications. Motherboard specifically refers to a PCB with expansion capability; as the name suggests, this board is referred to as the "mother" of all components attached to it, which include peripherals, interface cards, and daughtercards: sound cards, video cards, network cards, hard drives or other forms of persistent storage, and so on. The term mainboard is applied to devices with a single board and no additional expansions or capability, such as controlling boards in laser printers, washing machines, mobile phones, and other embedded systems with limited expansion abilities.
Prior to the invention of the microprocessor, the digital computer consisted of multiple printed circuit boards in a card-cage case with components connected by a backplane, a set of interconnected sockets. In old designs, copper wires were the discrete connections between card connector pins, but printed circuit boards soon became the standard practice. The central processing unit and peripherals were housed on individual printed circuit boards, which were plugged into the backplane. The ubiquitous S-100 bus of the 1970s is an example of this type of backplane system. The most popular computers of the 1980s, such as the Apple II and IBM PC, had published schematic diagrams and other documentation which permitted rapid reverse-engineering and third-party replacement motherboards. Intended for building new computers compatible with the exemplars, many motherboards offered additional performance or other features and were used to upgrade the manufacturer's original equipment. During the late 1980s and early 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard.
In the late 1980s, personal computer motherboards began to include single ICs capable of supporting a set of low-speed peripherals: keyboard, floppy disk drive, serial ports, and parallel ports. By the late 1990s, many personal computer motherboards included consumer-grade embedded audio, video, and networking functions without the need for any expansion cards at all. Business PCs and servers were more likely to need expansion cards, either for more robust functions or for higher speeds. Laptop and notebook computers that were developed in the 1990s integrated the most common peripherals; such motherboards often had no upgradeable components, a trend that would continue as smaller systems were introduced after the turn of the century. Memory, network controllers, the power source, and storage would be integrated into some systems. A motherboard provides the electrical connections by which the other components of the system communicate. Unlike a backplane, it also contains the central processing unit and hosts other subsystems and devices.
A typical desktop computer has its microprocessor, main memory, and other essential components connected to the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices may be attached to the motherboard as plug-in cards or via cables. An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components; this chipset determines, to an extent, the capabilities of the motherboard. Modern motherboards include: sockets in which one or more microprocessors may be installed (in the case of CPUs in ball grid array packages, such as the VIA C3, the CPU is directly soldered to the motherboard); memory slots into which the system's main memory is installed, in the form of DIMM modules containing DRAM chips; a chipset which forms an interface between the CPU's front-side bus, main memory, and peripheral buses; non-volatile memory chips containing the system's firmware or BIOS; a clock generator which produces the system clock signal to synchronize the various components; slots for expansion cards; and power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, main memory, and expansion cards.
As of 2007, some graphics cards require more power than the motherboard can provide, and thus dedicated connectors have been introduced to attach them directly to the power supply. Motherboards also provide connectors for hard drives, typically SATA only; disk drives also connect to the power supply. Additionally, nearly all motherboards include logic and connectors to support commonly used input devices, such as USB for mouse devices and keyboards. Early personal computers
The RCA Corporation was a major American electronics company, founded as the Radio Corporation of America in 1919. It was initially a wholly owned subsidiary of General Electric. An innovative and progressive company, RCA was the dominant electronics and communications firm in the United States for over five decades. RCA was at the forefront of the mushrooming radio industry in the early 1920s, as a major manufacturer of radio receivers and the exclusive manufacturer of the first superheterodyne models. RCA also created the first American radio network, the National Broadcasting Company, and the company was a pioneer in the introduction and development of television, both black-and-white and color. During this period, RCA was identified with the leadership of David Sarnoff, who was general manager at the company's founding, became president in 1930, and remained active, as chairman of the board, until the end of 1969. RCA's seemingly impregnable stature began to weaken in the mid-1970s, as it attempted to diversify and expand into a multifaceted conglomerate.
The company suffered enormous financial losses in the mainframe computer industry and in other failed projects such as the CED videodisc. In 1986, RCA was reacquired by General Electric, which over the next few years liquidated most of the corporation's assets. Today, RCA exists as a brand name only. RCA originated as a reorganization of the Marconi Wireless Telegraph Company of America. In 1897, the Wireless Telegraph and Signal Company was founded in London to promote the radio inventions of Guglielmo Marconi; as part of a worldwide expansion, American Marconi was organized in 1899 as a subsidiary company, holding the rights to use the Marconi patents in the United States and Cuba. In 1912 it took over the assets of the bankrupt United Wireless Telegraph Company, and from that point forward it was the dominant radio communications company in the United States. With the entry of the United States into World War I in April 1917, the government took over most civilian radio stations to use them for the war effort.
Although the overall U.S. government plan was to restore civilian ownership of the seized radio stations once the war ended, many Navy officials hoped to retain a monopoly on radio communication after the war. Defying instructions to the contrary, the Navy began purchasing large numbers of stations outright. With the conclusion of the conflict, Congress turned down the Navy's efforts to gain peacetime control of the radio industry and instructed the Navy to make plans to return the commercial stations it controlled, including the ones it had improperly purchased, to their original owners. Due to national security considerations, the Navy was concerned about returning the high-powered international stations to American Marconi, since a majority of its stock was in foreign hands and the British largely controlled the international undersea cables. This concern was increased by the announcement in late 1918 of the formation of the Pan-American Wireless Telegraph and Telephone Company, a joint venture between American Marconi and the Federal Telegraph Company, with plans to set up service between the United States and South America.
The Navy had installed a high-powered Alexanderson alternator, built by General Electric, at the American Marconi transmitter site in New Brunswick, New Jersey. It proved to be superior for transatlantic transmissions to the spark transmitters traditionally used by the Marconi companies. Marconi officials were so impressed by the capabilities of the Alexanderson alternators that they began making preparations to adopt them as their standard transmitters for international communication. A tentative plan made with General Electric proposed that over a two-year period the Marconi companies would purchase most of GE's alternator production. However, this proposal was met with disapproval, on national security grounds, by the U.S. Navy, which was concerned that it would guarantee British domination of international radio communication. The Navy, claiming it was acting with the support of President Wilson, looked for an alternative that would result in an "all-American" company taking over the American Marconi assets.
In April 1919 two naval officers, Admiral H. G. Bullard and Commander S. C. Hooper, met with GE's president, Owen D. Young, asking that he suspend the pending alternator sales to the Marconi companies. This move would leave General Electric without a buyer for its transmitters, so the officers proposed that GE purchase American Marconi and use the assets to form its own radio communications subsidiary. Young consented to this proposal and, effective November 20, 1919, transformed American Marconi into the Radio Corporation of America; the new company was promoted as a patriotic gesture. RCA's incorporation papers required that its officers be U.S. citizens, with a majority of its stock held by Americans. RCA retained most of the American Marconi staff, although Owen Young became the new company's head as chairman of the board. Former American Marconi vice president and general manager E. J. Nally became RCA's first president. Nally's term ended on December 31, 1922, and he was succeeded the next day by Major General James G. Harbord.