A parallel port is a type of interface found on computers for connecting peripherals. The name refers to the way the data is sent: parallel ports send multiple bits of data at once, as opposed to serial interfaces, which send bits one at a time. To do this, parallel ports require multiple data lines in their cables and port connectors and tend to be larger than contemporary serial ports, which only require one data line. There are many types of parallel ports, but the term has become most closely associated with the printer port or Centronics port found on most personal computers from the 1970s through the 2000s. It was an industry de facto standard for many years and was standardized as IEEE 1284 in the late 1990s, which defined the Enhanced Parallel Port and Extended Capability Port bi-directional versions. Today, the parallel port interface is virtually non-existent because of the rise of Universal Serial Bus devices, along with network printing using Ethernet and Wi-Fi connected printers. The parallel port interface was known as the Parallel Printer Adapter on IBM PC-compatible computers. It was designed to operate printers that used IBM's eight-bit extended ASCII character set to print text, but could also be used to adapt other peripherals.
Graphical printers, along with a host of other devices, have been designed to communicate with the system. An Wang, Robert Howard and Prentice Robinson began development of a low-cost printer at Centronics, a subsidiary of Wang Laboratories that produced specialty computer terminals. The printer used the dot matrix printing principle, with a print head consisting of a vertical row of seven metal pins connected to solenoids. When power was applied to the solenoids, the pin was pulled forward to strike the paper and leave a dot. To make a complete character glyph, the print head would receive power to specified pins to create a single vertical pattern, then move to the right by a small amount, and the process would repeat. On their original design, a typical glyph was printed as a matrix seven high and five wide, while the "A" models used a print head with 9 pins and formed glyphs that were 9 by 7. This left the problem of sending the ASCII data to the printer. While a serial port does so with the minimum of pins and wires, it requires the device to buffer up the data as it arrives bit by bit and turn it back into multi-bit values.
A parallel port makes this simpler. In addition to the seven data pins, the system needed various control pins as well as electrical grounds. Wang happened to have a surplus stock of 20,000 Amphenol 36-pin micro ribbon connectors that were used for one of their early calculators; the interface only required 21 of these pins, and the rest were not connected. The connector has become so associated with Centronics that it is now popularly known as the "Centronics connector". The Centronics Model 101 printer, featuring this connector, was released in 1970. The host sent ASCII characters to the printer using seven of the eight data pins, pulling them high to +5 V to represent a 1. When the data was ready, the host pulled the STROBE pin low, to 0 V. The printer responded by pulling the BUSY line high, printing the character, and then returning BUSY to low again, at which point the host could send another character. Control characters in the data caused other actions, such as CR or EOF. The host could also have the printer automatically start a new line by pulling the AUTOFEED line high and keeping it there.
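The handshake described above can be sketched in a few lines of Python. This is a minimal simulation for illustration only, not real parallel-port driver code; the SimulatedPrinter class and its method names are hypothetical, while the signal names (data pins, STROBE, BUSY) follow the text.

```python
# Minimal simulation of the Centronics-style STROBE/BUSY handshake described above.
# Everything here is an illustrative stand-in; signal names follow the text.

class SimulatedPrinter:
    """Pretends to be the printer on the far end of the cable."""
    def __init__(self):
        self.busy = False
        self.data = 0
        self.output = []

    def strobe(self):
        # Host pulled STROBE low: latch the data pins and go BUSY while "printing".
        self.busy = True
        self.output.append(chr(self.data))
        self.busy = False            # printing done; ready for the next character

def send_text(printer, text):
    for ch in text:
        printer.data = ord(ch) & 0x7F   # place 7-bit ASCII on the data pins (high = +5 V)
        printer.strobe()                # STROBE low tells the printer the data is valid
        while printer.busy:             # host waits for BUSY to return low
            pass

printer = SimulatedPrinter()
send_text(printer, "HELLO\r")
print("".join(printer.output))
```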
The host had to watch the BUSY line to ensure it did not feed data to the printer too quickly, especially given variable-time operations like a paper feed. The printer side of the interface became an industry de facto standard, but manufacturers used various connectors on the system side, so a variety of cables were required. For example, NCR used the 36-pin micro ribbon connector on both ends of the connection, early VAX systems used a DC-37 connector, Texas Instruments used a 25-pin card edge connector and Data General used a 50-pin micro ribbon connector. When IBM implemented the parallel interface on the IBM PC, they used the DB-25F connector at the PC end of the interface, creating the now familiar parallel cable with a DB25M at one end and a 36-pin micro ribbon connector at the other. In theory, the Centronics port could transfer data as fast as 75,000 characters per second; this was far faster than the printer, which averaged about 160 characters per second, meaning the port spent much of its time idle.
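As a back-of-the-envelope check on that idle-time claim, the figures quoted in the text give the following (a small Python sketch; the numbers are the ones stated above, not measurements):

```python
# Rough illustration of why the port sat idle: the interface could move
# characters far faster than the print mechanism could consume them.
port_rate = 75_000      # characters per second the interface could transfer (from the text)
printer_rate = 160      # characters per second a typical printer achieved (from the text)

busy_fraction = printer_rate / port_rate
print(f"Port busy only {busy_fraction:.2%} of the time; idle {1 - busy_fraction:.2%}")
# -> roughly 0.21% busy, about 99.8% idle
```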
The performance was defined by how quickly the host could respond to the printer's BUSY signal asking for more data. To improve performance, printers began incorporating buffers so the host could send them data more rapidly, in bursts. This not only reduced delays due to latency while waiting for the next character to arrive from the host, but also freed the host to perform other operations without causing a loss of performance. Performance was further improved by using the buffer to store several lines and printing in both directions, eliminating the delay while the print head returned to the left side of the page. Such changes more than doubled the performance of an otherwise unchanged printer, as was the case on Centronics models like the 102 and 308. IBM released the IBM Personal Computer in 1981 and included a variant of the Centronics interface; only IBM logo printers could be used with the IBM PC. IBM standardized the parallel cable with a DB25F connector on the PC side and the 36-pin Centronics connector on the printer side.
Vendors soon released printers compatible with the IBM implementation. The original IBM parallel printer adapter
Printed circuit board
A printed circuit board mechanically supports and electrically connects electronic or electrical components using conductive tracks and other features etched from one or more sheet layers of copper laminated onto and/or between sheet layers of a non-conductive substrate. Components are soldered onto the PCB to both electrically connect and mechanically fasten them to it. Printed circuit boards are used in all but the simplest electronic products; they are also used in some electrical products, such as passive switch boxes. Alternatives to PCBs include wire wrap and point-to-point construction, both once popular but now rarely used. PCBs require additional design effort to lay out the circuit, but manufacturing and assembly can be automated. Specialized CAD software is available to do much of the work of layout. Mass-producing circuits with PCBs is cheaper and faster than with other wiring methods, as components are mounted and wired in one operation. Large numbers of PCBs can be fabricated at the same time, and the layout only has to be done once.
PCBs can be made manually in small quantities, with reduced benefits. PCBs can be single-sided, double-sided, or multi-layer. Multi-layer PCBs allow for much higher component density, because circuit traces on the inner layers would otherwise take up surface space between components. The rise in popularity of multilayer PCBs with more than two, and especially with more than four, copper planes was concurrent with the adoption of surface mount technology. However, multilayer PCBs make repair and field modification of circuits much more difficult and often impractical. The world market for bare PCBs exceeded $60.2 billion in 2014. In 2018, the Global Single Sided Printed Circuit Board Market Analysis Report estimated that the PCB market would reach $79 billion by 2024. Before the development of printed circuit boards, electrical and electronic circuits were wired point-to-point on a chassis. The chassis was a sheet metal frame or pan, sometimes with a wooden bottom. Components were attached to the chassis by insulators when the connecting point on the chassis was metal, and their leads were connected directly or with jumper wires by soldering, or sometimes using crimp connectors, wire connector lugs on screw terminals, or other methods.
Circuits were large, bulky and fragile, and production was labor-intensive, so the products were expensive. Development of the methods used in modern printed circuit boards started early in the 20th century. In 1903, a German inventor, Albert Hanson, described flat foil conductors laminated to an insulating board, in multiple layers. Thomas Edison experimented with chemical methods of plating conductors onto linen paper in 1904. Arthur Berry in 1913 patented a print-and-etch method in the UK, and in the United States Max Schoop obtained a patent to flame-spray metal onto a board through a patterned mask. Charles Ducas in 1927 patented a method of electroplating circuit patterns. The Austrian engineer Paul Eisler invented the printed circuit as part of a radio set while working in the UK around 1936. In 1941 a multi-layer printed circuit was used in German magnetic influence naval mines. Around 1943 the USA began to use the technology on a large scale to make proximity fuses for use in World War II. After the war, in 1948, the USA released the invention for commercial use.
Printed circuits did not become commonplace in consumer electronics until the mid-1950s, after the Auto-Sembly process was developed by the United States Army. At around the same time in the UK, work along similar lines was carried out by Geoffrey Dummer at the RRDE. Even as circuit boards became available, the point-to-point chassis construction method remained in common use in industry into at least the late 1960s. Printed circuit boards were introduced to reduce the size and cost of parts of the circuitry. In 1960, a small consumer radio receiver might be built with all its circuitry on one circuit board, but a TV set would contain one or more circuit boards. Predating the printed circuit invention, and similar in spirit, was John Sargrove's 1936–1947 Electronic Circuit Making Equipment, which sprayed metal onto a Bakelite plastic board. The ECME could produce three radio boards per minute. During World War II, the development of the anti-aircraft proximity fuse required an electronic circuit that could withstand being fired from a gun and could be produced in quantity.
The Centralab Division of Globe Union submitted a proposal which met the requirements: a ceramic plate would be screenprinted with metallic paint for conductors and carbon material for resistors, with ceramic disc capacitors and subminiature vacuum tubes soldered in place. The technique proved viable, and the resulting patent on the process, which was classified by the U.S. Army, was assigned to Globe Union. It was not until 1984 that the Institute of Electrical and Electronics Engineers awarded Harry W. Rubinstein the Cledo Brunetti Award for early key contributions to the development of printed components and conductors on a common insulating substrate. Rubinstein was honored in 1984 by his alma mater, the University of Wisconsin-Madison, for his innovations in the technology of printed electronic circuits and the fabrication of capacitors. This invention represents a step in the development of integrated circuit technology, as not only wiring but also passive components were fabricated on the ceramic substrate.
Every electronic component had
The VIA C3 is a family of x86 central processing units for personal computers designed by Centaur Technology and sold by VIA Technologies. The different CPU cores are built following the design methodology of Centaur Technology. In addition to x86 instructions, VIA C3 CPUs contain an undocumented Alternate Instruction Set allowing lower-level access to the CPU and, in some cases, privilege escalation. VIA Cyrix III was renamed VIA C3 with the switch to the advanced "Samuel 2" core; the addition of an on-die L2 cache improved performance somewhat. As it was not built upon Cyrix technology at all, the new name was just a logical step. To reduce power consumption and manufacturing costs, Samuel 2 was produced with 150 nm process technology. The VIA C3 processor continued an emphasis on minimizing power consumption with the next die shrink to a mixed 130/150 nm process. "Ezra" and "Ezra-T" were only new revisions of the "Samuel 2" core, with some minor modifications to the bus protocol of "Ezra-T" to ensure compatibility with Intel's Pentium III "Tualatin" cores.
VIA enjoyed the lowest power usage in the x86 CPU market for several years. Performance, however, fell behind due to the lack of improvements to the design. Uniquely, the retail C3 CPU shipped inside a metal package. The "Nehemiah" was a major core revision. At the time, VIA's marketing efforts did not reflect the changes that had taken place; the company addressed numerous design shortcomings of the older cores, including the half-speed FPU. The number of pipeline stages was increased from 12 to 16 to allow for continued increases in clock speed. Additionally, it implemented the cmov instruction; the Linux kernel refers to this core as the C3-2. It removed 3DNow! instructions in favour of implementing SSE. However, it was still based upon the aging Socket 370, running the single data rate front side bus at just 133 MHz. Because the embedded system marketplace prefers low-power, low-cost CPU designs, VIA began targeting this segment more aggressively, as the C3 fit those traits rather well. Centaur Technology concentrated on adding features attractive to the embedded marketplace.
An example built into the first "Nehemiah" core was the twin hardware random number generators. The "Nehemiah+" revision brought a few more advancements, including a high-performance AES encryption engine along with a notably small ball grid array chip package the size of a US 1 cent coin; when this architecture was marketed it was referred to as the "VIA C5". While slower than x86 CPUs being sold by AMD and Intel, both in absolute terms and on a clock-for-clock basis, VIA's chips are much smaller, cheaper to manufacture and lower power; this makes them attractive in the embedded marketplace, as well as in the mobile sector. This has enabled VIA to continue to scale the frequencies of their chips with each manufacturing process die shrink, while competitive products from Intel have encountered severe thermal management issues, although the newer Intel Core generation of chips runs cooler. The performance gap that used to exist between VIA and competing x86 chips is still wide, but it is starting to narrow.
Some of the design trade-offs made by the VIA design team are worthy of study, as they run contrary to accepted wisdom. Because memory performance is the limiting factor in many benchmarks, VIA processors implement large primary caches, large TLBs and aggressive prefetching, among other enhancements. While these features are not unique to VIA, memory access optimization is one area where they have not dropped features to save die space. Clock frequency is in general terms favored over increasing instructions per cycle. Complex features such as out-of-order instruction execution are deliberately not implemented, because they impact the ability to increase the clock rate, require a lot of extra die space and power, and have little impact on performance in several common application scenarios. The pipeline is arranged to provide one-clock execution of the commonly used register–memory and memory–register forms of x86 instructions. Several commonly used instructions require fewer pipeline clocks than on other x86 processors.
Infrequently used x86 instructions are emulated. This reduces power consumption, and the impact upon the majority of real-world application scenarios is minimized. These design guidelines derive from the original RISC advocates, who stated that a smaller set of instructions, better optimized, would deliver faster overall CPU performance. However, as it makes heavy use of memory operands, both as source and destination, the C3 design itself cannot qualify as RISC. VIA's embedded platform products have been adopted in Nissan's car series, the Lafesta and Presage; these and other high volume industrial applications are starting to generate big profits for VIA, as the small form factor and low power advantages close embedded deals. On the basis of the IDT Centaur acquisition, VIA appears to have come into possession of at least three patents which cover key aspects of processor technology used by Intel. On the basis of the negotiating leverage these patents offered, in 2003 VIA arrived at an agreement with Intel that allowed for a ten-year patent cross license, enabling VIA to continue to design and manufacture x86 compatible CPUs.
VIA was granted a three-year period of grace in which it could continue to use Intel socket infrastructure.
A DIMM or dual in-line memory module comprises a series of dynamic random-access memory integrated circuits. These modules are mounted on a printed circuit board and designed for use in personal computers and servers. DIMMs began to replace SIMMs as the predominant type of memory module as Intel P5-based Pentium processors began to gain market share. While the contacts on SIMMs on both sides are redundant, DIMMs have separate electrical contacts on each side of the module. Another difference is that standard SIMMs have a 32-bit data path, while standard DIMMs have a 64-bit data path. Since Intel's Pentium, many processors have a 64-bit bus width, requiring SIMMs installed in matched pairs in order to populate the data bus; the processor would access the two SIMMs in parallel. DIMMs were introduced to eliminate this disadvantage. Variants of DIMM slots support DDR, DDR2, DDR3 and DDR4 RAM. Common types of DIMMs include the following:

70 to 200 pins:
72-pin SO-DIMM, used for FPM DRAM and EDO DRAM
100-pin DIMM, used for printer SDRAM
144-pin SO-DIMM, used for SDR SDRAM
168-pin DIMM, used for SDR SDRAM
172-pin MicroDIMM, used for DDR SDRAM
184-pin DIMM, used for DDR SDRAM
200-pin SO-DIMM, used for DDR SDRAM and DDR2 SDRAM
200-pin DIMM, used for FPM/EDO DRAM in some Sun workstations and servers

201 to 300 pins:
204-pin SO-DIMM, used for DDR3 SDRAM
214-pin MicroDIMM, used for DDR2 SDRAM
240-pin DIMM, used for DDR2 SDRAM, DDR3 SDRAM and FB-DIMM DRAM
244-pin MiniDIMM, used for DDR2 SDRAM
260-pin SO-DIMM, used for DDR4 SDRAM
260-pin SO-DIMM, with a different notch position than on DDR4 SO-DIMMs, used for UniDIMMs that can carry either DDR3 or DDR4 SDRAM
278-pin DIMM, used for HP high density SDRAM
288-pin DIMM, used for DDR4 SDRAM

On the bottom edge of 168-pin DIMMs there are two notches, and the location of each notch determines a particular feature of the module. The first notch is the DRAM key position, which represents RFU, registered, and unbuffered DIMM types; the second notch is the voltage key position, which represents 5.0 V, 3.3 V, and RFU DIMM types. DDR, DDR2, DDR3 and DDR4 all have different pin counts and different notch positions. As of 2014, DDR4 SDRAM was an emerging type of dynamic random access memory with a high-bandwidth interface, in use since 2013. It is the higher-speed successor to DDR, DDR2 and DDR3. DDR4 SDRAM is neither forward nor backward compatible with any earlier type of random access memory because of different signalling voltages, timings, and other differing factors between the technologies and their implementations. A DIMM's capacity and other operational parameters may be identified with serial presence detect, an additional chip which contains information about the module type and timing, allowing the memory controller to be configured correctly.
The SPD EEPROM connects to the System Management Bus and may also contain thermal sensors. ECC DIMMs are those that have extra data bits which can be used by the system memory controller to detect and correct errors. There are numerous ECC schemes, but the most common is Single Error Correct, Double Error Detect, which uses an extra byte per 64-bit word. ECC modules therefore carry a multiple of 9 instead of a multiple of 8 chips. Sometimes memory modules are designed with two or more independent sets of DRAM chips connected to the same address and data buses; each such set is called a rank. For ranks that share the same slot, only one rank may be accessed at any given time; the other ranks on the module are deactivated for the duration of the operation by having their corresponding CS signals deactivated. DIMMs are being manufactured with up to four ranks per module. Consumer DIMM vendors have begun to distinguish between single and dual ranked DIMMs. After a memory word is fetched, the memory is inaccessible for an extended period of time while the sense amplifiers are charged for access of the next cell.
By interleaving the memory, sequential memory accesses can be performed more rapidly because the sense amplifiers have 3 cycles of idle time for recharging between accesses. DIMMs are referred to as "single-sided" or "double-sided" to describe whether the DRAM chips are located on one or both sides of the module's printed circuit board. However, these terms may cause confusion, as the physical layout of the chips does not relate to how they are logically organized or accessed. JEDEC decided that the terms "dual-sided", "double-sided", or "dual-banked" were not correct when applied to registered DIMMs. Most DIMMs are built with "×4" or "×8" memory chips with nine chips per side. In the case of "×4" registered DIMMs, the data width per side is 36 bits; the memory controller, which requires 72 bits, therefore addresses both sides at the same time. In this case, the two-sided module is single-ranked. For "×8" registered DIMMs, each side is 72 bits wide, so the memory controller only addresses one side at a time. The above example applies to ECC memory that stores 72 bits instead of the more common 64; there would be one extra chip per group of eight, which is not counted here.
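The chip counts and widths above follow from simple arithmetic. The sketch below illustrates it in Python, using only figures given in the text (64 data bits, an extra byte of ECC, nine chips per side); the function names are just bookkeeping.

```python
# Illustrative arithmetic for the DIMM organization described above.

DATA_BITS = 64      # width of a standard (non-ECC) memory word
ECC_BITS = 8        # extra byte per 64-bit word for SECDED-style ECC
CHIPS_PER_SIDE = 9  # typical registered-DIMM layout mentioned above

def chips_for_word(chip_width, ecc):
    """Number of chips needed to cover one memory word."""
    total_bits = DATA_BITS + (ECC_BITS if ecc else 0)  # 72 with ECC, 64 without
    return total_bits // chip_width

def width_per_side(chip_width):
    """Data width contributed by one side of the module."""
    return CHIPS_PER_SIDE * chip_width

print(chips_for_word(8, ecc=False))  # 8 chips of x8 parts for a plain 64-bit word
print(chips_for_word(8, ecc=True))   # 9 chips of x8 parts for a 72-bit ECC word
print(width_per_side(4))             # 36 bits per side -> both sides form one 72-bit rank
print(width_per_side(8))             # 72 bits per side -> each side is a full 72-bit rank
```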
For various technologies, there are certain bus and device clock frequencies that are standa
Hard disk drive
A hard disk drive, hard disk, hard drive, or fixed disk, is an electromechanical data storage device that uses magnetic storage to store and retrieve digital information using one or more rigid rotating disks coated with magnetic material. The platters are paired with magnetic heads arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order and not only sequentially. HDDs are a type of non-volatile storage, retaining stored data when powered off. Introduced by IBM in 1956, HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDDs, though after extensive industry consolidation most units are manufactured by Seagate and Western Digital. HDDs dominate the volume of storage produced for servers.
Though production is growing, sales revenues and unit shipments are declining because solid-state drives have higher data-transfer rates, higher areal storage density, better reliability, and much lower latency and access times. The revenues for SSDs, most of which use NAND flash, exceed those for HDDs. Though SSDs have nearly 10 times higher cost per bit, they are replacing HDDs in applications where speed, power consumption, small size, and durability are important. The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte drive has a capacity of 1,000 gigabytes. Some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and by inbuilt redundancy for error correction and recovery. There is confusion regarding storage capacity, since capacities are stated in decimal gigabytes by HDD manufacturers, whereas some operating systems report capacities in binary gibibytes, which results in a smaller number than advertised.
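For a concrete sense of that decimal-versus-binary discrepancy, the following Python snippet converts an advertised 1 TB (decimal) capacity into the binary units an operating system might report:

```python
# Worked example of the decimal-vs-binary capacity discrepancy described above.
advertised_tb = 1                          # "1 TB" drive as labelled by the manufacturer
bytes_decimal = advertised_tb * 1000**4    # manufacturers use powers of 1000
gib = bytes_decimal / 1024**3              # an OS reporting in binary gibibytes
tib = bytes_decimal / 1024**4
print(f"{bytes_decimal:,} bytes = {gib:,.1f} GiB = {tib:.2f} TiB")
# -> 1,000,000,000,000 bytes = 931.3 GiB, or about 0.91 TiB, hence the "missing" capacity
```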
Performance is specified by the time required to move the heads to a track or cylinder, plus the time it takes for the desired sector to move under the head, and the speed at which the data is transmitted. The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as SATA, USB or SAS cables. The first production IBM hard disk drive, the 350 disk storage, shipped in 1957 as a component of the IBM 305 RAMAC system. It was the size of two medium-sized refrigerators and stored five million six-bit characters on a stack of 50 disks. In 1962, the IBM 350 was superseded by the IBM 1301 disk storage unit, which consisted of 50 platters, each about 1/8-inch thick and 24 inches in diameter. While the IBM 350 used only two read/write heads, the 1301 used an array of heads, one per platter, moving as a single unit. Cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches above the platter surface.
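As a rough illustration of how those components combine, the sketch below models a single access using assumed figures for a generic 7200 rpm drive; the seek time, transfer rate and block size are placeholders chosen for the example, not values from the text.

```python
# Rough model of the access-time components described above, with assumed figures.
seek_ms = 9.0                                   # assumed average seek time
rpm = 7200                                      # assumed spindle speed
rotational_latency_ms = (60_000 / rpm) / 2      # on average, half a revolution
transfer_mb_s = 150                             # assumed sustained transfer rate
block_kb = 64                                   # assumed request size

transfer_ms = (block_kb / 1024) / transfer_mb_s * 1000
total_ms = seek_ms + rotational_latency_ms + transfer_ms
print(f"seek {seek_ms} ms + latency {rotational_latency_ms:.2f} ms "
      f"+ transfer {transfer_ms:.2f} ms = {total_ms:.2f} ms")
# latency = 4.17 ms; transferring 64 KiB takes about 0.42 ms; total is roughly 13.6 ms
```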
Motion of the head array depended upon a binary adder system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three home refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes. Access time was about a quarter of a second. In 1962, IBM introduced the model 1311 disk drive, which was about the size of a washing machine and stored two million characters on a removable disk pack. Users could interchange disk packs as needed, much like reels of magnetic tape. Models of removable pack drives, from IBM and others, became the norm in most computer installations and reached capacities of 300 megabytes by the early 1980s. Non-removable HDDs were called "fixed disk" drives. Some high-performance HDDs were manufactured with one head per track so that no time was lost physically moving the heads to a track. Known as fixed-head or head-per-track disk drives, they were expensive and are no longer in production. In 1973, IBM introduced a new type of HDD code-named "Winchester".
Its primary distinguishing feature was that the disk heads were not withdrawn from the stack of disk platters when the drive was powered down. Instead, the heads were allowed to "land" on a special area of the disk surface upon spin-down, "taking off" again when the disk was powered on. This reduced the cost of the head actuator mechanism, but precluded removing just the disks from the drive as was done with the disk packs of the day. Instead, the first models of "Winchester technology" drives featured a removable disk module, which included both the disk pack and the head assembly, leaving the actuator motor in the drive upon removal. Later "Winchester" drives abandoned the removable media concept and returned to non-removable platters. Like the first removable pack drive, the first "Winchester" drives used platters 14 inches in diameter. A few years later, designers were exploring the possibility that physically smaller platters might offer advantages. Drives with non-removable eight-inch platters appeared, and then drives that used a 5 1⁄4 in form factor.
The latter were intended for the then-fl
USB is an industry standard that establishes specifications for cables and protocols for connection and power supply between personal computers and their peripheral devices. Released in 1996, the USB standard is maintained by the USB Implementers Forum. There have been three generations of USB specifications: USB 1.x, USB 2.0 and USB 3.x. USB was designed to standardize the connection of peripherals like keyboards, pointing devices, digital still and video cameras, portable media players, disk drives and network adapters to personal computers, both to communicate and to supply electric power. It has replaced interfaces such as serial ports and parallel ports, and has become commonplace on a wide range of devices. USB connectors have been replacing other types for battery chargers of portable devices. The Universal Serial Bus was developed to simplify and improve the interface between personal computers and peripheral devices, when compared with existing standard or ad-hoc proprietary interfaces.
From the computer user's perspective, the USB interface improved ease of use in several ways. The USB interface is self-configuring, so the user need not adjust settings on the device and interface for speed or data format, or configure interrupts, input/output addresses, or direct memory access channels. USB connectors are standardized at the host, so any peripheral can use any available receptacle. USB takes full advantage of the additional processing power that can be economically put into peripheral devices so that they can manage themselves. The USB interface is "hot pluggable", meaning devices can be exchanged without rebooting the host computer. Small devices can be powered directly from the USB interface, displacing extra power supply cables. Because use of the USB logos is only permitted after compliance testing, the user can have confidence that a USB device will work as expected without extensive interaction with settings and configuration. Installation of a device relying on the USB standard requires minimal operator action.
When a device is plugged into a port on a running personal computer system, it is either automatically configured using existing device drivers, or the system prompts the user to locate a driver, which is then installed and configured automatically. For hardware manufacturers and software developers, the USB standard eliminates the requirement to develop proprietary interfaces to new peripherals. The wide range of transfer speeds available from a USB interface suits devices ranging from keyboards and mice up to streaming video interfaces. A USB interface can be designed to provide the best available latency for time-critical functions, or can be set up to do background transfers of bulk data with little impact on system resources. The USB interface is generalized, with no signal lines dedicated to only one function of one device. USB cables are limited in length, as the standard was meant to connect to peripherals on the same table-top, not between rooms or between buildings. However, a USB port can be connected to a gateway that accesses distant devices.
USB has "master-slave" protocol for addressing peripheral devices. Some extension to this limitation is possible through USB On-The-Go. A host cannot "broadcast" signals to all peripherals at once, each must be addressed individually; some high speed peripheral devices require sustained speeds not available in the USB standard. While converters exist between certain "legacy" interfaces and USB, they may not provide full implementation of the legacy hardware. For a product developer, use of USB requires implementation of a complex protocol and implies an "intelligent" controller in the peripheral device. Developers of USB devices intended for public sale must obtain a USB ID which requires a fee paid to the Implementers' Forum. Developers of products that use the USB specification must sign an agreement with Implementer's Forum. Use of the USB logos on the product require annual fees and membership in the organization. A group of seven companies began the development of USB in 1994: Compaq, DEC, IBM, Microsoft, NEC, Nortel.
The goal was to make it fundamentally easier to connect external devices to PCs by replacing the multitude of connectors at the back of PCs, addressing the usability issues of existing interfaces, and simplifying software configuration of all devices connected to USB, as well as permitting greater data rates for external devices. Ajay Bhatt and his team worked on the standard at Intel. The original USB 1.0 specification, introduced in January 1996, defined data transfer rates of 1.5 Mbit/s Low Speed and 12 Mbit/s Full Speed. Microsoft Windows 95 OSR 2.1 provided OEM support for the devices. The first widely used version of USB was 1.1, released in September 1998. The 12 Mbit/s data rate was intended for higher-speed devices such as disk drives, the lower 1.5 Mbit/s rate for low data
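To put the two USB 1.x signalling rates in perspective, this small Python snippet estimates raw transfer times for an arbitrary 1.44 MB payload, ignoring all protocol overhead (the payload size is chosen only for illustration, not taken from the text):

```python
# Back-of-the-envelope comparison of the two USB 1.x signalling rates above.
rates_mbit = {"Low Speed": 1.5, "Full Speed": 12}
payload_mb = 1.44  # floppy-disk-sized transfer, chosen only for illustration
for name, mbit in rates_mbit.items():
    seconds = (payload_mb * 8) / mbit   # raw signalling rate only, no overhead
    print(f"{name}: ~{seconds:.1f} s for {payload_mb} MB")
# Low Speed: ~7.7 s; Full Speed: ~1.0 s
```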
Computer graphics are pictures and films created using computers. The term refers to computer-generated image data created with the help of specialized graphical hardware and software; it is a vast and developed area of computer science. The phrase was coined in 1960 by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is abbreviated as CG, though sometimes erroneously referred to as computer-generated imagery. Some topics in computer graphics include user interface design, sprite graphics, vector graphics, 3D modeling, shaders, GPU design, implicit surface visualization with ray tracing, and computer vision, among others. The overall methodology depends on the underlying sciences of geometry and physics. Computer graphics is responsible for displaying art and image data effectively and meaningfully to the consumer; it is also used for processing image data received from the physical world. Computer graphics development has had a significant impact on many types of media and has revolutionized animation, advertising, video games, and graphic design in general.
The term computer graphics has been used in a broad sense to describe "almost everything on computers that is not text or sound". The term computer graphics refers to several different things: the representation and manipulation of image data by a computer; the various technologies used to create and manipulate images; and the sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content (see study of computer graphics). Today, computer graphics is widespread; such imagery is found in and on television, in weather reports, and in a variety of medical investigations and surgical procedures. A well-constructed graph can present complex statistics in a form that is easier to understand and interpret. In the media, such graphs are used to illustrate papers, theses, and other presentation material. Many tools have been developed to visualize data. Computer-generated imagery can be categorized into several different types: two dimensional, three dimensional, and animated graphics. As technology has improved, 3D computer graphics have become more common, but 2D computer graphics are still used.
Computer graphics has emerged as a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Over the past decade, other specialized fields have developed, like information visualization and scientific visualization, which is more concerned with "the visualization of three dimensional phenomena, where the emphasis is on realistic renderings of volumes, illumination sources, so forth with a dynamic component". The precursor sciences to the development of modern computer graphics were the advances in electrical engineering and television that took place during the first half of the twentieth century. Screens could display art since the Lumiere brothers' use of mattes to create special effects for the earliest films dating from 1895, but such displays were limited and not interactive. The first cathode ray tube, the Braun tube, was invented in 1897 – it in turn would permit the oscilloscope and the military control panel – the more direct precursors of the field, as they provided the first two-dimensional electronic displays that responded to programmatic or user input.
Computer graphics remained unknown as a discipline until the 1950s and the post-World War II period – during which time the discipline emerged from a combination of both pure university and laboratory academic research into more advanced computers and the United States military's further development of technologies like radar, advanced aviation, and rocketry developed during the war. New kinds of displays were needed to process the wealth of information resulting from such projects, leading to the development of computer graphics as a discipline. Early projects like the Whirlwind and SAGE Projects introduced the CRT as a viable display and interaction interface and introduced the light pen as an input device. Douglas T. Ross of the Whirlwind SAGE system performed a personal experiment in which a small program he wrote captured the movement of his finger and displayed its vector on a display scope. One of the first interactive video games to feature recognizable, interactive graphics – Tennis for Two – was created for an oscilloscope by William Higinbotham to entertain visitors in 1958 at Brookhaven National Laboratory and simulated a tennis match.
In 1959, Douglas T. Ross innovated again while working at MIT on transforming mathematical statements into computer-generated 3D machine tool vectors by taking the opportunity to create a display scope image of a Disney cartoon character. Electronics pioneer Hewlett-Packard went public in 1957 after incorporating the decade prior, and established strong ties with Stanford University through its founders, who were alumni. This began the decades-long transformation of the southern San Francisco Bay Area into the world's leading computer technology hub, now known as Silicon Valley. The field of computer graphics developed with the emergence of computer graphics hardware. Further advances in computing led to greater advancements in interactive computer graphics. In 1959, the TX-2 computer was developed at MIT's Lincoln Laboratory; the TX-2 integrated a number of new man-machine interfaces. A light pen could be used to draw sketches on the computer using Ivan Sutherland's revolutionary Sketchpad software.
Using a light pen, Sketchpad allowed one to draw simple shapes on the computer screen, save them and recall them later. The light pen itself had a small photoelectric cell in its tip. T