Network interface controller
A network interface controller (NIC) is a computer hardware component that connects a computer to a computer network. Early network interface controllers were implemented on expansion cards that plugged into a computer bus; the low cost and ubiquity of the Ethernet standard mean that most newer computers have a network interface built into the motherboard. Modern network interface controllers offer advanced features such as interrupt and DMA interfaces to the host processors, support for multiple receive and transmit queues, partitioning into multiple logical interfaces, and on-controller network traffic processing such as a TCP offload engine. The network controller implements the electronic circuitry required to communicate using a specific physical layer and data link layer standard such as Ethernet or Wi-Fi. This provides a base for a full network protocol stack, allowing communication among computers on the same local area network and large-scale network communications through routable protocols, such as the Internet Protocol.
The NIC allows computers to communicate over a computer network, either by cable or wirelessly. The NIC is both a physical layer and data link layer device, as it provides physical access to a networking medium and, for IEEE 802 and similar networks, provides a low-level addressing system through the use of MAC addresses that are uniquely assigned to network interfaces. Network controllers were originally implemented as expansion cards that plugged into a computer bus; the low cost and ubiquity of the Ethernet standard mean that most new computers have a network interface controller built into the motherboard. Newer server motherboards may have multiple network interfaces built in; the Ethernet capabilities are either integrated into the motherboard chipset or implemented via a low-cost dedicated Ethernet chip. A separate network card is no longer required unless additional independent network connections are needed or some non-Ethernet type of network is used. A general trend in computer hardware is towards integrating the various components of systems on a chip, and this applies to network interface cards as well.
An Ethernet network controller typically has an 8P8C socket where the network cable is connected. Older NICs also supplied BNC or AUI connections. Ethernet network controllers typically support 10 Mbit/s, 100 Mbit/s, and 1000 Mbit/s varieties of Ethernet; such controllers are designated 10/100/1000, meaning that they can support data rates of 10, 100 or 1000 Mbit/s. 10 Gigabit Ethernet NICs are also available, and, as of November 2014, are beginning to be available on computer motherboards. Modular designs like SFP and SFP+ are popular for fiber-optic communication; these define a standard receptacle for media-dependent transceivers, so users can adapt the network interface to their needs. LEDs adjacent to or integrated into the network connector inform the user whether the network is connected and when data activity occurs. The NIC may use one or more of the following techniques to indicate the availability of packets to transfer: polling, where the CPU examines the status of the peripheral under program control.
Interrupt-driven I/O, where the peripheral alerts the CPU when it is ready to transfer data. NICs may use one or more of the following techniques to transfer packet data: programmed input/output, where the CPU moves the data between the NIC and memory; or direct memory access (DMA), where a device other than the CPU assumes control of the system bus to move data between the NIC and memory. DMA requires more logic on the card, but a packet buffer on the NIC may not be needed and latency can be reduced. Multiqueue NICs provide multiple transmit and receive queues, allowing packets received by the NIC to be assigned to one of its receive queues; the NIC may distribute incoming traffic between the receive queues using a hash function. Each receive queue can be serviced by a separate interrupt; the hardware-based distribution of the interrupts, described above, is referred to as receive-side scaling (RSS). Purely software implementations also exist, such as receive packet steering and receive flow steering. Further performance improvements can be achieved by routing the interrupt requests to the CPUs or cores executing the applications that are the ultimate destinations for the network packets that generated the interrupts.
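The hash-based queue selection described above can be sketched in a few lines. This is an illustrative model only: NICs implementing receive-side scaling typically use a Toeplitz hash over the packet's address and port fields together with a configurable key, whereas the sketch below substitutes an ordinary cryptographic hash purely for demonstration.

```python
# Simplified model of receive-side scaling: hash a packet's flow
# 4-tuple to pick one of the NIC's receive queues. Real hardware uses
# a keyed Toeplitz hash; SHA-256 stands in here for illustration.
import hashlib

def rss_queue(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
              num_queues: int) -> int:
    """Map a flow's 4-tuple to one of num_queues receive queues."""
    flow = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(flow).digest()
    # Interpret the first four bytes of the digest as an unsigned int.
    value = int.from_bytes(digest[:4], "big")
    return value % num_queues

# Packets belonging to the same flow always hash to the same queue,
# preserving per-flow packet ordering while spreading distinct flows
# across queues (and thus across CPUs servicing their interrupts).
q1 = rss_queue("10.0.0.1", "10.0.0.2", 40000, 443, 8)
q2 = rss_queue("10.0.0.1", "10.0.0.2", 40000, 443, 8)
assert q1 == q2
```

The key property is determinism per flow: reordering is avoided within a TCP connection, while the hash spreads unrelated connections across queues.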
This technique improves locality of reference and results in higher overall performance, reduced latency and better hardware utilization, because of the higher utilization of CPU caches and fewer required context switches. An example of such an implementation is Intel's Flow Director. With multi-queue NICs, additional performance improvements can be achieved by distributing outgoing traffic among the different transmit queues. By assigning different transmit queues to different CPUs or CPU cores, internal operating system contentions can be avoided; this approach is referred to as transmit packet steering. Some NICs support transmit and receive queues without kernel involvement, allowing the NIC to keep operating even when the functionality of the operating system of a critical system has been compromised. Those NICs support accessing local and remote memory without involving the remote CPU, and accessing local and remote I/O devices without involving the local or remote CPU; this capability is supported by device-to-device communication over the I/O bus.
A sound card is an internal expansion card that provides input and output of audio signals to and from a computer under the control of computer programs. The term sound card is also applied to external audio interfaces used for professional audio applications. Sound functionality can also be integrated onto the motherboard, using components similar to those found on plug-in cards; the integrated sound system is often still referred to as a sound card. Sound processing hardware is also present on modern video cards with HDMI, to output sound along with the video using that connector. Typical uses of sound cards or sound card functionality include providing the audio component for multimedia applications such as music composition, editing video or audio, presentation, entertainment and video projection. Sound cards are also used for computer-based communication such as voice over IP and teleconferencing. Sound cards use a digital-to-analog converter (DAC), which converts recorded or generated digital signal data into an analog format.
The output signal is connected to an amplifier, headphones, or an external device using standard interconnects, such as a TRS phone connector. If the number and size of connectors is too large for the space on the backplate, the connectors may be placed off-board using a breakout box, an auxiliary backplate, or a panel mounted at the front. Some cards include a sound chip to support the production of synthesized sounds for real-time generation of music and sound effects using minimal data and CPU time. A common external connector is the microphone connector, for signals from a microphone or other low-level input device. Input through a microphone jack can be used, for example, by speech recognition or voice over IP applications. Most sound cards also have a line in connector for an analog input from a cassette tape or other sound source that has higher voltage levels than a microphone. In either case, the sound card uses an analog-to-digital converter (ADC) to digitize this signal; the card may use direct memory access to transfer the samples to main memory, from where recording software may write them to the hard disk for storage, editing, or further processing.
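As a rough illustration of what the analog-to-digital conversion step does, the following sketch samples a mathematical "analog" signal at a fixed rate and quantizes each sample to a signed 16-bit integer, the representation commonly handed to main memory by the DMA transfer described above. The function name and parameters are illustrative, not any card's actual interface.

```python
# Illustrative model of an ADC: sample a continuous signal at a fixed
# rate and quantize each sample to a signed 16-bit code, as used for
# CD-quality audio (44.1 kHz, 16-bit).
import math

def digitize(signal, sample_rate=44100, duration=0.01, bits=16):
    """Sample signal(t) over `duration` seconds and quantize each
    sample to a signed integer of the given bit depth."""
    max_code = 2 ** (bits - 1) - 1          # 32767 for 16-bit audio
    samples = []
    n = int(sample_rate * duration)
    for i in range(n):
        t = i / sample_rate
        level = max(-1.0, min(1.0, signal(t)))  # clamp to full scale
        samples.append(round(level * max_code))  # quantize
    return samples

# A 440 Hz sine tone digitized for 10 ms at 44.1 kHz -> 441 samples.
tone = digitize(lambda t: math.sin(2 * math.pi * 440 * t))
```

Playback is simply the inverse: the DAC converts each stored code back to a proportional voltage at the same sample rate.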
An important sound card characteristic is polyphony, which refers to its ability to process and output multiple independent voices or sounds simultaneously. These distinct channels are seen as the number of audio outputs, which may correspond to a speaker configuration such as 2.0 (stereo), 2.1, 5.1, or another configuration. Sometimes, the terms voice and channel are used interchangeably to indicate the degree of polyphony, not the output speaker configuration. For example, many older sound chips could accommodate three voices, but only one audio channel for output, requiring all voices to be mixed together. Early cards, such as the AdLib sound card, had 9-voice polyphony combined into one mono output channel. For some years, most PC sound cards have had multiple FM synthesis voices which were used for MIDI music, though the full capabilities of advanced cards are often not used. Modern low-cost integrated sound cards, such as audio codecs meeting the AC'97 standard, and some lower-cost expansion sound cards still work this way.
These devices may provide more than two sound output channels, but they have no actual hardware polyphony for either sound effects or MIDI reproduction; these tasks are performed in software. This is similar to the way inexpensive softmodems perform modem tasks in software rather than in hardware. In the early days of 'wavetable' sample-based synthesis, some sound card manufacturers advertised polyphony based on the MIDI capabilities alone. In this case, the card's output channel count is irrelevant; instead, the polyphony measurement applies to the number of MIDI instruments the sound card is capable of producing at one given time. Today, a sound card providing actual hardware polyphony, regardless of the number of output channels, is typically referred to as a "hardware audio accelerator", although actual voice polyphony is not the sole prerequisite, with other aspects such as hardware acceleration of 3D sound, positional audio and real-time DSP effects being more important. Since digital sound playback has become available and has provided better performance than synthesis, modern sound cards with hardware polyphony do not usually use DACs with as many channels as voices.
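The software mixing that such codec-based cards rely on can be illustrated with a small sketch: several independent voices, each a stream of samples, are summed into a single output channel, which is what chips with one audio channel required. The function below is a hypothetical illustration, not any driver's real API.

```python
# Sketch of mixing several polyphonic voices into one output channel.
# Each voice is a list of sample values in the range [-1.0, 1.0].

def mix_voices(voices):
    """Sum per-voice sample streams into a single mono channel,
    scaling the sum down so the result cannot clip."""
    length = max(len(v) for v in voices)
    mixed = []
    for i in range(length):
        # Voices shorter than the longest one contribute silence.
        total = sum(v[i] for v in voices if i < len(v))
        mixed.append(total / len(voices))  # rescale to avoid clipping
    return mixed

# Three voices combined into one channel; the first output sample is
# (0.5 + 0.25 + 0.0) / 3 == 0.25.
mono = mix_voices([[0.5, 0.5], [0.25, -0.25], [0.0, 0.25]])
```

Dividing by the voice count is the simplest anti-clipping strategy; real mixers often use per-voice gain controls instead, trading headroom against loudness.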
Instead, the final playback stage is performed by an external DAC with fewer channels than voices. The Tandy 1000 and the PCjr used the same sound chip, but the Tandy 1000 utilized the Audio IN pin, whereas the PCjr did not; this allowed the Tandy to produce the speaker sound at the same time as the SN76489. Connectors on sound cards are color-coded as per the PC System Design Guide. They may also have symbols with arrows and sound waves associated with each jack position.
A patch is a set of changes to a computer program or its supporting data designed to update, fix, or improve it. This includes fixing security vulnerabilities and other bugs, with such patches usually being called bugfixes or bug fixes, and improving usability or performance. Although meant to fix problems, poorly designed patches can sometimes introduce new problems. In some special cases updates may knowingly break the functionality or disable a device, for instance by removing components for which the update provider is no longer licensed. Patch management is a part of lifecycle management; it is the process of using a strategy and plan for which patches should be applied to which systems at a specified time. Patches for proprietary software are typically distributed as executable files instead of source code; this type of patch modifies the program executable—the program the user runs—either by modifying the binary file to include the fixes or by replacing it. On early 8-bit microcomputers, for example the Radio Shack TRS-80, the operating system included a PATCH utility which accepted patch data from a text file and applied the fixes to the target program's executable binary file.
Small in-memory patches could be manually applied with a system debug utility, such as CP/M's DDT or MS-DOS's DEBUG. Programmers working in interpreted BASIC often used the POKE command to temporarily alter the functionality of a system service routine. Patches can also circulate in the form of source code modifications; in this case, the patches consist of textual differences between two source code files, called "diffs". These types of patches commonly come out of open-source software projects, where developers expect users to compile the changed files themselves. Because the word "patch" carries the connotation of a small fix, large fixes may use different nomenclature: bulky patches or patches that significantly change a program may circulate as "service packs" or as "software updates". Microsoft Windows NT and its successors use the "service pack" terminology, while IBM used the terms "FixPaks" and "Corrective Service Diskette" for these updates. Historically, software suppliers distributed patches on paper tape or on punched cards, expecting the recipient to cut out the indicated part of the original tape and patch in the replacement segment.
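The textual "diffs" mentioned above can be generated with standard tooling; for instance, Python's difflib module in the standard library produces the widely used unified diff format. The file name greet.py and its contents are made up for the example.

```python
# Generate a unified diff between two versions of a (hypothetical)
# source file, the same textual patch format exchanged by
# open-source projects and applied with tools like patch(1).
import difflib

# Two versions of the same file, as lists of lines (newlines included).
old = ["def greet():\n", "    print('hello')\n"]
new = ["def greet():\n", "    print('hello, world')\n"]

# unified_diff yields the familiar ---/+++/@@ patch header and body.
patch = "".join(difflib.unified_diff(old, new,
                                     fromfile="a/greet.py",
                                     tofile="b/greet.py"))
print(patch)
```

Lines prefixed with `-` come from the old version, lines with `+` from the new one, and unprefixed context lines let the patch tool locate the change even if surrounding line numbers have shifted.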
Later patch distributions used magnetic tape. Then, after the invention of removable disk drives, patches came from the software developer via a disk or CD-ROM by mail. With widely available Internet access, downloading patches from the developer's web site or through automated software updates became available to end users. Starting with Apple's Mac OS 9 and Microsoft's Windows ME, PC operating systems gained the ability to get automatic software updates via the Internet. Computer programs can often coordinate patches to update a target program. Automation simplifies the end user's task: they need only execute an update program, whereupon that program makes sure that updating the target takes place completely and correctly. Service packs for Microsoft Windows NT and its successors and for many commercial software products adopt such automated strategies, and some programs can update themselves via the Internet with little or no intervention on the part of users. The maintenance of server software and of operating systems often takes place in this manner.
In situations where system administrators control a number of computers, this sort of automation helps to maintain consistency. The application of security patches commonly occurs in this manner. The size of patches may vary from a few bytes to hundreds of megabytes; in particular, patches can become quite large when the changes add or replace non-program data, such as graphics and sound files. Such situations commonly occur in the patching of computer games. Compared with the initial installation of software, patches usually do not take long to apply. In the case of operating systems and computer server software, patches have the particularly important role of fixing security holes; some critical patches involve issues with drivers. Patches may require the prior application of other patches, or may require prior or concurrent updates of several independent software components. To facilitate updates, operating systems often provide automatic or semi-automatic updating facilities. Automatic updates have not succeeded in gaining widespread popularity in corporate computing environments, partly because of the aforementioned glitches, but also because administrators fear that software companies may gain unlimited control over their computers.
Package management systems can offer various degrees of patch automation. Usage of automatic updates has become far more widespread in the consumer market, largely because Microsoft Windows added support for them and Service Pack 2 of Windows XP enabled them by default. Cautious users, in particular system administrators, tend to put off applying patches until they can verify the stability of the fixes; Microsoft's Software Update Services (SUS) support this. In the cases of large patches or of significant changes, distributors often limit the availability of patches to qualified developers as a beta test. Applying patches to firmware poses special challenges, as it often involves the provisioning of completely new firmware images, rather than applying only the differences from the previous version; the patch usually consists of a firmware image in the form of binary data.
RISC-V is an open-source hardware instruction set architecture (ISA) based on established reduced instruction set computer (RISC) principles. The project began in 2010 at the University of California, Berkeley, but many contributors are volunteers not affiliated with the university. As of March 2019, version 2.2 of the user-space ISA is frozen, permitting most software development to proceed; the privileged ISA is available as draft version 1.10, and a debug specification is available as draft version 0.13.1. Usable new ISAs are very expensive to develop, and computer designers cannot afford to work for free. Developing a CPU requires design expertise in several specialties, such as electronic digital logic and operating systems, and it is rare to find such a team outside of a professional engineering organization; the team is paid from money charged for their designs. Therefore, commercial vendors of computer designs, such as ARM Holdings and MIPS Technologies, charge royalties for the use of their designs and copyrights, and they often require non-disclosure agreements before releasing documents that describe their designs' detailed advantages and instruction set.
In many cases, they never describe the reasons for their design choices. This expense and secrecy make the development of new software much more difficult and prevent security audits. Another result is that modern, high-quality general-purpose computer instruction sets have not been explained or available anywhere except in academic settings. RISC-V was started to solve these problems; the goal was to make a practical, open-sourced ISA, usable in any hardware or software design without royalties. The rationales for every part of the project are explained, at least broadly. The RISC-V authors have substantial experience in computer design, and the RISC-V ISA is a direct development from a series of academic computer-design projects; it was originated in part to aid such projects. To address the cost of design, the project started as academic research funded by DARPA. In order to build a large, continuing community of users and therefore accumulate designs and software, the RISC-V ISA designers planned to support a wide variety of practical uses: small and low-power real-world implementations, without over-architecting for a particular microarchitecture.
A need for a large base of contributors is part of the reason why RISC-V was engineered to fit so many uses, and many RISC-V contributors see the project as a unified community effort. The term RISC dates from about 1980; before this, there was some knowledge that simpler computers could be effective, but the design principles were not widely described. Simple, effective computers have always been of academic interest. Academics created the RISC instruction set DLX for the first edition of Computer Architecture: A Quantitative Approach in 1990. David Patterson was an author, and later assisted RISC-V. DLX was intended for educational use; academics and hobbyists implemented it using field-programmable gate arrays, but it was not a commercial success. ARM CPUs, versions 2 and earlier, had a public-domain instruction set, which is still supported by the GNU Compiler Collection (GCC), a popular free-software compiler; three open-source cores exist for this ISA. OpenRISC is an open-source ISA based on RISC principles, with associated RISC designs; it is supported by GCC and Linux implementations.
However, it has few commercial implementations. Krste Asanović at the University of California, Berkeley, found many uses for an open-source computer system. In 2010, he decided to develop and publish one in a "short, three-month project over the summer"; the plan was to help both academic and industrial users. David Patterson at Berkeley also aided the effort; he had identified the properties of Berkeley RISC, and RISC-V is one of his long series of cooperative RISC research projects. At this stage, students inexpensively provided initial software and CPU designs. The RISC-V authors and their institution provided the ISA documents and several CPU designs under BSD licenses, which allow derivative works—such as RISC-V chip designs—to be either open and free, or closed and proprietary. Early funding was from DARPA. Commercial concerns require an ISA to be stable before they can use it in a product that might last many years; to address this issue, the RISC-V Foundation was formed to own and publish the intellectual property related to RISC-V's definition.
The original authors and owners have surrendered their rights to the foundation. As of 2019 the foundation publishes the documents defining RISC-V and permits unrestricted use of the ISA for both software and hardware design; however, only paid members of the RISC-V Foundation can vote to approve changes or use the trademarked compatibility logo. In 2017, RISC-V received the Linley Group's Analyst's Choice Award for Best Technology. The designers say that the instruction set is the main interface in a computer because it lies between the hardware and the software. If a good instruction set were open and available for use by all, it should reduce the cost of software by permitting far more reuse, and it should increase competition among hardware providers, who could then use more resources for design and less for software support. The designers assert that new principles are becoming rare in instruction set design, as the most successful designs of the last forty years have become increasingly similar. Of those that failed, most did so because their sponsoring companies failed commercially, not because the instruction sets were technically poor.
So, a well-designed open instruction set designed using well-established principles should attract long-term support.
Debian is a Unix-like operating system consisting entirely of free software. Ian Murdock started the Debian Project on August 16, 1993. Debian 0.01 was released on September 15, 1993, and the first stable version, 1.1, was released on June 17, 1996. The Debian stable branch is the most popular edition for personal computers and network servers, and is used as the basis for many other distributions. Debian is one of the earliest operating systems based on the Linux kernel. The project's work is carried out over the Internet by a team of volunteers guided by the Debian Project Leader and three foundational documents: the Debian Social Contract, the Debian Constitution, and the Debian Free Software Guidelines. New distributions are updated continually, and the next candidate is released after a time-based freeze. Debian has been developed and distributed according to the principles of the GNU Project, which drew the support of the Free Software Foundation; the FSF sponsored the project from November 1994 to November 1995. When the sponsorship ended, the Debian Project formed the nonprofit Software in the Public Interest to continue financially supporting development.
Debian has access to online repositories that contain over 51,000 packages. Debian officially contains only free software, but non-free software can be downloaded and installed from the Debian repositories. Debian includes popular free programs such as LibreOffice, the Firefox web browser, Evolution mail, the K3b disc burner, the VLC media player, the GIMP image editor, and the Evince document viewer. Debian is a popular choice for servers, for example as the operating system component of a LAMP stack. Debian officially supports Linux, having offered kFreeBSD for version 7 but not 8, and supports GNU Hurd unofficially. GNU/kFreeBSD was released as a technology preview for the IA-32 and x86-64 architectures, but lacked the amount of software available in Debian's Linux distribution. Official support for kFreeBSD was removed for version 8, which did not provide a kFreeBSD-based distribution. Several flavors of the Linux kernel exist for each port; for example, the i386 port has flavors for IA-32 PCs supporting Physical Address Extension and real-time computing, for older PCs, and for x86-64 PCs.
The Linux kernel shipped by Debian does not contain firmware without sources, although such firmware is available in non-free packages and alternative installation media. Debian offers CD images built for Xfce, the default desktop on CD, and DVD images for GNOME, KDE and others. MATE is officially supported, while Cinnamon support was added with Debian 8.0 Jessie. Less common window managers such as Enlightenment, Fluxbox, IceWM, Window Maker and others are also available. The default desktop environment of version 7.0 Wheezy was temporarily switched to Xfce, because GNOME 3 did not fit on the first CD of the set. The default for version 8.0 Jessie was changed again to Xfce in November 2013, and back to GNOME in September 2014. Several parts of Debian are translated into languages other than American English, including package descriptions, configuration messages and the website. The level of software localization depends on the language, ranging from the well-supported German and French to the barely translated Creek and Samoan. The installer is available in 73 languages.
Debian offers CD images for installation that can be downloaded using BitTorrent or jigdo; physical disks can also be bought from retailers. The full sets are made up of several discs, but only the first disc is required for installation, as the installer can retrieve software not contained in the first disc image from online repositories. Debian offers different network installation methods. A minimal install of Debian is available via the netinst CD, whereby Debian is installed with just a base system, and added software can be downloaded from the Internet later. Another option is to boot the installer from the network. Installation images can also be used to create a bootable USB drive. The default bootstrap loader is GNU GRUB version 2, though the package name is simply grub, while version 1 was renamed to grub-legacy. This conflicts with, e.g., Fedora, where GRUB version 2 is named grub2. The default desktop may be chosen from the DVD boot menu among GNOME, KDE Plasma, Xfce and LXDE, and from special disc 1 CDs. Debian releases live install images for CDs, DVDs and USB thumb drives, for the IA-32 and x86-64 architectures, with a choice of desktop environments.
These Debian Live images allow users to boot from removable media and run Debian without affecting the contents of their computer. A full install of Debian to the computer's hard drive can be initiated from the live image environment. Personalized images can be built with the live-build tool for discs, USB drives and for network booting purposes. Debian was first announced on August 16, 1993, by Ian Murdock, who called the system "the Debian Linux Release"; the word "Debian" was formed as a portmanteau of the first name of his then-girlfriend Debra Lynn and his own first name. Before Debian's release, the Softlanding Linux System had been a popular Linux distribution and the basis for Slackware; the perceived poor maintenance and prevalence of bugs in SLS motivated Murdock to launch a new distribution. Debian 0.01, released on September 15, 1993, was the first of several internal releases. Version 0.90 was the first public release, providing support through mailing lists hosted at Pixar. The release included the Debian Linux Manifesto, outlining Murdock's view for the new operating system.
In it he called for the creation of a distribution to be maintained in the spirit of Linux and GNU. The Debian project released the 0.9x versions in 1994 and 1995; during this time it was sponsored by the Free Software Foundation.
Trisquel is a computer operating system, a Linux distribution derived from another distribution, Ubuntu. The project aims for a fully free software system without proprietary software or firmware, and uses a version of Ubuntu's modified kernel with the non-free code removed. Trisquel relies on user donations; its logo is a Celtic symbol. Trisquel is listed by the Free Software Foundation as a distribution that contains only free software. Four basic versions are available. The standard Trisquel distribution includes the MATE desktop environment and graphical user interface, with English, Spanish and 48 other localizations, 50 in total, on a 2.5 GB live DVD image; other translations can be downloaded. Trisquel Mini is an alternative to mainline Trisquel, designed to run well on netbooks and older hardware. It uses the low-resource LXDE desktop environment and lightweight GTK+ and X Window System alternatives to GNOME and Qt/KDE applications. The LXDE desktop includes English and Spanish localizations, and can install from a 500 MB CD image.
If an Internet connection is enabled while installing Trisquel or Trisquel Mini, the software will download and install itself, including user menus and all available documentation, in any one or more of the languages in which it has been localized. Sugar is a free and open source desktop environment designed with the goal of being used by children for interactive learning; in the Sugar edition, it replaces the standard MATE desktop environment available with Trisquel. The network installation image consists of a 25 MB CD ISO image with just the minimal amount of software needed to start the installation via a text-based network installer and fetch the remaining packages over the Internet. The full installation includes 51 languages pre-installed in a downloadable 1.2-gigabyte DVD image, and full source code for the full Trisquel installation is available in a downloadable 3-gigabyte DVD image. The project began in 2004 with the sponsorship of the University of Vigo for Galician language support in education software, and was presented in April 2005 with Richard Stallman, founder of the GNU Project, as a special guest.
According to project director Rubén Rodríguez, the support for Galician has created interest in South American and Mexican communities of emigrants from the Province of Ourense. By December 2008, Trisquel was included by the Free Software Foundation in its list of endorsed Linux distributions. The releases that use GNOME 3.x use GNOME Classic/Flashback, rather than the default GNOME Shell. All Trisquel releases starting with version 6 are based only on Ubuntu LTS releases. Current versions include this common software: Abrowser, a rebranded version of Firefox that never suggests non-free add-ons and includes no trademarked art or names; it features privacy-enhancing modifications such as not starting network connections on its own. It is rebranded because the Mozilla trademark policy forbids modifications that include their trademark without consent. Gnash, a SWF viewer, is included instead of the proprietary Adobe Flash Player. Prior editions: Trisquel Pro was a small edition that was part of the Trisquel 2.0 LTS Robur release.
Trisquel Edu was education-oriented, aimed at universities; like Trisquel Pro, no other release of it followed Trisquel 2.0 Robur. Trisquel on Sugar was education-oriented, based on the Sugar desktop environment for interactive learning for children; it was released at the same time as Trisquel 7. Trisquel Gamer was an independent edition maintained by David Zaragoza that could boot from a live DVD or USB drive; it was released with Trisquel 3.5 and is no longer supported. Jesse Smith of DistroWatch reviewed the 4.0 release and described it as refined and dependable. He described difficulty with removing software as his main problem with the release, and complimented it as an operating system that showcased utility instead of mere compliance with free software criteria. Jesse Smith also reviewed Trisquel 7.0 in 2014, writing: "Whenever I boot up Trisquel I find myself wondering whether the free software only distribution will be able to hold its own when it comes to hardware drivers, multimedia support and productivity software.
The answer I came to when running Trisquel 7.0 is that the distribution appears to be nearly as capable as operating systems that do not stick to the FSF's definition of free software. Some people who use hardware that requires binary blobs or non-free drivers may face problems and Flash support isn't perfect when using the free Gnash player, but otherwise Trisquel appears to be every bit as functional as other mainstream Linux distributions; the software Trisquel ships with appears to be stable and user friendly. The distribution is easy to install, I found it pleasant to use and I didn't encounter any problems. People who value or wish to promote free software should try running Trisquel, it's an excellent example of what can be accomplished with free software." Jim Lynch of Desktop Linux Reviews reviewed the 5.5 release, described it as "well-ordered and well developed", and recommended it to users whether or not they care about only using free software.
A video card is an expansion card which generates a feed of output images to a display device. These are often advertised as discrete or dedicated graphics cards, emphasizing the distinction between them and integrated graphics. At the core of both is the graphics processing unit (GPU), the component that performs the actual computations; it should not be confused with the video card as a whole, although "GPU" is often used to refer to video cards. Most video cards are not limited to simple display output: their graphics processor can perform additional processing, removing this task from the central processor of the computer. For example, cards produced by Nvidia and AMD implement the OpenGL and DirectX graphics pipelines in hardware. In the 2010s there has been a tendency to use the computing capabilities of the graphics processor for non-graphics tasks. The graphics card is made in the form of a printed circuit board and inserted into an expansion slot, universal or specialized. Some have been made with dedicated enclosures, connected to the computer via a docking station or a cable.
Standards such as MDA, CGA, HGC, Tandy, PGC, EGA, VGA, MCGA, 8514 and XGA were introduced from 1982 to 1990 and supported by a variety of hardware manufacturers. 3dfx Interactive was one of the first companies to develop a GPU with 3D acceleration and the first to develop a graphical chipset dedicated to 3D, though without 2D support. Until 2000, 3dfx Interactive was an important, groundbreaking manufacturer; today the majority of modern video cards are built with either AMD-sourced or Nvidia-sourced graphics chips. Most video cards offer various functions such as accelerated rendering of 3D scenes and 2D graphics, MPEG-2/MPEG-4 decoding, TV output, and the ability to connect multiple monitors. Many video cards also have sound output capabilities, carrying audio along with video to connected TVs or monitors with integrated speakers. Within the industry, video cards are sometimes called graphics add-in boards, abbreviated as AIBs, with the word "graphics" often omitted. As an alternative to a video card, video hardware can be integrated into the motherboard, CPU, or a system-on-chip.
Both approaches can be called integrated graphics; motherboard-based implementations are sometimes called "on-board video". Most desktop motherboards with integrated graphics allow the integrated graphics chip to be disabled in the BIOS and provide a PCI or PCI Express slot for adding a higher-performance graphics card in place of the integrated graphics. The ability to disable the integrated graphics sometimes allows the continued use of a motherboard on which the on-board video has failed, and sometimes both the integrated graphics and a dedicated graphics card can be used to feed separate displays. The main advantages of integrated graphics are cost, compactness and low energy consumption; its performance disadvantage arises because the graphics processor shares system resources with the CPU. A dedicated graphics card has its own random access memory, cooling system and power regulators, with all components designed for processing video images. Upgrading to a dedicated graphics card offloads work from the CPU and system RAM, so not only is graphics processing faster, but the computer's overall performance may also improve.
Both AMD and Intel have introduced CPUs and motherboard chipsets which support the integration of a GPU into the same die as the CPU. AMD markets CPUs with integrated graphics under the trademark Accelerated Processing Unit (APU), while Intel markets similar technology under the "Intel HD Graphics" and "Iris" brands. With its 8th-generation processors, Intel announced the Intel UHD series of integrated graphics for better support of 4K displays. Although still not equivalent to discrete solutions in performance, Intel's HD Graphics platform approaches mid-range discrete graphics, and AMD's APU technology has been adopted by both the PlayStation 4 and Xbox One video game consoles. As the processing power of video cards has increased, so has their demand for electrical power, and current high-performance video cards tend to consume a great deal of it. For example, the thermal design power for the GeForce GTX TITAN is 250 watts, and when tested while gaming, the GeForce GTX 1080 Ti Founders Edition averaged 227 watts of power consumption.
While CPU and power supply makers have moved toward higher efficiency, the power demands of GPUs have continued to rise, so video cards may have the largest power consumption of any component in a computer. Although power supplies have increased their output as well, the bottleneck is the PCI Express connection, which is limited to supplying 75 watts. Modern video cards with a power consumption over 75 watts therefore include a combination of six-pin or eight-pin sockets that connect directly to the power supply. Providing adequate cooling becomes a challenge in such computers: systems with multiple video cards may need power supplies in the 1000–1500 W range, and heat extraction becomes a major design consideration for computers with two or more high-end video cards. Video cards for desktop computers come in one of two size profiles, which can allow a graphics card to be added to small-sized PCs. Some video cards are not of the usual size and are thus categorized as low profile. Video card profiles are based on height only, with low-profile cards taking up less than the height of a
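The power-budget arithmetic above can be sketched in a few lines of Python. This is a simplified illustration, not a PSU-sizing tool: the 75 W slot limit comes from the PCI Express specification, and the per-connector figures (75 W for a six-pin socket, 150 W for an eight-pin socket) are the specified maximums; the function names are hypothetical.

```python
# Power available to a video card: the PCI Express slot supplies up to
# 75 W, each six-pin auxiliary connector adds up to 75 W, and each
# eight-pin connector adds up to 150 W (per the PCIe specification).
SLOT_W = 75
SIX_PIN_W = 75
EIGHT_PIN_W = 150

def available_power(six_pins: int = 0, eight_pins: int = 0) -> int:
    """Total wattage the slot plus auxiliary connectors can deliver."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

def needs_aux_power(tdp_watts: int) -> bool:
    """A card drawing more than the 75 W slot limit needs aux sockets."""
    return tdp_watts > SLOT_W

# A 250 W card (the GeForce GTX TITAN's TDP, mentioned above) exceeds
# the slot limit, so it must draw the rest from auxiliary connectors;
# one six-pin plus one eight-pin socket covers it with headroom.
assert needs_aux_power(250)
assert available_power(six_pins=1, eight_pins=1) == 300
```

Summing the TDPs of every card in a multi-GPU system against the supply's rated output is the same arithmetic, which is why such systems end up in the 1000–1500 W range noted above.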