In computing, memory refers to the computer hardware integrated circuits that store information for immediate use in a computer. Computer memory operates at high speed (for example, random-access memory), in contrast to storage, which is slower to access but offers higher capacities. If needed, contents of the computer memory can be transferred to secondary storage. An archaic synonym for memory is store. The term "memory", meaning "primary storage" or "main memory", is associated with addressable semiconductor memory, i.e. integrated circuits consisting of silicon-based transistors, used for example as primary storage but also for other purposes in computers and other digital electronic devices. There are two main kinds of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are ROM, PROM, EPROM and EEPROM. Examples of volatile memory are dynamic random-access memory (DRAM), used for primary storage, and static random-access memory (SRAM), used for fast CPU cache memory; SRAM is fast but energy-consuming and offers lower areal density than DRAM.
Most semiconductor memory is organized into memory cells or bistable flip-flops, each storing one bit. Flash memory organization includes multiple bits per cell. The memory cells are grouped into words of fixed word length, for example 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory; this implies that processor registers are not normally considered memory, since they store only one word and do not include an addressing mechanism. Typical secondary storage devices are hard disk drives and solid-state drives. In the early 1940s, memory technology permitted a capacity of only a few bytes. The first electronic programmable digital computer, the ENIAC, using thousands of octal-base radio vacuum tubes, could perform simple calculations involving 20 numbers of ten decimal digits held in the vacuum tube accumulators. The next significant advance in computer memory came with acoustic delay line memory, developed by J. Presper Eckert in the early 1940s.
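The relationship between address width and capacity described above can be sketched in a few lines of Python (the function name is illustrative, not from any standard library):

```python
def addressable_words(address_bits: int) -> int:
    """Number of distinct words an N-bit binary address can select: 2^N."""
    return 2 ** address_bits

# A 16-bit address selects 2^16 = 65,536 words; with an 8-bit word
# length, that is 64 KiB of addressable memory.
print(addressable_words(16))  # → 65536
```

Each extra address bit doubles the number of addressable words, which is why memory capacities grow in powers of two.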
Through the construction of a glass tube filled with mercury and plugged at each end with a quartz crystal, delay lines could store bits of information in the form of sound waves propagating through the mercury, with the quartz crystals acting as transducers to read and write bits. Delay line memory was limited to a capacity of up to a few hundred thousand bits to remain efficient. Two alternatives to the delay line, the Williams tube and the Selectron tube, originated in 1946, both using electron beams in glass tubes as a means of storage. Using cathode ray tubes, Fred Williams invented the Williams tube, which was the first random-access computer memory. The Williams tube proved less expensive than the Selectron, but it was frustratingly sensitive to environmental disturbances. Efforts began in the late 1940s to find non-volatile memory. Jay Forrester, Jan A. Rajchman and An Wang developed magnetic-core memory, which allowed recall of data after power loss. Magnetic-core memory became the dominant form of memory until the development of transistor-based memory in the late 1960s.
Developments in technology and economies of scale have made possible so-called Very Large Memory computers. The term "memory", when used with reference to computers, most often refers to random-access memory. Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). SRAM retains its contents as long as power is connected and is simple to interface, but uses six transistors per bit. DRAM is more complicated to interface and control, needing regular refresh cycles to prevent losing its contents, but uses only one transistor and one capacitor per bit, allowing it to reach much higher densities and much cheaper per-bit costs. SRAM is not worthwhile for desktop system memory, where DRAM dominates, but it is used for cache memories and is commonplace in small embedded systems. Forthcoming volatile memory technologies that aim to replace or compete with SRAM and DRAM include Z-RAM and A-RAM. Non-volatile memory is computer memory that can retain the stored information even when not powered.
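The per-bit element counts above (six transistors for SRAM versus one transistor and one capacitor for DRAM) allow a rough back-of-envelope density comparison. The snippet below is an illustrative simplification that ignores real layout and process details:

```python
def elements_per_byte(elements_per_bit: int) -> int:
    """Storage elements needed per byte, given elements per bit."""
    return 8 * elements_per_bit

SRAM_PER_BIT = 6  # six transistors per bit (from the text)
DRAM_PER_BIT = 2  # one transistor plus one capacitor per bit

print(elements_per_byte(SRAM_PER_BIT))  # → 48
print(elements_per_byte(DRAM_PER_BIT))  # → 16
# With a third of the storage elements per bit, DRAM reaches far
# higher densities and lower per-bit costs than SRAM.
```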
Examples of non-volatile memory include read-only memory, flash memory, most types of magnetic computer storage devices, optical discs, and early computer storage methods such as paper tape and punched cards. Forthcoming non-volatile memory technologies include FeRAM, CBRAM, PRAM, STT-RAM, SONOS, RRAM, racetrack memory, NRAM, 3D XPoint and millipede memory. A third category of memory is "semi-volatile"; the term describes a memory which has some limited non-volatile duration after power is removed, after which data is ultimately lost. A typical goal when using a semi-volatile memory is to provide the high performance and durability associated with volatile memories while providing some benefits of a true non-volatile memory. For example, some non-volatile memory types can wear out, where a "worn" cell has increased volatility but otherwise continues to work. Data locations which are written
A personal computer is a multi-purpose computer whose size and price make it feasible for individual use. Personal computers are intended to be operated directly by an end user, rather than by a computer expert or technician. Unlike large, costly minicomputers and mainframes, personal computers are not time-shared by many people at the same time. Institutional or corporate computer owners in the 1960s had to write their own programs to do any useful work with the machines. While personal computer users may develop their own applications, these systems usually run commercial software, free-of-charge software or free and open-source software, provided in ready-to-run form. Software for personal computers is typically developed and distributed independently from the hardware or operating system manufacturers. Many personal computer users no longer need to write their own programs to make use of a personal computer, although end-user programming is still feasible. This contrasts with mobile systems, where software is often available only through a manufacturer-supported channel and end-user program development may be discouraged by lack of support from the manufacturer.
Since the early 1990s, Microsoft operating systems and Intel hardware have dominated much of the personal computer market, first with MS-DOS and then with Microsoft Windows. Alternatives to Microsoft's Windows operating systems occupy a minority share of the industry; these include free and open-source Unix-like operating systems such as Linux. Advanced Micro Devices provides the main alternative to Intel's processors. The advent of personal computers and the concurrent Digital Revolution have affected the lives of people in all countries. "PC" is an initialism for "personal computer". The IBM Personal Computer incorporated the designation in its model name, and it is sometimes useful to distinguish personal computers of the "IBM Personal Computer" family from personal computers made by other manufacturers. For example, "PC" is used in contrast with "Mac", an Apple Macintosh computer. Since none of these Apple products were mainframes or time-sharing systems, they were all "personal computers" and not "PC" computers.
The "brain" may one day come down to our level and help with our income-tax and book-keeping calculations. But this is speculation and there is no sign of it so far. In the history of computing, early experimental machines could be operated by a single attendant. For example, ENIAC, which became operational in 1946, could be run by a single, albeit trained, person; this mode pre-dated the batch programming and time-sharing modes, in which multiple users connected through terminals to mainframe computers. Computers intended for laboratory, instrumentation or engineering purposes were built that could be operated by one person in an interactive fashion. Examples include systems such as the Bendix G15 and LGP-30 of 1956, the Programma 101 introduced in 1964, and the Soviet MIR series of computers developed from 1965 to 1969. By the early 1970s, people in academic or research institutions had the opportunity for single-person use of a computer system in interactive mode for extended durations, although these systems would still have been too expensive to be owned by a single person.
In what was to be called the Mother of All Demos, SRI researcher Douglas Engelbart in 1968 gave a preview of what would become the staples of daily working life in the 21st century: e-mail, word processing, video conferencing and the mouse. The demonstration required technical support staff and a mainframe time-sharing computer that were far too costly for individual business use at the time. The development of the microprocessor, with widespread commercial availability starting in the mid-1970s, made computers cheap enough for small businesses and individuals to own. Early personal computers, generally called microcomputers, were sold in kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. Minimal programming was done with toggle switches to enter instructions, and output was provided by front panel lamps. Practical use required adding peripherals such as keyboards, computer displays, disk drives and printers. The Micral N was the earliest commercial, non-kit microcomputer based on a microprocessor, the Intel 8008.
It was built starting in 1972, and a few hundred units were sold. It had been preceded by the Datapoint 2200 in 1970, for which the Intel 8008 had been commissioned, though not accepted for use; the CPU design implemented in the Datapoint 2200 became the basis for the x86 architecture used in the original IBM PC and its descendants. In 1973, the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP, based on the IBM PALM processor, with a Philips compact cassette drive, a small CRT and a full-function keyboard. SCAMP emulated an IBM 1130 minicomputer in order to run APL/1130. In 1973, APL was available only on mainframe computers, and most desktop-sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC. Because SCAMP was the first to emulate APL/1130 performance on a portable, single-user computer, PC Magazine in 1983 designated SCAMP a "revolutionary concept" and "the world's first personal computer". This seminal single-user portable computer now resides in the Smithsonian Institution, Washington, D.C. Successful demonstrations of the 1973 SCAMP prototype led to the IBM 5100 portable microcomputer, launched in 1975 with the ability to be programmed in both APL and BASIC for engineers, analysts and other business problem-solvers. In the late 1960s such a machine would have been nearly as large as two desks and would have weighed
USB is an industry standard that establishes specifications for cables and protocols for connection and power supply between personal computers and their peripheral devices. Released in 1996, the USB standard is maintained by the USB Implementers Forum. There have been three generations of USB specifications: USB 1.x, USB 2.0 and USB 3.x. USB was designed to standardize the connection of peripherals like keyboards, pointing devices, digital still and video cameras, portable media players, disk drives and network adapters to personal computers, both to communicate and to supply electric power. It has replaced interfaces such as serial ports and parallel ports and has become commonplace on a wide range of devices. USB connectors have also been replacing other types for battery chargers of portable devices. The Universal Serial Bus was developed to simplify and improve the interface between personal computers and peripheral devices, compared with existing standard or ad-hoc proprietary interfaces.
From the computer user's perspective, the USB interface improved ease of use in several ways. The USB interface is self-configuring, so the user need not adjust settings on the device and interface for speed or data format, or configure interrupts, input/output addresses or direct memory access channels. USB connectors are standardized at the host, so any peripheral can use any available receptacle. USB takes full advantage of the additional processing power that can be economically put into peripheral devices so that they can manage themselves. The USB interface is "hot-pluggable", meaning devices can be exchanged without rebooting the host computer. Small devices can be powered directly from the USB interface, displacing extra power supply cables. Because use of the USB logos is only permitted after compliance testing, the user can have confidence that a USB device will work as expected without extensive interaction with settings and configuration. Installation of a device relying on the USB standard requires minimal operator action.
When a device is plugged into a port on a running personal computer system, it is either automatically configured using existing device drivers, or the system prompts the user to locate a driver, which is then installed and configured automatically. For hardware manufacturers and software developers, the USB standard eliminates the requirement to develop proprietary interfaces to new peripherals. The wide range of transfer speeds available from a USB interface suits devices ranging from keyboards and mice up to streaming video interfaces. A USB interface can be designed to provide the best available latency for time-critical functions, or can be set up to do background transfers of bulk data with little impact on system resources. The USB interface is generalized, with no signal lines dedicated to only one function of one device. USB cables are limited in length, as the standard was meant to connect to peripherals on the same table-top, not between rooms or buildings. However, a USB port can be connected to a gateway that accesses distant devices.
USB uses a "master-slave" protocol for addressing peripheral devices; some extension to this limitation is possible through USB On-The-Go. A host cannot "broadcast" signals to all peripherals at once; each must be addressed individually. Some high-speed peripheral devices require sustained speeds not available in the USB standard. While converters exist between certain "legacy" interfaces and USB, they may not provide a full implementation of the legacy hardware. For a product developer, use of USB requires implementation of a complex protocol and implies an "intelligent" controller in the peripheral device. Developers of USB devices intended for public sale must obtain a USB ID, which requires a fee paid to the Implementers Forum. Developers of products that use the USB specification must sign an agreement with the Implementers Forum. Use of the USB logos on a product requires annual fees and membership in the organization. A group of seven companies began the development of USB in 1994: Compaq, DEC, IBM, Intel, Microsoft, NEC and Nortel.
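The "master-slave" arrangement described above can be sketched as a toy simulation (hypothetical classes, not a real USB stack): the host assigns each peripheral a unique address on attachment, initiates every transfer, and addresses devices one at a time rather than broadcasting.

```python
class Peripheral:
    """A device that only speaks when the host addresses it."""
    def __init__(self, name: str):
        self.name = name

    def respond(self, request: str) -> str:
        return f"{self.name}: ack {request}"

class Host:
    """The bus master: assigns addresses and initiates all transfers."""
    def __init__(self):
        self.devices = {}       # address -> Peripheral
        self.next_address = 1

    def enumerate(self, device: Peripheral) -> int:
        # On attach, the host gives the device a unique address.
        addr = self.next_address
        self.next_address += 1
        self.devices[addr] = device
        return addr

    def transfer(self, addr: int, request: str) -> str:
        # Every transfer is host-initiated and aimed at one address;
        # there is no broadcast to all peripherals at once.
        return self.devices[addr].respond(request)

host = Host()
kbd = host.enumerate(Peripheral("keyboard"))
cam = host.enumerate(Peripheral("camera"))
print(host.transfer(kbd, "poll"))  # → keyboard: ack poll
```

Because the host must poll each address individually, a peripheral cannot signal the host on its own initiative, which is the limitation that USB On-The-Go partially relaxes.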
The goal was to make it fundamentally easier to connect external devices to PCs by replacing the multitude of connectors at the back of PCs, addressing the usability issues of existing interfaces, simplifying the software configuration of all devices connected to USB, and permitting greater data rates for external devices. Ajay Bhatt and his team worked on the standard at Intel. The original USB 1.0 specification, introduced in January 1996, defined data transfer rates of 1.5 Mbit/s Low Speed and 12 Mbit/s Full Speed. Microsoft Windows 95 OSR 2.1 provided OEM support for the devices. The first widely used version of USB was 1.1, released in September 1998. The 12 Mbit/s data rate was intended for higher-speed devices such as disk drives, the lower 1.5 Mbit/s rate for low data
Small Computer System Interface (SCSI) is a set of standards for physically connecting and transferring data between computers and peripheral devices. The SCSI standards define commands and electrical and logical interfaces. SCSI is most commonly used for hard disk drives and tape drives, but it can connect a wide range of other devices, including scanners and CD drives, although not all controllers can handle all devices. The SCSI standard defines command sets for specific peripheral device types. The ancestral SCSI standard, X3.131-1986, commonly referred to as SCSI-1, was published by the X3T9 technical committee of the American National Standards Institute (ANSI) in 1986. SCSI-2 was published in August 1990 as X3.T9.2/86-109, with subsequent adoption of a multitude of interfaces. Further refinements have resulted in improvements in performance and support for ever-increasing storage data capacity. SCSI is derived from "SASI", the "Shugart Associates System Interface", developed circa 1978 and publicly disclosed in 1981. Larry Boucher is considered to be the "father" of SASI and SCSI due to his pioneering work, first at Shugart Associates and later at Adaptec.
A SASI controller provided a bridge between a hard disk drive's low-level interface and a host computer, which needed to read blocks of data. SASI controller boards were the size of a hard disk drive and were physically mounted to the drive's chassis. SASI, used in mini- and early microcomputers, defined the interface as using a 50-pin flat ribbon connector, which was adopted as the SCSI-1 connector. SASI is a compliant subset of SCSI-1, so that many, if not all, of the then-existing SASI controllers were SCSI-1 compatible. Until at least February 1982, ANSI developed the specification as "SASI" and "Shugart Associates System Interface". A full day was devoted to agreeing to name the standard "Small Computer System Interface", which Boucher intended to be pronounced "sexy", but ENDL's Dal Allan pronounced the new acronym as "scuzzy" and that stuck. A number of companies, such as NCR Corporation and Optimem, were early supporters of SCSI; the NCR facility in Wichita, Kansas is thought to have developed the industry's first SCSI controller chip.
The "small" reference in "Small Computer System Interface" is historical. Since its standardization in 1986, SCSI has been used in the Amiga, Apple Macintosh and Sun Microsystems computer lines and in PC server systems. Apple started using the less-expensive parallel ATA for its low-end machines with the Macintosh Quadra 630 in 1994, and added it to its high-end desktops starting with the Power Macintosh G3 in 1997. Apple dropped on-board SCSI in favor of IDE and FireWire with the Power Mac G3 in 1999, while still offering a PCI SCSI host adapter as an option up to the Power Macintosh G4 models. Sun switched its lower-end range to Serial ATA. Commodore included SCSI on the Amiga 3000/3000T systems, and it was an add-on to the previous Amiga 500/2000 models. Starting with the Amiga 600/1200/4000 systems, Commodore switched to the IDE interface. Atari included SCSI as standard in its Atari MEGA STE, Atari TT and Atari Falcon computer models. SCSI has never been popular in the low-priced IBM PC world, owing to the lower cost and adequate performance of the ATA hard disk standard.
However, SCSI drives and SCSI RAIDs became common in PC workstations for video or audio production. Recent physical versions of SCSI (Serial Attached SCSI, SCSI-over-Fibre Channel Protocol and USB Attached SCSI) break from the traditional parallel SCSI bus and perform data transfer via serial communications using point-to-point links. Although much of the SCSI documentation talks about the parallel interface, all modern development efforts use serial interfaces. Serial interfaces have a number of advantages over parallel SCSI, including higher data rates, simplified cabling, longer reach, improved fault isolation and full-duplex capability. The primary reason for the shift to serial interfaces is the clock skew issue of high-speed parallel interfaces, which makes the faster variants of parallel SCSI susceptible to problems caused by cabling and termination. The non-physical iSCSI preserves the basic SCSI paradigm, especially the command set, unchanged, through embedding of SCSI-3 over TCP/IP. Therefore, iSCSI uses logical connections instead of physical links and can run on top of any network supporting IP.
The actual physical links are realized on lower network layers, independently from iSCSI. Predominantly, Ethernet, which is itself of a serial nature, is used. SCSI remains popular on high-performance workstations and storage appliances. RAID subsystems on servers have used some kind of SCSI hard disk drive for decades, though a number of manufacturers offer SATA-based RAID subsystems as a cheaper option. Moreover, SAS offers compatibility with SATA devices, creating a much broader range of options for RAID subsystems, together with the existence of nearline SAS drives. Instead of SCSI, modern desktop computers and notebooks use SATA interfaces for internal hard disk drives, with M.2 and PCIe gaining popularity as SATA can bottleneck modern solid-state drives. SCSI is available in a variety of int
A motherboard is the main printed circuit board (PCB) found in general-purpose computers and other expandable systems. It holds, and allows communication between, many of the crucial electronic components of a system, such as the central processing unit (CPU) and memory, and provides connectors for other peripherals. Unlike a backplane, a motherboard contains significant sub-systems, such as the central processor, the chipset's input/output and memory controllers, interface connectors, and other components integrated for general-purpose use. "Motherboard" refers to a PCB with expansion capability; as the name suggests, this board is referred to as the "mother" of all components attached to it, which include peripherals, interface cards and daughtercards: sound cards, video cards, network cards, hard drives or other forms of persistent storage. The term "mainboard" is applied to devices with a single board and no additional expansions or capability, such as the controlling boards in laser printers, washing machines, mobile phones and other embedded systems with limited expansion abilities.
Prior to the invention of the microprocessor, a digital computer consisted of multiple printed circuit boards in a card-cage case with components connected by a backplane, a set of interconnected sockets. In old designs, copper wires were the discrete connections between card connector pins, but printed circuit boards soon became the standard practice. The central processing unit and peripherals were housed on individual printed circuit boards, which were plugged into the backplane. The ubiquitous S-100 bus of the 1970s is an example of this type of backplane system. The most popular computers of the 1980s, such as the Apple II and IBM PC, had published schematic diagrams and other documentation which permitted rapid reverse-engineering and third-party replacement motherboards. Intended for building new computers compatible with the originals, many of these motherboards offered additional performance or other features and were used to upgrade the manufacturer's original equipment. During the late 1980s and early 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard.
In the late 1980s, personal computer motherboards began to include single ICs capable of supporting a set of low-speed peripherals: keyboard, floppy disk drive, serial ports and parallel ports. By the late 1990s, many personal computer motherboards included consumer-grade embedded audio, video and networking functions without the need for any expansion cards at all. Business PCs, workstations and servers were more likely to need expansion cards, either for more robust functions or for higher speeds. Laptop and notebook computers that were developed in the 1990s integrated the most common peripherals; this included motherboards with non-upgradeable components, a trend that would continue as smaller systems were introduced after the turn of the century. Memory, network controllers, power source and storage would be integrated into some systems. A motherboard provides the electrical connections by which the other components of the system communicate. Unlike a backplane, it contains the central processing unit and hosts other subsystems and devices.
A typical desktop computer has its microprocessor, main memory and other essential components connected to the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices may be attached to the motherboard as plug-in cards or via cables. An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components; this chipset determines, to an extent, the capabilities of the motherboard. Modern motherboards include:
- Sockets (or slots) in which one or more microprocessors may be installed. In the case of CPUs in ball grid array packages, such as the VIA C3, the CPU is directly soldered to the motherboard.
- Memory slots into which the system's main memory is installed, in the form of DIMM modules containing DRAM chips.
- A chipset which forms an interface between the CPU's front-side bus, main memory and peripheral buses.
- Non-volatile memory chips containing the system's firmware or BIOS.
- A clock generator which produces the system clock signal to synchronize the various components.
- Slots for expansion cards.
- Power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, main memory and expansion cards.
As of 2007, some graphics cards require more power than the motherboard can provide, and thus dedicated connectors have been introduced to attach them directly to the power supply. Connectors for hard drives, typically SATA, are also included; disk drives connect to the power supply directly. Additionally, nearly all motherboards include logic and connectors to support commonly used input devices, such as USB for mouse devices and keyboards. Early personal computers
Silicon Integrated Systems
Silicon Integrated Systems (SiS) is a company that manufactures, among other things, motherboard chipsets. The company was founded in 1987 in Taiwan. In the late 1990s, SiS made the decision to invest in its own chip fabrication facilities. At the end of 1999, SiS acquired Rise Technology's mP6 x86 core technology. One of the most famous chipsets produced by SiS was the late 486-age 496/497 chipset, which supported the PCI bus alongside the older ISA and VLB buses. Mainboards using this chipset and equipped with CPUs such as the Intel 80486DX4, AMD 5x86 or Cyrix Cx5x86 offered performance and compatibility comparable with early Intel Pentium systems at a lower price. After this late success, SiS continued positioning itself as a budget chipset producer; the company emphasized high integration to minimize the cost of implementing its solutions. As such, SiS produced one-chip mainboard chipsets that included integrated video, such as the Socket 7-based SiS 5596, SiS 5598 and SiS 530, along with the Slot 1-based SiS 620.
These were some of the first PC chipsets with such high integration. They allowed entire system solutions to be built with just a mainboard, system RAM and a CPU. Early SiS chipsets include:
- SiS 310, 320, 320 "Rabbit"
- SiS 401/402 (ISA)
- SiS 406/411 (EISA, VESA Local Bus)
- SiS 460 (ISA, VESA Local Bus)
- SiS 461 (ISA, VESA Local Bus)
- SiS 471 (ISA, VESA Local Bus)
- SiS 496/497 (ISA, VLB, PCI)
- SiS 501/502/503 (PCI, ISA)
- SiS 5511/5512/5513 (ISA, PCI)
- SiS 5571 (ISA, PCI)
- SiS 5581/5582 (ISA, PCI, AGP)
- SiS 5591/5595 (ISA, PCI, AGP)
The SiS 530 with the SiS 5595 southbridge supported Socket 7, up to 1.5 GB of SDRAM and bus frequencies from 66 MHz to 124 MHz, and could allocate from 2 to 8 MiB of shared memory to its integrated AGP SiS 6306 2D/3D graphics controller. It also included an integrated UDMA66 IDE controller. Mainboards using the SiS 530 were positioned as cheap office platforms and paired with low-cost chips from Intel competitors, such as the AMD K6 series or Cyrix 6x86. The graphics controller had Direct3D 6.0 and OpenGL support, although it was a low-performance product for 3D acceleration.
The SiS 540 integrates the SiS 300 graphics controller. Chipsets of this era include:
- SiS 600 / SiS 5595
- SiS 620 / SiS 5595
- SiS 630 (includes north bridge, south bridge and a 2D/3D graphics controller on one chip)
- SiS 633
- SiS 635
SiS and ALi were the only two companies awarded licenses to produce third-party chipsets for the Pentium 4; SiS developed the 648 chipset under this license. Later chipsets include the SiS 755FX, SiS 760, SiS 760GX, SiS 761GX, SiS 756 and SiS 771. SiS also created a multimedia chipset for the Xbox 360. SiS southbridges are paired with SiS chipsets, such as the 661GX/761GX, which adopt a standard two-chip chipset design: the southbridge handles IDE, LAN and other I/O connectivity, while SiS's proprietary MuTIOL interconnect connects the southbridge chip to the northbridge, which contains the RAM controller and interfaces with the CPU. SiS graphics controllers include the SiS 6201, 6202, 6205, 6215, 6225, 6306 and 6326, and the SiS 300, 301, 305, 315, 320, 326, 330, 340, 360 and 380. Some cards contain a 3D graphics accelerator, but it is only functional with SiS's proprietary Windows-only driver.
However, the Linux kernel includes a working third-party driver that, while not supporting 3D gaming, makes the cards usable under Linux. Other SiS chips include the SiS 9202, 9203, 9220P, 9223, 9250, 9250H, 9251, 9252, 9255, 9272, 9275 and 9277.
In computing, a server is a computer program or a device that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model, in which a single overall computation is distributed across multiple processes or devices. Servers can provide various functionalities, called "services", such as sharing data or resources among multiple clients or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers and application servers. Client–server systems are today most often implemented by the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client with a result or acknowledgement. Designating a computer as "server-class hardware" implies that it is specialized for running servers.
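The request–response exchange described above can be illustrated with a toy example (hypothetical names; the "network" here is just a function call rather than a real transport): the client sends a request, and the server performs some action and returns a response carrying a result or an acknowledgement.

```python
def server_handle(request: dict) -> dict:
    """The server: performs an action and builds a response."""
    if request.get("op") == "add":
        return {"status": "ok", "result": request["a"] + request["b"]}
    return {"status": "error", "reason": "unknown operation"}

def client_call(request: dict) -> dict:
    """The client: sends a request and waits for the response."""
    # In a real system this would serialize the request and send it
    # over a network; here the exchange is a direct call.
    return server_handle(request)

response = client_call({"op": "add", "a": 2, "b": 3})
print(response)  # → {'status': 'ok', 'result': 5}
```

The same shape appears at every scale, from an in-process RPC stub to an HTTP exchange between machines: the client initiates, the server replies.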
This implies that it is more powerful and reliable than standard personal computers, although alternatively, large computing clusters may be composed of many simple, replaceable server components. The use of the word "server" in computing comes from queueing theory, where it dates to the mid-20th century, being notably used in Kendall's paper that introduced Kendall's notation. In earlier papers, such as Erlang's, more concrete terms such as "operators" are used. In computing, "server" dates at least to RFC 5, one of the earliest documents describing ARPANET, where it is contrasted with "user", distinguishing two types of host: "server-host" and "user-host". The use of "serving" also dates to early documents, such as RFC 4, which contrasts "serving-host" with "using-host". The Jargon File defines "server" in the common sense of a process performing service for requests, usually remote, with the 1981 version reading: SERVER n. A kind of DAEMON which performs a service for the requester, which often runs on a computer other than the one on which the server runs.
Strictly speaking, the term "server" refers to a computer program or process. Through metonymy, it also refers to a device used for running one or several server programs. On a network, such a device is called a host. In addition to "server", the words "serve" and "service" are used, though "servicer" and "servant" are not; the word "service" may refer either to the abstract form of functionality, e.g. a Web service, or to a computer program that turns a computer into a server, e.g. a Windows service. Originally used as "servers serve users", in the sense of "obey", today one says that "servers serve data", in the same sense as "give". For instance, web servers "serve web pages to users" or "service their requests". The server is part of the client–server model. The nature of communication between a client and server is request and response; this is in contrast with the peer-to-peer model. In principle, any computerized process that can be used or called by another process is a server, and the calling process or processes is a client; thus any general-purpose computer connected to a network can host servers.
For example, if files on a device are shared by some process, that process is a file server. Web server software can run on any capable computer, so a laptop or a personal computer can host a web server. While request–response is the most common client–server design, there are others, such as the publish–subscribe pattern. In the publish–subscribe pattern, clients register with a pub–sub server, subscribing to specified types of messages; thereafter, the pub–sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request–response. The purpose of a server is to share data as well as to distribute work. A server computer can serve its own computer programs as well; the following table shows several scenarios. The entire structure of the Internet is based upon a client–server model. High-level root nameservers, DNS and routers direct the traffic on the Internet. There are millions of servers connected to the Internet, running continuously throughout the world, and every action taken by an ordinary Internet user requires one or more interactions with one or more servers.
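The publish–subscribe arrangement described above can be sketched in a few lines (hypothetical class and topic names, not a real messaging API): clients register interest in message types once, and the server then pushes matching messages without any further request from the clients.

```python
from collections import defaultdict

class PubSubServer:
    """Minimal pub-sub broker: topic -> list of subscriber callbacks."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # A client registers once, naming the message type it wants.
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The server pushes to every subscriber of this topic;
        # no client pulls anything.
        for callback in self.subscribers[topic]:
            callback(message)

received = []
server = PubSubServer()
server.subscribe("stock-ticks", received.append)
server.publish("stock-ticks", {"symbol": "IBM", "price": 142.5})
server.publish("weather", {"temp": 21})  # no subscribers; dropped
print(received)  # → [{'symbol': 'IBM', 'price': 142.5}]
```

Note the inversion relative to request–response: after the initial subscription, the flow of messages is driven entirely by the server.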
There are exceptions. Hardware requirements for servers vary widely, depending on the server's purpose and its software. Since servers are usually accessed over a network, many run unattended without a computer monitor, input devices, audio hardware or USB interfaces. Many servers do not have a graphical user interface; they are managed remotely. Remote management can be conducted via various methods, including Microsoft Management Console, PowerShell, SSH and browser-based out-of-band management systems such as Dell's iDRAC or HP's iLO. Large traditional single servers would need to be run for long periods without interruption. Ava