Disk partitioning or disk slicing is the creation of one or more regions on secondary storage so that each region can be managed separately. These regions are called partitions, and creating them is the first step in preparing a newly installed disk, before any file system is created. The disk stores the information about the partitions' locations and sizes in an area known as the partition table, which the operating system reads before any other part of the disk; each partition then appears to the operating system as a distinct "logical" disk that uses part of the actual disk. System administrators use a program called a partition editor to create, resize, and otherwise manipulate partitions. Partitioning allows different file systems to be installed for different kinds of files, and separating user data from system data can prevent the system partition from becoming full and rendering the system unusable. Partitioning can also make backing up easier. A disadvantage is that it can be difficult to size partitions properly, resulting in one partition with too much free space and another nearly full.
This section describes the master boot record (MBR) partitioning scheme, as used in DOS, Microsoft Windows, and Linux on PC-compatible computer systems. As of the mid-2010s, most new computers use the GUID Partition Table (GPT) partitioning scheme instead; for examples of other partitioning schemes, see the general article on partition tables. The total data storage space of a PC HDD on which MBR partitioning is implemented can contain at most four primary partitions, or alternatively three primary partitions and an extended partition. The partition table, located in the master boot record, contains 16-byte entries, each of which describes a partition; the partition type is identified by a 1-byte code found in its partition table entry. Some of these codes indicate the presence of an extended partition, but most are used by an operating system's bootloader to decide whether a partition contains a file system that can be mounted or accessed for reading or writing data. A primary partition contains one file system.
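The on-disk layout just described can be illustrated with a short parser. The sketch below (Python, assuming a raw 512-byte MBR sector has already been read from the disk) decodes the four 16-byte primary-partition entries that start at byte offset 446 of the sector; the field offsets follow the standard MBR layout.

```python
import struct

def parse_mbr(sector: bytes):
    """Parse the four primary partition entries from a 512-byte MBR sector."""
    assert len(sector) == 512 and sector[510:512] == b"\x55\xaa", "not a valid MBR"
    partitions = []
    for i in range(4):
        entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
        # 1-byte status, 3-byte CHS start (skipped), 1-byte type,
        # 3-byte CHS end (skipped), 4-byte LBA start, 4-byte sector count
        status, ptype, lba_start, num_sectors = struct.unpack("<B3xB3xII", entry)
        if ptype != 0x00:  # type 0x00 marks an unused entry
            partitions.append({
                "bootable": status == 0x80,
                "type": ptype,            # e.g. 0x83 = Linux, 0x05 = extended
                "lba_start": lba_start,   # first sector of the partition
                "sectors": num_sectors,   # partition length in sectors
            })
    return partitions
```

Each dictionary returned corresponds to one partition table entry; an operating system performs essentially this decoding when it enumerates the "logical" disks on a drive.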
In DOS and early versions of Microsoft Windows, Microsoft required what it called the system partition to be the first partition. All Windows operating systems from Windows 95 onwards can be located on any partition, but the boot files must reside on a primary partition. Other factors, such as a PC's BIOS, may also impose specific requirements as to which partition must contain the primary OS. The partition type code for a primary partition can either correspond to a file system contained within or indicate that the partition has a special use. The FAT16 and FAT32 file systems have made use of a number of partition type codes due to the limits of various DOS and Windows versions. Though a Linux operating system may recognize a number of different file systems, they have all used the same partition type code: 0x83. An HDD may contain only one extended partition, but that extended partition can be subdivided into multiple logical partitions, and DOS/Windows systems may assign a unique drive letter to each logical partition.
With DOS, Microsoft Windows, and OS/2, a common practice is to use one primary partition for the active file system that will contain the operating system, the page/swap file, all utilities, and user data. On most Windows consumer computers, the drive letter C: is assigned to this primary partition. Other partitions may exist on the HDD that may or may not be visible as drives, such as recovery partitions or partitions with diagnostic tools or data. Microsoft Windows 2000, XP, and Windows 7 include a 'Disk Management' program which allows for the creation and resizing of FAT and NTFS partitions. The Windows Disk Manager in Windows Vista and Windows 7 uses a 1 MB partition alignment scheme that is fundamentally incompatible with Windows 2000, XP, OS/2, and DOS, as well as many other operating systems. On Unix-based and Unix-like operating systems such as Linux, macOS, BSD, and Solaris, it is possible to use multiple partitions on a disk device; each partition can be formatted with a file system or used as a swap partition. Multiple partitions allow directories such as /boot, /tmp, /usr, /var, or /home to be allocated their own file systems.
Such a scheme has a number of advantages: if one file system gets corrupted, the data outside that file system/partition may stay intact, minimizing data loss. Specific file systems can be mounted with different parameters, e.g. read-only or with execution of setuid files disabled. A runaway program that uses up all available space on a non-system file system does not fill up critical file systems. Keeping user data such as documents separate from system files allows the system to be updated with lessened risk of disturbing the data. A common minimal configuration for Linux systems is to use three partitions: one holding the system files mounted on "/", one holding user configuration files and data mounted on /home, and a swap partition. By default, macOS systems use a single partition for the entire file system and use a swap file inside the file system rather than a swap partition. In Solaris, partitions are sometimes known as slices; this is a conceptual reference to the slicing of a cake into several pieces.
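As an illustration of how directories map onto separate filesystems, the sketch below resolves a path to its mount point using longest-prefix matching, which is the rule a real mount table applies; the mount list and paths here are hypothetical.

```python
import os.path

def mount_for(path, mount_points):
    """Return the mount point whose filesystem holds `path`
    (longest matching prefix wins, as in a real mount table)."""
    path = os.path.normpath(path)
    best = "/"
    for mp in mount_points:
        mp_norm = os.path.normpath(mp)
        if path == mp_norm or path.startswith(mp_norm.rstrip("/") + "/"):
            if len(mp_norm) > len(best):
                best = mp_norm
    return best

# Hypothetical layout matching the three-plus-one scheme described above
mounts = ["/", "/home", "/var", "/boot"]
# A log flood filling the /var filesystem cannot consume the space
# holding user documents, because those live on the /home filesystem.
```

This is why a runaway logger that fills /var leaves /home and / untouched: the two directories live on physically separate allocations.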
Computer hardware includes the physical, tangible parts or components of a computer, such as the cabinet, central processing unit, keyboard, computer data storage, graphics card, sound card and motherboard. By contrast, software is the set of instructions that can be run by the hardware. Hardware is so termed because it is rigid with respect to changes or modifications. Intermediate between software and hardware is firmware: software that is tightly coupled to the particular hardware of a computer system, and thus the most difficult to change but also among the most stable with respect to consistency of interface. The progression from levels of "hardness" to "softness" in computer systems parallels the progression of layers of abstraction in computing. Hardware is directed by the software to execute commands or instructions. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware components. The template for all modern computers is the Von Neumann architecture, detailed in a 1945 paper by Hungarian mathematician John von Neumann.
This architecture describes a design for an electronic digital computer with subdivisions of a processing unit consisting of an arithmetic logic unit and processor registers, a control unit containing an instruction register and program counter, a memory to store both data and instructions, external mass storage, and input and output mechanisms. The meaning of the term has evolved to mean a stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus; this is referred to as the Von Neumann bottleneck and limits the performance of the system. The personal computer, known as the PC, is one of the most common types of computer due to its versatility and low price. Laptops are very similar, although they may use lower-power or reduced-size components, and thus offer lower performance. The computer case encloses most of the components of the system. It provides mechanical support and protection for internal elements such as the motherboard, disk drives and power supplies, and controls and directs the flow of cooling air over internal components.
The case is also part of the system that controls electromagnetic interference radiated by the computer and protects internal parts from electrostatic discharge. Large tower cases provide extra internal space for multiple disk drives or other peripherals and stand on the floor, while desktop cases provide less expansion room. All-in-one style designs include a video display built into the same case. Portable and laptop computers require cases that provide impact protection for the unit. A current development in laptop computers is a detachable keyboard, which allows the system to be configured as a touch-screen tablet. Hobbyists may decorate the cases with colored lights, paint, or other features, in an activity called case modding. A power supply unit converts alternating current electric power to low-voltage DC power for the internal components of the computer. Laptops are capable of running from a built-in battery for a period of hours. The motherboard is the main component of a computer. It is a board with integrated circuitry that connects the other parts of the computer, including the CPU, the RAM and the disk drives, as well as any peripherals connected via the ports or the expansion slots.
Components directly attached to or part of the motherboard include: The CPU, which performs most of the calculations that enable a computer to function, and is sometimes referred to as the brain of the computer. It is cooled by a heatsink and fan, or a water-cooling system. Most newer CPUs include an on-die graphics processing unit. The clock speed of a CPU governs how fast it executes instructions and is measured in GHz. Many modern computers have the option to overclock the CPU, which enhances performance at the expense of greater thermal output and thus a need for improved cooling. The chipset, which includes the north bridge, mediates communication between the CPU and the other components of the system, including main memory. Random-access memory (RAM), which stores the code and data that are being actively accessed by the CPU. For example, when a web browser is opened on the computer it takes up memory. RAM comes on DIMMs in sizes such as 2 GB, 4 GB and 8 GB, but can be much larger. Read-only memory (ROM), which stores the BIOS that runs when the computer is powered on or otherwise begins execution, a process known as bootstrapping, or "booting" or "booting up".
The BIOS includes boot firmware and power management firmware. Newer motherboards use the Unified Extensible Firmware Interface (UEFI) instead of BIOS. Buses connect the CPU to various internal components and to expansion cards for graphics and sound. The CMOS battery, which powers the memory for the date and time in the BIOS chip; this battery is generally a watch battery. The video card, which processes computer graphics; more powerful graphics cards are better suited to handle strenuous tasks, such as playing intensive video games. An expansion card in computing is a printed circuit board that can be inserted into an expansion slot of a computer motherboard or
Hard disk drive
A hard disk drive, hard disk, hard drive, or fixed disk, is an electromechanical data storage device that uses magnetic storage to store and retrieve digital information using one or more rigid rotating disks (platters) coated with magnetic material. The platters are paired with magnetic heads arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored or retrieved in any order and not only sequentially. HDDs are a type of non-volatile storage, retaining stored data when powered off. Introduced by IBM in 1956, HDDs became the dominant secondary storage device for general-purpose computers by the early 1960s. Continuously improved, HDDs have maintained this position into the modern era of servers and personal computers. More than 200 companies have produced HDDs, though after extensive industry consolidation most units are manufactured by Seagate and Western Digital. HDDs dominate the volume of storage produced for servers.
Though production is growing, sales revenues and unit shipments are declining because solid-state drives (SSDs) have higher data-transfer rates, higher areal storage density, better reliability, and much lower latency and access times. The revenues for SSDs, most of which use NAND flash memory, exceed those for HDDs. Though SSDs have nearly 10 times higher cost per bit, they are replacing HDDs in applications where speed, power consumption, small size, and durability are important. The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte drive has a capacity of 1,000 gigabytes. Some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and for inbuilt redundancy for error correction and recovery. There is also confusion regarding storage capacity, since capacities are stated in decimal gigabytes by HDD manufacturers, whereas some operating systems report capacities in binary gibibytes, which results in a smaller number than advertised.
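The decimal/binary discrepancy is simple arithmetic. The sketch below converts an advertised decimal capacity (powers of 1000) into the binary units (powers of 1024) that some operating systems report:

```python
def advertised_vs_reported(terabytes):
    """An HDD sold as `terabytes` TB (decimal, powers of 1000) is reported
    by a binary-prefix OS as a smaller number of TiB (powers of 1024).
    Returns (total bytes, capacity in TiB)."""
    decimal_bytes = terabytes * 1000**4   # manufacturer's definition of a TB
    tebibytes = decimal_bytes / 1024**4   # what a binary-unit OS displays
    return decimal_bytes, tebibytes

# A "1 TB" drive holds 1,000,000,000,000 bytes, which an operating system
# reporting in binary units shows as roughly 0.91 TiB.
```

The same drive thus appears about 9% "smaller" in a binary-unit display, even though not a single byte is missing.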
Performance is specified by the time required to move the heads to a track or cylinder (average seek time), plus the time it takes for the desired sector to move under the head (average rotational latency), and the speed at which the data is transmitted (data rate). The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as SATA, USB or SAS cables. The first production IBM hard disk drive, the 350 disk storage, shipped in 1957 as a component of the IBM 305 RAMAC system. It was the size of two medium-sized refrigerators and stored five million six-bit characters on a stack of 50 disks. In 1962, the IBM 350 was superseded by the IBM 1301 disk storage unit, which consisted of 50 platters, each about 1/8 inch thick and 24 inches in diameter. While the IBM 350 used only two read/write heads, the 1301 used an array of heads, one per platter, moving as a single unit. Cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches above the platter surface.
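The access-time components just listed can be combined into a rough back-of-the-envelope model. The function below is a simplification (it ignores caching and command queuing), and the example figures are typical values rather than measurements of any specific drive:

```python
def average_access_time_ms(seek_ms, rpm, transfer_mb_s, block_kb=4.0):
    """Rough HDD access-time model: average seek time, plus half a
    revolution of rotational latency, plus the time to stream one
    block off the platter. All results in milliseconds."""
    rotational_latency_ms = 0.5 * (60_000 / rpm)          # half a rotation
    transfer_ms = (block_kb / 1024) / transfer_mb_s * 1000  # block read time
    return seek_ms + rotational_latency_ms + transfer_ms

# For a hypothetical 7200 RPM drive with a 9 ms average seek and a
# 150 MB/s media rate, rotational latency alone contributes ~4.17 ms,
# so the total is dominated by mechanical delays, not data transfer.
```

This is why seek time and spindle speed, not raw transfer rate, dominate small random-access workloads on HDDs.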
Motion of the head array depended upon a binary adder system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three home refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes. Access time was about a quarter of a second. Also in 1962, IBM introduced the model 1311 disk drive, which was about the size of a washing machine and stored two million characters on a removable disk pack. Users could interchange disk packs as needed, much like reels of magnetic tape. Models of removable pack drives, from IBM and others, became the norm in most computer installations and reached capacities of 300 megabytes by the early 1980s. Non-removable HDDs were called "fixed disk" drives. Some high-performance HDDs were manufactured with one head per track so that no time was lost physically moving the heads to a track. Known as fixed-head or head-per-track disk drives, they were expensive and are no longer in production. In 1973, IBM introduced a new type of HDD code-named "Winchester".
Its primary distinguishing feature was that the disk heads were not withdrawn from the stack of disk platters when the drive was powered down. Instead, the heads were allowed to "land" on a special area of the disk surface upon spin-down, "taking off" again when the disk was later powered on. This reduced the cost of the head actuator mechanism but precluded removing just the disks from the drive, as was done with the disk packs of the day. Instead, the first models of "Winchester technology" drives featured a removable disk module, which included both the disk pack and the head assembly, leaving the actuator motor in the drive upon removal. Later "Winchester" drives abandoned the removable media concept and returned to non-removable platters. Like the first removable pack drive, the first "Winchester" drives used platters 14 inches in diameter. A few years later, designers were exploring the possibility that physically smaller platters might offer advantages. Drives with non-removable eight-inch platters appeared, followed by drives that used a 5 1⁄4-inch form factor.
The latter were intended for the then-fledgling personal computer market.
A product key, also known as a software key, is a specific software-based key for a computer program. It certifies that the copy of the program is original. Activation is sometimes done offline by entering the key, while with software like Windows 8.1, online activation is required to prevent multiple people using the same key. Not all software has a product key, as some publishers may choose to use a different method to protect their copyright, or, in some cases such as free or open-source software, copyright protection is not used. Computer games also use product keys to verify that the game has not been copied without authorization; one is not allowed to play online with two identical product keys at the same time. Product keys consist of a series of numbers and/or letters. This sequence is entered by the user during the installation of computer software and is passed to a verification function in the program. This function manipulates the key sequence according to a mathematical algorithm and attempts to match the results to a set of valid solutions. Standard key generation, where product keys are generated mathematically, is not effective in stopping copyright infringement of software, as these keys can be distributed.
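A verification function of the kind described can be sketched as follows. The scheme here (five groups of five characters whose character values each sum to a multiple of 7) is entirely made up for illustration and does not correspond to any real vendor's algorithm; it also shows why purely mathematical checks are weak, since anyone who learns the rule can generate valid keys.

```python
def validate_key(key):
    """Toy product-key verification (hypothetical scheme): the key is
    five 5-character groups separated by dashes, and the sum of the
    character values in each group must be divisible by 7."""
    groups = key.split("-")
    if len(groups) != 5 or any(len(g) != 5 for g in groups):
        return False
    # Alphabet omits 0/O and 1/I to avoid transcription ambiguity
    alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZ23456789"
    try:
        return all(sum(alphabet.index(c) for c in g) % 7 == 0 for g in groups)
    except ValueError:   # character outside the allowed alphabet
        return False
```

Because the check is deterministic and offline, a "keygen" only needs to invert this rule; that is exactly the weakness that drives publishers toward the server-side methods discussed below.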
The overall effectiveness of product keys in enforcing software copyrights requires further study. In addition, with improved communication from the rise of the Internet, more sophisticated attacks on keys such as cracks and product key generators have become common. Because of this, software publishers are turning to alternative methods of verifying that keys are both valid and uncompromised. One method, product validation, assigns a product key based on a unique feature of the purchaser's computer hardware, which cannot be easily duplicated since it depends on the user's hardware. Other newer methods may involve requiring periodic validation of the CD key with an internet server. The validation can be performed on the server side, preventing cracks from tampering with it. The server can keep a blacklist of all CD keys known to be invalid or which have explicitly been banned, and deny them access. Some of the most effective CD key protection is controversial, due to inconvenience, strict enforcement, harsh penalties and, in some cases, false positives.
CD key protection has been linked to digital rights management, in that it uses uncompromising digital procedures to enforce the license agreement. Product keys are somewhat inconvenient for end users: not only do they need to be entered whenever a program is installed, but the user must also be sure not to lose them, since loss of a product key means the software is useless once uninstalled. Product keys also present new ways for distribution to go wrong: if a product is shipped with missing or invalid keys, the CD itself is useless. For example, all copies of Splinter Cell: Pandora Tomorrow shipped to Australia without CD keys. There are many cases of permanent bans enforced by companies detecting usage violations; it is common for an online system to blacklist a CD key caught running cracks or, in some cases, cheats. This results in a permanent ban, and players who wish to continue use of the software must repurchase it. This has led to criticism over the motivations of enforcing permanent bans. Also controversial is the situation which arises when multiple products' keys are bound together.
If products have dependencies on other products, it is common for companies to ban all bound products. For example, if a fake CD key is used with an expansion pack, the server may ban legitimate CD keys from the original game. With Valve's Steam service, all products the user has purchased are bound into one account; if this account is banned, every product on it will be banned from online play. This "multi-ban" is controversial, since it bans users from products which they have legitimately purchased and used. Bans are often enforced by servers upon detection of cracks or cheats without human intervention, and sometimes legitimate users are wrongly deemed in violation of the license and banned. In large cases of false positives, the bans are sometimes corrected; however, individual cases may not be given any attention. A common cause of false positives is users of unsupported platforms. For example, users of Linux can run Windows applications through compatibility layers such as Wine and Cedega. This software combination sometimes triggers a game's server-side anti-cheat software, resulting in a ban, because Wine or Cedega is a Windows API compatibility layer for Linux and is therefore considered third-party software by the game's server.
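Server-side validation with a blacklist, as described above, can be modeled with a minimal sketch. The class and its methods are hypothetical, not any real activation service's API; it captures three behaviors from the text: rejecting blacklisted keys, refusing the same key on two machines at once, and banning a key outright.

```python
class ActivationServer:
    """Sketch of server-side key validation with a blacklist of
    banned/leaked keys and a registry of machines currently using a key."""

    def __init__(self, blacklist=None):
        self.blacklist = set(blacklist or [])
        self.active = {}   # key -> hardware id currently using it

    def activate(self, key, hardware_id):
        """Return True if `key` may be used on `hardware_id`."""
        if key in self.blacklist:
            return False
        in_use_by = self.active.get(key)
        if in_use_by is not None and in_use_by != hardware_id:
            return False   # same key already running on another machine
        self.active[key] = hardware_id
        return True

    def ban(self, key):
        """Blacklist a key (e.g. caught running cracks) and drop its session."""
        self.blacklist.add(key)
        self.active.pop(key, None)
```

Because the check runs on the server, a client-side crack cannot bypass it; the trade-off, as the text notes, is that a false-positive ban locks a legitimate user out with no local recourse.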
A key recovery program is a program designed to read and decrypt and/or de-obfuscate a key to its original state. When the key is returned to its original state, it may be used to reinstall its corresponding software. See also: Biometric passport, Cryptographic hash function, Intel Upgrade Service, Keygen, License manager, Product activation, Serial number, Software license server, Volume license key.
Microsoft Windows is a group of several graphical operating system families, all of which are developed and sold by Microsoft. Each family caters to a certain sector of the computing industry. Active Windows families include Windows NT and Windows Embedded. Defunct Windows families include Windows Mobile and Windows Phone. Microsoft introduced an operating environment named Windows on November 20, 1985, as a graphical operating system shell for MS-DOS, in response to the growing interest in graphical user interfaces. Microsoft Windows came to dominate the world's personal computer market with over 90% market share, overtaking Mac OS, which had been introduced in 1984. Apple came to see Windows as an unfair encroachment on their innovation in GUI development as implemented on products such as the Lisa and Macintosh. On PCs, Windows is still the most popular operating system. However, in 2014, Microsoft admitted losing the majority of the overall operating system market to Android, because of the massive growth in sales of Android smartphones.
In 2014, the number of Windows devices sold was less than 25% that of Android devices sold. This comparison, however, may not be fully relevant, as the two operating systems traditionally target different platforms. Still, numbers for server use of Windows show one third market share, similar to that for end-user use. As of October 2018, the most recent version of Windows for PCs, tablets and embedded devices is Windows 10; the most recent version for server computers is Windows Server 2019. A specialized version of Windows also runs on the Xbox One video game console. Microsoft, the developer of Windows, has registered several trademarks, each of which denotes a family of Windows operating systems that target a specific sector of the computing industry. As of 2014, the following Windows families were being developed: Windows NT: Started as a family of operating systems with Windows NT 3.1, an operating system for server computers and workstations. It now consists of three operating system subfamilies that are released at the same time and share the same kernel: Windows: The operating system for mainstream personal computers and smartphones.
The latest version is Windows 10. The main competitor of this family is macOS by Apple for personal computers and Android for mobile devices. Windows Server: The operating system for server computers. The latest version is Windows Server 2019; unlike its client sibling, it has adopted a strong naming scheme. The main competitor of this family is Linux. Windows PE: A lightweight version of its Windows sibling, meant to operate as a live operating system, used for installing Windows on bare-metal computers and for recovery or troubleshooting purposes. The latest version is Windows PE 10. Windows IoT: Initially, Microsoft developed Windows CE as a general-purpose operating system for every device that was too resource-limited to be called a full-fledged computer. Windows CE was later renamed Windows Embedded Compact and folded under the Windows Compact trademark, which consists of Windows Embedded Industry, Windows Embedded Professional, Windows Embedded Standard, Windows Embedded Handheld and Windows Embedded Automotive.
The following Windows families are no longer being developed: Windows 9x: An operating system that targeted the consumer market. It was discontinued because of suboptimal performance; Microsoft now caters to the consumer market with Windows NT. Windows Mobile: The predecessor to Windows Phone, it was a mobile phone operating system. The first version was called Pocket PC 2000; the last version was Windows Mobile 6.5. Windows Phone: An operating system sold only to manufacturers of smartphones. The first version was Windows Phone 7, followed by Windows Phone 8, and the last version was Windows Phone 8.1; it was succeeded by Windows 10 Mobile. The term Windows collectively describes any or all of several generations of Microsoft operating system products. The history of Windows dates back to 1981, when Microsoft started work on a program called "Interface Manager". It was announced in November 1983 under the name "Windows", but Windows 1.0 was not released until November 1985.
Windows 1.0 achieved little popularity. Windows 1.0 is not a complete operating system; the shell of Windows 1.0 is a program known as the MS-DOS Executive. Components included Calculator, Cardfile, Clipboard viewer, Control Panel, Paint, Reversi and Write. Windows 1.0 does not allow overlapping windows; instead, all windows are tiled, and only modal dialog boxes may appear over other windows. Microsoft sold Windows Development libraries with the C development environment, which included numerous windows samples. Windows 2.0 was released in December 1987 and was more popular than its predecessor. It features several improvements to the user interface and memory management. Windows 2.03 changed the OS from tiled windows to overlapping windows; this change led to Apple Computer filing a suit against Microsoft alleging infringement of Apple's copyrights. Windows 2.0
A software wizard or setup assistant is a user interface type that presents a user with a sequence of dialog boxes that lead the user through a series of well-defined steps. Tasks that are complex, infrequently performed, or unfamiliar may be easier to perform using a wizard. Before the 1990s, "wizard" was a common term for a technical expert, somewhat akin to "hacker." When developing the first version of its desktop publishing software, Microsoft Publisher, around 1991, Microsoft wanted to let users with no graphic design skill make documents that still looked good. Publisher was targeted at non-professionals, and Microsoft figured that, no matter what tools the program had, users wouldn't know what to do with them. Publisher's "Page Wizards" instead provided a set of forms to produce a complete document layout, based on a professionally designed template, which could then be manipulated with the standard tools. Wizards had been in development at Microsoft for several years before Publisher, notably for Microsoft Access, which wouldn't ship until November 1992.
Wizards were intended to learn from how someone used a program and anticipate what they might want to do next, guiding them through more complex sets of tasks by structuring and sequencing them. They also served to teach the product by example. As early as 1989, Microsoft discussed using voice and talking heads as guides, but multimedia-capable hardware was not yet widespread. The feature spread to other applications: in 1992, Excel 4.0 for Mac introduced wizards for tasks like building crosstab tables, and Windows used wizards for tasks like printer or Internet configuration. By 2001, wizards had become commonplace in most consumer-oriented operating systems, although not always under the name "wizard"; in Mac OS X they are called "assistants", and GNOME likewise refers to its wizards as "assistants". Today, a wizard-like experience is often used to "onboard" users the first time they open an app, and many web applications, for instance online booking sites, use the wizard paradigm to complete lengthy interactive processes.
Oracle Designer uses wizards extensively. The Microsoft Manual of Style for Technical Publications urges technical writers to refer to these assistants as "wizards" and to use lowercase letters. The following screenshots show the installation wizard for Kubuntu 12.04, a free and open-source operating system. The wizard consists of seven steps; by the end of step seven, the operation is complete. See also: Expert system, Virtual assistant, Office Assistant. External links: Wizards — Microsoft Windows Dev Center; Wizards — Eclipse User Interface Guidelines.
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over cable media such as wires or optical cables, or over wireless media such as Wi-Fi. Network devices that originate and terminate the data are called network nodes. Nodes are identified by network addresses and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other, more general communications protocols. This formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, as well as many others.
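The protocol layering mentioned above can be caricatured in a few lines: each layer wraps the data handed down from the layer above with its own header, so the outermost header belongs to the lowest layer. The bracketed text headers below are purely illustrative; real protocols use binary header formats.

```python
def encapsulate(payload, layers):
    """Sketch of protocol layering: each layer in `layers` (listed from the
    application layer downward) prepends its own hypothetical header."""
    for layer in layers:
        payload = ("[" + layer + "]").encode() + payload
    return payload

# An application message wrapped by transport, network and link layers:
packet = encapsulate(b"GET /index.html", ["HTTP", "TCP", "IP", "Ethernet"])
```

On the receiving node, each layer strips its own header and hands the remainder upward, which is how an application-specific protocol can ride over more general ones without either side knowing the other's details.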
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, traffic control mechanisms and organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. Also in 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the Aloha network, developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls and video conferencing. A network also allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or use of a shared storage device. A network allows sharing of files, and