In computing, a file system or filesystem controls how data is stored and retrieved. Without a file system, information placed on a storage medium would be one large body of data with no way to tell where one piece of information stops and the next begins. By separating the data into pieces and giving each piece a name, the information is easily isolated and identified. Taking its name from the way paper-based information systems are named, each group of data is called a "file"; the structure and logic rules used to manage the groups of information and their names are called a "file system". There are many different kinds of file systems; each one has a different structure and logic, and different properties of speed, security and more. Some file systems have been designed to be used for specific applications. For example, the ISO 9660 file system is designed for optical discs. File systems can be used on numerous different types of storage devices that use different kinds of media; as of 2019, hard disk drives have been key storage devices and are projected to remain so for the foreseeable future.
Other kinds of media that are used include SSDs, magnetic tapes, and optical discs. In some cases, such as with tmpfs, the computer's main memory is used to create a temporary file system for short-term use. Some file systems are used on local data storage devices; others provide file access via a network protocol. Some file systems are "virtual", meaning that the supplied "files" are computed on request or are a mapping into a different file system used as a backing store. The file system manages access to both the content of files and the metadata about those files, and is responsible for arranging storage space. Before the advent of computers the term file system was used to describe a method of storing and retrieving paper documents. By 1961 the term was being applied to computerized filing alongside the original meaning. By 1964 it was in general use. A file system consists of three layers; sometimes the layers are explicitly separated, and sometimes the functions are combined. The logical file system is responsible for interaction with the user application. It provides the application program interface for file operations (OPEN, CLOSE, READ, etc.) and passes the requested operation to the layer below it for processing.
The logical file system "manage open file table entries and per-process file descriptors." This layer provides "file access, directory operations and protection."The second optional layer is the virtual file system. "This interface allows support for multiple concurrent instances of physical file systems, each of, called a file system implementation."The third layer is the physical file system. This layer is concerned with the physical operation of the storage device, it processes physical blocks being written. It handles buffering and memory management and is responsible for the physical placement of blocks in specific locations on the storage medium; the physical file system interacts with the device drivers or with the channel to drive the storage device. Note: this only applies to file systems used in storage devices. File systems allocate space in a granular manner multiple physical units on the device; the file system is responsible for organizing files and directories, keeping track of which areas of the media belong to which file and which are not being used.
For example, in Apple DOS of the early 1980s, 256-byte sectors on a 140-kilobyte floppy disk used a track/sector map. Granular allocation results in unused space when a file is not an exact multiple of the allocation unit; this is sometimes referred to as slack space. For a 512-byte allocation unit, the average unused space is 256 bytes. For 64 KB clusters, the average unused space is 32 KB. The size of the allocation unit is chosen when the file system is created. Choosing the allocation size based on the average size of the files expected to be in the file system can minimize the amount of unusable space, and the default allocation size generally provides reasonable usage. Choosing an allocation size that is too small results in excessive overhead if the file system will contain very large files. File system fragmentation occurs when unused space or single files are not contiguous. As a file system is used, files are created, modified, and deleted. When a file is created the file system allocates space for the data; some file systems permit or require specifying an initial space allocation and subsequent incremental allocations as the file grows.
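The slack-space figures above follow from simple arithmetic: a file's final cluster is, on average, half full. A small sketch confirms the 256-byte and 32 KB averages, with randomly generated file sizes standing in for a real workload:

```python
import random

def slack_space(file_size, cluster_size):
    """Bytes wasted in the last cluster allocated to a file."""
    remainder = file_size % cluster_size
    return 0 if remainder == 0 else cluster_size - remainder

# Expected slack per file is about half a cluster:
# 256 bytes for 512-byte clusters, 32 KB for 64 KB clusters.
random.seed(0)
files = [random.randint(1, 1_000_000) for _ in range(10_000)]
for cluster in (512, 64 * 1024):
    avg = sum(slack_space(f, cluster) for f in files) / len(files)
    print(f"cluster {cluster:>6} B: average slack ~ {avg:,.0f} B")
```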
As files are deleted, the space they were allocated is considered available for use by other files. This creates alternating used and unused areas of various sizes; this is free space fragmentation. When a file is created and there is not an area of contiguous space available for its initial allocation, the space must be assigned in fragments. When a file is modified such that it becomes larger, it may exceed the space initially allocated to it; another allocation must be assigned elsewhere and the file becomes fragmented. A filename is used to identify a storage location in the file system. Most file systems have restrictions on the length of filenames. In some file systems, filenames are not case sensitive. Most modern file systems allow filenames to contain a wide range of characters from the Unicode character set. However, they may have restrictions on the use of certain special characters, disallowing them within filenames.
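As an illustration of such restrictions, the short check below uses the reserved-character set commonly associated with Windows file systems; the exact set of disallowed characters and the length limit vary by file system, so treat both as stand-in values:

```python
# Illustrative filename check. The reserved set and 255-character
# limit below are typical of Windows file systems, but restrictions
# differ from one file system to another.
RESERVED = set('<>:"/\\|?*')

def is_valid_filename(name, max_len=255):
    return (
        0 < len(name) <= max_len
        and not any(c in RESERVED or ord(c) < 32 for c in name)
    )

print(is_valid_filename("report_2024.txt"))  # True
print(is_valid_filename("bad:name?.txt"))    # False
```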
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is usually executed directly by the hardware and frequently makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%. MacOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Android's share was up to 70% in 2017; according to third quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a per-year decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
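The difference between the two models can be seen in a toy cooperative scheduler, where each task runs only until it voluntarily yields. The round-robin sketch below is illustrative only, not any real scheduler's implementation:

```python
# Toy cooperative scheduler: each task must voluntarily yield
# control, as in 16-bit Windows. A task that never yields would
# stall every other task, which is the model's main weakness.
from collections import deque

def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # voluntarily give up the CPU

ready = deque([task("A", 3), task("B", 2)])
while ready:
    current = ready.popleft()
    try:
        next(current)          # run until the task yields
        ready.append(current)  # re-queue it behind the others
    except StopIteration:
        pass                   # task finished
```

Under preemptive multitasking, by contrast, the scheduler itself interrupts the running task when its time slice expires, so no single task can monopolize the processor.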
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS and distributed or cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines like PDAs with less autonomy. They are able to operate with a limited number of resources, and are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. Such an event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
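A minimal sketch of such priority-driven dispatch follows. The task names and priority values are invented for illustration, and a real RTOS would add preemption, deadlines, and bounded-latency guarantees:

```python
# Sketch of event-driven, priority-based dispatch: the highest-
# priority ready task always runs next, making behavior
# deterministic for a given sequence of events.
import heapq

ready_queue = []  # (priority, sequence, task); lower number = higher priority
seq = 0

def make_ready(priority, name):
    """An event (interrupt, timer, etc.) marks a task as ready."""
    global seq
    heapq.heappush(ready_queue, (priority, seq, name))
    seq += 1  # tie-breaker preserves arrival order within a priority

make_ready(2, "log sensor data")
make_ready(0, "handle emergency stop")  # external event, highest priority
make_ready(1, "update display")

while ready_queue:
    prio, _, name = heapq.heappop(ready_queue)
    print(f"running (priority {prio}): {name}")
```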
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries, interrupts, and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards.
This article documents the hardware capabilities of CPUs implementing the x86 or x86-64 instruction sets with regard to hardware-assisted virtualization. In the late 1990s x86 virtualization was achieved by complex software techniques, necessary to compensate for the processor's lack of hardware-assisted virtualization capabilities while attaining reasonable performance. In 2005 and 2006, both Intel and AMD introduced limited hardware virtualization support that allowed simpler virtualization software but offered few speed benefits. Greater hardware support, which allowed substantial speed improvements, came with later processor models. The following discussion focuses only on virtualization of the x86 architecture's protected mode. In protected mode the operating system kernel runs at a higher privilege, such as ring 0, and applications at a lower privilege, such as ring 3. In software-based virtualization, a host OS has direct access to hardware while the guest OSs have limited access to hardware, just like any other application of the host OS.
One approach used in x86 software-based virtualization to overcome this limitation is called ring deprivileging, which involves running the guest OS at a ring higher than 0. Three techniques made virtualization of protected mode possible: Binary translation is used to rewrite, in terms of ring 3 instructions, certain ring 0 instructions, such as POPF, that would otherwise fail silently or behave differently when executed above ring 0, making the classic trap-and-emulate virtualization impossible. To improve performance, the translated basic blocks need to be cached in a coherent way that detects code patching, the reuse of pages by the guest OS, or self-modifying code. A number of key data structures used by a processor need to be shadowed. Because most operating systems use paged virtual memory, granting the guest OS direct access to the MMU would mean loss of control by the virtualization manager, so some of the work of the x86 MMU needs to be duplicated in software for the guest OS using a technique known as shadow page tables.
This involves denying the guest OS any access to the actual page table entries by trapping access attempts and emulating them instead in software. The x86 architecture uses hidden state to store segment descriptors in the processor, so once the segment descriptors have been loaded into the processor, the memory from which they have been loaded may be overwritten, and there is no way to get the descriptors back from the processor. Shadow descriptor tables must therefore be used to track changes made to the descriptor tables by the guest OS. I/O device emulation: unsupported devices on the guest OS must be emulated by a device emulator that runs in the host OS. These techniques incur some performance overhead due to the lack of MMU virtualization support, as compared to a VM running on a natively virtualizable architecture such as the IBM System/370. On traditional mainframes, the classic type 1 hypervisor was self-standing and did not depend on any operating system or run any user applications itself.
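Returning to shadow page tables: the mechanism can be modeled in a few lines. The class and method names below are invented for illustration and do not correspond to any real hypervisor's code:

```python
# Conceptual model of shadow page tables: the guest OS edits its
# own page table, but the MMU only ever sees the hypervisor's
# shadow copy, updated on each trapped guest write.

class Hypervisor:
    def __init__(self):
        self.guest_page_table = {}   # guest virtual -> guest "physical"
        self.guest_to_machine = {}   # guest "physical" -> real machine frame
        self.shadow_page_table = {}  # guest virtual -> machine frame (MMU uses this)

    def assign_machine_frame(self, guest_phys, machine_frame):
        self.guest_to_machine[guest_phys] = machine_frame

    def trap_guest_pte_write(self, virt, guest_phys):
        # A guest write to its page table is trapped (the real
        # entries are write-protected) and emulated here instead.
        self.guest_page_table[virt] = guest_phys
        self.shadow_page_table[virt] = self.guest_to_machine[guest_phys]

hv = Hypervisor()
hv.assign_machine_frame(guest_phys=7, machine_frame=42)
hv.trap_guest_pte_write(virt=0x1000, guest_phys=7)
print(hv.shadow_page_table)  # {4096: 42} -- the mapping the hardware actually uses
```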
In contrast, the first x86 virtualization products were aimed at workstation computers and ran a guest OS inside a host OS by embedding the hypervisor in a kernel module that ran under the host OS. There has been some controversy over whether the x86 architecture with no hardware assistance is virtualizable as described by Popek and Goldberg. VMware researchers pointed out in a 2006 ASPLOS paper that the above techniques made the x86 platform virtualizable in the sense of meeting the three criteria of Popek and Goldberg, albeit not by the classic trap-and-emulate technique. A different route was taken by other systems like Denali, L4, and Xen, known as paravirtualization, which involves porting operating systems to run on the resulting virtual machine, which does not implement the parts of the actual x86 instruction set that are hard to virtualize. The paravirtualized I/O has significant performance benefits, as demonstrated in the original SOSP '03 Xen paper. The initial version of x86-64 did not allow for software-only full virtualization due to the lack of segmentation support in long mode, which made the protection of the hypervisor's memory impossible, in particular the protection of the trap handler that runs in the guest kernel address space.
Revision D and later 64-bit AMD processors added basic support for segmentation in long mode, making it possible to run 64-bit guests in 64-bit hosts via binary translation. Intel did not add segmentation support to its x86-64 implementation, making 64-bit software-only virtualization impossible on Intel CPUs, but Intel VT-x support makes 64-bit hardware-assisted virtualization possible on the Intel platform. On some platforms, it is possible to run a 64-bit guest on a 32-bit host OS if the underlying processor is 64-bit and supports the necessary virtualization extensions. In 2005 and 2006, Intel and AMD created new processor extensions to the x86 architecture. The first generation of x86 hardware virtualization addressed the issue of privileged instructions. The issue of low performance of virtualized system memory was addressed with MMU virtualization, which was added to the chipset later. Based on painful experiences with the 80286 protected mode, which by itself was not suitable enough to run concurrent MS-DOS applications well, Intel introduced the virtual 8086 mode in their 80386 chip, which offered virtualized 8086 processors on the 386 and later chips.
Hardware support for virtualizing the protected mode itself became available 20 years later. AMD developed its first-generation virtualization extensions under the code name "Pacifica", and published them as AMD Secure Virtual Machine (SVM), later marketed under the trademark AMD Virtualization, abbreviated AMD-V.
A computer network is a digital telecommunications network which allows nodes to share resources. In computer networks, computing devices exchange data with each other using connections between nodes; these data links are established over cable media such as wires or optic cables, or wireless media such as Wi-Fi. Network computer devices that originate and terminate the data are called network nodes. Nodes are identified by network addresses, and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other device, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, as well as many others.
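Protocol layering can be illustrated with a toy application protocol, length-prefixed text messages, carried over TCP; only the Python standard library is used, and the framing format is invented for this example:

```python
# Toy application protocol (length-prefixed UTF-8 messages) layered
# over TCP, which itself rides on IP.
import socket
import struct

def send_message(sock, text):
    payload = text.encode("utf-8")
    # Application layer adds a 4-byte length header; TCP below it
    # handles ordering and retransmission, and IP below that, routing.
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exact(sock, n):
    """Read exactly n bytes, since TCP is a byte stream with no
    built-in message boundaries."""
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        data += chunk
    return data

def recv_message(sock):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length).decode("utf-8")
```

Each layer is oblivious to the ones above it: TCP never sees the message boundaries, and the framing code never sees retransmissions.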
Computer networks differ in the transmission medium used to carry their signals, communications protocols to organize network traffic, the network's size, traffic control mechanisms and organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes: In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, and later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. In 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the Aloha network, which was developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. In 1976, John Murphy of Datapoint Corporation created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of a gigabit. Subsequently, higher speeds of up to 400 Gbit/s were added. The ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently and easily via various means: email, instant messaging, online chat, video telephone calls, and video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network allows sharing of files, data, and other types of information, giving authorized users the ability to access information stored on other computers on the network.
A central processing unit (CPU), also called a central processor or main processor, is the electronic circuitry within a computer that carries out the instructions of a computer program by performing the basic arithmetic, logic and input/output operations specified by the instructions. The computer industry has used the term "central processing unit" at least since the early 1960s. Traditionally, the term "CPU" refers to a processor, more specifically to its processing unit and control unit, distinguishing these core elements of a computer from external components such as main memory and I/O circuitry. The form and implementation of CPUs have changed over the course of their history, but their fundamental operation remains unchanged. Principal components of a CPU include the arithmetic logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching and execution of instructions by directing the coordinated operations of the ALU, registers and other components.
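The interplay of control unit, ALU, and registers is captured by the classic fetch-decode-execute cycle. The toy accumulator machine below uses an instruction set invented purely for illustration:

```python
# Toy fetch-decode-execute loop for a one-register (accumulator)
# machine, showing how the control unit sequences the ALU and
# registers. The instruction set is invented for this example.
memory = [
    ("LOAD", 5),   # acc <- 5
    ("ADD", 7),    # acc <- acc + 7
    ("STORE", 0),  # data[0] <- acc
    ("HALT", None),
]
data = [0] * 8
acc = 0            # accumulator register
pc = 0             # program counter

while True:
    opcode, operand = memory[pc]  # fetch and decode
    pc += 1
    if opcode == "LOAD":
        acc = operand
    elif opcode == "ADD":
        acc = acc + operand       # the ALU's job
    elif opcode == "STORE":
        data[operand] = acc
    elif opcode == "HALT":
        break

print(acc, data[0])  # 12 12
```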
Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit (IC) chip. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer. Some computers employ a multi-core processor, a single chip containing two or more CPUs called "cores". Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. There also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". Since the term "CPU" is defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer had been present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was omitted so that it could be finished sooner.
On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC. It was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types; significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. EDVAC, however, was not the first stored-program computer. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer. However, this method of designing custom CPUs for a particular application has given way to the development of multi-purpose processors produced in large quantities. This standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit.
The IC has allowed complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines. Modern microprocessors appear in electronic devices ranging from automobiles to cellphones, and sometimes in toys. While von Neumann is most often credited with the design of the stored-program computer because of his design of EDVAC, and the design became known as the von Neumann architecture, others before him, such as Konrad Zuse, had suggested and implemented similar ideas. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also used a stored-program design using punched paper tape rather than electronic memory. The key difference between the von Neumann and Harvard architectures is that the latter separates the storage and treatment of CPU instructions and data, while the former uses the same memory space for both.
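This difference can be shown in miniature: with a single unified memory, a program can read or even rewrite its own instructions, something a strict Harvard machine rules out by construction. The toy instruction format below is invented for the example:

```python
# Von Neumann vs. Harvard in miniature. With one unified memory,
# instructions and data coexist, so a program can patch its own
# code (here, turning an ADD into a SUB before running it).
memory = [("ADD", 7), ("HALT", None), 0]  # code and data share one space
acc = 10

# Self-modification: rewrite the first instruction in place.
memory[0] = ("SUB", 7)

pc = 0
while memory[pc][0] != "HALT":
    op, arg = memory[pc]
    acc = acc + arg if op == "ADD" else acc - arg
    pc += 1

print(acc)  # 3, because the instruction was patched before execution

# A Harvard machine keeps two separate, independently addressed
# memories, so data writes can never reach the instruction store:
instruction_memory = [("ADD", 7), ("HALT", None)]
data_memory = [0, 0]
```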
Most modern CPUs are von Neumann in design, but CPUs with the Harvard architecture are seen as well, especially in embedded applications. Relays and vacuum tubes were commonly used as switching elements; the overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs. Clock signal frequencies ranging from 100 kHz to 4 MHz were common at this time, limited largely by the speed of the switching devices they were built with.
The Hewlett-Packard Company, or Hewlett-Packard, was an American multinational information technology company headquartered in Palo Alto, California. It developed and provided a wide variety of hardware components as well as software and related services to consumers, small- and medium-sized businesses and large enterprises, including customers in the government and education sectors. The company was founded in a one-car garage in Palo Alto by Bill Hewlett and David Packard, and initially produced a line of electronic test equipment. HP was the world's leading PC manufacturer from 2007 to Q2 2013, at which time Lenovo ranked ahead of HP. HP specialized in developing and manufacturing computing, data storage, and networking hardware, designing software and delivering services. Major product lines included personal computing devices and industry standard servers, related storage devices, networking products, software and a diverse range of printers and other imaging products. HP directly marketed its products to households, small- to medium-sized businesses and enterprises, as well as via online distribution, consumer-electronics and office-supply retailers, software partners and major technology vendors.
HP had services and a consulting business around its products and partner products. Major Hewlett-Packard company events included the spin-off of its electronic and bio-analytical measurement instruments part of its business as Agilent Technologies in 1999, its merger with Compaq in 2002, and the acquisition of EDS in 2008, which led to combined revenues of $118.4 billion in 2008 and a Fortune 500 ranking of 9 in 2009. In November 2009, HP announced the acquisition of 3Com, with the deal closing on April 12, 2010. On April 28, 2010, HP announced the buyout of Palm, Inc. for $1.2 billion. On September 2, 2010, HP won its bidding war for 3PAR with a $33-a-share offer, which Dell declined to match. Hewlett-Packard spun off its enterprise products and services business as Hewlett Packard Enterprise on November 1, 2015. Hewlett-Packard held onto the PC and printer businesses and was renamed HP Inc. Bill Hewlett and David Packard graduated with degrees in electrical engineering from Stanford University in 1935. The company originated in a garage in nearby Palo Alto during a fellowship they had with a past professor, Frederick Terman at Stanford, during the Great Depression.
They considered Terman a mentor in forming Hewlett-Packard. In 1938, Packard and Hewlett began part-time work in a rented garage with an initial capital investment of US$538. In 1939, Hewlett and Packard decided to formalize their partnership; they tossed a coin to decide whether the company they founded would be called Hewlett-Packard or Packard-Hewlett. HP incorporated on August 18, 1947, and went public on November 6, 1957. Of the many projects they worked on, their first financially successful product was a precision audio oscillator, the Model HP200A. Their innovation was the use of a small incandescent light bulb as a temperature-dependent resistor in a critical portion of the circuit, the negative feedback loop which stabilized the amplitude of the output sinusoidal waveform. This allowed them to sell the Model 200A for $89.40 when competitors were selling less stable oscillators for over $200. The Model 200 series of generators continued production until at least 1972 as the 200AB, still tube-based but improved in design through the years.
One of the company's earliest customers was Walt Disney Productions, which bought eight Model 200B oscillators for use in certifying the Fantasound surround sound systems installed in theaters for the movie Fantasia. They worked on counter-radar technology and artillery shell fuses during World War II, which allowed Packard to be exempt from the draft. HP is recognized as the symbolic founder of Silicon Valley, although it did not investigate semiconductor devices until a few years after the "traitorous eight" had abandoned William Shockley to create Fairchild Semiconductor in 1957. Hewlett-Packard's HP Associates division, established around 1960, developed semiconductor devices for internal use. Instruments and calculators were some of the products using these devices. During the 1960s, HP partnered with Sony and the Yokogawa Electric companies in Japan to develop several high-quality products; the products were not a huge success, as there were high costs in building HP-looking products in Japan.
HP and Yokogawa formed a joint venture in 1963 to market HP products in Japan. HP bought Yokogawa Electric's share of Hewlett-Packard Japan in 1999. HP spun off a small company, Dynac, to specialize in digital equipment. The name was picked so that the HP logo "hp" could be turned upside down to be a reverse reflection of the logo "dy" of the new company. Dynac changed its name to Dymec, and was folded back into HP in 1959. HP experimented with using Digital Equipment Corporation minicomputers with its instruments, but after deciding that it would be easier to build another small design team than deal with DEC, HP entered the computer market in 1966 with the HP 2100 / HP 1000 series of minicomputers. These had a simple accumulator-based design, with two accumulator registers and, in the HP 1000 models, two index registers. The series was produced for 20 years, in spite of several attempts to replace it, and was a forerunner of the HP 9800 and HP 250 series of desktop and business computers. The HP 3000 was an advanced stack-based design for a business computing server, later redesigned with RISC technology.
The HP 2640 series of smart and intelligent terminals introduced forms-based interfaces to ASCII terminals, and introduced screen-labeled function keys.
A thin client is a lightweight computer, optimized for establishing a remote connection with a server-based computing environment. The server does most of the work, which can include launching software programs, crunching numbers, and storing data; this contrasts with a conventional personal computer. Thin clients occur as components of a broader computing infrastructure, where many clients share their computations with a server or server farm. The server-side infrastructure uses cloud computing software such as application virtualization, hosted shared desktop or desktop virtualization. This combination forms what is known as a cloud-based system, where desktop resources are centralized at one or more data centers. The benefits of centralization are hardware resource optimization, reduced software maintenance, and improved security. Example of hardware resource optimization: Cabling, busing and I/O can be minimized while idle memory and processing power can be applied to user sessions that most need it.
Example of reduced software maintenance: Software patching and operating system migrations can be applied and activated for all users in one instance to accelerate roll-out and improve administrative efficiency. Example of improved security: Software assets are centralized, fire-walled, and protected. Sensitive data is uncompromised in cases of desktop theft. Thin client hardware generally supports a keyboard, monitor, jacks for sound peripherals, and open ports for USB devices. Some thin clients include legacy serial or parallel ports to support older devices such as receipt printers, scales or time clocks. Thin client software consists of a graphical user interface, cloud access agents, a local web browser, terminal emulators, and a basic set of local utilities. In using cloud-based architecture, the server takes on the processing load of several client sessions, acting as a host for each endpoint device; the client software is narrowly purpose-built and lightweight. One of the combined benefits of using cloud architecture with thin client desktops is that critical IT assets are centralized for better utilization of resources.
Unused memory, bussing lanes, and processor cores within an individual user session, for example, can be leveraged for other active user sessions. The simplicity of thin client hardware and software results in a low total cost of ownership, but some of these initial savings can be offset by the need for a more robust cloud infrastructure on the server side. An alternative to traditional server deployment, one which spreads out infrastructure costs over time, is a cloud-based subscription model known as desktop as a service, which allows IT organizations to outsource the cloud infrastructure to a third party. Thin client computing is known to simplify desktop endpoints by reducing the client-side software footprint. With a lightweight, read-only operating system, client-side setup and administration is reduced. Cloud access is the primary role of a thin client, which eliminates the need for a large suite of local user applications, data storage, and utilities. This architecture shifts most of the software execution burden from the endpoint to the data center.
User assets are centralized for greater visibility. Data recovery and desktop repurposing tasks are centralized for faster service and greater scalability. While the server must be robust enough to handle several client sessions at once, thin client hardware requirements are minimal compared to those of a traditional PC desktop. Most thin clients have low-energy processors, flash storage, and no moving parts. This reduces the cost and power consumption, making them affordable to own and easy to replace or deploy. Since thin clients consist of fewer hardware components than a traditional desktop PC, they can operate in more hostile environments, and because they don't store critical data locally, risk of theft is minimized because there is little or no user data to be compromised. Modern thin clients have come a long way to meet the demands of today's graphical computing needs. New generations of low-energy chipset and CPU combinations improve processing power and graphical capabilities. To minimize the latency of high-resolution video sent across the network, some host software stacks leverage multimedia redirection techniques to offload video rendering to the desktop device.
Video codecs are embedded on the thin client to support these various multimedia formats. Other host software stacks make use of the User Datagram Protocol (UDP) in order to accelerate the fast-changing pixel updates required by modern video content, and thin clients support local software agents capable of accepting and decoding UDP. Some of the more graphically intense use cases remain a challenge for thin clients. These use cases might include applications like photo editors, 3D drawing programs, and animation tools. This can be addressed at the host server using dedicated GPU cards, allocation of vGPUs, workstation cards, and hardware acceleration cards. These solutions allow IT administrators to provide power-user performance where it is needed, to a generic endpoint device such as a thin client. To achieve such simplicity, however, thin clients sometimes lag behind desktop PCs in terms of extensibility.
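As a rough illustration of such a UDP-based agent, the sketch below receives invented, uncompressed pixel-region packets; real remoting protocols use far more elaborate, compressed formats, and the port number and packet layout here are arbitrary assumptions:

```python
# Toy sketch of a thin-client agent receiving pixel updates over
# UDP. The packet format (x, y, width, height header plus raw
# pixels) and the port number are invented for this example.
import socket
import struct

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5004))  # port chosen arbitrarily

HEADER = struct.Struct("!HHHH")  # x, y, width, height

def receive_update():
    packet, _addr = sock.recvfrom(65535)
    x, y, w, h = HEADER.unpack(packet[:HEADER.size])
    pixels = packet[HEADER.size:]
    # A real agent would blit `pixels` into the framebuffer region
    # (x, y, w, h). A dropped datagram just leaves that region stale
    # until the next update, which is why UDP's lack of
    # retransmission is acceptable for this workload.
    return (x, y, w, h, len(pixels))
```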