Disk array controller
A disk array controller is a device that manages physical disk drives and presents them to the computer as logical units. It almost always implements hardware RAID, so it is sometimes called a RAID controller, and it often provides additional disk cache. The name is sometimes improperly shortened to disk controller; the two should not be confused, as they provide different functionality. A disk array controller provides both a back-end and a front-end interface. The back-end interface communicates with the controlled disks; its protocol is usually ATA, SATA, SCSI, FC or SAS. The front-end interface communicates with a computer's host adapter and uses one of ATA, SATA, SCSI or FC; many enterprise controllers use FC on the front end and SATA on the back end. In a modern enterprise architecture, disk array controllers are parts of physically independent enclosures, such as disk arrays placed in a storage area network or network-attached storage servers; such external disk arrays are purchased as an integrated subsystem of RAID controllers, disk drives, power supplies and management software.
It is up to controllers to provide advanced functionality, such as:

- Automatic failover to another controller
- Long-running operations performed without downtime: forming a new RAID set, reconstructing a degraded RAID set, adding a disk to an online RAID set, removing a disk from a RAID set
- Partitioning a RAID set into separate volumes/LUNs
- Snapshots
- Business continuance volumes
- Replication with a remote controller

A simple disk array controller may fit inside a computer, either as a PCI expansion card or built onto a motherboard; such a controller provides host bus adapter functionality itself to save physical space, and is hence sometimes called a RAID adapter. As of February 2007, Intel started integrating their own Matrix RAID controller in their more upmarket motherboards, giving control over 4 devices and an additional 2 SATA connectors, for a total of 6 SATA connections. For backward compatibility, one IDE connector able to connect 2 ATA devices is also present. While hardware RAID controllers had been available for a long time, they long required expensive SCSI hard drives and were aimed at the server and high-end computing market.
Among the advantages of SCSI technology were support for up to 15 devices on one bus, independent data transfers, hot-swapping and much higher MTBF. Around 1997, with the introduction of ATAPI-4, the first ATA RAID controllers were introduced as PCI expansion cards. Those RAID systems made their way to the consumer market, where users wanted the fault tolerance of RAID without investing in expensive SCSI drives. ATA drives make it possible to build RAID systems at lower cost than with SCSI, but most ATA RAID controllers lack a dedicated buffer or high-performance XOR hardware for parity calculation; as a result, ATA RAID performs relatively poorly compared to most SCSI RAID controllers. Additionally, data safety suffers if there is no battery backup to finish writes interrupted by a power outage. Because hardware RAID controllers present assembled RAID volumes, operating systems are not required to implement the complete configuration and assembly for each controller; often only the basic features are implemented in the open-source software driver, with extended features being provided through binary blobs directly by the hardware manufacturer.
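The parity calculation mentioned above is a plain byte-wise XOR across the blocks of a stripe, which is why dedicated XOR hardware matters for throughput. A minimal sketch in Python (the `parity` helper and the tiny two-byte blocks are illustrative only, not any controller's API):

```python
from functools import reduce

def parity(blocks):
    """Byte-wise XOR of equal-sized blocks, as RAID parity hardware computes."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three equal-sized data blocks and their parity block.
data = [b"\x01\x02", b"\x10\x20", b"\x0f\x0f"]
p = parity(data)

# If any single data block is lost, XOR-ing the survivors with the
# parity block reconstructs it.
recovered = parity([data[0], data[2], p])
assert recovered == data[1]
```

This also illustrates why an interrupted write is dangerous without battery backup: if the data and parity blocks of a stripe are not updated atomically, the stripe's XOR invariant is silently broken.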
RAID controllers can be configured through the card's BIOS before an operating system is booted; after the operating system is booted, proprietary configuration utilities are available from the manufacturer of each controller, because the exact feature set of each controller may be specific to each manufacturer and product. Unlike network interface controllers for Ethernet, which can be configured and serviced through common operating system paradigms like ifconfig in Unix without a need for any third-party tools, each manufacturer of each RAID controller provides their own proprietary software tooling for each operating system that they deem to support, ensuring vendor lock-in and contributing to reliability issues. For example, in FreeBSD, in order to access the configuration of Adaptec RAID controllers, users are required to enable a Linux compatibility layer and use Linux tooling from Adaptec, potentially compromising the stability and security of their setup, especially when taking the long-term view.
However, this depends on the controller and on whether appropriate hardware documentation is available for writing a driver. Some controllers do have open-source versions of their configuration utilities; for example, mptutil has been available for FreeBSD since FreeBSD 8.0, as well as mpsutil/mprutil since 2015, each supporting only its respective device driver, the latter fact contributing to code bloat. Some other operating systems have implemented their own generic frameworks for interfacing with any RAID controller, providing tools for monitoring RAID volume status, as well as facilitating drive identification through LED blinking, alarm management, hot spare disk designations and data
Red Hat, Inc. is an American multinational software company providing open-source software products to the enterprise community. Founded in 1993, Red Hat has its corporate headquarters in Raleigh, North Carolina, with other offices worldwide. Red Hat has become associated to a large extent with its enterprise operating system Red Hat Enterprise Linux and with the acquisition of open-source enterprise middleware vendor JBoss. Red Hat also offers Red Hat Virtualization, an enterprise virtualization product, and provides storage, operating system platforms, applications, management products, support and consulting services. Red Hat creates and contributes to many free software projects; it has also acquired several proprietary software product codebases through corporate mergers and acquisitions and has released such software under open-source licenses. As of March 2016, Red Hat is the second-largest corporate contributor to the Linux kernel (version 4.14) after Intel. On October 28, 2018, IBM announced its intent to acquire Red Hat for $34 billion.
In 1993, Bob Young incorporated the ACC Corporation, a catalog business that sold Linux and Unix software accessories. In 1994, Marc Ewing created his own Linux distribution; Ewing released the software in October, and it became known as the Halloween release. Young bought Ewing's business in 1995, and the two merged to become Red Hat Software, with Young serving as chief executive officer. Red Hat went public on August 11, 1999, achieving the eighth-biggest first-day gain in the history of Wall Street. Matthew Szulik succeeded Bob Young as CEO in December of that year. Bob Young went on to found the online print-on-demand and self-publishing company Lulu in 2002. On November 15, 1999, Red Hat acquired Cygnus Solutions. Cygnus provided commercial support for free software and housed maintainers of GNU software products such as the GNU Debugger and GNU Binutils. One of the founders of Cygnus, Michael Tiemann, became the chief technical officer of Red Hat and, by 2008, the vice president of open-source affairs.
Red Hat acquired WireSpeed, C2Net and Hell's Kitchen Systems. In February 2000, InfoWorld awarded Red Hat its fourth consecutive "Operating System Product of the Year" award for Red Hat Linux 6.1. Red Hat acquired Planning Technologies, Inc. in 2001 and AOL's iPlanet directory and certificate-server software in 2004. Red Hat moved its headquarters from Durham to North Carolina State University's Centennial Campus in Raleigh, North Carolina in February 2002. In the following month Red Hat introduced Red Hat Linux Advanced Server, later renamed Red Hat Enterprise Linux. Dell, IBM, HP and Oracle Corporation announced their support of the platform. In December 2005, CIO Insight magazine conducted its annual "Vendor Value Survey", in which Red Hat ranked #1 in value for the second year in a row. Red Hat stock became part of the NASDAQ-100 on December 19, 2005. Red Hat acquired open-source middleware provider JBoss on June 5, 2006, and JBoss became a division of Red Hat. On September 18, 2006, Red Hat released the Red Hat Application Stack, which integrated the JBoss technology and was certified by other well-known software vendors.
On December 12, 2006, Red Hat stock moved from trading on NASDAQ to the New York Stock Exchange. In 2007 Red Hat made an agreement with Exadel to distribute its software. On March 15, 2007, Red Hat released Red Hat Enterprise Linux 5, and in June it acquired Mobicents. On March 13, 2008, Red Hat acquired Amentra, a provider of systems integration services for service-oriented architecture, business process management, systems development and enterprise data services. On July 27, 2009, Red Hat replaced CIT Group in Standard and Poor's 500 stock index, a diversified index of 500 leading companies of the U.S. economy. This was reported as a major milestone for Linux. On December 15, 2009, it was reported that Red Hat would pay US$8.8 million to settle a class-action lawsuit related to the restatement of financial results from July 2004. The suit had been pending in the U.S. District Court for the Eastern District of North Carolina. Red Hat reached the proposed settlement agreement and recorded a one-time charge of US$8.8 million for the quarter that ended November 30.
On January 10, 2011, Red Hat announced that it would expand its headquarters in two phases, adding 540 employees to the Raleigh operation and investing over US$109 million. The state of North Carolina offered up to US$15 million in incentives; the second phase involved "expansion into new technologies such as software visualization and technology cloud offerings". On August 25, 2011, Red Hat announced it would move about 600 employees from the N.C. State Centennial Campus to Two Progress Plaza downtown. A ribbon-cutting ceremony was held in June 2013 at the re-branded Red Hat headquarters. In 2012, Red Hat became the first one-billion-dollar open-source company, reaching US$1.13 billion in annual revenue during its fiscal year. Red Hat passed the $2 billion benchmark in 2015, and as of February 2018 the company's annual revenue was nearly $3 billion. On October 16, 2015, Red Hat announced its acquisition of IT automation startup Ansible, for a rumored estimated US$100 million. In May 2018, Red Hat acquired CoreOS.
On October 28, 2018, IBM announced its intent to acquire Red Hat for US$34 billion, in one of its largest-ever acquisitions. The company will operate out of IBM's Hybrid Cloud division. Red Hat's lead advisor was Guggenheim Securities LLC. Red Hat sponsors the Fedora Project, a community-supported free software project that aims to promote the rapid progress of free and open-source software and conten
Synchronization is the coordination of events to operate a system in unison. The conductor of an orchestra keeps the orchestra synchronized, or in time. Systems that operate with all parts in synchrony are said to be synchronous or in sync, and those that are not are asynchronous. Today, time synchronization can occur between systems around the world through satellite navigation signals. Time-keeping and synchronization of clocks have historically been a critical problem in long-distance ocean navigation. Before radio navigation and satellite-based navigation, navigators required accurate time in conjunction with astronomical observations to determine how far east or west their vessel had traveled; the invention of an accurate marine chronometer revolutionized marine navigation. By the end of the 19th century, important ports provided time signals in the form of a signal gun, flag, or dropping time ball so that mariners could check their chronometers for error. Synchronization was also important in the operation of 19th-century railways, these being the first major means of transport fast enough for differences in local time between adjacent towns to be noticeable.
Each line handled the problem by synchronizing all its stations to headquarters as a standard railroad time. In some territories, sharing of single railroad tracks was controlled by the timetable; the need for strict timekeeping led the companies to settle on one standard, and civil authorities eventually abandoned local mean solar time in favor of that standard. In electrical engineering terms, for digital logic and data transfer, a synchronous circuit requires a clock signal. However, the use of the word "clock" in this sense is different from the typical sense of a clock as a device that keeps track of time-of-day. In a different sense, electronic systems are sometimes synchronized to make events at points far apart appear simultaneous or near-simultaneous from a certain perspective. Timekeeping technologies such as the GPS satellites and the Network Time Protocol provide real-time access to a close approximation to the UTC timescale and are used for many terrestrial synchronization applications of this kind.
Synchronization is an important concept in the following fields:

- Computer science
- Cryptography
- Multimedia
- Music
- Neuroscience
- Photography
- Physics
- Synthesizers
- Telecommunication

Synchronization of multiple interacting dynamical systems can occur when the systems are autonomous oscillators. For instance, integrate-and-fire oscillators with either two-way or one-way coupling can synchronize when the strength of the coupling is greater than the differences among the free-running natural oscillator frequencies. Poincaré phase oscillators are model systems that can interact and synchronize within random or regular networks. In the case of global synchronization of phase oscillators, an abrupt transition from unsynchronized to full synchronization takes place when the coupling strength exceeds a critical threshold; this is known as the Kuramoto model phase transition. Synchronization is an emergent property that occurs in a broad range of dynamical systems, including neural signaling, the beating of the heart and the synchronization of firefly light waves.
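The Kuramoto-style transition described above can be reproduced in a few lines. The following sketch (the function name, oscillator count and frequency spread are arbitrary choices, not from any specific study) integrates mean-field-coupled phase oscillators and returns the order parameter r, which stays near 0 for weak coupling and approaches 1 well above the critical coupling:

```python
import numpy as np

def kuramoto_order(K, n=500, steps=2000, dt=0.05, seed=0):
    """Integrate n mean-field-coupled phase oscillators with coupling K
    and return the order parameter r (0 = incoherent, 1 = fully synchronized)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, n)          # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, n)   # initial phases
    for _ in range(steps):
        # Mean field: every oscillator is pulled toward the mean phase.
        z = np.mean(np.exp(1j * theta))
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.mean(np.exp(1j * theta))))
```

Sweeping K and recording r makes the abrupt transition at the critical coupling visible.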
Synchronization of movement is defined as similar movements between two or more people who are temporally aligned. This is different from mimicry. The idea of "muscular bonding" sparked some of the first research into movement synchronization and its effects on human emotion. In groups, synchronization of movement has been shown to increase conformity and trust; however, more research on group synchronization is needed to determine its effects on the group as a whole and on individuals within a group. In dyads (groups of two people), synchronization has been demonstrated to increase affiliation, self-esteem and altruistic behaviour and to increase rapport. During arguments, synchrony between the arguing pair has been noted to decrease; however, it is not clear whether this is due to the change in emotion or to other factors. There is evidence that movement synchronization requires other people to cause its beneficial effects, as the effect on affiliation does not occur when one member of the dyad is synchronizing their movements to something outside the dyad.
This is known as interpersonal synchrony. There has been dispute regarding the true effect of synchrony in these studies: research detailing the positive effects of synchrony has attributed them to synchrony alone. However, the Reinforcement of Cooperation Model suggests that perception of synchrony leads to reinforcement that cooperation is occurring, which in turn leads to the pro-social effects of synchrony. More research is required to separate the effect of intentionality from the beneficial effect of synchrony. Film synchronization refers to the synchronization of image and sound in sound film. Synchronization is important in fields such as digital telephony and digital audio, where streams of sam
Non-standard RAID levels
Although all RAID implementations differ from the specification to some extent, some companies and open-source projects have developed non-standard RAID implementations that differ substantially from the standard. Additionally, there are non-RAID drive architectures, providing configurations of multiple hard drives not referred to by RAID acronyms.

Now part of RAID 6, double parity features two sets of parity checks, like traditional RAID 6. Unlike traditional RAID 6, however, the second set is not another set of points in the over-defined polynomial which characterizes the data. Rather, double parity calculates the extra parity against a different group of blocks. For example, both RAID 5 and RAID 6 consider all the A-labeled blocks of a stripe to produce one or more parity blocks, but it is just as easy to calculate parity against multiple groups of blocks: one can calculate parity over all the A blocks and over a permuted group of blocks. RAID-DP is a proprietary NetApp RAID implementation available only in ONTAP systems. RAID-DP implements RAID 4, except with an additional disk that is used for a second parity, so it has the same failure characteristics as RAID 6.
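The "different group of blocks" idea can be sketched as follows. The diagonal grouping below is an illustrative choice, not NetApp's exact scheme; the point is only that each block belongs to two independent parity groups:

```python
from functools import reduce

def parity(blocks):
    """Byte-wise XOR of equal-sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# A toy 4x4 grid of one-byte data blocks: 4 stripes across 4 data drives.
grid = [[bytes([4 * r + c + 1]) for c in range(4)] for r in range(4)]

# First parity set: one parity block per row, exactly as in RAID 4/5.
row_parity = [parity(row) for row in grid]

# Second parity set over a *different* grouping of the same blocks:
# block (r, c) joins diagonal group (r + c) % 4.
diag_parity = [
    parity([grid[r][c] for r in range(4) for c in range(4) if (r + c) % 4 == d])
    for d in range(4)
]
```

Because every block sits in one row group and one diagonal group, two simultaneous failures can be untangled by alternating row and diagonal reconstruction, which is the essence of row-diagonal double parity.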
The performance penalty of RAID-DP is typically under 2% when compared to a similar RAID 4 configuration. RAID 5E, RAID 5EE and RAID 6E refer to variants of RAID 5 or 6 with an integrated hot-spare drive, where the spare drive is an active part of the block rotation scheme. This spreads I/O across all drives, including the spare, thus reducing the load on each drive and increasing performance. It does, however, prevent sharing the spare drive among multiple arrays, which is sometimes desirable. Intel Matrix RAID is a feature present in the ICH6R and subsequent Southbridge chipsets from Intel, configurable via the RAID BIOS setup utility. Matrix RAID supports as many RAID volumes as the controller supports; the distinguishing feature of Matrix RAID is that it allows any assortment of RAID 0, 1, 5 or 10 volumes in the array, to which a controllable portion of each disk is allocated. As such, a Matrix RAID array can improve both performance and data integrity. A practical instance of this would use a small RAID 0 volume for the operating system and paging files. Linux MD RAID is also capable of this.
The software RAID subsystem provided by the Linux kernel, called "md", supports the creation of both classic RAID 1+0 arrays and non-standard RAID arrays that use a single-level RAID layout with some additional features. The standard "near" layout, in which each chunk is repeated n times in a k-way stripe array, is equivalent to the standard RAID 10 arrangement, but it does not require that n evenly divide k. For example, an n2 layout on two, three and four drives would look like:

2 drives   3 drives      4 drives
--------   ----------    --------------
A1  A1     A1  A1  A2    A1  A1  A2  A2
A2  A2     A2  A3  A3    A3  A3  A4  A4
A3  A3     A4  A4  A5    A5  A5  A6  A6
A4  A4     A5  A6  A6    A7  A7  A8  A8
..  ..     ..  ..  ..    ..  ..  ..  ..

The four-drive example is identical to a standard RAID 1+0 array, while the three-drive example is a software implementation of RAID 1E; the two-drive example is equivalent to RAID 1. The driver also supports a "far" layout, in which all the drives are divided into f sections; all the chunks are repeated, but switched in groups. For example, f2 layouts on two-, three- and four-drive arrays would look like this:

2 drives   3 drives      4 drives
--------   ----------    ---------------
A1  A2     A1  A2  A3    A1  A2  A3  A4
A3  A4     A4  A5  A6    A5  A6  A7  A8
A5  A6     A7  A8  A9    A9  A10 A11 A12
..  ..     ..  ..  ..    ..  ..  ..  ..
A2  A1     A3  A1  A2    A2  A1  A4  A3
A4  A3     A6  A4  A5    A6  A5  A8  A7
A6  A5     A9  A7  A8    A10 A9  A12 A11
..  ..     ..  ..  ..    ..  ..  ..  ..

The "far" layout is designed to offer striping performance on a mirrored array: random reads are somewhat faster, while sequential and random writes offer about equal speed to other mirrored RAID configurations. The "far" layout therefore performs well for systems in which reads are more frequent than writes, a common case. For comparison, regular RAID 1 as provided by Linux software RAID does not stripe reads, but can perform reads in parallel. The "near" and "far" options can be used together. For example, an n2 f2 layout stores 2×2 = 4 copies of each sector and thus requires at least four drives:

4 drives            5 drives
--------------      -------------------
A1  A1  A2  A2      A1  A1  A2  A2  A3
A3  A3  A4  A4      A3  A4  A4  A5  A5
A5  A5  A6  A6      A6  A6  A7  A7  A8
A7  A7  A8  A8      A8  A9  A9  A10 A10
..  ..  ..  ..      ..  ..  ..  ..  ..
A2  A2  A1  A1      A2  A3  A1  A1  A2
A4  A4  A3  A3      A5  A5  A3  A4  A4
A6  A6  A5  A5      A7  A8  A6  A6  A7
A8  A8  A7  A7      A10 A10 A8  A9  A9
..  ..  ..  ..      ..  ..  ..  ..  ..

The md driver also supports an "offset" layout, in which each stripe is repeated o times and offset by f devices. For example, o2 layouts on two-, three- and four-drive arrays are laid out as:

2 drives   3 drives      4 drives
--------   ----------    ---------------
A1  A2     A1  A2  A3    A1  A
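The "near" placement rule described above can be sketched as a short function; `near_layout` is a hypothetical helper for illustration, not part of the md driver:

```python
def near_layout(chunks, drives, copies=2):
    """md 'near' layout: write each chunk `copies` times to consecutive
    devices, round-robin, without requiring copies to divide drives."""
    cols = [[] for _ in range(drives)]   # one column per drive
    pos = 0
    for chunk in range(1, chunks + 1):
        for _ in range(copies):
            cols[pos % drives].append(f"A{chunk}")
            pos += 1
    return cols

layout = near_layout(6, 3)  # the three-drive (RAID 1E) case
```

For three drives this reproduces the n2 table above: drive 0 holds A1 A2 A4 A5, drive 1 holds A1 A3 A4 A6, and drive 2 holds A2 A3 A5 A6; with two drives every chunk lands on both, i.e. RAID 1.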
The Linux Foundation is a non-profit technology consortium formed to standardize Linux, support its growth and promote its commercial adoption. It hosts and promotes the collaborative development of open-source software projects. The organization began in 2000 under the Open Source Development Labs and became what it is today when OSDL merged with the Free Standards Group. The Linux Foundation sponsors the work of Linux creator Linus Torvalds and lead maintainer Greg Kroah-Hartman and is supported by members such as AT&T, Fujitsu, Huawei, IBM, Microsoft, NEC, Qualcomm and VMware, as well as developers from around the world. In recent years, the Linux Foundation has expanded its services through events and certification, and through open-source projects. Projects hosted at the Linux Foundation include the Open Network Automation Platform, the Cloud Native Computing Foundation, the Cloud Foundry Foundation, the Xen Project and many others. The origins of The Linux Foundation can be traced to 1993, when Patrick D'Cruze started the Linux International email list known as LI.
In 1993 at Comdex, Bob Young introduced Mark Bolzern to the LI list, and shortly thereafter Bolzern shared his vision and was asked to "make it so" by the members of the list. Bolzern funded LI and its activities until others joined. The vision defined, among other things, an entity to deal with traditional public relations on behalf of Linus Torvalds and to file for a trademark on his behalf. Under Bolzern's direction, LI became a collaboration of Linux-related vendors and technologists heading in a single direction that served everyone according to the original vision. It became clear that Bolzern could not continue to be both CEO of WorkGroup Solutions/LinuxMall and executive director of Linux International at the same time because of a perceived conflict of interest. In mid-1994, therefore, Bolzern and Young recruited Jon "maddog" Hall into the executive director position; Hall in turn filed the corporate paperwork on behalf of the new board of directors, while Bolzern remained on the board and continued leading trade show and marketing efforts until late 1999.
This included user groups organized by Bolzern or maddog. Bolzern organized and managed the launch of Linux Pavilions at major trade shows of the time such as UniForum and Comdex, and with maddog helped to establish the Atlanta Linux Showcase and helped Larry Augustin and the Silicon Valley Linux User Group create the San Francisco Linux Expo. Notable in the 1994–98 timeframe was an anti-fraud Linux trademark filing led by LI. Included in the LI suite of projects by the mid-1990s were the Linux Mark Institute, the Linux Standard Base, certification programs and trade show and press relationships, along with being a vendor association. A page outlined Linux International's membership as of the latter half of the 1990s; the list was presented not alphabetically but, as agreed, in order of merit to Linux. Bolzern and maddog continued to provide the bulk of the funding until about 1998, augmented by vendor and individual membership fees. As more and more individuals and sponsors joined the LI vision, by 1999 LI had become a vendor-neutral 501(c)(6) non-profit industry association for Linux with Linus Torvalds' blessing, while Linus himself focused on development and technical excellence for Linux itself.
LI's primary purpose was to be the industry marketing organization that supported Linux-related certification programs, along with development of essential projects and education. The vision was huge, and large vendors expected more sophistication; more help was needed, as Bolzern was being distracted because his wife had been diagnosed with cancer and maddog was becoming weary of the load. With everyone's support, Augustin took action and suggested another organization be formed to continue the work. In 2000, OSDL was founded after appealing to the Linux International board of directors for a number of the fundamental projects that are still part of the Linux Foundation today. OSDL was a non-profit organization supported by a global consortium that aimed to "accelerate the deployment of Linux for enterprise computing" and "to be the recognized center-of-gravity for the Linux industry", while Jon "maddog" Hall went in a different direction with LI.org. In 2003, Linus Torvalds, the creator of the Linux kernel, announced he would join the organization as an OSDL Fellow to work full-time on future versions of Linux.
In 2007, OSDL merged with the Free Standards Group, another organization promoting the adoption of Linux. At the time, Jim Zemlin, who had headed the FSG, took over as executive director of The Linux Foundation, where he remains today. On September 11, 2011, The Linux Foundation's website was taken down due to a breach discovered 27 days prior, including but not limited to all attendant subdomains of The Linux Foundation, such as Linux.com. Major parts, including OpenPrinting, were still offline on October 20, 2011; the restoration was complete on January 4, 2012. In March 2014, The Linux Foundation announced it would begin building a MOOC program with the nonprofit education platform edX. The aim of this collaboration was to serve the growing demand for Linux expertise in a vehicle available to "anyone, anywhere in the world, at any time." At that point, their first offering was a basic "Introduction to Linux" course, but the library has since expanded to include Intro to Cloud Infrastructure Te
In computing, a server is a computer program or a device that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model, in which a single overall computation is distributed across multiple processes or devices. Servers can provide various functionalities, called "services", such as sharing data or resources among multiple clients, or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers and application servers. Client–server systems are today most often implemented by the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client with a result or acknowledgement. Designating a computer as "server-class hardware" implies that it is specialized for running servers on it.
This usually implies that it is more powerful and reliable than standard personal computers, but alternatively, large computing clusters may be composed of many simple, replaceable server components. The use of the word server in computing comes from queueing theory, where it dates to the mid-20th century, being notably used in Kendall's paper that introduced Kendall's notation. In earlier papers, such as Erlang's, more concrete terms such as "operators" are used. In computing, "server" dates at least to RFC 5, one of the earliest documents describing ARPANET, where it is contrasted with "user", distinguishing two types of host: "server-host" and "user-host". The use of "serving" also dates to early documents, such as RFC 4, which contrasts "serving-host" with "using-host". The Jargon File defines "server" in the common sense of a process performing service for requests, usually remote, with the 1981 version reading: SERVER n. A kind of DAEMON which performs a service for the requester, which often runs on a computer other than the one on which the server runs.
Strictly speaking, the term server refers to a computer program or process. Through metonymy, it also refers to a device used for running one or several server programs; on a network, such a device is called a host. In addition to server, the words serve and service are used, though servicer and servant are not. The word service may refer to the abstract form of functionality, e.g. a Web service; alternatively, it may refer to a computer program that turns a computer into a server, e.g. a Windows service. Originally used as in "servers serve users", in the sense of "obey", today one also says that "servers serve data", in the same sense as "give"; for instance, web servers "serve web pages to users" or "service their requests". The server is part of the client–server model. The nature of communication between a client and a server is request and response; this is in contrast with the peer-to-peer model. In principle, any computerized process that can be used or called by another process is a server, and the calling process or processes are clients; thus any general-purpose computer connected to a network can host servers.
For example, if files on a device are shared by some process, that process is a file server. Web server software can run on any capable computer, so a laptop or a personal computer can host a web server. While request–response is the most common client–server design, there are others, such as the publish–subscribe pattern. In the publish–subscribe pattern, clients register with a pub–sub server, subscribing to specified types of messages. Thereafter, the pub–sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request–response. The purpose of a server is to share data as well as to distribute work. A server computer can serve its own computer programs as well; the following table shows several scenarios. The entire structure of the Internet is based upon a client–server model. High-level root nameservers, DNS servers and routers direct the traffic on the Internet. There are millions of servers connected to the Internet, running continuously throughout the world, and every action taken by an ordinary Internet user requires one or more interactions with one or more servers.
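The push semantics of publish–subscribe can be illustrated with a toy in-process broker; the class and method names below are illustrative, not any particular messaging API:

```python
from collections import defaultdict

class PubSubServer:
    """Toy broker: clients subscribe to topics once, and the server
    pushes matching messages to them without any further requests."""
    def __init__(self):
        self._subs = defaultdict(list)   # topic -> subscriber callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        for deliver in self._subs[topic]:
            deliver(message)             # push, not pull

# A "client" registers interest once, then simply receives pushes.
inbox = []
broker = PubSubServer()
broker.subscribe("alerts", inbox.append)
broker.publish("alerts", "disk 3 degraded")
broker.publish("metrics", "cpu 12%")     # no subscriber; silently dropped
```

Contrast this with request–response, where the client would have to poll the server for new messages.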
There are exceptions, however. Hardware requirements for servers vary depending on the server's purpose and its software. Since servers are usually accessed over a network, many run unattended without a computer monitor, input device, audio hardware or USB interfaces. Many servers do not have a graphical user interface; they are configured and managed remotely. Remote management can be conducted via various methods, including Microsoft Management Console, PowerShell, SSH and browser-based out-of-band management systems such as Dell's iDRAC or HP's iLO. Large traditional single servers would need to be run for long periods without interruption. Ava
In computing, a file system or filesystem controls how data is stored and retrieved. Without a file system, information placed in a storage medium would be one large body of data with no way to tell where one piece of information stops and the next begins. By separating the data into pieces and giving each piece a name, the information is isolated and identified. Taking its name from the way paper-based information systems are named, each group of data is called a "file"; the structure and logic rules used to manage the groups of information and their names are called a "file system". There are many different kinds of file systems; each one has different structure and logic, and different properties of speed, security and more. Some file systems have been designed for specific applications. For example, the ISO 9660 file system is designed for optical discs. File systems can be used on numerous types of storage devices that use different kinds of media; as of 2019, hard disk drives have been key storage devices and are projected to remain so for the foreseeable future.
Other kinds of media that are used include SSDs, magnetic tapes and optical discs. In some cases, such as with tmpfs, the computer's main memory is used to create a temporary file system for short-term use. Some file systems are used on local data storage devices. Some file systems are "virtual", meaning that the supplied "files" are computed on request or are a mapping into a different file system used as a backing store. The file system manages access to the metadata about those files and is responsible for arranging storage space. Before the advent of computers, the term file system was used to describe a method of storing and retrieving paper documents. By 1961 the term was being applied to computerized filing alongside the original meaning, and by 1964 it was in general use. A file system consists of three layers. Sometimes the layers are explicitly separated, and sometimes the functions are combined. The logical file system is responsible for interaction with the user application. It provides the application program interface for file operations such as OPEN, CLOSE and READ, and passes the requested operation to the layer below it for processing.
The logical file system "manage[s] open file table entries and per-process file descriptors." This layer provides "file access, directory operations and protection." The second, optional layer is the virtual file system. "This interface allows support for multiple concurrent instances of physical file systems, each of which is called a file system implementation." The third layer is the physical file system. This layer is concerned with the physical operation of the storage device; it processes the physical blocks being read and written. It handles buffering and memory management and is responsible for the physical placement of blocks in specific locations on the storage medium. The physical file system interacts with the device drivers or with the channel to drive the storage device. (Note: this only applies to file systems used in storage devices.) File systems allocate space in a granular manner, usually in units of multiple physical blocks on the device. The file system is responsible for organizing files and directories, and for keeping track of which areas of the media belong to which file and which are not being used.
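The division of labor between the logical and physical layers can be sketched in a few dozen lines. This is a toy model, not a real file system: the class names, the in-memory "device", and the OPEN/READ/WRITE-style methods are all assumptions made for illustration. The logical layer keeps the directory and open-file table and speaks in files; the physical layer speaks only in fixed-size blocks.

```python
class PhysicalFileSystem:
    """Physical layer: reads and writes fixed-size blocks on a simulated device."""
    BLOCK_SIZE = 512

    def __init__(self, num_blocks):
        self.blocks = [b"\x00" * self.BLOCK_SIZE] * num_blocks

    def write_block(self, index, data):
        # Pad each block to the fixed physical block size.
        self.blocks[index] = data.ljust(self.BLOCK_SIZE, b"\x00")

    def read_block(self, index):
        return self.blocks[index]


class LogicalFileSystem:
    """Logical layer: maintains the directory and an open-file table,
    and delegates all block I/O to the physical layer below it."""

    def __init__(self, physical):
        self.physical = physical
        self.directory = {}    # filename -> list of block indices
        self.open_files = {}   # file descriptor -> filename
        self._next_fd = 0
        self._next_free_block = 0

    def open(self, name):
        fd = self._next_fd
        self._next_fd += 1
        self.open_files[fd] = name       # open-file table entry
        self.directory.setdefault(name, [])
        return fd

    def write(self, fd, data):
        name = self.open_files[fd]
        bs = self.physical.BLOCK_SIZE
        # Split the file into block-sized pieces; placement is the
        # physical layer's concern, bookkeeping is ours.
        for i in range(0, len(data), bs):
            self.physical.write_block(self._next_free_block, data[i:i + bs])
            self.directory[name].append(self._next_free_block)
            self._next_free_block += 1

    def read(self, fd):
        name = self.open_files[fd]
        raw = b"".join(self.physical.read_block(i) for i in self.directory[name])
        return raw.rstrip(b"\x00")


fs = LogicalFileSystem(PhysicalFileSystem(num_blocks=16))
fd = fs.open("notes.txt")
fs.write(fd, b"hello, layered file system")
print(fs.read(fd))  # -> b'hello, layered file system'
```

A virtual file system layer, omitted here, would sit between the two classes and dispatch the same logical calls to whichever physical implementation backs a given mount point.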
For example, Apple DOS of the early 1980s used 256-byte sectors on 140-kilobyte floppy disks, tracked with a track/sector map. This granularity results in unused space when a file is not an exact multiple of the allocation unit, sometimes referred to as slack space. For a 512-byte allocation unit, the average unused space per file is 256 bytes; for 64 KB clusters, the average unused space is 32 KB. The size of the allocation unit is chosen when the file system is created. Choosing the allocation size based on the average size of the files expected to be in the file system can minimize the amount of unusable space, and the default allocation may provide reasonable usage. Choosing an allocation size that is too small results in excessive overhead if the file system will contain very large files. File system fragmentation occurs as a file system is used: files are created, modified and deleted. When a file is created, the file system allocates space for the data; some file systems permit or require specifying an initial space allocation and subsequent incremental allocations as the file grows.
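The slack-space figures above follow from simple arithmetic: a file wastes whatever is left of its last allocation unit, which averages to half a unit per file. A small sketch (the function name is mine, not from any real file system API) makes the computation concrete:

```python
def slack_space(file_size, allocation_unit):
    """Bytes left unused in the last allocation unit of a file."""
    remainder = file_size % allocation_unit
    return 0 if remainder == 0 else allocation_unit - remainder

# A 1000-byte file in 512-byte units occupies two units, wasting 24 bytes:
print(slack_space(1000, 512))       # -> 24
# A 100-byte file in a 64 KB cluster wastes almost the whole cluster:
print(slack_space(100, 64 * 1024))  # -> 65436
# An exact multiple wastes nothing:
print(slack_space(1024, 512))       # -> 0
```

Averaged over many files of random sizes, the remainder is uniformly distributed, so the expected waste per file is half the allocation unit: 256 bytes for 512-byte units, 32 KB for 64 KB clusters, matching the figures in the text.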
As files are deleted, the space they were allocated is considered available for use by other files. This creates alternating used and unused areas of various sizes; this is free space fragmentation. When a file is created and there is not an area of contiguous space available for its initial allocation, the space must be assigned in fragments. When a file is modified such that it becomes larger, it may exceed the space initially allocated to it; another allocation must be assigned elsewhere and the file becomes fragmented. A filename is used to identify a storage location in the file system. Most file systems have restrictions on the length of filenames. In some file systems, filenames are not case sensitive. Most modern file systems allow filenames to contain a wide range of characters from the Unicode character set. However, they may have restrictions on the use of certain s
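The fragmentation mechanism described above can be demonstrated with a toy first-fit allocator over a free-block bitmap. This is an illustrative sketch under assumed rules (real allocators are far more sophisticated): when no contiguous run of free blocks is large enough, the allocation falls back to scattered blocks and the file is stored in fragments.

```python
def allocate(free_map, blocks_needed):
    """First-fit allocator over a bitmap of free (True) blocks.
    Returns the block indices assigned; a non-contiguous result
    means the new file is fragmented from the start."""
    # First, look for a single contiguous run of free blocks.
    run = []
    for i, free in enumerate(free_map):
        if free:
            run.append(i)
            if len(run) == blocks_needed:
                break
        else:
            run = []
    if len(run) == blocks_needed:
        chosen = run
    else:
        # No run is big enough: assign scattered free blocks instead.
        chosen = [i for i, free in enumerate(free_map) if free][:blocks_needed]
    for i in chosen:
        free_map[i] = False  # mark the blocks as in use
    return chosen

# Deleted files have left free "holes" at blocks 1, 4, 5 and 7
# (free space fragmentation). A new 3-block file cannot fit
# contiguously, so it is assigned in fragments:
free_map = [False, True, False, False, True, True, False, True]
print(allocate(free_map, 3))  # -> [1, 4, 5]
```

The same fallback path models a growing file: when the blocks adjacent to its current allocation are taken, the extension lands elsewhere on the device and the file becomes fragmented.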