1.
File system
–
In computing, a file system or filesystem is used to control how data is stored and retrieved. Without a file system, information placed in a storage medium would be one large body of data with no way to tell where one piece of information stops and the next begins. By separating the data into pieces and giving each piece a name, the information is easily isolated and identified. Taking its name from the way paper-based information systems are named, each group of data is called a "file", and the structure and logic rules used to manage the groups of information and their names is called a "file system". There are many different kinds of file systems. Each one has a different structure and logic, and different properties of speed, flexibility, security, size and more. Some file systems have been designed for specific applications; for example, the ISO 9660 file system is designed specifically for optical discs. File systems can be used on many types of storage devices that use different kinds of media. The most common storage device in use today is the hard disk drive. Other kinds of media in use include flash memory and magnetic tapes; in some cases, such as with tmpfs, the computer's main memory is used to create a temporary file system for short-term use. Some file systems are used on local storage devices; others provide file access via a network protocol. Some file systems are virtual, meaning that the files served are computed on request or are merely a mapping into a different file system used as a backing store. The file system manages access to both the content of files and the metadata about those files. It is responsible for arranging storage space; reliability and efficiency with regard to the physical storage medium are important design considerations. Before the advent of computers, the term file system was used to describe a method of storing and retrieving paper documents. By 1961 the term was being applied to computerized filing alongside the original meaning; by 1964 it was in general use. A file system consists of two or three layers. Sometimes the layers are explicitly separated, and sometimes the functions are combined. The logical file system is responsible for interaction with the user application. It provides the application program interface for file operations (OPEN, CLOSE, READ, etc.) and passes the requested operation to the layer below it for processing. The logical file system manages open file table entries and per-process file descriptors; this layer provides file access, directory operations, and security and protection. The second, optional layer is the virtual file system. This interface allows support for multiple concurrent instances of physical file systems. The third layer is the physical file system, which is concerned with the physical operation of the storage device.
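To make the logical file system's role concrete, here is a minimal sketch in C of the OPEN/READ/CLOSE sequence described above, using the POSIX calls that the logical file system layer services; the path used is purely illustrative.

```c
/* Minimal sketch of the logical file system's API as seen by an
 * application: open a file by name, read its bytes, close it.
 * The path "/etc/hostname" is only an illustrative example. */
#include <fcntl.h>      /* open */
#include <unistd.h>     /* read, close */
#include <stdio.h>

int main(void)
{
    char buf[256];
    int fd = open("/etc/hostname", O_RDONLY);   /* OPEN */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof buf - 1);  /* READ */
    if (n > 0) {
        buf[n] = '\0';
        printf("read %zd bytes: %s", n, buf);
    }
    close(fd);                                  /* CLOSE */
    return 0;
}
```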
2.
Unix
–
Among these is Apple's macOS, which is the Unix version with the largest installed base as of 2014. Many Unix-like operating systems have arisen over the years, of which Linux is the most popular. Unix was originally meant to be a convenient platform for programmers developing software to be run on it and on other systems, rather than for non-programmer users. The system grew larger as it spread through academic circles and as users added their own tools to the system and shared them with colleagues. Unix was designed to be portable, multi-tasking and multi-user in a time-sharing configuration; these design concepts, together with its modular approach, are collectively known as the Unix philosophy. By the early 1980s users began seeing Unix as a potential universal operating system, suitable for computers of all sizes. Under Unix, the system consists of many utilities along with the master control program, the kernel. To mediate access to the hardware, the kernel has special rights, reflected in the division between user space and kernel space. The microkernel concept was introduced in an effort to reverse the trend towards larger kernels and return to a system in which most tasks were completed by smaller utilities. In an era when a standard computer consisted of a hard disk for storage and a data terminal for input and output, the Unix file model worked quite well, as most I/O was linear. However, modern systems include networking and other new devices, and as graphical user interfaces developed, the file model proved inadequate to the task of handling asynchronous events such as those generated by a mouse. In the 1980s, non-blocking I/O and the set of inter-process communication mechanisms were augmented with Unix domain sockets, shared memory, message queues, and semaphores. In microkernel implementations, functions such as network protocols could be moved out of the kernel. Multics introduced many innovations, but had many problems. Frustrated by the size and complexity of Multics but not by its aims, individual researchers at Bell Labs started withdrawing from the project. The last researchers to leave Multics, Ken Thompson, Dennis Ritchie, M. D. McIlroy, and J. F. Ossanna, decided to redo the work on a much smaller scale. The name Unics, a pun on Multics, was suggested for the project in 1970. Peter H. Salus credits Peter Neumann with the pun, while Brian Kernighan claims the coining for himself. In 1972, Unix was rewritten in the C programming language. Bell Labs produced several versions of Unix that are collectively referred to as Research Unix. In 1975, the first source license for UNIX was sold to faculty at the University of Illinois Department of Computer Science; UIUC graduate student Greg Chesson was instrumental in negotiating the terms of this license. During the late 1970s and early 1980s, the influence of Unix in academic circles led to adoption of Unix by commercial startups and vendors, producing variants such as HP-UX, Solaris, and AIX, and systems from companies such as Sequent. In the late 1980s, AT&T Unix System Laboratories and Sun Microsystems developed System V Release 4, and in the 1990s, Unix-like systems grew in popularity as Linux and BSD distributions were developed through collaboration by a worldwide network of programmers
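As an illustration of one of the IPC mechanisms mentioned above, the following hedged C sketch creates a Unix domain socket pair and passes a message from a child process to its parent; it is a minimal demonstration, not a depiction of any particular historical system.

```c
/* Hedged sketch: parent and child exchanging a message over a
 * Unix domain socket pair, one of the IPC mechanisms mentioned
 * above. Error handling is abbreviated for brevity. */
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
        perror("socketpair");
        return 1;
    }
    if (fork() == 0) {                 /* child: writes on sv[1] */
        const char *msg = "hello over AF_UNIX";
        write(sv[1], msg, strlen(msg) + 1);
        _exit(0);
    }
    char buf[64];                      /* parent: reads on sv[0] */
    read(sv[0], buf, sizeof buf);
    printf("parent received: %s\n", buf);
    wait(NULL);
    return 0;
}
```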
3.
Unix-like
–
A Unix-like operating system is one that behaves in a manner similar to a Unix system, while not necessarily conforming to or being certified to any version of the Single UNIX Specification. A Unix-like application is one that behaves like the corresponding Unix command or shell. There is no standard for defining the term, and some difference of opinion is possible as to the degree to which a given operating system or application is Unix-like. The Open Group owns the UNIX trademark and administers the Single UNIX Specification; they do not approve of the construction "Unix-like", and consider it a misuse of their trademark. Other parties frequently treat Unix as a genericized trademark. In 2007, Wayne R. Gray sued to dispute the status of UNIX as a trademark, but lost his case, and lost again on appeal, with the court upholding the trademark and its ownership. Unix-like systems started to appear in the late 1970s and early 1980s. Many proprietary versions, such as Idris, UNOS, Coherent, and UniFlex, aimed to provide businesses with the functionality available to academic users of UNIX. Growing incompatibility among these systems led to the creation of interoperability standards, including POSIX and the Single UNIX Specification. Various free, low-cost, and unrestricted substitutes for UNIX emerged in the 1980s and 1990s, including 4.4BSD and Linux; these largely displaced the proprietary clones. Some of these have in turn been the basis for commercial Unix-like systems, such as BSD/OS and OS X. The various BSD variants are notable in that they are in fact descendants of UNIX; however, the BSD code base has evolved since then, replacing all of the AT&T code, and since the BSD variants are not certified as compliant with the Single UNIX Specification, they are referred to as Unix-like. Dennis Ritchie, one of the original creators of Unix, expressed his opinion that Unix-like systems such as Linux are de facto Unix systems. Eric S. Raymond and Rob Landley have suggested there are three kinds of Unix-like systems. Genetic UNIX: those systems with a historical connection to the AT&T codebase. Most commercial UNIX systems fall into this category; so do the BSD systems, which are descendants of work done at the University of California, Berkeley in the late 1970s and early 1980s. Some of these systems have no original AT&T code but can still trace their ancestry to AT&T designs. Trademark or branded UNIX: these systems, largely commercial in nature, have been determined by the Open Group to meet the Single UNIX Specification and are allowed to carry the UNIX name; many ancient UNIX systems no longer meet this definition. Around 2001, Linux was given the opportunity to obtain a certification, including free help from the POSIX chair Andrew Josey, for the price of one dollar. Functional UNIX: broadly, any system that behaves in a manner roughly consistent with the UNIX specification. Some non-Unix-like operating systems provide a Unix-like compatibility layer, with variable degrees of Unix-like functionality. IBM z/OS's UNIX System Services is sufficiently complete to be certified as trademark UNIX. Cygwin and MSYS both provide a GNU environment on top of the Microsoft Windows user API, sufficient for most common open source software to be compiled and run. The Subsystem for Unix-based Applications provides Unix-like functionality as a Windows NT subsystem, and the Windows Subsystem for Linux provides a Linux-compatible kernel interface developed by Microsoft and containing no Linux code, with Ubuntu user-mode binaries running on top of it
4.
Linux
–
Linux is a Unix-like computer operating system assembled under the model of free and open-source software development and distribution. The defining component of Linux is the Linux kernel, an operating system kernel first released on September 17, 1991 by Linus Torvalds. The Free Software Foundation uses the name GNU/Linux to describe the operating system, which has led to some controversy. Linux was originally developed for personal computers based on the Intel x86 architecture, but has since been ported to more platforms than any other operating system. Because of the dominance of Android on smartphones, Linux has the largest installed base of all general-purpose operating systems. Linux is also the leading operating system on servers and other big iron systems such as mainframe computers. It is used by around 2.3% of desktop computers; the Chromebook, which runs on Chrome OS, dominates the US K–12 education market and represents nearly 20% of the sub-$300 notebook sales in the US. Linux also runs on embedded systems, devices whose operating system is built into the firmware and is highly tailored to the system; this includes TiVo and similar DVR devices, network routers, facility automation controls, and televisions, and many smartphones and tablet computers run Android and other Linux derivatives. The development of Linux is one of the most prominent examples of free and open-source software collaboration. The underlying source code may be used, modified and distributed, commercially or non-commercially, by anyone under the terms of its respective licenses, such as the GNU General Public License. Typically, Linux is packaged in a form known as a Linux distribution for both desktop and server use. Distributions intended to run on servers may omit all graphical environments from the standard install, and because Linux is freely redistributable, anyone may create a distribution for any intended use. The Unix operating system was conceived and implemented in 1969 at AT&T's Bell Laboratories in the United States by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna. First released in 1971, Unix was written entirely in assembly language, as was common practice at the time. Later, in a key pioneering approach in 1973, it was rewritten in the C programming language by Dennis Ritchie. The availability of a high-level language implementation of Unix made its porting to different computer platforms easier. Due to an earlier antitrust case forbidding it from entering the computer business, AT&T was required to license the operating system's source code to anyone who asked; as a result, Unix grew quickly and became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of Bell Labs; freed of the legal obligation requiring free licensing, Bell Labs began selling Unix as a proprietary product. The GNU Project, started in 1983 by Richard Stallman, has the goal of creating a complete Unix-compatible software system composed entirely of free software. Later, in 1985, Stallman started the Free Software Foundation. By the early 1990s, many of the programs required in an operating system were completed, although low-level elements such as device drivers, daemons, and the kernel were stalled and incomplete. Linus Torvalds has stated that if the GNU kernel had been available at the time, he probably would not have decided to write his own. Although not released until 1992 due to legal complications, development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux. Torvalds has also stated that if 386BSD had been available at the time, he probably would not have created Linux. Although the complete source code of MINIX was freely available, the licensing terms prevented it from being free software until the licensing changed in April 2000
5.
Disk partitioning
–
Disk partitioning or disk slicing is the creation of one or more regions on a hard disk or other secondary storage, so that an operating system can manage information in each region separately. Partitioning is typically the first step of preparing a newly manufactured disk. The disk stores the information about the partitions' locations and sizes in an area known as the partition table, which the operating system reads before any other part of the disk. Each partition then appears in the operating system as a distinct logical disk that uses part of the actual disk. Partitioning a drive means dividing the total storage of the drive into different pieces; once a partition is created, it can then be formatted so that it can be used on a computer. Creating more than one partition has the following advantages: separation of the operating system from user files, which allows image backups to be made of only the operating system; having a separate area for operating system virtual memory swapping/paging; keeping frequently used programs and data near each other; having cache and log files separate from other files (these can change size dynamically and rapidly, potentially making a file system full); use of multi-boot setups, which allow users to have more than one operating system on a single computer; protecting or isolating files, to make it easier to recover a corrupted file system or operating system installation (if one partition is corrupted, other file systems may not be affected); raising overall computer performance on systems where smaller file systems are more efficient; and "short stroking", which aims to minimize performance-eating head repositioning delays by reducing the number of tracks used per HDD. The basic idea of short stroking is to make one partition of approximately 20–25% of the total size of the drive. This partition is expected to occupy the outer tracks of the HDD, which offer the highest transfer rates; if you limit capacity with short stroking, the minimum throughput stays much closer to the maximum. This technique, however, is not related to creating multiple partitions. For example, a 1 TB disk may have an access time of 12 ms at 200 IOPS with an average throughput of 100 MB/s; when it is partitioned to 100 GB, access time may be decreased to 6 ms at 300 IOPS with a throughput of 200 MB/s. Partitioning for significantly less than the size available when disk space is not needed can also reduce the time for diagnostic tools such as checkdisk to run or for full image backups to run. On the other hand, partitioning prevents disk optimizers from moving all frequently accessed files closer to each other on the disk; files can still be moved closer to each other, but only within each partition. This issue does not apply to solid-state drives, as access times on those are neither affected by nor dependent upon relative sector positions. Partitioning may also prevent using the whole disk capacity, because it may break free capacity apart into separate pieces.
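As a rough illustration of the partition table the operating system reads before any other part of the disk, here is a hedged C sketch that parses a classic MBR from a raw image file; the file name disk.img is a placeholder, and only the traditional MBR layout (four 16-byte entries at offset 446, signature 0x55 0xAA at offset 510) is assumed.

```c
/* Minimal sketch of reading a classic MBR partition table from a
 * raw disk image. "disk.img" is a placeholder path. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    unsigned char mbr[512];
    FILE *f = fopen("disk.img", "rb");
    if (!f || fread(mbr, 1, sizeof mbr, f) != sizeof mbr) {
        perror("disk.img");
        return 1;
    }
    if (mbr[510] != 0x55 || mbr[511] != 0xAA) {
        fprintf(stderr, "no MBR signature\n");
        return 1;
    }
    for (int i = 0; i < 4; i++) {
        const unsigned char *e = mbr + 446 + 16 * i;   /* entry i */
        uint32_t lba   = e[8]  | e[9]  << 8 | e[10] << 16 | (uint32_t)e[11] << 24;
        uint32_t count = e[12] | e[13] << 8 | e[14] << 16 | (uint32_t)e[15] << 24;
        if (e[4] != 0)   /* partition type byte 0 means unused */
            printf("partition %d: type 0x%02x, start LBA %u, %u sectors\n",
                   i + 1, e[4], (unsigned)lba, (unsigned)count);
    }
    fclose(f);
    return 0;
}
```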
6.
Unix filesystem
–
In Unix and operating systems inspired by it, the file system is considered a central component of the operating system. It was also one of the first parts of the system to be designed and implemented by Ken Thompson in the first experimental version of Unix. The filesystem also provides access to other resources through so-called device files that are entry points to terminals, printers, and mice. The rest of this article uses Unix as a generic name to refer to both the original Unix operating system and its many workalikes. The filesystem appears as one rooted tree of directories; the root of the entire tree is denoted /. In the original Bell Labs Unix, a two-disk setup was customary, and the second disk was mounted at the empty directory named usr on the first disk. Unix directories do not contain files; instead, they contain the names of files paired with references to so-called inodes. Multiple names in the file system may refer to the same file, a feature termed a hard link. The mathematical traits of hard links make the file system a limited type of directed acyclic graph, although the directories still form a tree. The original Unix file system supported three types of files: ordinary files, directories, and special files, also termed device files. The Berkeley Software Distribution and System V each added a file type to be used for inter-process communication; BSD added sockets. BSD also added symbolic links to the range of file types, which are files that refer to other files. Symlinks were modeled after a similar feature in Multics, and differ from hard links in that they may span filesystems. Other Unix systems may support added types of files. Certain conventions exist for locating some kinds of files, such as programs, system configuration files, and users' home directories. These were first documented in the hier(7) man page since Version 7 Unix; subsequent versions have a similar page. The details of the directory layout have varied over time.
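The pairing of names with inodes can be demonstrated directly. The following is a minimal C sketch (file names a.txt and b.txt are illustrative only) that creates a hard link and shows that both names resolve to the same inode.

```c
/* Hedged sketch: create a file, give it a second name with link(),
 * and show via stat() that both names resolve to the same inode. */
#include <sys/stat.h>
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int fd = open("a.txt", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) { perror("open"); return 1; }
    close(fd);

    if (link("a.txt", "b.txt") < 0) { perror("link"); return 1; }

    struct stat a, b;
    stat("a.txt", &a);
    stat("b.txt", &b);
    printf("a.txt inode %llu, b.txt inode %llu, link count %llu\n",
           (unsigned long long)a.st_ino,
           (unsigned long long)b.st_ino,
           (unsigned long long)a.st_nlink);
    /* Both inode numbers match: two directory entries, one file. */
    unlink("b.txt");
    unlink("a.txt");
    return 0;
}
```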
7.
System administrator
–
A system administrator, or sysadmin, is a person who is responsible for the upkeep, configuration, and reliable operation of computer systems, especially multi-user computers such as servers. The system administrator seeks to ensure that the uptime, performance, resources, and security of the computers they manage meet the needs of the users. Many organizations staff other jobs related to system administration. In a larger company, these may all be separate positions within a computer support or Information Services department. In a smaller group they may be shared by a few sysadmins. A database administrator maintains a database system, and is responsible for the integrity of the data and the efficiency and performance of the system. A network administrator maintains network infrastructure such as switches and routers. A security administrator is a specialist in computer and network security, including the administration of security devices such as firewalls, as well as consulting on general security measures. A web administrator maintains web server services that allow for internal or external access to web sites; tasks include managing multiple sites, administering security, and configuring necessary components and software. Responsibilities may also include software change management. A computer operator performs routine maintenance and upkeep, such as changing backup tapes or replacing failed drives in a redundant array of independent disks. There are multiple paths to becoming a system administrator. Many system administrators have a degree in a related field such as computer science or information technology; on top of this, nowadays some companies require an IT certification. Other schools have offshoots of their Computer Science program specifically for system administration. An alternate path to becoming a system administrator is to simply dive in without formal training, learning the systems one needs to support while doing other non-IT work. This is a common route for informally trained system administrators, and is often the result in small organizations that lack IT departments but have gradually growing needs. These informally trained system administrators could be regarded as hackers, but they do their work in support of the needs of their organization. Some schools have started offering undergraduate degrees in System Administration. The first, at Rochester Institute of Technology, started in 1992; others such as Rensselaer Polytechnic Institute, University of New Hampshire, Marist College, and Drexel University have more recently offered degrees in Information Technology. Symbiosis Institute of Computer Studies and Research in Pune, India offers a master's degree in Computer Applications with a specialization in System Administration, and the University of South Carolina offers an Integrated Information Technology B.S. degree specializing in Microsoft product support. Several U.S. universities, including Rochester Institute of Technology, Tufts, and Michigan Tech, have graduate programs in system administration. In Norway, there is a special English-taught MSc programme organized by Oslo University College in cooperation with Oslo University, named "Masters programme in Network and System Administration". There is also a BSc in Network and System Administration offered by Gjøvik University College, and the University of Amsterdam offers a similar programme in cooperation with Hogeschool van Amsterdam, named "Master System and Network Engineering". Many schools around the world offer related graduate degrees in fields such as network systems.
By the time a new textbook has spent years working through approvals and committees, the specific technology for which it is written may have changed significantly or become obsolete. In addition, because of the practical nature of system administration and the easy availability of open-source server software, many system administrators enter the field self-taught. Some learning institutions are reluctant to teach what is, in effect, a vocational skill. Generally, a prospective system administrator will be required to have some experience with the computer system they are expected to manage.
8.
Solaris (operating system)
–
Solaris is a Unix operating system originally developed by Sun Microsystems. It superseded their earlier SunOS in 1993. Oracle Solaris, so named as of 2010, has been owned by Oracle Corporation since the Sun acquisition by Oracle in January 2010. Solaris is known for its scalability, especially on SPARC systems. Solaris supports SPARC-based and x86-based workstations and servers from Oracle and other vendors, with efforts underway to port it to additional platforms. Solaris is registered as compliant with the Single Unix Specification. Historically, Solaris was developed as proprietary software. In June 2005, Sun Microsystems released most of the codebase under the CDDL license and founded the OpenSolaris open-source project; with OpenSolaris, Sun wanted to build a developer and user community around the software. After the acquisition of Sun Microsystems in January 2010, Oracle decided to discontinue the OpenSolaris distribution. In August 2010, Oracle discontinued providing public updates to the source code of the Solaris kernel, effectively turning Solaris 11 back into a closed source proprietary operating system. Following that, in 2011 the Solaris 11 kernel source code leaked to BitTorrent; however, through the Oracle Technology Network, industry partners can still gain access to the in-development Solaris source code. Source code for the open source components of Solaris 11 is available for download from Oracle. In 1987, AT&T Corporation and Sun announced that they were collaborating on a project to merge the most popular Unix variants on the market at that time, BSD and System V; this became Unix System V Release 4 (SVR4). On September 4, 1991, Sun announced that it would replace its existing BSD-derived Unix, SunOS 4, with one based on SVR4. This was identified internally as SunOS 5, but a new marketing name was introduced at the same time: Solaris 2. The justification for this new overbrand was that it encompassed not only SunOS, but also the OpenWindows graphical user interface and Open Network Computing functionality. Although SunOS 4.1.x micro releases were retroactively named Solaris 1 by Sun, the name is chiefly used for SVR4-based releases. For releases based on SunOS 5, the SunOS minor version is included in the Solaris release number; for example, Solaris 2.4 incorporates SunOS 5.4. After Solaris 2.6, the "2." was dropped from the name, so Solaris 7 incorporates SunOS 5.7. Although SunSoft stated in its initial Solaris 2 press release their intent to support both SPARC and x86 systems, the first two Solaris 2 releases, 2.0 and 2.1, were SPARC-only. An x86 version of Solaris 2.1 was released in June 1993, about 6 months after the SPARC version, as a desktop operating system; it included the Wabi emulator to support Windows applications. At the time, Sun also offered the Interactive Unix system that it had acquired from Interactive Systems Corporation. In 1994, Sun released Solaris 2.4, supporting both SPARC and x86 systems from a unified source code base. Solaris uses a common code base for the platforms it supports, including SPARC
9.
KDE
–
KDE is an international free software community that develops Free and Libre software. Well-known products include the Plasma Desktop, KDE Frameworks, and a range of applications designed to run on modern Unix-like systems. It further provides tools and documentation for developers that enable them to write software; this supporting role makes KDE a central development hub and home for many popular applications and projects like Calligra Suite, Krita, or digiKam. The Plasma Desktop, being one of the most recognized projects of KDE, is the default desktop environment on many Linux distributions, such as openSUSE, Mageia, Chakra, and Kubuntu. It was also the default desktop environment on PC-BSD, but was replaced with Lumina. The work of the KDE community can be measured in the following figures: more than 1800 contributors participate in developing KDE software, and about 20 new developers contribute their first code each month. KDE software consists of over 6 million lines of code and is translated into over 108 languages. KDE software is available on more than 114 official FTP mirrors in over 34 countries, and a read-only mirror of all repositories can be found on GitHub. The K Desktop Environment was founded in 1996 by Matthias Ettrich, who was then a student at the Eberhard Karls University of Tübingen. At the time, he was troubled by certain aspects of the Unix desktop. Among his concerns was that none of the applications looked, felt, or worked alike. He proposed the creation of not merely a set of applications but a desktop environment in which users could expect things to look, feel, and work consistently. He also wanted to make this desktop easy to use; one of his complaints about desktop applications of the time was that they were too complicated for end users. His initial Usenet post spurred a lot of interest, and the KDE project was born. The name KDE was intended as a wordplay on the existing Common Desktop Environment (CDE), available for Unix systems. CDE is an X11-based user environment jointly developed by HP, IBM, and others; it was supposed to be an intuitively easy-to-use desktop computer environment. The K was originally suggested to stand for "Kool", but it was decided that the K should stand for nothing in particular. Therefore, the KDE initialism expanded to "K Desktop Environment" before it was dropped altogether in favor of "KDE = Community" due to the rebranding effort. The rebranding focused on de-emphasizing the desktop environment as just another product: what would have been previously known as KDE 4 was split into three products, Plasma Workspaces, KDE Applications, and KDE Platform, bundled as KDE Software Compilation 4. As of today the name KDE no longer stands for K Desktop Environment. The financial and legal matters of KDE are handled by KDE e.V.
10.
Udev
–
Udev is a device manager for the Linux kernel. As the successor of devfsd and hotplug, udev primarily manages device nodes in the /dev directory. At the same time, udev also handles all user space events raised while hardware devices are added into the system or removed from it. It is the operating system's kernel that is responsible for providing an abstract interface of the hardware to the rest of the software. Being a monolithic kernel, the Linux kernel does exactly that: device drivers are part of the Linux kernel, and hardware can be accessed through system calls or over their device nodes. Udev itself runs in user space, which serves security and stability purposes. Device discovery, state changes, and the like are handled by the Linux kernel; but after loading the driver into memory, the action the kernel takes is to send out an event to a user space daemon. It is the device manager, udevd, that catches all of these events. For this, udevd has a comprehensive set of configuration files. In case a new storage device is connected over USB, udevd is notified by the kernel and notifies other interested daemons; such a daemon could then mount the file systems. In case a new Ethernet cable is plugged into the Ethernet NIC, udevd is notified by the kernel and itself notifies the NetworkManager daemon. The NetworkManager daemon could start dhclient for that NIC, or configure it according to some manual configuration. Dealing with hardware devices directly can be complex, and that complexity forces application authors to re-implement hardware support logic. Some hardware devices also require privileged helper programs to prepare them for use. These must often be invoked in ways that can be awkward to express with the Unix permissions model; application authors resort to using setuid binaries or run service daemons to provide their own access control and privilege separation, potentially introducing security holes each time. HAL was created to deal with this, and udev in turn replaced HAL. The default udev setup provides persistent names for storage devices: any hard disk is recognized by its unique filesystem id, the name of the disk, and the physical location of the hardware it is connected to. Udev executes entirely in user space, as opposed to devfs's kernel space. Udev, as a whole, is divided into three parts: the library libudev that allows access to device information, which was incorporated into the systemd 183 software bundle; the user space daemon udevd that manages the virtual /dev; and the administrative command-line utility udevadm for diagnostics. The system gets calls from the kernel via a netlink socket. Earlier versions used hotplug, adding a link to themselves in /etc/hotplug.d/default for this purpose
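As a hedged sketch of how a user-space program can query the device information udev maintains, the following C program uses libudev (compile with -ludev on a Linux system with the libudev headers installed) to list block devices; the choice of the "block" subsystem is just an example.

```c
/* Hedged sketch using libudev to enumerate block devices, roughly
 * what tools built on udev do to discover hardware. */
#include <libudev.h>
#include <stdio.h>

int main(void)
{
    struct udev *udev = udev_new();
    if (!udev) return 1;

    struct udev_enumerate *en = udev_enumerate_new(udev);
    udev_enumerate_add_match_subsystem(en, "block");
    udev_enumerate_scan_devices(en);

    struct udev_list_entry *entry;
    udev_list_entry_foreach(entry, udev_enumerate_get_list_entry(en)) {
        const char *syspath = udev_list_entry_get_name(entry);
        struct udev_device *dev =
            udev_device_new_from_syspath(udev, syspath);
        if (dev) {
            const char *node = udev_device_get_devnode(dev);
            if (node)
                printf("%s -> %s\n", syspath, node);  /* e.g. /dev node */
            udev_device_unref(dev);
        }
    }
    udev_enumerate_unref(en);
    udev_unref(udev);
    return 0;
}
```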
11.
Hot swapping
–
Hot swapping and hot plugging consist of replacing or adding components without stopping or shutting down the system. Once the appropriate software is installed on the computer, a user can plug and unplug the component without rebooting. A well-known example of this functionality is the Universal Serial Bus (USB), which allows users to add or remove peripheral components such as a mouse, keyboard, or printer. Computer components are described as cold-pluggable if the computer system must be powered down to add or remove them; the opposite term is hot-pluggable, and hot-pluggable components can be added or removed without powering down the computer. Most components in computer systems, such as CPUs and memory, are only cold-pluggable; however, it is common for high-end servers and mainframes to feature hot-plug capability for other components, such as PCIe. The terms hot plug and cold plug can be taken to mean two different things, depending on the context. In a more generic context, hot plug is the ability to add or remove hardware without powering down the system. Hot swapping is used whenever it is desirable to change the configuration or repair a working system without interrupting its operation. Hot swapping may be used to add or remove peripherals or components, or to allow a device to synchronize data with a computer. For example, a machine may have dual power supplies, each adequate to power the machine; a faulty one may then be hot-swapped. Machines that support hot swapping need to be able to modify their operation for the changed configuration, either automatically on detecting the change or through user intervention. All electrical and mechanical connections associated with hot-swapping must be designed so that neither the equipment nor the user can be harmed while hot-swapping, and other components in the system must be designed so that the removal of a hot-swappable component does not interrupt operation. There are two slightly differing meanings of the term hot swapping. It may refer only to the ability to add or remove hardware without powering down the system, while the system software may have to be notified by the user of the event in order to cope with it; examples include RS-232 and lower-end SCSI devices. This is sometimes called cold plugging. However, the term may also refer to the ability of the system itself to detect and respond to addition or removal of hardware; examples include USB, FireWire and higher-end SCSI devices. Some implementations require a component shutdown procedure prior to removal. This simplifies the design, but such devices are not robust in the case of component failure; if a component is removed while it is being used, the operations to that device fail. In these systems hot swap is normally used for regular maintenance to the computer, or to replace a broken component. Most modern hot-swap methods use a connector with staggered pins. Most staggered-pin designs have ground pins longer than the others, ensuring that no sensitive circuitry is connected before there is a reliable system ground. Pins of the same nominal length do not necessarily make contact at exactly the same time due to mechanical tolerances. Specialized hot-plug power connector pins are now available with repeatable DC current interruption ratings of up to 16 A
12.
Dump (program)
–
Dump is a Unix program used to back up file systems. It operates on blocks, below filesystem abstractions such as files and directories. Dump can back up a file system to a tape or another disk. It is often used across a network by piping its output through bzip2 and then SSH. A dump utility first appeared in Version 6 AT&T UNIX.
13.
Fsck
–
The system utility fsck (file system consistency check) is a tool for checking the consistency of a file system in Unix and Unix-like operating systems, such as Linux and OS X. A similar command, CHKDSK, exists in Microsoft Windows. Fsck can be pronounced F-S-C-K, F-S-check, fizz-check, F-sack, fisk, fizik, F-sick, F-sock, fuck, fucked, F-suck, F-sek, the sibilant fsk, farsk or fusk. Generally, fsck is run either automatically at boot time, or manually by the system administrator. The command works directly on data structures stored on disk, which are internal and specific to the particular file system in use, so an fsck command tailored to that file system is generally required. The exact behaviors of various fsck implementations vary, but they typically follow a common order of internal operations. Partially recovered files, where the original file name cannot be reconstructed, are typically recovered to a lost+found directory that is stored at the root of the file system. A system administrator can also run fsck manually if they believe there is a problem with the file system. The file system is normally checked while unmounted or mounted read-only. Modern journaling file systems are designed such that tools such as fsck do not need to be run after an unclean shutdown. The UFS2 filesystem in FreeBSD has a background fsck, so it is not necessary to wait for fsck to finish before accessing the disk. Full copy-on-write file systems such as ZFS and Btrfs are designed to avoid most causes of corruption and have no traditional fsck repair tool; both have a scrub utility which examines and repairs any problems, in the background and on a mounted filesystem. The equivalent programs on Microsoft Windows and MS-DOS are CHKDSK and SCANDISK. The severity of file system corruption led to the terms fsck and fscked becoming used among Unix system administrators as a minced oath for fuck and fucked
14.
Whitespace character
–
In computer programming, white space is any character or series of characters that represent horizontal or vertical space in typography. When rendered, a whitespace character does not correspond to a visible mark. For example, the common whitespace symbol U+0020 SPACE (also ASCII 32) represents a blank space punctuation character in text. With many keyboard layouts, a horizontal whitespace character may be entered through the use of a spacebar. Horizontal white space may also be entered on many keyboards through the use of the Tab ↹ key, although the length of the space may vary. Vertical white space is a bit more varied as to how it is encoded. Many early computer games used such codes to draw a screen. The term "white space" is based on the resulting appearance on ordinary paper. However they are coded inside an application, white space can be processed the same as any other character code. The most common whitespace characters may be typed via the space bar or the tab key. Depending on context, a line-break generated by the return or enter key may be considered white space as well. The table below lists the twenty-five characters defined as whitespace characters in the Unicode Character Database. Seventeen use a definition of white space consistent with the algorithm for bidirectional writing and are known as Bidi-WS characters. The remaining characters may also be used, but are not of this Bidi type. Note: depending on the browser and fonts used to view the following table, not all spaces may be displayed properly. Some fonts display the character as a blank; however, the Unicode standard explicitly states that it does not act as a space. Exact space: the Cambridge Z88 provided a special "exact space" displayed as "…" by the system's display driver; it was therefore known as "dot space" in conjunction with BBC BASIC. Under codepoint 224 the computer also provided a special three-character-cells-wide SPACE symbol, "SPC". In some cases, spaces are shown simply as blank space; many different characters could be used to produce spaces, and non-character functions can also affect white space. In computer character encodings, there is a normal general-purpose space (U+0020) whose width will vary according to the design of the typeface; typical values range from 1/5 em to 1/3 em
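For the ASCII subset of these characters, classification can be illustrated with the standard C library; the following minimal sketch checks a few sample code points with isspace(). Unicode whitespace beyond ASCII would require a Unicode-aware library, which this sketch deliberately avoids.

```c
/* Hedged sketch: classify a few code points as whitespace or not,
 * using the C library's isspace() for the ASCII range. */
#include <ctype.h>
#include <stdio.h>

int main(void)
{
    /* U+0020 SPACE, U+0009 TAB, U+000A LINE FEED, and 'x' for contrast */
    unsigned char samples[] = { 0x20, 0x09, 0x0A, 'x' };
    for (size_t i = 0; i < sizeof samples; i++)
        printf("0x%02X is%s whitespace\n",
               samples[i], isspace(samples[i]) ? "" : " not");
    return 0;
}
```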
15.
Octal
–
The octal numeral system, or oct for short, is the base-8 number system, and uses the digits 0 to 7. Octal numerals can be made from binary numerals by grouping consecutive binary digits into groups of three, starting from the right. For example, the binary representation for decimal 74 is 1001010. Two zeroes can be added at the left, giving 001 001 010, corresponding to the octal digits 1 1 2, so 74 in decimal is 112 in octal. In the decimal system each decimal place is a power of ten; for example, 74₁₀ = 7 × 10¹ + 4 × 10⁰. In the octal system each place is a power of eight; for example, 112₈ = 1 × 8² + 1 × 8¹ + 2 × 8⁰ = 74₁₀. The Yuki language in California and the Pamean languages in Mexico have octal systems because the speakers count using the spaces between their fingers rather than the fingers themselves. It has been suggested that the reconstructed Proto-Indo-European word for nine might be related to the PIE word for new; based on this, some have speculated that proto-Indo-Europeans used an octal number system. In 1716 King Charles XII of Sweden asked Emanuel Swedenborg to elaborate a number system based on 64 instead of 10. Swedenborg, however, argued that for people with less intelligence than the king such a big base would be too difficult, and instead proposed 8 as the base. In 1718 Swedenborg wrote a manuscript, "En ny rekenkonst som om vexlas wid Thalet 8 i stelle then wanliga wid Thalet 10". The numbers 1–7 are there denoted by the consonants l, s, n, m, t, f, u; thus 8 = "lo", 16 = "so", 24 = "no", 64 = "loo", 512 = "looo", etc. Numbers with consecutive consonants are pronounced with vowel sounds between them in accordance with a special rule. Writing under the pseudonym Hirossa Ap-Iccim in The Gentleman's Magazine, July 1745, Hugh Jones proposed an octal system for British coins, weights and measures. In 1801, James Anderson criticized the French for basing the metric system on decimal arithmetic; he suggested base 8, for which he coined the term octal. In the mid-19th century, Alfred B. Taylor concluded that our octonary radix is, therefore, the best possible one for an arithmetical system; so, for example, the number 65 would be spoken in his octonary system as under-un. Taylor also republished some of Swedenborg's work on octonary as an appendix to the above-cited publications. In the 2009 film Avatar, the language of the extraterrestrial Na'vi race employs an octal numeral system, probably due to the fact that they have four fingers on each hand. In the TV series Stargate SG-1, the Ancients, a race of beings responsible for the invention of the Stargates, use an octal number system. In the tabletop game series Warhammer 40,000, the Tau race use an octal number system. Octal became widely used in computing in systems such as the PDP-8 and ICL 1900. Octal was an ideal abbreviation of binary for these machines because their word size is divisible by three
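The grouping of binary digits into octal digits can be expressed directly in code. The following minimal C sketch converts 74 to octal by peeling off three bits (one octal digit) at a time, reproducing the 74 = 112₈ example above.

```c
/* Hedged sketch: convert a decimal value to octal by taking three
 * binary digits (one octal digit) at a time.
 * 74 = binary 1001010 = octal 112. */
#include <stdio.h>

int main(void)
{
    unsigned n = 74;
    char digits[16];
    int len = 0;
    do {
        digits[len++] = '0' + (n & 7);   /* low three bits = one octal digit */
        n >>= 3;
    } while (n != 0);
    printf("74 in octal: ");
    while (len > 0)
        putchar(digits[--len]);          /* most significant digit first */
    putchar('\n');                       /* prints 112; printf("%o") agrees */
    return 0;
}
```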
16.
NTFS
–
NTFS is a proprietary file system developed by Microsoft. Starting with Windows NT 3.1, it is the default file system of the Windows NT family. Additional extensions include an elaborate security system based on access control lists. macOS kernels also have a limited ability to read NTFS, and Linux and BSD kernels have a free and open-source driver for the NTFS filesystem with both read and write functionality. In the mid-1980s, Microsoft and IBM formed a joint project to create the next generation of graphical operating system; the result was OS/2 and HPFS. Because Microsoft disagreed with IBM on many important issues, they eventually separated: OS/2 remained an IBM project and Microsoft worked to develop Windows NT. The HPFS file system for OS/2 contained several important new features, and when Microsoft created their new operating system, they borrowed many of these concepts for NTFS. NTFS developers include Tom Miller, Gary Kimura, Brian Andrew and David Goebel. Probably as a result of this common ancestry, HPFS and NTFS use the same disk partition identification type code (07). Using the same partition ID record number is highly unusual, since there were dozens of unused code numbers available; for example, FAT has more than nine. Algorithms identifying the file system in a partition of type 07 must perform additional checks to distinguish between HPFS and NTFS. Microsoft has released five versions of NTFS. v1.0 is incompatible with v1.1 and newer: volumes written by Windows NT 3.5x cannot be read by Windows NT 3.1 until an update is installed. v1.1, released with Windows NT 3.51 in 1995, supports compressed files, named streams and access control lists. v1.2, released with Windows NT 4.0 in 1996, is commonly called NTFS 4.0 after the OS release. v3.0, released with Windows 2000, supports disk quotas, Encrypting File System, sparse files, reparse points, update sequence number journaling, and the $Extend folder and its files, and reorganized security descriptors so that multiple files using the same security setting can share the same descriptor; it is commonly called NTFS 5.0 after the OS release. v3.1, released with Windows XP in autumn 2001, expanded the Master File Table entries with a redundant MFT record number; it is commonly called NTFS 5.1 after the OS release. The NTFS.sys version number is based on the operating system version. Although subsequent versions of Windows added new file system-related features, they did not change NTFS itself. For example, Windows Vista implemented NTFS symbolic links, Transactional NTFS, partition shrinking, and self-healing; NTFS symbolic links are a new feature in the file system itself. NTFS is optimized for 4 kB clusters, but supports a maximum cluster size of 64 kB
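One plausible form of such an additional check is reading the volume's boot sector: an NTFS boot sector carries the OEM ID "NTFS    " at byte offset 3. The following hedged C sketch applies that test to a raw volume image; volume.img is a placeholder name.

```c
/* Hedged sketch of one extra check to tell NTFS apart from HPFS
 * in a type-0x07 partition: look for the "NTFS    " OEM ID at
 * byte offset 3 of the volume's first sector. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char sector[512];
    FILE *f = fopen("volume.img", "rb");   /* placeholder path */
    if (!f || fread(sector, 1, sizeof sector, f) != sizeof sector) {
        perror("volume.img");
        return 1;
    }
    if (memcmp(sector + 3, "NTFS    ", 8) == 0)
        puts("OEM ID says NTFS");
    else
        puts("not NTFS (possibly HPFS or another type-0x07 filesystem)");
    fclose(f);
    return 0;
}
```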
17.
Man page
–
A man page is a form of software documentation usually found on a Unix or Unix-like operating system. Topics covered include computer programs, formal standards and conventions, and more. A user may invoke a man page by issuing the man command. By default, man uses a terminal pager program such as more or less to display its output. To read a man page for a Unix command, a user can type "man <command_name>". Pages are traditionally referred to using the notation "name(section)". The same page name may appear in more than one section of the manual, such as when the names of system calls, user commands, or macro packages coincide; examples are man(1) and man(7), or exit(2) and exit(3). The syntax for accessing the non-default manual section varies between different man implementations. On Solaris and illumos, for example, the syntax for reading printf in section 3C is "man -s 3c printf"; on Linux and BSD derivatives the same invocation would be "man 3 printf", which searches for printf in section 3 of the man pages. In the first two years of the history of Unix, no documentation existed. The Unix Programmer's Manual was first published on November 3, 1971. The first actual man pages were written by Dennis Ritchie and Ken Thompson at the insistence of their manager Doug McIlroy in 1971. Aside from the man pages, the Programmer's Manual also accumulated a set of papers, some of them tutorials. Later versions of the documentation imitated the first man pages' terseness. Ritchie added a "How to get started" section to the Third Edition introduction, and Lorinda Cherry provided the "Purple Card" pocket reference for the Sixth and Seventh Editions. Versions of the software were named after the revision of the manual. For the Fourth Edition the man pages were formatted using the troff typesetting package and its set of -man macros. At the time, the availability of online documentation through the manual system was regarded as a great advance. The modern descendants of 4.4BSD also distribute man pages as one of the primary forms of system documentation. Few alternatives to man have enjoyed much popularity, with the possible exception of the GNU Project's info system. In addition, some Unix GUI applications now provide end-user documentation in HTML. Man pages are usually written in English, but translations into other languages may be available on the system. The default format of the man pages is troff, with either the macro package man or mdoc; this makes it possible to typeset a man page into PostScript, PDF, and various other formats for viewing or printing. Most Unix systems have a package for the man2html command, which enables users to browse their man pages using an HTML browser. On systems that organize the manual in the System V fashion, a consequence is that section 8 is sometimes relegated to the 1M subsection of the main commands section. Some subsection suffixes have a general meaning across sections, and some versions of man cache the formatted versions of the last several pages viewed
18.
IP address
–
An IP address is an identifier assigned to each computer and other device connected to a TCP/IP network that is used to locate and identify the node in communications with other nodes on the network. IP addresses are written and displayed in human-readable notations, such as 172.16.254.1 in IPv4. Version 4 of the Internet Protocol defines an IP address as a 32-bit number; its successor, IPv6, uses 128-bit addresses, and IPv6 deployment commenced in the mid-2000s and is ongoing. IPv4 addresses have been distributed by IANA to the RIRs in blocks of approximately 16.8 million addresses each. Each ISP or private network administrator assigns an IP address to each device connected to its network. Such assignments may be on a static or dynamic basis, depending on its practices and software. An IP address serves two principal functions: host or network interface identification, and location addressing. Its role has been characterized as follows: "A name indicates what we seek. An address indicates where it is. A route indicates how to get there." The header of each IP packet sent over the Internet must contain the IP address of both the destination server or website and of the sender. The Domain Name System translates domain names to the corresponding destination IP address. Both the source address and the destination address may be changed in transit by a network address translation device. The sender's IP address is available to the server and becomes the destination address when the server responds to a client request. A sender wanting to remain anonymous to the server may use a proxy server; when the destination server responds to the proxy server, the proxy server would forward it on to the true client, i.e. change the destination IP address to that of the originator of the request. A reverse DNS lookup involves the querying of DNS to determine the domain name associated with an IP address. There are two versions of the Internet Protocol, IP version 4 and IP version 6, and each version defines an IP address differently. Because of its prevalence, the generic term IP address typically still refers to the addresses defined by IPv4. The gap in version sequence between IPv4 and IPv6 resulted from the assignment of number 5 to the experimental Internet Stream Protocol in 1979. An IP address in IPv4 is 32 bits in size, which limits the address space to 4294967296 (2³²) IP addresses. Of this number, IPv4 reserves some addresses for special purposes such as private networks or multicast addresses. IPv4 addresses are usually represented in dot-decimal notation, consisting of four numbers, each ranging from 0 to 255, separated by dots. Each part represents a group of 8 bits of the address. In some cases of technical writing, IPv4 addresses may be presented in various hexadecimal, octal, or binary representations. In the early stages of development of the Internet Protocol, network administrators interpreted an IP address in two parts: a network number portion and a host number portion
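The relationship between dot-decimal notation and the underlying 32-bit number can be shown with a few lines of C; the following sketch uses the standard inet_pton() to parse the example address above and prints the resulting 32-bit value.

```c
/* Hedged sketch: parse a dot-decimal IPv4 address into its 32-bit
 * form with inet_pton() and print the raw value, showing that the
 * four dotted numbers are just four bytes of one 32-bit number. */
#include <arpa/inet.h>
#include <stdio.h>

int main(void)
{
    struct in_addr addr;
    if (inet_pton(AF_INET, "172.16.254.1", &addr) != 1) {
        fprintf(stderr, "invalid address\n");
        return 1;
    }
    /* s_addr is stored in network byte order; convert for display */
    printf("172.16.254.1 = 0x%08X (%u)\n",
           (unsigned)ntohl(addr.s_addr),
           (unsigned)ntohl(addr.s_addr));
    return 0;
}
```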
19.
OpenBSD
–
OpenBSD is a free and open source Unix-like computer operating system descended from Berkeley Software Distribution (BSD), a Research Unix derivative developed at the University of California, Berkeley. In late 1995, Theo de Raadt forked it from NetBSD. Besides the operating system as a whole, the project maintains portable versions of many subsystems, most notably OpenSSH, which are available as packages in other operating systems. The project is known for its developers' insistence on open-source code, good documentation, and code correctness. It has strict policies on licensing, preferring the ISC license and other variants of the Simplified BSD License. Many of its security features are optional or absent in other operating systems, and its developers frequently audit the source tree for software bugs and security holes. De Raadt coordinates the project from his home in Calgary, Alberta; its logo and mascot is a pufferfish named Puffy. In December 1994, NetBSD co-founder Theo de Raadt was asked to resign from his position as a developer and member of the NetBSD core team. The reason for this is not wholly clear, although there are claims that it was due to personality clashes within the NetBSD project. In October 1995, de Raadt founded OpenBSD, a new project forked from NetBSD 1.0. The initial release, OpenBSD 1.2, was made in July 1996; since then, the project has followed a schedule of a release every six months, each of which is supported for one year. Just how widely OpenBSD is used is hard to determine, as its developers do not publish or collect usage statistics. OpenBSD's security enhancements, built-in cryptography, and the pf packet filter suit it for use in the security industry, such as on firewalls, intrusion-detection systems, and VPN gateways. Proprietary systems from several manufacturers are based on OpenBSD, including devices from Armorlogic, Calyptix Security, GeNUA, and RTMX, and later versions of Microsoft's Services for UNIX, an extension to the Windows operating system providing Unix-like functionality, use large amounts of OpenBSD code. OpenBSD ships with the X Window System and is suitable for use on the desktop. Packages are available for popular applications, including desktop environments such as GNOME, KDE, and Xfce, and web browsers such as Firefox and Chromium. The project also includes three window managers in the distribution: cwm, FVWM, and twm. OpenBSD used to include a fork of Apache 1.3, which was later replaced with Nginx; in the 5.6 release, Nginx was in turn replaced with httpd, an HTTP server with FastCGI and Transport Layer Security support. As of May 2016, Apache and Nginx are still available as ports. Development is continuous, and team management is open and tiered. Snapshot releases are available at frequent intervals. Maintenance patches for supported releases may be applied manually or by updating the system against the patch branch of the CVS repository for that release. The standard OpenBSD kernel, as maintained by the project, is recommended for end users
20.
NetBSD
–
NetBSD is a free and open source Unix-like operating system that descends from Berkeley Software Distribution (BSD), a Research Unix derivative developed at the University of California, Berkeley. It was the second open-source BSD descendant formally released, after it forked from the 386BSD branch of the BSD source-code repository. It continues to be developed and is available for many platforms, including large-scale server systems, desktop systems, and handheld devices. The NetBSD project focuses on code clarity, careful design, and portability across many computer architectures. NetBSD's source code is openly available and permissively licensed. The NetBSD project began as a result of frustration within the 386BSD developer community with the pace and direction of that system's development. The founders aimed to produce a unified, multi-platform, production-quality, BSD-based operating system. The name NetBSD was suggested by de Raadt, based on the importance and growth of networks such as the Internet at that time. The NetBSD source code repository was established on 21 March 1993. The initial code base was derived from 386BSD 0.1 plus the version 0.2.2 unofficial patchkit, with several programs from the Net/2 release missing from 386BSD re-integrated, and various other improvements. The first multi-platform release, NetBSD 1.0, was made in October 1994. Also in 1994, for disputed reasons, one of the founders, Theo de Raadt, was removed from the project; he later founded a new project, OpenBSD, from a forked version of NetBSD 1.0 near the end of 1995. In 1998, NetBSD 1.3 introduced the pkgsrc packages collection. Until 2004, NetBSD 1.x releases were made at roughly annual intervals, with minor patch releases in between. Releases are now divided into two categories: x.y stable maintenance releases, and x.y.z releases containing only security and critical fixes. As the project's motto suggests, NetBSD has been ported to a large number of 32- and 64-bit architectures. These range from VAX minicomputers to Pocket PC PDAs; as of 2009, NetBSD supports 57 hardware platforms. The kernel and userland for these platforms are all built from a central unified source-code tree managed by CVS. Currently, unlike other kernels such as μClinux, the NetBSD kernel requires the presence of an MMU in any given target architecture. NetBSD's portability is aided by the use of hardware abstraction layer interfaces for low-level hardware access such as bus input/output or DMA. Using this portability layer, device drivers can be split into machine-independent and machine-dependent components. This makes a single driver easily usable on several platforms by hiding hardware access details, and reduces the work needed to port it to a new system. It permits a device driver for a PCI card to work without modifications, whether it is in a PCI slot on an IA-32, Alpha, PowerPC, SPARC, or other architecture. Also, a driver for a specific device can operate via several different buses, like ISA and PCI. In comparison, Linux device driver code often must be reworked for each new architecture; as a consequence, in porting efforts by NetBSD and Linux developers, NetBSD has taken much less time to port to new hardware
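To illustrate the machine-independent/machine-dependent split in spirit (this is an illustrative sketch, not actual NetBSD kernel code), the following C program routes all register access through a small bus-operations structure, so the same driver logic could be reused with a different platform's accessors.

```c
/* Hedged illustration of splitting a driver into machine-independent
 * logic and machine-dependent bus accessors. The register offsets
 * and the fake bus are hypothetical. */
#include <stdio.h>
#include <stdint.h>

/* machine-dependent part: how registers are actually accessed */
struct bus_ops {
    uint32_t (*read4)(uint32_t offset);
    void     (*write4)(uint32_t offset, uint32_t value);
};

/* a fake "bus" standing in for memory-mapped I/O on some platform */
static uint32_t fake_regs[16];
static uint32_t fake_read4(uint32_t off)              { return fake_regs[off / 4]; }
static void     fake_write4(uint32_t off, uint32_t v) { fake_regs[off / 4] = v; }

/* machine-independent part: driver logic, identical on every port */
static void driver_reset(const struct bus_ops *bus)
{
    bus->write4(0x0, 0x1);                 /* hypothetical reset register */
    printf("status after reset: 0x%08x\n",
           (unsigned)bus->read4(0x4));     /* hypothetical status register */
}

int main(void)
{
    struct bus_ops ops = { fake_read4, fake_write4 };
    driver_reset(&ops);   /* a new platform would only swap the ops */
    return 0;
}
```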
21.
FreeBSD
–
FreeBSD is a free and open source Unix-like operating system descended from Research Unix via the Berkeley Software Distribution (BSD). Although for legal reasons FreeBSD cannot use the Unix trademark, it is a direct descendant of BSD. FreeBSD has similarities with Linux, with two major differences in scope and licensing: FreeBSD maintains a complete operating system, i.e. the project delivers the kernel, device drivers, userland utilities, and documentation, whereas Linux delivers only a kernel and drivers, relying on third parties for other system software; and FreeBSD source code is generally released under a permissive BSD license, as opposed to the copyleft GPL used by Linux. The FreeBSD project includes a security team overseeing all software shipped in the base distribution. A wide range of additional third-party applications may be installed using the pkgng package management system or the FreeBSD Ports, or by directly compiling source code. FreeBSD's roots go back to the University of California, Berkeley, where the university acquired a UNIX source license from AT&T. The BSD project was founded in 1976 by Bill Joy, but since BSD contained code from AT&T Unix, all recipients had to get a license from AT&T first in order to use BSD. In June 1989, "Networking Release 1", or simply Net-1, the first public version of BSD, was released. After releasing Net-1, Keith Bostic, a developer of BSD, suggested replacing all AT&T code with freely redistributable code under the original BSD license. Work on replacing AT&T code began and, after 18 months, six files containing AT&T code remained in the kernel. The BSD developers decided to release the "Networking Release 2" (Net-2) without those six files; replacements for those files were later written, and the resulting system, 386BSD, was released via an anonymous FTP server. The first version of FreeBSD was released in November 1993. In the early days of the project's inception, a company named Walnut Creek CDROM, upon the suggestion of two FreeBSD developers, agreed to release the operating system on CD-ROM. By 1997, FreeBSD was Walnut Creek's most successful product; the company itself later renamed to The FreeBSD Mall and later iXsystems. Today, FreeBSD is used by many IT companies such as IBM, Nokia, and Juniper Networks. Certain parts of Apple's Mac OS X operating system are based on FreeBSD, and the PlayStation 3 operating system also borrows certain components from FreeBSD. Netflix, WhatsApp, and FlightAware are also examples of big, successful and heavily network-oriented companies which are running FreeBSD. 386BSD and FreeBSD were both derived from the 1992 BSD release. In January 1992, BSDi started to release BSD/386, later called BSD/OS, an operating system similar to FreeBSD and based on the 1992 BSD release. AT&T filed a lawsuit against BSDi and alleged distribution of AT&T source code in violation of license agreements. The lawsuit was settled out of court and the exact terms were not all disclosed; the only term that became public was that BSDi would migrate their source base to the newer 4.4BSD-Lite sources. Although not involved in the litigation, it was suggested to FreeBSD that they should also move to 4.4BSD-Lite. FreeBSD 2.0, which was released in November 1994, was the first version of FreeBSD without any code from AT&T. Desktop: although FreeBSD does not install the X Window System by default, it is available in the FreeBSD ports collection, as are a number of desktop environments such as GNOME, KDE and Xfce. Embedded systems: although it explicitly focuses on the IA-32 and x86-64 platforms, FreeBSD also supports others such as ARM, PowerPC and MIPS to a lesser degree
22.
Command-line interface
–
A command-line interface (CLI) is a means of interacting with a computer program where the user issues commands to the program in the form of successive lines of text. A program which handles the interface is called a command language interpreter or shell. The interface is implemented with a command line shell, which is a program that accepts commands as text input. Command-line interfaces to computer operating systems are less widely used by casual computer users, who generally favor graphical user interfaces. Alternatives to the command line include, but are not limited to, text user interface menus and keyboard shortcuts. Examples of the latter include the Windows versions 1, 2, 3, 3.1, and 3.11, DosShell, and Mouse Systems PowerPanel. Command-line interfaces are often preferred by more advanced computer users, as they often provide a more concise and powerful means to control a program or operating system. Programs with command-line interfaces are generally easier to automate via scripting. A program that implements such a text interface is often called a command-line interpreter, command processor or shell. Under most operating systems, it is possible to replace the default shell program with alternatives; examples include 4DOS for DOS and 4OS2 for OS/2. For example, the default Windows GUI is a shell program named EXPLORER.EXE. These programs are shells, but not CLIs. Application programs may also have command line interfaces. When a program is launched from an OS command line shell, additional text provided along with the program name is passed to the launched program. Interactive command line sessions: after launch, a program may provide an operator with an independent means to enter commands in the form of text. OS inter-process communication: most operating systems support means of inter-process communication, and command lines from client processes may be redirected to a CLI program by one of these methods. Some applications support only a CLI, presenting a CLI prompt to the user. Some examples of CLI-only applications are DEBUG, Diskpart, Ed, Edlin, Fdisk, and Ping. Some computer programs support both a CLI and a GUI. In some cases, a GUI is simply a wrapper around a separate CLI executable file; in other cases, a program may provide a CLI as an optional alternative to its GUI. CLIs and GUIs often support different functionality. For example, all features of MATLAB, a numerical analysis computer program, are available via the CLI, whereas the MATLAB GUI exposes only a subset of features. The early Sierra games, like the first three King's Quest games, used commands from a command line to move the character around in the graphic window. Early computer systems often used teleprinter machines as the means of interaction with a human operator; the computer became one end of the human-to-human teleprinter model, so instead of a human communicating with another human over a teleprinter, a human communicated with the computer. In time, the actual mechanical teleprinter was replaced by a glass tty, and then by a smart terminal
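The read-parse-execute loop at the core of a command language interpreter can be sketched in a few lines of C; the following is a toy interpreter with hypothetical built-ins (echo, exit), not a model of any real shell.

```c
/* Hedged sketch of the read-parse-execute loop at the heart of a
 * command language interpreter: read a line, treat the first word
 * as the command, and act on it. Real shells add quoting, pipes,
 * job control, and process creation on top of this skeleton. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256];
    for (;;) {
        printf("> ");                         /* the prompt */
        if (!fgets(line, sizeof line, stdin))
            break;                            /* EOF ends the session */
        line[strcspn(line, "\n")] = '\0';     /* strip the newline */
        char *cmd = strtok(line, " \t");      /* first word = command */
        if (!cmd)
            continue;                         /* empty line */
        if (strcmp(cmd, "exit") == 0) {
            break;
        } else if (strcmp(cmd, "echo") == 0) {
            char *rest = strtok(NULL, "");    /* remainder of the line */
            puts(rest ? rest : "");
        } else {
            printf("unknown command: %s\n", cmd);
        }
    }
    return 0;
}
```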