A hypervisor or virtual machine monitor (VMM) is computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux and macOS instances can run side by side on a single physical x86 machine. This contrasts with operating-system-level virtualization, where all instances must share a single kernel, though the guest operating systems can differ in user space, such as different Linux distributions with the same kernel. The term hypervisor is a variant of supervisor, a traditional term for the kernel of an operating system: the hypervisor is the supervisor of the supervisor, with hyper- used as a stronger variant of super-.
The term dates to circa 1970. In their 1974 article "Formal Requirements for Virtualizable Third Generation Architectures", Gerald J. Popek and Robert P. Goldberg classified two types of hypervisor.

Type-1, native or bare-metal hypervisors run directly on the host's hardware to control the hardware and to manage guest operating systems; for this reason, they are sometimes called bare-metal hypervisors. The first hypervisors, which IBM developed in the 1960s, were native hypervisors; these included the CP/CMS operating system. Modern equivalents include AntsleOS, Xen, XCP-ng, Oracle VM Server for SPARC, Oracle VM Server for x86, Microsoft Hyper-V, the Xbox One system software and VMware ESX/ESXi.

Type-2 or hosted hypervisors run on a conventional operating system just as other computer programs do, and a guest operating system runs as a process on the host. Type-2 hypervisors thus abstract guest operating systems from the host operating system. VMware Workstation, VMware Player, VirtualBox, Parallels Desktop for Mac and QEMU are examples of type-2 hypervisors.
The distinction between these two types is not always clear. For instance, Linux's Kernel-based Virtual Machine (KVM) and FreeBSD's bhyve are kernel modules that effectively convert the host operating system into a type-1 hypervisor. At the same time, since Linux distributions and FreeBSD are still general-purpose operating systems, with applications competing with each other for VM resources, KVM and bhyve can also be categorized as type-2 hypervisors.

The first hypervisors providing full virtualization were the test tool SIMMON and IBM's one-off research CP-40 system, which began production use in January 1967 and became the first version of IBM's CP/CMS operating system. CP-40 ran on a S/360-40 modified at the IBM Cambridge Scientific Center to support Dynamic Address Translation, a key feature that allowed virtualization. Prior to this time, computer hardware had only been virtualized enough to allow multiple user applications to run concurrently. With CP-40, the hardware's supervisor state was virtualized as well, allowing multiple operating systems to run concurrently in separate virtual machine contexts.
Programmers soon reimplemented CP-40 (as CP-67) for the IBM System/360-67, the first production computer system capable of full virtualization. IBM first shipped this machine in 1966. Both CP-40 and CP-67 began production use in 1967. CP/CMS was available to IBM customers from 1968, in source code form and without support. CP/CMS formed part of IBM's attempt to build robust time-sharing systems for its mainframe computers. By running multiple operating systems concurrently, the hypervisor increased system robustness and stability: even if one operating system crashed, the others would continue working without interruption. This allowed beta or experimental versions of operating systems, or of new hardware, to be deployed and debugged without jeopardizing the stable main production system and without requiring costly additional development systems. IBM announced its System/370 series in 1970 without any virtualization features, but added virtual memory support in the August 1972 Advanced Function announcement.
Virtualization has been featured in all successor systems. The 1972 announcement included VM/370, a reimplementation of CP/CMS for the S/370; unlike CP/CMS, IBM provided support for this version. VM stands for Virtual Machine, emphasizing that all, not just some, of the hardware interfaces are virtualized. Both VM and CP/CMS enjoyed early acceptance and rapid development by universities, corporate users and time-sharing vendors, as well as within IBM. Users played an active role in ongoing development, anticipating trends seen in modern open source projects. However, in a series of disputed and bitter battles, time-sharing lost out to batch processing through IBM political infighting, and VM remained IBM's "other" mainframe operating system for decades, losing to MVS. It enjoyed a resurgence of popularity and support from 2000 as the z/VM product, for example as the platform for Linux for zSeries. As mentioned above, the VM control program includes a hypervisor-call handler.
Disk storage is a general category of storage mechanisms where data is recorded by various electronic, magnetic, optical, or mechanical changes to a surface layer of one or more rotating disks. A disk drive is a device implementing such a storage mechanism. Notable types are the hard disk drive (HDD), containing a non-removable disk; the floppy disk drive (FDD) and its removable floppy disk; and various optical disc drives and associated optical disc media. Audio information was originally recorded by analog methods, and the first video discs used an analog recording method. In the music industry, analog recording has been replaced by digital optical technology, where the data is recorded in a digital format as optical information. The first commercial digital disk storage device was the IBM 350, which shipped in 1956 as a part of the IBM 305 RAMAC computing system. The random-access, low-density storage of disks was developed to complement the already established sequential-access, high-density storage provided by tape drives using magnetic tape. Vigorous innovation in disk storage technology, coupled with less vigorous innovation in tape storage, has reduced the difference in acquisition cost per terabyte between disk storage and tape storage.
Disk storage is now used in both computer storage and consumer electronic storage, e.g. audio CDs and video discs. Data on modern disks is stored in fixed-length blocks called sectors; block size varies between devices from a few hundred to many thousands of bytes. Gross disk drive capacity is the number of disk surfaces, times the number of blocks per surface, times the number of bytes per block. In certain legacy IBM CKD drives, the data was stored on magnetic disks with variable-length blocks, called records; gross capacity decreased as record length decreased, because of per-record formatting overhead. Digital disk drives are block storage devices: each disk is divided into logical blocks, and blocks are addressed using their logical block addresses (LBA). Reading from or writing to disk happens at the granularity of blocks. Initially, disk capacity was quite low; it has been improved in several ways. Improvements in mechanical design and manufacture allowed smaller and more precise heads, meaning that more tracks could be stored on each of the disks. Advancements in data compression methods permitted more information to be stored in each of the individual sectors.
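As a sketch of the capacity arithmetic just described, with purely illustrative figures rather than any particular drive's geometry:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Gross capacity = surfaces x blocks per surface x bytes per block.
       All three figures below are illustrative, not a real product's. */
    uint64_t surfaces           = 4;          /* two platters, both sides */
    uint64_t blocks_per_surface = 488281250;  /* sectors on one surface */
    uint64_t bytes_per_block    = 512;        /* classic sector size */

    uint64_t gross = surfaces * blocks_per_surface * bytes_per_block;
    printf("Gross capacity: %llu bytes (%.1f TB)\n",
           (unsigned long long)gross, gross / 1e12);
    return 0;
}
```

With these numbers the product is 4 × 488,281,250 × 512 = 1,000,000,000,000 bytes, i.e. one terabyte in the decimal units drive vendors use.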
The drive stores data onto cylinders and sectors. The sector is the smallest unit of data that can be stored on a hard disk drive, and each file has many sectors assigned to it. The smallest entity on a CD is called a frame; it consists of 33 bytes, of which 24 contain six complete 16-bit stereo samples. The other nine bytes consist of eight CIRC error-correction bytes and one subcode byte used for control and display. The information is sent from the computer processor to the BIOS, and then to a chip controlling the data transfer, which sends it out to the hard drive via a multi-wire connector. Once the data is received on the circuit board of the drive, it is translated and compressed into a format that the individual drive can use to store it onto the disk itself. The data is then passed to a chip on the circuit board that controls the access to the drive. The drive is divided into sectors of data stored onto one of the sides of one of the internal disks. An HDD with two disks internally will store data on all four surfaces.
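That 33-byte accounting (6 samples × 2 channels × 2 bytes = 24 audio bytes, plus 8 CIRC bytes and 1 subcode byte) can be made explicit in a small C sketch; the struct and field names are ours, and a real disc additionally interleaves and modulates these bytes, which the sketch ignores:

```c
#include <stdint.h>

/* Logical byte budget of one CD frame as described in the text. */
struct cd_frame {
    uint8_t audio[24];   /* six complete 16-bit stereo samples (6*2*2) */
    uint8_t circ[8];     /* CIRC error-correction bytes */
    uint8_t subcode;     /* control and display byte */
};

/* All members are single bytes, so no padding: 24 + 8 + 1 = 33. */
_Static_assert(sizeof(struct cd_frame) == 33, "a CD frame is 33 bytes");
```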
The hardware on the drive tells the actuator arm where to go for the relevant track, and the compressed information is sent down to the head, which changes the physical properties, optically or magnetically for example, of each byte on the drive, thus storing the information. A file is not stored in a linear manner; rather, it is held in the best way for quickest retrieval. Mechanically, there are two different motions occurring inside the drive: one is the rotation of the disks inside the device; the other is the side-to-side motion of the head across the disk. There are two types of disk rotation methods: constant linear velocity (CLV), which varies the rotational speed of the optical disc depending upon the position of the head, and constant angular velocity (CAV), which spins the media at one constant speed regardless of where the head is positioned. Track positioning also follows two different methods across disk storage devices. Storage devices focused on holding computer data, e.g. HDDs, FDDs and Iomega Zip drives, use concentric tracks to store data.
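The CLV relationship can be sketched numerically: if the linear velocity v of the track under the head is fixed, the spindle rate must scale as 1/r. The 1.2 m/s value is the standard single-speed CD audio velocity, and the radii roughly span a CD's program area; both are illustrative here.

```c
#include <stdio.h>

int main(void) {
    const double PI = 3.14159265358979;
    const double v  = 1.2;   /* linear velocity in m/s (1x CD audio) */

    /* Sweep the head from the inner to the outer edge of the data area
       and print the rotational speed CLV requires at each radius. */
    for (double r = 0.025; r <= 0.0581; r += 0.011) {
        double rpm = v / (2.0 * PI * r) * 60.0;   /* rev/s -> rev/min */
        printf("radius %.3f m -> %3.0f RPM\n", r, rpm);
    }
    return 0;
}
```

The disc therefore spins at roughly 460 RPM when the head reads near the hub and slows to about 200 RPM at the rim, whereas a CAV drive would hold one speed throughout.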
During a sequential read or write operation, after the drive accesses all the sectors in a track, it repositions the head to the next track. This causes a momentary delay in the flow of data between the drive and the computer. In contrast, optical audio and video discs use a single spiral track that starts at the innermost point on the disc and flows continuously to the outer edge, so when reading or writing data there is no need to stop the flow of data to switch tracks. This is similar to vinyl records, except that vinyl records started at the outer edge and spiraled in toward the center. The disk drive interface is the mechanism/protocol of communication between the rest of the system and the disk drive itself.
Berkeley sockets is an application programming interface (API) for Internet sockets and Unix domain sockets, used for inter-process communication. It is implemented as a library of linkable modules and originated with the 4.2BSD Unix operating system, released in 1983. A socket is an abstract representation of the local endpoint of a network communication path. The Berkeley sockets API represents it as a file descriptor, in the Unix philosophy that provides a common interface for input and output to streams of data. Berkeley sockets evolved with little modification from a de facto standard into a component of the POSIX specification; therefore, the term POSIX sockets is essentially synonymous with Berkeley sockets. They are also known as BSD sockets, acknowledging the first implementation in the Berkeley Software Distribution. Berkeley sockets originated with the 4.2BSD Unix operating system as a programming interface. Only in 1989 could UC Berkeley release versions of its operating system and networking library free from the licensing constraints of AT&T Corporation's proprietary Unix.
All modern operating systems implement a version of the POSIX socket interface, and it became the standard interface for connecting to the Internet. The Winsock implementation for MS Windows, developed by unaffiliated developers, follows the standard. The BSD sockets API is written in the C programming language. Most other programming languages provide similar interfaces, typically written as a wrapper library based on the C API. As the Berkeley socket API evolved and yielded the POSIX socket API, certain functions were deprecated or removed and replaced by others. The POSIX API is designed to be reentrant. The STREAMS-based Transport Layer Interface (TLI) API offers an alternative to the socket API; however, many systems that provide the TLI API also provide the Berkeley socket API. Non-Unix systems often expose the Berkeley socket API with a translation layer to a native networking API. Plan 9 and Genode use file-system APIs with control files rather than file descriptors. The Berkeley socket interface is defined in several header files.
The names and content of these files differ slightly between implementations. In general, they include sys/socket.h, with the core socket functions and data structures; netinet/in.h, for the AF_INET and AF_INET6 address families; sys/un.h, for the AF_UNIX address family; arpa/inet.h, for functions dealing with numeric IP addresses; and netdb.h, for name-resolution functions.

The Berkeley socket API provides the following functions: socket creates a new socket of a certain type, identified by an integer number, and allocates system resources to it. Bind is used on the server side; it associates a socket with a socket address structure, i.e. a specified local IP address and a port number. Listen is used on the server side; it causes a bound TCP socket to enter listening state. Connect is used on the client side; it assigns a free local port number to a socket and, in the case of a TCP socket, causes an attempt to establish a new TCP connection. Accept is used on the server side; it accepts a received incoming attempt to create a new TCP connection from the remote client and creates a new socket associated with the socket address pair of this connection. Send, recv, sendto and recvfrom are used for sending and receiving data; the standard functions write and read may also be used. Close causes the system to release resources allocated to a socket.
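A minimal sketch of the server-side sequence just listed (socket, bind, listen, accept, then recv/send and close); the port number 7777 is an arbitrary choice for the example, and error handling is compressed to perror calls:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    /* socket: create a TCP/IPv4 endpoint (0 selects the default protocol) */
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    if (srv == -1) { perror("socket"); return 1; }

    /* bind: associate the socket with a local address and port */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(7777);   /* arbitrary example port */
    if (bind(srv, (struct sockaddr *)&addr, sizeof addr) == -1) {
        perror("bind"); return 1;
    }

    /* listen: let the bound socket enter listening state */
    if (listen(srv, 8) == -1) { perror("listen"); return 1; }

    /* accept: block until a client connects, then echo one message */
    int conn = accept(srv, NULL, NULL);
    if (conn == -1) { perror("accept"); return 1; }

    char buf[512];
    ssize_t n = recv(conn, buf, sizeof buf, 0);
    if (n > 0) send(conn, buf, (size_t)n, 0);

    /* close: release the resources allocated to both sockets */
    close(conn);
    close(srv);
    return 0;
}
```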
In the case of TCP, the connection is terminated. Gethostbyname and gethostbyaddr are used to resolve host names and addresses; they support IPv4 only. Select is used to suspend a program, waiting until one or more of a provided list of sockets is ready to read, ready to write, or has an error. Poll is used to check on the state of a socket in a set of sockets; the set can be tested to see if any socket can be written to, read from, or whether an error occurred. Getsockopt is used to retrieve the current value of a particular socket option for the specified socket. Setsockopt is used to set a particular socket option for the specified socket.

The function socket creates an endpoint for communication and returns a file descriptor for the socket. It uses three arguments: domain, specifying the protocol family of the created socket, for example AF_INET for the IPv4 network protocol, AF_INET6 for IPv6 or AF_UNIX for local sockets; type, one of SOCK_STREAM, SOCK_DGRAM, SOCK_SEQPACKET or SOCK_RAW; and protocol, specifying the actual transport protocol to use. The most common are IPPROTO_TCP, IPPROTO_SCTP, IPPROTO_UDP and IPPROTO_DCCP. These protocols are specified in the file netinet/in.h.
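As an illustration of select as described above, the helper below (the name wait_readable is ours, not part of the API) blocks until a descriptor is readable or a five-second timeout expires:

```c
#include <sys/select.h>

/* Returns 1 if sockfd became readable, 0 on timeout, -1 on error. */
int wait_readable(int sockfd) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sockfd, &readfds);

    struct timeval tv = { .tv_sec = 5, .tv_usec = 0 };

    /* First argument is the highest descriptor in any set, plus one. */
    int ready = select(sockfd + 1, &readfds, NULL, NULL, &tv);
    if (ready <= 0)
        return ready;                      /* 0 = timeout, -1 = error */
    return FD_ISSET(sockfd, &readfds) ? 1 : 0;
}
```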
The value 0 may be used to select a default protocol from the given domain and type. The function returns -1 if an error occurred. Otherwise, it returns an integer representing the newly assigned descriptor.

Bind associates a socket with an address. When a socket is created with socket, it is only given a protocol family, but not assigned an address; this association must be performed before the socket can accept incoming connections. The function has three arguments: sockfd, a descriptor representing the socket; my_addr, a pointer to a sockaddr structure representing the address to bind to; and addrlen, a field of type socklen_t specifying the size of the sockaddr structure. Bind returns 0 on success and -1 if an error occurs.

After a socket has been associated with an address, listen prepares it for incoming connections. However, this is only necessary for the stream-oriented data modes, i.e. for socket types SOCK_STREAM and SOCK_SEQPACKET. Listen requires two arguments: sockfd, a valid socket descriptor, and backlog, an integer representing the number of pending connections that can be queued up at any one time; the operating system usually places a cap on this value.
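For completeness, a sketch of the client side described earlier (connect); the loopback address and port 7777 are assumptions matching the illustrative server above:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd == -1) { perror("socket"); return 1; }

    /* connect: the kernel picks a free local port, then attempts a TCP
       connection to the given remote address and port. */
    struct sockaddr_in srv;
    memset(&srv, 0, sizeof srv);
    srv.sin_family = AF_INET;
    srv.sin_port   = htons(7777);
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

    if (connect(fd, (struct sockaddr *)&srv, sizeof srv) == -1) {
        perror("connect"); return 1;
    }
    send(fd, "hello", 5, 0);
    close(fd);
    return 0;
}
```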
In computing, a firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules. A firewall typically establishes a barrier between a trusted internal network and an untrusted external network, such as the Internet. Firewalls are categorized as either network firewalls or host-based firewalls. Network firewalls run on network hardware; host-based firewalls run on host computers and control network traffic in and out of those machines. The term firewall originally referred to a wall intended to confine a fire within a building. Later uses refer to similar structures, such as the metal sheet separating the engine compartment of a vehicle or aircraft from the passenger compartment. The term was applied in the late 1980s to network technology that emerged when the Internet was fairly new in terms of its global use and connectivity. The predecessors to firewalls for network security were the routers used in the late 1980s; because they separated networks from one another, they halted the spread of problems from one network to another.
The first reported type of network firewall is called a packet filter. Packet filters act by inspecting packets transferred between computers. When a packet does not match the packet filter's set of filtering rules, the packet filter either drops (silently discards) the packet or rejects it (discards it and sends an error response to the source); otherwise, it is allowed to pass. Packets may be filtered by source and destination network addresses, protocol, and source and destination port numbers. The bulk of Internet communication in the 20th and early 21st centuries used either the Transmission Control Protocol or the User Datagram Protocol in conjunction with well-known ports, enabling firewalls of that era to distinguish between, and thus control, specific types of traffic, unless the machines on each side of the packet filter used the same non-standard ports.

The first paper published on firewall technology was in 1988, when engineers from Digital Equipment Corporation developed filter systems known as packet filter firewalls. At AT&T Bell Labs, Bill Cheswick and Steve Bellovin continued their research in packet filtering and developed a working model for their own company based on their original first-generation architecture.
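The filtering decision described above can be sketched as a linear scan over a rule table; every type and field name here is illustrative rather than taken from any real firewall, and real filters match ranges and wildcards rather than only exact values:

```c
#include <stdint.h>

/* Verdicts: pass the packet, drop it silently, or reject it with notice. */
enum verdict { ALLOW, DROP, REJECT };

struct rule {
    uint32_t src_addr, dst_addr;  /* IPv4 addresses, host byte order */
    uint8_t  protocol;            /* e.g. 6 = TCP, 17 = UDP */
    uint16_t dst_port;
    enum verdict action;
};

struct packet {
    uint32_t src_addr, dst_addr;
    uint8_t  protocol;
    uint16_t dst_port;
};

/* First matching rule wins; a packet matching no rule is dropped. */
enum verdict filter(const struct rule *rules, int nrules,
                    const struct packet *p)
{
    for (int i = 0; i < nrules; i++) {
        const struct rule *r = &rules[i];
        if (r->src_addr == p->src_addr && r->dst_addr == p->dst_addr &&
            r->protocol == p->protocol && r->dst_port == p->dst_port)
            return r->action;
    }
    return DROP;
}
```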
From 1989 to 1990, three colleagues from AT&T Bell Laboratories (Dave Presotto, Janardan Sharma and Kshitij Nigam) developed the second generation of firewalls, calling them circuit-level gateways. Second-generation firewalls perform the work of their first-generation predecessors but also maintain knowledge of specific conversations between endpoints by remembering which port numbers the two IP addresses are using at layer 4 of the OSI model for their conversation, allowing examination of the overall exchange between the nodes. This type of firewall is vulnerable to denial-of-service attacks that bombard the firewall with fake connections in an attempt to overwhelm it by filling its connection state memory.

Marcus Ranum, Wei Xu and Peter Churchyard released an application firewall known as the Firewall Toolkit (FWTK) in October 1993; this became the basis for the Gauntlet firewall at Trusted Information Systems. The key benefit of application-layer filtering is that it can understand certain applications and protocols.
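The layer-4 state such a circuit-level gateway keeps can be sketched as a fixed-size table of conversations keyed by the two addresses and two ports; the names and sizes are illustrative. The denial-of-service weakness mentioned above corresponds to this table being filled with bogus entries until the lookup starts failing for legitimate traffic:

```c
#include <stdbool.h>
#include <stdint.h>

struct conversation {
    uint32_t addr_a, addr_b;   /* the two endpoints' IPv4 addresses */
    uint16_t port_a, port_b;   /* the layer-4 ports they are using */
    bool     in_use;
};

#define TABLE_SIZE 1024
static struct conversation table[TABLE_SIZE];

/* Return true if the 4-tuple is a known conversation, recording it when
   there is room; false means the connection state memory is exhausted. */
bool track(uint32_t a, uint16_t pa, uint32_t b, uint16_t pb)
{
    int free_slot = -1;
    for (int i = 0; i < TABLE_SIZE; i++) {
        struct conversation *c = &table[i];
        if (c->in_use && c->addr_a == a && c->port_a == pa &&
            c->addr_b == b && c->port_b == pb)
            return true;                   /* existing conversation */
        if (!c->in_use && free_slot == -1)
            free_slot = i;
    }
    if (free_slot == -1)
        return false;                      /* table full: the DoS case */
    table[free_slot] = (struct conversation){ a, b, pa, pb, true };
    return true;
}
```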
This is useful as it is able to detect if an unwanted application or service is attempting to bypass the firewall using a disallowed protocol on an allowed port, or detect if a protocol is being abused in any harmful way. As of 2012, the so-called next-generation firewall is nothing more than "wider" or "deeper" inspection at the application layer. For example, the existing deep packet inspection functionality of modern firewalls can be extended to include intrusion prevention systems, user identity management integration and web application firewalls (WAFs). Attacks against WAFs may be implemented with the tool "WAF Fingerprinting utilizing timing side channels".

Firewalls are categorized as network-based or host-based. Network-based firewalls are positioned on the gateway computers of WANs and intranets; they are either software appliances running on general-purpose hardware, or hardware-based firewall computer appliances. Firewall appliances may offer other functionality to the internal network they protect, such as acting as a DHCP or VPN server for that network.
Host-based firewalls are positioned on the network node itself and control network traffic in and out of those machines. The host-based firewall may be a daemon or service as a part of the operating system, or an agent application such as endpoint security or protection software. Each approach has advantages and disadvantages, but each has a role in layered security. Firewalls vary in type depending on where communication originates, where it is intercepted, and the state of communication being traced.

Network layer firewalls, also called packet filters, operate at a low level of the TCP/IP protocol stack, not allowing packets to pass through the firewall unless they match the established rule set; the firewall administrator may define the rules. The term "packet filter" originated in the context of BSD operating systems. Network layer firewalls fall into two sub-categories: stateful and stateless. Commonly used packet filters on various versions of Unix are ipfw, NPF, PF and iptables.
The University of Glasgow is a public research university in Glasgow, Scotland. Founded by papal bull in 1451, it is the fourth-oldest university in the English-speaking world and one of Scotland's four ancient universities. Along with the universities of Edinburgh and St Andrews, the university was part of the Scottish Enlightenment during the 18th century. In common with universities of the pre-modern era, Glasgow originally educated students from wealthy backgrounds; however, it became a pioneer in British higher education in the 19th century by providing for the needs of students from the growing urban and commercial middle class. Glasgow University served all of these students by preparing them for professions: the law, civil service and the church. It also trained smaller but growing numbers for careers in science and engineering. The annual income of the institution for 2017–18 was £626.5 million, of which £180.8 million was from research grants and contracts, with an expenditure of £610.1 million. It is a member of Universitas 21, the Russell Group and the Guild of European Research-Intensive Universities.
The university was originally located in the city's High Street; since 1870, its main campus has been at Gilmorehill in the city's West End. Additionally, a number of university buildings are located elsewhere, such as the Veterinary School in Bearsden and the Crichton Campus in Dumfries. Alumni or former staff of the university include James Wilson, philosopher Francis Hutcheson, engineer James Watt, economist Adam Smith, physicist Lord Kelvin, surgeon Joseph Lister, seven Nobel laureates and three British Prime Ministers.

The University of Glasgow was founded in 1451 by a charter or papal bull from Pope Nicholas V, at the suggestion of King James II, giving Bishop William Turnbull, a graduate of the University of St Andrews, permission to add a university to the city's Cathedral. It is the second-oldest university in Scotland after St Andrews and the fourth-oldest in the English-speaking world. The universities of St Andrews, Glasgow and Aberdeen were ecclesiastical foundations, while Edinburgh was a civic foundation. As one of the ancient universities of the United Kingdom, Glasgow is one of only eight institutions to award undergraduate master's degrees in certain disciplines.
The university has been without its original Bull since the mid-sixteenth century. In 1560, during the political unrest accompanying the Scottish Reformation, the chancellor, Archbishop James Beaton, a supporter of the Marian cause, fled to France. He took with him, for safe-keeping, many of the archives and valuables of the Cathedral and the university, including the Mace and the Bull. Although the Mace was sent back in 1590, the archives were not. Principal Dr James Fall told the Parliamentary Commissioners of Visitation on 28 August 1690 that he had seen the Bull at the Scots College in Paris, together with the many charters granted to the university by the monarchs of Scotland from James II to Mary, Queen of Scots. The university enquired after these documents in 1738, but was informed by Thomas Innes and the superiors of the Scots College that the original records of the foundation of the university were not to be found. If they had not been lost by this time, they went astray during the French Revolution, when the Scots College was under threat.
Its records and valuables were moved out of the city of Paris for safe-keeping. The Bull remains the authority by which the university awards degrees. Teaching at the university began in the chapterhouse of Glasgow Cathedral, subsequently moving to nearby Rottenrow, in a building known as the "Auld Pedagogy". The university was given 13 acres of land belonging to the Black Friars on High Street by Mary, Queen of Scots, in 1563. By the late 17th century, the university's building centred on two courtyards surrounded by walled gardens, with a clock tower (one of the notable features of Glasgow's skyline, reaching 140 feet in height) and a chapel adapted from the church of the former Dominican friary. Remnants of this Scottish Renaissance building, mainly parts of the main facade, were transferred to the Gilmorehill campus and renamed the "Pearce Lodge", after Sir William Pearce, the shipbuilding magnate who funded its preservation. The Lion and Unicorn Staircase was also transferred from the old college site and is now attached to the Main Building. John Anderson, while professor of natural philosophy at the university, and with some opposition from his colleagues, pioneered vocational education for working men and women during the Industrial Revolution.
To continue this work, in his will he founded Anderson's College, which was associated with the university before merging with other institutions to become the University of Strathclyde in 1964. In 1973, Delphine Parrott became the university's first female professor, as Gardiner Professor of Immunology. In October 2014, the university court voted for the university to become the first academic institution in Europe to divest from the fossil fuel industry.

The university is spread over a number of different campuses. The main one is the Gilmorehill campus, in Hillhead. As well as this, there are the Garscube Estate in Bearsden, housing the Veterinary School, ship model basin and much of the university's sports facilities; the Dental School in the city centre; the section of Mental Health and Well Being at Gartnavel Royal Hospital on Great Western Road; the Teaching and Learning Centre at the Queen Elizabeth University Hospital; and the Crichton campus in Dumfries. The Imaging Ce
GNU is an operating system and an extensive collection of computer software. GNU is composed wholly of free software, most of which is licensed under the GNU Project's own General Public License. GNU is a recursive acronym for "GNU's Not Unix!", chosen because GNU's design is Unix-like, but differs from Unix by being free software and containing no Unix code. The GNU project includes an operating system kernel, GNU Hurd, which was the original focus of the Free Software Foundation. However, given the Hurd kernel's status as not yet production-ready, non-GNU kernels, most popularly the Linux kernel, can be used with GNU software. The combination of GNU and Linux has become ubiquitous to the point that the duo is referred to as just "Linux" in short, or, less frequently, GNU/Linux. Richard Stallman, the founder of the project, views GNU as a "technical means to a social end". Relatedly, Lawrence Lessig states in his introduction to the second edition of Stallman's book Free Software, Free Society that in it Stallman has written about "the social aspects of software and how Free Software can create community and social justice".
Development of the GNU operating system was initiated by Richard Stallman while he worked at the MIT Artificial Intelligence Laboratory. It was called the GNU Project, and was publicly announced on September 27, 1983, on the net.unix-wizards and net.usoft newsgroups by Stallman. Software development began on January 5, 1984, when Stallman quit his job at the lab so that MIT could not claim ownership of, or interfere with, distributing GNU components as free software. Richard Stallman chose the name by way of various plays on words, including the song The Gnu. The goal was to bring a wholly free software operating system into existence. Stallman wanted computer users to be free to study the source code of the software they use, share software with other people, modify the behavior of software, and publish their own modified versions of the software. This philosophy was published as the GNU Manifesto in March 1985. Richard Stallman's experience with the Incompatible Timesharing System, an early operating system written in assembly language that became obsolete due to the discontinuation of the PDP-10, the computer architecture for which ITS was written, led to a decision that a portable system was necessary.
It was thus decided that the development would be started using C and Lisp as system programming languages, and that GNU would be compatible with Unix. At the time, Unix was a popular proprietary operating system. The design of Unix was modular, so it could be reimplemented piece by piece. Much of the needed software had to be written from scratch, but existing compatible third-party free software components were also used, such as the TeX typesetting system, the X Window System, and the Mach microkernel that forms the basis of the GNU Mach core of GNU Hurd. With the exception of the aforementioned third-party components, most of GNU has been written by volunteers. In October 1985, Stallman set up the Free Software Foundation. In the late 1980s and 1990s, the FSF hired software developers to write the software needed for GNU. As GNU gained prominence, interested businesses began contributing to development or selling GNU software and technical support. The most prominent and successful of these was Cygnus Solutions, now part of Red Hat.
The system's basic components include the GNU Compiler Collection (GCC), the GNU C library (glibc) and the GNU Core Utilities (coreutils), but also the GNU Debugger (GDB), the GNU Binary Utilities (binutils), the GNU Bash shell and the GNOME desktop environment. GNU developers have contributed to Linux ports of GNU applications and utilities, which are now also widely used on other operating systems such as BSD variants and macOS. Many GNU programs have been ported to other operating systems, including proprietary platforms such as Microsoft Windows and macOS. GNU programs have been shown to be more reliable than their proprietary Unix counterparts. As of November 2015, there are a total of 466 GNU packages hosted on the official GNU development site. The official kernel of the GNU Project is the GNU Hurd microkernel. With the April 30, 2015 release of the Debian GNU/Hurd 2015 distribution, GNU OS now provides the components to assemble an operating system that users can install and use on a computer. This includes the GNU Hurd kernel, which is in a pre-production state. The Hurd status page states that "it may not be ready for production use, as there are still some bugs and missing features.
However, it should be a good base for further development and non-critical application usage." Due to Hurd not being ready for production use, in practice these operating systems are Linux distributions. They contain GNU components and software from many other free software projects. Looking at all the program code contained in the Ubuntu Linux distribution in 2011, GNU encompassed 8% and the Linux kernel 6%. Other kernels like the FreeBSD kernel also work together with GNU software to form a working operating system. The FSF maintains that an operating system built using the Linux kernel and GNU tools and utilities should be considered a variant of GNU, and promotes the term GNU/Linux for such systems. The GNU Project has endorsed Linux distributions such as gNewSense and Parabola GNU/Linux-libre.
The NetBSD rump kernel is the first implementation of the "anykernel" concept, where drivers can either be compiled into and run in the monolithic kernel or be run in user space on top of a light-weight rump kernel. The NetBSD drivers can be used on top of the rump kernel on a wide range of POSIX operating systems, such as the Hurd, NetBSD, DragonFly BSD, Solaris and Cygwin, along with the file system utilities built with the rump libraries. Rump kernels can also run without POSIX directly on top of the Xen hypervisor, on an L4 microkernel using the Genode OS Framework, or on "OS-less" bare metal. An anykernel is different in concept from microkernels, partitioned kernels or hybrid kernels in that it tries to preserve the advantages of a monolithic kernel, while still enabling the faster driver development and added security of user space. The "anykernel" concept refers to an architecture-agnostic approach to drivers where drivers can either be compiled into the monolithic kernel or be run as a userspace process, microkernel-style, without code changes.
With drivers, a wider concept is considered, where not only device drivers are included but also file systems and the networking stack. The File System Access Utilities (fs-utils) is a subproject built with the rump libraries. It aims to provide a set of utilities to access and modify a file system image without having to mount it; fs-utils does not require a superuser account to access the image. The advantage of fs-utils over similar projects such as mtools is that it supports the usage of familiar Unix file system commands for the large number of file systems supported by NetBSD.