OpenVMS is a closed-source, proprietary computer operating system for general-purpose computing. It is the successor to the VMS operating system, produced by Digital Equipment Corporation and first released in 1977 for its series of VAX-11 minicomputers; the 11/780 was introduced at DEC's October 1977 annual shareholders' meeting. In the 1990s, it was used on the successor series of DEC Alpha systems. OpenVMS also runs on the HP Itanium-based families of computers; as of 2019, a port to the x86-64 architecture is underway. The name VMS is derived from virtual memory system, referring to one of its principal architectural features. OpenVMS is a multi-user, multiprocessing, virtual memory-based operating system designed for time-sharing, batch processing, and transaction processing; when process priorities are suitably adjusted, it may approach real-time operating system characteristics. The system offers high availability through clustering and the ability to distribute the system over multiple physical machines.
This allows the system to be tolerant of disasters that may disable individual data-processing facilities. OpenVMS contains a graphical user interface, a feature not available on the original VAX-11/VMS system. Prior to the introduction of DEC VAXstation systems in the 1980s, the operating system was used and managed from text-based terminals, such as the VT100, which provide serial data communications and screen-oriented display features. Versions of VMS running on DEC Alpha workstations in the 1990s supported OpenGL and Accelerated Graphics Port graphics adapters. Enterprise-class environments select OpenVMS for various purposes, including mail servers, network services, manufacturing or transportation control and monitoring, critical applications and databases, and other environments where system uptime and data access are critical. System uptimes of more than 10 years have been reported, and features such as rolling upgrades and clustering allow clustered applications and data to remain continuously accessible while operating system software and hardware maintenance and upgrades are performed, or even when a whole data center is destroyed.
Customers using OpenVMS include banks and financial services, healthcare providers, network information services, and large-scale industrial manufacturers of various products. As of mid-2014, Hewlett-Packard has licensed the development of OpenVMS to VMS Software Inc. VMS Software will be responsible for developing OpenVMS, supporting existing hardware, and providing a roadmap to clients; the company has a team of veteran developers who worked on the software during DEC's ownership. In April 1975, Digital Equipment Corporation embarked on a hardware project, code-named Star, to design a 32-bit virtual address extension to its PDP-11 computer line. A companion software project, code-named Starlet, was started in June 1975 to develop a new operating system, based on RSX-11M, for the Star family of processors; these two projects were integrated from the beginning. Gordon Bell was the VP leading the architecture. Roger Gourd was the project lead for the Starlet program, with software engineers Dave Cutler, Dick Hustvedt, and Peter Lipman acting as the technical project leaders, each having responsibility for a different area of the operating system.
The Star and Starlet projects culminated in the VAX-11/780 computer and the VAX-11/VMS operating system. The Starlet name survived in VMS as the name of several of the main system libraries, including STARLET.OLB and STARLET.MLB. Over the years the name of the product has changed. In 1980 it was renamed, with the version 2.0 release, to VAX/VMS. With the introduction of the MicroVAX range, such as the MicroVAX I, MicroVAX II and MicroVAX 2000, in the mid-to-late 1980s, DIGITAL released MicroVMS versions targeted at these platforms, which had much more limited memory and disk capacity. MicroVMS kits were released for VAX/VMS 4.4 to 4.7 on TK50 tapes and RX50 floppy disks, but were discontinued with VAX/VMS 5.0. In 1991, VMS was renamed OpenVMS to indicate its support of "open systems" industry standards such as POSIX and Unix compatibility, and to drop the hardware connection as the port to DIGITAL's 64-bit Alpha RISC processor was in process; the OpenVMS name first appeared after the version 5.4-2 release.
The VMS port to Alpha resulted in the creation of two separate source code libraries: one for the 32-bit VAX architecture and a second, new library for the 64-bit Alpha architecture. 1992 saw the release of the first version of OpenVMS for Alpha AXP systems, designated OpenVMS AXP V1.0. The decision to use the 1.x version numbering stream for the pre-production-quality releases of OpenVMS AXP caused confusion for some customers and was not repeated in the next platform port, to the Itanium. In 1994, with the release of OpenVMS version 6.1, feature parity between the VAX and Alpha variants was achieved; this was the so-called Functional Equivalence release in the marketing materials of the time. Some features were missing, however, e.g. based shareable images, which were implemented in later versions. Subsequent version numberings for the VAX and Alpha variants of the product have remained consistent across the two architectures.
A computer file is a computer resource for recording data discretely in a computer storage device. Just as words can be written to paper, so can information be written to a computer file. Files can be transferred through the internet. There are different types of computer files, designed for different purposes. A file may be designed to store a picture, a written message, a video, a computer program, or a wide variety of other kinds of data; some types of files can store several types of information at once. By using computer programs, a person can open, change, and close a computer file. Computer files may be reopened and copied an arbitrary number of times. Files are organised in a file system, which keeps track of where the files are located on disk and enables user access. The word "file" derives from the Latin filum ("a thread"). "File" was used in the context of computer storage as early as January 1940. In Punched Card Methods in Scientific Computation, W. J. Eckert stated, "The first extensive use of the early Hollerith Tabulator in astronomy was made by Comrie.
He used it for building a table from successive differences, and for adding large numbers of harmonic terms". "Tables of functions are constructed from their differences with great efficiency, either as printed tables or as a file of punched cards." In February 1950, in a Radio Corporation of America advertisement in Popular Science magazine describing a new "memory" vacuum tube it had developed, RCA stated: "the results of countless computations can be kept 'on file' and taken out again. Such a 'file' now exists in a 'memory' tube developed at RCA Laboratories. Electronically it retains figures fed into calculating machines, holds them in storage while it memorizes new ones - speeds intelligent solutions through mazes of mathematics." In 1952, "file" denoted information stored on punched cards. In early use, the underlying hardware, rather than the contents stored on it, was denominated a "file". For example, the IBM 350 disk drives were denominated "disk files". The introduction, circa 1961, by the Burroughs MCP and the MIT Compatible Time-Sharing System of the concept of a "file system" that managed several virtual "files" on one storage device is the origin of the contemporary denotation of the word.
Although the contemporary "register file" demonstrates the early concept of files, its use has greatly decreased. On most modern operating systems, files are organized into one-dimensional arrays of bytes. The format of a file is defined by its content, since a file is purely a container for data, although on some platforms the format is indicated by its filename extension, specifying the rules for how the bytes must be organized and interpreted meaningfully. For example, the bytes of a plain text file are associated with either ASCII or UTF-8 characters, while the bytes of image and audio files are interpreted otherwise. Most file types also allocate a few bytes for metadata, which allows a file to carry some basic information about itself. Some file systems can store arbitrary file-specific data outside of the file format, but linked to the file, for example extended attributes or forks; on other file systems this can be done via software-specific databases. All those methods, however, are more susceptible to loss of metadata than are container and archive file formats.
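The distinction between a format indicated by a filename extension and a format defined by the file's content can be illustrated with a short sketch. The signatures and logic below are a minimal illustration, not a complete format detector: it checks a few well-known leading-byte "magic numbers" and falls back to treating cleanly decodable bytes as plain text.

```python
# Minimal sketch: inferring a file's format from its leading bytes
# ("magic numbers") rather than trusting the filename extension.
# The signatures below are the standard ones for PNG, JPEG, and PDF.

MAGIC_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"\xff\xd8\xff": "JPEG image",
    b"%PDF-": "PDF document",
}

def sniff_format(data: bytes) -> str:
    """Return a best-guess format name for the given leading bytes."""
    for magic, name in MAGIC_SIGNATURES.items():
        if data.startswith(magic):
            return name
    # Fallback: bytes that decode cleanly as UTF-8 are likely plain text.
    try:
        data.decode("utf-8")
        return "plain text (UTF-8)"
    except UnicodeDecodeError:
        return "unknown binary"

print(sniff_format(b"\x89PNG\r\n\x1a\n..."))  # PNG image
print(sniff_format(b"hello, world"))          # plain text (UTF-8)
```

Real detectors (such as the Unix file utility) consult a much larger database of signatures, but the principle is the same: the bytes, not the name, define the format.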
At any instant in time, a file might have a size, normally expressed as a number of bytes, that indicates how much storage is associated with the file. In most modern operating systems the size can be any non-negative whole number of bytes up to a system limit. Many older operating systems kept track only of the number of blocks or tracks occupied by a file on a physical storage device; in such systems, software employed other methods to track the exact byte count. The general definition of a file does not require that its size have any real meaning, however, unless the data within the file happens to correspond to data within a pool of persistent storage. A special case is a zero-byte file. For example, the file to which the link /bin/ls points in a typical Unix-like system has a defined size that seldom changes. Compare this with /dev/null, which is also a file, but whose size may be obscure. Information in a computer file can consist of smaller packets of information that are individually different but share some common traits. For example, a payroll file might contain information concerning all the employees in a company and their payroll details.
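The notion of file size as a non-negative whole number of bytes, including the zero-byte special case, can be demonstrated with the standard library; the file name below is arbitrary.

```python
# Illustration of file sizes as reported by the operating system.
# A freshly created file is a zero-byte file; writing to it grows
# its size in whole bytes.

import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "example.dat")

    open(path, "wb").close()            # create an empty file
    print(os.path.getsize(path))        # 0  -- a zero-byte file

    with open(path, "wb") as f:
        f.write(b"payroll record\n")    # 15 bytes of content
    print(os.path.getsize(path))        # 15
```

On Unix-like systems the same size is available through os.stat(path).st_size, which reads the metadata the file system keeps alongside the file's content.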
A text file may contain lines of text corresponding to printed lines on a piece of paper. Alternatively, a file may contain an arbitrary binary image or it may contain an executable. The way information is grouped into a file is entirely up to how it is designed; this has led to a plethora of more or less standardized file structures for all imaginable purposes, from the simplest to the most complex. Most computer files are used by computer programs which create, modify or delete the files for their own use on an as-needed basis; the programmers who create the programs decide what files are needed, how they are to be used and their names. In some cases, computer pr
Microsoft Hyper-V, codenamed Viridian and formerly known as Windows Server Virtualization, is a native hypervisor. Starting with Windows 8, Hyper-V superseded Windows Virtual PC as the hardware virtualization component of the client editions of Windows NT. A server computer running Hyper-V can be configured to expose individual virtual machines to one or more networks. Hyper-V was first released alongside Windows Server 2008, and has been available without additional charge since Windows Server 2012 and Windows 8; a standalone Windows Hyper-V Server is free. A beta version of Hyper-V was shipped with certain x86-64 editions of Windows Server 2008; the finalized version was delivered through Windows Update. Hyper-V has since been released with every version of Windows Server. Microsoft provides Hyper-V through two channels. Part of Windows: Hyper-V is an optional component of Windows Server 2008 and later; it is also available in x64 SKUs of the Pro and Enterprise editions of Windows 8, Windows 8.1 and Windows 10. Hyper-V Server: a freeware edition of Windows Server with limited functionality and the Hyper-V component.
Hyper-V Server 2008 was released on October 1, 2008. It consists of the Hyper-V role only. Hyper-V Server 2008 is limited to a command-line interface used to configure the host OS, physical hardware, and software. A menu-driven CLI interface and some downloadable script files simplify configuration. In addition, Hyper-V Server supports remote access via Remote Desktop Connection; however, administration and configuration of the host OS and the guest virtual machines is generally done over the network, using either Microsoft Management Consoles on another Windows computer or System Center Virtual Machine Manager. This allows much easier "point and click" configuration and monitoring of the Hyper-V Server. Hyper-V Server 2008 R2 was made available in September 2009 and includes Windows PowerShell v2 for greater CLI control. Remote access to Hyper-V Server requires CLI configuration of network interfaces and Windows Firewall. Using a Windows Vista PC to administer Hyper-V Server 2008 R2 is not supported. Hyper-V implements isolation of virtual machines in terms of a partition.
A partition is a logical unit of isolation, supported by the hypervisor, in which each guest operating system executes. There must be at least one parent partition in a hypervisor instance, running a supported version of Windows Server; the virtualization software runs in the parent partition and has direct access to the hardware devices. The parent partition creates child partitions, which host the guest operating systems, using the hypercall API, the application programming interface exposed by Hyper-V. A child partition does not have access to the physical processor, nor does it handle its real interrupts. Instead, it has a virtual view of the processor and runs in Guest Virtual Address space, which, depending on the configuration of the hypervisor, might not be the entire virtual address space. Depending on VM configuration, Hyper-V may expose only a subset of the processors to each partition; the hypervisor handles the interrupts to the processor and redirects them to the respective partition using a logical Synthetic Interrupt Controller (SynIC).
Hyper-V can hardware-accelerate the address translation of Guest Virtual Address spaces by using second-level address translation provided by the CPU, referred to as EPT on Intel and RVI on AMD. Child partitions do not have direct access to hardware resources, but instead have a virtual view of the resources, in terms of virtual devices. Any request to the virtual devices is redirected via the VMBus, a logical channel, to the devices in the parent partition, which manages the requests; the response is redirected back via the VMBus as well. If the devices in the parent partition are themselves virtual devices, the request is redirected further until it reaches a parent partition where it gains access to the physical devices. Parent partitions run a Virtualization Service Provider (VSP), which connects to the VMBus and handles device access requests from child partitions. Child partition virtual devices internally run a Virtualization Service Client (VSC), which redirects requests to VSPs in the parent partition via the VMBus.
This entire process is transparent to the guest OS. Virtual devices can also take advantage of a Windows Server Virtualization feature, named Enlightened I/O, for storage and graphics subsystems, among others. Enlightened I/O is a specialized virtualization-aware implementation of high-level communication protocols, like SCSI, that allows bypassing any device emulation layer and taking advantage of VMBus directly; this makes the communication more efficient, but requires the guest OS to support Enlightened I/O. Only the following operating systems support Enlightened I/O, allowing them to run faster as guest operating systems under Hyper-V than other operating systems that need to use slower emulated hardware: Windows Server 2008 and later, Windows Vista and later, Linux with a 3.4 or later kernel, and FreeBSD. The Hyper-V role is only available in the x86-64 variants of the Standard and Datacenter editions of Windows Server 2008 and later, as well as the Pro and Education editions of Windows 8 and later. On Windows Server, it can be installed regardless of whether the installation is a full or core installation.
In addition, Hyper-V can be made available as part of
Active Directory is a directory service that Microsoft developed for Windows domain networks. It is included in most Windows Server operating systems as a set of services. Initially, Active Directory was only in charge of centralized domain management; starting with Windows Server 2008, however, Active Directory became an umbrella title for a broad range of directory-based identity-related services. A server running Active Directory Domain Services is called a domain controller. It authenticates and authorizes all users and computers in a Windows domain type network, assigning and enforcing security policies for all computers and installing or updating software. For example, when a user logs into a computer that is part of a Windows domain, Active Directory checks the submitted password and determines whether the user is a system administrator or normal user. Active Directory also allows management and storage of information, provides authentication and authorization mechanisms, and establishes a framework to deploy other related services: Certificate Services, Active Directory Federation Services, Lightweight Directory Services and Rights Management Services.
Active Directory uses Lightweight Directory Access Protocol versions 2 and 3, Microsoft's version of Kerberos, and DNS. Active Directory, like many information-technology efforts, originated out of a democratization of design using Requests for Comments (RFCs); the Internet Engineering Task Force, which oversees the RFC process, has accepted numerous RFCs initiated by widespread participants. Active Directory incorporates decades of communication technologies into the overarching Active Directory concept and makes improvements upon them. For example, LDAP underpins Active Directory, and X.500 directories and the Organizational Unit preceded the Active Directory concept that makes use of those methods. The LDAP concept began to emerge even before the founding of Microsoft in April 1975, with RFCs as early as 1971. RFCs contributing to LDAP include RFC 1823, RFC 2307, RFC 3062, and RFC 4533. Microsoft previewed Active Directory in 1999, released it first with the Windows 2000 Server edition, and revised it to extend functionality and improve administration in Windows Server 2003.
Additional improvements came with subsequent versions of Windows Server. In Windows Server 2008, additional services were added to Active Directory, such as Active Directory Federation Services; the part of the directory in charge of management of domains, previously a core part of the operating system, was renamed Active Directory Domain Services and became a server role like others. "Active Directory" became the umbrella title of a broader range of directory-based services. According to Byron Hynes, everything related to identity was brought under Active Directory's banner. Active Directory Services consist of multiple directory services; the best known is Active Directory Domain Services, commonly abbreviated as AD DS or simply AD. Active Directory Domain Services is the cornerstone of every Windows domain network. It stores information about members of the domain, including devices and users, verifies their credentials and defines their access rights. The server running this service is called a domain controller. A domain controller is contacted when a user logs into a device, accesses another device across the network, or runs a line-of-business Metro-style app sideloaded into a device.
Other Active Directory services, as well as most Microsoft server technologies, rely on or use Domain Services. Active Directory Lightweight Directory Services (AD LDS), formerly known as Active Directory Application Mode (ADAM), is a light-weight implementation of AD DS. AD LDS runs as a service on Windows Server. AD LDS shares the code base with AD DS and provides the same functionality, including an identical API, but does not require the creation of domains or domain controllers. It provides a Data Store for storage of directory data and a Directory Service with an LDAP Directory Service Interface. Unlike AD DS, however, multiple AD LDS instances can run on the same server. Active Directory Certificate Services (AD CS) establishes an on-premises public key infrastructure. It can create and revoke public key certificates for internal uses of an organization; these certificates can be used to encrypt files and network traffic. AD CS predates Windows Server 2008, but its name was simply Certificate Services. AD CS requires an AD DS infrastructure.
Active Directory Federation Services (AD FS) is a single sign-on service. With an AD FS infrastructure in place, users may use several web-based services or network resources using only one set of credentials stored at a central location, as opposed to having to be granted a dedicated set of credentials for each service. AD FS's purpose is an extension of that of AD DS: the latter enables users to authenticate with and use the devices that are part of the same network, using one set of credentials; the former enables them to use the same set of credentials in a different network. As the name suggests, AD FS works based on the concept of federated identity. AD FS requires an AD DS infrastructure. Active Directory Rights Management Services (AD RMS) is server software for information rights management shipped with Windows Server
Logical Volume Manager (Linux)
In Linux, Logical Volume Manager (LVM) is a device mapper target that provides logical volume management for the Linux kernel. Most modern Linux distributions are LVM-aware to the point of being able to have their root file systems on a logical volume. Heinz Mauelshagen wrote the original LVM code in 1998, taking its primary design guidelines from HP-UX's volume manager. LVM is used for the following purposes: creating single logical volumes out of multiple physical volumes or entire hard disks, allowing for dynamic volume resizing; managing large hard disk farms by allowing disks to be added and replaced without downtime or service disruption, in combination with hot swapping; on small systems, allowing filesystems to be resized as needed, instead of having to estimate at installation time how big a partition might need to be; performing consistent backups by taking snapshots of the logical volumes; and encrypting multiple physical partitions with one password. LVM can be considered as a thin software layer on top of the hard disks and partitions, which creates an abstraction of continuity and ease-of-use for managing hard drive replacement and backup.
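The abstraction LVM provides can be sketched with a toy model. This is an illustration only, not the real LVM code: physical volumes are divided into fixed-size physical extents (PEs), and a logical volume is simply an ordered list of PEs that may come from several PVs, so it can span, and be resized across, multiple disks. All class and device names here are hypothetical.

```python
# Toy model of LVM's layering: PV -> pool of extents -> LV.
# A logical volume is an ordered list of physical extents, possibly
# drawn from several physical volumes.

EXTENT_MB = 4  # LVM's default physical extent size is 4 MiB

class PhysicalVolume:
    def __init__(self, name, size_mb):
        self.name = name
        self.free = [f"{name}:PE{i}" for i in range(size_mb // EXTENT_MB)]

class LogicalVolume:
    def __init__(self, name):
        self.name = name
        self.extents = []          # ordered PEs backing this LV

    def extend(self, pv, size_mb):
        """Grow the LV by concatenating extents from `pv` onto it."""
        n = size_mb // EXTENT_MB
        self.extents += [pv.free.pop(0) for _ in range(n)]

    @property
    def size_mb(self):
        return len(self.extents) * EXTENT_MB

# One LV spanning two disks, resized by concatenating extents:
pv1, pv2 = PhysicalVolume("sda1", 16), PhysicalVolume("sdb1", 16)
lv = LogicalVolume("root")
lv.extend(pv1, 16)   # use all of the first disk
lv.extend(pv2, 8)    # grow onto the second disk
print(lv.size_mb)    # 24
```

The real implementation stores this extent mapping as on-disk metadata and presents the LV to the kernel as a single block device, but the bookkeeping is conceptually the same.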
Volume groups (VGs) can be resized online by absorbing new physical volumes (PVs) or ejecting existing ones. Logical volumes (LVs) can be resized online by concatenating extents onto them or truncating extents from them. LVs can be moved between PVs. Read-only or read-write snapshots of logical volumes can be created. VGs can be split or merged in situ as long as no LVs span the split; this can be useful when migrating whole LVs to or from offline storage. LVM objects can be tagged for administrative convenience. VGs and LVs can be automatically made active as the underlying devices become available, through use of the lvmetad daemon. Hybrid volumes can be created using the dm-cache target, which allows one or more fast storage devices, such as flash-based SSDs, to act as a cache for one or more slower hard disk drives. Thinly provisioned LVs can be allocated from a pool. On newer versions of device mapper, LVM is integrated with the rest of device mapper enough to ignore the individual paths that back a dm-multipath device if devices/multipath_component_detection=1 is set in lvm.conf. This prevents LVM from activating volumes on an individual path instead of the multipath device.
LVs can be created to include RAID functionality, including RAID 1, 5 and 6. Entire LVs or their parts can be striped across multiple PVs, similarly to RAID 0. A RAID 1 backend device can be configured as "write-mostly", resulting in reads being avoided to such devices unless necessary. Recovery rate can be limited using lvchange --raidmaxrecoveryrate and lvchange --raidminrecoveryrate to maintain acceptable I/O performance while rebuilding an LV that includes RAID functionality. LVM also works in a shared-storage cluster in which disks holding the PVs are shared between multiple host computers, but this can require an additional daemon to mediate metadata access via a form of locking. CLVM: a distributed lock manager is used to broker concurrent LVM metadata accesses. Whenever a cluster node needs to modify the LVM metadata, it must secure permission from its local clvmd, which is in constant contact with other clvmd daemons in the cluster and can communicate a desire to get a lock on a particular set of objects. HA-LVM: cluster-awareness is left to the application providing the high availability function.
For the LVM's part, HA-LVM can use CLVM as a locking mechanism, or can continue to use the default file locking and reduce "collisions" by restricting access to only those LVM objects that have appropriate tags. Since this simpler solution avoids contention rather than mitigating it, no concurrent accesses are allowed; as such, HA-LVM is considered useful only in active-passive configurations. lvmlockd: as of 2017, a stable LVM component designed to replace clvmd by making the locking of LVM objects transparent to the rest of LVM, without relying on a distributed lock manager; it saw massive development during 2016. The mechanisms described above coordinate access only to the shared LVM metadata; the file system selected to sit on top of such LVs must either support clustering by itself or be mounted by only a single cluster node at any time. LVM VGs must contain a default allocation policy for new volumes created from them; this can later be changed for each LV using the lvconvert -A command, or on the VG itself via vgchange --alloc. To minimize fragmentation, LVM will attempt the strictest policy first and progress toward the most liberal policy defined for the LVM object until allocation succeeds.
In RAID configurations, all policies are applied to each leg in isolation. For example, if an LV has a policy of cling, expanding the file system will not result in LVM using a PV if it is already used by one of the other legs in the RAID setup. LVs with RAID functionality will put each leg on different PVs, making the other PVs unavailable to any other given leg; if this were the only option available, expansion of the LV would fail. In this sense, the logic behind cling applies only to expanding each of the individual legs of the array. Available allocation policies are: Contiguous - forces all LEs in a given LV to be adjacent and ordered; this eliminates fragmentation but reduces an LV's expandability. Cling - forces new LEs to be allocated only on PVs already used by an LV; this can help mitigate fragmentation as well as reduce the vulnerability of particular LVs should a device go down, by reducing the likelihood that other LVs also have extents on that PV. Normal - implies near-indiscriminate selection of PEs, but it will attempt t
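The strictest-first fallback through allocation policies can be sketched as follows. This is a simplified illustration under stated assumptions, not LVM's actual allocator: the policy names (contiguous, cling, normal) are real, but the data model (a dictionary of free extent indices per PV) and the selection logic are hypothetical simplifications.

```python
# Hedged sketch of allocation-policy fallback: try the strictest
# policy first and fall back toward more liberal ones until the
# requested number of extents can be allocated.

def contiguous(free, n, used_pvs):
    """All n extents must be consecutive on a single PV."""
    for pv, extents in free.items():
        for i in range(len(extents) - n + 1):
            window = extents[i:i + n]
            if window == list(range(window[0], window[0] + n)):
                return [(pv, e) for e in window]
    return None

def cling(free, n, used_pvs):
    """Allocate only from PVs the LV already uses."""
    picks = [(pv, e) for pv in used_pvs for e in free.get(pv, [])]
    return picks[:n] if len(picks) >= n else None

def normal(free, n, used_pvs):
    """Near-indiscriminate selection from any PV."""
    picks = [(pv, e) for pv, extents in free.items() for e in extents]
    return picks[:n] if len(picks) >= n else None

def allocate(free, n, used_pvs):
    for policy in (contiguous, cling, normal):   # strictest first
        result = policy(free, n, used_pvs)
        if result is not None:
            return policy.__name__, result
    raise RuntimeError("allocation failed")

# Free extent indices per PV; sdb1 has a contiguous run of three.
free = {"sda1": [0, 1, 5], "sdb1": [2, 3, 4]}
print(allocate(free, 3, used_pvs=["sda1"]))
# ('contiguous', [('sdb1', 2), ('sdb1', 3), ('sdb1', 4)])
```

When no contiguous run exists, the sketch falls back to cling (staying on the LV's own PVs), and only then to normal, mirroring the fragmentation-minimizing order described above.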
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Android's share was up to 70% in 2017; according to third quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a per-year decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes; these processes are each interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux - as well as non-Unix-like ones, such as AmigaOS - support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
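The cooperative model described above can be sketched with generators standing in for processes. This is a didactic simulation, not an OS scheduler: each task runs until it voluntarily yields, and a round-robin loop hands out the next time slice. If a task never yielded, it would starve all the others, which is exactly the weakness preemptive multitasking addresses.

```python
# Sketch of cooperative multitasking: each "process" is a generator
# that must voluntarily yield control back to the scheduler.

from collections import deque

def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"        # do one slice of work, then yield

def run(tasks):
    """Round-robin scheduler: one time slice per ready task, in turn."""
    ready = deque(tasks)
    trace = []
    while ready:
        t = ready.popleft()
        try:
            trace.append(next(t))  # let the task run until it yields
            ready.append(t)        # still alive: back of the queue
        except StopIteration:
            pass                   # task finished; drop it
    return trace

print(run([task("A", 2), task("B", 3)]))
# ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
```

The interleaved trace shows A and B alternating until A finishes, after which B runs alone; a preemptive scheduler would produce a similar interleaving, but enforced by timer interrupts rather than by the tasks' cooperation.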
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines with less autonomy, such as PDAs; they are able to operate with a limited number of resources, and are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri
Windows 8 is a personal computer operating system produced by Microsoft as part of the Windows NT family of operating systems. The operating system was released to manufacturing on August 1, 2012, with general availability on October 26, 2012. Windows 8 introduced major changes to the operating system's platform and user interface to improve its user experience on tablets, where Windows was now competing with mobile operating systems, including Android and iOS. In particular, these changes included a touch-optimized Windows shell based on Microsoft's "Metro" design language, the Start screen, a new platform for developing "apps" with an emphasis on touchscreen input, integration with online services, and the Windows Store, an online store for downloading and purchasing new software. Windows 8 added support for USB 3.0, Advanced Format hard drives, near field communications, and cloud computing. Additional security features were introduced, such as built-in antivirus software, integration with the Microsoft SmartScreen phishing filtering service, and support for UEFI Secure Boot on supported devices with UEFI firmware, to prevent malware from infecting the boot process.
Windows 8 was released to a mixed critical reception. Although reaction towards its performance improvements, security enhancements, and improved support for touchscreen devices was positive, the new user interface of the operating system was criticized for being confusing and difficult to learn when used with a keyboard and mouse instead of a touchscreen. Despite these shortcomings, 60 million Windows 8 licenses were sold through January 2013, a number that included both upgrades and sales to OEMs for new PCs. On October 17, 2013, Microsoft released Windows 8.1. It addressed some aspects of Windows 8 that were criticized by reviewers and early adopters, and incorporated additional improvements to various aspects of the operating system. Windows 8 was succeeded by Windows 10 in July 2015. Microsoft stopped providing support and updates for Windows 8 RTM on January 12, 2016; per Microsoft lifecycle policies regarding service packs, Windows 8.1 must be installed to maintain support and receive further updates.
Windows 8 development started before Windows 7 had shipped in 2009. At the Consumer Electronics Show in January 2011, it was announced that the next version of Windows would add support for ARM system-on-chips alongside the existing x86 processors produced by vendors such as AMD and Intel. Windows division president Steven Sinofsky demonstrated an early build of the port on prototype devices, while Microsoft CEO Steve Ballmer announced the company's goal for Windows to be "everywhere on every kind of device without compromise." Details began to surface about a new application framework for Windows 8 codenamed "Jupiter", which would be used to make "immersive" applications using XAML that could be distributed via a new packaging system and a rumored application store. Three milestone releases of Windows 8 leaked to the general public. Milestone 1, Build 7850, was leaked on April 12, 2011; it was the first build in which window text was centered instead of aligned to the left. It was probably the first appearance of the Metro-style font, and its wallpaper had the text "shhh... let's not leak our hard work".
However, its detailed build number reveals that the build was created on September 22, 2010. The leaked copy was the Enterprise edition, and the OS still identified itself as "Windows 7". Milestone 2, Build 7955, was leaked on April 25, 2011; the traditional Blue Screen of Death was replaced by a new black screen, although this was later scrapped. This build introduced a new ribbon in Windows Explorer. Build 7959, with minor changes but the first 64-bit version, was leaked on May 1, 2011; the "Windows 7" logo was temporarily replaced with text displaying "Microsoft Confidential". On June 17, 2011, a 64-bit edition of build 7989 was leaked; it introduced a new boot screen featuring the same fish as the default Windows 7 Beta wallpaper, which was later scrapped, and the circling dots featured in the final release. It had the text "Welcome" below them, although this was scrapped. On June 1, 2011, Microsoft unveiled Windows 8's new user interface, as well as additional features, at both Computex Taipei and the D9: All Things Digital conference in California; the "Building Windows 8" blog launched on August 15, 2011, featuring details surrounding Windows 8's features and its development process.
Microsoft unveiled more Windows 8 features and improvements on the first day of the Build conference on September 13, 2011. Microsoft released the first public beta build, the Windows Developer Preview, at the event. A Samsung tablet running the build was distributed to conference attendees; the build was released for download later in the day in standard 32-bit and 64-bit versions, plus a special 64-bit version that included SDKs and developer tools for developing Metro-style apps. The Windows Store was not available in this build. According to Microsoft, there were about 535,000 downloads of the developer preview within the first 12 hours of its release. Originally set to expire on March 11, 2012, the Developer Preview's expiry date was changed in February 2012 to January 15, 2013. On February 19, 2012, Microsoft unveiled a new logo to be adopted for Windows 8. Designed by Pentagram partner Paula Scher, the Windows logo was changed to resemble a set of four window panes. Additionally, the entire logo is now rend