Metadata is "data that provides information about other data". Many distinct types of metadata exist, among these descriptive metadata, structural metadata, administrative metadata, reference metadata and statistical metadata. Descriptive metadata describes a resource for purposes such as identification, it can include elements such as title, abstract and keywords. Structural metadata is metadata about containers of data and indicates how compound objects are put together, for example, how pages are ordered to form chapters, it describes the types, versions and other characteristics of digital materials. Administrative metadata provides information to help manage a resource, such as when and how it was created, file type and other technical information, who can access it. Reference metadata describes the contents and quality of statistical data Statistical metadata may describe processes that collect, process, or produce statistical data. Metadata was traditionally used in the card catalogs of libraries until the 1980s, when libraries converted their catalog data to digital databases.
In the 2000s, as digital formats became the prevalent way of storing data and information, metadata was used to describe digital data using metadata standards. The first description of "meta data" for computer systems is purportedly noted by MIT's Center for International Studies experts David Griffel and Stuart McIntosh in 1967: "In summary we have statements in an object language about subject descriptions of data and token codes for the data. We have statements in a meta language describing the data relationships and transformations, ought/is relations between norm and data." There are different metadata standards for each discipline. Describing the contents and context of data or data files increases their usefulness. For example, a web page may include metadata specifying what software language the page is written in, what tools were used to create it, what subjects the page is about, and where to find more information about the subject; this metadata can automatically improve the reader's experience and make it easier for users to find the web page online.
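As a rough illustration of how such page-level metadata is exposed, the sketch below uses Python's standard html.parser module to pull name/content pairs out of the meta tags of an HTML document; the sample markup and field names are invented for the example.

```python
from html.parser import HTMLParser

class MetaTagExtractor(HTMLParser):
    """Collects name/content pairs from <meta> tags in an HTML page."""
    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attrs = dict(attrs)
            name = attrs.get("name")
            if name and "content" in attrs:
                self.meta[name] = attrs["content"]

# Hypothetical page markup used purely for illustration.
html_doc = """
<html><head>
  <meta name="description" content="An introduction to metadata">
  <meta name="keywords" content="metadata, cataloging, description">
  <meta name="generator" content="ExampleCMS 2.0">
</head><body>...</body></html>
"""

parser = MetaTagExtractor()
parser.feed(html_doc)
print(parser.meta)  # {'description': ..., 'keywords': ..., 'generator': ...}
```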
A CD may include metadata providing information about the musicians and songwriters whose work appears on the disc. A principal purpose of metadata is to help users discover resources. Metadata helps to organize electronic resources, provide digital identification, and support the archiving and preservation of resources. Metadata assists users in resource discovery by "allowing resources to be found by relevant criteria, identifying resources, bringing similar resources together, distinguishing dissimilar resources, giving location information." Metadata of telecommunication activities, including Internet traffic, is widely collected by various national governmental organizations. This data can be used for mass surveillance. In many countries, the metadata relating to emails, telephone calls, web pages, video traffic, IP connections and cell phone locations is stored by government organizations. Metadata means "data about data". Although the "meta" prefix means "after" or "beyond", it is used to mean "about" in epistemology.
Metadata is defined as the data providing information about one or more aspects of the data. Some examples include: the means of creation of the data; the purpose of the data; the time and date of creation; the creator or author of the data; the location on a computer network where the data was created; the standards used; the file size; data quality; the source of the data; and the process used to create the data. For example, a digital image may include metadata that describes how large the picture is, the color depth, the image resolution, when the image was created, the shutter speed, and other data. A text document's metadata may contain information about how long the document is, who the author is, when the document was written, and a short summary of the document. Metadata within web pages can contain descriptions of page content, as well as key words linked to the content; these links are called "metatags", which were used as the primary factor in determining order for a web search until the late 1990s. The reliance on metatags in web searches decreased in the late 1990s because of "keyword stuffing".
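A hedged sketch of reading this kind of embedded image metadata in Python, assuming the third-party Pillow library is installed and that the file actually carries EXIF data (the file name here is a placeholder):

```python
from PIL import Image, ExifTags  # assumes the Pillow package is installed

def describe_image(path):
    """Print basic technical metadata embedded in an image file."""
    with Image.open(path) as img:
        print("Dimensions:", img.size)   # (width, height) in pixels
        print("Color mode:", img.mode)   # e.g. 'RGB', 'L'
        exif = img.getexif()             # EXIF block; empty if none is present
        for tag_id, value in exif.items():
            tag = ExifTags.TAGS.get(tag_id, tag_id)
            print(f"{tag}: {value}")     # e.g. DateTime, ExposureTime, Model

describe_image("photo.jpg")  # 'photo.jpg' is a placeholder path
```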
Metatags were being misused to trick search engines into thinking some websites had more relevance in the search than they did. Metadata can be stored and managed in a database called a metadata registry or metadata repository. However, without context and a point of reference, it might be impossible to identify metadata just by looking at it. For example, a database containing several numbers, all 13 digits long, could by itself be the results of calculations or a list of numbers to plug into an equation; without any other context, the numbers themselves can be perceived as the data. But given the context that this database is a log of a book collection, those 13-digit numbers may now be identified as ISBNs, information that refers to the book but is not itself the information within the book. The term "metadata" was coined in 1968 by Philip Bagley, in his book "Extension of Programming Language Concepts", where it is clear that he uses the term in the ISO/IEC 11179 "traditional" sense of "structural metadata", i.e. "data about the containers of data".
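The ISBN example can be made concrete: an ISBN-13 carries a check digit (digits weighted alternately 1 and 3, with the weighted sum divisible by 10), so a small validity test can hint at whether a bare 13-digit number plausibly carries that meaning, even though only the surrounding context establishes that it really is an ISBN. A minimal sketch:

```python
def looks_like_isbn13(number: str) -> bool:
    """Check whether a 13-digit string satisfies the ISBN-13 checksum."""
    digits = [int(c) for c in number if c.isdigit()]
    if len(digits) != 13:
        return False
    # ISBN-13 check: digits are weighted 1, 3, 1, 3, ... and the total must be a multiple of 10.
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return total % 10 == 0

print(looks_like_isbn13("9780306406157"))  # True: a well-known example ISBN
print(looks_like_isbn13("9780306406158"))  # False: last digit altered
```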
KDE Software Compilation
The KDE Software Compilation was an umbrella term for the desktop environment plus a range of included applications produced by KDE. From its 1.0 release in July 1998 until the release of version 4.4 in February 2010, the Software Compilation was known simply as KDE, which stood for K Desktop Environment until the rebrand. The name KDE SC was used for all releases from 4.4 onward until the final release, 4.14, in July 2014. It consisted of the KDE Plasma 4 desktop and those KDE applications whose development teams chose to follow the Software Compilation's release schedule. After that, the KDE SC was split into three separate product entities: KDE Plasma, KDE Frameworks and KDE Applications, each with its own independent release schedule. KDE was founded in 1996 by Matthias Ettrich, a student at the Eberhard Karls University of Tübingen. At the time, he was troubled by certain aspects of the Unix desktop. Among his qualms was that none of the applications looked, felt, or worked alike; he proposed the formation of not merely a set of applications but a desktop environment in which users could expect things to look and work consistently.
He wanted to make this desktop easy to use. His initial Usenet post spurred a lot of interest, and the KDE project was born. Ettrich chose to use Trolltech's Qt framework for the KDE project. Other programmers started developing KDE/Qt applications, and by early 1997 a few applications were being released. On 12 July 1998, K Desktop Environment 1.0 was released. In November 1998, the Qt toolkit was dual-licensed under the free/open source Q Public License and a proprietary license for proprietary software developers. Debate continued about compatibility with the GNU General Public License, so in September 2000 Trolltech made the Unix version of the Qt libraries available under the GPL, in addition to the QPL. Trolltech continued to require licenses for developing proprietary software with Qt. The core libraries of KDE are collectively licensed under the GNU LGPL, but the only way for proprietary software to make use of them was to be developed under the terms of the Qt proprietary license. Beginning 23 October 2000, the second series of releases, K Desktop Environment 2, introduced significant technological improvements.
These included DCOP, KIO, KParts and KHTML. The third series was much larger than previous series, consisting of six major releases starting on 3 April 2002. The API changes between K Desktop Environment 2 and K Desktop Environment 3 were comparatively minor, meaning that K Desktop Environment 3 can be seen as a continuation of the K Desktop Environment 2 series. All releases of K Desktop Environment 3 were built upon Qt 3, which was only released under the GPL for Linux and Unix-like operating systems, including Mac OS X; it has been marked stable on Mac OS X since 2008. Unlike KDE SC 4, however, it requires an X11 server to operate. In 2002, members of the KDE on Cygwin project began porting the GPL-licensed Qt/X11 code base to Windows. KDE Software Compilation 4, first released on 11 January 2008, is based on Qt 4, which is released under the GPL for Windows and Mac OS X. Therefore, KDE SC 4 applications can be compiled and run natively on these operating systems as well. KDE Software Compilation 4 on Mac OS X is considered beta, while on Windows it is not in a final state, so applications can be unsuitable for day-to-day use.
KDE SC 4 includes many technical changes. The centerpiece is a redesigned desktop and panels collectively called Plasma, which replaces Kicker, KDesktop and SuperKaramba by integrating their functionality into one piece of technology. There are a number of new frameworks, including Phonon, Solid and Decibel. Also featured is a metadata and search framework, incorporating Strigi as a full-text file indexing service and NEPOMUK with KDE integration. Starting with Qt 4.5, Qt was made available under the LGPL version 2.1, a major step for KDE adoption in corporate and proprietary environments, as the LGPL permits proprietary applications to link to libraries licensed under it. As of August 2014, KDE no longer provides synchronized releases of the entire software compilation. Major changes in the successor products include a move from Qt 4 to Qt 5, support for the next-generation display server protocol Wayland, support for the next-generation rendering API Vulkan, and modularization of the KDE core libraries. Initial releases of Frameworks 5 and Plasma 5 were made available in July 2014.
KDE SC releases are made to the KDE FTP server in the form of source code with configure scripts, which are compiled by operating system vendors and integrated with the rest of their systems before distribution. Most vendors use only stable and tested versions.
K Desktop Environment 1
K Desktop Environment 1 was the inaugural series of releases of the K Desktop Environment. There were two major releases in this series. Development started right after Matthias Ettrich's announcement on 14 October 1996 to found the Kool Desktop Environment. The word Kool was dropped shortly afterward and the name became K Desktop Environment. In the beginning, all components were released to the developer community separately, without any coordinated timeframe for the overall project. The project's first communication took place via a mailing list hosted at the University of Tübingen. The first coordinated release was Beta 1 on 20 October 1997, exactly one year after the original announcement. Three additional betas followed on 23 November 1997, 1 February 1998 and 19 April 1998. On 12 July 1998 the finished version 1.0 of K Desktop Environment was released. This version received mixed reception. Many criticized the use of the Qt software framework, then under the Qt Free Edition License, which was claimed not to be compatible with free software, and advised the use of Motif or LessTif instead.
Despite that criticism, KDE was well received by many users and made its way into the first Linux distributions. An update, K Desktop Environment 1.1, included many small improvements. It included a new set of icons and textures. Among this overhauled artwork was a new KDE logo by Torsten Rahn, consisting of the letter K in front of a gear, which is used in revised form to this day. Some components received more far-reaching updates, such as the Konqueror predecessor kfm, the application launcher kpanel, and the KWin predecessor kwm. Newly introduced were, for example, kab, a software library for address management, and a rewrite of KMail called kmail2, installed as an alpha version in parallel to the classic KMail. Kmail2 never left the alpha state, and its development was ended in favor of updating the classic KMail. K Desktop Environment 1.1 was well received among critics. At the same time Trolltech prepared version 2.0 of Qt, released as a beta on 28 January 1999. No bigger upgrades for KDE 1 based on Qt 1 were developed; instead only bugfixes were released: version 1.1.1 on 3 May 1999 and version 1.1.2 on 13 September 1999.
A more profound upgrade, along with a port to Qt 2, was in development as K Desktop Environment 2. To celebrate KDE's 20th birthday, KDE and Fedora contributor Helio Chissini de Castro re-released version 1.1.2 on 14 October 2016. That re-release incorporates several changes required for compatibility with modern Linux variants. Work on that project started one month earlier at a conference for Qt developers in Berlin, where Castro showcased Qt 1.45 compiling on a modern Linux system.
A computing platform or digital platform is the environment in which a piece of software is executed. It may be the hardware or the operating system, even a web browser and its associated application programming interfaces, or other underlying software, as long as the program code is executed with it. Computing platforms have different abstraction levels, including a computer architecture, an OS, or runtime libraries. A computing platform is the stage on which computer programs can run. A platform can be seen as a constraint on the software development process, in that different platforms provide different functionality and restrictions. For example, an OS may be a platform that abstracts the underlying differences in hardware and provides a generic command for saving files or accessing the network. Platforms may include: hardware alone, in the case of small embedded systems, which can access hardware directly without an OS; a browser, in the case of web-based software, where the browser itself runs on a hardware-plus-OS platform but this is not relevant to software running within the browser;
an application, such as a spreadsheet or word processor, which hosts software written in an application-specific scripting language, such as an Excel macro (this can be extended to writing fully fledged applications with the Microsoft Office suite as a platform); software frameworks; cloud computing and Platform as a Service, which extend the idea of a software framework by allowing application developers to build software out of components that are hosted not by the developer but by the provider, with internet communication linking them together (the social networking sites Twitter and Facebook are considered development platforms); a virtual machine such as the Java virtual machine or the .NET CLR, where applications are compiled into a format similar to machine code, known as bytecode, which is then executed by the VM; and a virtualized version of a complete system, including virtualized hardware, OS and storage, which allows, for instance, a typical Windows program to run on other host systems. Some architectures have multiple layers, with each layer acting as a platform to the one above it.
In general, a component only has to be adapted to the layer beneath it. For instance, a Java program has to be written to use the Java virtual machine and associated libraries as a platform, but does not have to be adapted to run on the Windows, Linux or Macintosh OS platforms. However, the JVM, the layer beneath the application, does have to be built separately for each OS.

Desktop and server operating system examples include AmigaOS and AmigaOS 4; FreeBSD, NetBSD and OpenBSD; IBM i; Linux; Microsoft Windows; OpenVMS; Classic Mac OS; macOS; OS/2; Solaris; Tru64 UNIX; VM; QNX; and z/OS. Mobile operating system examples include Android, Bada, BlackBerry OS, Firefox OS, iOS, Embedded Linux, Palm OS, Symbian, Tizen, WebOS, LuneOS, Windows Mobile and Windows Phone.

Software framework examples include the Binary Runtime Environment for Wireless, Cocoa, Cocoa Touch, the Common Language Infrastructure (Mono, .NET Framework, Silverlight), Flash and AIR, GNU, the Java platform (Java ME, Java SE, Java EE, JavaFX, JavaFX Mobile), LiveCode, Microsoft XNA, Mozilla Prism, XUL and XULRunner, the Open Web Platform, Oracle Database, Qt, SAP NetWeaver, Shockwave, Smartface, the Universal Windows Platform, the Windows Runtime and Vexi.

Hardware examples, ordered from more common types to less common types: commodity computing platforms such as Wintel, that is, Intel x86 or compatible personal computer hardware with the Windows operating system; Macintosh, custom Apple Inc. hardware with the Classic Mac OS and macOS operating systems, originally 68k-based, then PowerPC-based, now migrated to x86; ARM architecture based mobile devices such as iPhone smartphones and iPad tablet computers running iOS from Apple; Gumstix or Raspberry Pi full-function miniature computers with Linux; Newton devices running the Newton OS from Apple; x86 with Unix-like systems such as Linux or BSD variants; CP/M computers based on the S-100 bus, maybe the earliest microcomputer platform; video game consoles of any variety; the 3DO Interactive Multiplayer, licensed to manufacturers; the Apple Pippin, a multimedia player platform for video game console development; RISC processor based machines running Unix variants; SPARC architecture computers running the Solaris or illumos operating systems; DEC Alpha clusters running OpenVMS or Tru64 UNIX; midrange computers with their custom operating systems, such as IBM OS/400; mainframe computers with their custom operating systems, such as IBM z/OS; and supercomputer architectures. Related topics include cross-platform software, platform virtualization and the third platform.
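As a small illustration of platform layering, the sketch below uses Python (itself a cross-platform runtime in the same spirit as the JVM) to report which OS and architecture the interpreter is running on, while writing a file through the same generic API on any of them; only the interpreter, the layer beneath the program, differs per platform. The file name is a placeholder.

```python
import platform
import sys
from pathlib import Path

# The same program text runs unchanged on Windows, Linux or macOS;
# the interpreter underneath is built separately for each OS.
print("Operating system :", platform.system())   # e.g. 'Linux', 'Windows', 'Darwin'
print("Architecture     :", platform.machine())  # e.g. 'x86_64', 'arm64'
print("Python runtime   :", sys.version.split()[0])

# A generic "save a file" request; the platform beneath translates it
# into its own file-system operations.
out = Path("example.txt")  # placeholder file name
out.write_text("hello, platform\n")
print("Wrote", out.resolve())
```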
MusicBrainz is a project that aims to create an open data music database, similar to the freedb project. MusicBrainz was founded in response to the restrictions placed on the Compact Disc Database (CDDB), a database for software applications to look up audio CD information on the Internet. MusicBrainz has expanded its goals to reach beyond a compact disc metadata storehouse to become a structured open online database for music. MusicBrainz captures information about artists, their recorded works, and the relationships between them. Recorded works entries capture at a minimum the album title, track titles and the length of each track; these entries are maintained by volunteer editors. Recorded works can also store information about the release date and country, the CD ID, cover art, acoustic fingerprint, free-form annotation text and other metadata. As of 21 September 2018, MusicBrainz contained information about 1.4 million artists, 2 million releases and 19 million recordings. End-users can use software that communicates with MusicBrainz to add metadata tags to their digital media files, such as FLAC, MP3, Ogg Vorbis or AAC.
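A minimal sketch of how a tagging tool might talk to the MusicBrainz web service (version 2 of the API, which can return JSON), using only the Python standard library; the search term and the User-Agent string are placeholders, and the service asks clients to identify themselves and rate-limit their requests.

```python
import json
import urllib.parse
import urllib.request

def search_artist(name):
    """Query the MusicBrainz /ws/2 artist search endpoint and return parsed JSON."""
    query = urllib.parse.urlencode({"query": name, "fmt": "json", "limit": 3})
    url = "https://musicbrainz.org/ws/2/artist/?" + query
    # MusicBrainz asks clients to send a meaningful User-Agent; this one is a placeholder.
    req = urllib.request.Request(
        url, headers={"User-Agent": "ExampleTagger/0.1 (contact@example.org)"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

result = search_artist("Nirvana")
for artist in result.get("artists", []):
    # Each entry carries a stable MusicBrainz identifier (MBID) plus descriptive metadata.
    print(artist["id"], "-", artist["name"])
```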
MusicBrainz allows contributors to upload cover art images of releases to the database, known as the Cover Art Archive (CAA). The Internet Archive provides the bandwidth and legal protection for hosting the images, while MusicBrainz stores metadata and provides public access through the web and via an API for third parties to use; as with other contributions, the MusicBrainz community is in charge of maintaining and reviewing the data. Cover art is also provided for items on sale at Amazon.com and some other online resources, but the CAA is now preferred because it gives the community more control and flexibility for managing the images. Besides collecting metadata about music, MusicBrainz allows looking up recordings by their acoustic fingerprint. A separate application, such as MusicBrainz Picard, must be used for this. In 2000, MusicBrainz started using Relatable's patented TRM for acoustic fingerprint matching; this feature allowed the database to grow quickly. However, by 2005 TRM was showing scalability issues as the number of tracks in the database reached into the millions.
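A sketch of fetching the cover art listing from the Cover Art Archive, assuming a release MBID is already known (the one below is a placeholder); the archive exposes a simple JSON listing per release.

```python
import json
import urllib.request

def list_cover_art(release_mbid):
    """Return the Cover Art Archive image listing for a MusicBrainz release."""
    url = f"https://coverartarchive.org/release/{release_mbid}"
    req = urllib.request.Request(
        url, headers={"User-Agent": "ExampleTagger/0.1 (contact@example.org)"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # Each entry describes one uploaded image: its types (front, back, ...) and URL.
    return [(img.get("types"), img.get("image")) for img in data.get("images", [])]

# Placeholder MBID; a real one would come from a MusicBrainz lookup.
release_mbid = "00000000-0000-0000-0000-000000000000"
for types, image_url in list_cover_art(release_mbid):
    print(types, image_url)
```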
This issue was resolved in May 2006 when MusicBrainz partnered with MusicIP, replacing TRM with MusicDNS. TRMs were phased out and replaced by MusicDNS in November 2008. In October 2009 MusicIP was acquired by AmpliFIND; some time after the acquisition, the MusicDNS service began having intermittent problems. Since the future of the free identification service was uncertain, a replacement for it was sought. The Chromaprint acoustic fingerprinting algorithm, the basis for the AcoustID identification service, was started in February 2010 by long-time MusicBrainz contributor Lukáš Lalinský. While AcoustID and Chromaprint are not MusicBrainz projects, they are closely tied to each other, and both are open source. Chromaprint works by analyzing the first two minutes of a track, detecting the strength in each of 12 pitch classes and storing this information 8 times per second. Additional post-processing is then applied to compress the fingerprint while retaining patterns. The AcoustID search server searches the database of fingerprints by similarity and returns the AcoustID identifier along with MusicBrainz recording identifiers if known.
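For the lookup side, a hedged sketch using the third-party pyacoustid bindings (package "pyacoustid", module "acoustid"), which wrap Chromaprint and the AcoustID web service; an AcoustID application API key and a local audio file are assumed, and both values below are placeholders.

```python
import acoustid  # third-party 'pyacoustid' package; wraps Chromaprint + the AcoustID service

API_KEY = "your-acoustid-app-key"  # placeholder; obtained by registering an application
AUDIO_FILE = "track.mp3"           # placeholder path to a local audio file

# acoustid.match() fingerprints the file with Chromaprint, queries AcoustID,
# and yields matches with their linked MusicBrainz recording identifiers.
for score, recording_id, title, artist in acoustid.match(API_KEY, AUDIO_FILE):
    print(f"{score:.2f}  {artist} - {title}  (MusicBrainz recording {recording_id})")
```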
Since 2003, MusicBrainz's core data have been in the public domain, and additional content, including moderation data, is placed under the Creative Commons CC-BY-NC-SA-2.0 license. The relational database management system is PostgreSQL, and the server software is covered by the GNU General Public License. The MusicBrainz client software library, libmusicbrainz, is licensed under the GNU Lesser General Public License, which allows use of the code by proprietary software products. In December 2004, the MusicBrainz project was turned over to the MetaBrainz Foundation, a non-profit group, by its creator Robert Kaye. On 20 January 2006, the first commercial venture to use MusicBrainz data was announced: the Barcelona, Spain-based Linkara, in its Linkara Música service. On 28 June 2007, the BBC announced that it had licensed MusicBrainz's live data feed to augment its music web pages; the BBC online music editors would also join the MusicBrainz community to contribute their knowledge to the database. On 28 July 2008, the beta of the new BBC Music site was launched, which publishes a page for each MusicBrainz artist.
Applications that use MusicBrainz data include Amarok (KDE audio player), Banshee (multi-platform audio player), Beets (automatic CLI music tagger/organiser for Unix-like systems), Clementine (multi-platform audio player), CDex (Microsoft Windows CD ripper), Demlo (a dynamic and extensible CLI music manager), iEatBrainz (Mac OS X, deprecated), the foo_musicbrainz component for the foobar2000 music library/audio player, Jaikoz (Java mass tag editor), Max (Mac OS X CD ripper and audio transcoder), Mp3tag (Windows metadata editor and music organizer), MusicBrainz Picard (cross-platform album-oriented tag editor), MusicBrainz Tagger (deprecated Microsoft Windows tag editor), puddletag (a PyQt tag editor under the GPLv3), the Rhythmbox music player (an audio player for Unix-like systems), Sound Juicer (GNOME CD ripper) and Zortam Mp3 Media Studio (Windows music organizer and ID3 tag editor). Freedb clients can access MusicBrainz data through the freedb protocol by using the MusicBrainz to FreeDB gateway service, mb2freedb. See also the list of online music databases and "Making Metadata: The Case of MusicBrainz".
GNU General Public License
The GNU General Public License is a widely used free software license, which guarantees end users the freedom to run, study and modify the software. The license was written by Richard Stallman of the Free Software Foundation for the GNU Project, and grants the recipients of a computer program the rights of the Free Software Definition. The GPL is a copyleft license, which means that derivative work can only be distributed under the same license terms. This is in distinction to permissive free software licenses, of which the BSD licenses and the MIT License are widely used examples. The GPL was the first copyleft license for general use. The GPL license family has been one of the most popular software licenses in the free and open-source software domain. Prominent free-software programs licensed under the GPL include the Linux kernel and the GNU Compiler Collection. David A. Wheeler argues that the copyleft provided by the GPL was crucial to the success of Linux-based systems, giving the programmers who contributed to the kernel the assurance that their work would benefit the whole world and remain free, rather than being exploited by software companies that would not have to give anything back to the community.
In 2007, the third version of the license (GPLv3) was released to address some perceived problems with the second version (GPLv2) that were discovered during its long-time usage. To keep the license up to date, it includes an optional "any later version" clause, allowing users to choose between the original terms or the terms in new versions as updated by the FSF; developers can omit the clause when licensing their software. The GPL was written by Richard Stallman in 1989, for use with programs released as part of the GNU Project. The original GPL was based on a unification of similar licenses used for early versions of GNU Emacs, the GNU Debugger and the GNU C Compiler. These licenses contained provisions similar to those of the modern GPL, but were specific to each program, rendering them incompatible despite being the same license. Stallman's goal was to produce one license that could be used for any project, thus making it possible for many projects to share code. The second version of the license, version 2, was released in 1991. Over the following 15 years, members of the free software community became concerned over problems in the GPLv2 license that could let someone exploit GPL-licensed software in ways contrary to the license's intent.
These problems included tivoization, compatibility issues similar to those of the Affero General Public License, and patent deals between Microsoft and distributors of free and open-source software, which some viewed as an attempt to use patents as a weapon against the free software community. Version 3 was developed to attempt to address these concerns and was released on 29 June 2007. Version 1 of the GNU GPL, released on 25 February 1989, prevented the two main ways that software distributors restricted the freedoms that define free software. The first problem was that distributors may publish binary files only: executable, but not readable or modifiable by humans. To prevent this, GPLv1 stated that anyone copying and distributing copies or any portion of the program must also make the human-readable source code available under the same licensing terms. The second problem was that distributors might add restrictions, either to the license or by combining the software with other software that had other restrictions on distribution.
The union of two sets of restrictions would apply to the combined work, thus adding unacceptable restrictions. To prevent this, GPLv1 stated that modified versions, as a whole, had to be distributed under the terms in GPLv1. Therefore, software distributed under the terms of GPLv1 could be combined with software under more permissive terms, as this would not change the terms under which the whole could be distributed. However, software distributed under GPLv1 could not be combined with software distributed under a more restrictive license, as this would conflict with the requirement that the whole be distributable under the terms of GPLv1. According to Richard Stallman, the major change in GPLv2 was the "Liberty or Death" clause, as he calls it – Section 7; the section says that licensees may distribute a GPL-covered work only if they can satisfy all of the license's obligations, despite any other legal obligations they might have. In other words, the obligations of the license may not be severed due to conflicting obligations.
This provision is intended to discourage any party from using a patent infringement claim or other litigation to impair users' freedom under the license. By 1990, it was becoming apparent that a less restrictive license would be strategically useful for the C library and for software libraries that did the job of existing proprietary ones; when version 2 of the GPL was released in 1991, a companion library license, the GNU Library General Public License, was numbered version 2 to show that the two were complementary. The version numbers diverged in 1999 when version 2.1 of the LGPL was released, which renamed it the GNU Lesser General Public License to reflect its place in the philosophy. The wording "GPLv2 or any later version" is stated by most users of the license to allow upgrading to GPLv3. In late 2005, the Free Software Foundation announced work on version 3 of the GPL. On 16 January 2006, the first "discussion draft" of GPLv3 was published, and the public consultation began; the public consultation was planned for nine to fifteen months.
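In practice, the choice between "version 2 only" and "version 2 or any later version" is usually declared in each source file's license header. The sketch below shows what such a header might look like in a Python file, using SPDX license identifiers; the project name is a placeholder and the wording follows the commonly used "or later" boilerplate.

```python
# SPDX-License-Identifier: GPL-2.0-or-later
#
# This file is part of ExampleProject (a placeholder name).
#
# ExampleProject is free software: you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation, either version 2 of the License, or (at your
# option) any later version.
#
# A project that wants to stay on GPLv2 permanently would instead use the
# SPDX identifier "GPL-2.0-only" and omit the "or later" wording.

def main():
    print("license header example")

if __name__ == "__main__":
    main()
```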
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer, from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android accounted for up to 70% of use in 2017; according to third-quarter 2016 data, Android on smartphones was dominant with 87.5 percent of the market and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a 5.2 percent per-year decrease in market share, while other operating systems amounted to just 0.3 percent.
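As a small illustration of the application/OS boundary described above, the sketch below uses Python's os module, whose functions are thin wrappers around the underlying system calls for file I/O; the file name is a placeholder.

```python
import os

# Each of these calls crosses from application code into the operating system:
# the kernel performs the actual device and file-system work and returns control.
fd = os.open("demo.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)  # open()-style call
os.write(fd, b"written through OS system-call wrappers\n")              # write()
os.close(fd)                                                            # close()

# Reading the data back the same way:
fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 4096)
os.close(fd)
print(data.decode())
```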
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted repeatedly in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and cooperative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux, as well as non-Unix-like ones, such as AmigaOS, support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
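A toy illustration of the cooperative model in Python: each "task" is a generator that voluntarily yields control back to a tiny round-robin scheduler, so a task that never yields would starve the others, which is exactly the weakness that preemptive multitasking removes. This is a conceptual sketch, not how any particular operating system implements scheduling.

```python
from collections import deque

def task(name, steps):
    """A cooperative task: it does a little work, then yields control."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # voluntarily hand the CPU back to the scheduler

def run(tasks):
    """A minimal round-robin scheduler over cooperative tasks."""
    ready = deque(tasks)
    while ready:
        current = ready.popleft()
        try:
            next(current)          # resume the task until its next yield
            ready.append(current)  # still has work: put it at the back of the queue
        except StopIteration:
            pass                   # task finished

run([task("A", 3), task("B", 2), task("C", 1)])
```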
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS context, and in distributed and cloud computing, templating refers to creating a single virtual machine image as a guest operating system and then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines like PDAs with limited autonomy. They are able to operate with a limited number of resources, and they are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri