A computer network is a digital telecommunications network that allows nodes to share resources. In computer networks, computing devices exchange data with each other over connections between nodes; these data links are established over cable media such as copper wires or optical fiber, or over wireless media such as Wi-Fi. Network devices that originate, route and terminate the data are called network nodes. Nodes are identified by network addresses and can include hosts such as personal computers and servers, as well as networking hardware such as routers and switches. Two such devices can be said to be networked together when one device is able to exchange information with the other, whether or not they have a direct connection to each other. In most cases, application-specific communications protocols are layered over other, more general communications protocols; this formidable collection of information technology requires skilled network management to keep it all running reliably. Computer networks support an enormous number of applications and services, such as access to the World Wide Web, digital video, digital audio, shared use of application and storage servers and fax machines, and use of email and instant messaging applications, among many others.
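The layering of an application-specific protocol over a more general one can be sketched with Python's standard library. This is a minimal, illustrative example, not any real standard: the "application protocol" here is simply "echo the bytes upper-cased", carried over TCP/IP between two endpoints on one machine.

```python
# Sketch: an application-specific exchange layered over the general TCP/IP stack.
# The "protocol" (echo the bytes upper-cased) is purely illustrative.
import socket
import threading

def run_demo():
    srv = socket.socket()                # TCP socket: the general transport layer
    srv.bind(("127.0.0.1", 0))           # port 0: let the OS pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def serve():                         # one node: accept a connection, apply the app protocol
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024).upper())

    t = threading.Thread(target=serve)
    t.start()
    with socket.create_connection(("127.0.0.1", port)) as c:  # the other node
        c.sendall(b"hello")
        reply = c.recv(1024)
    t.join()
    srv.close()
    return reply

print(run_demo())  # b'HELLO'
```

The two endpoints never see the IP or Ethernet framing beneath them; the transport layer presents a byte stream, and the application-level convention rides on top of it.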
Computer networks differ in the transmission medium used to carry their signals, the communications protocols used to organize network traffic, the network's size, the traffic-control mechanism and the organizational intent. The best-known computer network is the Internet. The chronology of significant computer-network developments includes the following. In the late 1950s, early networks of computers included the U.S. military radar system Semi-Automatic Ground Environment (SAGE). In 1959, Anatolii Ivanovich Kitov proposed to the Central Committee of the Communist Party of the Soviet Union a detailed plan for the re-organisation of the control of the Soviet armed forces and of the Soviet economy on the basis of a network of computing centres, the OGAS. In 1960, the commercial airline reservation system Semi-Automatic Business Research Environment (SABRE) went online with two connected mainframes. In 1963, J. C. R. Licklider sent a memorandum to office colleagues discussing the concept of the "Intergalactic Computer Network", a computer network intended to allow general communications among computer users.
In 1964, researchers at Dartmouth College developed the Dartmouth Time Sharing System for distributed users of large computer systems. The same year, at the Massachusetts Institute of Technology, a research group supported by General Electric and Bell Labs used a computer to route and manage telephone connections. Throughout the 1960s, Paul Baran and Donald Davies independently developed the concept of packet switching to transfer information between computers over a network. Davies pioneered the implementation of the concept with the NPL network, a local area network at the National Physical Laboratory using a line speed of 768 kbit/s. In 1965, Western Electric introduced the first widely used telephone switch that implemented true computer control. In 1966, Thomas Marill and Lawrence G. Roberts published a paper on an experimental wide area network for computer time sharing. In 1969, the first four nodes of the ARPANET were connected using 50 kbit/s circuits between the University of California at Los Angeles, the Stanford Research Institute, the University of California at Santa Barbara, and the University of Utah.
Leonard Kleinrock carried out theoretical work to model the performance of packet-switched networks, which underpinned the development of the ARPANET. His theoretical work on hierarchical routing in the late 1970s with his student Farouk Kamoun remains critical to the operation of the Internet today. In 1972, commercial services using X.25 were deployed, later used as an underlying infrastructure for expanding TCP/IP networks. In 1973, the French CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. Also in 1973, Robert Metcalfe wrote a formal memo at Xerox PARC describing Ethernet, a networking system based on the Aloha network, developed in the 1960s by Norman Abramson and colleagues at the University of Hawaii. In July 1976, Robert Metcalfe and David Boggs published their paper "Ethernet: Distributed Packet Switching for Local Computer Networks" and collaborated on several patents received in 1977 and 1978.
In 1979, Robert Metcalfe pursued making Ethernet an open standard. Earlier, in 1976, John Murphy of Datapoint Corporation had created ARCNET, a token-passing network first used to share storage devices. In 1995, the transmission speed capacity for Ethernet increased from 10 Mbit/s to 100 Mbit/s. By 1998, Ethernet supported transmission speeds of 1 Gbit/s. Subsequently, higher speeds of up to 400 Gbit/s were added; the ability of Ethernet to scale is a contributing factor to its continued use. Computer networking may be considered a branch of electrical engineering, electronics engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of the related disciplines. A computer network facilitates interpersonal communications, allowing users to communicate efficiently via various means: email, instant messaging, online chat, video telephone calls and video conferencing. A network allows sharing of computing resources.
Users may access and use resources provided by devices on the network, such as printing a document on a shared network printer or using a shared storage device. A network also allows sharing of files and data.
Directory Opus is a popular file manager program written for the Amiga computer system in the early to mid-1990s. Development of the Amiga version ceased in 1997, but a rewritten version of Directory Opus is still being developed and sold for the Microsoft Windows operating system by GPSoftware. Directory Opus was developed, and is still written, by Australian programmer Jonathan Potter. Until 1994 it was published by the well-known Amiga software company Inovatronics; that year Potter joined with Greg Perry and the Australian-based GPSoftware to continue development of the product, and it has been published by GPSoftware since. Directory Opus has evolved since its first release in 1990 as a basic two-panel file manager; the interface has evolved in response to feedback from its users. Some of the features include: Single- or dual-panel exploring. Folder tree. Tabbed explorer panels. Ability to maintain date created/modified timestamps for both files and folders. Internal handling of ZIP, RAR, 7Zip and many other archive formats.
Internal FTP handling, including advanced FTP and SSH. Internal MTP handling for portable devices such as phones and cameras. Flat-file display, which can flatten a folder tree and hide the folders themselves. Powerful file renaming tools, with advanced regular expressions if needed. User-definable toolbars, menus and filetype groups. Preview panel and thumbnail previews. File collections, which act like virtual folders. The Amiga versions were: Opus 1 (January 1990), Opus 2 (February 1991), Opus 3 (1991-12-01), Opus 4 (1992-12-04), Opus 5 (1995-04-12), Opus 5.5 (1996-08-01), Opus Magellan (1997-05-17), Opus Magellan II (1998-11-01) and Opus Magellan II GPL (2014-05-11). Versions 1 and 2 were only available direct from the author. Versions 3 and 4 were published by Inovatronics. Versions since 5 have been published by GPSoftware; the full version of Magellan II is included for free with the AmiKit package. The Windows versions were: Opus 6 (2001-06-18), Opus 8 (2004-10-04), Opus 9 (2007-04-27), Opus 10 (2011-04-30), Opus 11 (2014-03-03) and Opus 12 (2016-09-05). All Windows versions were published by GPSoftware.
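The regular-expression renaming mentioned in the feature list above can be sketched in a few lines. This is an illustrative example only; the filenames and pattern are hypothetical and this is not Opus's own rename syntax.

```python
# Sketch of regex-based batch renaming; filenames and pattern are hypothetical.
import re

def rename_plan(names, pattern, replacement):
    """Return (old, new) pairs; a real tool would then call os.rename on each."""
    return [(name, re.sub(pattern, replacement, name)) for name in names]

plan = rename_plan(["IMG_001.jpg", "IMG_002.jpg"], r"^IMG_(\d+)", r"holiday_\1")
print(plan)  # [('IMG_001.jpg', 'holiday_001.jpg'), ('IMG_002.jpg', 'holiday_002.jpg')]
```

Computing the full rename plan before touching the filesystem, as here, lets a tool preview the result and detect collisions before committing.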
GPSoftware released the older Amiga Directory Opus 4 source code in 2000 as open source under the GNU General Public License. AmigaOS 4, AROS and MorphOS ports of this version were made available. Magellan II was released as open source under the AROS Public License in December 2012; the open-source 'Worker' file manager is inspired by the Directory Opus 4 series.
Text-based user interface
A text-based user interface (TUI), also called a textual user interface or terminal user interface, is a retronym coined sometime after the invention of graphical user interfaces (GUIs). TUIs display computer graphics in text mode. An advanced TUI may, like a GUI, accept mouse and other inputs. From a text application's point of view, a text screen can belong to one of three types. The first is a genuine text mode display, controlled by a video adapter or the central processor itself; this is the normal condition for a locally running application on various types of personal computers and mobile devices, and if not deterred by the operating system, a smart program may exploit the full power of a hardware text mode. The second is a text mode emulator, an example being the Win32 console on Microsoft Windows; this supports programs that expect a real text mode display, but they may run slower, and certain functions of an advanced text mode, such as uploading a custom font, become unavailable. The third is a remote text terminal; here the communication capabilities are reduced to a serial line or its emulation, with a few ioctls available as an out-of-band channel in cases such as Telnet and Secure Shell.
This is the worst case, because software restrictions hinder the use of the capabilities of a remote display device. Under Linux and other Unix-like systems, a program easily accommodates any of the three cases because the same interface controls the display and keyboard. Specialized programming libraries help to output the text in a way appropriate to the given display device and the interface to it. See below for a comparison with Windows. The American National Standards Institute standard ANSI X3.64 defines a standard set of escape sequences that can be used to drive terminals to create TUIs. Escape sequences may be supported in all three cases mentioned in the above section, allowing arbitrary cursor movements and color changes. However, not all terminals follow this standard, and many non-compatible but functionally equivalent sequences exist. On IBM Personal Computers and compatibles, the Basic Input/Output System (BIOS) and DOS system calls provide a way to write text on the screen, and the ANSI.SYS driver could process standard ANSI escape sequences.
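The X3.64 escape sequences described above can be built by hand. The sketch below shows two common sequences, an SGR color change and a CUP cursor move; the function names are illustrative, and the output only renders as color or movement on an ANSI-capable terminal.

```python
# Sketch: building ANSI X3.64 (ECMA-48) escape sequences by hand.
CSI = "\x1b["  # Control Sequence Introducer: ESC followed by '['

def colored(text, fg=31):
    """Wrap text in an SGR color sequence (31 = red foreground); 0m resets attributes."""
    return f"{CSI}{fg}m{text}{CSI}0m"

def move_cursor(row, col):
    """CUP sequence: position the cursor at a 1-based row and column."""
    return f"{CSI}{row};{col}H"

print(repr(colored("warning", fg=33)))  # '\x1b[33mwarning\x1b[0m' (yellow on ANSI terminals)
print(repr(move_cursor(5, 10)))         # '\x1b[5;10H'
```

A terminal that does not honor these sequences will print the raw bytes instead, which is exactly the incompatibility the text above describes.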
However, programmers soon learned that writing data directly to the screen buffer was far faster, simpler to program and less error-prone; this change in programming methods resulted in many DOS TUI programs. The Win32 console environment is notorious for its incomplete emulation of certain EGA/VGA text mode features, particularly random access to the text buffer, if the application runs in a window. On the other hand, programs running under Windows have much less control of the display and keyboard than Linux and DOS programs can have, because of the aforementioned Win32 console layer. Most of those programs used a blue background for the main screen with white or yellow characters, although they often allowed user color customization, and they used box-drawing characters from IBM's code page 437. The interface became influenced by graphical user interfaces, adding pull-down menus, overlapping windows, dialog boxes and GUI widgets operated by mnemonics or keyboard shortcuts. Soon mouse input was added, either at text resolution as a simple colored box or at graphical resolution, thanks to the ability of the Enhanced Graphics Adapter and Video Graphics Array display adapters to redefine the text character shapes in software, providing additional functions.
Some notable programs of this kind were Microsoft Word, DOS Shell, WordPerfect, Norton Commander, the Turbo Vision-based Borland Turbo Pascal and Turbo C, Lotus 1-2-3 and many others. Some of these interfaces survived into the Microsoft Windows 3.1x period in the early 1990s. For example, the Microsoft C 6.0 compiler, used to write true GUI programs under 16-bit Windows, still had its own TUI. Since its start, Microsoft Windows has included a console to display DOS software. Later versions added the Win32 console as a native interface for command-line and TUI programs; the console normally opens in window mode, but it can be switched to full-screen, true text mode and vice versa by pressing the Alt and Enter keys together. Full-screen mode is not available in Windows Vista and later, but may be restored with some workarounds. In Unix-like operating systems, TUIs are constructed using the terminal control library curses, or ncurses, a compatible library; the advent of the curses library with Berkeley Unix created a portable and stable API with which to write TUIs.
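A minimal curses program, using Python's binding to the library, looks like the sketch below. The layout and message are illustrative; because curses must take over a real terminal, the demo only launches when attached to one.

```python
# Minimal full-screen TUI sketch using the curses API; requires a real terminal.
import curses
import sys

def draw(stdscr):
    stdscr.clear()
    stdscr.addstr(0, 0, "A full-screen TUI via curses; press any key to exit")
    stdscr.refresh()
    stdscr.getkey()   # block until a keypress; wrapper() then restores the terminal

if sys.stdout.isatty():
    curses.wrapper(draw)  # initializes the screen, runs draw(), cleans up even on error
```

`curses.wrapper` handles the setup and teardown (cbreak mode, keypad handling, restoring the shell's terminal state) that such "visual" programs need, which is precisely the portability the curses API provided.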
The ability to talk to various text terminal types using the same interfaces led to more widespread use of "visual" Unix programs, which occupied the entire terminal screen instead of using a simple line interface. This can be seen in text editors such as vi, mail clients such as pine or mutt, system management tools such as SMIT, SAM and FreeBSD's Sysinstall, and web browsers such as lynx. Some applications, such as w3m and older versions of pine and vi, use the less capable termcap library, performing many of the functions associated with curses within the application. In addition, the rise in popularity of Linux brought many former DOS users to a Unix-like platform, which has fostered a DOS influence in many TUIs; the program minicom, for example, is modeled after the popular DOS program Telix. Some other TUI programs, such as the Twin desktop, were ported over. The Linux kernel supports virtual consoles accessed through a Ctrl-Alt-F key combination. Up to 64 consoles may be
The user interface (UI), in the industrial design field of human–computer interaction, is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, whilst the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls and process controls. The design considerations applicable when creating user interfaces are related to, or involve, such disciplines as ergonomics and psychology. The goal of user interface design is to produce a user interface which makes it easy and enjoyable to operate a machine in the way which produces the desired result; this means that the operator needs to provide minimal input to achieve the desired output, and that the machine minimizes undesired outputs to the human. User interfaces are composed of one or more layers, including a human–machine interface (HMI) that interfaces machines with physical input hardware such as keyboards and game pads, and output hardware such as computer monitors and printers.
A device that implements an HMI is called a human interface device (HID). Other terms for human–machine interfaces are man–machine interface (MMI) and, when the machine in question is a computer, human–computer interface. Additional UI layers may interact with one or more human senses, including: tactile UI, visual UI, auditory UI, olfactory UI, equilibrial UI and gustatory UI. Composite user interfaces (CUIs) are UIs that interact with two or more senses; the most common CUI is a graphical user interface, composed of a tactile UI and a visual UI capable of displaying graphics. When sound is added to a GUI, it becomes a multimedia user interface. There are three broad categories of CUI: standard, virtual and augmented. Standard composite user interfaces use standard human interface devices like keyboards and computer monitors. When the CUI blocks out the real world to create a virtual reality, the CUI is virtual and uses a virtual reality interface. When the CUI does not block out the real world and creates augmented reality, the CUI is augmented and uses an augmented reality interface.
When a UI interacts with all human senses, it is called a qualia interface, named after the theory of qualia. CUIs may be classified by how many senses they interact with as either an X-sense virtual reality interface or an X-sense augmented reality interface, where X is the number of senses interfaced with. For example, Smell-O-Vision is a three-sense standard CUI with visual display, sound and smells. The user interface or human–machine interface is the part of the machine that handles the human–machine interaction. Membrane switches, rubber keypads and touchscreens are examples of the physical part of the human–machine interface which we can see and touch. In complex systems, the human–machine interface is typically computerized; the term human–computer interface refers to this kind of system. In the context of computing, the term extends as well to the software dedicated to controlling the physical elements used for human–computer interaction. The engineering of human–machine interfaces is enhanced by considering ergonomics.
The corresponding disciplines are human factors engineering and usability engineering, which are part of systems engineering. Tools used for incorporating human factors in interface design are developed based on knowledge of computer science, such as computer graphics, operating systems and programming languages. Nowadays, we use the expression graphical user interface for the human–machine interface on computers, as nearly all of them now use graphics. There is a difference between a user interface and an operator interface or a human–machine interface. The term "user interface" is often used in the context of computer systems and electronic devices, where a network of equipment or computers is interlinked through an MES (manufacturing execution system) or host to display information. A human–machine interface is typically local to one machine or piece of equipment, and is the interface method between the human and the equipment or machine. An operator interface is the interface method by which multiple pieces of equipment that are linked by a host control system are accessed or controlled.
The system may expose several user interfaces to serve different kinds of users. For example, a computerized library database might provide two user interfaces, one for library patrons and the other for library personnel. The user interface of a mechanical system, a vehicle or an industrial installation is sometimes referred to as the human–machine interface. HMI is a modification of the original term MMI. In practice, the abbreviation MMI is still used, although some may claim that MMI now stands for something different. Another abbreviation is HCI, but it is more commonly used for human–computer interaction. Another term used is operator interface terminal. However it is abbreviated, the terms refer to the 'layer' that separates a human operating a machine from the machine itself. Without a clean and usable interface, humans would not be able to
An abbreviation is a shortened form of a word or phrase, usually consisting of a group of letters taken from the phrase. For example, the word abbreviation can itself be represented by the abbreviations abbr., abbrv. or abbrev. In strict analysis, abbreviations should not be confused with contractions, acronyms or initialisms, with which they share some semantic and phonetic functions, though all four are connected by the term "abbreviation" in loose parlance. An abbreviation is a shortening by any method; a contraction is made by omitting certain letters or syllables and bringing together the first and last letters or elements. A contraction is thus an abbreviation, but an abbreviation is not necessarily a contraction. Acronyms and initialisms are regarded as subsets of abbreviations; they are abbreviations that consist of the initial parts of words. Abbreviations have a long history, created so that spelling out a whole word could be avoided; this might be done to save time and space, or to provide secrecy. Shortened words were used, and initial letters were used to represent words, in specific applications.
In classical Greece and Rome, the reduction of words to single letters was common. In Roman inscriptions, "words were abbreviated by using the initial letter or letters of words, and most inscriptions have at least one abbreviation". However, "some could have more than one meaning, depending on their context". Abbreviations in English were used from its earliest days. Manuscripts of copies of the Old English poem Beowulf used many abbreviations, for example 7 or & for and, and y for since, so that "not much space is wasted". The standardisation of English in the 15th through 17th centuries included a growth in the use of abbreviations. At first, abbreviations were sometimes represented with various suspension signs, not only periods. For example, sequences like ‹er› were replaced with ‹ɔ›, as in ‹mastɔ› for master and ‹exacɔbate› for exacerbate. While this may seem trivial, it was symptomatic of an attempt by people manually reproducing academic texts to reduce the copying time. An example from the Oxford University Register, 1503: Mastɔ subwardenɔ y ɔmēde me to you.
And wherɔ y wrot to you the last wyke that y trouyde itt good to differrɔ thelectionɔ ovɔ to quīdenaɔ tinitatis y have be thougħt me synɔ that itt woll be thenɔ a bowte mydsomɔ. The Early Modern English period, between the 15th and 17th centuries, had abbreviations like ye for Þe, used for the word the: "hence, by misunderstanding, Ye Olde Tea Shoppe". During the growth of philological linguistic theory in academic Britain, abbreviating became fashionable. The use of abbreviations for the names of J. R. R. Tolkien and his friend C. S. Lewis, and other members of the Oxford literary group known as the Inklings, is sometimes cited as symptomatic of this. A century earlier, in Boston, a fad of abbreviation started that swept the United States, with the globally popular term OK credited as a remnant of its influence. After World War II, the British reduced the use of the full stop and other punctuation points after abbreviations in at least semi-formal writing, while the Americans more readily kept such use until more recently, and still maintain it more than Britons do.
The classic example, considered by their American counterparts quite curious, was the maintenance of the internal comma in a British organisation of secret agents called the "Special Operations, Executive"—"S.O.E."—which is not found in histories written after about 1960. Before that, many Britons were more scrupulous at maintaining the French form. In French, the period only follows an abbreviation if the last letter in the abbreviation is not the last letter of its antecedent: "M." is the abbreviation for "monsieur", while "Mme" is that for "madame". Like many other cross-channel linguistic acquisitions, many Britons took this up and followed this rule themselves, while the Americans adopted a simpler rule and applied it rigorously. Over the years, the lack of convention in some style guides has made it difficult to determine which two-word abbreviations should be abbreviated with periods and which should not. The U.S. media tend to use periods in two-word abbreviations like "U.S." (United States), but not in "PC" (personal computer) or "TV" (television).
Many British publications have done away with the use of periods in abbreviations. Minimization of punctuation in typewritten material became economically desirable in the 1960s and 1970s for the many users of carbon-film ribbons, since a period or comma consumed the same length of non-reusable, expensive ribbon as did a capital letter. Widespread use of electronic communication through mobile phones and the Internet during the 1990s allowed for a marked rise in colloquial abbreviation; this was due to the increasing popularity of textual communication services such as instant and text messaging. SMS, for instance, supports message lengths of at most 160 characters; this brevity gave rise to an informal abbreviation scheme sometimes called Textese, in which 10% or more of the words in a typical SMS message are abbreviated. More recently, Twitter, a popular social networking service, began driving abbreviation use with its 140-character message limit. In modern English, there are several conventions for abbreviations, and the choice may be confusing.
The only rule universally accepted is th
A computer program is a collection of instructions that performs a specific task when executed by a computer; a computer requires programs to function. A computer program is written by a computer programmer in a programming language. From the program in its human-readable form of source code, a compiler can derive machine code—a form consisting of instructions that the computer can directly execute. Alternatively, a computer program may be executed with the aid of an interpreter. A collection of computer programs and related data is referred to as software. Computer programs may be categorized along functional lines, such as application software and system software; the underlying method used for some calculation or manipulation is known as an algorithm. The earliest programmable machines preceded the invention of the digital computer. In 1801, Joseph-Marie Jacquard devised a loom that would weave a pattern by following a series of perforated cards. Patterns could be repeated by arranging the cards.
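The compile-then-execute pipeline described earlier in this section can be illustrated with Python itself, which first compiles source code to bytecode and then lets the interpreter execute that bytecode. The snippet and its source string are illustrative.

```python
# Sketch: source code -> compiled code object -> execution by the interpreter.
source = "6 * 7"                              # human-readable source code
code_obj = compile(source, "<demo>", "eval")  # translate source into bytecode
result = eval(code_obj)                       # the interpreter executes the bytecode
print(result)  # 42
```

A compiler for a language like C would instead emit machine code the CPU runs directly; Python's two-step arrangement is one common middle ground between pure compilation and pure interpretation.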
In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine. The names of the components of the calculating device were borrowed from the textile industry, in which yarn was brought from the store to be milled. The device would have had a "store"—memory to hold 1,000 numbers of 40 decimal digits each. Numbers from the "store" would have been transferred to the "mill" for processing, with a "thread" being the execution of programmed instructions by the device. It was programmed using two sets of perforated cards—one to direct the operation and the other for the input variables. However, after spending more than 17,000 pounds of the British government's money, the thousands of cogged wheels and gears never worked together. During a nine-month period in 1842–43, Ada Lovelace translated the memoir of Italian mathematician Luigi Menabrea covering the Analytical Engine. The translation contained Note G, which detailed a method for calculating Bernoulli numbers using the Analytical Engine.
This note is recognized by some historians as the world's first written computer program. In 1936, Alan Turing introduced the Universal Turing machine, a theoretical device that can model every computation that can be performed on a Turing-complete computing machine; it is a finite-state machine coupled with an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm; it starts in the initial state, goes through a sequence of steps, and halts when it encounters the halt state. This machine is considered by some to be the origin of the stored-program computer—used by John von Neumann for the "Electronic Computing Instrument" that now bears the von Neumann architecture name. The Z3 computer, invented by Konrad Zuse in Germany, was a programmable computer. A digital computer uses electricity as the calculating component; the Z3 contained 2,400 relays to create its circuits, which provided a floating-point, nine-instruction computer. Programming the Z3 was done through a specially designed keyboard and punched tape.
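The Turing machine described above can be sketched as a small simulator: a finite-state control reads a symbol, writes a symbol, moves the head, and stops at the halt state. The transition table shown (flip every bit) is an illustrative program of my own, not one from the source.

```python
# Minimal one-tape Turing machine simulator; "_" is the blank symbol.
def run_tm(tape, transitions, state="start", halt="halt", max_steps=10_000):
    cells = dict(enumerate(tape))  # sparse tape, extendable in both directions
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(pos, "_")
        state, write, move = transitions[(state, symbol)]  # finite-state control
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Illustrative program: flip every bit, halting at the first blank cell.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}
print(run_tm("1011", flip))  # 0100
```

The dictionary of `(state, symbol) -> (state, symbol, move)` entries is exactly the machine's finite transition table; everything else, including the unbounded tape, is bookkeeping.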
The Electronic Numerical Integrator And Computer (ENIAC) was a Turing-complete, general-purpose computer that used 17,468 vacuum tubes to create its circuits. At its core, it was a series of Pascalines wired together; its 40 units weighed 30 tons, occupied 1,800 square feet and consumed $650 per hour in electricity when idle. It had 20 base-10 accumulators. Programming the ENIAC took up to two months. Three function tables needed to be rolled to fixed function panels and connected to them using heavy black cables; each function table had 728 rotating knobs. Programming the ENIAC also involved setting some of its 3,000 switches. Debugging a program could take a week. The programmers of the ENIAC were women who were known collectively as the "ENIAC girls". The ENIAC featured parallel operations: different sets of accumulators could work simultaneously on different algorithms. It used punched card machines for input and output, and it was controlled with a clock signal. It ran for eight years, calculating hydrogen bomb parameters, predicting weather patterns and producing firing tables to aim artillery guns.
The Manchester Baby was a stored-program computer; programming transitioned away from setting dials. Only three bits of memory were available to store each instruction, so it was limited to eight instructions, and 32 switches were available for programming. The computer program was written on paper for reference; an instruction was represented by a configuration of on/off settings. After setting the configuration, an execute button was pressed, and this process was repeated. Later, computer programs were manually input via paper tape or punched cards; after the medium was loaded, the starting address was set via switches and the execute button pressed. In 1961, the Burroughs B5000 was built specifically to be programmed in the ALGOL 60 language; the hardware featured circuits to ease the compile phase. In 1964, IBM introduced the System/360, a line of six computers each having the same instruction set architecture; the Model 30 was the least expensive. Customers could retain the same application software across models, and each System/360 model featured multiprogramming.
With operating system support, multiple programs could be in memory at once; when one was waiting for input/output, another could compute. Each model could also emulate other computers. Customers could upgrade to the System/360 and ret
OpenVMS is a closed-source, proprietary computer operating system for use in general-purpose computing. It is the successor to the VMS operating system produced by Digital Equipment Corporation (DEC), first released in 1977 for its series of VAX-11 minicomputers; the VAX-11/780 was introduced at DEC's October 1977 annual shareholders' meeting. In the 1990s, it was used on the successor series of DEC Alpha systems. OpenVMS also runs on the HP Itanium-based families of computers, and as of 2019 a port to the x86-64 architecture is underway. The name VMS is derived from "virtual memory system", after one of its principal architectural features. OpenVMS is a multi-user, multiprocessing, virtual memory-based operating system designed for use in time-sharing, batch processing and transaction processing; when process priorities are suitably adjusted, it may approach real-time operating system characteristics. The system offers high availability through clustering and the ability to distribute the system over multiple physical machines.
This allows the system to be tolerant of disasters that may disable individual data-processing facilities. OpenVMS contains a graphical user interface, a feature not available on the original VAX-11/VMS system. Prior to the introduction of DEC VAXstation systems in the 1980s, the operating system was used and managed from text-based terminals, such as the VT100, which provide serial data communications and screen-oriented display features. Versions of VMS running on DEC Alpha workstations in the 1990s supported OpenGL and Accelerated Graphics Port graphics adapters. Enterprise-class environments select and use OpenVMS for various purposes, including mail servers, network services, manufacturing or transportation control and monitoring, critical applications and databases, and environments where system uptime and data access are critical. System uptimes of more than 10 years have been reported, and features such as rolling upgrades and clustering allow clustered applications and data to remain continuously accessible while operating system software and hardware maintenance and upgrades are performed, or even when a whole data center is destroyed.
Customers using OpenVMS include banks and financial services, healthcare providers, network information services and large-scale industrial manufacturers of various products. As of mid-2014, Hewlett-Packard licensed the development of OpenVMS to VMS Software Inc. (VSI), which will be responsible for developing OpenVMS, supporting existing hardware and providing a roadmap to clients; the company has a team of veteran developers who worked on the software during DEC's ownership. In April 1975, Digital Equipment Corporation embarked on a hardware project, code-named Star, to design a 32-bit virtual address extension to its PDP-11 computer line. A companion software project, code-named Starlet, was started in June 1975 to develop a new operating system, based on RSX-11M, for the Star family of processors; these two projects were integrated from the beginning. Gordon Bell was the VP lead on the architecture. Roger Gourd was the project lead for the Starlet program, with software engineers Dave Cutler, Dick Hustvedt and Peter Lipman acting as the technical project leaders, each having responsibility for a different area of the operating system.
The Star and Starlet projects culminated in the VAX-11/780 computer and the VAX-11/VMS operating system. The Starlet name survived in VMS as the name of several of the main system libraries, including STARLET.OLB and STARLET.MLB. Over the years the name of the product has changed. In 1980 it was renamed, with the version 2.0 release, to VAX/VMS. With the introduction of the MicroVAX range, such as the MicroVAX I, MicroVAX II and MicroVAX 2000, in the mid-to-late 1980s, DIGITAL released MicroVMS versions targeted at these platforms, which had much more limited memory and disk capacity. MicroVMS kits were released for VAX/VMS 4.4 to 4.7 on TK50 tapes and RX50 floppy disks, but were discontinued with VAX/VMS 5.0. In 1991, VMS was renamed OpenVMS as an indication of its support for "open systems" industry standards such as POSIX and Unix compatibility, and to drop the hardware connection, as the port to DIGITAL's 64-bit Alpha RISC processor was in process; the OpenVMS name first appeared after the version 5.4-2 release.
The VMS port to Alpha resulted in two separate source code libraries: one for the 32-bit VAX architecture and a second, new library for the 64-bit Alpha architecture. 1992 saw the release of the first version of OpenVMS for Alpha AXP systems, designated OpenVMS AXP V1.0. The decision to use the 1.x version numbering stream for the pre-production-quality releases of OpenVMS AXP caused confusion for some customers and was not repeated in the next platform port, to the Itanium. In 1994, with the release of OpenVMS version 6.1, feature parity between the VAX and Alpha variants was achieved; this was the so-called Functional Equivalence release in the marketing materials of the time. Some features were missing, however, e.g. based shareable images, which were implemented in later versions. Subsequent version numberings for the VAX and Alpha variants of the product have remaine