International Standard Serial Number
An International Standard Serial Number (ISSN) is an eight-digit serial number used to uniquely identify a serial publication, such as a magazine. The ISSN is helpful in distinguishing between serials with the same title. ISSNs are used in ordering, interlibrary loans, and other practices connected with serial literature. The ISSN system was first drafted as an International Organization for Standardization international standard in 1971 and published as ISO 3297 in 1975. ISO subcommittee TC 46/SC 9 is responsible for maintaining the standard. When a serial with the same content is published in more than one media type, a different ISSN is assigned to each media type. For example, many serials are published in both print and electronic media; the ISSN system refers to these types as print ISSN and electronic ISSN, respectively. Conversely, as defined in ISO 3297:2007, every serial in the ISSN system is also assigned a linking ISSN (ISSN-L), typically the same as the ISSN assigned to the serial in its first published medium, which links together all ISSNs assigned to the serial in every medium.
The format of the ISSN is an eight-digit code, divided by a hyphen into two four-digit numbers. As an integer number, it can be represented by the first seven digits; the last code digit, which may be 0-9 or an X, is a check digit. Formally, the general form of the ISSN code can be expressed as NNNN-NNNC, where N is a digit character in the set {0, 1, ..., 9} and C, the check digit, is in the set {0, 1, ..., 9, X}. The ISSN of the journal Hearing Research, for example, is 0378-5955, where the final 5 is the check digit, that is C = 5. To calculate the check digit, the following algorithm may be used: calculate the sum of the first seven digits of the ISSN, each multiplied by its position in the number counting from the right, that is by 8, 7, 6, 5, 4, 3 and 2, respectively: 0 ⋅ 8 + 3 ⋅ 7 + 7 ⋅ 6 + 8 ⋅ 5 + 5 ⋅ 4 + 9 ⋅ 3 + 5 ⋅ 2 = 0 + 21 + 42 + 40 + 20 + 27 + 10 = 160. The modulus 11 of this sum is then calculated; if the remainder is 0, the check digit is 0, otherwise the remainder is subtracted from 11 to give the check digit (an upper case X in the check digit position indicates a check digit of 10). Here 160 mod 11 = 6, so the check digit is 11 − 6 = 5. To confirm the check digit, calculate the sum of all eight digits of the ISSN, each multiplied by its position in the number counting from the right, with the check digit in position 1; the modulus 11 of this sum must be 0.
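These rules translate directly into a short program. The following Python sketch is illustrative only; the function names are invented for this example and are not part of ISO 3297 or of any standard library:

```python
def issn_check_digit(first_seven: str) -> str:
    """Compute the ISSN check digit from the first seven digits.

    Each digit is weighted by its position counting from the right of the
    full eight-digit code, i.e. weights 8 down to 2.
    """
    total = sum(int(d) * w for d, w in zip(first_seven, range(8, 1, -1)))
    remainder = total % 11
    check = 0 if remainder == 0 else 11 - remainder
    return "X" if check == 10 else str(check)


def is_valid_issn(issn: str) -> bool:
    """Validate a full ISSN such as '0378-5955' (the hyphen is optional)."""
    digits = issn.replace("-", "").upper()
    if len(digits) != 8:
        return False
    check_value = 10 if digits[-1] == "X" else int(digits[-1])
    total = sum(int(d) * w for d, w in zip(digits[:7], range(8, 1, -1))) + check_value
    return total % 11 == 0


print(issn_check_digit("0378595"))  # -> '5', matching Hearing Research, ISSN 0378-5955
print(is_valid_issn("0378-5955"))   # -> True
```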
There is an online ISSN checker that can validate an ISSN. ISSN codes are assigned by a network of ISSN National Centres located at national libraries and coordinated by the ISSN International Centre based in Paris. The International Centre is an intergovernmental organization created in 1974 through an agreement between UNESCO and the French government. It maintains a database of all ISSNs assigned worldwide, the ISDS Register, otherwise known as the ISSN Register. At the end of 2016, the ISSN Register contained records for 1,943,572 items. ISSN and ISBN codes are similar in concept; an ISBN might be assigned for particular issues of a serial, in addition to the ISSN code for the serial as a whole. An ISSN, unlike the ISBN code, is an anonymous identifier associated with a serial title, containing no information as to the publisher or its location. For this reason a new ISSN is assigned to a serial each time it undergoes a major title change. Since the ISSN applies to an entire serial, a new identifier, the Serial Item and Contribution Identifier (SICI), was built on top of it to allow references to specific volumes, articles, or other identifiable components.
Separate ISSNs are needed for serials in different media. Thus, the print and electronic media versions of a serial need separate ISSNs, and a CD-ROM version and a web version of a serial require different ISSNs, since two different media are involved. However, the same ISSN can be used for different file formats of the same online serial. This "media-oriented identification" of serials made sense in the 1970s. In the 1990s and onward, with personal computers, better screens and the Web, it makes sense to consider only content, independent of media. This "content-oriented identification" of serials was a repressed demand for a decade, but no ISSN update or initiative occurred. A natural extension of the ISSN, the unique identification of individual articles in serials, was the main application demanded. An alternative model for serial content arrived with the indecs Content Model and its application, the digital object identifier (DOI), an ISSN-independent initiative consolidated in the 2000s. Only in 2007 was the linking ISSN (ISSN-L) defined, in the revised standard ISO 3297:2007.
Workstation
A workstation is a special computer designed for technical or scientific applications. Intended to be used by one person at a time, workstations are commonly connected to a local area network and run multi-user operating systems. The term workstation has been used loosely to refer to everything from a mainframe computer terminal to a PC connected to a network, but the most common form refers to the class of hardware offered by several current and defunct companies such as Sun Microsystems, Silicon Graphics, Apollo Computer, DEC, HP, NeXT and IBM, which opened the door for the 3D graphics animation revolution of the late 1990s. Workstations offered higher performance than mainstream personal computers with respect to CPU, graphics, memory capacity and multitasking capability. Workstations were optimized for the visualization and manipulation of different types of complex data, such as 3D mechanical design, engineering simulation, rendering of images and mathematical plots. The form factor is typically that of a desktop computer: workstations consist of a high-resolution display, a keyboard and a mouse at a minimum, but may also offer multiple displays, graphics tablets, 3D mice, etc.
Workstations were the first segment of the computer market to present advanced accessories and collaboration tools. The increasing capabilities of mainstream PCs in the late 1990s blurred the lines somewhat with technical/scientific workstations. The workstation market previously employed proprietary hardware, which made workstations distinct from PCs; however, by the early 2000s this difference largely disappeared, as workstations now use commoditized hardware dominated by large PC vendors, such as Dell, Hewlett-Packard and Fujitsu, selling Microsoft Windows or Linux systems running on x86-64 processors. The first computer that might qualify as a "workstation" was the IBM 1620, a small scientific computer designed to be used interactively by a single person sitting at the console. It was introduced in 1960. One peculiar feature of the machine was that it lacked true adder circuitry: to perform addition, it required a memory-resident table of decimal addition rules, which saved on the cost of logic circuitry. The machine was code-named CADET and rented for $1000 a month.
In 1965, IBM introduced the IBM 1130 scientific computer, meant as the successor to the 1620. Both of these systems came with the ability to run programs written in Fortran and other languages. Both the 1620 and the 1130 were built into desk-sized cabinets. Both were available with add-on disk drives and both paper-tape and punched-card I/O. A console typewriter for direct interaction was standard on each. Early examples of workstations were dedicated minicomputers. A notable example was the PDP-8 from Digital Equipment Corporation, regarded as the first commercial minicomputer. The Lisp machines developed at MIT in the early 1970s pioneered some of the principles of the workstation computer, as they were high-performance, single-user systems intended for interactive use. Lisp machines were commercialized beginning in 1980 by companies like Symbolics, Lisp Machines Inc., Texas Instruments and Xerox. The first computer designed for a single user, with high-resolution graphics facilities, was the Xerox Alto, developed at Xerox PARC in 1973.
Other early workstations include the Terak 8510/a, Three Rivers PERQ and the Xerox Star. In the early 1980s, with the advent of 32-bit microprocessors such as the Motorola 68000, a number of new participants in this field appeared, including Apollo Computer and Sun Microsystems, who created Unix-based workstations based on this processor. Meanwhile, DARPA's VLSI Project created several spinoff graphics products as well, notably the SGI 3130 and Silicon Graphics' range of machines that followed. It was not uncommon to differentiate the target market for the products, with Sun and Apollo considered to be network workstations, while the SGI machines were graphics workstations. As RISC microprocessors became available in the mid-1980s, these were adopted by many workstation vendors. Workstations tended to be very expensive, typically several times the cost of a standard PC and sometimes costing as much as a new car; however, minicomputers sometimes cost as much as a house. The high expense came from using costlier components that ran faster than those found at the local computer store, as well as the inclusion of features not found in PCs of the time, such as high-speed networking and sophisticated graphics.
Workstation manufacturers tend to take a "balanced" approach to system design, making certain to avoid bottlenecks so that data can flow unimpeded between the many different subsystems within a computer. Additionally, given their more specialized nature, workstations tend to have higher profit margins than commodity-driven PCs. The systems that come out of workstation companies feature SCSI or Fibre Channel disk storage systems, high-end 3D accelerators, single or multiple 64-bit processors, large amounts of RAM and well-designed cooling. Additionally, the companies that make the products tend to have good repair/replacement plans. However, the line between workstation and PC is becoming blurred as the demand for fast computers and high-quality graphics has spread into the mainstream consumer market.
Mainframe computer
Mainframe computers, or mainframes, are computers used by large organizations for critical applications. They are larger and have more processing power than some other classes of computers: minicomputers, servers and personal computers. The term referred to the large cabinets called "main frames" that housed the central processing unit and main memory of early computers. The term was later used to distinguish high-end commercial machines from less powerful units. Most large-scale computer system architectures were established in the 1960s, but continue to evolve. Mainframe computers are often used as servers. Modern mainframe design is characterized less by raw computational speed and more by: redundant internal engineering resulting in high reliability and security; extensive input-output facilities with the ability to offload to separate engines; strict backward compatibility with older software; high hardware and computational utilization rates through virtualization, to support massive throughput; and hot-swapping of hardware, such as processors and memory.
Their high stability and reliability enable these machines to run uninterrupted for long periods of time, with mean time between failures measured in decades. Mainframes have high availability, one of the primary reasons for their longevity, since they are used in applications where downtime would be costly or catastrophic. The term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers; proper planning and implementation are required to realize these features. In addition, mainframes are more secure than other computer types: the NIST vulnerabilities database, US-CERT, rates traditional mainframes such as IBM Z, Unisys Dorado and Unisys Libra as among the most secure, with vulnerabilities in the low single digits as compared with thousands for Windows, UNIX and Linux. Software upgrades usually require setting up the operating system or portions thereof, and are non-disruptive only when using virtualizing facilities such as IBM z/OS and Parallel Sysplex, or Unisys XPCL, which support workload sharing so that one system can take over another's application while it is being refreshed.
In the late 1950s, mainframes had only a rudimentary interactive interface and used sets of punched cards, paper tape, or magnetic tape to transfer data and programs. They operated in batch mode to support back office functions such as payroll and customer billing, most of which were based on repeated tape-based sorting and merging operations followed by line printing to preprinted continuous stationery. When interactive user terminals were introduced, they were used almost exclusively for applications rather than program development. Typewriter and Teletype devices were common control consoles for system operators through the early 1970s, although ultimately supplanted by keyboard/display devices. By the early 1970s, many mainframes acquired interactive user terminals operating as timesharing computers, supporting hundreds of users along with batch processing. Users gained access through keyboard/typewriter terminals and specialized text terminal CRT displays with integral keyboards, or from personal computers equipped with terminal emulation software.
By the 1980s, many mainframes supported graphic display terminals and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the 1990s due to the advent of personal computers provided with GUIs. After 2000, modern mainframes partially or entirely phased out classic "green screen" and color display terminal access for end-users in favour of Web-style user interfaces. The infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes reduced data center energy costs for power and cooling, and reduced physical space requirements compared to server farms. Modern mainframes can run multiple different instances of operating systems at the same time; this technique of virtual machines allows applications to run as if they were on physically distinct computers. In this role, a single mainframe can replace higher-functioning hardware services available to conventional servers.
While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication. Mainframes can add or hot swap system capacity without disrupting system function, with specificity and granularity to a level of sophistication not available with most server solutions. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions and virtual machines. Many mainframe customers run two machines: one in their primary data center and one in their backup data center—fully active/active, or on standby—in case there is a catastrophe affecting the first building. Test, development and production workloads for applications and databases can run on a single machine, except for large demands where the capacity of one machine might be limiting. Such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages.
In practice many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD, or with shared, geographically dispersed storage provided by EMC
Operating system
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android accounted for up to 70% of use in 2017; according to third quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and an annual decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
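As a rough illustration of the cooperative model just described (and not of how 16-bit Windows or any real operating system actually implements scheduling), the following Python sketch uses generators as tasks that voluntarily yield control to a simple round-robin loop:

```python
from collections import deque

def task(name, steps):
    """A cooperative task: it must yield control voluntarily after each unit of work."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # hand control back to the scheduler

def run_cooperatively(tasks):
    """Round-robin over the tasks; a task that never yields would starve all the others."""
    ready = deque(tasks)
    while ready:
        current = ready.popleft()
        try:
            next(current)          # run the task until it yields again
            ready.append(current)  # re-queue it for another turn
        except StopIteration:
            pass                   # the task has finished

run_cooperatively([task("A", 3), task("B", 2)])
```

A task that never yields would monopolize this loop; avoiding exactly that failure mode is what the forced time slices of preemptive multitasking provide.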
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines like PDAs with less autonomy. They are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
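To make that last contrast concrete, here is a minimal, hypothetical sketch of an event-driven dispatcher that always runs the highest-priority ready task; the task names and priority values are invented for illustration, and a time-sharing scheduler would instead rotate between tasks on clock interrupts:

```python
import heapq

def dispatch(events):
    """A toy event-driven dispatcher: the highest-priority ready task always runs next
    (lower number = higher priority), rather than each task getting a fixed time slice."""
    ready = []
    for priority, name in events:
        heapq.heappush(ready, (priority, name))
    while ready:
        priority, name = heapq.heappop(ready)
        print(f"running {name} (priority {priority})")

dispatch([(3, "logging"), (1, "sensor interrupt"), (2, "display update")])
```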
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single address space, machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri
IBM PC DOS
IBM PC DOS is a discontinued operating system for the IBM Personal Computer, sold by IBM from the early 1980s into the 2000s. Before version 6.1, PC DOS was an IBM-branded version of MS-DOS; from version 6.1 on, PC DOS became IBM's independent product. The IBM task force assembled to develop the PC decided that critical components of the machine, including the operating system, would come from outside vendors. This radical break from the company tradition of in-house development was one of the key decisions that made the IBM PC an industry standard. At that time the private company Microsoft, founded five years earlier by Bill Gates, was selected for the operating system. IBM wanted Microsoft to retain ownership of whatever software it developed, and wanted nothing to do with helping Microsoft, other than making suggestions from afar. According to task force member Jack Sams: The reasons were internal. We had a terrible problem being sued by people claiming we had stolen their code; it could be horribly expensive for us to have our programmers look at code that belonged to someone else because they would come back and say we stole it and made all this money.
We had lost a series of suits on this, so we didn't want to have a product that was someone else's product worked on by IBM people. We went to Microsoft on the proposition. IBM first contacted Microsoft to look the company over in July 1980. Negotiations continued over the months that followed, and the paperwork was signed in early November. Although IBM expected that most customers would use PC DOS, the IBM PC also supported CP/M-86 (which became available six months after PC DOS) and the UCSD p-System operating systems. IBM's expectation proved correct: one survey found that 96.3% of PCs were ordered with the $40 PC DOS compared to 3.4% with the $240 CP/M-86. Microsoft first licensed, then purchased, 86-DOS from Seattle Computer Products; it was modified for the IBM PC by Microsoft employee Bob O'Rear with assistance from SCP employee Tim Paterson. O'Rear got 86-DOS to run on the prototype PC in February 1981. 86-DOS had to be converted from 8-inch to 5.25-inch floppy disks and integrated with the BIOS, which Microsoft was helping IBM to write.
IBM had more people writing requirements for the computer than Microsoft had writing code. O'Rear felt overwhelmed by the number of people he had to deal with at the ESD facility in Boca Raton, Florida. The first public mention of the operating system was in July 1981, when Byte discussed rumors of a forthcoming personal computer with "a CP/M-like DOS... to be called, simply, 'IBM Personal Computer DOS'." 86-DOS was rebranded IBM PC DOS 1.0 for its August 1981 release with the IBM PC. The initial version of DOS was based on CP/M-80 1.x, and most of its architecture, function calls and file-naming conventions were copied directly from the older OS. The most significant difference was the fact that it introduced a different file system, FAT12. Unlike all later DOS versions, the DATE and TIME commands were separate executables rather than part of COMMAND.COM. Single-sided 160 kilobyte 5.25" floppies were the only disk format supported. In late 1981 Paterson, now at Microsoft, began writing PC DOS 1.10. It debuted in May 1982 along with the Revision B IBM PC.
Support for the new double-sided drives was added. A number of bugs were fixed, error messages and prompts were made less cryptic; the DEBUG utility was now able to load files greater than 64k in size. A group of Microsoft programmers began work on PC DOS 2.0. Rewritten, DOS 2.0 added subdirectories and hard disk support for the new IBM XT, which debuted in March 1983. A new 9-sector format bumped the capacity of floppy disks to 360 kB; the Unix-inspired kernel featured file handles in place of the CP/M-derivative file control blocks and loadable device drivers could now be used for adding hardware beyond that which the IBM PC BIOS supported. BASIC and most of the utilities provided with DOS were upgraded as well. A major undertaking that took 10 months of work, DOS 2.0 was more than twice as big as DOS 1.x, occupying around 28k of RAM compared to the 12k of its predecessor. It would form the basis for all Microsoft consumer-oriented OSes until 2001, when Windows XP was released. In October 1983 DOS 2.1 debuted.
It added support for half-height floppy drives and the new IBM PCjr. In 1983, Compaq released the Compaq Portable, the first 100% IBM PC compatible, and licensed its own OEM version of DOS 1.10 from Microsoft. Other PC compatibles followed suit, most of which included hardware-specific DOS features, although some were generic. In August 1984, IBM introduced its next-generation machine, the PC/AT, and along with it came DOS 3.00. Despite jumping a whole version number, it again proved little more than an incremental upgrade, adding nothing more substantial than support for the AT's new 1.2 megabyte floppy disks. Planned networking capabilities in DOS 3.00 were judged too buggy to be usable, and Microsoft disabled them prior to the OS's release. In any case, IBM's original plans for the AT had been to equip it with a proper next-generation OS that would use its extended features, but this never materialized. PC DOS 3.1 fixed the bugs in DOS 3.00 and supported IBM's Network Adapter card on the IBM PC Network. PC DOS 3.2 added support for 3½-inch double-density 720 kB floppy disk drives, supporting the IBM PC Convertible, IBM's first co
CP/M
CP/M, originally standing for Control Program/Monitor and later Control Program for Microcomputers, is a mass-market operating system created in 1974 for Intel 8080/85-based microcomputers by Gary Kildall of Digital Research, Inc. Initially confined to single-tasking on 8-bit processors and no more than 64 kilobytes of memory, later versions of CP/M added multi-user variations and were migrated to 16-bit processors. The combination of CP/M and S-100 bus computers, loosely patterned on the MITS Altair, was an early standard in the microcomputer industry. This computer platform was used in business through the late 1970s and into the mid-1980s. CP/M increased the market size for both hardware and software by reducing the amount of programming required to install an application on a new manufacturer's computer. An important driver of software innovation was the advent of low-cost microcomputers running CP/M, as independent programmers and hackers bought them and shared their creations in user groups. CP/M was displaced by DOS soon after the 1981 introduction of the IBM PC.
A minimal 8-bit CP/M system would contain the following components: a computer terminal using the ASCII character set; an Intel 8080 or Zilog Z80 microprocessor; at least 16 kilobytes of RAM, beginning at address 0; a means to bootstrap the first sector of the diskette; and at least one floppy disk drive. (The NEC V20 and V30 processors support an 8080-emulation mode that can run 8-bit CP/M on a PC DOS/MS-DOS computer so equipped, though any PC can also run the 16-bit CP/M-86.) The only hardware system that CP/M, as sold by Digital Research, would support was the Intel 8080 Development System. Manufacturers of CP/M-compatible systems customized portions of the operating system for their own combination of installed memory, disk drives and console devices. CP/M would also run on systems based on the Zilog Z80 processor, since the Z80 was compatible with 8080 code. While the core of CP/M as distributed by Digital Research did not use any of the Z80-specific instructions, many Z80-based systems used Z80 code in the system-specific BIOS, and many applications were dedicated to Z80-based CP/M machines.
On most machines, the bootstrap was a minimal bootloader in ROM combined with some means of minimal bank switching or a means of injecting code on the bus. CP/M used the 7-bit ASCII set; the other 128 characters made possible by the 8-bit byte were not standardized. For example, one Kaypro used them for Greek characters, while Osborne machines used the 8th bit set to indicate an underlined character, and WordStar used the 8th bit as an end-of-word marker. International CP/M systems mostly used the ISO 646 norm for localized character sets, replacing certain ASCII characters with localized characters rather than adding them beyond the 7-bit boundary. In the 8-bit versions, while running, the CP/M operating system loaded into memory had three components: the Basic Input/Output System (BIOS), the Basic Disk Operating System (BDOS) and the Console Command Processor (CCP). The BIOS and BDOS were memory-resident, while the CCP was memory-resident unless overwritten by an application, in which case it was automatically reloaded after the application finished running.
A number of transient commands for standard utilities were provided. The transient commands resided in files with the extension .COM on disk. The BIOS directly controlled hardware components other than main memory. It contained functions such as character input and output and the reading and writing of disk sectors. The BDOS implemented the CP/M file system and some input/output abstractions on top of the BIOS. The CCP took user commands and either executed them directly or loaded and started an executable file of the given name. Third-party applications for CP/M were essentially transient commands. The BDOS, CCP and standard transient commands were the same in all installations of a particular revision of CP/M, but the BIOS portion was always adapted to the particular hardware. Adding memory to a computer, for example, meant that the CP/M system had to be reinstalled with an updated BIOS capable of addressing the additional memory. A utility was provided to patch the supplied BIOS, BDOS and CCP to allow them to be run from higher memory.
Once installed, the operating system was stored in reserved areas at the beginning of any disk which would be used to boot the system. On start-up, the bootloader would load the operating system from the disk in drive A:. By modern standards CP/M was primitive. With version 1.0 there was no provision for detecting a changed disk. If a user changed disks without manually rereading the disk directory, the system would write on the new disk using the old disk's directory information, ruining the data stored on the disk. From version 1.1 or 1.2 onwards, changing a disk and then trying to write to it before its directory was read would cause a fatal error to be signalled. This avoided overwriting the disk but required a reboot and the loss of the data that was to be written to the disk. The majority of the complexity in CP/M was isolated in the BDOS and, to a lesser extent, in the CCP and transient commands. This meant that by porting the limited number of simple routines in the BIOS to a particular hardware platform, the entire OS would work.
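That porting argument can be sketched in modern terms as a thin hardware-abstraction layer. The class and method names below are invented for illustration and are not actual CP/M interfaces (real CP/M was written in assembly and PL/M, not Python):

```python
class Bios:
    """Hardware-specific layer: only these routines change from machine to machine."""
    def console_out(self, ch: str) -> None:
        raise NotImplementedError
    def read_sector(self, track: int, sector: int) -> bytes:
        raise NotImplementedError

class SerialTerminalBios(Bios):
    """One hypothetical port of the BIOS to a particular machine."""
    def console_out(self, ch: str) -> None:
        print(ch, end="")      # stand-in for writing a character to the console device
    def read_sector(self, track: int, sector: int) -> bytes:
        return bytes(128)      # stand-in for reading a 128-byte CP/M disk sector

class Bdos:
    """Hardware-independent layer: it talks to the machine only through the BIOS
    interface, so porting the OS means reimplementing the BIOS routines alone."""
    def __init__(self, bios: Bios) -> None:
        self.bios = bios
    def print_string(self, text: str) -> None:
        for ch in text:
            self.bios.console_out(ch)

Bdos(SerialTerminalBios()).print_string("Hello from the BDOS layer\n")
```

Only the BIOS-like class has to be rewritten for a new machine; the BDOS-like layer above it is unchanged, which is the property that let CP/M spread across so many otherwise incompatible computers.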
Minicomputer
A minicomputer, or colloquially mini, is a class of smaller computers that was developed in the mid-1960s and sold for much less than mainframe and mid-size computers from IBM and its direct competitors. In a 1970 survey, The New York Times suggested a consensus definition of a minicomputer as a machine costing less than US$25,000, with an input-output device such as a teleprinter and at least four thousand words of memory, capable of running programs in a higher level language such as Fortran or BASIC. The class formed a distinct group with its own software architectures and operating systems. Minis were designed for control, human interaction and communication switching, as distinct from calculation and record keeping. Many were sold indirectly to original equipment manufacturers for final end-use application. During the two-decade lifetime of the minicomputer class, roughly 100 companies formed and only a half dozen remained. When single-chip CPU microprocessors appeared, beginning with the Intel 4004 in 1971, the term "minicomputer" came to mean a machine that lies in the middle range of the computing spectrum, in between the smallest mainframe computers and the microcomputers.
The term "minicomputer" is little used today. The term "minicomputer" developed in the 1960s to describe the smaller computers that became possible with the use of transistors and core memory technologies, minimal instructions sets and less expensive peripherals such as the ubiquitous Teletype Model 33 ASR, they took up one or a few 19-inch rack cabinets, compared with the large mainframes that could fill a room. The definition of minicomputer is vague with the consequence that there are a number of candidates for the first minicomputer. An early and successful minicomputer was Digital Equipment Corporation's 12-bit PDP-8, built using discrete transistors and cost from US$16,000 upwards when launched in 1964. Versions of the PDP-8 took advantage of small-scale integrated circuits; the important precursors of the PDP-8 include the PDP-5, LINC, the TX-0, the TX-2, the PDP-1. DEC gave rise to a number of minicomputer companies along Massachusetts Route 128, including Data General, Wang Laboratories, Apollo Computer, Prime Computer.
Minicomputers were known as midrange computers. They grew to have high processing power and capacity, and were used in manufacturing process control, telephone switching and to control laboratory equipment. In the 1970s, they were the hardware used to launch the computer-aided design industry and other similar industries where a smaller dedicated system was needed. The 7400 series of TTL integrated circuits started appearing in minicomputers in the late 1960s. The 74181 arithmetic logic unit was used in the CPU data paths; each 74181 had a bus width of four bits, hence the popularity of bit-slice architecture. Some scientific computers, such as the Nicolet 1080, would use the 7400 series in groups of five ICs for their uncommon twenty-bit architecture. The 7400 series offered data-selectors, three-state buffers, etc. in dual in-line packages with one-tenth inch spacing, making major system components and architecture evident to the naked eye. Starting in the 1980s, many minicomputers used VLSI circuits.
At the launch of the MITS Altair 8800 in 1975, Radio Electronics magazine referred to the system as a "minicomputer", although the term microcomputer soon became the usual name for personal computers based on single-chip microprocessors. At the time, microcomputers were 8-bit single-user, simple machines running simple program-launcher operating systems like CP/M or MS-DOS, while minis were much more powerful systems that ran full multi-user, multitasking operating systems such as VMS and Unix. The classical mini was a 16-bit computer, while the emerging higher-performance superminis were 32-bit. The decline of the minis happened due to the lower cost of microprocessor-based hardware, the emergence of inexpensive and easily deployable local area network systems, the emergence of the 68020, 80286 and 80386 microprocessors, and the desire of end-users to be less reliant on inflexible minicomputer manufacturers and IT departments or "data centers". The result was that minicomputers and computer terminals were replaced by networked workstations, file servers and PCs in some installations, beginning in the latter half of the 1980s.
During the 1990s, the change from minicomputers to inexpensive PC networks was cemented by the development of several versions of Unix and Unix-like systems that ran on the Intel x86 microprocessor architecture, including Solaris, FreeBSD, NetBSD and OpenBSD. The Microsoft Windows series of operating systems, beginning with Windows NT, now included server versions that supported preemptive multitasking and other features required for servers. As microprocessors have become more powerful, the CPUs built up from multiple components – once the distinguishing feature differentiating mainframes and midrange systems from microcomputers – have become obsolete in the largest mainframe computers. Digital Equipment Corporation was once the leading minicomputer manufacturer, at one time the second-largest computer company after IBM, but as the minicomputer declined in the face of generic Unix servers and Intel-based PCs, not only DEC but every other minicomputer company, including Data General, Computervision and Wang Laboratories, many of them based in New England, collapsed or merged.