Digital Equipment Corporation's PDP-10, later marketed as the DECsystem-10, is a mainframe computer family manufactured beginning in 1966. Models from the 1970s onward were sold under the DECsystem-10 name as the TOPS-10 operating system came into widespread use. The PDP-10's architecture is nearly identical to that of DEC's earlier PDP-6: it shares the same 36-bit word length and extends the instruction set. Some aspects of the instruction set are unusual, most notably the byte instructions, which operate on bit fields of any size from 1 to 36 bits inclusive, under the general definition of a byte as a contiguous sequence of a fixed number of bits. The PDP-10 is the machine that made time-sharing common; this and its other features made it a fixture in many university computing facilities and research labs during the 1970s, the most notable being Harvard University's Aiken Lab, MIT's AI Lab and Project MAC, Stanford's SAIL, Computer Center Corporation, ETH, and Carnegie Mellon University. Its main operating systems, TOPS-10 and TENEX, were used to build out the early ARPANET.
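To make the variable-width byte idea concrete, the sketch below models in Python how a field of 1 to 36 bits can be extracted from or deposited into a 36-bit word. It is an illustration of the concept only, not of the actual LDB/DPB instruction encodings or byte-pointer format; packing five 7-bit ASCII characters per word is just one common convention.

```python
WORD_BITS = 36

def load_byte(word: int, pos: int, size: int) -> int:
    """Extract a 'byte' of `size` bits whose low-order bit sits `pos` bits
    from the right end of a 36-bit word (simplified model of the LDB idea)."""
    assert 0 <= size <= WORD_BITS and 0 <= pos and pos + size <= WORD_BITS
    return (word >> pos) & ((1 << size) - 1)

def deposit_byte(word: int, pos: int, size: int, value: int) -> int:
    """Store `value` into the same field, leaving other bits unchanged (DPB idea)."""
    mask = ((1 << size) - 1) << pos
    return (word & ~mask) | ((value << pos) & mask)

# Example: five 7-bit ASCII characters packed into one 36-bit word.
word = 0
for i, ch in enumerate("HELLO"):
    word = deposit_byte(word, WORD_BITS - 7 * (i + 1), 7, ord(ch))
print([chr(load_byte(word, WORD_BITS - 7 * (i + 1), 7)) for i in range(5)])
```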
For these reasons, the PDP-10 looms large in early hacker folklore. Projects to extend the PDP-10 line were eclipsed by the success of the unrelated VAX superminicomputer, and the cancellation of the PDP-10 line was announced in 1983. The original PDP-10 processor is the KA10, introduced in 1968. It uses discrete transistors packaged in DEC's Flip-Chip technology, with backplanes wire-wrapped via a semi-automated manufacturing process; its add time is 2.1 μs. In 1973, the KA10 was replaced by the KI10, which uses transistor–transistor logic (TTL) SSI; this was joined in 1975 by the higher-performance KL10, which is built from emitter-coupled logic and has cache memory. The KL10's performance was about 1 MFLOPS using 36-bit floating-point numbers on matrix row reduction; it was faster than the newer VAX-11/750, although more limited in memory. A smaller, less expensive model, the KS10, was introduced in 1978, using TTL and Am2901 bit-slice components and including the PDP-11 Unibus to connect peripherals.
The KS was marketed as the DECsystem-2020, DEC's entry in the distributed-processing arena; it was introduced as "the world's lowest cost mainframe computer system." The KA10 has a maximum main memory capacity of 256 kilowords. As supplied by DEC, it did not include paging hardware; memory management consists of two sets of protection and relocation (base-and-bounds) registers. These allow each half of a user's address space to be limited to a set section of main memory, designated by a base physical address and a size. This supports the model of a separate read-only shareable code segment and a read-write data/stack segment used by TOPS-10 and adopted by Unix. Some KA10 machines, first at MIT and later at Bolt, Beranek and Newman, were modified to add virtual memory and support for demand paging, as well as more physical memory. The KA10 weighed about 1,920 pounds. The 10/50 was the top-of-the-line uniprocessor KA machine at the time the PA1050 software package was introduced; two other KA10 models were the uniprocessor 10/40 and the dual-processor 10/55. The KI10 and later processors offer paged memory management and support a larger physical address space of 4 megawords.
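A minimal sketch of the two-segment protection-and-relocation idea described above, assuming an 18-bit (256-kiloword) user address space split at the 128K boundary. The register values and field names are purely illustrative, not the KA10's actual formats.

```python
HALF = 1 << 17   # 128K words: boundary between the low (code) and high (data/stack) halves

def translate(addr, regs):
    """Relocate a user address through one of two (base, bounds) register pairs."""
    which = "low" if addr < HALF else "high"
    base, bounds = regs[which]
    offset = addr if which == "low" else addr - HALF
    if offset >= bounds:
        raise MemoryError(f"address {oct(addr)} outside the {which} segment")
    return base + offset

# Illustrative register contents: shared code at 0o100000, private data at 0o400000.
regs = {"low": (0o100000, 0o200000), "high": (0o400000, 0o20000)}
print(oct(translate(0o1234, regs)))        # a fetch from the shared code segment
print(oct(translate(HALF + 0o100, regs)))  # a store into the private data segment
```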
KI10 models include the 1060, 1070 and 1077, the last incorporating two CPUs. The original KL10 PDP-10 models use the original PDP-10 memory bus, with external memory modules; a module in this context is a cabinet, roughly 30 × 75 × 30 inches, with a capacity of 32 to 256 kilowords of magnetic core memory. The KL10 processors used in the DECSYSTEM-20 (sometimes, incorrectly, called the "KL20") use internal memory, mounted in the same cabinet as the CPU. The 10xx models have different packaging, but the differences between the 10xx and 20xx models are more cosmetic than real. In particular, all ARPAnet TOPS-20 systems had an I/O bus because the AN20 IMP interface was an I/O-bus device. Both could run either TOPS-10 or TOPS-20 microcode and thus the corresponding operating system. The "Model B" version of the 2060 processors removed the 256-kiloword limit on the virtual address space by allowing the use of up to 32 "sections" of up to 256 kilowords each, along with substantial changes to the instruction set; the KL10 was produced in both "Model A" and "Model B" versions. The first operating system that took advantage of the Model B's capabilities was TOPS-20 release 3; user-mode extended addressing was offered in TOPS-20 release 4.
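The effect of the Model B's section mechanism can be sketched as follows; the 32-section limit and 18-bit in-section offset come from the text above, while the bit layout shown is a simplification rather than the real extended-addressing format.

```python
SECTION_BITS = 18              # an in-section address covers up to 256 kilowords
MAX_SECTIONS = 32              # the Model B allowed up to 32 sections

def global_address(section, offset):
    """Combine a section number with an 18-bit in-section address."""
    assert 0 <= section < MAX_SECTIONS and 0 <= offset < (1 << SECTION_BITS)
    return (section << SECTION_BITS) | offset

# Word 0o1234 in section 3; without sections, a program was confined to one 256K space.
print(oct(global_address(3, 0o1234)))
```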
TOPS-20 versions after release 4.1 would run only on a Model B. TOPS-10 versions 7.02 and 7.03 use extended addressing when run on a 1090 Model B processor running TOPS-20 microcode. The final upgrade to the KL10 was the MCA25 upgrade of a 2060 to a 2065, which gave some performance increases for programs that run in multiple sections. The KS10 design was crippled to be a Model A, even though most of the data paths needed to support the Model B architecture were present. This was no doubt intended to segment the market, but it shortened the KS10's product life. Frontend processors are comp
A computer cluster is a set of loosely or tightly connected computers that work together so that, in many respects, they can be viewed as a single system. Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software. The components of a cluster are connected to each other through fast local area networks, with each node running its own instance of an operating system. In most circumstances, all of the nodes use the same hardware and the same operating system, although in some setups different operating systems or different hardware can be used on each computer. Clusters are deployed to improve performance and availability over that of a single computer, while being much more cost-effective than single computers of comparable speed or availability. Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing.
They have a wide range of applicability and deployment, ranging from small business clusters with a handful of nodes to some of the fastest supercomputers in the world, such as IBM's Sequoia. Prior to the advent of clusters, single-unit fault-tolerant mainframes with modular redundancy were employed. In contrast to high-reliability mainframes, clusters are cheaper to scale out, but have increased complexity in error handling, as in clusters error modes are not opaque to running programs. The desire to get more computing power and better reliability by orchestrating a number of low-cost commercial off-the-shelf computers has given rise to a variety of architectures and configurations. The computer clustering approach connects a number of readily available computing nodes via a fast local area network. The activities of the computing nodes are orchestrated by "clustering middleware", a software layer that sits atop the nodes and allows the users to treat the cluster as by and large one cohesive computing unit, e.g. via a single system image concept.
Computer clustering relies on a centralized management approach which makes the nodes available as orchestrated shared servers. It is distinct from other approaches such as peer-to-peer or grid computing, which also use many nodes but with a far more distributed nature. A computer cluster may be a simple two-node system which just connects two personal computers, or may be a very fast supercomputer. A basic approach to building a cluster is that of a Beowulf cluster, which may be built from a few personal computers to produce a cost-effective alternative to traditional high-performance computing. An early project that showed the viability of the concept was the 133-node Stone Soupercomputer; the developers used Linux, the Parallel Virtual Machine toolkit and the Message Passing Interface library to achieve high performance at a low cost. Although a cluster may consist of just a few personal computers connected by a simple network, the cluster architecture may also be used to achieve very high levels of performance.
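As a concrete illustration of the message-passing style used on Beowulf-class clusters, here is a small sketch using the mpi4py binding of MPI; it assumes mpi4py and an MPI runtime are installed, and it is not code from any of the systems named above.

```python
# Run with, for example:  mpiexec -n 4 python cluster_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the cluster job
size = comm.Get_size()   # total number of cooperating processes

# Each worker computes a partial sum; rank 0 gathers and combines the results.
chunk = sum(range(rank * 1000, (rank + 1) * 1000))
total = comm.reduce(chunk, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} processes computed total {total}")
```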
The TOP500 organization's semiannual list of the 500 fastest supercomputers often includes many clusters; e.g. the world's fastest machine in 2011 was the K computer, which has a distributed-memory cluster architecture. Greg Pfister has stated that clusters were not invented by any specific vendor but by customers who could not fit all their work on one computer or needed a backup; Pfister estimates the date as some time in the 1960s. The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law. The history of early computer clusters is more or less directly tied to the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster. The first production system designed as a cluster was the Burroughs B5700 in the mid-1960s.
This allowed up to four computers, each with either one or two processors, to be coupled to a common disk storage subsystem in order to distribute the workload. Unlike standard multiprocessor systems, each computer could be restarted without disrupting overall operation. The first commercial loosely coupled clustering product was Datapoint Corporation's "Attached Resource Computer" (ARC) system, developed in 1977 and using ARCnet as the cluster interface. Clustering per se did not take off until Digital Equipment Corporation released its VAXcluster product in 1984 for the VAX/VMS operating system. The ARC and VAXcluster products not only supported parallel computing but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing while maintaining data reliability and uniqueness. Two other noteworthy early commercial clusters were the Tandem Himalayan and the IBM S/390 Parallel Sysplex. Within the same time frame, while computer clusters used parallelism outside the computer on a commodity network, supercomputers began to use it within the same computer.
Following the success of the CDC 6600 in 1964, the Cray 1 was delivered in 1976 and introduced internal parallelism via vector processing. While early supercomputers eschewed clusters and relied on shared memory, in time some of the fastest supercomputers (e.g. the K computer) relied on cluster architectures.
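Amdahl's Law, cited above as the formal engineering basis of cluster computing, bounds the speedup obtainable from adding nodes by the fraction of work that must remain serial. A minimal illustration (the 95% figure is just an example):

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Amdahl's Law: the serial fraction (1 - p) bounds achievable speedup."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_processors)

# Even with 1000 nodes, a job that is 95% parallelizable speeds up at most ~20x.
for n in (4, 16, 1000):
    print(n, round(amdahl_speedup(0.95, n), 1))
```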
Tegra is a system-on-a-chip (SoC) series developed by Nvidia for mobile devices such as smartphones, personal digital assistants, and mobile Internet devices. The Tegra integrates an ARM architecture central processing unit, graphics processing unit, northbridge and memory controller onto one package. Early Tegra SoCs were designed as efficient multimedia processors, while more recent models emphasize performance for gaming and machine-learning applications without sacrificing power efficiency. The Tegra APX 2500 was announced on February 12, 2008. The Tegra 6xx product line was revealed on June 2, 2008, and the APX 2600 was announced in February 2009. The APX chips were designed for smartphones, while the Tegra 600 and 650 chips were intended for smartbooks and mobile Internet devices. The first product to use the Tegra was Microsoft's Zune HD media player in September 2009, followed by the Samsung M1. Microsoft's Kin was the first cellular phone to use the Tegra. In September 2008, Nvidia and Opera Software announced that they would produce a version of the Opera 9.5 browser optimised for the Tegra on Windows Mobile and Windows CE.
At Mobile World Congress 2009, Nvidia introduced its port of Google's Android to the Tegra. On January 7, 2010, Nvidia announced and demonstrated its next-generation Tegra system-on-a-chip, the Nvidia Tegra 250, at the Consumer Electronics Show 2010. Nvidia supports Android on Tegra 2, but booting other ARM-supporting operating systems is possible on devices where the bootloader is accessible; Tegra 2 support for the Ubuntu GNU/Linux distribution was announced on the Nvidia developer forum. Nvidia announced the first quad-core SoC at the February 2011 Mobile World Congress event in Barcelona. Though the chip was codenamed Kal-El, it is now branded as Tegra 3. Early benchmark results showed impressive gains over Tegra 2, and the chip was used in many of the tablets released in the second half of 2011. In January 2012, Nvidia announced that Audi had selected the Tegra 3 processor for its in-vehicle infotainment systems and digital instrument displays; the processor was to be integrated into Audi's entire line of vehicles worldwide, beginning in 2013.
The process is ISO 26262-certified. In the summer of 2012, Tesla Motors began shipping the Model S all-electric high-performance sedan, which contains two NVIDIA Tegra 3D Visual Computing Modules (VCMs). One VCM powers the 17-inch touchscreen infotainment system, and one drives the 12.3-inch all-digital instrument cluster. In March 2015, Nvidia announced the Tegra X1, the first SoC to have a graphics performance of 1 teraflop. At the announcement event, Nvidia showed off Epic Games' Unreal Engine 4 "Elemental" demo running on a Tegra X1. On October 20, 2016, Nvidia announced that Nintendo's upcoming Switch hybrid home/portable game console would be powered by Tegra hardware. On March 15, 2017, TechInsights revealed that the Nintendo Switch is powered by the Tegra X1.

Tegra APX 2500
- Processor: ARM11 600 MHz MPCore
- Suffix: APX
- Memory: NOR or NAND flash, Mobile DDR
- Graphics: image processor; up to 12-megapixel camera support; LCD controller supports resolutions up to 1280×1024
- Storage: IDE for SSD
- Video codecs: up to 720p MPEG-4 AVC/H.264 and VC-1 decoding
- Includes GeForce ULV with support for OpenGL ES 2.0, Direct3D Mobile and programmable shaders
- Output: HDMI, VGA, composite video, S-Video, stereo jack, USB, USB On-The-Go

Tegra APX 2600
- Enhanced NAND flash
- Video codecs: 720p H.264 Baseline Profile encode or decode; 720p VC-1/WMV9 Advanced Profile decode; D-1 MPEG-4 Simple Profile encode or decode

Tegra 600
- Targeted for the GPS segment and automotive
- Processor: ARM11 700 MHz MPCore
- Memory: low-power DDR
- SXGA, HDMI, USB, stereo jack
- HD camera, 720p

Tegra 650
- Targeted for GTX of handheld and notebook devices
- Processor: ARM11 800 MHz MPCore
- Low-power DDR
- Less than 1 watt power envelope
- HD image processing for advanced digital still camera and HD camcorder functions
- Display supports 1080p at 24 frame/s, HDMI v1.3, WSXGA+ LCD and CRT, NTSC/PAL TV output
- Direct support for Wi-Fi, disk drives, keyboard and other peripherals
- A complete board support package to enable fast time to market for Windows Mobile-based designs

The second-generation Tegra SoC has a dual-core ARM Cortex-A9 CPU, an ultra-low-power GeForce GPU, a 32-bit memory controller with either LPDDR2-600 or DDR2-667 memory, a 32 KB/32 KB L1 cache per core and a shared 1 MB L2 cache.
Tegra 2's Cortex-A9 implementation does not include ARM's SIMD extension, NEON. There is a version of the Tegra 2 SoC supporting 3D displays. The Tegra 2 video decoder is unchanged from the original Tegra and has limited support for HD formats; the lack of support for high-profile H.264 is troublesome when using online video streaming services. Common features: CPU cache: L1: 32 KB instruction + 32 KB data; L2: 1 MB; 40 nm semiconductor process. The Tegra 3 is functionally a SoC with a quad-core ARM Cortex-A9 MPCore CPU, but it includes a fifth "companion" core in what Nvidia refers to as a "variable SMP architecture". While all cores are Cortex-A9s, the companion core is manufactured with a low-power silicon process. This core operates transparently to applications and is used to reduce power consumption when the processing load is minimal; the main quad-core portion of the CPU powers off in these situations. Tegra 3 is the first Tegra release to support ARM's SIMD extension, NEON. The GPU in Tegra 3 is an evolution of the Tegra 2 GPU, with 4 additional pixel shader units and a higher clock frequency.
It can output video up to 2560×1600 resolution and supports
An operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows, with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android held up to 70% share in 2017; according to third-quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and an annual decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to run concurrently. This is achieved by time-sharing, where the available processor time is divided between multiple processes; these processes are each interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like ones, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner; 16-bit versions of Microsoft Windows used cooperative multi-tasking.
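The cooperative model can be illustrated with a toy round-robin scheduler built on Python generators: each task runs only until it voluntarily yields, which is exactly why a misbehaving task could monopolize the processor under cooperative multitasking. This is an illustrative sketch, not how any particular operating system implements it.

```python
from collections import deque

def task(name, steps):
    """A cooperative task: it does a little work, then yields control."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield                      # voluntarily hand the CPU back to the scheduler

# Round-robin over the ready tasks; switching happens only at yield points.
ready = deque([task("A", 3), task("B", 2)])
while ready:
    t = ready.popleft()
    try:
        next(t)
        ready.append(t)            # task not finished; put it back in the queue
    except StopIteration:
        pass                       # task finished; drop it
```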
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify the processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed and cloud computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and in cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines like PDAs with less autonomy. They are able to operate with a limited number of resources and are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
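The contrast between the two switching strategies can be sketched as follows; the priorities and task names are made up for illustration.

```python
import heapq

# Event-driven: among all tasks made ready by events, always dispatch the
# highest-priority one next (lower number = higher priority). A time-sharing
# system would instead preempt the running task on each clock interrupt.
ready = []

def on_event(priority, task_name):
    heapq.heappush(ready, (priority, task_name))

on_event(5, "log flush")
on_event(1, "sensor interrupt handler")
on_event(3, "network packet")

while ready:
    priority, task_name = heapq.heappop(ready)
    print(f"running {task_name} (priority {priority})")
```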
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single-address-space machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled the use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the pri
In computing, time-sharing is the sharing of a computing resource among many users at the same time by means of multiprogramming and multi-tasking. Its introduction in the 1960s and emergence as the prominent model of computing in the 1970s represented a major technological shift in the history of computing. By allowing a large number of users to interact concurrently with a single computer, time-sharing lowered the cost of providing computing capability, made it possible for individuals and organizations to use a computer without owning one, and promoted the interactive use of computers and the development of new interactive applications. The earliest computers were expensive devices, and slow in comparison to later models. Machines were dedicated to a particular set of tasks and operated by control panels, the operator manually entering small programs via switches in order to load and run a series of programs. These programs might take hours, or even weeks, to run. As computers grew in speed, run times dropped, and soon the time taken to start up the next program became a concern.
Batch processing methodologies evolved to decrease these "dead periods" by queuing up programs so that as soon as one program completed, the next would start. To support a batch processing operation, a number of comparatively inexpensive card punch or paper tape writers were used by programmers to write their programs "offline". When typing was complete, the programs were submitted to the operations team, which scheduled them to be run. Important programs were started quickly; when the program run was completed, the output was returned to the programmer. The complete process might take days. The alternative of allowing the user to operate the computer directly was far too expensive to consider, because users had long periods of entering code while the computer sat idle; this situation limited interactive development to those organizations that could afford to waste computing cycles: large universities for the most part. Programmers at the universities decried the behaviors that batch processing imposed, to the point that Stanford students made a short film humorously critiquing it.
They experimented with new ways to interact directly with the computer, a field today known as human–computer interaction. Time-sharing was developed out of the realization that while any single user would make inefficient use of a computer, a large group of users together would not. This was due to the pattern of interaction: typically an individual user entered bursts of information followed by long pauses, but a group of users working at the same time would mean that the pauses of one user would be filled by the activity of the others. Given an optimal group size, the overall process could be efficient. Small slices of time spent waiting for disk, tape, or network input could be granted to other users. The concept is claimed to have been first described by John Backus in the 1954 summer session at MIT, and by Bob Bemer in his 1957 article "How to consider a computer" in Automatic Control Magazine. In a paper published in December 1958, W. F. Bauer wrote that "The computers would handle a number of problems concurrently.
Organizations would have input-output equipment installed on their own premises and would buy time on the computer much the same way that the average household buys power and water from utility companies." Implementing a system able to take advantage of this was difficult. Batch processing was a methodological development on top of the earliest systems; since computers still ran single programs for single users at any time, the primary change with batch processing was the reduction of the time delay between one program and the next. Developing a system that supported multiple users at the same time was a different concept: the "state" of each user and their programs would have to be kept in the machine and switched between quickly. This would take up computer cycles, and on the slow machines of the era this was a concern. However, as computers improved in speed and in the size of the core memory in which users' states were retained, the overhead of time-sharing continually decreased, relatively speaking. The first project to implement time-sharing of user programs was initiated by John McCarthy at MIT in 1959, planned on a modified IBM 704 and later on an additionally modified IBM 709.
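The utilization argument above can be made concrete with some back-of-the-envelope arithmetic; every number here is an assumption chosen for illustration, not a figure from the historical systems.

```python
# Assume each user types for ~2 s, then pauses ~18 s to think.
burst, pause = 2.0, 18.0
demand = burst / (burst + pause)   # one user keeps the processor ~10% busy
users = int(1 / demand)            # so roughly 10 such users can interleave

# Switching between users' saved states costs cycles too, which mattered on slow machines.
switch_cost = 0.05                 # assumed 50 ms to save and restore one user's state
useful = users * burst             # CPU-seconds of real work per 20 s interaction cycle
overhead = users * switch_cost     # at least one switch per user per cycle
print(f"{users} users at {demand:.0%} demand each; "
      f"overhead is {overhead / (useful + overhead):.1%} of CPU time")
```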
One of the deliverables of the project, known as the Compatible Time-Sharing System or CTSS, was demonstrated in November 1961. CTSS has a good claim to be the first time-sharing system and remained in use until 1973. Another contender for the first demonstrated time-sharing system was PLATO II, created by Donald Bitzer at a public demonstration at Robert Allerton Park near the University of Illinois in early 1961, but this was a special-purpose system. Bitzer has long said that the PLATO project would have gotten the patent on time-sharing if only the University of Illinois had not lost the patent for two years. JOSS began time-sharing service in January 1964. The first commercially successful time-sharing system was the Dartmouth Time Sharing System. Throughout the late 1960s and the 1970s, computer terminals were multiplexed onto large institutional mainframe computers, which in many implementations sequentially polled the terminals to see whether any additional data was available or action was requested by the computer user.
Later interconnection technology was interrupt-driven, and some of these systems used parallel data trans
Michigan Terminal System
The Michigan Terminal System (MTS) is one of the first time-sharing computer operating systems. Created in 1967 at the University of Michigan for use on IBM S/360-67, S/370 and compatible mainframe computers, it was developed and used by a consortium of eight universities in the United States and the United Kingdom over a period of 33 years. The University of Michigan Multiprogramming Supervisor (UMMPS) was developed by the staff of the academic computing center at the University of Michigan for operation of the IBM S/360-67, S/370 and compatible computers. The software may be described as a multiprogramming, virtual memory, time-sharing supervisor that runs multiple resident, reentrant programs. Among these programs is the Michigan Terminal System, which provides command interpretation, execution control, file management and accounting. End users interact with the computing resources through MTS using terminal- and server-oriented facilities. The name MTS refers, among other things, to the UMMPS job program. MTS was used on a production basis at about 13 sites in the United States, the United Kingdom and Yugoslavia, and at several more sites on a trial or benchmarking basis.
MTS was developed and maintained by a core group of eight universities that made up the MTS Consortium. The University of Michigan announced in 1988 that "Reliable MTS service will be provided as long as there are users requiring it... MTS may be phased out after alternatives are able to meet users' computing requirements". It ceased operating MTS for end users on June 30, 1996; by that time, most services had moved to client/server-based computing systems, typically Unix for servers and various Mac, PC and Unix flavors for clients. The University of Michigan shut down its MTS system for the last time on May 30, 1997. Rensselaer Polytechnic Institute is believed to be the last site to use MTS in a production environment; RPI retired MTS in June 1999. Today, MTS still runs using IBM S/370 emulators such as Hercules, Sim390 and FLEX-ES. In the mid-1960s, the University of Michigan was providing batch processing services on IBM 7090 hardware under the control of the University of Michigan Executive System, but was interested in offering interactive services using time-sharing.
At that time the work that computers could perform was limited by their small real memory capacity. When IBM introduced its System/360 family of computers in the mid-1960s, it did not provide a solution for this limitation, and within IBM there were conflicting views about the importance of and the need to support time-sharing. A paper titled Program and Addressing Structure in a Time-Sharing Environment by Bruce Arden, Bernard Galler, Frank Westervelt and Tom O'Brian, building upon some basic ideas developed at the Massachusetts Institute of Technology, was published in January 1966. The paper outlined a virtual memory architecture using dynamic address translation that could be used to implement time-sharing. After a year of negotiations and design studies, IBM agreed to make a one-of-a-kind version of its S/360-65 mainframe computer with dynamic address translation features that would support virtual memory and accommodate UM's desire to support time-sharing. The computer was dubbed the Model S/360-65M, the "M" standing for Michigan.
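The dynamic address translation idea outlined in that paper can be sketched at its simplest as a page-table lookup; the page size and table contents below are arbitrary illustrations, not the S/360-67's actual formats.

```python
PAGE_SIZE = 4096   # assumed page size, purely for illustration

def translate(virtual_addr, page_table):
    """Split a virtual address into page number and offset, then map the page to a
    physical frame; a missing entry is a page fault the supervisor must service."""
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table.get(page)
    if frame is None:
        raise RuntimeError(f"page fault on virtual page {page}")
    return frame * PAGE_SIZE + offset

page_table = {0: 7, 1: 3}            # virtual page -> physical frame (illustrative)
print(translate(5000, page_table))   # page 1, offset 904 -> physical address 13192
```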
The "M" stood for Michigan. But IBM decided not to supply a time-sharing operating system for the machine. Meanwhile, a number of other institutions heard about the project, including General Motors, the Massachusetts Institute of Technology's Lincoln Laboratory, Princeton University, Carnegie Institute of Technology, they were all intrigued by the time-sharing idea and expressed interest in ordering the modified IBM S/360 series machines. With this demonstrated interest IBM changed the computer's model number to S/360-67 and made it a supported product. With requests for over 100 new model S/360-67s IBM realized there was a market for time-sharing, agreed to develop a new time-sharing operating system called TSS/360 for delivery at the same time as the first model S/360-67. While waiting for the Model 65M to arrive, UM Computing Center personnel were able to perform early time-sharing experiments using an IBM System/360 Model 50, funded by the ARPA CONCOMP Project; the time-sharing experiment began as a "half-page of code written out on a kitchen table" combined with a small multi-programming system, LLMPS from MIT's Lincoln Laboratory, modified and became the UM Multi-Programming Supervisor which in turn ran the MTS job program.
This earliest incarnation of MTS was intended as a throw-away system used to gain experience with the new IBM S/360 hardware, to be discarded when IBM's TSS/360 operating system became available. Development of TSS took longer than anticipated, its delivery date was delayed, and it was not yet available when the S/360-67 arrived at the Computing Center in January 1967. At that time UM had to decide whether to return the Model 67 and select another mainframe or to develop MTS as an interim system for use until TSS was ready. The decision was to continue development of MTS, and the staff moved their initial development work from the Model 50 to the Model 67. TSS development was eventua