A workstation is a special computer designed for technical or scientific applications. Intended to be used by one person at a time, workstations are commonly connected to a local area network and run multi-user operating systems. The term workstation has been used loosely to refer to everything from a mainframe computer terminal to a PC connected to a network, but the most common form refers to the class of hardware offered by several current and defunct companies such as Sun Microsystems, Silicon Graphics, Apollo Computer, DEC, HP, NeXT and IBM, which opened the door for the 3D graphics animation revolution of the late 1990s. Workstations offered higher performance than mainstream personal computers in CPU speed, graphics, memory capacity and multitasking capability. They were optimized for the visualization and manipulation of complex data such as 3D mechanical designs, engineering simulations, rendered images and mathematical plots. The usual form factor is that of a desktop computer, consisting of a high-resolution display, a keyboard and a mouse at a minimum, but often offering multiple displays, graphics tablets, 3D mice and other specialized input devices.
Workstations were the first segment of the computer market to present advanced accessories and collaboration tools. The increasing capabilities of mainstream PCs in the late 1990s blurred the line between them and technical/scientific workstations, which had employed proprietary hardware that made them distinct from PCs. By the early 2000s this difference had largely disappeared: workstations now use commoditized hardware dominated by large PC vendors, such as Dell, Hewlett-Packard and Fujitsu, selling Microsoft Windows or Linux systems running on x86-64 processors.

The first computer that might qualify as a "workstation" was the IBM 1620, a small scientific computer designed to be used interactively by a single person sitting at the console. It was introduced in 1960. One peculiar feature of the machine was that it lacked conventional adder circuitry: to perform addition, it consulted a memory-resident table of decimal addition rules, which saved on the cost of logic circuitry. The machine was code-named CADET (jokingly glossed as "Can't Add, Doesn't Even Try") and rented for $1,000 a month.
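The 1620's table-driven arithmetic can be illustrated with a short sketch. The table layout below is invented for illustration, not the 1620's actual memory format; the point is that single-digit sums come from a lookup table rather than from adder logic, with carries propagated digit by digit.

```python
# Sketch of table-lookup decimal addition in the spirit of the IBM 1620
# ("CADET"), which consulted an in-memory addition table instead of using
# adder circuitry. The table layout here is illustrative, not the 1620's.

# Precomputed table: ADD_TABLE[a][b] = (sum_digit, carry) for decimal digits.
ADD_TABLE = [[((a + b) % 10, (a + b) // 10) for b in range(10)] for a in range(10)]

def add_decimal(x: str, y: str) -> str:
    """Add two unsigned decimal strings using only table lookups."""
    x, y = x.zfill(len(y)), y.zfill(len(x))  # pad to equal length
    carry, digits = 0, []
    for a, b in zip(map(int, reversed(x)), map(int, reversed(y))):
        s, c1 = ADD_TABLE[a][b]       # look up the digit sum
        s, c2 = ADD_TABLE[s][carry]   # add the incoming carry, also by lookup
        carry = c1 + c2               # at most one carry can result
        digits.append(str(s))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_decimal("1620", "9999"))  # -> 11619
```

Trading logic circuitry for memory in this way made sense only because core memory was already paid for, while transistorized adder logic was still expensive.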
In 1965, IBM introduced the IBM 1130 scientific computer, meant as the successor to the 1620. Both of these systems could run programs written in Fortran and other languages. Both the 1620 and the 1130 were built into desk-sized cabinets, were available with add-on disk drives, and supported both paper-tape and punched-card I/O. A console typewriter for direct interaction was standard on each.

Early examples of workstations were dedicated minicomputers. A notable example was the PDP-8 from Digital Equipment Corporation, regarded as the first commercial minicomputer. The Lisp machines developed at MIT in the early 1970s pioneered some of the principles of the workstation computer, as they were high-performance, single-user systems intended for interactive use. Lisp machines were commercialized beginning in 1980 by companies such as Symbolics, Lisp Machines, Texas Instruments and Xerox. The first computer designed for a single user with high-resolution graphics facilities was the Xerox Alto, developed at Xerox PARC in 1973.
Other early workstations include the Terak 8510/a, the Three Rivers PERQ and the Xerox Star. In the early 1980s, with the advent of 32-bit microprocessors such as the Motorola 68000, a number of new participants appeared in this field, including Apollo Computer and Sun Microsystems, who created Unix-based workstations built on this processor. Meanwhile, DARPA's VLSI Project created several spinoff graphics products as well, notably the SGI 3130 and the range of Silicon Graphics machines that followed. It was not uncommon to differentiate the target markets of the products, with Sun and Apollo considered network workstations and the SGI machines graphics workstations. As RISC microprocessors became available in the mid-1980s, they were adopted by many workstation vendors.

Workstations tended to be expensive, costing several times as much as a standard PC and sometimes as much as a new car; minicomputers, however, sometimes cost as much as a house. The high expense came from using costlier components that ran faster than those found at the local computer store, as well as from features not found in PCs of the time, such as high-speed networking and sophisticated graphics.
Workstation manufacturers tend to take a "balanced" approach to system design, making certain to avoid bottlenecks so that data can flow unimpeded between the many subsystems within the computer. Additionally, given their more specialized nature, workstations tend to have higher profit margins than commodity-driven PCs. Systems from workstation companies typically feature SCSI or Fibre Channel disk storage, high-end 3D accelerators, single or multiple 64-bit processors, large amounts of RAM and well-designed cooling, and the companies that make them tend to offer good repair/replacement plans. However, the line between workstation and PC is becoming blurred as the demand for fast computation and graphics has become mainstream.
RISC-V is an open-source hardware instruction set architecture (ISA) based on established reduced instruction set computer (RISC) principles. The project began in 2010 at the University of California, Berkeley, but many contributors are volunteers not affiliated with the university. As of March 2019, version 2.2 of the user-space ISA is frozen, permitting most software development to proceed; the privileged ISA is available as draft version 1.10, and a debug specification is available as draft version 0.13.1.

Usable new ISAs are very expensive to create, and computer designers cannot afford to work for free. Developing a CPU requires design expertise in several specialties, including electronic digital logic, compilers and operating systems, and it is rare to find such a team outside of a professional engineering organization, where the team is paid from money charged for its designs. Commercial vendors of computer designs, such as ARM Holdings and MIPS Technologies, therefore charge royalties for the use of their designs and copyrights, and they often require non-disclosure agreements before releasing documents that describe their designs' detailed advantages and instruction set.
In many cases, they never describe the reasons for their design choices. This expense and secrecy make the development of new software much more difficult and prevent security audits. Another result is that modern, high-quality general-purpose instruction sets have not been explained or made available anywhere except in academic settings. RISC-V was started to solve these problems: the goal was to make a practical, open-source ISA usable in any hardware or software design without royalties, with the rationales for every part of the project explained, at least broadly.

The RISC-V authors have substantial experience in computer design. The RISC-V ISA is a direct development from a series of academic computer-design projects and originated in part to aid such projects. To address the cost of design, the project started as academic research funded by DARPA. In order to build a large, continuing community of users, and thereby accumulate designs and software, the RISC-V ISA designers planned to support a wide variety of practical uses, including small and low-power real-world implementations, without over-architecting for a particular microarchitecture.
The need for a large base of contributors is part of the reason why RISC-V was engineered to fit so many uses, and many RISC-V contributors accordingly see the project as a unified community effort.

The term RISC dates from about 1980. Before this, there was some knowledge that simpler computers could be effective, but the design principles were not widely described. Simple, effective computers have always been of academic interest, and academics created the RISC instruction set DLX for the first edition of Computer Architecture: A Quantitative Approach in 1990. David Patterson, one of its authors, later assisted RISC-V. DLX was intended for educational use; academics and hobbyists implemented it using field-programmable gate arrays, but it was not a commercial success. ARM CPUs, versions 2 and earlier, had a public-domain instruction set that is still supported by the GNU Compiler Collection (GCC), a popular free-software compiler, and three open-source cores exist for this ISA. OpenRISC is an open-source ISA based on DLX, with associated RISC designs; it is supported by GCC and has Linux implementations.
However, it has few commercial implementations. Krste Asanović at the University of California, Berkeley, saw many uses for an open-source computer system, and in 2010 he decided to develop and publish one in a "short, three-month project over the summer". The plan was to help both academic and industrial users. David Patterson at Berkeley aided the effort; he had identified the properties of Berkeley RISC, and RISC-V is one of his long series of cooperative RISC research projects. At this stage, students inexpensively provided initial software and CPU designs.

The RISC-V authors and their institution provided the ISA documents and several CPU designs under BSD licenses, which allow derivative works, such as RISC-V chip designs, to be either open and free or closed and proprietary. Early funding came from DARPA. Commercial concerns require an ISA to be stable before they can use it in a product that might last many years; to address this issue, the RISC-V Foundation was formed to own and publish the intellectual property related to RISC-V's definition.
The original authors and owners have surrendered their rights to the Foundation. As of 2019, the Foundation publishes the documents defining RISC-V and permits unrestricted use of the ISA for both software and hardware design; however, only paying members of the RISC-V Foundation can vote to approve changes or use the trademarked compatibility logo. In 2017, RISC-V received the Linley Group's Analyst's Choice Award for Best Technology.

The designers say that the instruction set is the main interface in a computer because it lies between the hardware and the software. If a good instruction set were open and available for use by all, it should reduce the cost of software by permitting far more reuse, and it should increase competition among hardware providers, who could devote more resources to design and fewer to software support. The designers assert that new principles are becoming rare in instruction set design, as the most successful designs of the last forty years have become increasingly similar; of those that failed, most did so because their sponsoring companies failed commercially, not because the instruction sets were technically poor.
So a well-designed open instruction set, designed using well-established principles, should attract long-term support from many vendors.
The ACM A. M. Turing Award is an annual prize given by the Association for Computing Machinery to an individual selected for contributions "of lasting and major technical importance to the computer field". The Turing Award is recognized as the highest distinction in computer science and the "Nobel Prize of computing". The award is named after Alan Turing, a British mathematician and reader in mathematics at the University of Manchester. Turing is credited as a key founder of theoretical computer science and artificial intelligence. From 2007 to 2013, the award was accompanied by an additional prize of US$250,000, with financial support provided by Intel and Google. Since 2014, the award has been accompanied by a prize of US$1 million, with financial support provided by Google. The first recipient, in 1966, was Alan Perlis of Carnegie Mellon University. The first female recipient was Frances E. Allen of IBM in 2006.

See also: List of ACM Awards; List of science and technology awards; List of prizes named after people; IEEE John von Neumann Medal; List of Turing Award laureates by university affiliation; Turing Lecture; Nobel Prize; Schock Prize; Nevanlinna Prize; Kanellakis Award; Millennium Technology Prize.
Datamation is a computer magazine that was published in print in the United States between 1957 and 1998 and has since continued publication on the web. Today, Datamation is published as an online magazine at Datamation.com. When Datamation was first launched in 1957, it was not clear there would be a significant market for a computer magazine, given how few computers there were. The idea for the magazine came from Donald Prell, Vice President of Application Engineering at a Los Angeles computer input-output company; in 1957, the only places his company could advertise its products were Scientific American and Business Week. Prell had discussed the idea with John Diebold, who had started the "Automation Data Processing Newsletter", which provided the inspiration for the name DATAMATION. Thompson Publications of Chicago agreed to publish the magazine. In 1995, after rival CMP Media Inc.'s 1994 launch of its TechWeb network of publications, Datamation partnered with Bolt Beranek and Newman to launch one of the first online publications, Datamation.com.
In 1996, Datamation editors Bill Semich, Michael Lasell and April Blumenstiel received the first-ever Jesse H. Neal Editorial Achievement Award for an online publication; the Neal Award is the highest award for business journalism in the U.S. In 1998, when its publisher, Reed Business Information, terminated print publication of Datamation 41 years after its first issue went to press, the online version, Datamation.com, became one of the first online-only magazines. In 2001, Internet.com acquired the still-profitable Datamation.com online publication; in 2009, Internet.com was itself acquired by QuinStreet, Inc.

Traditionally, an April issue of Datamation contained a number of spoof articles and humorous stories related to computers, though humor was not limited to April. For example, in a spoof Datamation article, R. Lawrence Clark suggested that the GOTO statement could be replaced by the COMEFROM statement, and provided some entertaining examples; COMEFROM was later implemented in the INTERCAL programming language, a language designed to make programs as obscure as possible.
Real Programmers Don't Use Pascal was a letter to the editor of Datamation, volume 29 number 7, July 1983, written by Ed Post of Wilsonville, Oregon. Some of the Bastard Operator From Hell (BOFH) stories were reprinted in Datamation. The humor section was resurrected in 1996 by editor-in-chief Bill Semich with a two-page spread titled "Over the Edge", with material contributed by Annals of Improbable Research editor Marc Abrahams and MISinformation editor Chris Miksanek. Semich commissioned BOFH author Simon Travaglia to write humor columns for the magazine; that year, Miksanek became the sole humor contributor. The column was dropped from the magazine in 2001. A collection of "Over the Edge" columns was published in 2008 under the title "Esc: 400 Years of Computer Humor".
AVR is a family of microcontrollers developed since 1996 by Atmel, which was acquired by Microchip Technology in 2016. These are modified Harvard architecture 8-bit RISC single-chip microcontrollers. AVR was one of the first microcontroller families to use on-chip flash memory for program storage, as opposed to the one-time programmable ROM, EPROM, or EEPROM used by other microcontrollers at the time. AVR microcontrollers find many applications as embedded systems; they are common in hobbyist and educational embedded applications, popularized by their inclusion in many of the Arduino line of open hardware development boards.

The AVR architecture was conceived by two students at the Norwegian Institute of Technology, Alf-Egil Bogen and Vegard Wollan. The original AVR MCU was developed at a local ASIC house in Trondheim, called Nordic VLSI at the time (now Nordic Semiconductor), where Bogen and Wollan were working as students. It was known as a μRISC and was available as a silicon IP/building block from Nordic VLSI.
When the technology was sold to Atmel by Nordic VLSI, the internal architecture was further developed by Bogen and Wollan at Atmel Norway, a subsidiary of Atmel. The designers worked with compiler writers at IAR Systems to ensure that the AVR instruction set allowed efficient compilation of high-level languages. Officially, AVR does not stand for anything in particular, and the creators give no definitive answer as to what the term means; however, it is commonly accepted that AVR stands for Alf and Vegard's RISC processor. Note that the use of "AVR" in this article refers to the 8-bit RISC line of Atmel AVR microcontrollers.

Among the first of the AVR line was the AT90S8515, which in a 40-pin DIP package has the same pinout as an 8051 microcontroller, including the external multiplexed address and data bus. The polarity of the RESET line was opposite, but other than that the pinout was identical. The AVR 8-bit microcontroller architecture was introduced in 1997, and by 2003 Atmel had shipped 500 million AVR flash microcontrollers. The Arduino platform, developed for simple electronics projects, was released in 2005 and featured ATmega8 AVR microcontrollers.
The AVR is a modified Harvard architecture machine, where program and data are stored in separate physical memory systems that appear in different address spaces, but with the ability to read data items from program memory using special instructions. AVRs are classified into the following families:

- tinyAVR (the ATtiny series): 0.5–32 KB program memory; 6–32-pin packages; limited peripheral set.
- megaAVR (the ATmega series): 4–256 KB program memory; 28–100-pin packages; extended instruction set; extensive peripheral set.
- XMEGA (the ATxmega series): 16–384 KB program memory; 44-, 64- and 100-pin packages (32-pin package: XMEGA-E); extended performance features such as DMA, an "Event System" and cryptography support; extensive peripheral set with ADCs.
- Application-specific AVR: megaAVRs with special features not found on the other members of the AVR family, such as an LCD controller, USB controller, advanced PWM, CAN, etc.
- FPSLIC (AVR with FPGA): 5k to 40k gates; SRAM for the AVR program code, unlike all other AVRs; the AVR core can run at up to 50 MHz.
- 32-bit AVRs: in 2006, Atmel released microcontrollers based on the 32-bit AVR32 architecture.
The AVR32 was a different architecture, unrelated to the 8-bit AVR and intended to compete with ARM-based processors. It had a 32-bit data path, SIMD and DSP instructions, along with other audio- and video-processing features. The instruction set was similar to other RISC cores, but it was not compatible with the original AVR; support for AVR32 has been dropped from Linux as of kernel 4.12.

Flash, EEPROM and SRAM are all integrated onto a single chip, removing the need for external memory in most applications. Some devices have a parallel external bus option to allow adding additional data memory or memory-mapped devices, and all devices have serial interfaces, which can be used to connect larger serial EEPROMs or flash chips. Program instructions are stored in non-volatile flash memory. Although the MCUs are 8-bit, each instruction takes one or two 16-bit words. The size of the program memory is indicated in the naming of the device itself; for example, the ATmega328 has 32 KB of flash. There is no provision for off-chip program memory; however, this limitation does not apply to the AT94 FPSLIC AVR/FPGA chips.
The data address space consists of the register file, the I/O registers and SRAM. Some small models also map the program ROM into the data address space, but larger models do not. The AVRs are classified as 8-bit RISC devices. In the tinyAVR and megaAVR variants of the AVR architecture, the working registers are mapped in as the first 32 memory addresses (0x00–0x1F), followed by 64 I/O registers (0x20–0x5F). In devices with many peripherals, these registers are followed by 160 "extended I/O" registers, accessible only as memory-mapped I/O (0x60–0xFF). Actual SRAM starts after these register sections, at address 0x60 or, in devices with extended I/O, at 0x100. Though there are separate addressing schemes and optimized opcodes for accessing the register file and the first 64 I/O registers, all can also be addressed and manipulated as if they were in SRAM. The smallest of the tinyAVR variants use a reduced architecture with only 16 registers (r16 through r31).
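The data-space layout described above can be sketched as a small address classifier. This is a minimal sketch assuming a megaAVR-style device *with* extended I/O; the region boundaries come from the text, but any specific part's datasheet is the authority.

```python
# Sketch of the megaAVR-style data address space described above, for a
# device with "extended I/O". Boundaries are generic; consult the datasheet
# of a specific part for its actual memory map.

def classify(addr: int) -> str:
    """Return the region of the data address space that 'addr' falls in."""
    if 0x00 <= addr <= 0x1F:
        return "register file (r0-r31)"      # 32 working registers
    if 0x20 <= addr <= 0x5F:
        return "I/O registers (64)"          # also reachable via IN/OUT opcodes
    if 0x60 <= addr <= 0xFF:
        return "extended I/O (160 registers)"  # memory-mapped access only
    return "SRAM"                            # starts at 0x100 on such devices

print(classify(0x10))   # register file
print(classify(0x3D))   # within the 64 I/O registers
print(classify(0x100))  # first SRAM byte on extended-I/O devices
```

On devices without extended I/O, SRAM would instead begin immediately at 0x60, so the third branch above would not exist.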
Intel's i960 was a RISC-based microprocessor design that became popular during the early 1990s as an embedded microcontroller. It became a best-selling CPU in that segment, along with the competing AMD 29000. In spite of its success, Intel stopped marketing the i960 in the late 1990s as a result of a settlement with DEC, whereby Intel received the rights to produce the StrongARM CPU; the processor continues to be used for a few military applications.

The i960 design was begun in response to the failure of Intel's iAPX 432 design of the early 1980s. The iAPX 432 was intended to directly support, in hardware, high-level languages featuring tagged, garbage-collected memory, such as Ada and Lisp. Because of its instruction-set complexity, its multi-chip implementation and its design flaws, the iAPX 432 was slow in comparison to other processors of its time. In 1984, Intel and Siemens started a joint project called BiiN to create a high-end, fault-tolerant, object-oriented computer system programmed in Ada.
Many of the original i432 team members joined this project, although a new lead architect, Glenford Myers, was brought in from IBM. The intended market for the BiiN systems was high-reliability computer users such as banks, industrial systems and nuclear power plants. Intel's major contribution to the BiiN system was a new processor design, influenced by the protected-memory concepts of the i432. The new design was to include a number of features to improve performance and avoid the problems that had led to the i432's downfall. The first 960 processors reached the final stage of design, known as tape-out, in October 1985 and were sent to manufacturing that month, with the first working chips arriving in late 1985 and early 1986. When the BiiN effort failed due to market forces, the 960MX was left without a use. Myers attempted to save the design by extracting several subsets of the full capability architecture created for the BiiN system, and he tried to convince Intel management to market the i960 as a general-purpose processor, both in place of the Intel 80286 and i386 and in the emerging RISC market for Unix systems, including a pitch to Steve Jobs for use in the NeXT system.
Competition within and outside of Intel came not only from the i386 camp but also from the i860, yet another RISC processor design emerging within Intel at the time. Myers was unsuccessful at convincing Intel management to support the i960 as a general-purpose or Unix processor, but the chip found a ready market in early high-performance 32-bit embedded systems. The lead architect of the i960 was superscalarity specialist Fred Pollack, who had been the lead engineer of the Intel iAPX 432 and would later be the lead architect of the i686 chip, the Pentium Pro. To avoid the performance issues that plagued the i432, the central i960 instruction-set architecture was a RISC design, implemented in full only in the i960MX. The memory subsystem was 33 bits wide, to accommodate a 32-bit word plus a "tag" bit used to implement memory protection in hardware. In many ways, the i960 followed the original Berkeley RISC design, notably in its use of register windows: an implementation-specific number of on-chip caches for per-subroutine registers that allowed for fast subroutine calls.
The competing Stanford University design, MIPS, did not use this system, instead relying on the compiler to generate optimal subroutine call and return code. In common with most 32-bit designs, the i960 has a flat 32-bit memory space with no memory segmentation. The i960 architecture anticipated a superscalar implementation, with instructions being dispatched to more than one unit within the processor. The "full" i960MX was never released for the non-military market, but the otherwise identical i960MC was used in high-end embedded applications; the i960MC included all of the features of the original BiiN system. A version of the RISC core without memory management or an FPU became the i960KA, and the RISC core with an FPU became the i960KB. The versions were, in fact, identical internally; only the labeling was different. This meant the CPUs were much larger than necessary for the feature sets they actually supported and, as a result, more expensive to manufacture than they needed to be. The i960KA became successful as a low-cost 32-bit processor for the laser-printer market, as well as for early graphics terminals and other embedded applications.
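The register windows mentioned above can be illustrated with a toy model. This is a deliberately simplified sketch of the Berkeley-style scheme, not the i960's actual mechanism: each subroutine call claims a fresh register set from a small on-chip pool, and memory traffic (spills and refills) occurs only when call depth exceeds the pool size.

```python
# Toy model of Berkeley-RISC-style register windows, as adopted by the
# i960 for fast subroutine calls. Pool size and spill policy here are
# invented for illustration; real implementations differ in detail.

class WindowedRegisterFile:
    def __init__(self, num_windows: int = 4, regs_per_window: int = 16):
        self.pool = [[0] * regs_per_window for _ in range(num_windows)]
        self.num_windows = num_windows
        self.depth = 0          # current call depth
        self.spill_stack = []   # memory backing store for overflowed windows
        self.spills = 0         # count of spill/refill memory events

    def call(self):
        """Enter a subroutine: claim the next window, spilling if the pool is full."""
        self.depth += 1
        if self.depth >= self.num_windows:
            # The window being reused must first be saved out to memory.
            self.spill_stack.append(list(self.pool[self.depth % self.num_windows]))
            self.spills += 1

    def ret(self):
        """Leave a subroutine: release the window, refilling it if it was spilled."""
        if self.depth >= self.num_windows:
            self.pool[self.depth % self.num_windows] = self.spill_stack.pop()
            self.spills += 1
        self.depth -= 1

rf = WindowedRegisterFile(num_windows=4)
for _ in range(3):
    rf.call()          # shallow call chains cause no memory traffic
print(rf.spills)       # -> 0
rf.call()              # the fourth nested call overflows the window pool
print(rf.spills)       # -> 1
```

The compiler-managed alternative that MIPS chose avoids the window hardware entirely, which is why the next sentence's contrast matters: windows make every shallow call cheap, at the cost of silicon area and worst-case spill traffic on deep recursion.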
Its success paid for future generations. The i960CA, first announced in July 1989, was the first pure RISC implementation of the i960 architecture. It featured a newly designed superscalar RISC core and added an unusual addressable on-chip cache, but lacked an FPU and MMU, as it was intended for high-performance embedded applications. The i960CA is considered to have been the first single-chip superscalar RISC implementation. The C-series included only one ALU, but it could dispatch and execute an arithmetic instruction, a memory reference and a branch instruction at the same time, sustaining two instructions per cycle under certain circumstances. The first versions released ran at 33 MHz, and Intel promoted the chip as capable of 66 MIPS. The i960CA microarchitecture was designed in 1987–1988 and formally announced on September 12, 1989. In May 1992 came the i960CF, which included a larger instruction cache and added 1 KB of data cache, but was still without an FPU or MMU. The 80960Jx is a processor for embedded applications.
It features a 32-bit multiplexed address/data bus, a data cache, and 1 KB of on-chip RAM.
The IBM 3090 family was a high-end successor to the IBM System/370 series, and thus indirectly a successor to the IBM System/360 launched 25 years earlier. Although the February 12, 1985 initial announcement of the family's first two members, the Model 200 and Model 400, lacked explicit mention of both the name System/370 and the term backward compatibility, that pair and the subsequently announced Models 120E, 150, 150E, 180, 180E, 200, 200E, 300, 300E, 400, 400E, 600E, 600J and 600S of the 3090 were described as using "ideas from the... IBM 3033, extending them... It took... from the... IBM 308X." The 400 and 600 were two 200s or 300s coupled together as one complex, and could run either in single-system-image mode or partitioned into two systems. By the late 1970s and early 1980s, patented technology allowed Amdahl mainframes of this era to be air-cooled, unlike IBM systems, which required chilled water and its supporting infrastructure; the eight largest of the 18 models of the ES/9000 systems introduced in 1990 were water-cooled.
A modem for "remote service capabilities" was standard. In October 1985, IBM introduced an optional vector facility for the IBM 3090. IBM entered into partnerships with several universities to promote the use of the 3090 in scientific applications, and efforts were made to convert code traditionally run on Cray computers. Along with the vector unit, IBM introduced its Engineering and Scientific Subroutine Library and a facility to run programs written for the discontinued 3838 array processor.

See also: IBM System/360; IBM System/370.