SIGGRAPH is the annual conference on computer graphics convened by the ACM SIGGRAPH organization. The first SIGGRAPH conference was held in 1974, and the conference is now attended by tens of thousands of computer professionals. Past conferences have been held in Los Angeles, New Orleans, Boston and elsewhere in North America. SIGGRAPH Asia, a second yearly conference, has been held since 2008 in various Asian countries; the strength of SIGGRAPH also comes from its chapters around the world. Highlights of the conference include the Animation Theater and Electronic Theater presentations, where computer-generated films are screened. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, motion picture, or video game industries, and there are many booths for schools which specialize in computer graphics or interactivity. Dozens of research papers are presented each year, and SIGGRAPH is considered the most prestigious forum for the publication of computer graphics research.
The recent paper acceptance rate for SIGGRAPH has been less than 26%. Submitted papers are peer-reviewed in a single-blind process. There has been some criticism that SIGGRAPH paper reviewers prefer novel results over useful incremental progress. Since 2003, the papers accepted for presentation at SIGGRAPH have been printed in a special issue of the ACM Transactions on Graphics journal; prior to 1992, SIGGRAPH papers were printed as part of the Computer Graphics publication. In addition to the papers, there are numerous panels of industry experts discussing a wide variety of topics, from computer graphics to machine interactivity to education. SIGGRAPH offers many full- and half-day courses on state-of-the-art computer graphics topics, as well as shorter "sketch" presentations where artists and researchers discuss their latest work. In 1984, under the Lucasfilm Computer Group, John Lasseter's first computer-animated short, The Adventures of André & Wally B., premiered at SIGGRAPH.
Pixar's first computer-animated short, Luxo Jr., debuted in 1986, and Pixar has debuted numerous shorts at the conference since. SIGGRAPH has several awards programs to recognize outstanding contributions to computer graphics; the most prestigious is the Steven Anson Coons Award for Outstanding Creative Contributions to Computer Graphics, awarded every two years since 1983 to recognize an individual's lifetime achievement in computer graphics. The following conference areas were scheduled for SIGGRAPH 2012 (some areas vary annually):
ACM Student Research Competition
Art Gallery: presents digital and technologically mediated artworks
Art Papers: features the artists and artwork, processes and theoretical frameworks for making art and contextualizing its place in society
Birds of a Feather: informal presentations and demonstrations
Computer Animation Festival: an annual festival for the world's most innovative digital film and video creators
Courses: attendees learn from experts in the field and gain inside knowledge critical to career advancement
Emerging Technologies: presents innovative technologies and applications in several fields, from displays and input devices to collaborative environments and robotics, including technologies that apply to film and game production
Exhibition: presents the newest hardware systems, software tools and creative services from hundreds of companies
International Resources: focusing on the state of computer graphics in different regions of the world, it offers bilingual tours of conference programs, informal translation services, and space for meetings and demonstrations
Job Fair: a place for employers to meet with thousands of job seekers
Keynote Speakers: stories from the most influential practitioners in computer graphics, interactive techniques and related fields
Panels: moderated discussions on important topics, with expert panelists chosen by the organizers to provide a wide range of perspectives
Posters: presents student, in-progress and late-breaking work
Real-Time Live!: a showcase for the latest trends and techniques for pushing the boundaries of interactive visuals
Sandbox: provides an opportunity to get hands-on with the latest, most innovative real-time projects produced over the last 12 months
SIGkids: engages local youngsters with outreach and on-site programs to excite and cultivate the next-next-generation
SIGGRAPH Business Symposium
SIGGRAPH Dailies: each presenter has one minute to present an animation and describe the work
Studio: a place for making and creating at SIGGRAPH
Talks: presentations on recent achievements in all areas of computer graphics and interactive techniques, including art, animation, visual effects, interactivity and engineering
Technical Papers: the premier international forum for disseminating new scholarly work in computer graphics and interactive techniques
Technical Papers Fast Forward: a summary of the Technical Papers
SIGGRAPH Mobile: focusing on mobile computer graphics and its applications, such as augmented reality and interactive apps; at SIGGRAPH Asia this track is called Symposium of Apps
See also: Association for Computing Machinery; ACM SIGGRAPH; ACM Transactions on Graphics; Computer Graphics, a publication of ACM SIGGRAPH; the list of computer science conferences, which contains other academic conferences in computer science. External links: ACM SIGGRAPH website; ACM SIGGRAPH conference publications; ACM SIGGRAPH YouTube; SIGGRAPH 2017 Conference, Los Angeles, CA; SIGGRAPH Asia 2017 Conference, Thailand
Natural user interface
In computing, a natural user interface (NUI), or natural interface, is a user interface that is effectively invisible, and remains invisible as the user continuously learns increasingly complex interactions. The word "natural" is used because most computer interfaces use artificial control devices whose operation has to be learned. An NUI relies on a user being able to transition quickly from novice to expert. While the interface requires learning, that learning is eased through design which gives the user the feeling of being instantly and continuously successful. Thus, "natural" refers to a goal in the user experience: that the interaction comes naturally while interacting with the technology, rather than that the interface itself is natural. This is contrasted with the idea of an intuitive interface, referring to one that can be used without previous learning. Several design strategies have been proposed. One strategy is the use of a "reality user interface" (RUI), also known as "reality-based interface" (RBI) methods. One example of an RUI strategy is to use a wearable computer to render real-world objects "clickable", i.e. so that the wearer can click on any everyday object to make it function as a hyperlink, thus merging cyberspace and the real world.
Because the term "natural" is evocative of the "natural world", RBIs are often confused with NUIs, when in fact they are only one means of achieving them. One example of a strategy for designing an NUI not based on RBI is the strict limiting of functionality and customization, so that users have little to learn in the operation of a device. Provided that the default capabilities match the user's goals, the interface is effortless to use; this is an overarching design strategy in Apple's iOS. Because this design coincides with a direct-touch display, non-designers commonly misattribute the effortlessness of interacting with the device to the multi-touch display, rather than to the design of the software that resides on it. In the 1990s, Steve Mann developed a number of user-interface strategies using natural interaction with the real world as an alternative to a command-line interface or graphical user interface. Mann referred to this work as "natural user interfaces", "direct user interfaces" and "metaphor-free computing". Mann's EyeTap technology embodies an example of a natural user interface.
Mann's use of the word "natural" refers both to action that comes naturally to human users and to the use of nature itself, i.e. physics and the natural environment. A good example of an NUI in both these senses is the hydraulophone when it is used as an input device, in which touching a natural element (water) becomes a way of inputting data. More generally, a class of musical instruments called "physiphones", so named from the Greek words "physika", "physikos" (nature) and "phone" (sound), has been proposed as "nature-based user interfaces". In 2006, Christian Moore established an open research community with the goal of expanding discussion and development related to NUI technologies. In a 2008 conference presentation, "Predicting the Past", August de los Reyes, a Principal User Experience Director of Surface Computing at Microsoft, described the NUI as the next evolutionary phase following the shift from the CLI to the GUI. Of course, this too is an over-simplification, since NUIs include visual elements, and thus graphical user interfaces.
A more accurate description of this concept would be a transition from WIMP to NUI. In the CLI, users had to learn an artificial means of input, the keyboard, and a series of codified inputs that had a limited range of responses and a strict syntax. When the mouse enabled the GUI, users could more easily learn the mouse movements and actions, and were able to explore the interface much more. The GUI relied on metaphors for interacting with on-screen content or objects; the "desktop" and "drag", for example, are metaphors for a visual interface that is ultimately translated back into the strict codified language of the computer. An example of the misunderstanding of the term NUI was demonstrated at the Consumer Electronics Show in 2010: "Now a new wave of products is poised to bring natural user interfaces, as these methods of controlling electronics devices are called, to a broader audience." In 2010, Microsoft's Bill Buxton reiterated the importance of the NUI within Microsoft Corporation with a video discussing technologies which could be used in creating an NUI, and its future potential.
In 2010, Daniel Wigdor and Dennis Wixon provided an operationalization of building natural user interfaces in their book. In it, they distinguish between natural user interfaces, the technologies used to achieve them, and reality-based UI. When Bill Buxton was asked about the iPhone's interface, he responded: "Multi-touch technologies have a long history. To put it in perspective, the original work undertaken by my team was done in 1984, the same year that the first Macintosh computer was released, and we were not the first." Multi-touch is a technology which could enable a natural user interface; however, most UI toolkits used to construct interfaces executed with such technology are traditional GUIs. One example is the work done by Jefferson Han on multi-touch interfaces. In a demonstration at TED in 2006, he showed a variety of means of interacting with on-screen content using both direct manipulations and gestures. For example, to shape an on-screen glutinous mass, Jeff literally "pinches" and prods and pokes it with his fingers.
In a GUI interface for a design application, for example, a user would instead use the metaphor of "tools" to do this, for example, selecting a prod tool, or selecting two par
In computing and telecommunications, a menu is a list of options or commands presented to the user of a computer or communications system. A menu may be a system's entire user interface, or only part of a more complex one. A user chooses an option from a menu by using an input device. Some input methods require linear navigation: the user must move a cursor or otherwise pass from one menu item to another until reaching the selection. On a computer terminal, a reverse video bar may serve as the cursor. Touch user interfaces and menus that accept codes to select menu options without navigation are two examples of non-linear interfaces. Some of the input devices used in menu interfaces are touchscreens, mice, remote controls and microphones. In a voice-activated system, such as interactive voice response, a microphone sends a recording of the user's voice to a speech recognition system, which translates it to a command. A computer using a command line interface may present a list of relevant commands with assigned short-cuts on the screen.
Entering the appropriate short-cut selects a menu item. A more sophisticated solution offers navigation using the mouse; the current selection can then be activated by pressing the enter key. A computer using a graphical user interface presents menus with a combination of text and symbols to represent choices; by clicking on one of the symbols or text, the operator selects the instruction that the symbol represents. A context menu is a menu in which the choices presented to the operator are automatically modified according to the current context in which the operator is working. A common use of menus is to provide convenient access to various operations such as saving or opening a file, quitting a program, or manipulating data. Most widget toolkits provide some form of pop-up menu. Pull-down menus are the type used in menu bars and are most often used for performing actions, whereas pop-up menus are more likely to be used for setting a value, and might appear anywhere in a window. According to traditional human interface guidelines, menu names were always supposed to be verbs, such as "file", "edit" and so on.
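The shortcut-driven selection described above amounts to a small table mapping keys to labels and actions. A minimal sketch in Python (all menu names and actions here are purely illustrative, not drawn from any particular toolkit):

```python
# A table-driven text menu: each short-cut key maps to a (label, action) pair.

def save():
    return "saved"

def quit_program():
    return "quit"

MENU = {
    "s": ("Save", save),
    "q": ("Quit", quit_program),
}

def show_menu():
    """Print the list of options with their assigned short-cuts."""
    for key, (label, _) in MENU.items():
        print(f"[{key}] {label}")

def select(key):
    """Entering the appropriate short-cut selects and runs a menu item."""
    label, action = MENU[key]
    return action()
```

Calling `select("s")` dispatches directly to the Save action without any linear navigation, which is what makes code-based selection a non-linear interface.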
This has been ignored in subsequent user interface developments. A single-word verb, however, is sometimes unclear, so to allow for multiple-word menu names the idea of a vertical menu was invented, as seen in NeXTSTEP. Menus are now also seen in consumer electronics, starting with TV sets and VCRs that gained on-screen displays in the early 1990s, and extending into computer monitors and DVD players. Menus allow the control of settings like tint, contrast and treble, as well as other functions such as channel memory and closed captioning. Other electronics with text-only displays can also have menus, in anything from business telephone systems with digital telephones to weather radios that can be set to respond only to specific weather warnings in a specific area. More recent electronics from the 2000s, such as digital media players, also have menus. Menus are sometimes hierarchically organized, allowing navigation through different levels of the menu structure: selecting a menu entry marked with an arrow will expand it, showing a second menu with options related to the selected entry.
The usability of submenus has been criticized as difficult, because of the narrow strip of the parent entry that must be crossed by the pointer. The steering law predicts that this movement will be slow, and any error in staying within the boundaries of the parent menu entry will hide the submenu. Some techniques proposed to alleviate these errors are keeping the submenu open while moving the pointer diagonally, and using mega menus designed to enhance the scannability and categorization of their contents. In computer menu functions or buttons, an appended ellipsis means that upon selection another dialog will follow, where the user can or must make a choice; if the ellipsis is missing, the function is executed immediately upon selection. "Save": the file will be overwritten without further input. "Save as...": in the following dialog, the user can, for example, select another location, file name or file format. See also: Drop-down menu; Federal Standard 1037C; Hamburger button; Pie menu; Radio button; WIMP. External links: MenUA: A Design Space of Menu Techniques, a site that discusses various menu design techniques.
A word processor is a computer program or device that provides for the input, editing and output of text, plus other features. Early word processors were stand-alone devices dedicated to the function, but current word processors are programs running on general-purpose computers. The functions of a word processor program fall somewhere between those of a simple text editor and a fully featured desktop publishing program, although the distinctions between these three have changed over time and are somewhat unclear. Word processors did not, at the outset, develop out of computer technology; rather, they evolved from the needs of writers. The history of word processing is the story of the gradual automation of the physical aspects of writing and editing, and of the refinement of the technology to make it available to corporations and individuals. The term "word processing" burst into American offices in the early 1970s, centered at first on the idea of reorganizing the work of typists, but the meaning soon shifted toward automated text editing.
At first, the designers of word processing systems combined existing technologies with emerging ones to develop stand-alone equipment, creating a new business distinct from the emerging world of the personal computer. The concept of word processing arose from the more general data processing, which since the 1950s had been the application of computers to business administration. Through history, there have been three types of word processors: mechanical, electronic and software. The first word processing device was patented by Henry Mill for a machine capable of writing so clearly and accurately that its output could not be distinguished from that of a printing press. More than a century later, another patent appeared in the name of William Austin Burt for the typographer. In the late 19th century, Christopher Latham Sholes created the first recognizable typewriter, which, although of a large size, was described as a "literary piano". These mechanical systems could not "process text" beyond changing the position of type, filling empty spaces or jumping lines.
It was not until decades later that the introduction of electricity and electronics into typewriters began to help the writer with the mechanical part of the work. The term "word processing" itself was created in the 1950s by Ulrich Steinhilper, a German IBM typewriter sales executive. However, it did not make its appearance in the 1960s office management or computing literature, though many of the ideas and technologies to which it would be applied were well known; by 1971 the term was recognized by the New York Times as a business "buzz word". Word processing paralleled the more general "data processing", or the application of computers to business administration. Thus by 1972 discussion of word processing was common in publications devoted to business office management and technology, and by the mid-1970s the term would have been familiar to any office manager who consulted business periodicals. By the late 1960s, IBM had developed the IBM MT/ST (Magnetic Tape/Selectric Typewriter). This was a model of the IBM Selectric typewriter from the earlier part of that decade, but built into its own desk and integrated with magnetic tape recording and playback facilities, with controls and a bank of electrical relays.
The MT/ST automated word wrap. The device allowed text to be rewritten and recorded on another tape, and tapes could be shared; it was a revolution for the word processing industry. In 1969 the tapes were replaced by magnetic cards; these memory cards were inserted into the side of an extra device that accompanied the MT/ST, able to read and record the work. In the early 1970s, word processing became computer-based with the development of several innovations. Just before the arrival of the personal computer, IBM developed the floppy disk. In the early 1970s, word-processing systems with CRT screen display editing were designed, and at this time these stand-alone word processing systems were designed and marketed by several pioneering companies. Linolex Systems was founded in 1970 by Robert Oleksiak. Linolex based its technology on floppy drives and software; it was a computer-based system for application in the word processing businesses, and it sold systems through its own sales force. With a base of installed systems at more than 500 customer sites, Linolex Systems sold $3 million worth of systems in 1975, a year before Apple Computer was first incorporated in 1976.
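The word wrap that the MT/ST automated is, at heart, a greedy line-breaking procedure: keep adding words to the current line while they fit, and start a new line when the next word would overflow. A minimal sketch (illustrative only, not the MT/ST's actual logic):

```python
def word_wrap(text, width):
    """Greedy word wrap: place each word on the current line if it fits
    within `width` characters, otherwise start a new line."""
    lines, current = [], ""
    for word in text.split():
        if not current:
            current = word                      # first word on the line
        elif len(current) + 1 + len(word) <= width:
            current += " " + word               # word fits, append it
        else:
            lines.append(current)               # line is full, break here
            current = word
    if current:
        lines.append(current)
    return lines
```

The greedy strategy is what typewriter-era systems and most editors use; more sophisticated typesetters (TeX, for instance) instead minimize raggedness over the whole paragraph.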
At this time, Lexitron Corporation also produced a series of dedicated word processing microcomputers. Lexitron was the first to use a full-size video display screen in its models, by 1978. Lexitron used 5-1/4 inch floppy diskettes, which became the standard in the personal computer field. The program disk was inserted in one drive and the system booted up; the data diskette was then put in the second drive. The operating system and the word processing program were combined in one program. Another of the early word processing adopters was Vydec, which in 1973 created the first modern text processor, the "Vydec Word Processing System". It had multiple built-in functions, including the ability to print documents. The Vydec Word Processing System sold for $12,000 at the time. The Redactron Corporation designed and manufactured editing systems, including correcting/editing typewriters and card units, and a word processor called the Data Secretary. The Burrough
The Xerox Alto is the first computer designed from its inception to support an operating system based on a graphical user interface, using the desktop metaphor. The first machines were introduced on 1 March 1973, a decade before mass-market GUI machines became available. The Alto is contained in a relatively small cabinet and uses a custom central processing unit built from multiple SSI and MSI integrated circuits. Each machine cost tens of thousands of dollars despite its status as a personal computer. Only small numbers were built, but by the late 1970s about 1,000 were in use at various Xerox laboratories, and about another 500 at several universities; total production was about 2,000 systems. The Alto became well known in Silicon Valley, and its GUI was seen as the future of computing. In 1979, Steve Jobs arranged a visit to Xerox PARC, in which Apple Computer personnel would receive a demonstration of the technology from Xerox in exchange for Xerox being able to purchase stock options in Apple. After two visits to see the Alto, Apple engineers used the concepts to introduce the Apple Lisa and Macintosh systems.
Xerox commercialized a modified version of the Alto concepts as the Xerox Star, first introduced in 1981. A complete office system including several workstations, storage and a laser printer cost as much as $100,000, and, like the Alto, the Star had little direct impact on the market. The Alto was conceived in 1972 in a memo written by Butler Lampson, inspired by the oN-Line System developed by Douglas Engelbart and Dustin Lindberg at SRI International, and it was designed by Charles P. Thacker. Industrial design and manufacturing was sub-contracted to Xerox, whose Special Programs Group team included Doug Stewart as Program Manager, Abbey Silverstone (Operations) and Bob Nishimura (Industrial Designer). An initial run of 30 units was produced by Xerox El Segundo, working with John Ellenby at PARC and Doug Stewart and Abbey Silverstone at El Segundo, who were responsible for re-designing the Alto's electronics. Due to the success of the pilot run, the team went on to produce about 2,000 units over the next ten years.
Several Xerox Alto chassis are now on display at the Computer History Museum in Mountain View, California, and running systems are on display at the Living Computer Museum in Seattle, Washington, at the Computer History Museum, and in private hands. For his pioneering design and realization of the Alto, Charles P. Thacker was awarded the 2009 Turing Award of the Association for Computing Machinery on March 9, 2010, and the 2004 Charles Stark Draper Prize was awarded to Thacker, Alan C. Kay, Butler Lampson and Robert W. Taylor for their work on the Alto. The following description is based on the August 1976 Alto Hardware Manual by Xerox PARC. The Alto uses a microcoded design but, unlike many computers, the microcode engine is not hidden from the programmer in a layered design; applications such as Pinball take advantage of this to accelerate performance. The Alto has a bit-slice arithmetic logic unit based on the Texas Instruments 74181 chip, a ROM control store with a writable control store extension, and 128 kB of main memory organized in 16-bit words.
Mass storage is provided by a hard disk drive that uses a removable 2.5 MB one-platter cartridge similar to those used by the IBM 2310. The base machine and one disk drive are housed in a cabinet about the size of a small refrigerator. The Alto blurred the lines between functional elements: rather than a distinct central processing unit with a well-defined electrical interface to storage and peripherals, the Alto ALU interacts directly with hardware interfaces to memory and peripherals, driven by microinstructions that are output from the control store. The microcode machine supports up to 16 cooperative tasks, each with a fixed priority. The emulator task executes the normal instruction set; other tasks serve the display, memory refresh, disk and other I/O functions. As an example, the bitmap display controller is little more than a shift register driven by the microcode. Ethernet is supported by minimal hardware, with a shift register that acts bidirectionally to serialize output words and deserialize input words. Its speed was designed to be 3 Mbit/s because the microcode engine could not go faster and still continue to support the video display, disk activity and memory refresh.
Unlike most minicomputers of the era, the Alto does not support a serial terminal for its user interface. Apart from an Ethernet connection, the Alto's only common output device is a bi-level cathode ray tube display with a tilt-and-swivel base, mounted in portrait orientation rather than the more common "landscape" orientation. Its input devices are a custom detachable keyboard, a three-button mouse and an optional 5-key chorded keyboard; the last two items had been introduced by SRI's On-Line System. In the early mice, the buttons were three narrow bars arranged top to bottom rather than side to side, and the motion was sensed by two wheels perpendicular to each other. These were soon replaced with a ball-type mouse, invented by Ronald E. Rider and developed by Bill English; these were photo-mechanical
Google LLC is an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, a search engine, cloud computing, software and hardware. It is considered one of the Big Four technology companies, alongside Amazon, Apple and Facebook. Google was founded in 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University in California. Together they own about 14 percent of its shares and control 56 percent of the stockholder voting power through supervoting stock. They incorporated Google as a privately held company on September 4, 1998. An initial public offering took place on August 19, 2004, and Google moved to its headquarters in Mountain View, California, nicknamed the Googleplex. In August 2015, Google announced plans to reorganize its various interests as a conglomerate called Alphabet Inc. Google is Alphabet's leading subsidiary and will continue to be the umbrella company for Alphabet's Internet interests; Sundar Pichai was appointed CEO of Google.
The company's rapid growth since incorporation has triggered a chain of products and partnerships beyond Google's core search engine. It offers services designed for work and productivity, email and time management, cloud storage, instant messaging and video chat, language translation and navigation, video sharing, note-taking, and photo organizing and editing. The company leads the development of the Android mobile operating system, the Google Chrome web browser, and Chrome OS, a lightweight operating system based on the Chrome browser. Google has also moved into hardware and has experimented with becoming an Internet carrier. Google.com is the most visited website in the world, and several other Google services figure in the top 100 most visited websites, including YouTube and Blogger. Google was the most valuable brand in the world as of 2017, but it has received significant criticism involving issues such as privacy concerns, tax avoidance, antitrust and search neutrality. Google's mission statement is "to organize the world's information and make it universally accessible and useful".
The company's unofficial slogan, "Don't be evil", was removed from the company's code of conduct around May 2018. Google began in January 1996 as a research project by Larry Page and Sergey Brin when they were both PhD students at Stanford University in Stanford, California. While conventional search engines ranked results by counting how many times the search terms appeared on the page, the two theorized about a better system that analyzed the relationships among websites; they called this new technology PageRank, which determined a website's relevance by the number and importance of the pages that linked back to it. Page and Brin nicknamed their new search engine "BackRub", because the system checked backlinks to estimate the importance of a site; they later changed the name to Google. The domain name for Google was registered on September 15, 1997, and the company was incorporated on September 4, 1998, based in the garage of a friend in California. Craig Silverstein, a fellow PhD student at Stanford, was hired as the first employee. Google was funded by an August 1998 contribution of $100,000 from Andy Bechtolsheim, co-founder of Sun Microsystems.
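The PageRank idea, scoring a page by the number and importance of the pages linking to it, can be illustrated with a simple power iteration over a link graph. This is a toy sketch of the published algorithm, not Google's production system; the damping factor and graph are illustrative:

```python
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to.
    Returns an approximate PageRank score for every page."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start with a uniform score
    for _ in range(iters):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outgoing in links.items():
            if outgoing:
                # A page passes its rank, split evenly, to the pages it links to.
                share = damping * rank[p] / len(outgoing)
                for q in outgoing:
                    new[q] += share
            else:
                # A dangling page (no outgoing links) spreads its rank evenly.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

On a tiny graph such as `{"a": ["b"], "b": ["a", "c"], "c": ["a"]}`, page "a" ends up with the highest score because both "b" and "c" link back to it, which is exactly the backlink-counting intuition behind "BackRub".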
Google received money from three other angel investors in 1998: Amazon.com founder Jeff Bezos, Stanford University computer science professor David Cheriton, and entrepreneur Ram Shriram. Between these initial investors and family, Google raised around $1 million, which allowed them to open their original shop in Menlo Park, California. After some additional small investments through the end of 1998 and into early 1999, a new $25 million round of funding was announced on June 7, 1999, with major investors including the venture capital firms Kleiner Perkins and Sequoia Capital. In March 1999, the company moved its offices to Palo Alto, California, home to several prominent Silicon Valley technology start-ups. The next year, Google began selling advertisements associated with search keywords, against Page and Brin's initial opposition to an advertising-funded search engine; to maintain an uncluttered page design, the advertisements were text-based. In June 2000, it was announced that Google would become the default search engine provider for Yahoo!, one of the most popular websites at the time, replacing Inktomi.
In 2003, after outgrowing two other locations, the company leased an office complex from Silicon Graphics (SGI) at 1600 Amphitheatre Parkway in Mountain View, California. The complex became known as the Googleplex, a play on the word googolplex, the number one followed by a googol of zeroes. Three years later, Google bought the property from SGI for $319 million. By that time, the name "Google
History of the graphical user interface
Some early cathode-ray-tube screens used a light pen, rather than a mouse, as the pointing device. The concept of a multi-panel windowing system was introduced by the first real-time graphic display systems for computers: the SAGE Project and Ivan Sutherland's Sketchpad. In the 1960s, Douglas Engelbart's Augmentation of Human Intellect project at the Augmentation Research Center at SRI International in Menlo Park, California developed the oN-Line System; this computer incorporated multiple windows used to work on hypertext. Engelbart had been inspired, in part, by the memex desk-based information machine suggested by Vannevar Bush in 1945. Much of the early research was based on how children learn; so, the design was based on the childlike primitives of eye-hand coordination, rather than the use of command languages, user-defined macro procedures, or automated transformations of data as used by adult professionals. Engelbart's work directly led to the advances at Xerox PARC, and several people went from SRI to Xerox PARC in the early 1970s.
In 1973, Xerox PARC developed the Alto personal computer. It had a bitmapped screen and was the first computer to demonstrate the desktop metaphor and graphical user interface. It was not a commercial product, but several thousand units were built and were used at PARC, at other Xerox offices, and at several universities for many years. The Alto influenced the design of personal computers during the late 1970s and early 1980s, notably the Three Rivers PERQ, the Apple Lisa and Macintosh, and the first Sun workstations. The GUI was first developed at Xerox PARC by Alan Kay, Larry Tesler, Dan Ingalls, David Smith, Clarence Ellis and a number of other researchers. It used windows and menus to support commands such as opening files, deleting files and moving files. In 1974, work began at PARC on Gypsy, the first bitmap What-You-See-Is-What-You-Get cut-and-paste editor. In 1975, Xerox engineers demonstrated a graphical user interface "including icons and the first use of pop-up menus". In 1981, Xerox introduced a pioneering product, the Star, a workstation incorporating many of PARC's innovations.
Although not commercially successful, the Star influenced future developments, for example at Apple and Sun Microsystems. The Blit, a graphics terminal, was developed at Bell Labs in 1982. Lisp machines, developed at MIT and commercialized by Symbolics and other manufacturers, were early high-end single-user computer workstations with advanced graphical user interfaces and a mouse as an input device. The first workstations from Symbolics came to market in 1981, with more advanced designs in the subsequent years. Beginning in 1979, started by Steve Jobs and led by Jef Raskin, the Apple Lisa and Macintosh teams at Apple Computer continued to develop such ideas. The Lisa, released in 1983, featured a high-resolution stationery-based graphical interface atop an advanced hard-disk-based OS that featured such things as preemptive multitasking and graphically oriented inter-process communication. The comparatively simplified Macintosh, released in 1984 and designed to be lower in cost, was the first commercially successful product to use a multi-panel window interface.
A desktop metaphor was used, in which files looked like pieces of paper and file directories looked like file folders. There was a set of desk accessories, like a calculator and an alarm clock, that the user could place around the screen as desired. The Macintosh, in contrast to the Lisa, used a program-centric rather than document-centric design. Apple revisited the document-centric design, in a limited manner, much later with OpenDoc. There is still some controversy over the amount of influence that Xerox's PARC work, as opposed to previous academic research, had on the GUIs of the Apple Lisa and Macintosh, but it is clear that the influence was extensive, because first versions of Lisa GUIs even lacked icons; these prototype GUIs were at least mouse-driven, but ignored the WIMP concept. Screenshots of the first GUIs of Apple Lisa prototypes show the early designs. Note that Apple engineers visited the PARC facilities, and a number of PARC employees subsequently moved to Apple to work on the Lisa and Macintosh GUI. However, the Apple work extended PARC's, adding manipulatable icons