Hypercube internetwork topology
Hypercube networks are a type of network topology used to connect multiple processors with memory modules and route data. A hypercube network consists of 2^m nodes, which form the vertices of an m-dimensional cube. A hypercube is a multidimensional mesh network with two nodes in each dimension. Due to this similarity, such topologies are grouped into the k-ary d-dimensional mesh topology family, where d represents the number of dimensions and k represents the number of nodes in each dimension. A hypercube interconnection network is formed by connecting N nodes, where N is a power of 2: N = 2^m, where m is the number of bits required to label the nodes in the network. So, if there are 4 nodes in the network, 2 bits are needed to represent all the nodes; the network is constructed by connecting the nodes that differ in exactly one bit of their binary representation. This is referred to as binary labelling.
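The binary labelling rule (connect nodes whose labels differ in exactly one bit) can be sketched in Python; the function name is illustrative, not part of any standard library:

```python
def hypercube_neighbors(node: int, m: int) -> list[int]:
    """Return the labels of the m nodes that differ from `node` in exactly one bit."""
    return [node ^ (1 << k) for k in range(m)]

# In a 3-bit (8-node) hypercube, node 000 is adjacent to 001, 010 and 100.
m = 3
print([format(n, "03b") for n in hypercube_neighbors(0b000, m)])  # ['001', '010', '100']
```

Each node thus has exactly m neighbours, one per dimension, which is why the degree of an m-dimensional hypercube equals m.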
A 3D hypercube internetwork is a cube with 12 edges. A 4D hypercube network can be created by duplicating a 3D network and adding a most significant bit: the new bit is '0' for one 3D hypercube and '1' for the other. The corresponding corners, whose labels differ only in the MSB, are then connected to create the higher-dimensional hypercube; this method can be used to construct any m-bit hypercube from two (m−1)-bit hypercubes. The routing method for a hypercube network is referred to as E-cube routing. The distance between two nodes in the network is given by the Hamming weight of the XOR of their binary labels. For example, the distance between node 1 (001) and node 2 (010) is Hamming_weight(001 XOR 010) = Hamming_weight(011) = 2. E-cube routing is a static routing method that employs the XY-routing algorithm; it is referred to as a deterministic, dimension-ordered routing model. E-cube routing works by traversing the network in the kth dimension, where k is the position of the least significant non-zero bit in the computed distance (the XOR of the source and destination labels).
For example, let the sender's label be '00' and the receiver's label be '11'. The distance between them is 11 (binary), and the least significant non-zero bit is the LSB, so the message first traverses the dimension corresponding to that bit; whether a '0' or a '1' link is taken is determined by the XY-routing algorithm. Several measures of performance are used to evaluate the efficiency of a hypercube network connection against various other network topologies. The degree is the number of nodes adjacent to a particular node; these nodes must be immediate neighbours. For a hypercube of dimension n, the degree is n. The diameter is the maximum number of nodes that a message must pass through on its way from the source to the destination; it gives the worst-case delay in transmitting a message across the network. For a hypercube the diameter is n. The average distance is the number of hops in the shortest path between two nodes, averaged over all pairs. It is given by the formula d_a = (Σ_{d=1}^{r} d·N_d) / (N − 1), where r is the diameter and N_d is the number of nodes at distance d. For hypercubes the average distance is n/2. The bisection width is the least number of wires that must be cut in order to divide the network into two equal halves; it is given as 2^(n−1) for hypercubes.
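The E-cube routing procedure described above can be sketched as follows (a minimal model; the function name and bit conventions are illustrative):

```python
def ecube_route(src: int, dst: int) -> list[int]:
    """Return the sequence of node labels visited from src to dst under E-cube
    routing: at each step, flip the least significant bit in which the current
    node and the destination still differ."""
    path = [src]
    cur = src
    while cur != dst:
        diff = cur ^ dst       # distance = Hamming weight of this XOR
        lowest = diff & -diff  # least significant non-zero bit
        cur ^= lowest          # traverse that dimension
        path.append(cur)
    return path

# Route from 00 to 11 in a 2-cube: 00 -> 01 -> 11, two hops, matching
# the Hamming distance of 2.
print([format(n, "02b") for n in ecube_route(0b00, 0b11)])  # ['00', '01', '11']
```

Because the dimensions are always corrected in a fixed order, the route is deterministic and its length always equals the Hamming distance between the two labels.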
Tamiko Thiel is an internationally active American media artist who specializes in "exploring the interplay of place, the body and cultural identity". Thiel attended Stanford University and graduated with a B.S. in Product Design Engineering with an emphasis on human factors design in 1979. She went on to receive her M.S. in Mechanical Engineering in 1983 from the Massachusetts Institute of Technology. There, she studied human-machine design at the school's Biomechanics Lab and computer graphics at the precursors to the Media Lab. In 1991, Thiel received her Diploma in Applied Graphics, specializing in video installation art, from the Academy of Fine Arts in Munich, Germany. Thiel's first career was in product design; she worked at Thinking Machines Corporation with Danny Hillis, Richard Feynman and Brewster Kahle, heading the design team that created the boolean n-cube hypercube chassis that defined the Connection Machine CM-1 and CM-2 supercomputers' appearance. From November 1994 to February 1996 she worked as the creative director and producer of the initial system for the Starbright World project, working with Steven Spielberg to create an online interactive 3D virtual world for ill children.
Since then, Thiel has had many other exhibits, some of the most notable being her shows "Beyond Manzanar", "The Travels of Mariko Horo" and "Shades of Absence". She is one of the founding members of Manifest.AR, a group of artists focused on augmented reality, with which she staged spontaneous interventions at the Corcoran Gallery of Art in 2013, Tate Modern in 2012, the Venice Biennial in 2011 and the Museum of Modern Art in 2010. Thiel's artwork for the last 15 years has focused on "site specific virtual reality installations"; her art has been displayed in international venues including the International Center of Photography, DUMBO, the Institute of Contemporary Art, the Corcoran Gallery of Art, ZKM, the Tokyo Metropolitan Museum of Photography, Art Gwanju, Fondazione Querini Stampalia/Venice, Ars Electronica, SIGGRAPH and ISEA. To describe the venues in which Thiel's artwork is on display, one must take into consideration that her chosen platform is augmented reality. With the use of Layar, an augmented reality viewer, Thiel's works can be layered over locations such as the New York Stock Exchange, the Tate Modern in London, New York's Museum of Modern Art, the Berlin Wall, Piazza San Marco in Venice and many other locations.
Rockefeller Foundation Cultural Innovation Fund Award - 2012 IBM Innovation Award for artistic creation in art and technology - 2009 World Technology Award in the category Art - 2009 Hauptstadtkulturfonds Award - 2007 City of Munich Prize for Junge Kunst/Neue Medien - 2006 Wired Magazine and the Asian American Arts Foundation: Grant Award - 1998 Bay Area Video Coalition: Artist Equipment Access Award - 1996 Global Information Infrastructure Awards: Winner of Next Generation Award - 1996 Asahi Shimbun: Digital Entertainment Award - 1996 Cyberedge: Virtual Reality Product of the Year Award - 1995 "Cyber-Animism and Augmented Dreams," Leonardo Electronic Almanac, April 2011. "The Design of the Connection Machine," The Designed World: Images, Environments. Richard Buchanan, Dennis Doordan and Victor Margolin, Ed. Berg, New York, pp. 155–166. "Where Stones Can Speak: Dramatic Encounters in Interactive 3D Virtual Reality," chapter in the book Third Person: Authoring and Exploring Vast Narratives, ed.
Pat Harrigan & Noah Wardrip-Fruin, MIT Press, Cambridge, MA, USA. "Life at the Interface of Art and Technology," ON SCREEN, 911 Media Arts Center, Seattle, WA, USA. Winter 2007, Vol. 18 No. 1, pp. 32–34. "Beyond Manzanar: Creating Dramatic Structure in Ergodic Narratives," published in the conference proceedings for Technologies for Interactive Digital Storytelling and Entertainment, Germany, June 24–26, 2004, Springer Berlin / Heidelberg. "Veiled Fantasies," Site Street Online Journal, Fall 2002. "Dramatic structure in interactive virtual reality," Aedo-ba, Villa Tosca Design Management Center, Milan, Nr. 03/04, Fall 2001, pp. 40–45. "Beyond Manzanar: Constructing Meaning in Interactive Virtual Reality," COSIGN 2001 Conference Proceedings, Holland. "Machine Sapiens," Ylem Newsletter, Vol. 15, No. 6, Nov./Dec. 1995, pp. 5–6. "The Design of the Connection Machine," InterCommunication Magazine, InterCommunication Center of the NTT, Japan, No. 8, Spring 1994, pp. 128–135. "The Design of the Connection Machine," DesignIssues, The MIT Press, Cambridge, MA, Vol. 10, No.
1, Spring 1994. Pp. 5–18. "Vijfenzestigduizend Processoren in Twaalf Dimensies," Computable, Netherlands, 26E Jaargang, Week 22, 4 June 1993, pp. 25, 27. "Machina Cogitans," Genetic Art - Artificial Life, ARS ELECTRONICA, Austria. Pp. 186–194 "The Connection Machine," AXIS Design Magazine, Number 45, Japan, 1992 With Houshmand, Zara. "Beyond Manzanar", Factorial,! Factorial Press, San Diego, CA, 2003 With Houshmand, Zara. "Beyond Manzanar," on front Page of NW Nikkei / North American Post, Vol. 18 No.17, April 21, 2001, pp. 1,5. With Houshmand, Zara. "Beyond Manzanar," SIGGRAPH 2001 Electronic Art and Animation Catalog and CD-ROM, ACM SIGGRAPH, New York, page 125. Tamiko Thiel, "Beyond Manzanar: Constructing Meaning in Interactive Virtual Reality", cosignconference.org
The Manhattan Project was a research and development undertaking during World War II that produced the first nuclear weapons. It was led by the United States with the support of the United Kingdom and Canada. From 1942 to 1946, the project was under the direction of Major General Leslie Groves of the U.S. Army Corps of Engineers. Nuclear physicist Robert Oppenheimer was the director of the Los Alamos Laboratory that designed the actual bombs; the Army component of the project was designated the Manhattan District. Along the way, the project absorbed its earlier British counterpart, Tube Alloys. The Manhattan Project began modestly in 1939, but grew to employ more than 130,000 people and cost nearly US$2 billion. Over 90% of the cost was for building factories and producing fissile material, with less than 10% for development and production of the weapons. Research and production took place at more than 30 sites across the United States, the United Kingdom and Canada. Two types of atomic bombs were developed concurrently during the war: a simple gun-type fission weapon and a more complex implosion-type nuclear weapon.
The Thin Man gun-type design proved impractical to use with plutonium, so a simpler gun-type weapon called Little Boy was developed that used uranium-235, an isotope that makes up only 0.7 percent of natural uranium. Chemically identical to the most common isotope, uranium-238, and with an almost identical mass, it proved difficult to separate the two. Three methods were employed for uranium enrichment: electromagnetic, gaseous and thermal. Most of this work was performed at the Clinton Engineer Works in Tennessee. In parallel with the work on uranium was an effort to produce plutonium. After the feasibility of the world's first artificial nuclear reactor was demonstrated in Chicago at the Metallurgical Laboratory, the laboratory designed the X-10 Graphite Reactor at Oak Ridge and the production reactors in Hanford, Washington, in which uranium was irradiated and transmuted into plutonium; the plutonium was then chemically separated from the uranium using the bismuth phosphate process. The Fat Man plutonium implosion-type weapon was developed in a concerted design and development effort by the Los Alamos Laboratory.
The project was also charged with gathering intelligence on the German nuclear weapon project. Through Operation Alsos, Manhattan Project personnel served in Europe, sometimes behind enemy lines, where they gathered nuclear materials and documents and rounded up German scientists. Despite the Manhattan Project's tight security, Soviet atomic spies penetrated the program. The first nuclear device detonated was an implosion-type bomb at the Trinity test, conducted at New Mexico's Alamogordo Bombing and Gunnery Range on 16 July 1945. Little Boy and Fat Man bombs were used a month later in the atomic bombings of Hiroshima and Nagasaki, respectively. In the immediate postwar years, the Manhattan Project conducted weapons testing at Bikini Atoll as part of Operation Crossroads, developed new weapons, promoted the development of the network of national laboratories, supported medical research into radiology and laid the foundations for the nuclear navy. It maintained control over American atomic weapons research and production until the formation of the United States Atomic Energy Commission in January 1947.
The discovery of nuclear fission by German chemists Otto Hahn and Fritz Strassmann in 1938, and its theoretical explanation by Lise Meitner and Otto Frisch, made the development of an atomic bomb a theoretical possibility. Fears that a German atomic bomb project would develop one first arose especially among scientists who were refugees from Nazi Germany and other fascist countries. In August 1939, Hungarian-born physicists Leó Szilárd and Eugene Wigner drafted the Einstein–Szilárd letter, which warned of the potential development of "extremely powerful bombs of a new type" and urged the United States to take steps to acquire stockpiles of uranium ore and accelerate the research of Enrico Fermi and others into nuclear chain reactions. They had it delivered to President Franklin D. Roosevelt. Roosevelt called on Lyman Briggs of the National Bureau of Standards to head the Advisory Committee on Uranium to investigate the issues raised by the letter. Briggs held a meeting on 21 October 1939, attended by Szilárd, Wigner and Edward Teller.
The committee reported back to Roosevelt in November that uranium "would provide a possible source of bombs with a destructiveness vastly greater than anything now known." The Advisory Committee on Uranium became the National Defense Research Committee's (NDRC) Committee on Uranium when that organization was formed on 27 June 1940. Briggs proposed spending $167,000 on research into uranium, particularly the uranium-235 isotope, and the recently discovered plutonium. On 28 June 1941, Roosevelt signed Executive Order 8807, which created the Office of Scientific Research and Development, with Vannevar Bush as its director; the office was empowered to engage in large engineering projects in addition to research. The NDRC Committee on Uranium became the S-1 Section of the OSRD. In Britain, Frisch and Rudolf Peierls at the University of Birmingham had made a breakthrough investigating the critical mass of uranium-235 in June 1939; their calculations indicated that it was within an order of magnitude of 10 kilograms, small enough to be carried by a bomber of the day.
Their March 1940 Frisch–Peierls memorandum initiated the British atomic bomb project and its Maud Committee, which unanimously recommended pursuing the development of an atomic bomb.
RAID is a data storage virtualization technology that combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both. This was in contrast to the previous concept of reliable mainframe disk drives referred to as "single large expensive disk". Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance; the different schemes, or data distribution layouts, are named by the word "RAID" followed by a number, for example RAID 0 or RAID 1. Each scheme, or RAID level, provides a different balance among the key goals: reliability, availability and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors, as well as against failures of whole physical drives; the term "RAID" was invented by David Patterson, Garth A. Gibson, Randy Katz at the University of California, Berkeley in 1987.
In their June 1988 paper "A Case for Redundant Arrays of Inexpensive Disks", presented at the SIGMOD conference, they argued that the top-performing mainframe disk drives of the time could be beaten on performance by an array of the inexpensive drives developed for the growing personal computer market. Although failures would rise in proportion to the number of drives, by configuring for redundancy, the reliability of an array could far exceed that of any large single drive. Although not yet using that terminology, the technologies of the five levels of RAID named in the June 1988 paper were used in various products prior to the paper's publication, including the following: Mirroring was well established in the 1970s, including, for example, in Tandem NonStop Systems. In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was subsequently named RAID 4. Around 1983, DEC began shipping subsystem-mirrored RAID 1 disk drives as part of its HSC50 subsystem. In 1986, Clark et al. at IBM filed a patent disclosing what was subsequently named RAID 5. Around 1988, the Thinking Machines' DataVault used error correction codes in an array of disk drives.
A similar approach was used in the early 1960s on the IBM 353. Industry manufacturers later redefined the RAID acronym to stand for "Redundant Array of Independent Disks". Many RAID levels employ an error protection scheme called "parity", a widely used method in information technology to provide fault tolerance in a given set of data. Most use simple XOR, but RAID 6 uses two separate parities based on addition and multiplication in a particular Galois field or Reed–Solomon error correction. RAID can also provide data security with solid-state drives without the expense of an all-SSD system. For example, a fast SSD can be mirrored with a mechanical drive. For this configuration to provide a significant speed advantage, an appropriate controller is needed that uses the fast SSD for all read operations. Adaptec calls this "hybrid RAID". A number of standard schemes have evolved; these are called levels. Originally, there were five RAID levels, but many variations have evolved, notably several nested levels and many non-standard levels.
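The XOR parity scheme can be sketched as a toy model in Python (not any particular controller's implementation): the parity block is the XOR of the data blocks, so any single missing block equals the XOR of all the survivors.

```python
from functools import reduce

def parity_block(blocks: list[bytes]) -> bytes:
    """XOR together equally sized blocks; used both to compute the parity block
    and to rebuild a single missing block from the surviving ones."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = parity_block(data)                      # written to the parity drive
# Drive 2 fails: XOR the surviving data blocks with parity to rebuild it.
restored = parity_block([data[0], data[2], parity])
assert restored == b"BBBB"
```

This is why XOR parity tolerates exactly one lost drive: with two blocks missing, the remaining XOR no longer identifies either one, which is the gap RAID 6's second, independent parity closes.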
RAID levels and their associated data formats are standardized by the Storage Networking Industry Association in the Common RAID Disk Drive Format standard: RAID 0 RAID 0 consists of striping, but no mirroring or parity. Compared to a spanned volume, the capacity of a RAID 0 volume is the same, but because striping distributes the contents of each file among all disks in the set, the failure of any disk causes all files, and the entire RAID 0 volume, to be lost. A broken spanned volume at least preserves the files on the unfailing disks. The benefit of RAID 0 is that the throughput of read and write operations to any file is multiplied by the number of disks because, unlike with spanned volumes, reads and writes are done concurrently; the cost is complete vulnerability to drive failures. RAID 1 RAID 1 consists of data mirroring, without striping. Data is written identically to two or more drives. Thus, any read request can be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that accesses the data first, improving performance.
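The round-robin striping of RAID 0 can be sketched as a simple address mapping (a toy model; real controllers stripe in larger fixed-size chunks):

```python
def stripe_location(block: int, num_disks: int) -> tuple[int, int]:
    """Map a logical block number to (disk index, block offset on that disk)
    under round-robin RAID 0 striping."""
    return block % num_disks, block // num_disks

# With 4 disks, consecutive logical blocks alternate across all drives,
# which is why reads and writes can proceed concurrently.
print([stripe_location(b, 4) for b in range(8)])
# [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]
```

The mapping also makes the failure mode visible: every file larger than one block has pieces on every disk, so losing any single disk destroys the whole volume.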
Sustained read throughput, if the controller or software is optimized for it, approaches the sum of throughputs of every drive in the set, just as for RAID 0. Actual read throughput of most RAID 1 implementations is slower than the fastest drive. Write throughput is always slower because every drive must be updated, and the slowest drive limits the write performance. The array continues to operate as long as at least one drive is functioning. RAID 2 RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation is synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code parity is stored on at least one parity drive; this level is of historical significance only. RAID 3 RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential byte is on a different drive. Parity is stored on a dedicated parity drive. Although implementations exist, RAID 3 is not commonly used in practice.
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation, so as to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers, which require fast processing times. A number is, in general, represented to a fixed number of significant digits and scaled using an exponent in some fixed base. A number that can be represented exactly is of the following form: significand × base^exponent, where the significand is an integer, the base is an integer greater than or equal to two, and the exponent is an integer. For example: 1.2345 = 12345 × 10^−4, where 12345 is the significand, 10 the base, and −4 the exponent. The term floating point refers to the fact that a number's radix point can "float"; its position is indicated by the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A floating-point system can be used to represent, with a fixed number of digits, numbers of different orders of magnitude: e.g. the distance between galaxies or the diameter of an atomic nucleus can be expressed with the same unit of length.
The result of this dynamic range is that the numbers that can be represented are not uniformly spaced. Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s, the most commonly encountered representations are those defined by the IEEE. The speed of floating-point operations, commonly measured in terms of FLOPS, is an important characteristic of a computer system for applications that involve intensive mathematical calculations. A floating-point unit is a part of a computer system specially designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number as a string of digits. There are several mechanisms by which strings of digits can represent numbers. In common mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character there. If the radix point is not specified, the string implicitly represents an integer and the unstated radix point would be off the right-hand end of the string, next to the least significant digit.
In fixed-point systems, a position in the string is specified for the radix point. So a fixed-point scheme might be to use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345. In scientific notation, the given number is scaled by a power of 10, so that it lies within a certain range—typically between 1 and 10, with the radix point appearing after the first digit; the scaling factor, as a power of ten, is indicated separately at the end of the number. For example, the orbital period of Jupiter's moon Io is 152,853.5047 seconds, a value that would be represented in standard-form scientific notation as 1.528535047×10^5 seconds. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of: A signed digit string of a given length in a given base; this digit string is referred to as the significand, mantissa, or coefficient. The length of the significand determines the precision; the radix point position is assumed always to be somewhere within the significand—often just after or just before the most significant digit, or to the right of the rightmost digit.
This article follows the convention that the radix point is set just after the most significant digit. A signed integer exponent. To derive the value of the floating-point number, the significand is multiplied by the base raised to the power of the exponent, equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent—to the right if the exponent is positive or to the left if the exponent is negative. Using base-10 as an example, the number 152,853.5047, which has ten decimal digits of precision, is represented as the significand 1,528,535,047 together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 10^5 to give 1.528535047×10^5, or 152,853.5047. In storing such a number, the base need not be stored, since it will be the same for the entire range of supported numbers and can thus be inferred. Symbolically, this final value is: (s / b^(p−1)) × b^e, where s is the
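The decimal example above can be reproduced with a short calculation (a sketch of the convention described, not of any particular hardware format):

```python
significand = 1_528_535_047   # ten decimal digits of precision (p = 10)
base = 10
exponent = 5                  # radix point assumed just after the first digit

p = len(str(significand))
# value = (s / b^(p-1)) * b^e: shift the radix point to just after the
# most significant digit, then scale by the exponent.
value = significand / base ** (p - 1) * base ** exponent
print(value)                  # close to 152853.5047
```

Note that the division is performed in binary floating point here, so the result is itself an approximation of the exact decimal value, which is precisely the range-versus-precision trade-off the text describes.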
The DataVault was Thinking Machines' mass storage system. It stored five gigabytes of data, expandable to ten gigabytes, with transfer rates of 40 megabytes per second. Eight DataVaults could be operated in parallel for a combined data transfer rate of 320 megabytes per second for up to 80 gigabytes of data. Each DataVault unit stored its data in an array of 39 individual disk drives, with the data spread across the drives. Each 64-bit data chunk received from the I/O bus was split into two 32-bit words. After verifying parity, the DataVault controller added 7 bits of Error Correcting Code and stored the resulting 39 bits on the 39 individual drives. Subsequent failure of any one of the 39 drives would not impair reading of the data, since the ECC code allows any single-bit error to be detected and corrected. Although operation is possible with a single failed drive, three spare drives were available to replace failed units until they were repaired; the ECC codes permit 100% recovery of the data on any one failed disk, allowing a new copy of this data to be reconstructed and written onto the replacement disk.
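The single-error correction described above can be illustrated with a textbook Hamming-code sketch in Python (not the DataVault's actual encoder; its 7-bit code over 32 data bits applies the same principle, with an extra bit typically reserved for double-error detection):

```python
def syndrome(code: list[int]) -> int:
    """XOR of the 1-indexed positions of all set bits; zero for a valid codeword."""
    s = 0
    for pos in range(1, len(code)):
        if code[pos]:
            s ^= pos
    return s

def hamming_encode(data_bits: list[int]) -> list[int]:
    """Classic Hamming code: data bits at non-power-of-two positions (1-indexed),
    parity bits at power-of-two positions, chosen so the syndrome is zero."""
    m = len(data_bits)
    r = 0
    while 2 ** r < m + r + 1:     # smallest r with 2^r >= m + r + 1
        r += 1
    n = m + r
    code = [0] * (n + 1)          # index 0 unused
    it = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):       # not a power of two: a data position
            code[pos] = next(it)
    s = syndrome(code)
    for k in range(r):            # binary decomposition of s sets the parity bits
        if s & (1 << k):
            code[1 << k] = 1
    return code[1:]

def hamming_correct(received: list[int]) -> list[int]:
    """Correct a single flipped bit: the syndrome names the corrupted position."""
    code = [0] + list(received)
    s = syndrome(code)
    if s:
        code[s] ^= 1
    return code[1:]

word = [1, 0, 1, 1]
cw = hamming_encode(word)
bad = list(cw)
bad[4] ^= 1                       # corrupt the bit at position 5
assert hamming_correct(bad) == cw
```

In the DataVault's layout, each of the 39 code bits lives on its own drive, so a single failed drive corresponds exactly to the single-bit error this construction corrects.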
Once this recovery is complete, the database is considered to be healed. In today's terminology this would be labeled a RAID 2 subsystem; it shipped before the term RAID was coined. The DataVault was a striking example of industrial design. Instead of the usual rectilinear box, the cabinet had a gentle curve that made it look like an information desk or a bartender's station. US patent 4899342, "Method and apparatus for operating multi-unit array of memories", issued 1990-02-06, assigned to Thinking Machines Corporation.
William Daniel "Danny" Hillis is an American inventor, entrepreneur and writer, known for his work in computer science. He is best known as the founder of Thinking Machines Corporation, a parallel supercomputer manufacturer, and was subsequently a fellow at Walt Disney Imagineering. More recently, Hillis co-founded Applied Minds, a technology R&D think-tank, and is co-founder of Applied Invention, an interdisciplinary group of engineers and artists that develops technology solutions in partnership with leading companies and entrepreneurs. Hillis is a visiting professor at the MIT Media Lab, Judge Widney Professor of Engineering and Medicine at the University of Southern California, Professor of Research Medicine at the Keck School of Medicine of USC, and Research Professor of Engineering at the USC Viterbi School of Engineering. He is the principal investigator of the National Cancer Institute's Physical Sciences in Oncology Laboratory at USC. Born September 25, 1956, in Baltimore, Danny Hillis spent much of his childhood living overseas, in Europe and Asia.
He attended the Massachusetts Institute of Technology and received his bachelor of science in mathematics in 1978. As an undergraduate, he worked at the MIT Logo Laboratory under Seymour Papert, developing computer hardware and software for children. During this time, he designed computer-oriented toys and games for the Milton Bradley Company. While still in college, he co-founded Terrapin Inc., a producer of computer software for elementary schools. As a graduate student at the MIT Computer Science and Artificial Intelligence Laboratory, Hillis designed tendon-controlled robot arms and a touch-sensitive robot "skin." During his college years, Hillis built a computer composed of Tinkertoys. It has been displayed at the Boston Computer Museum and the Boston Museum of Science, and is now exhibited at the Computer History Museum in Mountain View, California. At MIT, Hillis began to study the physical limitations of computation and the possibility of building parallel computers; this work culminated in the design of a massively parallel computer with 64,000 processors.
He named it the Connection Machine, and it became the topic of his PhD, for which he received the 1985 Association for Computing Machinery Doctoral Dissertation Award. Hillis earned his doctorate as a Hertz Foundation Fellow at the Massachusetts Institute of Technology, under the mentorship of Marvin Minsky, Claude Shannon and Gerald Sussman, receiving his PhD in 1988. He served as an adjunct professor at the MIT Media Lab, where he wrote The Pattern on the Stone. Hillis has founded a number of creative technology companies, most notably Thinking Machines Corporation, Applied Minds, Metaweb Technologies, Applied Proteomics and Applied Invention. Hillis is a prolific inventor, holding over 300 patents in fields including parallel computers, touch interfaces, RAID disk arrays, forgery prevention methods, mechanical devices, bio-medical techniques, multicore multiprocessors and wormhole routing in parallel processing. As a graduate student at MIT, Hillis co-founded Thinking Machines Corporation to produce and market parallel computers, developing a series of influential products called the Connection Machine.
The Connection Machine was used in demanding data-intensive applications. It was used by the Stanford Exploration Project for oil exploration and for pioneering data mining applications by American Express, as well as many scientific applications at organizations including Schlumberger, Harvard University, the University of Tokyo, the Los Alamos National Laboratory, NASA, Sandia National Laboratories, the National Center for Supercomputing Applications, the Army High Performance Computing Research Center, the University of California, Berkeley, the University of Wisconsin at Madison, and Syracuse University. In addition to designing the company's major products, Hillis worked with users of his machine, applying it to problems in astrophysics, aircraft design, financial analysis, computer graphics, medical imaging, image understanding, materials science and subatomic physics. At Thinking Machines, he built a team of scientists and engineers, including people in the field as well as those who became leaders and innovators in multiple industries.
The team included Sydney Brenner, Richard Feynman, Brewster Kahle and Eric Lander. Among the users of Thinking Machines computers was Sergey Brin, who went on to co-found Google, and who used the Connection Machine CM-2 to write parallel processing software while an undergraduate at the University of Maryland. In 1996, Hillis joined The Walt Disney Company in the newly created role of Disney Fellow and as Vice President of Research and Development at Disney Imagineering. He developed new technologies and business strategies for Disney's theme parks, motion pictures and consumer products businesses. He designed new theme park rides, a full-sized walking dinosaur and various micro-mechanical devices. In 2000, Hillis co-founded the R&D think-tank Applied Minds with his Disney colleague Bran Ferren. Drawing on the founders' interdisciplinary backgrounds, Applied Minds built a team of engineers and designers that provided design and technology services for clients; the uniquely creative environment and the diverse projects it undertook gained Applied Minds abundant media attention.
"It's. Only here, all the candy plugs in," said an article in Wired magazine. Work done at the firm covered the gamut of industries and application domains, including satellites, military helicopters, educational facilities. Wh