OCLC Online Computer Library Center, Incorporated, d/b/a OCLC, is an American nonprofit cooperative organization "dedicated to the public purposes of furthering access to the world's information and reducing information costs". It was founded in 1967 as the Ohio College Library Center. OCLC and its member libraries cooperatively produce and maintain WorldCat, the largest online public access catalog in the world. OCLC is funded by the fees that libraries pay for its services, and it maintains the Dewey Decimal Classification system. OCLC began in 1967, as the Ohio College Library Center, through a collaboration of university presidents, vice presidents, and library directors who wanted to create a cooperative computerized network for libraries in the state of Ohio. The group first met on July 5, 1967, on the campus of the Ohio State University to sign the articles of incorporation for the nonprofit organization, and hired Frederick G. Kilgour, a former Yale University medical school librarian, to design the shared cataloging system.
Kilgour wished to merge the latest information storage and retrieval system of the time, the computer, with the oldest, the library. The plan was to merge the catalogs of Ohio libraries electronically through a computer network and database in order to streamline operations, control costs, and increase efficiency in library management, bringing libraries together to cooperatively keep track of the world's information so as to best serve researchers and scholars. The first library to do online cataloging through OCLC was the Alden Library at Ohio University, on August 26, 1971; this was the first online cataloging by any library worldwide. Membership in OCLC is based on use of services and contribution of data. Between 1967 and 1977, OCLC membership was limited to institutions in Ohio, but in 1978, a new governance structure was established that allowed institutions from other states to join. In 2002, the governance structure was again modified to accommodate participation from outside the United States.
As OCLC expanded services in the United States outside Ohio, it relied on establishing strategic partnerships with "networks", organizations that provided training and marketing services. By 2008, there were 15 independent United States regional service providers. OCLC networks played a key role in OCLC governance, with networks electing delegates to serve on the OCLC Members Council. During 2008, OCLC commissioned two studies to look at distribution channels, and in early 2009 it negotiated new contracts with the former networks and opened a centralized support center. OCLC provides bibliographic and full-text information to anyone. OCLC and its member libraries cooperatively produce and maintain WorldCat—the OCLC Online Union Catalog, the largest online public access catalog in the world. WorldCat has holding records from public and private libraries worldwide. The Open WorldCat program, launched in late 2003, exposed a subset of WorldCat records to Web users via popular Internet search and bookselling sites.
In October 2005, the OCLC technical staff began a wiki project, WikiD, allowing readers to add commentary and structured-field information associated with any WorldCat record; WikiD was later phased out. The Online Computer Library Center acquired the trademark and copyrights associated with the Dewey Decimal Classification System when it bought Forest Press in 1988. A browser for books with their Dewey Decimal Classifications was available until July 2013. Until August 2009, when it was sold to Backstage Library Works, OCLC owned a preservation microfilm and digitization operation called the OCLC Preservation Service Center, with its principal office in Bethlehem, Pennsylvania. The reference management service QuestionPoint provides libraries with tools to communicate with users; this around-the-clock reference service is provided by a cooperative of participating global libraries. Starting in 1971, OCLC produced catalog cards for members alongside its shared online catalog. OCLC commercially sells software, such as CONTENTdm for managing digital collections.
It offers the bibliographic discovery system WorldCat Discovery, which allows library patrons to use a single search interface to access an institution's catalog, database subscriptions, and more. OCLC has been conducting research for the library community for more than 30 years. In accordance with its mission, OCLC makes its research outcomes known through various publications; these publications, including journal articles, reports, and presentations, are available through the organization's website.
OCLC Publications – Research articles from various journals, including Code4Lib Journal, OCLC Research, Reference & User Services Quarterly, College & Research Libraries News, Art Libraries Journal, and National Education Association Newsletter. The most recent publications are displayed first, and all archived resources, starting in 1970, are available.
Membership Reports – A number of significant reports on topics ranging from virtual reference in libraries to perceptions about library funding.
Newsletters – Current and archived newsletters for the library and archive community.
Presentations – Presentations from both guest speakers and OCLC researchers at conferences and other events. The presentations are organized into five categories: conference presentations, Dewey presentations, Distinguished Seminar Series, guest presentations, and research staff presentations.
Lawrence Livermore National Laboratory
Lawrence Livermore National Laboratory is a federal research facility in Livermore, California, United States, founded by the University of California, Berkeley, in 1952. A Federally Funded Research and Development Center, it is funded by the U.S. Department of Energy and managed and operated by Lawrence Livermore National Security, LLC, a partnership of the University of California, Bechtel, BWX Technologies, AECOM, and Battelle Memorial Institute in affiliation with the Texas A&M University System. In 2012, the laboratory had the synthetic chemical element livermorium named after it. LLNL is self-described as "a premier research and development institution for science and technology applied to national security." Its principal responsibility is ensuring the safety and reliability of the nation's nuclear weapons through the application of advanced science and technology. The Laboratory applies its special expertise and multidisciplinary capabilities to preventing the proliferation and use of weapons of mass destruction, bolstering homeland security, and solving other nationally important problems, including energy and environmental security, basic science, and economic competitiveness.
The Laboratory is located on a one-square-mile site at the eastern edge of Livermore. It also operates a 7,000-acre remote experimental test site, called Site 300, situated about 15 miles southeast of the main lab site. LLNL has an annual budget of about $1.5 billion and a staff of 5,800 employees. LLNL was established in 1952 as the University of California Radiation Laboratory at Livermore, an offshoot of the existing UC Radiation Laboratory at Berkeley. It was intended to spur innovation and provide competition to the nuclear weapon design laboratory at Los Alamos in New Mexico, home of the Manhattan Project that developed the first atomic weapons. Edward Teller and Ernest Lawrence, director of the Radiation Laboratory at Berkeley, are regarded as the co-founders of the Livermore facility. The new laboratory was sited at a former World War II naval air station. It was home to several UC Radiation Laboratory projects that were too large for their location in the Berkeley Hills above the UC campus, including one of the first experiments in the magnetic approach to confined thermonuclear reactions.
About half an hour southeast of Berkeley, the Livermore site provided much greater security for classified projects than an urban university campus. Lawrence tapped Herbert York, age 32, to run Livermore. Under York, the Lab had four main programs: Project Sherwood, Project Whitney, diagnostic weapon experiments, and a basic physics program. York and the new lab embraced the Lawrence "big science" approach, tackling challenging projects with physicists, chemists, and computational scientists working together in multidisciplinary teams. Lawrence died in August 1958, and shortly after, the university's board of regents named both laboratories for him, as the Lawrence Radiation Laboratory. The Berkeley and Livermore laboratories have had close relationships on research projects, business operations, and staff. The Livermore Lab was established as a branch of the Berkeley laboratory and was not severed administratively from it until 1971. To this day, in official planning documents and records, Lawrence Berkeley National Laboratory is designated as Site 100, Lawrence Livermore National Lab as Site 200, and LLNL's remote test location as Site 300.
The laboratory was renamed Lawrence Livermore Laboratory in 1971. On October 1, 2007, LLNS assumed management of LLNL from the University of California, which had managed and operated the Laboratory since its inception 55 years before. The laboratory was honored in 2012 by having the synthetic chemical element livermorium named after it. The LLNS takeover of the laboratory has been controversial. In May 2013, an Alameda County jury awarded over $2.7 million to five former laboratory employees who were among 430 employees LLNS laid off during 2008. The jury found that LLNS breached a contractual obligation to terminate the employees only for "reasonable cause." The five plaintiffs have pending age discrimination claims against LLNS, which will be heard by a different jury in a separate trial, and 125 co-plaintiffs are awaiting trial on similar claims against LLNS. The May 2008 layoff was the first layoff at the laboratory in nearly 40 years. On March 14, 2011, the City of Livermore expanded the city's boundaries to annex LLNL and move it within the city limits.
The unanimous vote by the Livermore city council expanded Livermore's southeastern boundaries to cover 15 land parcels comprising 1,057 acres that make up the LLNL site, which had previously been an unincorporated area of Alameda County. The LLNL campus continues to be owned by the federal government. From its inception, Livermore focused on new weapon design concepts; although its earliest designs were unsuccessful, the lab persevered and its subsequent designs proved successful. In 1957, the Livermore Lab was selected to develop the warhead for the Navy's Polaris missile; this warhead required numerous innovations to fit a nuclear warhead into the small confines of the missile nosecone. During the Cold War, many Livermore-designed warheads entered service; these were used in missiles ranging in size from the Lance surface-to-surface tactical missile to the megaton-class Spartan antiballistic missile. Over the years, LLNL designed the following warheads: W27 (Regulus cruise missile).
Charles E. Leiserson
Charles Eric Leiserson is a computer scientist specializing in the theory of parallel computing and distributed computing, and their practical applications. As part of this effort, he developed the Cilk multithreaded language. He invented the fat-tree interconnection network, a hardware-universal interconnection network used in many supercomputers, including the Connection Machine CM5, for which he was network architect. He helped pioneer the development of VLSI theory, including the retiming method of digital optimization with James B. Saxe and systolic arrays with H. T. Kung. He conceived of the notion of cache-oblivious algorithms, which are algorithms that have no tuning parameters for cache size or cache-line length, yet use cache near-optimally. The Cilk language he developed for multithreaded programming uses a provably good work-stealing algorithm for scheduling. Leiserson coauthored the standard algorithms textbook Introduction to Algorithms together with Thomas H. Cormen, Ronald L. Rivest, and Clifford Stein.
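To make the cache-oblivious idea concrete, here is a minimal sketch (not Leiserson's own code) of a recursive matrix transpose in Python; the function name, base-case cutoff, and structure are illustrative assumptions. The recursion takes no cache parameters at all, yet once a subproblem fits in cache it incurs near-optimal memory traffic:

```python
# Cache-oblivious transpose sketch: recursively split the matrix into
# halves along its longer dimension. No cache-size tuning parameter is
# needed; small blocks are copied directly.

def transpose(a, b, r0, c0, rows, cols, base=16):
    """Transpose the rows x cols block of `a` at (r0, c0) into `b`."""
    if rows <= base and cols <= base:
        for i in range(r0, r0 + rows):          # small block: direct copy
            for j in range(c0, c0 + cols):
                b[j][i] = a[i][j]
    elif rows >= cols:
        h = rows // 2                           # split the longer dimension
        transpose(a, b, r0, c0, h, cols, base)
        transpose(a, b, r0 + h, c0, rows - h, cols, base)
    else:
        w = cols // 2
        transpose(a, b, r0, c0, rows, w, base)
        transpose(a, b, r0, c0 + w, rows, cols - w, base)

# Usage: transpose a 37 x 53 matrix and verify the result.
import random
R, C = 37, 53
a = [[random.random() for _ in range(C)] for _ in range(R)]
b = [[0.0] * R for _ in range(C)]
transpose(a, b, 0, 0, R, C)
assert all(b[j][i] == a[i][j] for i in range(R) for j in range(C))
```

The same divide-and-conquer pattern underlies other cache-oblivious algorithms, such as matrix multiplication.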
Leiserson received a B.S. degree in computer science and mathematics from Yale University in 1975 and a Ph.D. degree in computer science from Carnegie Mellon University in 1981, where his advisors were Jon Bentley and H. T. Kung. He then joined the faculty of the Massachusetts Institute of Technology, where he is now a professor. In addition, he is a principal in the Theory of Computation research group in the MIT Computer Science and Artificial Intelligence Laboratory. He was Director of Research and Director of System Architecture for Akamai Technologies, and was Founder and Chief Technology Officer of Cilk Arts, Inc., a start-up that developed Cilk technology for multicore computing applications. Leiserson's dissertation, Area-Efficient VLSI Computation, won the first ACM Doctoral Dissertation Award. In 1985, the National Science Foundation awarded him a Presidential Young Investigator Award. He is a Fellow of the Association for Computing Machinery, the American Association for the Advancement of Science, the Institute of Electrical and Electronics Engineers, and the Society for Industrial and Applied Mathematics.
He received the 2014 Taylor L. Booth Education Award from the IEEE Computer Society "for worldwide computer science education impact through writing a best-selling algorithms textbook, developing courses on algorithms and parallel programming." He received the 2014 ACM-IEEE Computer Society Ken Kennedy Award for his "enduring influence on parallel computing systems and their adoption into mainstream use through scholarly research and development," and was also cited for "distinguished mentoring of computer science leaders and students." He received the 2013 ACM Paris Kanellakis Theory and Practice Award for "contributions to robust parallel and distributed computing."
See also: Thinking Machines Corporation, Thomas H. Cormen, Ronald L. Rivest, Clifford Stein
Cormen, Thomas H., et al. Introduction to Algorithms. MIT Press and McGraw-Hill. ISBN 978-0-262-03141-7.
Cormen, Thomas H., et al. Introduction to Algorithms. MIT Press and McGraw-Hill. ISBN 978-0-262-53196-2.
Cormen, Thomas H., et al. Introduction to Algorithms. MIT Press. ISBN 978-0-262-03384-8.
Network topology is the arrangement of the elements of a communication network. Network topology can be used to define or describe the arrangement of various types of telecommunication networks, including command and control radio networks, industrial fieldbuses, and computer networks. Network topology is the topological structure of a network and may be depicted physically or logically. It is an application of graph theory wherein communicating devices are modeled as nodes and the connections between the devices are modeled as links or lines between the nodes. Physical topology is the placement of the various components of a network, while logical topology illustrates how data flows within a network. Distances between nodes, physical interconnections, transmission rates, or signal types may differ between two different networks, yet their topologies may be identical. A network's physical topology is a particular concern of the physical layer of the OSI model. Examples of network topologies are found in local area networks (LANs), a common computer network installation.
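As a hedged illustration of the graph-theoretic view just described, the sketch below (in Python, with illustrative function names) builds adjacency sets for two common topologies: devices are nodes, connections are links.

```python
# Model topologies as graphs: nodes are devices, links are edges.
from collections import defaultdict

def ring(n):
    """Each node i links to its two neighbors around the ring."""
    adj = defaultdict(set)
    for i in range(n):
        adj[i].add((i + 1) % n)
        adj[(i + 1) % n].add(i)
    return adj

def star(n, hub=0):
    """Every non-hub node links only to the central hub."""
    adj = defaultdict(set)
    for i in range(n):
        if i != hub:
            adj[hub].add(i)
            adj[i].add(hub)
    return adj

print(dict(ring(5)))  # {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, ...}
print(dict(star(5)))  # {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, ...}
```

Note that the two graphs have the same node set but different edge sets; this is precisely the sense in which two networks of identical equipment can have different topologies.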
Any given node in the LAN has one or more physical links to other devices in the network. A wide variety of physical topologies have been used in LANs, including ring, bus, and star. Conversely, mapping the data flow between the components determines the logical topology of the network. In comparison, Controller Area Networks, common in vehicles, are distributed control system networks of one or more controllers interconnected with sensors and actuators over what is almost invariably a physical bus topology. Two basic categories of network topologies exist: physical topologies and logical topologies. The transmission medium layout used to link devices is the physical topology of the network. For conductive or fiber-optic media, this refers to the layout of cabling, the locations of nodes, and the links between the nodes and the cabling. The physical topology of a network is determined by the capabilities of the network access devices and media, the level of control or fault tolerance desired, and the cost associated with cabling or telecommunication circuits.
In contrast, logical topology is the way that the signals act on the network media, or the way that the data passes through the network from one device to the next, without regard to the physical interconnection of the devices. A network's logical topology is not necessarily the same as its physical topology. For example, the original twisted-pair Ethernet using repeater hubs was a logical bus topology carried on a physical star topology; a sketch of this case follows below. Token Ring is a logical ring topology but is wired as a physical star from the media access unit. Physically, AFDX can be a cascaded star topology of multiple dual-redundant Ethernet switches. Logical topologies are closely associated with media access control methods and protocols; some networks are able to dynamically change their logical topology through configuration changes to their routers and switches. The transmission media used to link devices to form a computer network include electrical cables, optical fiber, and radio waves. In the OSI model, these are defined at layers 1 and 2, the physical layer and the data link layer.
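Returning to the repeater-hub example above, here is a minimal sketch of how a physical star can behave as a logical bus. The class and method names are illustrative assumptions, not from any networking library:

```python
# Physically a star (every host wires to the hub), logically a bus
# (the repeater hub re-broadcasts every frame to all other ports,
# so all hosts share one medium).

class RepeaterHub:
    def __init__(self):
        self.ports = []          # physical star: hosts attach here

    def attach(self, host):
        self.ports.append(host)
        host.hub = self

    def broadcast(self, sender, frame):
        for host in self.ports:  # logical bus: everyone else hears the frame
            if host is not sender:
                host.receive(frame)

class Host:
    def __init__(self, name):
        self.name, self.hub = name, None

    def send(self, frame):
        self.hub.broadcast(self, frame)

    def receive(self, frame):
        print(f"{self.name} heard: {frame}")

hub = RepeaterHub()
a, b, c = Host("A"), Host("B"), Host("C")
for h in (a, b, c):
    hub.attach(h)
a.send("hello")  # B and C both hear it, as on a shared bus
```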
A widely adopted family of transmission media used in local area network technology is collectively known as Ethernet. The media and protocol standards that enable communication between networked devices over Ethernet are defined by IEEE 802.3. Ethernet transmits data over both copper and fiber cables. Wireless LAN standards use radio waves; power line communication uses a building's power cabling to transmit data. The following wired technologies are ordered, roughly, from slowest to fastest transmission speed. Coaxial cable is used for cable television systems, office buildings, and other work-sites for local area networks. The cables consist of copper or aluminum wire surrounded by an insulating layer, which itself is surrounded by a conductive layer; the insulation helps minimize distortion. Transmission speed ranges from 200 million bits per second to more than 500 million bits per second. ITU-T G.hn technology uses existing home wiring to create a high-speed local area network. Signal traces on printed circuit boards are common for board-level serial communication between certain types of integrated circuits, a common example being SPI.
Ribbon cable has been a cost-effective medium for serial protocols within metallic enclosures or rolled within copper braid or foil, over short distances, or at lower data rates. Several serial network protocols can be deployed without shielded or twisted-pair cabling, that is, with "flat" or "ribbon" cable, or a hybrid flat/twisted ribbon cable, should EMC and bandwidth constraints permit: RS-232, RS-422, RS-485, CAN, GPIB, SCSI, etc. Twisted-pair wire is the most widely used medium for all telecommunication. Twisted-pair cabling consists of copper wires twisted into pairs; ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer network cabling
Yellowstone was the inaugural supercomputer at the NCAR-Wyoming Supercomputing Center (NWSC) in Cheyenne, Wyoming. It was installed and readied for production in the summer of 2012, and was decommissioned on December 31, 2017, when it was replaced by its successor, Cheyenne. Yellowstone was a capable petascale system designed for conducting breakthrough scientific research in the interdisciplinary field of Earth system science. Scientists used the computer and its associated resources to model and analyze complex processes in the atmosphere, the ice caps, and throughout the Earth system, accelerating scientific research in climate change, severe weather, geomagnetic storms, carbon sequestration, aviation safety, and many other topics. Funded by the National Science Foundation and the State and University of Wyoming, and operated by the National Center for Atmospheric Research, Yellowstone's purpose was to improve the predictive power of Earth system science simulation to benefit decision-making and planning for society.
Yellowstone was a 1.5-petaflops IBM iDataPlex cluster computer with 4,536 dual-socket compute nodes containing 9,072 2.6 GHz Intel Xeon E5-2670 8-core processors; its aggregate memory size was 145 terabytes. The nodes were interconnected in a full fat-tree network via a Mellanox FDR InfiniBand switching fabric. System software included the Red Hat Enterprise Linux operating system for Scientific Computing, the LSF batch subsystem and resource manager, and the IBM General Parallel File System. Yellowstone was integrated with many other high-performance computing resources in the NWSC. The central feature of this supercomputing architecture was its shared file system, which streamlined science workflows by providing computation and visualization work spaces common to all resources. This common data storage pool, called the GLobally Accessible Data Environment (GLADE), provided 36.4 petabytes of online disk capacity shared by the supercomputer, two data analysis and visualization cluster computers, data servers for both local and remote users, and a data archive with the capacity to store 320 petabytes of research data.
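A back-of-the-envelope check of the quoted peak, assuming (as is typical for the Sandy Bridge-generation Xeon E5-2670 with AVX, though not stated above) 8 double-precision floating-point operations per core per cycle:

$$9{,}072\ \text{processors} \times 8\ \tfrac{\text{cores}}{\text{processor}} \times 2.6 \times 10^{9}\ \tfrac{\text{cycles}}{\text{s}} \times 8\ \tfrac{\text{FLOP}}{\text{cycle}} \approx 1.51 \times 10^{15}\ \text{FLOPS},$$

which is consistent with the advertised 1.5 petaflops.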
High-speed networks connected this Yellowstone environment to science gateways, data transfer services, remote visualization resources, XSEDE sites, and partner sites around the world. This integration of computing resources, file systems, data storage, and broadband networks allowed scientists to simulate future geophysical scenarios at high resolution, then analyze and visualize them on one computing complex, improving scientific productivity by avoiding the delays associated with moving large quantities of data between separate systems. It also reduced the volume of data that needed to be transferred to researchers at their home institutions. The Yellowstone environment at NWSC made more than 600 million processor-hours available each year to researchers in the Earth system sciences.
"U. S. weather boffins tap IBM for 1.6 petaflops super" "NCAR-Wyoming Supercomputing Center website" "NCAR-Wyoming Supercomputing Center Fact Sheet" "NCAR-Wyoming Supercomputing Center - UW website"
Radar is a detection system that uses radio waves to determine the range, angle, or velocity of objects. It can be used to detect aircraft, spacecraft, guided missiles, motor vehicles, weather formations, and terrain. A radar system consists of a transmitter producing electromagnetic waves in the radio or microwave domain, a transmitting antenna, a receiving antenna, and a receiver and processor to determine properties of the object. Radio waves from the transmitter reflect off the object and return to the receiver, giving information about the object's location and speed. Radar was developed secretly for military use by several nations in the period before and during World War II. A key development was the cavity magnetron in the UK, which allowed the creation of small systems with sub-meter resolution. The term RADAR was coined in 1940 by the United States Navy as an acronym for RAdio Detection And Ranging. The term radar has since entered English and other languages as a common noun, losing all capitalization.
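The ranging principle just described reduces to a simple formula: the echo's round-trip delay gives the distance, since the wave travels out and back at the speed of light. A minimal sketch (function name is an illustrative choice):

```python
# Pulse-echo ranging: R = c * t / 2, because the measured delay t covers
# the round trip from antenna to target and back.

C = 299_792_458.0  # speed of light in vacuum, m/s

def range_from_delay(t_seconds):
    """Target range in meters for a measured round-trip delay."""
    return C * t_seconds / 2.0

# A 1 ms round-trip delay puts the target about 150 km away.
print(f"{range_from_delay(1e-3) / 1000:.1f} km")  # ~149.9 km
```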
The modern uses of radar are diverse, including air and terrestrial traffic control, radar astronomy, air-defense systems, antimissile systems, marine radars to locate landmarks and other ships, aircraft anticollision systems, ocean surveillance systems, outer space surveillance and rendezvous systems, meteorological precipitation monitoring, flight control systems, guided missile target locating systems, ground-penetrating radar for geological observations, and range-controlled radar for public health surveillance. High-tech radar systems are associated with digital signal processing and machine learning, and are capable of extracting useful information from very high noise levels. Radar is a key technology that self-driving systems are designed to use, along with sonar and other sensors; with the emergence of driverless vehicles, radar is expected to help the automated platform monitor its environment, thus preventing unwanted incidents. Other systems similar to radar make use of other parts of the electromagnetic spectrum; one example is lidar, which uses laser light rather than radio waves.
As early as 1886, German physicist Heinrich Hertz showed that radio waves could be reflected from solid objects. In 1895, Alexander Popov, a physics instructor at the Imperial Russian Navy school in Kronstadt, developed an apparatus using a coherer tube for detecting distant lightning strikes; the next year, he added a spark-gap transmitter. In 1897, while testing this equipment for communicating between two ships in the Baltic Sea, he took note of an interference beat caused by the passage of a third vessel. In his report, Popov wrote that this phenomenon might be used for detecting objects, but he did nothing more with this observation. The German inventor Christian Hülsmeyer was the first to use radio waves to detect "the presence of distant metallic objects". In 1904, he demonstrated the feasibility of detecting a ship in dense fog, but not its distance from the transmitter. He obtained a patent for his detection device in April 1904 and later a patent for a related amendment for estimating the distance to the ship.
He got a British patent on September 23, 1904, for a full radar system, which he called a telemobiloscope. It operated on a 50 cm wavelength, and the pulsed radar signal was created via a spark gap. His system used the classic antenna setup of a horn antenna with a parabolic reflector, and was presented to German military officials in practical tests in Cologne and Rotterdam harbour, but was rejected. In 1915, Robert Watson-Watt used radio technology to provide advance warning to airmen, and during the 1920s went on to lead the U.K. research establishment to make many advances using radio techniques, including the probing of the ionosphere and the detection of lightning at long distances. Through his lightning experiments, Watson-Watt became an expert on the use of radio direction finding before turning his inquiry to shortwave transmission. Requiring a suitable receiver for such studies, he told the "new boy" Arnold Frederic Wilkins to conduct an extensive review of available shortwave units. Wilkins would select a General Post Office model after noting its manual's description of a "fading" effect when aircraft flew overhead.
Across the Atlantic in 1922, after placing a transmitter and receiver on opposite sides of the Potomac River, U.S. Navy researchers A. Hoyt Taylor and Leo C. Young discovered that ships passing through the beam path caused the received signal to fade in and out. Taylor submitted a report suggesting that this phenomenon might be used to detect the presence of ships in low visibility, but the Navy did not continue the work. Eight years later, Lawrence A. Hyland at the Naval Research Laboratory observed similar fading effects from passing aircraft. Before the Second World War, researchers in the United Kingdom, Germany, Japan, the Netherlands, the Soviet Union, and the United States, independently and in great secrecy, developed technologies that led to the modern version of radar. Australia, New Zealand, and South Africa followed prewar Great Britain's radar development, and Hungary generated its radar technology during the war. In France in 1934, following systematic studies on the split-anode magnetron, the research branch of the Compagnie Générale de Télégraphie Sans Fil, headed by Maurice Ponte with Henri Gutton, Sylvain Berline, and M. Hugon, began developing an obstacle-locating radio apparatus.
PowerPC is a reduced instruction set computing (RISC) instruction set architecture created by the 1991 Apple–IBM–Motorola alliance, known as AIM. PowerPC, as an evolving instruction set, has since 2006 been named Power ISA, while the old name lives on as a trademark for some implementations of Power Architecture-based processors. PowerPC was the cornerstone of AIM's PReP and Common Hardware Reference Platform initiatives in the 1990s. Intended for personal computers, the architecture is well known for being used by Apple's Power Macintosh, PowerBook, iMac, iBook, and Xserve lines from 1994 until 2006, when Apple migrated to Intel's x86. It has since become a niche architecture in personal computers, but remains popular for embedded and high-performance processors. Its use in the seventh generation of video game consoles and in embedded applications provided an array of uses, and PowerPC CPUs are still used in third-party AmigaOS 4 personal computers. PowerPC is based on IBM's earlier POWER instruction set architecture and retains a high level of compatibility with it.
The history of RISC began with IBM's 801 research project, on which John Cocke was the lead developer and where he developed the concepts of RISC in 1975–78. 801-based microprocessors were used in a number of IBM embedded products, eventually becoming the 16-register IBM ROMP processor used in the IBM RT PC. The RT PC was a rapid design implementing the RISC architecture. Between 1982 and 1984, IBM started a project to build the fastest microprocessor on the market; the result was the POWER instruction set architecture, introduced with the RISC System/6000 in early 1990. The original POWER microprocessor, one of the first superscalar RISC implementations, was a high-performance, multi-chip design. IBM soon realized that a single-chip microprocessor was needed in order to scale its RS/6000 line from lower-end to high-end machines, and work began on a one-chip POWER microprocessor, designated the RSC (RISC Single Chip). In early 1991, IBM realized its design could become a high-volume microprocessor used across the industry. Meanwhile, Apple had realized the limitations and risks of its dependency upon a single CPU vendor at a time when Motorola was falling behind on delivering the 68040 CPU.
Furthermore, Apple had conducted its own research and made an experimental quad-core CPU design called Aquarius, which convinced the company's technology leadership that the future of computing was in the RISC methodology. IBM approached Apple with the goal of collaborating on the development of a family of single-chip microprocessors based on the POWER architecture. Soon after, Apple, as one of Motorola's largest customers of desktop-class microprocessors, asked Motorola to join the discussions, owing to their long relationship, Motorola's more extensive experience than IBM's with manufacturing high-volume microprocessors, and the desire to form a second source for the microprocessors. This three-way collaboration between Apple, IBM, and Motorola became known as the AIM alliance. In 1991, the PowerPC was just one facet of a larger alliance among these three companies. At the time, most of the personal computer industry was shipping systems based on the Intel 80386 and 80486 chips, which have a complex instruction set computer (CISC) architecture, and development of the Pentium processor was well underway.
The PowerPC chip was one of several joint ventures involving the three alliance members, in their efforts to counter the growing Microsoft-Intel dominance of personal computing. For Motorola, POWER looked like an unbelievable deal: it allowed the company to sell a tested and powerful RISC CPU for little design cash of its own. It also maintained ties with an important customer, Apple, and seemed to offer the possibility of adding IBM too, which might buy smaller versions from Motorola instead of making its own. At this point Motorola had its own RISC design in the form of the 88000, which was doing poorly in the market. Motorola was doing well with its 68000 family, and the majority of its funding was focused on this, so the 88000 effort was somewhat starved for resources. The 88000 was already in production, however, and had achieved a number of embedded design wins in telecom applications. If the new POWER one-chip version could be made bus-compatible at a hardware level with the 88000, that would allow both Apple and Motorola to bring machines to market far faster, since they would not have to redesign their board architecture.
The result of these various requirements was the PowerPC specification. The differences between the earlier POWER instruction set and that of PowerPC are outlined in Appendix E of the manual for PowerPC ISA v.2.02. Since 1991, IBM had a long-standing desire for a unifying operating system that would host all existing operating systems as personalities upon one microkernel; from 1991 to 1995, the company designed and aggressively evangelized what would become Workplace OS, targeting PowerPC. When the first PowerPC products reached the market, they were met with enthusiasm. In addition to Apple, both IBM and the Motorola Computer Group offered systems built around the processors. Microsoft released Windows NT 3.51 for the architecture, which was used in Motorola's