Honeywell International Inc. is an American multinational conglomerate that makes a variety of commercial and consumer products, engineering services, and aerospace systems for a wide range of customers, from private consumers to major corporations and governments. The company operates four Strategic Business Units: Honeywell Aerospace, Honeywell Building Technologies, Safety and Productivity Solutions, and Honeywell Performance Materials and Technologies. Honeywell is a Fortune 100 company; in 2018 it ranked 77th in the Fortune 500. Honeywell has a global workforce of 130,000, of whom 58,000 are employed in the United States, and the company is headquartered in New Jersey. Its current chief executive officer is Darius Adamczyk. The company and its corporate predecessors were part of the Dow Jones Industrial Average Index from December 7, 1925 until February 9, 2008. The company's current name, Honeywell International Inc., is the product of a 1999 merger in which Honeywell Inc. was acquired by the much larger AlliedSignal.
The company headquarters were consolidated with AlliedSignal's headquarters in Morristown, New Jersey, and in 2015 were moved to Morris Plains. On November 30, 2018, Honeywell announced that its corporate headquarters would be moved to Charlotte, North Carolina. Honeywell has many brands that commercial and retail consumers may recognize, including its line of home thermostats and Garrett turbochargers. In addition to consumer home products, Honeywell itself produces thermostats, security alarm systems, air cleaners, and dehumidifiers, and the company licenses its brand name for use in various retail products made by partner manufacturers, including air conditioners, fans, security safes, home generators, and paper shredders. Although Mark Honeywell's Heating Specialty Company was not established until 1906, today's Honeywell traces its roots to 1885, when the Swiss-born Albert Butz invented the damper-flapper, a thermostat for coal furnaces that automatically regulated heating systems. The following year he founded the Butz Thermo-Electric Regulator Company.
In 1888, after a falling out with his investors, Butz left the company and transferred the patents to the legal firm Paul and Merwin, who renamed the company the Consolidated Temperature Controlling Company. As the years passed, CTCC struggled with growing debts and underwent several name changes in an attempt to keep the business afloat. After the company was renamed the Electric Heat Regulator Company in 1893, W. R. Sweatt, a stockholder in the company, was sold "an extensive list of patents" and named secretary-treasurer. On February 23, 1898, he bought out the remaining shares of the company from the other stockholders. In 1906, Mark Honeywell founded the Honeywell Heating Specialty Company in Wabash, Indiana, to manufacture and market his invention, the mercury seal generator. As Honeywell's company grew, it began to clash with the renamed Minneapolis Heat Regulator Company, which led to the merging of both companies into the publicly held Minneapolis-Honeywell Regulator Company in 1927.
Honeywell was named the company's first president, with W. R. Sweatt as its first chairman. W. R. Sweatt and his son Harold provided 75 years of uninterrupted leadership for the company. W. R. Sweatt survived rough spots and turned an innovative idea, thermostatic heating control, into a thriving business. Harold, who took over in 1934, led Honeywell through a period of growth and global expansion that set the stage for Honeywell to become a global technology leader. The merger into the Minneapolis-Honeywell Regulator Company proved to be a saving grace for the corporation: just months before Black Monday, the combined assets were valued at over $3.5 million, with less than $1 million in liabilities. In 1931, Minneapolis-Honeywell began a period of expansion and acquisition with the purchase of the Time-O-Stat Controls Company, giving the company access to a greater number of patents for use in its control systems. 1934 marked Minneapolis-Honeywell's first foray into the international market, when it acquired the Brown Instrument Company and inherited Brown's relationship with the Yamatake Company of Tokyo, a Japan-based distributor. Later that same year, Minneapolis-Honeywell started distributorships across Canada, as well as one in the Netherlands, its first European office.
This expansion into international markets continued in 1936, with the company's first distributorship in London and its first foreign assembly facility, established in Canada. By 1937, ten years after the merger, Minneapolis-Honeywell had over 3,000 employees and $16 million in annual revenue. Having survived the Depression, Minneapolis-Honeywell was approached by the US military for engineering and manufacturing projects. In 1941, Minneapolis-Honeywell developed a superior tank periscope, camera stabilizers, and the C-1 autopilot; the C-1 revolutionized precision bombing in the war effort and was used on the two B-29 bombers that dropped atomic bombs on Japan in 1945. The success of these projects led Minneapolis-Honeywell to open an Aero division in Chicago on October 5, 1942. This division was responsible for the development of the formation stick to control autopilots, more accurate gas gauges for planes, and the turbo supercharger. In 1950, Minneapolis-Honeywell's Aero division was contracted for the controls on the first US nuclear submarine, the USS Nautilus. The following year, the company acquired the Intervox Company.
Multi-channel memory architecture
In the fields of digital electronics and computer hardware, multi-channel memory architecture is a technology that increases the data transfer rate between DRAM memory and the memory controller by adding more channels of communication between them. Theoretically, this multiplies the data rate by the number of channels present. Dual-channel memory employs two channels; the technique goes back as far as the 1960s, having been used in the IBM System/360 Model 91 and in the CDC 6600. Modern high-end processors like the Intel Core i7 Extreme and AMD Ryzen Threadripper series, along with various Xeons, support quad-channel memory. In March 2010, AMD released Socket G34 and Magny-Cours Opteron 6100 series processors with support for quad-channel memory. In 2006, Intel released chipsets that support quad-channel memory for its LGA771 platform, and in 2011 for its LGA2011 platform. Microcomputer chipsets with even more channels have been designed. Dual-channel-enabled memory controllers in a PC system architecture use two 64-bit data channels.
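The "multiply the data rate by the number of channels" claim can be made concrete with a small calculation. The sketch below computes theoretical peak bandwidth from the transfer rate, bus width, and channel count; the DDR4-3200 figures are illustrative values, not taken from the text above.

```python
def peak_bandwidth_gb_s(transfers_per_s: float, bus_width_bits: int, channels: int) -> float:
    """Theoretical peak bandwidth = transfer rate x bytes per transfer x channels."""
    bytes_per_transfer = bus_width_bits // 8
    return transfers_per_s * bytes_per_transfer * channels / 1e9

# Example: DDR4-3200 performs 3200 million transfers/s on a 64-bit channel.
single = peak_bandwidth_gb_s(3200e6, 64, 1)  # 25.6 GB/s
dual = peak_bandwidth_gb_s(3200e6, 64, 2)    # 51.2 GB/s
quad = peak_bandwidth_gb_s(3200e6, 64, 4)    # 102.4 GB/s
print(single, dual, quad)
```

As the article notes later, this doubling is theoretical; real workloads rarely see the full gain.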
Dual-channel should not be confused with double data rate, in which data exchange happens twice per DRAM clock. The two technologies are independent of each other, and many motherboards use both, employing DDR memory in a dual-channel configuration. Dual-channel architecture requires a dual-channel-capable motherboard and two or more DDR, DDR2, DDR3, DDR4, or DDR5 memory modules. The memory modules are installed into matching banks, which are color-coded on the motherboard. These separate channels give the memory controller access to each memory module. Identical memory modules are not required, but are recommended for best dual-channel operation. Motherboards supporting dual-channel memory layouts have color-coded DIMM sockets, but coloring schemes are not standardized and can have opposite meanings depending on the motherboard manufacturer's intentions and the actual motherboard design: matching colors may indicate that the sockets belong to the same channel, or they may indicate that a DIMM pair should be installed in sockets of the same color.
The motherboard's manual will explain how to install memory for that particular unit. A matched pair of memory modules may be placed in the first bank of each channel, and a different-capacity pair of modules in the second bank. Modules rated at different speeds can be run in dual-channel mode, although the motherboard will run all memory modules at the speed of the slowest module. Some motherboards have compatibility issues with certain brands or models of memory when attempting to use them in dual-channel mode. For this reason, it is advised to use identical pairs of memory modules, which is why most memory manufacturers now sell "kits" of matched-pair DIMMs. Several motherboard manufacturers only support configurations where a "matched pair" of modules is used. A matching pair needs to match in capacity (although certain Intel chipsets support different capacities in what they call Flex Mode: the capacity that can be matched is run in dual-channel, while the remainder runs in single-channel), speed (if the speeds are not the same, the lower speed of the two modules will be used, and likewise the higher latency of the two), CAS latency (Column Address Strobe), number of chips and sides, and the size of rows and columns. Dual-channel architecture is a technology implemented on motherboards by the motherboard manufacturer and does not apply to memory modules: theoretically, any matched pair of memory modules may be used in either single- or dual-channel operation, provided the motherboard supports this architecture. Theoretically, dual-channel configurations double the memory bandwidth compared to single-channel configurations. This should not be confused with double data rate memory, which doubles the usage of the DRAM bus by transferring data on both the rising and falling edges of the memory bus clock signal. In practice, Tom's Hardware found little significant difference between single-channel and dual-channel configurations in synthetic and gaming benchmarks; in its tests, dual-channel gave at best a 5% speed increase in memory-intensive tasks.
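The fallback behavior for mismatched modules described above (slowest speed wins, highest latency wins) can be sketched as a small helper. The `Dimm` type and the example ratings are illustrative assumptions, not part of any real firmware API.

```python
from dataclasses import dataclass

@dataclass
class Dimm:
    speed_mt_s: int   # rated transfer rate in MT/s
    cas_latency: int  # CL, in clock cycles

def effective_settings(a: Dimm, b: Dimm) -> Dimm:
    """When two modules differ, the controller falls back to the
    lower speed and the higher (worse) CAS latency of the pair."""
    return Dimm(min(a.speed_mt_s, b.speed_mt_s),
                max(a.cas_latency, b.cas_latency))

# A 3200 CL16 module paired with a 2666 CL19 module runs at 2666 CL19.
print(effective_settings(Dimm(3200, 16), Dimm(2666, 19)))
```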
Another comparison by Laptop Logic reached a similar conclusion for integrated graphics; the test results published by Tom's Hardware included a discrete-graphics comparison. A benchmark performed by TweakTown using SiSoftware Sandra measured around a 70% increase in performance for a quad-channel configuration compared to a dual-channel configuration, while other tests performed by TweakTown on the same subject showed no significant differences in performance, leading to the conclusion that not all benchmark software is able to exploit the increased parallelism offered by multi-channel memory configurations. Dual-channel was originally conceived as a way to maximize memory throughput by combining two 64-bit buses into a single 128-bit bus; this is retrospectively called "ganged" mode. However, due to lackluster performance gains in consumer applications, more modern implementations of dual-channel default to "unganged" mode, which maintains two 64-bit memory buses but allows independent access to each channel, better supporting multithreading on multi-core processors. The difference between "ganged" and "unganged" modes can be envisioned as an analogy with the way RAID 0 works compared to JBOD.
With RAID 0, data is striped in fixed-size blocks across both drives, which are accessed in lockstep, while JBOD presents each drive as an independent device; similarly, ganged mode interleaves accesses across both channels as one wide bus, whereas unganged mode lets each channel serve requests independently.
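The ganged (interleaved) layout can be sketched as a simple address-to-channel mapping, much as RAID 0 maps blocks to disks. The 64-byte interleave granularity here is an assumed, illustrative value; real controllers choose their own granularity.

```python
def channel_for_address(addr: int, interleave_bytes: int = 64, channels: int = 2) -> int:
    """In an interleaved (ganged-style) layout, consecutive blocks of
    interleave_bytes alternate round-robin across the channels."""
    return (addr // interleave_bytes) % channels

# Consecutive 64-byte blocks alternate between the two channels:
print([channel_for_address(i * 64) for i in range(4)])  # [0, 1, 0, 1]
```

In an unganged configuration there is no such fixed striping: each channel is an independent bus, and the controller schedules requests to whichever channel holds the target module.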
Dell EMC is an American multinational corporation headquartered in Hopkinton, Massachusetts, United States. Dell EMC sells data storage, information security, analytics, cloud computing, and other products and services that enable organizations to store, manage, and analyze data. Dell EMC's target markets include large companies and small- and medium-sized businesses across various vertical markets. The company's stock was listed on the New York Stock Exchange on April 6, 1986, and was included in the S&P 500 index. EMC was acquired by Dell in 2016 and renamed Dell EMC; Dell continues to use the EMC name with some of its products. EMC, founded in 1979 by Richard Egan, Roger Marino, and John Curly, introduced its first 64-kilobyte memory boards for the Prime Computer in 1981 and continued with the development of memory boards for other computer types. In the mid-1980s the company expanded beyond memory to other computer data storage types and networked storage platforms. EMC began shipping its flagship product, the Symmetrix, in 1990.
Symmetrix was the main reason for EMC's rapid growth in the 1990s, both in size and value, from a company valued in the hundreds of millions of dollars to a multi-billion-dollar company. Michael Ruettgers joined EMC in 1988 and served as CEO from 1992 until January 2001. Under Ruettgers' leadership, EMC revenues grew from $120 million to nearly $9 billion in 10 years, and the company shifted its focus from memory boards to storage systems. Ruettgers was named one of BusinessWeek's "World's Top 25 Executives". Some of EMC's growth came via acquisitions of small companies. On October 12, 2015, Dell Inc. announced its intent to acquire EMC in a cash-and-stock deal valued at $67 billion, considered the largest-ever acquisition in the technology sector. The deal would combine Dell's enterprise server, personal computer, and mobile businesses with EMC's enterprise storage business. Dell would pay $24.05 per share of EMC, plus $9.05 per share in tracking stock tied to VMware. On September 7, 2016, Dell Inc. completed the merger with EMC Corp., which involved the issuance of $45.9 billion in debt and $4.4 billion in common stock.
Dell EMC sells products and services, including those of the majority-owned Pivotal company and products from other Dell Technologies companies, designed to allow IT departments to move to a cloud computing model and to analyze big data. The following table lists EMC Corporation's major acquisitions of other companies since 1996. In 2012, EMC sponsored The Human Face of Big Data, a globally crowd-sourced media project focusing on the ability to collect, analyze, and visualize vast amounts of data in real time. The Human Face of Big Data, produced by Rick Smolan and Jennifer Erwitt, includes "a number of fascinating stories" that "represent some of the most innovative applications of data that are shaping our future".
Silicon Graphics, Inc. was an American high-performance computing manufacturer, producing computer hardware and software. Founded in Mountain View, California in November 1981 by Jim Clark, its initial market was 3D graphics computer workstations, but its products and market positions developed over time. Early systems were based on the Geometry Engine that Clark and Marc Hannah had developed at Stanford University and were derived from Clark's broader background in computer graphics. The Geometry Engine was the first very-large-scale integration implementation of a geometry pipeline: specialized hardware that accelerated the "inner-loop" geometric computations needed to display three-dimensional images. For much of its history, the company focused on 3D imaging and was a major supplier of both hardware and software in this market. Silicon Graphics reincorporated as a Delaware corporation in January 1990. Through the mid-to-late 1990s, the improving performance of commodity Wintel machines began to erode SGI's stronghold in the 3D market.
The porting of Maya to other platforms was a major event in this process. SGI made several attempts to address the decline, including a disastrous move from its existing MIPS platforms to the Intel Itanium, as well as introducing its own Linux-based, Intel IA-32-based workstations and servers, which failed in the market. In the mid-2000s the company repositioned itself as a supercomputer vendor, a move that also failed. On April 1, 2009, SGI filed for Chapter 11 bankruptcy protection and announced that it would sell all of its assets to Rackable Systems, a deal finalized on May 11, 2009, with Rackable assuming the name Silicon Graphics International; the remains of Silicon Graphics, Inc. became Graphics Properties Holdings, Inc. James H. Clark left his position as an electrical engineering associate professor at Stanford University to found SGI along with a group of seven graduate students and research staff from Stanford: Kurt Akeley, David J. Brown, Tom Davis, Rocky Rhodes, Marc Hannah, Herb Kuta, and Mark Grossman.
Ed McCracken was CEO of Silicon Graphics from 1984 to 1997. During those years, SGI grew from annual revenues of $5.4 million to $3.7 billion. The addition of 3D graphics capabilities to PCs, and the ability of clusters of Linux- and BSD-based PCs to take on many of the tasks of larger SGI servers, ate into SGI's core markets; the porting of Maya to Linux, Mac OS X, and Microsoft Windows further eroded the low end of SGI's product line. In response to challenges in the marketplace and a falling share price, Ed McCracken was fired and SGI brought in Richard Belluzzo to replace him. Under Belluzzo's leadership a number of initiatives were taken that are considered to have accelerated the corporate decline. One such initiative was selling workstations running Windows NT, called Visual Workstations, instead of only ones that ran IRIX, the company's version of UNIX; this put the company in more direct competition with the likes of Dell, making it more difficult to justify a price premium. The product line was abandoned a few years later.
SGI's premature announcement of its migration from MIPS to Itanium and its abortive ventures into IA-32 architecture systems damaged SGI's credibility in the market. In 1999, in an attempt to clarify its market position as more than a graphics company, Silicon Graphics Inc. changed its corporate identity to "SGI", although its legal name was unchanged. At the same time, SGI announced a new logo consisting of only the letters "sgi" in a proprietary font called "SGI", created by the branding and design consulting firm Landor Associates in collaboration with designer Joe Stitzlein. SGI continued to use the "Silicon Graphics" name for its workstation product line and re-adopted the cube logo for some workstation models. In November 2005, SGI announced that it had been delisted from the New York Stock Exchange because its common stock had fallen below the minimum share price for listing on the exchange. SGI's market capitalization had dwindled from a peak of over seven billion dollars in 1995 to just $120 million at the time of delisting.
In mid-2005, SGI had hired Alix Partners to advise it on returning to profitability and received a new line of credit. SGI announced it was postponing its scheduled annual December stockholders meeting until March 2006, and it proposed a reverse stock split to deal with the delisting from the New York Stock Exchange. In January 2006, SGI hired Dennis McKenna as its new chairman of the board of directors; McKenna succeeded Robert Bishop. On May 8, 2006, SGI announced that it had filed for Chapter 11 bankruptcy protection for itself and its U.S. subsidiaries as part of a plan to reduce debt by $250 million. Two days later, the U.S. Bankruptcy Court approved its first-day motions and its use of a $70 million financing facility provided by a group of its bondholders. Foreign subsidiaries were unaffected. On September 6, 2006, SGI announced the end of development for the MIPS/IRIX line and the IRIX operating system: production would end on December 29 and the last orders would be fulfilled by March 2007.
Support for these products would end after December 2013. SGI emerged from bankruptcy protection on October 17, 2006. Its stock symbol at that point, SGID.pk, was canceled, and new stock was issued on the NASDAQ exchange under the symbol SGIC. This new stock was distributed to the company's creditors, and holders of the old SGID common stock received nothing.
In computing, a server is a computer program or a device that provides functionality for other programs or devices, called "clients". This architecture is called the client–server model: a single overall computation is distributed across multiple processes or devices. Servers can provide various functionalities, called "services", such as sharing data or resources among multiple clients, or performing computation for a client. A single server can serve multiple clients, and a single client can use multiple servers. A client process may run on the same device or may connect over a network to a server on a different device. Typical servers are database servers, file servers, mail servers, print servers, web servers, game servers, and application servers. Client–server systems are today most often implemented using the request–response model: a client sends a request to the server, which performs some action and sends a response back to the client with a result or acknowledgement. Designating a computer as "server-class hardware" implies that it is specialized for running servers.
This implies that it is more powerful and reliable than standard personal computers; alternatively, large computing clusters may be composed of many simple, replaceable server components. The use of the word server in computing comes from queueing theory, where it dates to the mid-20th century, notably appearing in the paper by Kendall that introduced Kendall's notation. In earlier papers, such as Erlang's, more concrete terms such as "operators" are used. In computing, "server" dates at least to RFC 5, one of the earliest documents describing ARPANET, where it is contrasted with "user", distinguishing two types of host: "server-host" and "user-host". The use of "serving" dates to even earlier documents, such as RFC 4, which contrasts "serving-host" with "using-host". The Jargon File defines "server" in the common sense of a process performing a service for requests, usually remote, with the 1981 version reading: SERVER n. A kind of DAEMON which performs a service for the requester, which runs on a computer other than the one on which the server runs.
Strictly speaking, the term server refers to a computer program or process. Through metonymy, it also refers to a device used for running several server programs; on a network, such a device is called a host. In addition to server, the words serve and service are used, though servicer and servant are not. The word service may refer to the abstract form of functionality, e.g. a Web service; alternatively, it may refer to a computer program that turns a computer into a server, e.g. a Windows service. Originally used as in "servers serve users", in the sense of "obey", today one says that "servers serve data", in the same sense as "give": for instance, web servers "serve web pages to users" or "service their requests". The server is part of the client–server model, in which the nature of communication between a client and a server is request and response; this is in contrast with the peer-to-peer model. In principle, any computerized process that can be used or called by another process is a server, and the calling process or processes are clients; thus any general-purpose computer connected to a network can host servers.
For example, if files on a device are shared by some process, that process is a file server. Web server software can run on any capable computer, so a laptop or a personal computer can host a web server. While request–response is the most common client–server design, there are others, such as the publish–subscribe pattern. In the publish–subscribe pattern, clients register with a pub–sub server, subscribing to specified types of messages. Thereafter, the pub–sub server forwards matching messages to the clients without any further requests: the server pushes messages to the client, rather than the client pulling messages from the server as in request–response. The purpose of a server is to share data as well as to distribute work; a server computer can also serve its own computer programs. The following table shows several scenarios. The entire structure of the Internet is based upon a client–server model: high-level root nameservers, DNS servers, and routers direct traffic on the Internet. There are millions of servers connected to the Internet, running continuously throughout the world, and every action taken by an ordinary Internet user requires one or more interactions with one or more servers.
There are exceptions. Hardware requirements for servers vary depending on the server's purpose and its software. Since servers are accessed over a network, many run unattended without a monitor, input devices, audio hardware, or USB interfaces. Many servers do not have a graphical user interface; they are managed remotely. Remote management can be conducted via various methods, including Microsoft Management Console, PowerShell, SSH, and browser-based out-of-band management systems such as Dell's iDRAC or HP's iLO. Large traditional single servers would need to run for long periods without interruption.
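The request–response model described above can be sketched with a minimal TCP exchange: a server reads one request and sends back an acknowledging response, and a client sends a request and waits for that response. This is an illustrative sketch using an ephemeral local port, not a production server design.

```python
import socket
import threading

def run_once_server() -> tuple:
    """Start a minimal request-response server on an ephemeral local port.
    It handles a single request, replies with an acknowledgement, and exits."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))  # port 0 lets the OS pick a free port
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)          # receive the client's request
            conn.sendall(b"OK: " + request)    # respond with an acknowledgement
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()

def send_request(addr, payload: bytes) -> bytes:
    """Client side: connect, send a request, block until the response arrives."""
    with socket.create_connection(addr) as client:
        client.sendall(payload)
        return client.recv(1024)

addr = run_once_server()
reply = send_request(addr, b"ping")
print(reply)  # b'OK: ping'
```

The same socket primitives underlie real file, web, and mail servers; they differ mainly in the protocol spoken over the connection and in handling many clients concurrently.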
In a computer system, a chipset is a set of electronic components in an integrated circuit, known as a "Data Flow Management System", that manages the data flow between the processor and peripherals. It is found on the motherboard. Chipsets are designed to work with a specific family of microprocessors; because it controls communications between the processor and external devices, the chipset plays a crucial role in determining system performance. In computing, the term chipset refers to a set of specialized chips on a computer's motherboard or an expansion card. In personal computers, the first chipset for the IBM PC AT of 1984 was the NEAT chipset developed by Chips and Technologies for the Intel 80286 CPU. In home computers, game consoles, and arcade-game hardware of the 1980s and 1990s, the term chipset was used for the custom audio and graphics chips; examples include SEGA's System 16 chipset. In PCs, the term chipset refers to a specific pair of chips on the motherboard: the northbridge and the southbridge.
The northbridge links the CPU to high-speed devices such as RAM and graphics controllers, and the southbridge connects to lower-speed peripheral buses. In many modern chipsets, the southbridge contains some on-chip integrated peripherals, such as Ethernet, USB, and audio devices. Motherboards and their chipsets often come from different manufacturers; as of 2015, manufacturers of chipsets for x86 motherboards included AMD, Intel, NVIDIA, SiS, and VIA Technologies. Apple computers and Unix workstations have traditionally used custom-designed chipsets, and some server manufacturers develop custom chipsets for their products. In the 1980s, Chips and Technologies pioneered the manufacturing of chipsets for PC-compatible computers. Computer systems produced since then have often shared commonly used chipsets across disparate computing specialties: for example, the NCR 53C9x, a low-cost chipset implementing a SCSI interface to storage devices, could be found in Unix machines such as the MIPS Magnum, in embedded devices, and in personal computers. Traditionally in x86 computers, the processor's primary connection to the rest of the machine was through the motherboard chipset's northbridge.
The northbridge was directly responsible for communications with high-speed devices and, conversely, for any system communication back to the processor; this connection between the processor and northbridge is designated the front-side bus. Requests to resources not directly controlled by the northbridge were offloaded to the southbridge, with the northbridge acting as an intermediary between the processor and the southbridge. The southbridge handled "everything else", generally lower-speed peripherals and board functions such as USB, parallel, and serial communications. The connection between the northbridge and southbridge was the PCI bus. Before 2003, any interaction between a CPU and main memory or an expansion device such as a graphics card, whether AGP, PCI, or integrated into the motherboard, was directly controlled by the northbridge IC on behalf of the processor. This made processor performance dependent on the system chipset, especially the northbridge's memory performance and its ability to shuttle this information back to the processor.
In 2003, however, AMD's introduction of the Athlon 64 series of processors changed this. The Athlon 64 marked the introduction of a memory controller incorporated into the processor itself, allowing the processor to directly access and handle memory and negating the need for a traditional northbridge to do so. Intel followed suit in 2008 with the release of its Core i series and the X58 platform. In newer processors, integration has increased further through the inclusion of the system's primary PCIe controller and integrated graphics directly on the CPU itself. As fewer functions are left unhandled by the processor, chipset vendors have condensed the remaining northbridge and southbridge functions into a single chip. Intel's version of this is the "Platform Controller Hub", effectively an enhanced southbridge for the remaining peripherals, as traditional northbridge duties such as the memory controller, expansion bus interface, and on-board video controller are integrated into the CPU die itself.
However, the Platform Controller Hub was integrated into the processor package as a second die for mobile variants of the Skylake processors.
Data General was one of the first minicomputer firms, dating from the late 1960s. Three of the four founders were former employees of Digital Equipment Corporation. Their first product, the Data General Nova, was a 16-bit minicomputer that used the company's own operating system, Data General RDOS; in conjunction with programming languages like Data General Business Basic, it provided a multi-user operating system with record locking and built-in databases far ahead of many contemporary systems. The Nova was followed by the Supernova and Eclipse product lines, all of which were used in many applications for the next two decades. The company employed an original equipment manufacturer (OEM) sales strategy, selling to third parties who incorporated Data General computers into their own specific product lines. A series of missteps in the 1980s, including missing the advance of microcomputers despite the launch of the microNOVA in 1977 and the Data General-One portable computer in 1984, led to a decline in the company's market share.
The company continued into the 1990s and was acquired by EMC Corporation in 1999. Data General was founded by several engineers from Digital Equipment Corporation who were frustrated with DEC's management and left to form their own company. The chief founders were Edson de Castro, Henry Burkhardt III, and Richard Sogge of Digital Equipment, and Herbert Richman of Fairchild Semiconductor. The company was founded in Hudson, Massachusetts, in 1968. Edson de Castro had been the chief engineer in charge of the PDP-8, DEC's line of inexpensive computers that created the minicomputer market; it was designed to be used in laboratory equipment settings, and many PDP-8s still operated decades later. De Castro, convinced he could do one better, began work on his own 16-bit design, released in 1969 as the Nova. Designed to be rack-mounted like the PDP-8 machines, it was smaller in height and ran faster. Announced as "the best small computer in the world", the Nova gained a following in scientific and educational markets and made the company flush with cash, although Data General had to defend itself from misappropriation of its trade secrets.
With the initial success of the Nova, Data General went public in the fall of 1969. The Nova, like the PDP-8, used a simple accumulator-based architecture; it lacked the general registers and stack-pointer functionality of the more advanced PDP-11, as did competing products such as the HP 1000. The original Nova was soon followed by the faster SuperNova, and later by several minor versions based on the SuperNova core. The last major version, the Nova 4, was released in 1978. During this period the Nova generated 20% annual growth rates for the company, becoming a star in the business community and generating US$100 million in sales in 1975. In 1977, DG launched the microNOVA. The Nova series also played an important role as instruction-set inspiration to Charles P. Thacker and others at Xerox PARC during their construction of the Xerox Alto. In 1974, the Nova was supplanted by the Eclipse. Based on many of the same concepts as the Nova, it included support for virtual memory and multitasking more suitable to the small office environment.
For this reason, the Eclipse was packaged differently, in a floor-standing case resembling a small refrigerator. Production problems with the Eclipse led to a rash of lawsuits in the late 1970s: newer versions of the machine were pre-ordered by many of DG's customers but were never delivered. Many customers sued Data General after more than a year of waiting, charging the company with breach of contract, while others canceled their orders and went elsewhere. The Eclipse was intended to replace the Nova outright, as evidenced by the fact that the Nova 3 series, released at the same time and utilizing the same internal architecture as the Eclipse, was phased out the next year. Strong demand continued for the Nova series, however, resulting in the Nova 4, in part because of the continuing problems with the Eclipse. In 1977, Digital announced the VAX series, their first 32-bit minicomputer line, described as "super-minis"; the first products would not ship until February 1978. This coincided with the aging of DG's 16-bit products.
Data General launched its own 32-bit effort in 1976 to build what it called the "world's best 32-bit machine", known internally as the "Fountainhead Project". When Digital's VAX-11/780 shipped in February 1978, Fountainhead was not yet ready to deliver a machine, due to problems in project management, and DG's customers left for the VAX world. Soon afterwards, Data General launched a second, fast-track 32-bit effort based on the Eclipse, referred to as "the Eagle project" or "Project Eagle". Eagle was Data General's response to the February 1978 delivery of the first VAX. By late 1979, it became clear that Eagle would deliver before Fountainhead, igniting an intense turf war within the company for shrinking project funds. In the meantime, customers abandoned Data General in droves, driven not only by the delivery problems with the original Eclipse, but also by the power and versatility of Digital's new VAX line. The Eagle project was the subject of Tracy Kidder's Pulitzer Prize-winning book The Soul of a New Machine.