A computer is a device that can be instructed to carry out sequences of arithmetic or logical operations automatically via computer programming. Modern computers have the ability to follow generalized sets of operations, called programs; these programs enable computers to perform a wide range of tasks. A "complete" computer, including the hardware, the operating system, and the peripheral equipment required and used for "full" operation, can be referred to as a computer system; this term may also be used for a group of computers that are connected and work together, in particular a computer network or computer cluster. Computers are used as control systems for a wide variety of industrial and consumer devices; this includes simple special-purpose devices like microwave ovens and remote controls, factory devices such as industrial robots and computer-aided design systems, and general-purpose devices like personal computers and mobile devices such as smartphones. The Internet is run on computers, and it connects hundreds of millions of other computers and their users.
Early computers were conceived of only as calculating devices. Since ancient times, simple manual devices like the abacus aided people in doing calculations. Early in the Industrial Revolution, some mechanical devices were built to automate long tedious tasks, such as guiding patterns for looms. More sophisticated electrical machines did specialized analog calculations in the early 20th century; the first digital electronic calculating machines were developed during World War II. The speed and versatility of computers have been increasing ever since. Conventionally, a modern computer consists of at least one processing element, typically a central processing unit (CPU), and some form of memory. The processing element carries out arithmetic and logical operations, and a sequencing and control unit can change the order of operations in response to stored information. Peripheral devices include input devices, output devices, and input/output devices that perform both functions. Peripheral devices allow information to be retrieved from an external source, and they enable the results of operations to be saved and retrieved.
According to the Oxford English Dictionary, the first known use of the word "computer" was in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: "I haue read the truest computer of Times, the best Arithmetician that euer breathed, he reduceth thy dayes into a short number." This usage of the term referred to a human computer, a person who carried out calculations or computations. The word continued with the same meaning until the middle of the 20th century. During the latter part of this period women were hired as computers because they could be paid less than their male counterparts. By 1943, most human computers were women. From the end of the 19th century the word began to take on its more familiar meaning, a machine that carries out computations; the Online Etymology Dictionary gives the first attested use of "computer" in the 1640s, meaning "one who calculates". The Online Etymology Dictionary states that the use of the term to mean "'calculating machine' is from 1897."
The Online Etymology Dictionary indicates that the "modern use" of the term, to mean "programmable digital electronic computer", dates from "1945 under this name". Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers; the earliest counting device was a form of tally stick. Record keeping aids throughout the Fertile Crescent included calculi, which represented counts of items such as livestock or grains, sealed in hollow unbaked clay containers; the use of counting rods is one example. The abacus was used for arithmetic tasks; the Roman abacus was developed from devices used in Babylonia as early as 2400 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money; the Antikythera mechanism is believed to be the earliest mechanical analog "computer", according to Derek J. de Solla Price.
It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to c. 100 BC. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later. Many mechanical aids to calculation and measurement were constructed for astronomical and navigational use; the planisphere was a star chart invented by Abū Rayhān al-Bīrūnī in the early 11th century. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. An astrolabe incorporating a mechanical calendar computer and gear-wheels was invented by Abi Bakr of Isfahan, Persia in 1235. Abū Rayhān al-Bīrūnī invented the first mechanical geared lunisolar calendar astrolabe, an early fixed-wired knowledge processing machine with a gear train and gear-wheels, c. 1000 AD.
The sector, a calculating instrument used for solving problems in proportion, trigonometry and division, for various functions, such as squares and cube roots, was developed in
Multics is an influential early time-sharing operating system, based on the concept of a single-level memory. All modern operating systems were influenced by Multics – through Unix, created by some of the people who had worked on Multics – either directly or indirectly. Initial planning and development for Multics started in Cambridge, Massachusetts; it was a cooperative project led by MIT along with General Electric and Bell Labs. It was developed on the GE 645 computer, which was specially designed for it. Multics was conceived as a commercial product for General Electric, and became one for Honeywell, albeit not successfully. Due to its many novel and valuable ideas, Multics had a significant impact on computer science despite its faults. Multics had numerous features intended to ensure high availability so that it would support a computing utility similar to the telephone and electricity utilities. Modular hardware structure and software architecture were used to achieve this; the system could grow in size by adding more of the appropriate resource, be it computing power, main memory, or disk storage.
Separate access control lists on every file provided flexible information sharing, but complete privacy when needed. Multics had a number of standard mechanisms to allow engineers to analyze the performance of the system, as well as a number of adaptive performance optimization mechanisms. Multics implemented a single-level store for data access, discarding the clear distinction between files and process memory; the memory of a process consisted of segments that were mapped into its address space. To read or write to them, the process simply used normal central processing unit instructions, and the operating system took care of making sure that all the modifications were saved to disk. In POSIX terms, it was as if every file were mapped into memory. All memory in the system was part of some segment. One disadvantage of this was that the size of segments was limited to 256 kilowords, just over 1 MiB; this was due to the particular hardware architecture of the machines on which Multics ran, which had a 36-bit word size and index registers of half that size.
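The 256-kiloword figure and the "just over 1 MiB" claim follow directly from the hardware parameters given above; a short arithmetic check (a sketch, not from the original text):

```python
# Arithmetic behind the 256-kiloword / "just over 1 MiB" segment limit,
# based on the hardware parameters stated above: 36-bit words and
# index registers of half the word size, i.e. 18 bits.
WORD_BITS = 36
INDEX_BITS = WORD_BITS // 2                  # 18-bit index registers
max_words = 2 ** INDEX_BITS                  # 262,144 words = 256 kilowords
segment_bytes = max_words * WORD_BITS // 8   # 1,179,648 bytes
print(max_words, segment_bytes, segment_bytes / 2 ** 20)  # 262144 1179648 1.125
```

So a full segment held 1.125 MiB of data, slightly more than one mebibyte.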
Extra code had to be used to work on files larger than this, called multisegment files. In the days when one megabyte of memory was prohibitively expensive, and before large databases and huge bitmap graphics, this limit was rarely encountered. Another major new idea of Multics was dynamic linking, in which a running process could request that other segments be added to its address space, segments which could contain code that it could execute; this allowed applications to automatically use the latest version of any external routine they called, since those routines were kept in other segments, which were dynamically linked only when a process first tried to begin execution in them. Since different processes could use different search rules, different users could end up using different versions of external routines automatically. With the appropriate settings on the Multics security facilities, the code in the other segment could gain access to data structures maintained in a different process. Thus, to interact with an application running in part as a daemon, a user's process simply performed a normal procedure-call instruction to a code segment to which it had dynamically linked.
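The link-on-first-use idea can be loosely sketched in Python (`LazySegment` and its interface are illustrative inventions, not a Multics API): a module, standing in for a code segment, is resolved only when a routine in it is first called, and each instance carries its own search rules, so different "processes" may resolve different versions of the same routine.

```python
# A loose Python analogy to Multics dynamic linking. LazySegment is an
# illustrative name, not a Multics API: the module ("segment") is loaded
# only on the first call into it, using per-instance "search rules".
import importlib
import sys

class LazySegment:
    """Defers loading a module until a routine in it is first called."""
    def __init__(self, name, search_path=None):
        self.name = name
        self.search_path = search_path   # per-process "search rules"
        self._module = None

    def call(self, func, *args):
        if self._module is None:         # link on first reference
            if self.search_path:
                sys.path.insert(0, self.search_path)
            self._module = importlib.import_module(self.name)
        return getattr(self._module, func)(*args)

math_seg = LazySegment("math")           # nothing loaded yet
print(math_seg.call("sqrt", 16.0))       # loads "math" now; prints 4.0
```

Two instances constructed with different `search_path` values would resolve the same module name against different directories, mirroring how different Multics users could end up running different versions of an external routine.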
The code in that segment could then modify data maintained and used in the daemon. When the action necessary to commence the request was completed, a simple procedure return instruction returned control of the user's process to the user's code; the single-level store and dynamic linking are still not available to their full power in other widely used operating systems, despite the rapid and enormous advances in the computer field since the 1960s. They are becoming more accepted and available in more limited forms, for example, dynamic linking. Multics supported aggressive on-line reconfiguration: central processing units, memory banks, disk drives, etc. could be added and removed while the system continued operating. At the MIT system, where most early software development was done, it was common practice to split the multiprocessor system into two separate systems during off-hours by incrementally removing enough components to form a second working system, leaving the rest still running for the original logged-in users.
System software development and testing could be done on the second system, then the components of the second system were added back to the main user system without ever having shut it down. Multics supported multiple CPUs. Multics was the first major operating system to be designed as a secure system from the outset. Despite this, early versions of Multics were broken into repeatedly; this led to further work that made the system much more secure and prefigured modern security engineering techniques. Break-ins became rare once the second-generation hardware base was adopted. Multics was also the first operating system to provide a hierarchical file system, and file names could be of alm
Mainframe computers or mainframes are computers used by large organizations for critical applications. They are larger and have more processing power than some other classes of computers: minicomputers, servers and personal computers; the term referred to the large cabinets called "main frames" that housed the central processing unit and main memory of early computers. The term was used to distinguish high-end commercial machines from less powerful units. Most large-scale computer system architectures were established in the 1960s, but continue to evolve. Mainframe computers are used as servers. Modern mainframe design is characterized less by raw computational speed and more by:
- Redundant internal engineering resulting in high reliability and security
- Extensive input-output facilities with the ability to offload to separate engines
- Strict backward compatibility with older software
- High hardware and computational utilization rates through virtualization to support massive throughput
- Hot-swapping of hardware, such as processors and memory
Their high stability and reliability enable these machines to run uninterrupted for long periods of time, with mean time between failures measured in decades. Mainframes have high availability, one of the primary reasons for their longevity, since they are typically used in applications where downtime would be costly or catastrophic; the term reliability, availability and serviceability (RAS) is a defining characteristic of mainframe computers. Proper planning and implementation are required to realize these features. In addition, mainframes are more secure than other computer types: the NIST vulnerabilities database, US-CERT, rates traditional mainframes such as IBM Z, Unisys Dorado and Unisys Libra as among the most secure, with vulnerabilities in the low single digits as compared with thousands for Windows, UNIX, and Linux. Software upgrades usually require setting up the operating system or portions thereof, and are non-disruptive only when using virtualizing facilities such as IBM z/OS and Parallel Sysplex, or Unisys XPCL, which support workload sharing so that one system can take over another's application while it is being refreshed.
In the late 1950s, mainframes had only a rudimentary interactive interface and used sets of punched cards, paper tape, or magnetic tape to transfer data and programs. They operated in batch mode to support back office functions such as payroll and customer billing, most of which were based on repeated tape-based sorting and merging operations followed by line printing to preprinted continuous stationery; when interactive user terminals were introduced, they were used almost exclusively for applications rather than program development. Typewriter and Teletype devices were common control consoles for system operators through the early 1970s, although ultimately supplanted by keyboard/display devices. By the early 1970s, many mainframes acquired interactive user terminals operating as timesharing computers, supporting hundreds of users along with batch processing. Users gained access through keyboard/typewriter terminals and specialized text terminal CRT displays with integral keyboards, or later from personal computers equipped with terminal emulation software.
By the 1980s, many mainframes supported graphic display terminals and terminal emulation, but not graphical user interfaces. This form of end-user computing became obsolete in the 1990s due to the advent of personal computers provided with GUIs. After 2000, modern mainframes partially or entirely phased out classic "green screen" and color display terminal access for end-users in favour of Web-style user interfaces. The infrastructure requirements were drastically reduced during the mid-1990s, when CMOS mainframe designs replaced the older bipolar technology. IBM claimed that its newer mainframes reduced data center energy costs for power and cooling and reduced physical space requirements compared to server farms. Modern mainframes can run multiple different instances of operating systems at the same time; this technique of virtual machines allows applications to run as if they were on physically distinct computers. In this role, a single mainframe can replace higher-functioning hardware services available to conventional servers.
While mainframes pioneered this capability, virtualization is now available on most families of computer systems, though not always to the same degree or level of sophistication. Mainframes can add or hot swap system capacity without disrupting system function, with specificity and granularity to a level of sophistication not usually available with most server solutions. Modern mainframes, notably the IBM zSeries, System z9 and System z10 servers, offer two levels of virtualization: logical partitions (LPARs) and virtual machines. Many mainframe customers run two machines: one in their primary data center and one in their backup data center—fully active/active, or on standby—in case there is a catastrophe affecting the first building. Test, development and production workload for applications and databases can run on a single machine, except for extremely large demands where the capacity of one machine might be limiting; such a two-mainframe installation can support continuous business service, avoiding both planned and unplanned outages.
In practice many customers use multiple mainframes linked either by Parallel Sysplex and shared DASD, or with shared, geographically dispersed storage provided by EMC
Honeywell 6000 series
The Honeywell 6000 series computers were rebadged versions of General Electric's 600-series mainframes manufactured by Honeywell International, Inc. from 1970 to 1989. Honeywell acquired the line when it purchased GE's computer division in 1970 and continued to develop the machines under a variety of names for many years. The high-end model was the 6080, with performance of approximately 1 MIPS. Smaller models were the 6070, 6060, 6050, 6040, and 6030. In 1973 a low-end 6025 was introduced. The even-numbered models included an Enhanced Instruction Set (EIS) feature, which added decimal arithmetic and storage-to-storage operations to the original word-oriented architecture. In 1973 Honeywell introduced the 6180, a 6000-series machine with addressing modifications to support the Multics operating system. In 1974 Honeywell released the 68/80, which added cache memory in each processor and support for a large directly addressable memory. In 1975 the 6000-series systems were renamed Level 66, with models that were faster and offered larger memories.
In 1977 the line was again renamed, to 66/DPS, and in 1979 to DPS-8, again with a small performance improvement, to 1.7 MIPS. The Multics model was the DPS-8/M. In 1989, Honeywell sold its computer division to the French company Groupe Bull, which continued to market compatible machines. 6000-series systems were said to be "memory oriented" — a system controller in each memory module arbitrated requests from the other system components. Memory modules contained 128 K 36-bit words with a 1.2 μs cycle time; each module provided two-way interleaved memory. Devices called Input/Output Multiplexers (IOMs) served as intelligent I/O controllers for communication with most peripherals. The IOM supported two different types of peripheral channels; Common Peripheral Channels could handle data transfer rates up to 650,000 cps. The 6000 supported multiple IOMs; each processor and IOM had four ports for connection to memory. Memory protection and relocation were accomplished using a base and bounds register in the processor, the Base Address Register (BAR).
The IOM was passed the contents of the BAR for each I/O request, allowing it to use virtual rather than physical addresses. A variety of communications controllers could be used with the system; the older DATANET-30 and the DATANET 305, intended for smaller systems, supported up to twelve terminals and attached to an IOM. The DATANET 355 processor attached directly to the system controller in a memory module and was capable of supporting up to 200 terminals. The CPU operated on 36-bit words; addresses were 18 bits. The Accumulator Register was 72 bits, and could be accessed separately as two 36-bit registers or four 18-bit registers. An eight-bit Exponent Register contained the exponent for floating point operations. There were eight eighteen-bit index registers, X0 through X7. The 18-bit Base Address Register contained the base address and the number of 1024-word blocks assigned to the program. The system also included several special-purpose registers: an 18-bit Instruction Counter and a 27-bit Timer Register with a resolution of 2 μs.
Sets of special registers were used for fault debugging. The EIS instruction set added eight additional 24-bit registers, AR0 through AR7; these registers contained an 18-bit word address, a 2-bit address of a character within the word, and a 4-bit address of a bit within the character.

Address register format:

              1  1 1 2  2
    0         7  8 9 0  3
   +-------------------+--+----+
   |       Word        |C | Bit|
   +-------------------+--+----+

The 6000-series machines' basic instruction set had more than 185 single-address, one-word instructions. In the EIS instructions, the addresses pointed to operand descriptors which contained the actual operand address and additional information.

Basic instruction format:

              1  1          2 2 2 2      3
    0         7  8          6 7 8 9      5
   +-------------------+-----------+-+------+
   |         Y         |    OP     |I| Tag  |
   +-------------------+-----------+-+------+

Y is the address field. OP is the opcode; the additional bit 27 is the opcode extension bit. I is the interrupt inhibit bit. Tag indicates the type of address modification to be performed.
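The basic instruction layout can be expressed as a small field decoder (a sketch; the bit numbering follows the convention in the diagrams, where bit 0 is the most significant bit of the 36-bit word, so Y occupies bits 0-17, OP bits 18-27 including the extension bit, I bit 28, and Tag bits 29-35):

```python
# Decoder for the 36-bit basic instruction word described above.
# Bit 0 is the MSB, so fields are extracted from the top down.
def decode_basic(word):
    assert 0 <= word < 2 ** 36, "instructions are one 36-bit word"
    return {
        "Y":   (word >> 18) & 0o777777,  # 18-bit address field (bits 0-17)
        "OP":  (word >> 8) & 0o1777,     # opcode incl. extension bit (18-27)
        "I":   (word >> 7) & 0o1,        # interrupt inhibit bit (28)
        "Tag": word & 0o177,             # address-modification tag (29-35)
    }

# Round-trip check on an arbitrary encoding of the fields:
fields = decode_basic((0o123456 << 18) | (0o235 << 8) | (1 << 7) | 0o20)
# fields == {"Y": 0o123456, "OP": 0o235, "I": 1, "Tag": 0o20}
```

Octal masks are used deliberately: 36-bit Honeywell/GE machines were conventionally documented in octal, where each 3-bit group is one digit.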
The EIS instructions were two-word to four-word instructions, depending on the specific instruction.

EIS instruction format:

                 1  1          2 2 2 2      3
   word  0      7   8          6 7 8 9      5
        +-------------------+-----------+-+------+
     0  |  Variable field   |    OP     |I| MF1  |
        +-------------------+-----------+-+------+
     1  | Operand descriptor 1 or indirect word  |
        +----------------------------------------+
     2  | Operand descriptor 2 or indirect word  |
        +- - - - - - - - - - - - - - - - - - - - +
     3  | Operand descriptor 3 or indirect word  |
        +- - - - - - - - - - - - - - - - - - - - +

Variable field contains information relating to the specific instruction. OP is the EIS opcode. I is the interrupt inhibit bit. MF1 describes the address modification to be performed for descriptor 1. If operands 2 and 3 are present, the variable field also contains MF2 and MF3. Multiple levels of indirect addressing were supported. Indirect addresses had the same format as instructions, with the address modification indicated by the tag field of t
In computing, time-sharing is the sharing of a computing resource among many users at the same time by means of multiprogramming and multi-tasking. Its introduction in the 1960s and emergence as the prominent model of computing in the 1970s represented a major technological shift in the history of computing. By allowing a large number of users to interact concurrently with a single computer, time-sharing lowered the cost of providing computing capability, made it possible for individuals and organizations to use a computer without owning one, and promoted the interactive use of computers and the development of new interactive applications. The earliest computers were extremely expensive devices, and very slow in comparison to later models. Machines were typically dedicated to a particular set of tasks and operated by control panels, the operator manually entering small programs via switches in order to load and run a series of programs; these programs might take hours, or even weeks, to run. As computers grew in speed, run times dropped, and soon the time taken to start up the next program became a concern.
Batch processing methodologies evolved to decrease these "dead periods" by queuing up programs so that as soon as one program completed, the next would start. To support a batch processing operation, a number of comparatively inexpensive card punch or paper tape writers were used by programmers to write their programs "offline"; when typing was complete, the programs were submitted to the operations team, which scheduled them to be run. Important programs were started quickly; when the program run was completed, the output was returned to the programmer. The complete process might take days; the alternative of allowing the user to operate the computer directly was far too expensive to consider. This was because a user might spend long periods entering code while the computer remained idle; this situation limited interactive development to those organizations that could afford to waste computing cycles: large universities for the most part. Programmers at the universities decried the behaviors that batch processing imposed, to the point that Stanford students made a short film humorously critiquing it.
They experimented with new ways to interact directly with the computer, a field today known as human–computer interaction. Time-sharing was developed out of the realization that while any single user would make inefficient use of a computer, a large group of users together would not; this was due to the pattern of interaction: typically an individual user entered bursts of information followed by long pauses, but a group of users working at the same time would mean that the pauses of one user would be filled by the activity of the others. Given an optimal group size, the overall process could be very efficient. Small slices of time spent waiting for disk, tape, or network input could be granted to other users; the concept is claimed to have been first described by John Backus in the 1954 summer session at MIT, and by Bob Bemer in his 1957 article "How to consider a computer" in Automatic Control Magazine. In a paper published in December 1958 by W. F. Bauer, he wrote that "The computers would handle a number of problems concurrently.
Organizations would have input-output equipment installed on their own premises and would buy time on the computer much the same way that the average household buys power and water from utility companies." Implementing a system able to take advantage of this was initially difficult. Batch processing was effectively a methodological development on top of the earliest systems. Since computers still ran single programs for single users at any time, the primary change with batch processing was the time delay between one program and the next. Developing a system that supported multiple users at the same time was a completely different concept; the "state" of each user and their programs would have to be kept in the machine, and then switched between quickly. This would take up computer cycles, and on the slow machines of the era this was a concern. However, as computers rapidly improved in speed and, especially, in the size of the core memory in which users' states were retained, the overhead of time-sharing continually decreased, relatively speaking. The first project to implement time-sharing of user programs was initiated by John McCarthy at MIT in 1959, initially planned on a modified IBM 704, and later on an additionally modified IBM 709.
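The earlier burst-and-pause observation can be made concrete with a toy utilization model (an illustration, not from the original text): if each user needs the CPU only a small fraction of the time, the probability that nobody needs it shrinks geometrically as users are added, so overall utilization climbs rapidly.

```python
# Back-of-the-envelope model of why time-sharing pays off, assuming each
# user independently wants the CPU only 5% of the time ("thinking" the
# other 95%). Both the model and the 5% figure are illustrative.
def cpu_utilization(n_users, active_fraction=0.05):
    """Probability that at least one of n users wants the CPU."""
    idle = (1.0 - active_fraction) ** n_users
    return 1.0 - idle

for n in (1, 10, 50):
    print(n, round(cpu_utilization(n), 3))
```

A lone user keeps the CPU busy only 5% of the time, ten users already push utilization past 40%, and fifty users past 90%, which is the economic argument the paragraph above makes in prose.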
One of the deliverables of the project, known as the Compatible Time-Sharing System or CTSS, was demonstrated in November 1961. CTSS has a good claim to be the first time-sharing system and remained in use until 1973. Another contender for the first demonstrated time-sharing system was PLATO II, created by Donald Bitzer at a public demonstration at Robert Allerton Park near the University of Illinois in early 1961, but this was a special purpose system. Bitzer has long said that the PLATO project would have gotten the patent on time-sharing if only the University of Illinois had not lost the patent for 2 years. JOSS began time-sharing service in January 1964; the first commercially successful time-sharing system was the Dartmouth Time Sharing System. Throughout the late 1960s and the 1970s, computer terminals were multiplexed onto large institutional mainframe computers, which in many implementations sequentially polled the terminals to see whether any additional data was available or action was requested by the computer user.
Interconnection technology was interrupt driven; some of these systems used parallel data trans
Honeywell International Inc. is an American multinational conglomerate company that makes a variety of commercial and consumer products, engineering services and aerospace systems for a wide variety of customers, from private consumers to major corporations and governments. The company operates four business units, known as Strategic Business Units – Honeywell Aerospace, Honeywell Building Technologies, Honeywell Safety and Productivity Solutions, and Honeywell Performance Materials and Technologies. Honeywell is a Fortune 100 company. In 2018, Honeywell ranked 77th in the Fortune 500. Honeywell has a global workforce of 130,000, of whom 58,000 are employed in the United States. The company is headquartered in New Jersey. Its current chief executive officer is Darius Adamczyk. The company and its corporate predecessors were part of the Dow Jones Industrial Average Index from December 7, 1925 until February 9, 2008. The company's current name, Honeywell International Inc., is the product of a merger in which Honeywell Inc. was acquired by the much larger AlliedSignal in 1999.
The company headquarters were consolidated with AlliedSignal's headquarters in Morristown, New Jersey. In 2015, the headquarters were moved to Morris Plains. On November 30, 2018, Honeywell announced that its corporate headquarters would be moved to Charlotte. Honeywell has many brands that commercial and retail consumers may recognize, including its line of home thermostats and Garrett turbochargers. In addition to consumer home products, Honeywell itself produces thermostats, security alarm systems, air cleaners and dehumidifiers; the company licenses its brand name for use in various retail products made by partner manufacturers, including air conditioners, fans, security safes, home generators, paper shredders. Although Mark Honeywell’s Heating Specialty Company was not established until 1906, today’s Honeywell traces its roots back to 1885 when the Swiss-born Albert Butz invented the damper-flapper, a thermostat for coal furnaces, to automatically regulate heating systems; the following year he founded the Butz Thermo-Electric Regulator Company.
In 1888, after a falling out with his investors, Butz left the company and transferred the patents to the legal firm Paul and Merwin, who renamed the company the Consolidated Temperature Controlling Company. As the years passed, CTCC struggled with growing debts and underwent several name changes in an attempt to keep the business afloat. After the company was renamed the Electric Heat Regulator Company in 1893, W. R. Sweatt, a stockholder in the company, was sold "an extensive list of patents" and named secretary-treasurer. On February 23, 1898 he bought out the remaining shares of the company from the other stockholders. In 1906, Mark Honeywell founded the Honeywell Heating Specialty Company in Wabash, Indiana, to manufacture and market his invention, the mercury seal generator; as Honeywell’s company grew, it began to clash with the renamed Minneapolis Heat Regulator Company. This led to the merging of both companies into the publicly held Minneapolis-Honeywell Regulator Company in 1927.
Honeywell was named the company's first president, alongside W. R. Sweatt as its first chairman. W. R. Sweatt and his son Harold provided 75 years of uninterrupted leadership for the company. W. R. Sweatt survived rough spots and turned an innovative idea – thermostatic heating control – into a thriving business. Harold, who took over in 1934, led Honeywell through a period of growth and global expansion that set the stage for Honeywell to become a global technology leader; the merger into the Minneapolis-Honeywell Regulator Company proved to be a saving grace for the corporation. The combined assets were valued at over $3.5 million, with less than $1 million in liabilities, just months before Black Monday. In 1931, Minneapolis-Honeywell began a period of expansion and acquisition when they purchased the Time-O-Stat Controls Company, giving the company access to a greater number of patents to be used in their controls systems. 1934 marked Minneapolis-Honeywell’s first foray into the international market, when they acquired the Brown Instrument Company and inherited their relationship with the Yamatake Company of Tokyo, a Japan-based distributor. Later that same year, Minneapolis-Honeywell would start distributorships across Canada, as well as one in the Netherlands, their first European office.
This expansion into international markets continued in 1936, with their first distributorship in London, as well as their first foreign assembly facility being established in Canada. By 1937, ten years after the merger, Minneapolis-Honeywell had over 3,000 employees, with $16 million in annual revenue. Having survived the Depression, Minneapolis-Honeywell was approached by the US military for engineering and manufacturing projects. In 1941, Minneapolis-Honeywell developed a superior tank periscope and camera stabilizers, as well as the C-1 autopilot; the C-1 revolutionized precision bombing in the war effort and was used on the two B-29 bombers that dropped atomic bombs on Japan in 1945. The success of these projects led Minneapolis-Honeywell to open an Aero division in Chicago on October 5, 1942. This division was responsible for the development of the formation stick to control autopilots, more accurate gas gauges for planes, and the turbo supercharger. In 1950, Minneapolis-Honeywell’s Aero division was contracted for the controls on the first US nuclear submarine, USS Nautilus. The following year, the company acquired Intervox Company for
The Intel 80486, known as the i486 or 486, is a higher-performance follow-up to the Intel 80386 microprocessor. Introduced in 1989, the 80486 was the first pipelined x86 design, as well as the first x86 chip to use more than a million transistors, owing to a large on-chip cache and an integrated floating-point unit; it represents the fourth generation of binary-compatible CPUs since the original 8086 of 1978. A 50 MHz 80486 executes around 40 million instructions per second on average and can reach 50 MIPS peak performance. The 80486 was announced at Spring Comdex in April 1989. At the announcement, Intel stated that samples would be available in the third quarter of 1989 and that production quantities would ship in the fourth quarter. The first 80486-based PCs were announced in late 1989, but some advised waiting until 1990 to purchase an 80486 PC because of early reports of bugs and software incompatibilities. The instruction set of the i486 is similar to that of its predecessor, the Intel 80386, with only a few extra instructions added, such as CMPXCHG, which implements a compare-and-swap atomic operation, and XADD, a fetch-and-add atomic operation that returns the original value.
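The behavior of these two new instructions can be sketched in plain C. This is a minimal model of their architectural semantics (operand movement and the ZF result), not the hardware implementation, and the function names are illustrative only:

```c
/* Sketch of the i486's CMPXCHG and XADD semantics, modeled as C functions.
   On real hardware these execute atomically when used with a LOCK prefix. */
#include <stdint.h>
#include <stdbool.h>

/* CMPXCHG dest, src: if *dest equals the accumulator, store src into
   *dest and set ZF; otherwise load the current *dest into the accumulator. */
static bool cmpxchg(uint32_t *dest, uint32_t *accum, uint32_t src) {
    if (*dest == *accum) {
        *dest = src;
        return true;   /* ZF = 1: the swap happened */
    }
    *accum = *dest;
    return false;      /* ZF = 0: accumulator now holds the observed value */
}

/* XADD dest, src: fetch-and-add — the caller receives the original
   value of *dest, and *dest receives the sum. */
static uint32_t xadd(uint32_t *dest, uint32_t src) {
    uint32_t old = *dest;
    *dest = old + src;
    return old;
}
```

A compare-and-swap loop built on `cmpxchg` is the classic way to implement lock-free updates; `xadd` is the primitive behind atomic counters.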
From a performance point of view, the architecture of the i486 is a vast improvement over the 80386. It has an on-chip unified instruction and data cache, an on-chip floating-point unit, and an enhanced bus interface unit. Thanks to the tight pipelining, sequences of simple instructions could sustain single-clock-cycle throughput; these improvements yielded a rough doubling of integer ALU performance over the 386 at the same clock rate. A 16 MHz 80486 therefore performed similarly to a 33 MHz 386, and the older design had to reach 50 MHz to be comparable with a 25 MHz 80486 part. An 8 KB on-chip SRAM cache stores the most frequently used instructions and data; the 386 supported only a slower off-chip cache. An enhanced external bus protocol enables cache coherency, and a new burst mode for memory accesses fills a 16-byte cache line within 5 bus cycles; the 386 needed 8 bus cycles to transfer the same amount of data. The tightly coupled pipeline completes a simple instruction such as ALU reg,reg or ALU reg,imm every clock cycle.
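The 5-versus-8 bus-cycle figures follow from simple arithmetic, sketched below under the usual assumption of a 2-1-1-1 burst pattern (a two-cycle lead-off transfer followed by one cycle per subsequent transfer) versus two cycles per non-burst transfer; the exact lead-off timing depends on the memory system:

```c
/* Back-of-envelope check of the cache-line fill figures quoted above:
   a 16-byte line moved over a 32-bit (4-byte) data bus. */

/* i486 burst mode, assumed 2-1-1-1: 2-cycle lead-off, then 1 cycle each. */
static int burst_fill_cycles(int line_bytes, int bus_bytes) {
    int transfers = line_bytes / bus_bytes;   /* 4 doubleword transfers */
    return 2 + (transfers - 1);               /* 2 + 1 + 1 + 1 = 5 */
}

/* 386-style non-burst access: every transfer pays the full 2 cycles. */
static int nonburst_fill_cycles(int line_bytes, int bus_bytes) {
    return (line_bytes / bus_bytes) * 2;      /* 4 * 2 = 8 */
}
```

At the same bus clock, burst mode thus cuts line-fill latency by 3 cycles, which matters because every cache miss triggers a full line fill.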
The 386 needed two clock cycles for such an instruction. Other improvements include an integrated FPU with a dedicated local bus, improved MMU performance, and new instructions: XADD, BSWAP, CMPXCHG, INVD, WBINVD, and INVLPG. Just as on the 80386, a simple flat 4 GB memory model could be implemented by setting all "segment selector" registers to a neutral value in protected mode, or by setting the "segment registers" to zero in real mode, using only the 32-bit "offset registers" as a linear 32-bit virtual address and bypassing the segmentation logic. Virtual addresses were normally mapped onto physical addresses by the paging system, except when it was disabled. Just as with the 80386, circumventing memory segmentation could improve performance in some operating systems and applications. On a typical PC motherboard, either four matched 30-pin SIMMs or one 72-pin SIMM per bank was required to fit the 80486's 32-bit data bus; the address bus used 30 bits, complemented by four byte-select pins to allow any 8/16/32-bit selection. This meant the chip could address up to 4 GB of physical memory (2^30 doublewords of 4 bytes each). There are several suffixes and variants.
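Of the instructions listed above, BSWAP is the easiest to illustrate: it reverses the byte order of a 32-bit register, which is exactly an endianness conversion. A minimal C sketch of its effect (the function name is illustrative, not an Intel mnemonic):

```c
/* Sketch of what BSWAP does to a 32-bit value: the four bytes are
   reversed, converting between little- and big-endian representations. */
#include <stdint.h>

static uint32_t bswap32(uint32_t x) {
    return  (x >> 24)                  /* byte 3 -> byte 0 */
         | ((x >>  8) & 0x0000FF00u)   /* byte 2 -> byte 1 */
         | ((x <<  8) & 0x00FF0000u)   /* byte 1 -> byte 2 */
         |  (x << 24);                 /* byte 0 -> byte 3 */
}
```

For example, `bswap32(0x12345678)` yields `0x78563412`. One BSWAP replaced the three-instruction rotate sequences needed on the 386, which mattered for networking code that converts byte order constantly.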
Other variants include: Intel RapidCAD: a specially packaged Intel 486DX and a dummy floating-point unit, designed as pin-compatible replacements for an Intel 80386 processor and 80387 FPU. i486SL-NM: an i486SL based on the i486SX. i487SX: an i486DX with one extra pin, sold as an FPU upgrade for i486SX systems. i486 OverDrive: an i486SX, i486SX2, i486DX2, or i486DX4. Marketed as upgrade processors, some models had different pinouts or voltage-handling abilities from "standard" chips of the same speed stepping. Fitted to a coprocessor or "OverDrive" socket on the motherboard, they worked the same way as the i487SX. The specified maximal internal clock frequency ranged from 16 to 100 MHz. The 16 MHz i486SX model was used by Dell Computers. One of the few 80486 models specified for a 50 MHz bus had overheating problems and was moved to the 0.8-micrometre fabrication process. However, problems continued when the 486DX-50 was installed in local-bus systems due to the high bus speed, making it rather unpopular with mainstream consumers, as local-bus video was considered a requirement at the time, though it remained popular with users of EISA systems.
The 486DX-50 was soon eclipsed by the clock-doubled i486DX2, which, although it ran the internal CPU logic at twice the external bus speed, could be slower in bus-bound work because its external bus ran at only 25 MHz. Overall, though, the 486DX2 at 66 MHz was faster than the 486DX-50. More powerful 80486 iterations such as the OverDrive and DX4 were less popular, as they came out after Intel had re