The Intel 80386, commonly known as the i386 or just 386, is a 32-bit microprocessor introduced in 1985. The first versions had 275,000 transistors, and the chip was the CPU of many workstations and high-end personal computers of the time. As the original implementation of the 32-bit extension of the 80286 architecture, the 80386 instruction set, programming model, and binary encodings are still the common denominator for all 32-bit x86 processors, termed the i386 architecture, x86, or IA-32, depending on context. The 32-bit 80386 can execute most code intended for earlier 16-bit processors such as the 8086 and 80286 that were ubiquitous in early PCs. Over the years, successively newer implementations of the same architecture have become several hundred times faster than the original 80386; a 33 MHz 80386 was measured to operate at about 11.4 MIPS. The 80386 was introduced in October 1985, while manufacturing of the chips in significant quantities commenced in June 1986. Mainboards for 80386-based computer systems were cumbersome and expensive at first, but manufacturing was rationalized upon the 80386's mainstream adoption.
The first personal computer to make use of the 80386 was designed and manufactured by Compaq, marking the first time a fundamental component in the IBM PC compatible de facto standard was updated by a company other than IBM. In May 2006, Intel announced that 80386 production would stop at the end of September 2007. Although it had long been obsolete as a personal computer CPU, Intel and others had continued making the chip for embedded systems; such systems using an 80386 or one of its many derivatives are common in aerospace technology and electronic musical instruments, among others. Some mobile phones also used 80386 processors, such as the BlackBerry 950 and Nokia 9000 Communicator. The processor was a significant evolution in the x86 architecture, extending a long line of processors that stretched back to the Intel 8008. The predecessor of the 80386 was the Intel 80286, a 16-bit processor with a segment-based memory management and protection system; the 80386 added a 32-bit architecture and a paging translation unit, which made it much easier to implement operating systems that used virtual memory.
It also offered hardware support for debugging via dedicated debug registers. The 80386 featured three operating modes: real mode, protected mode, and virtual 8086 mode. Protected mode, which debuted in the 286, was extended to allow the 386 to address up to 4 GB of memory. The all-new virtual 8086 mode made it possible to run one or more real-mode programs in a protected environment, although some programs were not compatible. The ability of a 386 to be set up to act as if it had a flat memory model in protected mode, despite the fact that it uses a segmented memory model in all modes, was arguably the most important feature change for the x86 processor family until AMD released x86-64 in 2003. Several new instructions were added to the 386: BSF, BSR, BT, BTS, BTR, BTC, CDQ, CWDE, LFS, LGS, LSS, MOVSX, MOVZX, SETcc, SHLD, SHRD. Two new segment registers (FS and GS) were added for general-purpose programs, and the single Machine Status Word of the 286 grew into eight control registers, CR0–CR7. Debug registers DR0–DR7 were added for hardware breakpoints; new forms of the MOV instruction are used to access them.
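The semantics of a few of the new bit-manipulation instructions can be sketched in high-level terms. The following Python functions model the documented behavior of BSF (bit scan forward), BSR (bit scan reverse), and BT/BTC (bit test, bit test and complement); they are teaching sketches of the instruction semantics, not emulator code, and the function names are merely borrowed mnemonics.

```python
def bsf(value):
    """BSF: index of the lowest set bit; None models the ZF=1 case (value == 0)."""
    if value == 0:
        return None
    return (value & -value).bit_length() - 1

def bsr(value):
    """BSR: index of the highest set bit; None models the ZF=1 case (value == 0)."""
    if value == 0:
        return None
    return value.bit_length() - 1

def bt(value, index):
    """BT: copy bit `index` into the carry flag (returned here as a bool)."""
    return bool((value >> index) & 1)

def btc(value, index):
    """BTC: return (value with bit `index` toggled, old bit as the carry flag)."""
    return value ^ (1 << index), bt(value, index)
```

For example, `bsf(0b10100)` yields 2 and `bsr(0b10100)` yields 4, matching the forward and reverse scan directions of the two instructions.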
Chief architect in the development of the 80386 was John H. Crawford, who was responsible for extending the 80286 architecture and instruction set to 32 bits and who led the microprogram development for the 80386 chip. The 80486 and P5 Pentium line of processors were descendants of the 80386 design. The following data types are directly supported, and thus implemented by one or more 80386 machine instructions: 8-, 16-, 32-, and 64-bit integers, either signed or unsigned; offsets (16- or 32-bit displacements referring to a memory location); pointers (a 16-bit selector together with a 16- or 32-bit offset); characters; strings (sequences of 8-, 16-, or 32-bit words); BCD (decimal digits represented by unpacked bytes); and packed BCD (two BCD digits in one byte). The following 80386 assembly source code is for a subroutine named _strtolower that copies a null-terminated ASCIIZ character string from one location to another, converting all alphabetic characters to lower case.
The string is copied one byte at a time. The example code uses the EBP register to establish a call frame, an area on the stack that contains all of the parameters and local variables for the execution of the subroutine; this kind of calling convention supports reentrant and recursive code and has been used by Algol-like languages since the late 1950s. A flat memory model is assumed; specifically, the DS and ES segments are assumed to address the same region of memory. In 1988, Intel introduced the 80386SX, mostly referred to as the 386SX, a cut-down version of the 80386 with a 16-bit data bus intended for lower-cost PCs aimed at the home and small-business markets, while the 386DX would remain the high-end variant used in workstations and other demanding tasks. The CPU remained 32-bit internally, but the narrower 16-bit external bus reduced cost at some expense in performance.
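The assembly listing itself is not reproduced here, but the byte-at-a-time copy-and-lowercase loop it describes can be modeled as a sketch in Python over a NUL-terminated byte buffer (this is an illustration of the routine's behavior, not the original 80386 code):

```python
def strtolower(src):
    """Copy a NUL-terminated byte string, lowercasing 'A'..'Z' one byte at a time.

    A behavioral sketch of the _strtolower subroutine described above.
    """
    dest = bytearray()
    i = 0
    while True:
        b = src[i]
        if 0x41 <= b <= 0x5A:        # 'A'..'Z': set bit 5 to lowercase
            b |= 0x20
        dest.append(b)
        if b == 0:                   # copy the terminating NUL, then stop
            break
        i += 1
    return bytes(dest)
```

For example, `strtolower(b"Hello, WORLD!\x00")` returns `b"hello, world!\x00"`; non-alphabetic bytes pass through unchanged, as in the assembly routine.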
IBM Personal Computer
The IBM Personal Computer, commonly known as the IBM PC, is the original version and progenitor of the IBM PC compatible hardware platform. It is IBM model number 5150 and was introduced on August 12, 1981. It was created by a team of engineers and designers under the direction of Don Estridge of the IBM Entry Systems Division in Boca Raton, Florida. The generic term "personal computer" was in use years before 1981, applied as early as 1972 to Xerox PARC's Alto, but because of the success of the IBM Personal Computer, the term "PC" came to mean a desktop microcomputer compatible with IBM's Personal Computer branded products. Since the machine was based on an open architecture, within a short time of its introduction third-party suppliers of peripheral devices, expansion cards, and software proliferated, and "IBM compatible" became an important criterion for sales growth. International Business Machines, one of the world's largest companies, had a 62% share of the mainframe computer market in 1982. In the late 1970s the new personal computer industry was dominated by the Commodore PET, the Atari 8-bit family, the Apple II, Tandy Corporation's TRS-80, and various CP/M machines.
With $150 million in sales by 1979 and projected annual growth of more than 40% in the early 1980s, the microcomputer market was large enough for IBM's attention. Other large technology companies such as Hewlett-Packard, Texas Instruments, and Data General had entered it, and some large IBM customers were buying Apples, so the company saw introducing its own personal computer as both an experiment in a new market and a defense against rivals large and small. In 1980 and 1981 rumors spread of an IBM personal computer, perhaps a miniaturized version of the IBM System/370, while Matsushita acknowledged that it had discussed with IBM the possibility of manufacturing a personal computer for the American company. The Japanese project, codenamed "Go", ended before the 1981 release of the American-designed IBM PC, codenamed "Chess", but the two simultaneous projects further confused rumors about the forthcoming product. Data General's and TI's small computers were not successful, but observers expected AT&T to soon enter the computer industry, and other large companies such as Exxon, Montgomery Ward, and Sony were designing their own microcomputers.
Xerox, which had its Xerox PARC laboratory's sophisticated technology, produced the 820 to introduce a personal computer before IBM, becoming the second Fortune 500 company after Tandy to do so. Whether IBM had waited too long to enter an industry in which Tandy and others were successful was unclear. An observer stated that "IBM bringing out a personal computer would be like teaching an elephant to tap dance." Successful microcomputer company Vector Graphic's fiscal 1980 revenue was $12 million. A single IBM computer in the early 1960s cost as much as $9 million, occupied a quarter acre of air-conditioned space, and had a staff of 60 people. The "Colossus of Armonk" sold only through its own sales force, had no experience with resellers or retail stores, and did not introduce the first product designed to work with non-IBM equipment until 1980. Another observer claimed that IBM made decisions so slowly that, when tested, "what they found is that it would take at least nine months to ship an empty box"; as with other large computer companies, its new products typically required about four to five years for development.
IBM had to learn how to develop, mass-produce, and market new computers. While the company traditionally let others pioneer a new market—IBM released its first commercial computer a year after Remington Rand's UNIVAC in 1951, but within five years had 85% of the market—the personal-computer development and pricing cycles were much faster than for mainframes, with products designed in a few months and obsolete quickly. Many in the microcomputer industry resented IBM's power and wealth, and disliked the perception that an industry founded by startups needed a latecomer so staid that it had a strict dress code and an employee songbook. The potential importance to microcomputers of a company so prestigious that a popular saying in American companies stated "No one got fired for buying IBM" was nonetheless clear. InfoWorld, which described itself as "The Newsweekly for Microcomputer Users", stated that "for my grandmother, for millions of people like her, IBM and computer are synonymous". Byte stated in an editorial just before the announcement of the IBM PC: Rumors abound about personal computers to come from giants such as Digital Equipment Corporation and the General Electric Company.
But there is no contest. IBM's new personal computer... is far and away the media star, not because of its features, but because it exists at all. When the number eight company in the Fortune 500 enters the field, news... The influence of a personal computer made by a company whose name has come to mean "computer" to most of the world is hard to contemplate. The editorial acknowledged that "some factions in our industry have looked upon IBM as the 'enemy'", but concluded with optimism: "I want to see personal computing take a giant step." Desktop-sized programmable calculators by HP had evolved into the HP 9830 BASIC-language computer by 1972. In 1972–1973 a team led by Dr. Paul Friedl at the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP (Special Computer APL Machine Portable).
The PlayStation 2 is a home video game console developed by Sony Computer Entertainment. It is the successor to the original PlayStation console and the second iteration in the PlayStation lineup of consoles. It was released in 2000 and competed with Sega's Dreamcast, Nintendo's GameCube, and Microsoft's Xbox in the sixth generation of video game consoles. Announced in 1999, the PlayStation 2 offered backward compatibility for its predecessor's DualShock controller, as well as for its games. The PlayStation 2 is the best-selling video game console of all time, selling over 155 million units, with 150 million confirmed by Sony in 2011. More than 3,874 game titles have been released for the PS2 since launch, and more than 1.5 billion copies have been sold. Sony manufactured several smaller, lighter revisions of the console, known as Slimline models, beginning in 2004. In 2006, Sony announced and launched its successor, the PlayStation 3. Even after the release of its successor, the PlayStation 2 remained popular well into the seventh generation and continued to be produced until January 4, 2013, when Sony announced that the PlayStation 2 had been discontinued after 12 years of production – one of the longest runs for a video game console.
Despite the announcement, new games for the console continued to be produced until the end of 2013, including Final Fantasy XI: Seekers of Adoulin for Japan, FIFA 13 for North America, and Pro Evolution Soccer 2014 for Europe. Repair services for the system in Japan ended on September 7, 2018. Though Sony has kept details of the PlayStation 2's development secret, work on the console began around the time that the original PlayStation was released. Insiders stated that it was developed on the U.S. West Coast by former members of Argonaut Software. By 1997, word had leaked to the press that the console would have backwards compatibility with the original PlayStation, a built-in DVD player, and Internet connectivity. Sony announced the PlayStation 2 on March 1, 1999. The video game console was positioned as a competitor to Sega's Dreamcast, the first sixth-generation console to be released, although the main rivals of the PS2 turned out to be Nintendo's GameCube and Microsoft's Xbox. The Dreamcast itself launched successfully in North America that year, selling over 500,000 units within two weeks.
Soon after the Dreamcast's North American launch, Sony unveiled the PlayStation 2 at the Tokyo Game Show on September 20, 1999. Sony showed playable demos of upcoming PlayStation 2 games, including Gran Turismo 2000 and Tekken Tag Tournament, which showed off the console's graphics capabilities and power. The PS2 was launched in March 2000 in Japan, October in North America, and November in Europe. Sales of the console and accessories pulled in $250 million on the first day, beating the $97 million made on the first day of the Dreamcast. Directly after its release, it was difficult to find PS2 units on retailer shelves due to manufacturing delays. Another option was purchasing the console online through auction websites such as eBay, where people paid over a thousand dollars for the console. The PS2 sold well on the strength of the PlayStation brand and the console's backward compatibility, selling over 980,000 units in Japan by March 5, 2000, one day after launch. This allowed the PS2 to tap the large install base established by the PlayStation – another major selling point over the competition.
Sony added new development kits for game developers and more PS2 units for consumers. The PS2's built-in functionality also expanded its audience beyond the gamer, as its debut pricing was the same as or less than a standalone DVD player; this made the console a low-cost entry into the home theater market. The success of the PS2 at the end of 2000 caused Sega problems both financially and competitively, and Sega announced the discontinuation of the Dreamcast in March 2001, just 18 months after its successful launch. The PS2 remained the only active sixth-generation console for over six months before it would face competition from newer rivals. Many analysts predicted a close three-way matchup among the three consoles. While the PlayStation 2 theoretically had the weakest specifications of the three, it had a head start due to its installed base plus strong developer commitment, as well as a built-in DVD player. While the PlayStation 2's initial games lineup was considered mediocre, this changed during the 2001 holiday season with the release of several blockbuster games that maintained the PS2's sales momentum and held off its newer rivals.
Sony countered the Xbox by temporarily securing PlayStation 2 exclusives for anticipated games such as the Grand Theft Auto series and Metal Gear Solid 2: Sons of Liberty. Sony cut the price of the console in May 2002 from US$299 to $199 in North America, making it the same price as the GameCube and $100 less than the Xbox, and it planned a similar cut in Japan around that time. It cut the price twice in Japan in 2003. In 2006, Sony cut the cost of the console again in anticipation of the release of the PlayStation 3. Sony, unlike Sega with its Dreamcast, placed little emphasis on online gaming during the PS2's first few years, although that changed upon the launch of the online-capable Xbox. Coinciding with the release of Xbox Live, Sony released the PlayStation Network Adapter in late 2002, with several online first-party titles released alongside it, such as SOCOM: U.S. Navy SEALs.
The Motorola 68000 is a 16/32-bit CISC microprocessor designed and marketed by Motorola Semiconductor Products Sector. It implements a 32-bit instruction set, with 32-bit registers and a 32-bit internal data bus, but with a 16-bit data ALU, two 16-bit arithmetic ALUs, and a 16-bit external data bus. Introduced in 1979 with HMOS technology as the first member of the successful 32-bit Motorola 68000 series, it is software forward-compatible with the rest of the line despite being limited to a 16-bit-wide external bus. After 39 years in production, the 68000 architecture is still in use. The 68000 grew out of the MACSS project, begun in 1976 to develop a new architecture without backward compatibility. It would be a higher-powered sibling complementing the existing 8-bit 6800 line rather than a compatible successor. In the end, the 68000 did retain a bus protocol compatibility mode for existing 6800 peripheral devices, and a version with an 8-bit data bus was produced. However, the designers focused on the future, or forward compatibility, which gave the 68000 design a head start against later 32-bit instruction set architectures.
For instance, the CPU registers are 32 bits wide, though few self-contained structures in the processor itself operate on 32 bits at a time. The MACSS team drew on the influence of minicomputer processor design, such as the PDP-11 and VAX systems, which were microcode-based. In the mid-1970s, the 8-bit microprocessor manufacturers raced to introduce the 16-bit generation. National Semiconductor had been first with its IMP-16 and PACE processors in 1973–1975, but these had issues with speed. Intel had worked on its advanced 16/32-bit Intel iAPX 432 since 1975 and its Intel 8086 since 1976. Arriving late to the 16-bit arena afforded the new processor more transistors, 32-bit macroinstructions, and acclaimed general ease of use. The original MC68000 was fabricated using an HMOS process with a 3.5 µm feature size. Formally introduced in September 1979, initial samples were released in February 1980, with production chips available over the counter in November. Initial speed grades were 4, 6, and 8 MHz; 10 MHz chips became available during 1981, and 12.5 MHz chips by June 1982.
The 16.67 MHz "12F" version of the MC68000, the fastest version of the original HMOS chip, was not produced until the late 1980s. The 68k instruction set was well suited to implementing Unix, and the 68000 and its successors became the dominant CPUs for Unix-based workstations, including Sun and Apollo/Domain workstations. The 68000 was also used for mass-market computers such as the Apple Lisa, Macintosh, and Atari ST, as well as in Microsoft Xenix systems and an early Unix-based NetWare server. The 68000 was used in the first generation of desktop laser printers, including the original Apple LaserWriter and the HP LaserJet. In 1982, the 68000 received an update to its ISA allowing it to support virtual memory and to conform to the Popek and Goldberg virtualization requirements; the updated chip was called the 68010. A further extended version, which exposed 31 bits of the address bus, was produced in small quantities as the 68012. To support lower-cost systems and control applications with smaller memory sizes, Motorola introduced the 8-bit-compatible MC68008 in 1982.
This was a 68000 with a smaller address bus. After 1982, Motorola devoted more attention to the 88000 project. Several other companies were second-source manufacturers of the HMOS 68000; these included Hitachi, who shrank the feature size to 2.7 µm for their 12.5 MHz version, as well as Rockwell, Thomson/SGS-Thomson, and Toshiba. Toshiba was also a second-source maker of the CMOS 68HC000. Encrypted variants of the 68000, namely the Hitachi FD1089 and FD1094, store decryption keys for opcodes and opcode data in battery-backed memory; they were used in certain Sega arcade systems, including System 16, to prevent piracy and illegal bootleg games. The 68HC000, the first CMOS version of the 68000, was designed by Hitachi and jointly introduced in 1985. Motorola's version was called the MC68HC000, while Hitachi's was the HD68HC000. The 68HC000 was offered at speeds of 8–20 MHz. Except for using CMOS circuitry, it behaved identically to the HMOS MC68000, but the change to CMOS greatly reduced its power consumption: the original HMOS MC68000 consumed around 1.35 watts at an ambient temperature of 25 °C, regardless of clock speed, while the MC68HC000 consumed only 0.13 watts at 8 MHz and 0.38 watts at 20 MHz.
Apple selected the 68HC000 for use in the Macintosh Portable. Motorola replaced the MC68008 with the MC68HC001 in 1990; this chip resembled the 68HC000 in most respects, but its data bus could operate in either 16-bit or 8-bit mode, depending on the value of an input pin at reset. Thus, like the 68008, it could be used in systems with cheaper 8-bit memories. The later evolution of the 68000 focused on more modern embedded control applications and on-chip peripherals. The 68EC000 chip and SCM68000 core expanded the address bus to 32 bits, removed the M6800 peripheral bus, and excluded the MOVE from SR instruction from user-mode programs. In 1996, Motorola updated the standalone core with static circuitry, drawing only 2 µW in low-power mode.
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation, so as to support a trade-off between range and precision. For this reason, floating-point computation is often found in systems which include very small and very large real numbers, and which require fast processing times. A number is, in general, represented approximately to a fixed number of significant digits and scaled using an exponent in some fixed base. A number that can be represented exactly is of the following form: significand × base^exponent, where the significand is an integer, the base is an integer greater than or equal to two, and the exponent is an integer. For example, 1.2345 = 12345 × 10^−4, with significand 12345, base 10, and exponent −4. The term floating point refers to the fact that a number's radix point can "float"; its position is indicated by the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A floating-point system can be used to represent, with a fixed number of digits, numbers of very different orders of magnitude: e.g. the distance between galaxies or the diameter of an atomic nucleus can be expressed with the same unit of length.
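The "floating" of the radix point can be demonstrated directly: keeping the same significand digits and varying only the exponent moves the represented value across many orders of magnitude. A small Python sketch (base 10, as in the worked example above):

```python
# Same significand digits, different exponents: the radix point "floats".
significand = 12345
base = 10

for exponent in (-4, -1, 0, 3):
    value = significand * base ** exponent
    print(f"{significand} x {base}^{exponent} = {value}")
```

With exponent −4 this reproduces the value 1.2345 from the text (up to binary floating-point rounding), while exponent 3 yields 12,345,000: four digit positions of shift either way from the same five stored digits.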
The result of this dynamic range is that the numbers that can be represented are not uniformly spaced. Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s the most commonly encountered representations are those defined by the IEEE. The speed of floating-point operations, measured in terms of FLOPS, is an important characteristic of a computer system for applications that involve intensive mathematical calculations. A floating-point unit is a part of a computer system specially designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number as a string of digits, and there are several mechanisms for doing so. In common mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character there. If the radix point is not specified, the string implicitly represents an integer and the unstated radix point would be off the right-hand end of the string, next to the least significant digit.
In fixed-point systems, a position in the string is specified for the radix point. So a fixed-point scheme might use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345. In scientific notation, the given number is scaled by a power of 10 so that it lies within a certain range—typically between 1 and 10, with the radix point appearing after the first digit. The scaling factor, as a power of ten, is then indicated separately at the end of the number. For example, the orbital period of Jupiter's moon Io is 152,853.5047 seconds, a value that would be represented in standard-form scientific notation as 1.528535047×10^5 seconds. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of a signed digit string of a given length in a given base; this digit string is referred to as the significand, mantissa, or coefficient. The length of the significand determines the precision. The radix point position is assumed always to be somewhere within the significand—often just after or just before the most significant digit, or to the right of the rightmost digit.
This article follows the convention that the radix point is set just after the most significant digit. A signed integer exponent completes the representation. To derive the value of the floating-point number, the significand is multiplied by the base raised to the power of the exponent; this is equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent—to the right if the exponent is positive or to the left if the exponent is negative. Using base 10 as an example, the number 152,853.5047, which has ten decimal digits of precision, is represented as the significand 1,528,535,047 together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 10^5 to give 1.528535047×10^5, or 152,853.5047. In storing such a number, the base need not be stored, since it will be the same for the entire range of supported numbers and can thus be inferred. Symbolically, this final value is s / b^(p−1) × b^e, where s is the significand (ignoring any implied radix point), p is the precision (the number of digits in the significand), b is the base, and e is the exponent.
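Under the convention just described (radix point just after the most significant digit), the value can be recovered from an integer significand and an exponent. The following Python sketch is a didactic decoder, not any standard's encoding; the function name is arbitrary:

```python
# Decode a value from (significand, exponent) with the radix point placed
# just after the most significant digit: value = s / b**(p - 1) * b**e,
# where p is the number of digits in the significand s and b is the base.

def decode(significand, exponent, base=10):
    precision = len(str(abs(significand)))     # p: digits in the significand
    return significand / base ** (precision - 1) * base ** exponent

# The worked example from the text: significand 1,528,535,047, exponent 5.
io_period = decode(1_528_535_047, 5)
print(io_period)
```

This reproduces Io's orbital period of about 152,853.5047 seconds: the division by 10^9 places the decimal point after the leading 1, and the multiplication by 10^5 shifts it five places right.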
The Amiga is a family of personal computers introduced by Commodore in 1985. The original model was part of a wave of 16- and 32-bit computers that featured 256 KB or more of RAM, mouse-based GUIs, and improved graphics and audio over 8-bit systems; this wave included the Atari ST—released the same year—Apple's Macintosh, and later the Apple IIGS. Based on the Motorola 68000 microprocessor, the Amiga differed from its contemporaries through the inclusion of custom hardware to accelerate graphics and sound, including sprites and a blitter, and a pre-emptive multitasking operating system called AmigaOS. The Amiga 1000 was released in July 1985, but a series of production problems kept it from becoming widely available until early 1986. The best-selling model, the Amiga 500, was introduced in 1987 and became one of the leading home computers of the late 1980s and early 1990s, with four to six million sold. The A3000, introduced in 1990, started the second generation of Amiga systems, followed by the A500+ and the A600 in March 1992.
As the third generation, the A1200 and the A4000 were released in late 1992. The platform became popular for gaming and for programming demos, and it found a prominent role in the desktop video, video production, and show control businesses, leading to video editing systems such as the Video Toaster. The Amiga's native ability to play back multiple digital sound samples made it a popular platform for early tracker music software. The powerful processor and the ability to access several megabytes of memory enabled the development of several 3D rendering packages, including LightWave 3D, Aladdin4D, TurboSilver, and Traces, a predecessor to Blender. Although early Commodore advertisements attempted to cast the computer as an all-purpose business machine when outfitted with the Amiga Sidecar PC compatibility add-on, the Amiga was most commercially successful as a home computer, with a wide range of games and creative software. Poor marketing and the failure of later models to repeat the technological advances of the first systems meant that the Amiga lost its market share to competing platforms, such as the fourth-generation game consoles and ever-cheaper IBM PC compatibles, which gained 256-color VGA graphics in 1987.
Commodore went bankrupt in April 1994 after the Amiga CD32 model failed in the marketplace. Since the demise of Commodore, various groups have marketed successors to the original Amiga line, including Genesi, Eyetech, ACube Systems Srl, and A-EON Technology. AmigaOS has influenced replacements and compatible systems such as MorphOS, AmigaOS 4, and AROS. "The Amiga was so far ahead of its time that nobody—including Commodore's marketing department—could articulate what it was all about. Today, it's obvious the Amiga was the first multimedia computer, but in those days it was derided as a game machine because few people grasped the importance of advanced graphics and video. Nine years later, vendors are still struggling to make systems that work like 1985 Amigas." Jay Miner joined Atari in the 1970s to develop custom integrated circuits and led development of the Atari 2600's TIA. As soon as its development was complete, the team began developing a much more sophisticated set of chips, CTIA, ANTIC, and POKEY, that formed the basis of the Atari 8-bit family.
With the 8-bit line's launch in 1979, the team once again started looking at a next-generation chipset. But Nolan Bushnell had sold the company to Warner Communications in 1978, and the new management was much more interested in the existing lines than in development of new products that might cut into their sales. Miner wanted to start work with the new Motorola 68000, but management was only interested in another 6502-based system. Miner left the company and, for a time, the industry. In 1979, Larry Kaplan had co-founded Activision. In 1982, Kaplan was approached by a number of investors, and he hired Miner to run the hardware side of the newly formed company, "Hi-Toro". The system was code-named "Lorraine" in keeping with Miner's policy of giving systems female names, in this case after the company president's wife, Lorraine Morse. When Kaplan left the company late in 1982, Miner was promoted to head engineer and the company relaunched as Amiga Corporation. A breadboard prototype was completed by late 1983 and shown at the January 1984 Consumer Electronics Show.
At the time, the operating system was not ready, so the machine was demonstrated with the Boing Ball demo. A further developed version of the system was demonstrated at the June 1984 CES and shown to many companies in hopes of garnering further funding, but found little interest in a market that was in the final stages of the North American video game crash of 1983. In March, Atari expressed tepid interest in Lorraine for its potential use in a games console or home computer tentatively known as the 1850XLD, but the talks progressed slowly while Amiga was running out of money. A temporary arrangement in June led to a $500,000 loan from Atari to Amiga to keep the company going; the terms required the loan to be repaid at the end of the month, otherwise Amiga would forfeit the Lorraine design to Atari. During 1983, Atari had lost over $1 million a week due to the combined effects of the crash and the ongoing price war in the home computer market. By the end of the year, Warner was desperate to sell the company.
In January 1984, Jack Tramiel resigned from Commodore due to internal battles over the future direction of the company. A number of Commodore employees, including many of the senior technical staff, followed him to Tramiel Technology, where they began development of a 68000-based machine of their own.
Digital signal processor
A digital signal processor (DSP) is a specialized microprocessor with its architecture optimized for the operational needs of digital signal processing. The goal of a DSP is to measure, filter, or compress continuous real-world analog signals. Most general-purpose microprocessors can execute digital signal processing algorithms but may not be able to keep up with such processing continuously in real time. Dedicated DSPs also have better power efficiency, and thus they are more suitable in portable devices such as mobile phones because of power consumption constraints. DSPs often use special memory architectures that are able to fetch multiple data items or instructions at the same time. Digital signal processing algorithms typically require a large number of mathematical operations to be performed quickly and repeatedly on a series of data samples. Signals are converted from analog to digital form, manipulated digitally, and converted back to analog form. Many DSP applications have constraints on latency. Most general-purpose microprocessors and operating systems can execute DSP algorithms but are not suitable for use in portable devices such as mobile phones and PDAs because of power efficiency constraints.
A specialized digital signal processor will tend to provide a lower-cost solution, with better performance, lower latency, and no requirement for specialized cooling or large batteries. Such performance improvements have led to the introduction of digital signal processing in commercial communications satellites, where hundreds or thousands of analog filters, frequency converters and other components, required to receive and process the uplinked signals and ready them for downlinking, can be replaced with specialized DSPs, with significant benefits to the satellites' weight, power consumption, complexity and cost of construction, and flexibility of operation. For example, the SES-12 and SES-14 satellites from operator SES, both intended for launch in 2017, were built by Airbus Defence and Space with 25% of capacity using DSP. The architecture of a digital signal processor is optimized for digital signal processing, but most also support some of the features of an applications processor or microcontroller, since signal processing is rarely the only task of a system.
Some useful features for optimizing DSP algorithms are outlined below. By the standards of general-purpose processors, DSP instruction sets are highly irregular. Both traditional and DSP-optimized instruction sets are able to compute any arbitrary operation, but an operation that might require multiple ARM or x86 instructions might require only one instruction in a DSP-optimized instruction set. One implication for software architecture is that hand-optimized assembly-code routines are packaged into libraries for re-use, instead of relying on advanced compiler technologies to handle essential algorithms. Even with modern compiler optimizations, hand-optimized assembly code is more efficient, and many common algorithms involved in DSP calculations are hand-written in order to take full advantage of the architectural optimizations. Instruction-set features commonly found in DSPs include:
- Multiply–accumulate (MAC) operations, used extensively in all kinds of matrix operations: convolution for filtering, dot products, and polynomial evaluation. Fundamental DSP algorithms, such as FIR filters and the fast Fourier transform (FFT), depend heavily on multiply–accumulate performance.
- Instructions to increase parallelism: SIMD, VLIW, and superscalar architecture.
- Specialized instructions for modulo addressing in ring buffers, and a bit-reversed addressing mode for FFT cross-referencing.
- Time-stationary encoding, sometimes used to simplify hardware and increase coding efficiency.
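Modulo addressing is what lets a DSP maintain a ring buffer (for example, a filter delay line) with zero-cost wrap-around: the address-generation hardware applies the modulo on every access. On a general-purpose CPU the same effect can be emulated; the sketch below is illustrative (the type and function names are assumptions, not a real API) and uses a power-of-two length so the modulo reduces to a bitwise mask.

```c
#include <stddef.h>

#define DELAY_LEN 8u               /* power of two: modulo becomes a mask */

/* Circular delay line; names are hypothetical, for illustration only. */
typedef struct {
    float data[DELAY_LEN];
    size_t head;                   /* index of the most recent sample */
} delay_line;

/* Push a new sample; the index wraps via masking, emulating in software
 * the modulo addressing a DSP's address-generation unit does for free. */
void delay_push(delay_line *d, float x)
{
    d->head = (d->head + 1) & (DELAY_LEN - 1);
    d->data[d->head] = x;
}

/* Read the sample from `samples_back` steps ago (0 = newest). Unsigned
 * wrap-around plus the mask keeps the index in range. */
float delay_tap(const delay_line *d, size_t samples_back)
{
    return d->data[(d->head - samples_back) & (DELAY_LEN - 1)];
}
```

On a real DSP, both the pointer update and the wrap happen implicitly as a side effect of the load or store, so the inner loop of a filter spends no instructions on index bookkeeping.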
Other architectural features commonly found in DSPs include:
- Multiple arithmetic units, which may require memory architectures that support several accesses per instruction cycle.
- Special loop controls, such as architectural support for executing a few instruction words in a tight loop without overhead for instruction fetches or exit testing.
- Saturation arithmetic, in which operations that produce overflows accumulate at the maximum values that the register can hold rather than wrapping around. Sometimes various sticky-bit operation modes are available.
- Fixed-point arithmetic, used to speed up arithmetic processing.
- Single-cycle operations, to increase the benefits of pipelining.
- A floating-point unit integrated directly into the datapath.
- Pipelined architecture.
- Highly parallel multiplier–accumulators.
- Hardware-controlled looping, to reduce or eliminate the overhead required for looping operations.

In engineering, hardware architecture refers to the identification of a system's physical components and their interrelationships. This description, called a hardware design model, allows hardware designers to understand how their components fit into a system architecture and provides to software component designers important information needed for software development and integration.
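Saturation arithmetic, listed among the DSP features above, can be emulated on a general-purpose CPU. The sketch below is a minimal C illustration with a hypothetical helper name; on a real DSP the clamping is a single instruction or an automatic mode of the accumulator, and for audio it turns harsh wrap-around glitches into milder clipping.

```c
#include <stdint.h>

/* 16-bit saturating addition (illustrative, not a specific DSP
 * intrinsic). Plain integer addition wraps on overflow; saturation
 * arithmetic instead clamps at the register's extreme values. */
int16_t sat_add16(int16_t a, int16_t b)
{
    int32_t sum = (int32_t)a + (int32_t)b;   /* widened: cannot overflow */
    if (sum > INT16_MAX) return INT16_MAX;   /* clamp positive overflow */
    if (sum < INT16_MIN) return INT16_MIN;   /* clamp negative overflow */
    return (int16_t)sum;
}
```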
Clear definition of a hardware architecture allows the various traditional engineering disciplines to work together more effectively to develop and manufacture new machines and components. Hardware is als