Computing is any activity that uses computers. It includes developing hardware and software, and using computers to manage and process information and for entertainment. Computing is a critically important, integral component of modern industrial technology. Major computing disciplines include computer engineering, software engineering, computer science, information systems, and information technology. The ACM Computing Curricula 2005 defined "computing" as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers. Thus, computing includes designing and building hardware and software systems for a wide range of purposes; the list is endless, the possibilities are vast." It also defines five sub-disciplines of the computing field: computer science, computer engineering, information systems, information technology, and software engineering. However, Computing Curricula 2005 recognizes that the meaning of "computing" depends on the context in which the term is used, and that it has other, more specific meanings.
For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult; because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The term "computing" has sometimes been defined more narrowly, as in a 1989 ACM report on Computing as a Discipline: the discipline of computing is the systematic study of algorithmic processes that describe and transform information, including their theory, design, efficiency, and application, and the fundamental question underlying all computing is "What can be automated?" The term "computing" is also synonymous with counting and calculating. In earlier times it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. The history of computing is longer than the history of computing hardware and modern computing technology; it includes the history of methods intended for pen and paper or for chalk and slate, with or without the aid of tables.
Computing is intimately tied to the representation of numbers. But long before abstractions like the number arose, there were mathematical concepts serving the purposes of civilization; these concepts include one-to-one correspondence, comparison to a standard, and the 3-4-5 right triangle. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC; its original style of usage was lines drawn in sand with pebbles. Abaci of a more modern design are still used as calculation tools today; the abacus was the first known calculation aid, preceding Greek methods by 2,000 years. The first recorded idea of using digital electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program.
The program has an executable form that the computer can use directly to carry out the instructions. The same program in its human-readable source code form enables a programmer to study and develop a sequence of steps known as an algorithm. Because the instructions can be carried out on different types of computers, a single set of source instructions is converted to machine instructions according to the type of central processing unit. The execution process carries out the instructions in a computer program; each instruction triggers sequences of simple actions on the executing machine, and those actions produce effects according to the semantics of the instructions. Computer software, or just "software", is a collection of computer programs and related data that provides the instructions telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer for some purpose. In other words, software is a set of programs, procedures, and associated documentation concerned with the operation of a data processing system.
Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software. The term was coined to contrast with the older term hardware; in contrast to hardware, software is intangible. Software is sometimes used in a more narrow sense, meaning application software only. Application software, also known as an "application" or an "app", is computer software designed to help the user perform specific tasks. Examples include enterprise software, accounting software, office suites, graphics software, and media players. Many application programs deal principally with documents. Apps may be bundled with the computer and its system software or published separately, so some users never need to install one. Application software is contrasted with system software and middleware, which manage and integrate a computer's capabilities but typically do not directly perform tasks that benefit the user.
Color, or colour, is the characteristic of human visual perception described through color categories, with names such as red, yellow, blue, or purple. This perception of color derives from the stimulation of cone cells in the human eye by electromagnetic radiation in the visible spectrum. Color categories and physical specifications of color are associated with objects through the wavelengths of the light reflected from them; this reflection is governed by the object's physical properties such as light absorption, emission spectra, etc. By defining a color space, colors can be identified numerically by coordinates, which in 1931 were also given internationally agreed names, like those mentioned above, by the International Commission on Illumination. The RGB color space, for instance, is a color space corresponding to human trichromacy and to the three cone cell types that respond to three bands of light: long, medium, and short wavelengths, with the long-wavelength cones peaking near 564–580 nm. There may be more than three color dimensions in other color spaces, such as in the CMYK color model, wherein one of the dimensions relates to a color's colorfulness.
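As a small illustration of identifying colors by coordinates, the sketch below (in C++, with purely illustrative 8-bit sRGB values that are assumptions for the example, not taken from the text) represents named colors as numeric triples:

    #include <cstdint>

    // Colors as numeric coordinates in an 8-bit RGB encoding (illustrative values).
    struct Rgb { std::uint8_t r, g, b; };

    const Rgb red    {255,   0,   0};
    const Rgb yellow {255, 255,   0};
    const Rgb blue   {  0,   0, 255};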
The photo-receptivity of the "eyes" of other species varies from that of humans and so results in correspondingly different color perceptions that cannot be directly compared to one another. Honeybees and bumblebees, for instance, have trichromatic color vision that is sensitive to ultraviolet but insensitive to red. Papilio butterflies may have pentachromatic vision. The most complex color vision system in the animal kingdom has been found in stomatopods, with up to 12 spectral receptor types thought to work as multiple dichromatic units. The science of color is sometimes called chromatics, colorimetry, or simply color science; it includes the study of the perception of color by the human eye and brain, the origin of color in materials, color theory in art, and the physics of electromagnetic radiation in the visible range. Electromagnetic radiation is characterized by its wavelength (or frequency) and its intensity; when the wavelength is within the visible spectrum, it is known as "visible light". Most light sources emit light at many different wavelengths.
Although the spectrum of light arriving at the eye from a given direction determines the color sensation in that direction, there are many more possible spectral combinations than color sensations. In fact, one may formally define a color as a class of spectra that give rise to the same color sensation, although such classes vary among different species and, to a lesser extent, among individuals within the same species. In each such class the members are called metamers of the color in question. The familiar colors of the rainbow in the spectrum, which Isaac Newton named in 1671 using the Latin word for appearance or apparition, include all those colors that can be produced by visible light of a single wavelength only, the pure spectral or monochromatic colors. Approximate wavelengths for the various pure spectral colors are commonly tabulated, with the wavelengths given as measured in vacuum. Such a table should not be interpreted as a definitive list: the pure spectral colors form a continuous spectrum, and how it is divided into distinct colors linguistically is a matter of culture and historical contingency.
A common list identifies six main bands: red, orange, yellow, green, blue, and violet. Newton's conception included a seventh color, indigo, between blue and violet. It is possible that what Newton referred to as blue is nearer to what today is known as cyan, and that indigo was simply the dark blue of the indigo dye that was being imported at the time. The intensity of a spectral color, relative to the context in which it is viewed, may alter its perception considerably. The color of an object depends on both the physics of the object in its environment and the characteristics of the perceiving eye and brain. Physically, objects can be said to have the color of the light leaving their surfaces, which depends on the spectrum of the incident illumination and the reflectance properties of the surface, as well as on the angles of illumination and viewing. Some objects not only reflect light but also transmit light or emit light themselves, which contributes to the color. A viewer's perception of the object's color depends not only on the spectrum of the light leaving its surface, but also on a host of contextual cues, so that color differences between objects can be discerned largely independent of the lighting spectrum, viewing angle, etc.
This effect is known as color constancy. Some generalizations of the physics can be drawn, neglecting perceptual effects for now: light arriving at an opaque surface is either reflected "specularly", scattered, or absorbed, or some combination of these. Opaque objects that do not reflect specularly have their color determined by which wavelengths of light they scatter strongly. If objects scatter all wavelengths with roughly equal strength, they appear white.
Hypertext Transfer Protocol
The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can access, for example by a mouse click or by tapping the screen in a web browser. HTTP was developed to facilitate the World Wide Web. Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Development of HTTP standards was coordinated by the Internet Engineering Task Force and the World Wide Web Consortium, culminating in the publication of a series of Requests for Comments (RFCs). The first definition of HTTP/1.1, the version of HTTP in common use, occurred in RFC 2068 in 1997, although this was made obsolete by RFC 2616 in 1999 and again by the RFC 7230 family of RFCs in 2014. A later version, the successor HTTP/2, was standardized in 2015 and is now supported by major web servers and browsers over Transport Layer Security (TLS) using the Application-Layer Protocol Negotiation extension, where TLS 1.2 or newer is required.
HTTP functions as a request–response protocol in the client–server computing model. A web browser, for example, may be the client, and an application running on a computer hosting a website may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content, or performs other functions on behalf of the client, returns a response message to the client; the response contains completion status information about the request and may contain requested content in its message body. A web browser is an example of a user agent. Other types of user agent include the indexing software used by search providers, voice browsers, mobile apps, and other software that accesses, consumes, or displays web content. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of upstream servers to improve response time.
Web browsers cache previously accessed web resources and reuse them, when possible, to reduce network traffic. HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address by relaying messages with external servers. HTTP is an application layer protocol designed within the framework of the Internet protocol suite. Its definition presumes an underlying, reliable transport layer protocol, and Transmission Control Protocol (TCP) is commonly used. However, HTTP can be adapted to use unreliable protocols such as the User Datagram Protocol (UDP), for example in HTTPU and the Simple Service Discovery Protocol. HTTP resources are identified and located on the network by Uniform Resource Locators (URLs), using the Uniform Resource Identifier (URI) schemes http and https. URIs and hyperlinks in HTML documents form interlinked hypertext documents. HTTP/1.1 is a revision of the original HTTP. In HTTP/1.0 a separate connection to the same server is made for every resource request. HTTP/1.1 can reuse a connection multiple times, for example to download images and stylesheets after the page has been delivered.
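As a minimal illustration, with a hypothetical host and resource and only the essential headers shown, a single HTTP/1.1 request and response might look like this:

    GET /index.html HTTP/1.1
    Host: www.example.com

    HTTP/1.1 200 OK
    Content-Type: text/html; charset=UTF-8
    Content-Length: 48

    <html><body><h1>Hello, world!</h1></body></html>

Because HTTP/1.1 connections are persistent by default, the same TCP connection can then be reused for subsequent requests, for example for images and stylesheets, unless one side sends a Connection: close header.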
HTTP/1.1 communications therefore experience less latency, as the establishment of TCP connections presents considerable overhead. The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think". Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a text-based web browser. Berners-Lee first proposed the "WorldWideWeb" project in 1989, now known as the World Wide Web. The first version of the protocol had only one method, namely GET, which would request a page from a server; the response from the server was always an HTML page. The first documented version of HTTP was HTTP V0.9. Dave Raggett led the HTTP Working Group in 1995 and wanted to expand the protocol with extended operations, extended negotiation, richer meta-information, and a tie-in with a security protocol, made more efficient by adding additional methods and header fields.
RFC 1945 introduced and recognized HTTP V1.0 in 1996. The HTTP WG planned to publish new standards in December 1995, and support for pre-standard HTTP/1.1, based on the then-developing RFC 2068, was adopted by the major browser developers in early 1996. By March of that year, pre-standard HTTP/1.1 was supported in Arena, Netscape 2.0, Netscape Navigator Gold 2.01, Mosaic 2.7, Lynx 2.5, and Internet Explorer 2.0. End-user adoption of the new browsers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet were HTTP/1.1 compliant; the same company reported that by June 1996, 65% of all browsers accessing their servers were HTTP/1.1 compliant. The HTTP/1.1 standard as defined in RFC 2068 was released in January 1997. Improvements and updates to the HTTP/1.1 standard were released under RFC 2616 in June 1999. In 2007, the HTTPbis Working Group was formed, in part, to revise and clarify the HTTP/1.1 specification. In June 2014, the WG released an updated six-part specification obsoleting RFC 2616: RFC 7230 (HTTP/1.1: Message Syntax and Routing), RFC 7231 (HTTP/1.1: Semantics and Content), RFC 7232 (HTTP/1.1: Conditional Requests), RFC 7233 (HTTP/1.1: Range Requests), RFC 7234 (HTTP/1.1: Caching), and RFC 7235 (HTTP/1.1: Authentication).
Classic RISC pipeline
In the history of computer hardware, some early reduced instruction set computer (RISC) central processing units used a very similar architectural solution, now called a classic RISC pipeline. Those CPUs were MIPS, SPARC, the Motorola 88000, and the notional CPU DLX invented for education. Each of these classic scalar RISC designs tried to execute one instruction per cycle. The main common concept of each design was a five-stage execution instruction pipeline. During operation, each pipeline stage worked on one instruction at a time; each of these stages consisted of an initial set of flip-flops and combinational logic that operated on the outputs of those flip-flops. The instruction cache on these machines had a latency of one cycle, meaning that if the instruction was in the cache, it would be ready on the next clock cycle. During the Instruction Fetch stage, a 32-bit instruction was fetched from the cache. The Program Counter, or PC, is a register that holds the address of the current instruction; it feeds into the PC predictor, which sends the Program Counter to the Instruction Cache to read the current instruction.
At the same time, the PC predictor predicts the address of the next instruction by incrementing the PC by 4 (all instructions were 4 bytes long). This prediction was always wrong in the case of a taken jump or an exception. Later machines would use more complicated and accurate algorithms to guess the next instruction address. Unlike earlier microcoded machines, the first RISC machines had no microcode. Once fetched from the instruction cache, the instruction bits were shifted down the pipeline so that simple combinational logic in each pipeline stage could produce the control signals for the datapath directly from the instruction bits; as a result, little decoding was done in the stage traditionally called the decode stage. A consequence of this lack of decoding was that more instruction bits had to be used to specify what the instruction should do, leaving fewer bits for things like register indices. All MIPS, SPARC, and DLX instructions have at most two register inputs. During the decode stage, these two register names are identified within the instruction, and the two registers named are read from the register file.
In the MIPS design, the register file had 32 entries. At the same time the register file was read, instruction issue logic in this stage determined whether the pipeline was ready to execute the instruction in this stage. If not, the issue logic would cause both the Instruction Fetch and Decode stages to stall: on a stall cycle, these stages would prevent their initial flip-flops from accepting new bits. If the instruction decoded was a branch or jump, the target address of the branch or jump was computed in parallel with reading the register file. The branch condition is computed after the register file is read, and if the branch is taken or the instruction is a jump, the PC predictor in the first stage is assigned the branch target rather than the incremented PC that was computed. Some architectures made use of the ALU in the Execute stage instead, at the cost of decreased instruction throughput. The decode stage ended up with quite a lot of hardware: MIPS had the possibility of branching if two registers were equal, so a 32-bit-wide AND tree ran in series after the register file read, making a long critical path through this stage.
The branch target computation required a 16-bit adder and a 14-bit incrementer. Resolving the branch in the decode stage made it possible to have just a single-cycle branch mispredict penalty; since branches were often taken, it was important to keep this penalty low. The Execute stage is where the actual computation occurs. This stage consists of an arithmetic and logic unit (ALU) and a bit shifter; it may also include a multiple-cycle multiplier and divider. The ALU is responsible for performing Boolean operations and for performing integer addition and subtraction. Besides the result, the ALU provides status bits such as whether or not the result was 0, or whether an overflow occurred. The bit shifter is responsible for shifts and rotations. Instructions on these simple RISC machines can be divided into three latency classes according to the type of operation. Register-register operations: add, subtract, and logical operations. During the execute stage, the two arguments were fed to a simple ALU, which generated the result by the end of the execute stage.
Memory reference: all loads from memory. During the execute stage, the ALU added the two arguments to produce a virtual address by the end of the cycle. Multi-cycle instructions: integer multiply and divide and all floating-point operations. During the execute stage, the operands to these operations were fed to the multi-cycle multiply/divide unit; the rest of the pipeline was free to continue execution. To avoid complicating the writeback stage and issue logic, multi-cycle instructions wrote their results to a separate set of registers. If data memory needs to be accessed, this is done in the Memory Access stage. During this stage, single-cycle latency instructions have their results forwarded to the next stage; this forwarding ensures that both one- and two-cycle instructions always write their results in the same stage of the pipeline, so that just one write port to the register file is needed, and it is always available. For direct-mapped and tagged data caching
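To make the stage-by-stage flow concrete, the following small C++ sketch (a toy model, not tied to any particular ISA, and assuming no stalls, branches, or cache misses) prints which instruction occupies each of the five stages on each cycle, showing that one instruction completes per cycle once the pipeline is full:

    #include <cstdio>

    int main() {
        const char* stages[5] = {"IF", "ID", "EX", "MEM", "WB"};
        const int num_instructions = 7;               // instructions i0 .. i6

        // Each cycle, instruction (cycle - s) sits in stage s; the pipeline
        // drains after num_instructions + 4 cycles.
        for (int cycle = 0; cycle < num_instructions + 4; ++cycle) {
            std::printf("cycle %2d:", cycle + 1);
            for (int s = 0; s < 5; ++s) {
                int instr = cycle - s;
                if (instr >= 0 && instr < num_instructions)
                    std::printf("  %s=i%d", stages[s], instr);
            }
            std::printf("\n");
        }
        return 0;
    }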
FIFO (computing and electronics)
FIFO is an acronym for first in, first out, a method for organising and manipulating a data buffer in which the oldest entry, or "head" of the queue, is processed first. It is analogous to processing a queue with first-come, first-served (FCFS) behaviour, where people leave the queue in the order in which they arrive. FCFS is also the jargon term for the FIFO operating system scheduling algorithm, which gives every process central processing unit time in the order in which it is demanded. FIFO's opposite is LIFO, last in, first out, where the youngest entry, or "top of the stack", is processed first. A priority queue may adopt similar behaviour temporarily or by default. Queueing theory encompasses these methods for processing data structures, as well as interactions between strict-FIFO queues. Depending on the application, a FIFO could be implemented as a hardware shift register or using different memory structures, typically a circular buffer or a kind of list. For information on the abstract data structure, see Queue.
Most software implementations of a FIFO queue are not thread safe and require a locking mechanism to ensure the data structure chain is being manipulated by only one thread at a time. The following code shows a linked-list FIFO C++ implementation. In practice, a number of list implementations exist, including popular Unix systems' C sys/queue.h macros or the C++ standard library std::list template, avoiding the need for implementing the data structure from scratch. The ends of a FIFO queue are referred to as head and tail. A controversy exists regarding those terms: to many people, items should enter a queue at the tail, remain in the queue until they reach the head, and leave the queue from there; this point of view is justified by analogy with queues of people waiting for some kind of service, and parallels the use of front and back in the above example. Other people believe that items enter a queue at the head and leave at the tail, in the manner of food passing through a snake; queues written in that way appear in places that could be considered authoritative, such as the Linux operating system.
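A minimal sketch of such an implementation (simplified and not thread safe, with int payloads assumed for brevity rather than a template) might look like this:

    #include <stdexcept>

    // Singly linked list FIFO: enqueue at the tail, dequeue from the head.
    class Fifo {
        struct Node { int value; Node* next; };
        Node* head = nullptr;   // oldest element, removed first
        Node* tail = nullptr;   // newest element, added last
    public:
        ~Fifo() { while (head) { Node* n = head; head = head->next; delete n; } }
        void enqueue(int v) {
            Node* n = new Node{v, nullptr};
            if (tail) tail->next = n; else head = n;
            tail = n;
        }
        int dequeue() {
            if (!head) throw std::runtime_error("FIFO is empty");
            Node* n = head;
            int v = n->value;
            head = n->next;
            if (!head) tail = nullptr;
            delete n;
            return v;
        }
        bool empty() const { return head == nullptr; }
    };

In production code, std::queue (or std::list, as noted above) would normally be preferred, and a mutex would be needed for thread safety.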
In computing environments that support the pipes-and-filters model for interprocess communication, a FIFO is another name for a named pipe. Disk controllers can use FIFO as a disk scheduling algorithm to determine the order in which to service disk I/O requests. Communication network bridges and routers used in computer networks use FIFOs to hold data packets en route to their next destination. At least one FIFO structure is used per network connection; some devices feature multiple FIFOs for simultaneously and independently queuing different types of information. FIFOs are also used in electronic circuits for buffering and flow control between hardware and software. In its hardware form, a FIFO consists of a set of read and write pointers and control logic. Storage may be static random access memory (SRAM), flip-flops, latches, or any other suitable form of storage. For FIFOs of non-trivial size, a dual-port SRAM is usually used, where one port is dedicated to writing and the other to reading. A synchronous FIFO is a FIFO where the same clock is used for both reading and writing.
An asynchronous FIFO uses different clocks for reading and writing. Asynchronous FIFOs introduce metastability issues. A common implementation of an asynchronous FIFO uses a Gray code for the read and write pointers to ensure reliable flag generation. One further note concerning flag generation is that one must use pointer arithmetic to generate flags for asynchronous FIFO implementations; conversely, one may use either a leaky-bucket approach or pointer arithmetic to generate flags in synchronous FIFO implementations. Examples of FIFO status flags include full, empty, almost full, almost empty, etc. The first known FIFO implemented in electronics was done by Peter Alfke in 1969 at Fairchild Semiconductor; Alfke was later a director at Xilinx. A hardware FIFO is used for synchronization purposes. It is often implemented as a circular queue, and thus has two pointers: a read pointer (read address register) and a write pointer (write address register). Initially, the read and write addresses are both at the first memory location and the FIFO queue is empty.
FIFO empty: when the read address register reaches the write address register, the FIFO triggers the empty signal. FIFO full: when the write address register reaches the read address register, the FIFO triggers the full signal. In both cases, the read and write addresses end up being equal. To distinguish between the two situations, a simple and robust solution is to add one extra bit to each read and write address, inverted each time the address wraps. With this setup, the disambiguation conditions are: when the read address register equals the write address register (including the extra bits), the FIFO is empty; when the read and write address LSBs are equal and the extra MSBs are different, the FIFO is full.
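A sketch of this extra-bit scheme in C++ (with an illustrative power-of-two depth of 8; the names and constants are assumptions for the example, not from the text) is shown below:

    #include <cstdint>

    constexpr unsigned kDepth    = 8;          // FIFO depth, a power of two
    constexpr unsigned kAddrMask = kDepth - 1; // low bits address the storage
    // Each pointer carries one extra MSB (bit value kDepth) that is inverted
    // on every wrap of the address.

    bool fifo_empty(std::uint8_t read_ptr, std::uint8_t write_ptr) {
        // Empty: the pointers are equal, extra bit included.
        return read_ptr == write_ptr;
    }

    bool fifo_full(std::uint8_t read_ptr, std::uint8_t write_ptr) {
        // Full: the address bits match but the extra wrap bits differ.
        return (read_ptr & kAddrMask) == (write_ptr & kAddrMask) &&
               (read_ptr & kDepth) != (write_ptr & kDepth);
    }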
An assembly line is a manufacturing process in which parts are added in sequence as the semi-finished assembly moves from workstation to workstation until the final assembly is produced. By mechanically moving the parts to the assembly work and moving the semi-finished assembly from workstation to workstation, a finished product can be assembled faster and with less labor than by having workers carry parts to a stationary piece for assembly. Assembly lines are common methods of assembling complex items such as automobiles and other transportation equipment, household appliances, and electronic goods. Assembly lines are designed for the sequential organization of workers, tools or machines, and parts. The motion of workers is minimized to the extent possible. All parts or assemblies are handled either by conveyors or motorized vehicles such as forklifts, or by gravity, with no manual trucking. Heavy lifting is done by machines such as forklifts; each worker performs one simple operation.
According to Henry Ford: The principles of assembly are these: Place the tools and the men in the sequence of the operation so that each component part shall travel the least possible distance while in the process of finishing. Use work slides or some other form of carrier so that when a workman completes his operation, he drops the part always in the same place—which place must always be the most convenient place to his hand—and if possible have gravity carry the part to the next workman for his own. Use sliding assembling lines by which the parts to be assembled are delivered at convenient distances. Consider the assembly of a car: assume that certain steps in the assembly line are to install the engine, install the hood, and install the wheels. In traditional production, only one car would be assembled at a time. If engine installation takes 20 minutes, hood installation takes five minutes, and wheel installation takes 10 minutes, then a car can be produced every 35 minutes. In an assembly line, car assembly is split between several stations.
When one station is finished with a car, it passes it on to the next. By having three stations, three different cars can be operated on at the same time, each one at a different stage of assembly. After finishing its work on the first car, the engine installation crew can begin working on the second car. While the engine installation crew works on the second car, the first car can be moved to the hood station and fitted with a hood, and then to the wheels station and be fitted with wheels. After the engine has been installed on the second car, the second car moves to the hood assembly. At the same time, the third car moves to the engine assembly; when the third car's engine has been mounted, it can then be moved to the hood station. Assuming no loss of time when moving a car from one station to another, the longest stage on the assembly line determines the throughput, so a car can be produced every 20 minutes once the first car, which takes 35 minutes, has been produced.
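A quick back-of-the-envelope check of these numbers, as a small C++ sketch (the stage times and the car count are just the illustrative figures from the example above):

    #include <algorithm>
    #include <cstdio>

    int main() {
        const int engine = 20, hood = 5, wheels = 10;   // minutes per stage
        const int cars = 10;
        const int per_car    = engine + hood + wheels;              // 35 min
        const int bottleneck = std::max({engine, hood, wheels});    // 20 min

        const int one_at_a_time = cars * per_car;                    // 350 min
        const int assembly_line = per_car + (cars - 1) * bottleneck; // 35 + 9*20 = 215 min
        std::printf("one at a time: %d min, assembly line: %d min\n",
                    one_at_a_time, assembly_line);
        return 0;
    }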
Before the Industrial Revolution, most manufactured products were made individually by hand. A single craftsman or team of craftsmen would create each part of a product. They would use their tools, such as files and knives, to create the individual parts, and they would then assemble them into the final product, making cut-and-try changes in the parts until they fit and could work together. Division of labor was practiced in China, where state-run monopolies mass-produced metal agricultural implements, china, and weapons centuries before it appeared in Europe on the eve of the Industrial Revolution. Adam Smith discussed the division of labour in the manufacture of pins at length in his book The Wealth of Nations. The Venetian Arsenal, dating to about 1104, operated similarly to a production line: ships were fitted out by the various shops they passed. At the peak of its efficiency in the early 16th century, the Arsenal employed some 16,000 people who could produce nearly one ship each day and could fit out and provision a newly built galley with standardized parts on an assembly-line basis. Although the Arsenal lasted until the early Industrial Revolution, production line methods did not become common then.
The Industrial Revolution led to a proliferation of invention. Many industries, notably textiles, firearms and watches, horse-drawn vehicles, railway locomotives, sewing machines, and bicycles, saw expeditious improvement in materials handling and assembly during the 19th century, although modern concepts such as industrial engineering and logistics had not yet been named. The automatic flour mill built by Oliver Evans in 1785 has been called the beginning of modern bulk material handling by Roe. Evans's mill used a leather belt bucket elevator, screw conveyors, canvas belt conveyors, and other mechanical devices to automate the process of making flour; the innovation spread to other mills and breweries. The earliest industrial example of a linear and continuous assembly process is the Portsmouth Block Mills, built between 1801 and 1803. Marc Isambard Brunel, with the help of Henry Maudslay and others, designed 22 types of machine tools to make the parts for the rigging blocks used by the Royal Navy. This factory was so successful that it remained in use until the 1960s, with the workshop still visible at HM Dockyard in Portsmouth.
A superscalar processor is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor. In contrast to a scalar processor, which can execute at most one single instruction per clock cycle, a superscalar processor can execute more than one instruction during a clock cycle by dispatching multiple instructions to different execution units on the processor; it therefore allows for more throughput. Each execution unit is not a separate processor but an execution resource within a single CPU, such as an arithmetic logic unit. In Flynn's taxonomy, a single-core superscalar processor is classified as an SISD processor, though a single-core superscalar processor that supports short vector operations could be classified as SIMD. A multi-core superscalar processor is classified as an MIMD processor. While a superscalar CPU is typically also pipelined, superscalar execution and pipelining are considered different performance enhancement techniques: the former executes multiple instructions in parallel by using multiple execution units, whereas the latter executes multiple instructions in the same execution unit in parallel by dividing the execution unit into different phases.
The superscalar technique is traditionally associated with several identifying characteristics: instructions are issued from a sequential instruction stream; the CPU dynamically checks for data dependencies between instructions at run time; and the CPU can execute multiple instructions per clock cycle. Seymour Cray's CDC 6600 from 1966 is often mentioned as the first superscalar design. The 1967 IBM System/360 Model 91 was another superscalar mainframe. The Motorola MC88100, the Intel i960CA, and the AMD 29000-series 29050 microprocessors were the first commercial single-chip superscalar microprocessors. RISC microprocessors like these were the first to have superscalar execution, because RISC architectures free up transistors and die area that can be used to include multiple execution units. Except for CPUs used in low-power applications, embedded systems, and battery-powered devices, essentially all general-purpose CPUs developed since about 1998 are superscalar. The P5 Pentium was the first superscalar x86 processor. The simplest processors are scalar processors.
Each instruction executed by a scalar processor manipulates one or two data items at a time. By contrast, each instruction executed by a vector processor operates on many data items; an analogy is the difference between scalar and vector arithmetic. A superscalar processor is a mixture of the two: each instruction processes one data item, but there are multiple execution units within each CPU, so multiple instructions can be processing separate data items concurrently. Superscalar CPU design emphasizes improving the instruction dispatcher's accuracy, allowing it to keep the multiple execution units in use at all times; this has become increasingly important as the number of units has increased. While early superscalar CPUs would have two ALUs and a single FPU, a later design such as the PowerPC 970 includes four ALUs, two FPUs, and two SIMD units. If the dispatcher is ineffective at keeping all of these units fed with instructions, the performance of the system will be no better than that of a simpler, cheaper design. A superscalar processor sustains an execution rate in excess of one instruction per machine cycle.
But processing multiple instructions concurrently does not alone make an architecture superscalar, since pipelined, multiprocessor, or multi-core architectures also achieve that, though with different methods. In a superscalar CPU the dispatcher reads instructions from memory and decides which ones can be run in parallel, dispatching each to one of the several execution units contained inside a single CPU. Therefore, a superscalar processor can be envisioned as having multiple parallel pipelines, each of which is processing instructions from a single instruction thread. The performance improvement available from superscalar techniques is limited by three key areas: the degree of intrinsic parallelism in the instruction stream; the complexity and time cost of dependency-checking logic and register-renaming circuitry; and the processing of branch instructions. Existing binary executable programs have varying degrees of intrinsic parallelism. In some cases instructions can be executed simultaneously; in other cases they are inter-dependent: one instruction impacts either resources or results of the other.
The instructions a = b + c and d = e + f can be run in parallel, because neither result depends on the other calculation. However, the instructions a = b + c and b = e + f might not be runnable in parallel, depending on the order in which the instructions complete as they move through the units.
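The kind of check the dispatcher must make can be sketched as follows (a toy C++ illustration with a simplified three-register instruction form; the register numbering and names are assumptions for the example, and memory and control dependences are ignored):

    #include <cstdio>

    // dest = src1 op src2
    struct Instr { int dest, src1, src2; };

    // True if `second` cannot safely issue alongside `first`.
    bool depends_on(const Instr& first, const Instr& second) {
        bool raw = second.src1 == first.dest || second.src2 == first.dest; // read-after-write
        bool waw = second.dest == first.dest;                              // write-after-write
        bool war = first.src1 == second.dest || first.src2 == second.dest; // write-after-read
        return raw || waw || war;
    }

    int main() {
        // Registers: a=0, b=1, c=2, d=3, e=4, f=5.
        Instr i1{0, 1, 2};  // a = b + c
        Instr i2{3, 4, 5};  // d = e + f  -> independent of i1
        Instr i3{1, 4, 5};  // b = e + f  -> conflicts with i1, which reads b
        std::printf("i1 vs i2 dependent? %d\n", depends_on(i1, i2));  // prints 0
        std::printf("i1 vs i3 dependent? %d\n", depends_on(i1, i3));  // prints 1
        return 0;
    }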