A microprocessor is a computer processor that incorporates the functions of a central processing unit on a single integrated circuit, or at most a few integrated circuits. The microprocessor is a multipurpose, clock-driven, register-based digital integrated circuit that accepts binary data as input, processes it according to instructions stored in its memory, and provides results as output. Microprocessors contain sequential digital logic and operate on symbols represented in the binary number system. The integration of a whole CPU onto a single chip or a few chips greatly reduced the cost of processing power. Integrated circuit processors are produced in large numbers by automated processes, resulting in a low unit price. Single-chip processors increase reliability because there are many fewer electrical connections that could fail. As microprocessor designs improve, the cost of manufacturing a chip (with smaller components built on a chip of the same size) generally stays the same, according to Rock's law. Before microprocessors, small computers had been built using racks of circuit boards with many medium- and small-scale integrated circuits.
Microprocessors combined all of this into one or a few large-scale ICs. Continued increases in microprocessor capacity have since rendered other forms of computers almost completely obsolete, with one or more microprocessors used in everything from the smallest embedded systems and handheld devices to the largest mainframes and supercomputers. The complexity of an integrated circuit is bounded by physical limitations: the number of transistors that can be put onto one chip, the number of package terminations that can connect the processor to other parts of the system, the number of interconnections it is possible to make on the chip, and the heat that the chip can dissipate. Advancing technology makes more powerful chips feasible to manufacture. A minimal hypothetical microprocessor might include only an arithmetic logic unit (ALU) and a control logic section. The ALU performs addition and logical operations such as AND or OR. Each operation of the ALU sets one or more flags in a status register, which indicate the results of the last operation.
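To make the flag mechanism concrete, here is a minimal C sketch, with hypothetical flag names, of an 8-bit addition that sets zero and carry flags in a status register the way a simple ALU would:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical status flags for an 8-bit ALU (illustrative names). */
enum { FLAG_ZERO = 1 << 0, FLAG_CARRY = 1 << 1 };

/* Add two 8-bit values and update a status register, as an ALU would. */
uint8_t alu_add(uint8_t a, uint8_t b, uint8_t *flags)
{
    uint16_t wide = (uint16_t)a + b;  /* widen to detect carry out of bit 7 */
    uint8_t result = (uint8_t)wide;

    *flags = 0;
    if (result == 0) *flags |= FLAG_ZERO;
    if (wide > 0xFF) *flags |= FLAG_CARRY;
    return result;
}

int main(void)
{
    uint8_t flags;
    uint8_t r = alu_add(200, 100, &flags);  /* 300 overflows 8 bits */
    printf("result=%u zero=%d carry=%d\n",
           r, !!(flags & FLAG_ZERO), !!(flags & FLAG_CARRY));
    return 0;
}
```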
The control logic retrieves instruction codes from memory and initiates the sequence of operations required for the ALU to carry out the instruction. A single operation code might affect many individual data paths and other elements of the processor. As integrated circuit technology advanced, it became feasible to manufacture more and more complex processors on a single chip. The size of data objects became larger, and additional features were added to the processor architecture. Floating-point arithmetic, for example, was often not available on 8-bit microprocessors but had to be carried out in software. Integration of the floating-point unit, first as a separate integrated circuit and later as part of the same microprocessor chip, sped up floating-point calculations. Physical limitations of integrated circuits sometimes made such practices as a bit-slice approach necessary. Instead of processing all of a long word on one integrated circuit, multiple circuits in parallel processed subsets of each data word. While this required extra logic to handle, for example, carry and overflow within each slice, the result was a system that could handle, for example, 32-bit words using integrated circuits with a capacity for only four bits each.
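The carry propagation between slices can be sketched in C; the 4-bit slice width below models the technique in software and is not tied to any particular bit-slice part:

```c
#include <stdint.h>
#include <stdio.h>

/* Add two 32-bit words using only 4-bit "slice" additions, propagating
   the carry from each slice to the next -- the software analogue of a
   bit-slice ALU built from 4-bit parts. */
uint32_t sliced_add(uint32_t a, uint32_t b)
{
    uint32_t result = 0;
    unsigned carry = 0;

    for (int slice = 0; slice < 8; slice++) {       /* 8 slices x 4 bits */
        unsigned shift = slice * 4;
        unsigned sum = ((a >> shift) & 0xF) + ((b >> shift) & 0xF) + carry;
        result |= (uint32_t)(sum & 0xF) << shift;   /* keep low 4 bits   */
        carry = sum >> 4;                           /* carry out         */
    }
    return result;
}

int main(void)
{
    uint32_t a = 0x89ABCDEF, b = 0x12345678;
    printf("%08X (expect %08X)\n", sliced_add(a, b), a + b);
    return 0;
}
```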
The ability to put large numbers of transistors on one chip makes it feasible to integrate memory on the same die as the processor. This CPU cache has the advantage of faster access than off-chip memory and increases the processing speed of the system for many applications. Processor clock frequency has increased more rapidly than external memory speed, so cache memory is necessary if the processor is not to be delayed by slower external memory. A microprocessor is a general-purpose system, and several specialized processing devices have followed from it: a digital signal processor (DSP) is specialized for signal processing; graphics processing units (GPUs) are processors designed for real-time rendering of images; other specialized units exist for video processing and machine vision. Microcontrollers integrate a microprocessor with peripheral devices for embedded systems, and systems on chip integrate one or more microprocessor or microcontroller cores with other components. Microprocessors can be selected for differing applications based on their word size, which is a measure of their complexity.
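As a rough illustration of how a cache locates data quickly, the following C sketch splits an address into tag, index and offset fields for a hypothetical direct-mapped cache of 256 lines of 64 bytes; the sizes are illustrative assumptions:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical direct-mapped cache: 256 lines of 64 bytes (16 KiB).
   An address splits into offset (6 bits), index (8 bits), tag (rest). */
#define LINE_BITS  6
#define INDEX_BITS 8

int main(void)
{
    uint32_t addr = 0xDEADBEEF;
    uint32_t offset = addr & ((1u << LINE_BITS) - 1);
    uint32_t index  = (addr >> LINE_BITS) & ((1u << INDEX_BITS) - 1);
    uint32_t tag    = addr >> (LINE_BITS + INDEX_BITS);

    /* On a lookup, the cache compares 'tag' against the tag stored at
       'index'; a match is a hit and avoids a slow off-chip access. */
    printf("addr=%08X tag=%05X index=%02X offset=%02X\n",
           addr, tag, index, offset);
    return 0;
}
```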
Longer word sizes allow each clock cycle of a processor to carry out more computation, but correspond to physically larger integrated circuit dies with higher standby and operating power consumption. 4-, 8- or 12-bit processors are widely integrated into microcontrollers operating embedded systems. Where a system is expected to handle larger volumes of data or require a more flexible user interface, 16-, 32- or 64-bit processors are used. An 8- or 16-bit processor may be selected over a 32-bit processor for system-on-a-chip or microcontroller applications that require low-power electronics, or are part of a mixed-signal integrated circuit with noise-sensitive on-chip analog electronics such as high-resolution analog-to-digital converters, or both. Running 32-bit arithmetic on an 8-bit chip can end up using more power, as the chip must execute software with multiple instructions. Thousands of items that were traditionally not computer-related now include microprocessors.
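The cost of wide arithmetic on a narrow processor can be sketched as follows; the loop models the sequence of add-with-carry instructions an 8-bit CPU would need for a single 32-bit addition (the function name is illustrative):

```c
#include <stdint.h>
#include <stdio.h>

/* On an 8-bit CPU, a 32-bit add takes four add-with-carry instructions,
   one per byte -- roughly what this loop models. More instructions per
   operation means more cycles and, typically, more energy. */
void add32_on_8bit(uint8_t a[4], const uint8_t b[4])
{
    unsigned carry = 0;
    for (int i = 0; i < 4; i++) {            /* least significant byte first */
        unsigned sum = a[i] + b[i] + carry;  /* the ADC instruction's job    */
        a[i] = (uint8_t)sum;
        carry = sum >> 8;
    }
}

int main(void)
{
    uint8_t a[4] = {0xFF, 0xFF, 0x00, 0x00}; /* 0x0000FFFF, little-endian */
    uint8_t b[4] = {0x01, 0x00, 0x00, 0x00}; /* 0x00000001 */
    add32_on_8bit(a, b);
    printf("%02X%02X%02X%02X\n", a[3], a[2], a[1], a[0]); /* 00010000 */
    return 0;
}
```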
Machine code is a computer program written in machine language instructions that can be executed directly by a computer's central processing unit. Each instruction causes the CPU to perform a specific task, such as a load, a store, a jump, or an ALU operation on one or more units of data in CPU registers or memory. Machine code is a strictly numerical language intended to run as fast as possible, and it may be regarded as the lowest-level representation of a compiled or assembled computer program, or as a primitive and hardware-dependent programming language. While it is possible to write programs directly in machine code, it is tedious and error-prone to manage individual bits and calculate numerical addresses and constants manually. For this reason, programs are rarely written directly in machine code in modern contexts, though it may still be done for low-level debugging, program patching, and assembly language disassembly. The overwhelming majority of practical programs today are written in higher-level languages or assembly language.
The source code is translated to executable machine code by utilities such as compilers and linkers, with the important exception of interpreted programs, which are not translated into machine code. However, the interpreter itself, which may be seen as an executor or processor performing the instructions of the source code, typically consists of directly executable machine code. Machine code is by definition the lowest level of programming detail visible to the programmer, but internally many processors use microcode or optimise and transform machine code instructions into sequences of micro-operations; these are not considered machine code per se. Every processor or processor family has its own instruction set. Instructions are patterns of bits that by physical design correspond to different commands to the machine. Thus, the instruction set is specific to a class of processors using the same architecture. Successor or derivative processor designs include all the instructions of a predecessor and may add additional instructions.
Occasionally, a successor design will discontinue or alter the meaning of some instruction code, affecting code compatibility to some extent. Systems may also differ in other details, such as memory arrangement, operating systems, or peripheral devices; because a program normally relies on such factors, different systems will typically not run the same machine code, even when the same type of processor is used. A processor's instruction set may have all instructions of the same length, or it may have variable-length instructions. How the patterns are organized varies with the particular architecture and with the type of instruction. Most instructions have one or more opcode fields, which specify the basic instruction type and the actual operation, and other fields that may give the type of the operand, the addressing mode, the addressing offset or index, or the actual value itself. Not all machines or individual instructions have explicit operands. An accumulator machine has a combined left operand and result in an implicit accumulator for most arithmetic instructions.
Other architectures have accumulator versions of common instructions, with the accumulator regarded as one of the general registers by longer instructions. A stack machine has all of its operands on an implicit stack. Special-purpose instructions often lack explicit operands. This distinction between explicit and implicit operands is important in code generators, especially in the register allocation and live range tracking parts. A good code optimizer can track implicit as well as explicit operands, which may allow more frequent constant propagation, constant folding of registers, and other code enhancements. A computer program is a list of instructions that are executed by a CPU in order to solve a specific problem and thus accomplish a specific result. While simple processors execute instructions one after another, superscalar processors are capable of executing several instructions at once. Program flow may be influenced by special 'jump' instructions that transfer execution to an instruction other than the numerically following one.
Conditional jumps are taken (execution continues at another address) or not taken (execution continues at the next instruction) depending on some condition. A much more readable rendition of machine language, called assembly language, uses mnemonic codes to refer to machine code instructions, rather than using the instructions' numeric values directly. For example, on the Zilog Z80 processor, the machine code 00000101, which causes the CPU to decrement the B processor register, would be represented in assembly language as DEC B. The MIPS architecture provides a specific example of a machine code whose instructions are always 32 bits long. The general type of instruction is given by the op field, the highest 6 bits. J-type (jump) and I-type (immediate) instructions are fully specified by op. R-type (register) instructions include an additional field, funct, to determine the exact operation. The fields used in these types are: op (6 bits); rs, rt and rd (5 bits each), which name registers; shamt (5 bits), a shift amount; funct (6 bits); a 16-bit immediate; and a 26-bit jump target address.
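A small C sketch makes the field layout concrete; the instruction word below encodes add $t1, $t2, $t3, and the shifts and masks follow the standard MIPS R-type layout:

```c
#include <stdint.h>
#include <stdio.h>

/* Decode the fixed fields of a 32-bit MIPS instruction word.
   R-type layout: op(6) rs(5) rt(5) rd(5) shamt(5) funct(6). */
int main(void)
{
    uint32_t insn = 0x014B4820;  /* add $t1, $t2, $t3 */

    unsigned op    = (insn >> 26) & 0x3F;
    unsigned rs    = (insn >> 21) & 0x1F;
    unsigned rt    = (insn >> 16) & 0x1F;
    unsigned rd    = (insn >> 11) & 0x1F;
    unsigned shamt = (insn >>  6) & 0x1F;
    unsigned funct =  insn        & 0x3F;

    /* op == 0 selects R-type; funct then picks the ALU operation. */
    printf("op=%u rs=%u rt=%u rd=%u shamt=%u funct=%u\n",
           op, rs, rt, rd, shamt, funct);
    return 0;
}
```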
A simulation is an approximate imitation of the operation of a process or system. Simulating something first requires that a model be developed; this model is a well-defined description of the simulated subject and represents its key characteristics, such as its behaviour and abstract or physical properties. The model represents the system itself, whereas the simulation represents the operation of the system over time. Simulation is used in many contexts, such as simulation of technology for performance optimization, safety engineering, training and video games. Computer experiments are used to study simulation models. Simulation is also used with scientific modelling of natural systems or human systems to gain insight into their functioning, as in economics. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged: it may not be accessible, it may be dangerous or unacceptable to engage, it may be being designed but not yet built, or it may simply not exist. Key issues in simulation include the acquisition of valid source information about the relevant selection of key characteristics and behaviours, the use of simplifying approximations and assumptions within the simulation, and the fidelity and validity of the simulation outcomes.
Procedures and protocols for model verification and validation are an ongoing field of academic study, refinement and development in simulation technology and practice, particularly in the field of computer simulation. Simulations used in different fields developed largely independently, but 20th-century studies of systems theory and cybernetics, combined with the spreading use of computers across all those fields, have led to some unification and a more systematic view of the concept. Physical simulation refers to simulation in which physical objects are substituted for the real thing; these physical objects are often chosen because they are smaller or cheaper than the actual object or system. Interactive simulation is a special kind of physical simulation, referred to as human-in-the-loop simulation, in which physical simulations include human operators, such as in a flight simulator, sailing simulator, or driving simulator. Continuous simulation is a simulation where time evolves continuously based on numerical integration of differential equations.
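A minimal continuous simulation can be written in a few lines of C, assuming simple Euler integration of the decay equation dx/dt = -kx (the constants here are arbitrary illustrative choices):

```c
#include <stdio.h>

int main(void)
{
    double x = 1.0;          /* initial condition x(0) = 1    */
    const double k = 0.5;    /* decay rate in dx/dt = -k*x    */
    const double dt = 0.01;  /* time step                     */
    const int steps = 500;   /* simulate until t = 5.0        */

    for (int i = 0; i < steps; i++)
        x += dt * (-k * x);  /* one Euler integration step    */

    printf("x(5) ~ %f (exact e^-2.5 ~ 0.082085)\n", x);
    return 0;
}
```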
Discrete-event simulation is a simulation where time evolves along events that represent critical moments, while the values of the variables are either not relevant between two events or trivial to compute when needed. Stochastic simulation is a simulation where some variable or process is regulated by stochastic factors and estimated based on Monte Carlo techniques using pseudo-random numbers, so replicated runs from the same boundary conditions are expected to produce different results within a specific confidence band. Deterministic simulation is a simulation where the variables are regulated by deterministic algorithms, so replicated runs from the same boundary conditions always produce identical results. Hybrid simulation corresponds to a mix between continuous and discrete-event simulation and results in integrating the differential equations numerically between two sequential events, to reduce the number of discontinuities. Stand-alone simulation is a simulation running on a single workstation by itself.
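The stochastic/deterministic distinction shows up in a toy Monte Carlo estimate of pi: different seeds give different results within a confidence band, while fixing the seed reproduces the same run exactly (a sketch, not a production random-number setup):

```c
#include <stdio.h>
#include <stdlib.h>

/* Estimate pi by sampling random points in the unit square and counting
   those that fall inside the quarter circle. */
double estimate_pi(unsigned seed, int samples)
{
    int inside = 0;
    srand(seed);
    for (int i = 0; i < samples; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0)
            inside++;
    }
    return 4.0 * inside / samples;
}

int main(void)
{
    /* Three replicated runs with different seeds: results differ,
       but all cluster around pi within a confidence band. */
    for (unsigned seed = 1; seed <= 3; seed++)
        printf("seed %u: pi ~ %f\n", seed, estimate_pi(seed, 100000));
    return 0;
}
```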
Distributed simulation operates over distributed computers in order to guarantee access from and to different resources. Modeling and simulation as a service is an approach where simulation is accessed as a service over the web. Modeling, interoperable simulation and serious games is an approach where serious-games techniques are integrated with interoperable simulation. Simulation fidelity is used to describe the accuracy of a simulation and how closely it imitates its real-life counterpart. Fidelity is broadly classified into one of three categories: low, medium, and high. Specific descriptions of fidelity levels are subject to interpretation, but the following generalizations can be made: low fidelity is the minimum simulation required for a system to respond to accepted inputs and provide outputs; medium fidelity responds automatically to stimuli, with limited accuracy; high fidelity is nearly indistinguishable from, or as close as possible to, the real system. Human-in-the-loop simulations can include a computer simulation as a so-called synthetic environment. Simulation in failure analysis refers to simulation in which we create an environment or conditions to identify the cause of equipment failure.
This can be the fastest method to identify the failure cause. A computer simulation is an attempt to model a real-life or hypothetical situation on a computer so that it can be studied to see how the system works. By changing variables in the simulation, predictions may be made about the behaviour of the system; it is a tool to investigate the behaviour of the system under study. Computer simulation has become a useful part of modeling many natural systems in physics and biology, and human systems in economics and social science, as well as in engineering, to gain insight into the operation of those systems.
A debugger or debugging tool is a computer program used to test and debug other programs. The code to be examined might alternatively be running on an instruction set simulator, a technique that allows great power in its ability to halt when specific conditions are encountered, but which will typically be somewhat slower than executing the code directly on the appropriate processor. Some debuggers offer two modes of operation, full or partial simulation, to limit this impact. A "trap" occurs when the program cannot continue because of a programming bug or invalid data. For example, the program might have tried to use an instruction not available on the current version of the CPU or attempted to access unavailable or protected memory. When the program "traps" or reaches a preset condition, the debugger shows the location in the original code if it is a source-level debugger or symbolic debugger, as commonly seen in integrated development environments. If it is a low-level debugger or a machine-language debugger, it shows the line in the disassembly.
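As a rough illustration of a trap on a POSIX system, the C sketch below deliberately touches protected memory; the signal handler stands in for the debugger that would normally report the fault location:

```c
#include <signal.h>
#include <unistd.h>

/* Touching protected memory raises SIGSEGV; the handler (standing in
   for a debugger) reports the fault instead of dying silently. */
static void on_trap(int sig)
{
    (void)sig;
    /* write() is async-signal-safe; a real source-level debugger would
       show the faulting line of code here instead. */
    write(STDERR_FILENO, "trap: invalid memory access\n", 28);
    _exit(1);
}

int main(void)
{
    signal(SIGSEGV, on_trap);
    volatile int *p = 0;
    *p = 42;                 /* deliberate fault to trigger the trap */
    return 0;
}
```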
Typically, debuggers offer a query processor, a symbol resolver, an expression interpreter, and a debug support interface at the top level. Debuggers also offer more sophisticated functions such as running a program step by step, stopping at some event or specified instruction by means of a breakpoint, and tracking the values of variables. Some debuggers have the ability to modify program state; it may even be possible to continue execution at a different location in the program to bypass a crash or logical error. The same functionality which makes a debugger useful for eliminating bugs allows it to be used as a software cracking tool to evade copy protection, digital rights management, and other software protection features. It also makes it useful as a general verification tool, fault coverage tool, and performance analyzer, especially if instruction path lengths are shown. Most mainstream debugging engines, such as gdb and dbx, provide console-based command-line interfaces. Debugger front-ends are popular extensions to debugger engines that provide IDE integration, program animation, and visualization features.
Some debuggers include a feature called "reverse debugging", also known as "historical debugging" or "backwards debugging". These debuggers make it possible to step a program's execution backwards in time. Various debuggers include this feature. Microsoft Visual Studio offers IntelliTrace reverse debugging for Visual Basic .NET and some other languages, but not C++. Reverse debuggers also exist for C, C++, Python and other languages; some are open source. Some reverse debuggers slow down the target by orders of magnitude, but the best reverse debuggers cause a slowdown of 2× or less. Reverse debugging is useful for certain types of problems, but is still not commonly used yet. Some debuggers operate on a single specific language while others can handle multiple languages transparently. For example, if the main target program is written in COBOL but calls assembly language subroutines and PL/1 subroutines, the debugger may have to dynamically switch modes to accommodate the changes in language as they occur. Some debuggers also incorporate memory protection to avoid storage violations such as buffer overflow.
This may be important in transaction processing environments where memory is dynamically allocated from memory 'pools' on a task-by-task basis. Most modern microprocessors have at least one of these features in their CPU design to make debugging easier: hardware support for single-stepping a program, such as the trap flag. An instruction set that meets the Popek and Goldberg virtualization requirements makes it easier to write debugger software that runs on the same CPU as the software being debugged. In-system programming (ISP) allows an external hardware debugger to reprogram a system under test; many systems with such ISP support also have other hardware debug support. Hardware support for code and data breakpoints is provided by address comparators and data value comparators or, with more work involved, page fault hardware. JTAG access to hardware debug interfaces is available on processors such as those using the ARM architecture or the Nexus command set. Processors used in embedded systems typically have extensive JTAG debug support.
A programmer, coder, or software engineer is a person who creates computer software. The term computer programmer can refer to a specialist in one area of computers, or to a generalist who writes code for many kinds of software. One who practices, or professes, a formal approach to programming may be known as a programmer analyst. On the other hand, "code monkey" is a derogatory term for a programmer who writes code without any involvement in the design or specifications. A programmer's primary computer language (such as C, Java, or Python) is often prefixed to these titles, and those who work in a web environment often prefix their titles with web. A range of occupations—including software developer, web developer, mobile applications developer, embedded firmware developer, software engineer, computer scientist, game programmer, game developer, and software analyst—involve programming but also require a range of other skills. The use of the term programmer for these positions is sometimes considered an insulting or derogatory simplification.
British countess and mathematician Ada Lovelace is considered the first computer programmer, as she was the first to publish, in October 1842, an algorithm intended for implementation on Charles Babbage's analytical engine, for the calculation of Bernoulli numbers. Because Babbage's machine was never completed to a functioning standard in her time, she never saw this algorithm run. The first person to run a program on a functioning, modern, electrically based computer was the computer scientist Konrad Zuse, in 1941. The ENIAC programming team, consisting of Kay McNulty, Betty Jennings, Betty Snyder, Marlyn Wescoff, Fran Bilas and Ruth Lichterman, were the first working programmers. International Programmers' Day is celebrated annually on 7 January. In 2009, the government of Russia decreed a professional annual holiday known as Programmers' Day, celebrated on 13 September; it had been an unofficial international holiday before that. The word "software" did not appear in print until the 1960s.
Before this time, computers were programmed either by customers or by the few commercial computer vendors of the time, such as UNIVAC and IBM. The first company founded to provide software products and services was Computer Usage Company, in 1955. The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities. Universities and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers; some were distributed between users of a particular machine for no charge. Others were sold on a commercial basis, and firms such as Computer Sciences Corporation started to grow. The computer and hardware makers soon started bundling operating systems, system software and programming environments with their machines. The industry expanded greatly with the rise of the personal computer in the mid-1970s, which brought computing to the desktop of the office worker. In the following years, the personal computer also created a growing market for games and utilities.
DOS, Microsoft's first operating system product, was the dominant operating system at the time. In the early years of the 21st century, another successful business model arose for hosted software, called software-as-a-service, or SaaS. From the point of view of producers of some proprietary software, SaaS reduces concerns about unauthorized copying, since the software can only be accessed through the Web and, by definition, no client software is loaded onto the end user's PC. By 2014, the role of cloud developer had been defined. Computer programmers write, test and maintain the detailed instructions, called computer programs, that computers must follow to perform their functions. Programmers also conceive and test logical structures for solving problems by computer. Many technical innovations in programming — advanced computing technologies and sophisticated new languages and programming tools — have redefined the role of a programmer and elevated much of the programming work done today. Job titles and descriptions may vary, depending on the organization.
Programmers work in many settings, including corporate information technology departments, big software companies, small service firms and government entities of all sizes. Many professional programmers also work for consulting companies at client sites as contractors. Licensing is not required to work as a programmer, although professional certifications are commonly held by programmers. Programming is widely considered a profession. Programmers' work varies depending on the type of business for which they are writing programs. For example, the instructions involved in updating financial records are very different from those required to duplicate conditions on an aircraft for pilots training in a flight simulator. Simple programs can be written in a few hours; more complex ones may require more than a year of work, while others are never considered 'complete' but rather are continuously improved as long as they stay in use. In most cases, several programmers work together as a team under a senior programmer's supervision.
Programmers write programs according to the specifications determined primarily by more senior programmers and by systems analysts.
In computing, an emulator is hardware or software that enables one computer system (the host) to behave like another computer system (the guest). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system. Emulation refers to the ability of a computer program in an electronic device to emulate another program or device. Many printers, for example, are designed to emulate Hewlett-Packard LaserJet printers because so much software is written for HP printers. If a non-HP printer emulates an HP printer, any software written for a real HP printer will also run in the non-HP printer emulation and produce equivalent printing. Since at least the 1990s, many video game enthusiasts have used emulators to play classic arcade games from the 1980s using the games' original 1980s machine code and data, interpreted by a current-era system. A hardware emulator is an emulator which takes the form of a hardware device. Examples include the DOS-compatible card installed in some 1990s-era Macintosh computers, like the Centris 610 or Performa 630, that allowed them to run personal computer software programs, and FPGA-based hardware emulators.
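The core of a software emulator is a fetch-decode-execute loop over the guest's machine code. The C sketch below interprets a made-up accumulator machine whose opcodes are invented purely for this illustration:

```c
#include <stdint.h>
#include <stdio.h>

/* A toy emulator for an invented accumulator machine. Opcodes (made up
   for this sketch): 0x01 n = load immediate, 0x02 n = add immediate,
   0x03 = print accumulator, 0x00 = halt. */
int main(void)
{
    uint8_t program[] = { 0x01, 5, 0x02, 7, 0x03, 0x00 };
    uint8_t acc = 0;   /* the guest machine's accumulator       */
    size_t pc = 0;     /* the guest machine's program counter   */

    for (;;) {
        uint8_t op = program[pc++];          /* fetch             */
        switch (op) {                        /* decode + execute  */
        case 0x01: acc = program[pc++];  break;
        case 0x02: acc += program[pc++]; break;
        case 0x03: printf("acc = %u\n", acc); break;
        case 0x00: return 0;                 /* halt              */
        default:
            fprintf(stderr, "illegal opcode %02X\n", op);
            return 1;
        }
    }
}
```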
In a theoretical sense, the Church-Turing thesis implies that any operating environment can be emulated within any other environment. However, in practice it can be quite difficult, particularly when the exact behavior of the system to be emulated is not documented and has to be deduced through reverse engineering; the thesis also says nothing about timing constraints. Emulation is one strategy in digital preservation to combat obsolescence. Emulation focuses on recreating an original computer environment, which can be time-consuming and difficult to achieve, but valuable because of its ability to maintain a closer connection to the authenticity of the digital object. Emulation addresses the original hardware and software environment of the digital object and recreates it on a current machine. The emulator allows the user to have access to any kind of application or operating system on a current platform, while the software runs as it did in its original environment. Jeffery Rothenberg, an early proponent of emulation as a digital preservation strategy, states, "the ideal approach would provide a single extensible, long-term solution that can be designed once and for all and applied uniformly, in synchrony to all types of documents and media".
He further states that this should not only apply to out-of-date systems, but also be upwardly mobile to future unknown systems. Generally speaking, when a certain application is released in a new version, rather than addressing compatibility issues and migration for every digital object created in the previous version of that application, one could create an emulator for the application, allowing access to all of said digital objects. Emulation can also offer better graphics quality than the original hardware, and additional features the original hardware didn't have. Emulators maintain the original look and behavior of the digital object, which is just as important as the digital data itself. Despite the original cost of developing an emulator, it may prove to be the more cost-efficient solution over time. It also reduces labor hours, because rather than continuing an ongoing task of continual data migration for every digital object, once the library of past and present operating systems and application software is established in an emulator, these same technologies are used for every document using those platforms.
Many emulators have been developed and released under the GNU General Public License through the open-source environment, allowing for wide-scale collaboration. Emulators also allow software exclusive to one system to be used on another. For example, a PlayStation 2-exclusive video game could be played on a PC using an emulator; this is especially useful when the original system is difficult to obtain or incompatible with modern equipment. Intellectual property poses a challenge: many technology vendors implemented non-standard features during program development in order to establish their niche in the market, while applying ongoing upgrades to remain competitive. While this may have advanced the technology industry and increased vendors' market share, it has left users lost in a preservation nightmare with little supporting documentation, due to the proprietary nature of the hardware and software. Copyright laws do not yet address saving the documentation and specifications of proprietary software and hardware in an emulator module.
Emulators can also be used as a copyright infringement tool, since they allow users to play video games without having to buy the console, and rarely make any attempt to prevent the use of illegal copies. This leads to a number of legal uncertainties regarding emulation, and leads to software being programmed to refuse to work if it can tell the host is an emulator. These protections make it more difficult to design emulators, since they must be accurate enough to avoid triggering the protections, whose effects may not be obvious. Emulators also typically require better hardware than the original system has. Because of its primary use of digital formats, new media art relies heavily on emulation as a preservation strategy.
In system programming, an interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler to deal with the event. This interruption is temporary; after the interrupt handler finishes, the processor resumes normal activities. There are two types of interrupts: hardware interrupts and software interrupts. Hardware interrupts are used by devices to communicate that they require attention from the operating system. Internally, hardware interrupts are implemented using electronic alerting signals that are sent to the processor from an external device, which is either a part of the computer itself, such as a disk controller, or an external peripheral. For example, pressing a key on the keyboard or moving the mouse triggers hardware interrupts that cause the processor to read the keystroke or mouse position.
Unlike the software type, hardware interrupts are asynchronous and can occur in the middle of instruction execution, requiring additional care in programming. The act of initiating a hardware interrupt is referred to as an interrupt request (IRQ). A software interrupt is caused either by an exceptional condition in the processor itself, or by a special instruction in the instruction set which causes an interrupt when it is executed. The former is often called a trap or exception and is used for errors or events occurring during program execution that are exceptional enough that they cannot be handled within the program itself. For example, a divide-by-zero exception will be thrown if the processor's arithmetic logic unit is commanded to divide a number by zero, as this operation is undefined and cannot be carried out. The operating system will catch this exception and can decide what to do about it, for example, aborting the process and displaying an error message. Software interrupt instructions can function similarly to subroutine calls and are used for a variety of purposes, such as requesting services from device drivers, like interrupts sent to and from a disk controller to request reading or writing of data to and from the disk.
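On POSIX systems this divide-by-zero trap surfaces to the process as the SIGFPE signal, which the following C sketch provokes and catches (a minimal illustration, not robust error handling):

```c
#include <signal.h>
#include <unistd.h>

/* The hardware raises the divide-by-zero exception, the kernel catches
   it, and the process receives it as a signal it may handle. */
static void on_fpe(int sig)
{
    (void)sig;
    write(STDERR_FILENO, "caught SIGFPE: divide by zero\n", 30);
    _exit(1);
}

int main(void)
{
    signal(SIGFPE, on_fpe);
    volatile int zero = 0;      /* volatile defeats constant folding */
    volatile int r = 1 / zero;  /* integer divide by zero -> trap    */
    (void)r;
    return 0;
}
```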
Each interrupt has its own interrupt handler. The number of hardware interrupts is limited by the number of interrupt request lines to the processor, but there may be hundreds of different software interrupts. Interrupts are a commonly used technique for computer multitasking, especially in real-time computing; such a system is said to be interrupt-driven. Interrupts are similar to signals, the difference being that signals are used for inter-process communication, mediated by the kernel and handled by processes, while interrupts are mediated by the processor and handled by the kernel. The kernel may pass an interrupt on as a signal to a process. Hardware interrupts were introduced as an optimization, eliminating unproductive waiting time in polling loops while waiting for external events. The first system to use this approach was the DYSEAC, completed in 1954, although earlier systems provided error trap functions. Interrupts may be implemented in hardware as a distinct system with control lines, or they may be integrated into the memory subsystem.
If implemented in hardware, an interrupt controller circuit such as the IBM PC's Programmable Interrupt Controller (PIC) may be connected between the interrupting device and the processor's interrupt pin to multiplex several sources of interrupt onto the one or two CPU lines typically available. If implemented as part of the memory controller, interrupts are mapped into the system's memory address space. Interrupts can be categorized into these different types: Maskable interrupt: a hardware interrupt that may be ignored by setting a bit in an interrupt mask register's bit-mask. Non-maskable interrupt (NMI): a hardware interrupt that lacks an associated bit-mask, so that it can never be ignored; NMIs are used for the highest-priority tasks such as timers, especially watchdog timers. Inter-processor interrupt (IPI): a special case of interrupt, generated by one processor to interrupt another processor in a multiprocessor system. Software interrupt: an interrupt generated within a processor by executing an instruction; software interrupts are often used to implement system calls because they result in a subroutine call with a CPU ring level change.
Spurious interrupt: an unwanted hardware interrupt, typically generated by system conditions such as electrical interference on an interrupt line or through incorrectly designed hardware. Processors typically have an internal interrupt mask which allows software to ignore all external hardware interrupts while it is set. Setting or clearing this mask may be faster than accessing an interrupt mask register in a PIC or disabling interrupts in the device itself. In some cases, such as the x86 architecture, disabling and enabling interrupts on the processor itself acts as a memory barrier. An interrupt that leaves the machine in a well-defined state is called a precise interrupt. Such an interrupt has four properties: the program counter (PC) is saved in a known place; all instructions before the one pointed to by the PC have fully executed; no instruction beyond the one pointed to by the PC has been executed, or any such instructions are undone before handling the interrupt; and the execution state of the instruction pointed to by the PC is known.
An interrupt that does not meet these requirements is called an imprecise interrupt.
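As a closing illustration of the maskable-interrupt mechanism described above, the following C sketch models a pending register and a mask register and dispatches whatever is pending and unmasked; the bit values and highest-line-first priority order are illustrative assumptions:

```c
#include <stdint.h>
#include <stdio.h>

/* A toy model of maskable interrupts: a pending register collects
   requests, a mask register hides some of them, and the dispatcher
   services whatever is pending and unmasked, highest line first. */
int main(void)
{
    uint8_t pending = 0x0B;  /* lines 0, 1, 3 requesting service */
    uint8_t mask    = 0x02;  /* line 1 is masked (ignored)       */

    uint8_t active = pending & (uint8_t)~mask;
    for (int line = 7; line >= 0; line--) {       /* priority order */
        if (active & (1u << line)) {
            printf("servicing interrupt line %d\n", line);
            pending &= (uint8_t)~(1u << line);    /* acknowledge    */
        }
    }
    return 0;
}
```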