Random-access memory is a form of computer data storage that stores data and machine code currently being used. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of data inside the memory. In contrast, with other direct-access data storage media such as hard disks, CD-RWs, DVD-RWs and the older magnetic tapes and drum memory, the time required to read and write data items varies depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement. RAM contains multiplexing and demultiplexing circuitry to connect the data lines to the addressed storage for reading or writing the entry. Usually more than one bit of storage is accessed by the same address, and RAM devices often have multiple data lines and are said to be "8-bit" or "16-bit", etc. devices. In today's technology, random-access memory takes the form of integrated circuits. RAM is normally associated with volatile types of memory, where stored information is lost if power is removed, although non-volatile RAM has also been developed.
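The defining property above can be sketched in a few lines of Python. This is an illustrative model only, not tied to any real device: a word-addressable "8-bit" RAM in which every read and write costs the same regardless of address.

```python
# Minimal sketch of a word-addressable RAM: one fixed-width word per address,
# with constant-time access to any location (the "random access" property).

class RAM:
    def __init__(self, words, width_bits=8):
        self.mask = (1 << width_bits) - 1
        self.cells = [0] * words              # one word of storage per address

    def write(self, address, value):
        self.cells[address] = value & self.mask   # constant-time store,
                                                  # truncated to the data width
    def read(self, address):
        return self.cells[address]                # constant-time load

ram = RAM(words=1024, width_bits=8)   # a 1 KiB, 8-bit device
ram.write(0x3FF, 0x1AB)               # value truncated to the 8-bit data width
print(ram.read(0x3FF))                # -> 171 (0xAB)
```

Whether the address is 0x000 or 0x3FF, the access path is identical, which is exactly what distinguishes RAM from the rotating media described above.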
Other types of non-volatile memories exist that allow random access for read operations, but either do not allow write operations or have other kinds of limitations on them. These include most types of ROM and a type of flash memory called NOR-Flash. Integrated-circuit RAM chips came into the market in the early 1970s, with the first commercially available DRAM chip, the Intel 1103, introduced in October 1970. Early computers used relays, mechanical counters or delay lines for main memory functions. Ultrasonic delay lines were serial devices which could only reproduce data in the order in which it was written. Drum memory could be expanded at relatively low cost, but efficient retrieval of memory items required knowledge of the physical layout of the drum to optimize speed. Latches built out of vacuum tube triodes, and later out of discrete transistors, were used for smaller and faster memories such as registers; such registers were relatively large and too costly to use for large amounts of data. The first practical form of random-access memory was the Williams tube, starting in 1947.
It stored data as electrically charged spots on the face of a cathode-ray tube. Since the electron beam of the CRT could read and write the spots on the tube in any order, memory was random access. The capacity of the Williams tube was a few hundred to around a thousand bits, but it was much smaller and more power-efficient than using individual vacuum tube latches. Developed at the University of Manchester in England, the Williams tube provided the medium on which the first electronically stored program was implemented in the Manchester Baby computer, which first ran a program on 21 June 1948. In fact, rather than the Williams tube memory being designed for the Baby, the Baby was a testbed to demonstrate the reliability of the memory. Magnetic-core memory, developed up until the mid-1970s, became a widespread form of random-access memory. Data could be stored by changing the sense of each ring's magnetization, with one bit stored per ring. Since every ring had a combination of address wires to select and read or write it, access to any memory location in any sequence was possible.
Magnetic core memory was the standard form of memory system until displaced by solid-state memory in integrated circuits, starting in the early 1970s. Dynamic random-access memory allowed replacement of a 4- or 6-transistor latch circuit by a single transistor for each memory bit, greatly increasing memory density at the cost of volatility. Data was stored in the tiny capacitance of each transistor, and had to be periodically refreshed every few milliseconds before the charge could leak away. The Toshiba Toscal BC-1411 electronic calculator, introduced in 1965, used a form of DRAM built from discrete components. The one-transistor DRAM cell was developed by Robert H. Dennard in 1968. Prior to the development of integrated read-only memory circuits, permanent random-access memory was constructed using diode matrices driven by address decoders, or specially wound core rope memory planes. The two widely used forms of modern RAM are static RAM and dynamic RAM. In SRAM, a bit of data is stored using the state of a six-transistor memory cell.
This form of RAM is more expensive to produce, but is faster and requires less dynamic power than DRAM. In modern computers, SRAM is used as cache memory for the CPU. DRAM stores a bit of data using a transistor and capacitor pair, which together comprise a DRAM cell. The capacitor holds a high or low charge, and the transistor acts as a switch that lets the control circuitry on the chip read the capacitor's state of charge or change it. As this form of memory is less expensive to produce than static RAM, it is the predominant form of computer memory used in modern computers. Both static and dynamic RAM are considered volatile, as their state is lost or reset when power is removed from the system. By contrast, read-only memory stores data by permanently enabling or disabling selected transistors, such that the memory cannot be altered. Writeable variants of ROM share properties of both ROM and RAM, enabling data to persist without power and to be updated without requiring special equipment; these persistent forms of semiconductor ROM include USB flash drives, memory cards for cameras and portable devices, and solid-state drives.
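The refresh requirement described above can be illustrated with a toy simulation. All numbers here are invented for illustration, not real-device values: the point is only that a DRAM cell's capacitor charge decays over time, so the bit is lost unless the controller periodically reads and rewrites it.

```python
# Hedged sketch of why DRAM needs refreshing: the cell's capacitor charge
# leaks away, and a refresh simply reads the bit and rewrites it at full charge.

LEAK_PER_MS = 0.30      # assumed fractional charge lost per millisecond
THRESHOLD = 0.5         # charge above this reads as 1, below as 0

class DRAMCell:
    def __init__(self):
        self.charge = 0.0

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def tick(self, ms=1):                 # charge leaks away as time passes
        self.charge *= (1 - LEAK_PER_MS) ** ms

    def read(self):
        return 1 if self.charge > THRESHOLD else 0

    def refresh(self):                    # read the bit, rewrite at full charge
        self.write(self.read())

cell = DRAMCell()
cell.write(1)
cell.tick(ms=1); cell.refresh()           # refreshed in time: bit survives
cell.tick(ms=1); cell.refresh()
print(cell.read())                        # -> 1
cell.tick(ms=5)                           # refresh missed: charge decays away
print(cell.read())                        # -> 0
```

A real memory controller does this for every row of the chip on a fixed schedule, which is the volatility cost mentioned above.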
ECC memory includes special circuitry to detect and/or correct random faults (memory errors) in the stored data, using parity bits or error-correcting codes.
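The idea behind such error-correcting codes can be shown with a classic Hamming(7,4) code: 4 data bits are stored with 3 parity bits, and any single flipped bit can be located and corrected. Real ECC DIMMs use wider SECDED codes over 64-bit words, but the principle is the same.

```python
# Hamming(7,4): codeword positions 1..7 are p1 p2 d1 p3 d2 d3 d4, where each
# parity bit covers the positions whose binary index includes its own.

def hamming74_encode(d):               # d = [d1, d2, d3, d4], each 0 or 1
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):               # returns (data, corrected position or 0)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]     # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]     # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]     # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3    # 0 = no error, else 1-based error position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1           # correct the single-bit fault
    return [c[2], c[4], c[5], c[6]], syndrome

word = hamming74_encode([1, 0, 1, 1])
word[5] ^= 1                           # simulate a random bit flip in storage
data, pos = hamming74_decode(word)
print(data, pos)                       # -> [1, 0, 1, 1] 6
```

The syndrome is zero for a clean word and otherwise names the faulty position directly, which is what lets ECC hardware correct the error transparently.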
Disk storage is a general category of storage mechanisms where data is recorded by various electronic, optical, or mechanical changes to a surface layer of one or more rotating disks. A disk drive is a device implementing such a storage mechanism. Notable types are the hard disk drive containing a non-removable disk, the floppy disk drive and its removable floppy disk, and various optical disc drives and associated optical disc media. Early disc devices recorded audio information by analog methods, and the first video disc likewise used an analog recording method. In the music industry, analog recording has been replaced by digital optical technology, where the data is recorded in a digital format as optical information. The first commercial digital disk storage device was the IBM 350, which shipped in 1956 as a part of the IBM 305 RAMAC computing system. The random-access, low-density storage of disks was developed to complement the then widely used sequential-access, high-density storage provided by tape drives using magnetic tape. Vigorous innovation in disk storage technology, coupled with less vigorous innovation in tape storage, has reduced the difference in acquisition cost per terabyte between disk storage and tape storage.
Disk storage is now used in both computer storage and consumer electronic storage, e.g. audio CDs and video discs. Data on modern disks is stored in fixed-length blocks called sectors, which vary in size between devices from a few hundred to many thousands of bytes. Gross disk drive capacity is the number of disk surfaces times the number of blocks per surface times the number of bytes per block. In certain legacy IBM CKD drives the data was stored on magnetic disks with variable-length blocks, called records; usable capacity decreased as records got shorter, because of the fixed per-record overhead. Digital disk drives are block storage devices: each disk is divided into logical blocks, which are addressed using their logical block addresses, and reading from or writing to the disk happens at the granularity of blocks. Early disk capacity was quite low and has since been improved in several ways. Improvements in mechanical design and manufacture allowed smaller and more precise heads, meaning that more tracks could be stored on each of the disks. Advancements in data compression methods permitted more information to be stored in each of the individual sectors.
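The capacity formula and the logical-block addressing just described can be made concrete. The geometry numbers below are made up for illustration; the CHS-to-LBA mapping is the classic formula used for drives addressed by cylinder, head and sector.

```python
# Worked example: gross capacity = surfaces x blocks/surface x bytes/block,
# and the classic cylinder/head/sector -> logical block address mapping.

surfaces = 4                 # e.g. two platters, both sides used
blocks_per_surface = 500_000
bytes_per_block = 512

gross_capacity = surfaces * blocks_per_surface * bytes_per_block
print(gross_capacity)        # -> 1024000000 (about 1 GB)

def chs_to_lba(c, h, s, heads, sectors_per_track):
    # sectors are 1-based in CHS notation, hence the s - 1
    return (c * heads + h) * sectors_per_track + (s - 1)

print(chs_to_lba(0, 0, 1, heads=4, sectors_per_track=63))   # -> 0
print(chs_to_lba(1, 2, 5, heads=4, sectors_per_track=63))   # -> 382
```

The host only ever sees the flat LBA numbering; the drive's firmware maps it back onto physical surfaces and tracks.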
The drive stores data onto cylinders and sectors. The sector is the smallest unit of data that can be stored in a hard disk drive, and each file will have many sector units assigned to it. The smallest entity in a CD is called a frame, which consists of 33 bytes and contains six complete 16-bit stereo samples; the other nine bytes consist of eight CIRC error-correction bytes and one subcode byte used for control and display. The information is sent from the computer processor to the BIOS and into a chip controlling the data transfer, then out to the hard drive via a multi-wire connector. Once the data is received onto the circuit board of the drive, it is translated and compressed into a format that the individual drive can use to store onto the disk itself. The data is then passed to a chip on the circuit board that controls access to the drive. The drive is divided into sectors of data stored onto one of the sides of one of the internal disks; an HDD with two disks internally will store data on all four surfaces.
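The CD frame arithmetic above can be spelled out explicitly: six stereo samples at 16 bits each account for 24 of the 33 bytes, and error correction plus subcode account for the rest.

```python
# The 33-byte CD frame, decomposed: audio payload + CIRC + subcode.

samples = 6
channels = 2                       # stereo
bytes_per_sample = 16 // 8         # 16-bit samples

audio_bytes = samples * channels * bytes_per_sample
frame_bytes = audio_bytes + 8 + 1  # + 8 CIRC error-correction bytes + 1 subcode
print(audio_bytes, frame_bytes)    # -> 24 33
```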
The hardware on the drive tells the actuator arm where it is to go for the relevant track, and the compressed information is sent down to the head, which changes the physical properties (optically or magnetically, for example) of each byte on the drive, thus storing the information. A file is not stored in a linear manner; rather, it is held in the way best suited for quickest retrieval. Mechanically there are two different motions occurring inside the drive. One is the rotation of the disks inside the device; the other is the side-to-side motion of the head across the disk. There are two types of disk rotation methods: constant linear velocity (CLV), which varies the rotational speed of the optical disc depending upon the position of the head, and constant angular velocity (CAV), which spins the media at one constant speed regardless of where the head is positioned. Track positioning also follows two different methods across disk storage devices. Storage devices focused on holding computer data, e.g. HDDs, FDDs and Iomega Zip drives, use concentric tracks to store data.
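The difference between the two rotation strategies comes straight out of the geometry of circular motion: at fixed rpm (CAV) the linear speed under the head grows with radius, so holding the linear speed fixed (CLV) requires the rpm to fall toward the outer edge. The speeds below are illustrative, roughly CD-like values, not a specification.

```python
# CAV vs CLV, from v = 2*pi*r * (rpm / 60).

import math

def cav_linear_speed(rpm, radius_m):
    return 2 * math.pi * radius_m * rpm / 60          # m/s under the head

def clv_rpm(target_speed_m_s, radius_m):
    return target_speed_m_s * 60 / (2 * math.pi * radius_m)

# CAV: same rpm everywhere, so the linear speed is higher at the outer edge.
print(round(cav_linear_speed(500, 0.025), 2))   # -> 1.31  (inner track)
print(round(cav_linear_speed(500, 0.058), 2))   # -> 3.04  (outer track)

# CLV: to hold ~1.3 m/s everywhere, the disc must slow down toward the edge.
print(round(clv_rpm(1.3, 0.025)))               # -> 497   (inner track)
print(round(clv_rpm(1.3, 0.058)))               # -> 214   (outer track)
```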
During a sequential read or write operation, after the drive accesses all the sectors in a track it repositions the head to the next track. This causes a momentary delay in the flow of data between the drive and the computer. In contrast, optical audio and video discs use a single spiral track that starts at the innermost point on the disc and flows continuously to the outer edge; when reading or writing data there is no need to stop the flow of data to switch tracks. This is similar to vinyl records, except that vinyl records started at the outer edge and spiraled in toward the center. The disk drive interface is the mechanism/protocol of communication between the drive and the host computer.
A game engine is a software-development environment designed for people to build video games. Developers use game engines to construct games for consoles, mobile devices and personal computers. The core functionality provided by a game engine includes a rendering engine for 2D or 3D graphics, a physics engine or collision detection, scripting, artificial intelligence, streaming, memory management, localization support and a scene graph, and may include video support for cinematics. Implementers economize on the process of game development by reusing/adapting, in large part, the same game engine to produce different games or to aid in porting games to multiple platforms. In many cases game engines provide a suite of visual development tools in addition to reusable software components; these tools are provided in an integrated development environment to enable simplified, rapid development of games in a data-driven manner. Game engine developers attempt to "pre-invent the wheel" by developing robust software suites which include many elements a game developer may need to build a game.
Most game engine suites provide facilities that ease development, such as graphics, physics and AI functions. These game engines are sometimes called "middleware" because, as with the business sense of the term, they provide a flexible and reusable software platform which provides all the core functionality needed, right out of the box, to develop a game application while reducing costs and time-to-market — all critical factors in the competitive video game industry; as of 2001, Gamebryo, JMonkeyEngine and RenderWare were widely used middleware programs of this kind. Like other types of middleware, game engines provide platform abstraction, allowing the same game to be run on various platforms including game consoles and personal computers with few, if any, changes made to the game source code. Game engines are often designed with a component-based architecture that allows specific systems in the engine to be replaced or extended with more specialized game middleware components; some game engines are designed as a series of loosely connected game middleware components that can be selectively combined to create a custom engine, instead of the more common approach of extending or customizing a flexible integrated product.
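The component-based idea above can be sketched in a few lines of Python. All class names here are invented for illustration: the engine depends only on the small interface each subsystem exposes, so a third-party middleware physics component can replace the stock one without touching the rest of the engine.

```python
# Illustrative component-based engine: subsystems are swappable as long as
# they honor the same small interface.

class PhysicsSystem:
    def step(self, dt):
        return f"basic physics step ({dt}s)"

class FancyMiddlewarePhysics:        # a hypothetical third-party replacement
    def step(self, dt):
        return f"middleware physics step ({dt}s)"

class Renderer:
    def draw(self):
        return "frame drawn"

class Engine:
    def __init__(self, physics, renderer):
        self.physics = physics       # any object with a .step(dt) method
        self.renderer = renderer

    def tick(self, dt):              # one frame: advance physics, then draw
        return [self.physics.step(dt), self.renderer.draw()]

stock = Engine(PhysicsSystem(), Renderer())
custom = Engine(FancyMiddlewarePhysics(), Renderer())   # physics swapped out
print(stock.tick(0.016)[0])          # -> basic physics step (0.016s)
print(custom.tick(0.016)[0])         # -> middleware physics step (0.016s)
```

Real engines do the same thing at much larger scale, often loading such components dynamically rather than wiring them up in code.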
However extensibility is achieved, it remains a high priority for game engines due to the wide variety of uses to which they are applied. Despite the specificity of the name, game engines are often used for other kinds of interactive applications with real-time graphical needs, such as marketing demos, architectural visualizations, training simulations and modeling environments. Some game engines only provide real-time 3D rendering capabilities instead of the wide range of functionality needed by games. These engines rely upon the game developer to implement the rest of this functionality or to assemble it from other game middleware components; these types of engines are referred to as a "graphics engine", "rendering engine", or "3D engine" instead of the more encompassing term "game engine". This terminology is inconsistently used, as many full-featured 3D game engines are referred to simply as "3D engines". A few examples of graphics engines are: Crystal Space, Genesis3D, Irrlicht, OGRE, RealmForge, Truevision3D and Vision Engine.
Modern game or graphics engines provide a scene graph, an object-oriented representation of the 3D game world which simplifies game design and can be used for more efficient rendering of vast virtual worlds. As technology ages, the components of an engine may become outdated or insufficient for the requirements of a given project. Since the complexity of programming a new engine may result in unwanted delays, a development team may elect to update their existing engine with newer functionality or components. Such a framework is composed of a multitude of different components. The actual game logic has to be implemented by some algorithms; it is distinct from any rendering, sound or input work. The rendering engine generates animated 3D graphics by any of a number of methods. Instead of being programmed and compiled to be executed on the CPU or GPU directly, most rendering engines are built upon one or multiple rendering application programming interfaces, such as Direct3D, OpenGL, or Vulkan, which provide a software abstraction of the graphics processing unit.
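A minimal sketch of the scene-graph idea: each node stores a transform relative to its parent (here just a 2D translation, to keep it short), and a traversal accumulates transforms from the root down, so moving a parent automatically moves everything attached to it.

```python
# Toy scene graph: nodes hold parent-relative offsets; a depth-first traversal
# composes them into world positions.

class Node:
    def __init__(self, name, offset=(0, 0)):
        self.name = name
        self.offset = offset          # position relative to the parent node
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def world_positions(node, parent_pos=(0, 0), out=None):
    if out is None:
        out = {}
    pos = (parent_pos[0] + node.offset[0], parent_pos[1] + node.offset[1])
    out[node.name] = pos
    for child in node.children:       # depth-first traversal of the graph
        world_positions(child, pos, out)
    return out

root = Node("world")
tank = root.add(Node("tank", offset=(10, 5)))
tank.add(Node("turret", offset=(0, 2)))   # moves automatically with the tank

print(world_positions(root)["turret"])    # -> (10, 7)
```

Real engines store full 4x4 matrices per node and use the same traversal for culling and rendering, not just positioning.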
Low-level libraries such as DirectX, Simple DirectMedia Layer and OpenGL are commonly used in games as they also provide hardware-independent access to other computer hardware such as input devices, network cards and sound cards. Before hardware-accelerated 3D graphics, software renderers had been used. Software rendering is still used in some modeling tools or for still-rendered images when visual accuracy is valued over real-time performance, or when the computer hardware does not meet requirements such as shader support. With the advent of hardware-accelerated physics processing, various physics APIs such as PAL and the physics extensions of COLLADA became available to provide a software abstraction of the physics processing units of different middleware providers and console platforms. Game engines can be written in any programming language, such as C++, C or Java, though each language is structurally different and may provide different levels of access to specific functions. The audio engine is the component which consists of algorithms related to the loading and output of sound through the client's speaker system.
At a minimum it will be able to load, decompress and play sound files.
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources. For hardware functions such as input and output and memory allocation, the operating system acts as an intermediary between programs and the computer hardware, although the application code is executed directly by the hardware and makes system calls to an OS function or is interrupted by it. Operating systems are found on many devices that contain a computer – from cellular phones and video game consoles to web servers and supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 82.74%; macOS by Apple Inc. is in second place, and the varieties of Linux are collectively in third place. In the mobile sector, Google's Android accounted for up to 70% of use in 2017; according to third-quarter 2016 data, Android on smartphones is dominant with 87.5 percent and a growth rate of 10.3 percent per year, followed by Apple's iOS with 12.1 percent and a per-year decrease in market share of 5.2 percent, while other operating systems amount to just 0.3 percent.
Linux distributions are dominant in the supercomputing sector. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can only run one program at a time, while a multi-tasking operating system allows more than one program to be running concurrently; this is achieved by time-sharing, where the available processor time is divided between multiple processes. These processes are each interrupted in time slices by a task-scheduling subsystem of the operating system. Multi-tasking may be characterized in preemptive and co-operative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs. Unix-like operating systems, such as Solaris and Linux—as well as non-Unix-like, such as AmigaOS—support preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner. 16-bit versions of Microsoft Windows used cooperative multi-tasking.
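Preemptive time-sharing can be illustrated with a toy round-robin scheduler. This is a simulation, not an OS implementation: tasks are Python generators, each yield marks one unit of work, and the "operating system" forcibly interrupts a task when its time slice expires.

```python
# Toy round-robin time-sharing: each task runs for a fixed slice, is then
# preempted and sent to the back of the ready queue.

from collections import deque

def task(name, units):
    for i in range(units):
        yield f"{name}:{i}"

def round_robin(tasks, time_slice=2):
    ready = deque(tasks)
    trace = []
    while ready:
        current = ready.popleft()
        for _ in range(time_slice):            # run until the slice expires...
            try:
                trace.append(next(current))
            except StopIteration:              # ...or the task finishes early
                break
        else:
            ready.append(current)              # preempted: back of the queue
    return trace

print(round_robin([task("A", 3), task("B", 2)]))
# -> ['A:0', 'A:1', 'B:0', 'B:1', 'A:2']
```

No task can monopolize the processor, which is exactly the property that distinguishes preemptive from cooperative multitasking: here the scheduler, not the task, decides when to switch.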
32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users, but may allow multiple programs to run in tandem. A multi-user operating system extends the basic concept of multi-tasking with facilities that identify processes and resources, such as disk space, belonging to multiple users, and the system permits multiple users to interact with the system at the same time. Time-sharing operating systems schedule tasks for efficient use of the system and may include accounting software for cost allocation of processor time, mass storage and other resources to multiple users. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer; the development of networked computers that could be linked and communicate with each other gave rise to distributed computing. Distributed computations are carried out on more than one machine; when computers in a group work in cooperation, they form a distributed system.
In an OS, distributed- and cloud-computing context, templating refers to creating a single virtual machine image as a guest operating system, then saving it as a tool for multiple running virtual machines. The technique is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems; they are designed to operate on small machines like PDAs with less autonomy. They are able to operate with a limited number of resources, and they are compact and efficient by design. Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it uses specialized scheduling algorithms so that a deterministic nature of behavior is achieved. An event-driven system switches between tasks based on their priorities or external events, while time-sharing operating systems switch tasks based on clock interrupts.
A library operating system is one in which the services that a typical operating system provides, such as networking, are provided in the form of libraries and composed with the application and configuration code to construct a unikernel: a specialized, single address space, machine image that can be deployed to cloud or embedded environments. Early computers were built to perform a series of single tasks, like a calculator. Basic operating system features were developed in the 1950s, such as resident monitor functions that could automatically run different programs in succession to speed up processing. Operating systems did not exist in their more complex forms until the early 1960s. Hardware features were added that enabled use of runtime libraries and parallel processing. When personal computers became popular in the 1980s, operating systems were made for them that were similar in concept to those used on larger computers. In the 1940s, the earliest electronic digital systems had no operating systems.
Electronic systems of this time were programmed on rows of mechanical switches or by jumper wires on plug boards. These were special-purpose systems that, for example, generated ballistics tables for the military or controlled the printing of payroll checks from data on punched paper cards.
Z/OS is a 64-bit operating system for IBM mainframes, produced by IBM. It is the successor to OS/390, which in turn followed a string of MVS versions. Like OS/390, z/OS combines a number of separate, related products, some of which are still optional. Z/OS offers the attributes of modern operating systems but retains much of the functionality originating in the 1960s and each subsequent decade, still found in daily use. Z/OS was first introduced in October 2000. Z/OS supports stable mainframe systems and standards such as CICS, COBOL, IMS, DB2, RACF, SNA, IBM MQ, record-oriented data access methods, REXX, CLIST, SMP/E, JCL, TSO/E and ISPF, among others. In addition, z/OS supports 64-bit Java, C, C++, and UNIX APIs and applications through UNIX System Services – The Open Group certifies z/OS as a compliant UNIX operating system – with UNIX/Linux-style hierarchical HFS and zFS file systems; as a result, z/OS hosts a broad range of open source software. Z/OS can communicate directly via TCP/IP, including IPv6, and includes standard HTTP servers along with other common services such as FTP, NFS and CIFS/SMB.
Another central design philosophy is support for a high quality of service within a single operating system instance, although z/OS also has built-in support for Parallel Sysplex clustering. Z/OS has a Workload Manager and dispatcher which automatically manage numerous concurrently hosted units of work running in separate key-protected address spaces according to dynamically adjustable goals; this capability inherently supports multi-tenancy within a single operating system image. Modern IBM mainframes additionally offer two further levels of virtualization: LPARs and z/VM. These functions within the hardware, z/OS and z/VM — together with Linux and OpenSolaris support — have encouraged development of new applications for mainframes, many of which utilize the WebSphere Application Server for z/OS middleware. From its inception z/OS has supported tri-modal addressing. Up through Version 1.5, z/OS itself could start in either 31-bit ESA/390 or 64-bit z/Architecture mode, so it could function on older hardware, albeit without 64-bit application support on those machines.
IBM support for z/OS 1.5 ended on March 31, 2007. Now z/OS only runs in 64-bit mode. Application programmers can still use any addressing mode: all applications, regardless of their addressing mode, can coexist without modification, IBM maintains commitment to tri-modal backward compatibility. However, increasing numbers of middleware products and applications, such as DB2 Version 8 and above, now require and exploit 64-bit addressing. IBM markets z/OS as its flagship operating system, suited for continuous, high-volume operation with high security and stability. Z/OS is available under standard license pricing as well as via IBM Z New Application License Charges and "IBM Z Solution Edition," two lower priced offerings aimed at supporting newer applications. U. S. standard commercial z/OS pricing starts at about $125 per month, including support, for the smallest zNALC installation running the base z/OS product plus a typical set of optional z/OS features. Z/OS introduced Variable Workload License Charges and Entry Workload License Charges which are sub-capacity billing options.
VWLC and EWLC customers only pay for peak monthly z/OS usage, not for full machine capacity as with the previous OS/390 operating system. VWLC and EWLC are available for most IBM software products running on z/OS; their peaks are separately calculated but can never exceed the z/OS peak. To be eligible for sub-capacity licensing, a z/OS customer must be running in 64-bit mode, must have eliminated OS/390 from the system, and must e-mail IBM monthly sub-capacity reports. Sub-capacity billing reduces software charges for most IBM mainframe customers. Advanced Workload License Charges is the successor to VWLC on mainframe models starting with the zEnterprise 196, and EAWLC is an option on zEnterprise 114 models. AWLC and EAWLC offer further sub-capacity discounts. Within each address space, z/OS traditionally permits the placement of only data, not code, above the 2 GB "bar"; z/OS enforces this distinction for performance reasons. There are no architectural impediments to allowing more than 2 GB of application code per address space.
IBM has started to allow Java code running on z/OS to execute above the 2 GB bar, again for performance reasons. Starting with z/OS version 2 release 3, code may be placed and executed above the 2 GB "bar"; however, few z/OS services may be invoked from above the "bar". Memory there is obtained as "Large Memory Objects" in multiples of 1 MB. There are three types of large memory objects: unshared, where only the creating address space can access the memory; shared, where the creating address space can give access to specific other address spaces; and common, where all address spaces can access the memory. A Generation Data Group is a special type of file used by IBM's mainframe operating system z/OS. The actual GDG is a description of how many generations of a file are to be kept, and how old the oldest generation must be, at least, before it is deleted. Whenever a new generation is created, the system checks whether one or more obsolete generations should be deleted.
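The generation-data-group bookkeeping described above can be sketched as a simple rotation policy. The dataset name, naming pattern and limit here are invented for illustration (real z/OS GDG names follow the GnnnnVnn convention, which the sketch imitates):

```python
# Hedged sketch of GDG-style retention: keep at most `limit` generations,
# so creating a new generation rolls the oldest ones off.

class GDG:
    def __init__(self, base, limit):
        self.base = base
        self.limit = limit            # maximum generations to retain
        self.generations = []         # oldest first
        self.next_number = 1

    def new_generation(self):
        name = f"{self.base}.G{self.next_number:04d}V00"
        self.next_number += 1
        self.generations.append(name)
        deleted = []
        while len(self.generations) > self.limit:   # scratch obsolete ones
            deleted.append(self.generations.pop(0))
        return name, deleted

gdg = GDG("PAYROLL.WEEKLY", limit=3)      # hypothetical dataset name
for _ in range(4):
    name, deleted = gdg.new_generation()

print(gdg.generations)   # the three newest generations survive
print(deleted)           # -> ['PAYROLL.WEEKLY.G0001V00']
```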
A web browser is a software application for accessing information on the World Wide Web. Each individual web page, image and video is identified by a distinct Uniform Resource Locator, enabling browsers to retrieve these resources from a web server and display them on the user's device. A web browser is not the same thing as a search engine, though the two are often confused. For a user, a search engine is just a website, such as google.com, that stores searchable data about other websites. But to connect to a website's server and display its web pages, a user needs to have a web browser installed on their device. The most popular browsers are Chrome, Firefox, Safari, Internet Explorer and Edge. The first web browser, called WorldWideWeb, was invented in 1990 by Sir Tim Berners-Lee; he then recruited Nicola Pellow to write the Line Mode Browser, which displayed web pages on dumb terminals. 1993 was a landmark year with the release of Mosaic, credited as "the world's first popular browser". Its innovative graphical interface made the World Wide Web system easy to use and thus more accessible to the average person.
This, in turn, sparked the Internet boom of the 1990s when the Web grew at a rapid rate. Marc Andreessen, the leader of the Mosaic team, soon started his own company, which released the Mosaic-influenced Netscape Navigator in 1994. Navigator became the most popular browser. Microsoft debuted Internet Explorer in 1995. Microsoft was able to gain a dominant position for two reasons: it bundled Internet Explorer with its popular Microsoft Windows operating system and did so as freeware with no restrictions on usage; the market share of Internet Explorer peaked at over 95% in 2002. In 1998, desperate to remain competitive, Netscape launched what would become the Mozilla Foundation to create a new browser using the open source software model; this work evolved into Firefox, first released by Mozilla in 2004. Firefox reached a 28% market share in 2011. Apple released its Safari browser in 2003, it remains the dominant browser on Apple platforms. The last major entrant to the browser market was Google, its Chrome browser, which debuted in 2008, has been a huge success.
Once a web page has been retrieved, the browser's rendering engine displays it on the user's device; this includes image and video formats supported by the browser. Web pages contain hyperlinks to other pages and resources; each link contains a URL, and when it is clicked, the browser navigates to the new resource. Thus the process of bringing content to the user begins again. To implement all of this, modern browsers are a combination of numerous software components. Web browsers can be configured with a built-in settings menu. Depending on the browser, the menu may be named Options or Preferences, and it contains different types of settings. For example, users can change their home page and default search engine, and they can change default web page colors and fonts. Various network connectivity and privacy settings are also usually available. During the course of browsing, cookies received from various websites are stored by the browser; some of them contain login credentials or site preferences. However, others are used for tracking user behavior over long periods of time, so browsers provide settings for removing cookies when exiting the browser.
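The navigation step described above involves resolving a clicked link's URL against the current page's URL before the browser fetches the new resource. Python's standard library implements the relevant RFC 3986 resolution rules, so the behavior can be demonstrated directly:

```python
# How a browser resolves the URL in a clicked hyperlink against the current page.

from urllib.parse import urljoin, urlparse

current_page = "https://example.com/docs/guide/intro.html"  # hypothetical page

print(urljoin(current_page, "setup.html"))        # relative link, same directory
# -> https://example.com/docs/guide/setup.html
print(urljoin(current_page, "/pricing"))          # site-absolute link
# -> https://example.com/pricing
print(urljoin(current_page, "https://other.org/page"))   # fully absolute link
# -> https://other.org/page

parts = urlparse(current_page)       # scheme and host tell the browser
print(parts.scheme, parts.netloc)    # which server to connect to
# -> https example.com
```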
Finer-grained management of cookies requires a browser extension. The most popular browsers have a number of features in common: they allow users to browse in a private mode, they can be customized with extensions, and some of them provide a sync service. Most browsers have these user interface features: Allow the user to open multiple pages at the same time, either in different browser windows or in different tabs of the same window. Back and forward buttons to go back to the previous page or forward to the next one. A refresh or reload button to reload the current page. A stop button to cancel loading the page. A home button to return to the user's home page. An address bar to display and edit the URL of the current page. A search bar to input terms into a search engine. There are niche browsers with distinct features; one example is text-only browsers, which can benefit people with slow Internet connections or those with visual impairments.
A debugger or debugging tool is a computer program used to test and debug other programs. The code to be examined might alternatively be running on an instruction set simulator, a technique that allows great power in its ability to halt when specific conditions are encountered, but which will typically be somewhat slower than executing the code directly on the appropriate processor; some debuggers offer two modes of operation, full or partial simulation, to limit this impact. A "trap" occurs when the program cannot continue because of a programming bug or invalid data. For example, the program might have tried to use an instruction not available on the current version of the CPU, or attempted to access unavailable or protected memory. When the program "traps" or reaches a preset condition, the debugger shows the location in the original code if it is a source-level debugger or symbolic debugger, as now commonly seen in integrated development environments. If it is a low-level debugger or a machine-language debugger, it shows the line in the disassembly.
Debuggers typically offer a query processor, a symbol resolver, an expression interpreter and a debug support interface at their top level. Debuggers also offer more sophisticated functions such as running a program step by step, stopping at some event or specified instruction by means of a breakpoint, and tracking the values of variables; some debuggers have the ability to modify program state. It may be possible to continue execution at a different location in the program to bypass a crash or logical error. The same functionality which makes a debugger useful for eliminating bugs allows it to be used as a software cracking tool to evade copy protection, digital rights management and other software protection features. It also makes it useful as a general verification tool, for fault coverage, and as a performance analyzer if instruction path lengths are shown. Most mainstream debugging engines, such as gdb and dbx, provide console-based command-line interfaces. Debugger front-ends are popular extensions to debugger engines that provide IDE integration, program animation and visualization features.
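The step-by-step execution and variable tracking described above can be demonstrated with Python's own tracing hook, the same `sys.settrace` mechanism that pdb and other Python source-level debuggers are built on. The target function here is invented for the example.

```python
# A minimal line-level tracer: record each executed line of one function
# together with the current value of a watched variable.

import sys

events = []

def tracer(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "target":
        # line offset within the function, plus the value of 'total' if bound
        events.append((frame.f_lineno - frame.f_code.co_firstlineno,
                       frame.f_locals.get("total")))
    return tracer          # keep tracing inside this frame

def target(n):
    total = 0
    for i in range(n):
        total += i
    return total

sys.settrace(tracer)       # install the trace hook, like a debugger attaching
result = target(3)
sys.settrace(None)         # detach

print(result)              # -> 3
print(events[0])           # first traced line: 'total' not yet assigned
```

A real debugger layers breakpoint management and a user interface on top of exactly this kind of per-line callback.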
Some debuggers include a feature called "reverse debugging", also known as "historical debugging" or "backwards debugging". These debuggers make it possible to step a program's execution backwards in time. Various debuggers include this feature: Microsoft Visual Studio offers IntelliTrace reverse debugging for Visual Basic .NET and some other languages, but not C++. Reverse debuggers also exist for C, C++, Python and other languages; some are open source. Some reverse debuggers slow down the target by orders of magnitude, but the best reverse debuggers cause a slowdown of 2× or less. Reverse debugging is useful for certain types of problems, but is still not widely used. Some debuggers operate on a single specific language while others can handle multiple languages transparently. For example, if the main target program is written in COBOL but calls assembly language subroutines and PL/1 subroutines, the debugger may have to dynamically switch modes to accommodate the changes in language as they occur. Some debuggers also incorporate memory protection to avoid storage violations such as buffer overflow.
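One common way to implement reverse debugging is record-and-replay: every step of forward execution snapshots the program state, and "stepping backwards" just walks the recorded history. The sketch below is a deliberately tiny illustration of that idea, not any real debugger's mechanism.

```python
# Record-and-replay sketch: snapshot state at every step, then move a cursor
# backwards through the recorded history instead of re-executing anything.

history = []                      # snapshots, one per executed step

def record(step, state):
    history.append((step, dict(state)))   # copy, so later mutation is safe

state = {"x": 0}
for step in range(5):             # forward execution, recording as we go
    state["x"] += step
    record(step, state)

cursor = len(history) - 1         # "now": the most recent snapshot
print(history[cursor])            # -> (4, {'x': 10})

cursor -= 2                       # step backwards twice in time
print(history[cursor])            # -> (2, {'x': 3})
```

Production reverse debuggers avoid the cost of full snapshots with techniques such as periodic checkpoints plus deterministic re-execution, which is why their slowdowns can be as low as the 2× mentioned above.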
This may be important in transaction processing environments where memory is dynamically allocated from memory "pools" on a task-by-task basis. Most modern microprocessors have at least one of these features in their CPU design to make debugging easier: Hardware support for single-stepping a program, such as the trap flag. An instruction set that meets the Popek and Goldberg virtualization requirements, which makes it easier to write debugger software that runs on the same CPU as the software being debugged. In-system programming, which allows an external hardware debugger to reprogram a system under test; many systems with such ISP support also have other hardware debug support. Hardware support for code and data breakpoints, such as address comparators and data value comparators or, with more work involved, page fault hardware. JTAG access to hardware debug interfaces such as those on ARM architecture processors or using the Nexus command set. Processors used in embedded systems typically have extensive JTAG debug support.