A control register is a processor register that changes or controls the general behavior of a CPU or other digital device. Common tasks performed by control registers include interrupt control, switching the addressing mode, and paging control. The CR0 register is 32 bits long on the 386 and higher processors; on x86-64 processors in long mode, it is 64 bits long. CR0 has various control flags that modify the basic operation of the processor. CR1 is reserved. CR2 contains a value called the Page Fault Linear Address: when a page fault occurs, the address the program attempted to access is stored in the CR2 register. CR3 is used when virtual addressing is enabled, that is, when the PG bit is set in CR0. CR3 enables the processor to translate linear addresses into physical addresses by locating the page directory and page tables for the current task. Typically, the upper 20 bits of CR3 become the page directory base register. CR4 is used in protected mode to control operations such as virtual-8086 support, enabling I/O breakpoints, page size extension, and machine check exceptions.
The Extended Feature Enable Register (EFER) is a model-specific register added in the AMD K6 processor to enable the SYSCALL/SYSRET instructions. This register became architectural in AMD64 and has also been adopted by Intel. CR8 is a new register accessible in 64-bit mode using the REX prefix. CR8 is used to prioritize external interrupts and is referred to as the task-priority register (TPR). The AMD64 architecture allows software to define up to 15 external interrupt-priority classes, numbered from 1 to 15, with priority class 1 being the lowest. CR8 uses the four low-order bits to specify a task priority; the remaining 60 bits are reserved and must be written with zeros. System software can use the TPR to temporarily block low-priority interrupts from interrupting a high-priority task. This is accomplished by loading the TPR with a value corresponding to the highest-priority interrupt that is to be blocked. For example, loading the TPR with a value of 9 blocks all interrupts with a priority class of 9 or less, while loading it with 0 enables all external interrupts. Loading the TPR with 15 disables all external interrupts. The TPR is cleared to 0 on reset.
An operating system is system software that manages computer hardware and software resources and provides common services for computer programs. All computer programs, excluding firmware, require an operating system to function. Operating systems are found on many devices that contain a computer, from cellular phones to supercomputers. The dominant desktop operating system is Microsoft Windows with a market share of around 83.3%; macOS by Apple Inc. is in second place, and the varieties of Linux are in third position. Linux distributions are dominant in the server and supercomputing sectors. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications. A single-tasking system can run only one program at a time, while a multi-tasking system allows more than one program to run concurrently. Multi-tasking may be characterized in preemptive and cooperative types. In preemptive multitasking, the operating system slices the CPU time and dedicates a slot to each of the programs; Unix-like operating systems such as Solaris use preemptive multitasking. Cooperative multitasking is achieved by relying on each process to provide time to the other processes in a defined manner.
16-bit versions of Microsoft Windows used cooperative multi-tasking; 32-bit versions of both Windows NT and Win9x used preemptive multi-tasking. Single-user operating systems have no facilities to distinguish users but may allow multiple programs to run in tandem. A distributed operating system manages a group of distinct computers and makes them appear to be a single computer. The development of networked computers that could be linked and communicate with each other gave rise to distributed computing; distributed computations are carried out on more than one machine. When computers in a group work in cooperation, they form a distributed system. Templating, in which a single virtual machine image is saved and reused for multiple running virtual machines, is used both in virtualization and cloud computing management, and is common in large server warehouses. Embedded operating systems are designed to be used in embedded computer systems. They are designed to operate on small machines like PDAs with less autonomy, and they are able to operate with a limited number of resources. They are very compact and extremely efficient by design; Windows CE and Minix 3 are some examples of embedded operating systems. A real-time operating system is an operating system that guarantees to process events or data by a specific moment in time. A real-time operating system may be single- or multi-tasking, but when multitasking, it may use specialized scheduling algorithms to achieve deterministic behavior. Early computers were built to perform a series of single tasks, like a calculator.
Basic operating system features were developed in the 1950s, such as resident monitor functions that could run different programs in succession to speed up processing.
In computing, a process is an instance of a computer program that is being executed. It contains the program code and its current activity. Depending on the operating system, a process may be made up of multiple threads of execution that execute instructions concurrently. A computer program is a collection of instructions, while a process is the actual execution of those instructions. Several processes may be associated with the same program; for example, opening several instances of the same program often results in more than one process being executed. Multitasking is a method to allow multiple processes to share processors. Each CPU executes a single task at a time; however, multitasking allows each processor to switch between tasks that are being executed without having to wait for each task to finish. Depending on the operating system implementation, switches could be performed when tasks perform input/output operations. A common form of multitasking is time-sharing, a method to allow fast response for interactive user applications. In time-sharing systems, context switches are performed rapidly, which makes it seem like multiple processes are being executed simultaneously on the same processor. This seeming simultaneous execution of multiple processes is called concurrency.
In general, a computer system process consists of the following resources: memory, which includes the code, process-specific data, and a call stack; operating system descriptors of resources that are allocated to the process, such as file descriptors or handles; security attributes, such as the process owner and the process's set of permissions; and processor state, such as the content of registers and physical memory addressing. The state is typically stored in computer registers when the process is executing, and in memory otherwise. The operating system holds most of this information about active processes in data structures called process control blocks. Any subset of the resources, typically at least the processor state, may be associated with each of the process's threads in operating systems that support threads. The operating system keeps its processes separate and allocates the resources they need, so that they are less likely to interfere with each other and cause system failures. The operating system may also provide mechanisms for inter-process communication to enable processes to interact in safe and predictable ways.
In system programming, an interrupt is a signal to the processor emitted by hardware or software indicating an event that needs immediate attention. An interrupt alerts the processor to a high-priority condition requiring the interruption of the current code the processor is executing. The processor responds by suspending its current activities, saving its state, and executing a function called an interrupt handler to deal with the event. This interruption is temporary: after the interrupt handler finishes, the processor resumes its previous activities. There are two types of interrupts, hardware interrupts and software interrupts. Hardware interrupts are used by devices to communicate that they require attention from the operating system; for example, pressing a key on the keyboard or moving the mouse triggers hardware interrupts that cause the processor to read the keystroke or mouse position. Unlike the software type, hardware interrupts are asynchronous and can occur in the middle of instruction execution. The act of initiating a hardware interrupt is referred to as an interrupt request. A software interrupt is caused either by an exceptional condition in the processor itself, or by a special instruction that causes an interrupt when it is executed.
The former is called a trap or exception and is used for errors or events occurring during program execution that are exceptional enough that they cannot be handled within the program itself. For example, an exception will be thrown if the processor's arithmetic logic unit is commanded to divide a number by zero, as this instruction is in error. The operating system will catch this exception and can choose to abort the instruction. Each interrupt has its own interrupt handler. The number of interrupts is limited by the number of interrupt request lines to the processor. Interrupts are a commonly used technique for computer multitasking, especially in real-time computing; such a system is said to be interrupt-driven. Hardware interrupts were introduced as an optimization, eliminating unproductive waiting time in polling loops. They may be implemented in hardware as a distinct system with control lines, or they may be integrated into the memory subsystem. If implemented as part of the memory controller, interrupts are mapped into the system's memory address space.
Interrupts can be categorized into different types. A maskable interrupt is a hardware interrupt that may be ignored by setting a bit mask. A non-maskable interrupt (NMI) is an interrupt that lacks an associated bit mask and so can never be ignored; NMIs are used for the highest-priority tasks such as timers. An inter-processor interrupt is a special case of interrupt that is generated by one processor to interrupt another processor in a multiprocessor system. A software interrupt is an interrupt generated within a processor by executing an instruction; software interrupts are often used to implement system calls because they result in a subroutine call with a CPU ring-level change. A spurious interrupt is a hardware interrupt that is unwanted.
In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system. The implementation of threads and processes differs between operating systems, but in most cases a thread is a component of a process. Multiple threads can exist within one process, executing concurrently and sharing resources such as memory, while different processes do not share these resources; in particular, the threads of a process share its executable code. Systems with a single processor generally implement multithreading by time slicing: the central processing unit switches between different software threads. This context switching generally happens often and rapidly enough that users perceive the threads or tasks as running in parallel. Threads made an early appearance in OS/360 Multiprogramming with a Variable Number of Tasks in 1967, in which context they were called tasks. The term thread has been attributed to Victor A. Vyssotsky. Some threading implementations are called kernel threads, whereas light-weight processes are a specific type of kernel thread that share the same state and information. Furthermore, programs can have user-space threads when threading with timers, signals, or other methods to interrupt their own execution. In computer programming, single-threading is the processing of one command at a time; the opposite of single-threading is multithreading. While it has been suggested that the term single-threading is misleading, the term has been widely accepted within the functional programming community. Multithreading is mainly found in multitasking operating systems. Multithreading is a widespread programming and execution model that allows multiple threads to exist within the context of one process. These threads share the process's resources but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multithreading can also be applied to one process to enable parallel execution on a multiprocessing system. Multithreaded applications have the following advantages. Responsiveness: multithreading can allow an application to remain responsive to input.
In a single-threaded program, if the main execution thread blocks on a long-running task, the entire application can appear to freeze. On the other hand, in most cases multithreading is not the only way to keep a program responsive, with non-blocking I/O and/or Unix signals being available for achieving similar results. Lower resource consumption: using threads, an application can serve multiple clients concurrently using fewer resources than it would need when using multiple copies of itself. For example, the Apache HTTP server uses thread pools: a pool of threads for listening to incoming requests and a pool for processing them. GPU computing environments like CUDA and OpenCL use the multithreading model, where dozens to hundreds of threads run in parallel across the data on a large number of cores. Multithreading also has drawbacks. Synchronization: since threads share the same address space, the programmer must be careful to avoid race conditions. In order for data to be correctly manipulated, threads will often need to rendezvous in time in order to process the data in the correct order. Threads may also require mutually exclusive operations in order to prevent common data from being simultaneously modified, or read while in the process of being modified. Careless use of such primitives can lead to deadlocks.
Array data type
In computer science, an array type is a data type that is meant to describe a collection of elements, each selected by one or more indices that can be computed at run time by the program. Such a collection is usually called an array variable or array value. By analogy with the mathematical concepts of vector and matrix, array types with one and two indices are often called vector type and matrix type, respectively. For example, in the Pascal programming language, the declaration type MyTable = array [1..4, 1..2] of integer defines a new array type called MyTable, and the declaration var A: MyTable defines a variable A of that type, which is an aggregate of eight elements, each being an integer variable identified by two indices. In the Pascal program, those elements are denoted A[1,1], A[1,2], A[2,1], …, A[4,2]. Special array types are often defined by the language's standard libraries. Dynamic lists are more common and easier to implement than dynamic arrays. Array types are distinguished from record types mainly because they allow the element indices to be computed at run time. Among other things, this feature allows a single iterative statement to process arbitrarily many elements of an array variable.
Depending on the language, array types may overlap other data types that describe aggregates of values, such as lists. Array types are often implemented by array data structures, but sometimes by other means, such as hash tables, linked lists, or search trees. Heinz Rutishauser's programming language Superplan included multi-dimensional arrays; Rutishauser, however, although describing how a compiler for his language should be built, did not implement one. Assembly languages and low-level languages like BCPL generally have no support for arrays. Abstractly, an array supports two operations: get(A, I), which returns the element of array state A at index tuple I, and set(A, I, V), which yields a new array state with value V stored at I. These operations are required to satisfy the axioms get(set(A, I, V), I) = V and get(set(A, I, V), J) = get(A, J) if I ≠ J, for any array state A, any value V, and any index tuples I and J. The first axiom means that each element behaves like a variable; the second means that elements with distinct indices behave as disjoint variables. These axioms do not place any constraints on the set of valid index tuples I, so this abstract model can be used for triangular matrices and other oddly-shaped arrays. Most languages, however, restrict each index to an interval of integers.
In some compiled languages, in fact, the index ranges may have to be known at compile time. On the other hand, some programming languages provide more liberal array types whose index values cannot be restricted to an interval, much less a fixed interval. These languages usually allow arbitrary new elements to be created at any time, and this choice precludes the implementation of array types as array data structures; that is, those languages use array-like syntax to implement a more general associative array semantics. The number of indices needed to specify an element is called the dimension of the array type. Many languages support only one-dimensional arrays.