1.
Computing
–
Computing is any goal-oriented activity requiring, benefiting from, or creating a mathematical sequence of steps known as an algorithm, e.g. through computers. The field of computing includes computer engineering, software engineering, computer science, and information systems. The ACM Computing Curricula 2005 defined computing as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers." For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult. Because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The fundamental question underlying all computing is "What can be automated?"

The term computing is also synonymous with counting and calculating. In earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. Computing is intimately tied to the representation of numbers, but long before abstractions like number arose, there were mathematical concepts to serve the purposes of civilization; these concepts include one-to-one correspondence and comparison to a standard. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC. Its original style of usage was by lines drawn in sand with pebbles; abaci of a more modern design are still used as calculation tools today. This was the first known computer and the most advanced system of calculation known at the time, preceding Greek methods by 2,000 years. The first recorded idea of using electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations.

A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has an executable form that the computer can use directly to execute the instructions; the same program in its source code form enables a programmer to study and develop its algorithm. Because instructions must be carried out on different types of computers, the same source code is translated into the machine instructions of each CPU type. The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer; they trigger sequences of simple actions on the executing machine, and those actions produce effects according to the semantics of the instructions.

Computer software, or just software, is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more programs and data held in the storage of the computer for some purpose. In other words, software is a set of programs, procedures, algorithms, and their documentation. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software.
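To make "a mathematical sequence of steps known as an algorithm" concrete, here is a minimal illustrative C sketch of Euclid's algorithm for the greatest common divisor, one of the oldest algorithms known:

```c
#include <stdio.h>

/* Euclid's algorithm: a classic example of a mathematical
 * sequence of steps (an algorithm) that a computer can execute. */
static unsigned gcd(unsigned a, unsigned b)
{
    while (b != 0) {          /* repeat until the remainder is zero */
        unsigned r = a % b;   /* step: compute the remainder */
        a = b;                /* step: shift the pair (a, b) -> (b, r) */
        b = r;
    }
    return a;                 /* the last non-zero remainder is the gcd */
}

int main(void)
{
    printf("gcd(48, 18) = %u\n", gcd(48, 18)); /* prints 6 */
    return 0;
}
```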
2.
Virtual address space
–
In computing, a virtual address space (VAS) or address space is the set of ranges of virtual addresses that an operating system makes available to a process. This provides several benefits, one of which is security through process isolation, assuming each process is given a separate address space. In the following description, the terminology used will be particular to the Windows NT operating system, but the concepts are applicable to other virtual memory operating systems.

When a new application on a 32-bit OS is executed, the process has a 4 GiB VAS. Initially, none of its addresses have values; using or setting values in such an unmapped VAS would cause a memory exception.

            0                                          4 GiB
VAS         |----------------------------------------------|

Then the application's executable file is mapped into the VAS. Addresses in the process VAS are mapped to bytes in the exe file; the OS manages the mapping.

            0                                          4 GiB
VAS         |---vvvvvvv------------------------------------|
mapping         |-----|
file bytes      app.exe

The v's are values from bytes in the mapped file. The only way the process can use or set values in its VAS is to ask the OS to map them to bytes from a file. A common way to use VAS memory in this way is to map it to the page file. On 32-bit Windows, by default only 2 GB of the VAS are made available to a process for its own use; the other 2 GB are used by the operating system (the split can be changed with the /3GB switch in the boot.ini file). On 64-bit Microsoft Windows, in a process running an executable that was linked with /LARGEADDRESSAWARE:NO, the operating system limits the user-mode portion of the virtual address space to 2 GiB, and this applies to both 32- and 64-bit executables; executables linked with /LARGEADDRESSAWARE:YES have access to a larger user address space (up to 128 TiB for 64-bit executables on Windows 8.1 and later).

Allocating memory via C's malloc establishes the page file as the store for any new virtual address space. However, a process can also explicitly map file bytes. For x86 CPUs, 32-bit Linux allows splitting the user and kernel address ranges in different ways: 3G/1G user/kernel (the default), 1G/3G user/kernel, or 2G/2G user/kernel.
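The description above uses Windows NT terminology; as a rough POSIX counterpart, the following minimal C sketch asks the OS to map a file's bytes into the process's virtual address space with mmap (assuming a Unix-like system; /bin/ls is used only as a convenient non-empty file):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/bin/ls", O_RDONLY);   /* any readable, non-empty file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* Ask the OS to map the file's bytes into this process's VAS.
     * The returned pointer is a range of virtual addresses whose
     * values are backed by the file, as in the diagrams above. */
    unsigned char *p = mmap(NULL, st.st_size, PROT_READ,
                            MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("first byte of mapped file: 0x%02x\n", p[0]);

    munmap(p, st.st_size);   /* remove the mapping from the VAS */
    close(fd);
    return 0;
}
```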
3.
Executable
–
An executable file causes a computer to perform indicated tasks according to encoded instructions, as opposed to a data file that must be parsed by a program to be meaningful. These instructions are traditionally machine code instructions for a physical CPU. "Executable code" is also used to describe sequences of instructions that do not necessarily constitute an executable file, for example sections within a program. Several object files are linked to create the executable. Object files, executable or not, are typically in a container format, such as the Executable and Linkable Format (ELF). This structures the generated code, for example dividing it into sections such as .text (code) and .data (initialized data).

In order to be executed by the system, a file must conform to the system's Application Binary Interface (ABI). For example, in ELF, the entry point is specified in the header in the e_entry field. In GCC this field is set by the linker based on the _start symbol. For C, this is done by linking in the crt0 object, which contains the startup code that eventually calls main. Executable files thus normally contain significant additional machine code beyond that directly generated from the specific source code. In some cases it is desirable to omit this, for example for embedded systems development, or simply to understand how compilation, linking, and loading work.
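As a minimal illustration of the e_entry field mentioned above, the following C sketch reads an ELF header and prints the entry point; it assumes a Linux system providing glibc's <elf.h> and a 64-bit little-endian ELF file such as /bin/ls:

```c
#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/bin/ls";
    FILE *f = fopen(path, "rb");
    if (!f) { perror("fopen"); return 1; }

    Elf64_Ehdr eh;                       /* the ELF file header */
    if (fread(&eh, sizeof eh, 1, f) != 1) { perror("fread"); return 1; }

    /* The first bytes must be the ELF magic: 0x7f 'E' 'L' 'F'. */
    if (eh.e_ident[EI_MAG0] != ELFMAG0 || eh.e_ident[EI_MAG1] != ELFMAG1 ||
        eh.e_ident[EI_MAG2] != ELFMAG2 || eh.e_ident[EI_MAG3] != ELFMAG3) {
        fprintf(stderr, "%s: not an ELF file\n", path);
        return 1;
    }

    /* e_entry holds the virtual address where execution begins. */
    printf("%s: e_entry = 0x%llx\n", path, (unsigned long long)eh.e_entry);
    fclose(f);
    return 0;
}
```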
4.
Instruction set architecture
–
An ISA includes a specification of the set of opcodes (machine language), and the native commands implemented by a particular processor. An instruction set architecture is distinguished from a microarchitecture, which is the set of processor design techniques used, in a particular processor, to implement the instruction set. Processors with different microarchitectures can share a common instruction set; for example, the Intel Pentium and the AMD Athlon implement nearly identical versions of the x86 instruction set, but have radically different internal designs.

The concept of an architecture, distinct from the design of a specific machine, was developed by Fred Brooks at IBM during the design phase of System/360. Prior to NPL (System/360), the company's computer designers had been free to honor cost objectives not only by selecting technologies but also by fashioning functional and architectural refinements. The SPREAD compatibility objective, in contrast, postulated a single architecture for a series of five processors spanning a wide range of cost and performance. Virtual machines that implement an ISA in software execute less frequently used code paths by interpretation; Transmeta implemented the x86 instruction set atop VLIW processors in this fashion.

A complex instruction set computer (CISC) has many specialized instructions, some of which may be only rarely used in practical programs. Theoretically important types are the minimal instruction set computer and the one instruction set computer. Another variation is the very long instruction word (VLIW), where the processor receives many instructions encoded and retrieved in one instruction word.

Machine language is built up from discrete statements or instructions. Examples of operations common to many instruction sets include: setting a register to a constant value; copying data from a memory location to a register, or vice versa, used to store the contents of a register or the result of a computation (often called load and store operations); reading and writing data from hardware devices; adding, subtracting, multiplying, or dividing the values of two registers, placing the result in a register, possibly setting one or more condition codes in a status register; incrementing or decrementing, in some ISAs, saving operand fetch in trivial cases; performing bitwise operations, e.g. taking the conjunction and disjunction of corresponding bits in a pair of registers, or taking the negation of each bit in a register; floating-point instructions for arithmetic on floating-point numbers; branching to another location in the program and executing instructions there; conditionally branching to another location if a certain condition holds; calling another block of code, while saving the location of the next instruction as a point to return to; and loading/storing data to and from a coprocessor, or exchanging it with CPU registers. Processors may also include complex instructions in their instruction set.
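The following C sketch interprets a toy, invented instruction set (not any real ISA) to illustrate how machine language is built from discrete, numerically encoded instructions executed in a fetch-decode-execute loop:

```c
#include <stdio.h>

/* A toy instruction set with four opcodes, interpreted in software.
 * Purely illustrative: each instruction is one "word" encoding an
 * opcode, up to two register operands, and an immediate value. */
enum { OP_LOADI, OP_ADD, OP_PRINT, OP_HALT };

typedef struct { int op, dst, src, imm; } Insn;

int main(void)
{
    int reg[4] = {0};                 /* four general-purpose registers */
    const Insn program[] = {
        { OP_LOADI, 0, 0, 6 },        /* r0 <- 6  (set register to constant) */
        { OP_LOADI, 1, 0, 7 },        /* r1 <- 7 */
        { OP_ADD,   0, 1, 0 },        /* r0 <- r0 + r1  (arithmetic) */
        { OP_PRINT, 0, 0, 0 },        /* output r0 (I/O-style instruction) */
        { OP_HALT,  0, 0, 0 },
    };

    for (int pc = 0; ; pc++) {        /* fetch-decode-execute loop */
        const Insn *i = &program[pc];
        switch (i->op) {
        case OP_LOADI: reg[i->dst] = i->imm;                       break;
        case OP_ADD:   reg[i->dst] += reg[i->src];                 break;
        case OP_PRINT: printf("r%d = %d\n", i->dst, reg[i->dst]);  break;
        case OP_HALT:  return 0;
        }
    }
}
```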
5.
Memory management
–
Memory management is a form of resource management applied to computer memory. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free them for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time. Several methods have been devised to increase the effectiveness of memory management; the quality of the virtual memory manager can have an extensive effect on overall system performance. Modern general-purpose computer systems manage memory at two levels: operating system level and application level. Application-level memory management is generally categorized as either automatic memory management, usually involving garbage collection, or manual memory management.

The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap or free store. At any given time, some parts of the heap are in use, while some are free and thus available for future allocations. The allocator's metadata can also inflate the size of small allocations; this is often managed by chunking. The memory management system must track outstanding allocations to ensure that they do not overlap and that no memory is ever lost as a memory leak. The specific dynamic memory allocation algorithm implemented can impact performance significantly. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators; the lowest average instruction path length required to allocate a single memory slot was 52. Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually through a pointer reference.

Fixed-size-blocks allocation, also called memory pool allocation, uses a free list of fixed-size blocks of memory. This works well for simple embedded systems where no large objects need to be allocated. Due to the significantly reduced overhead, this method can substantially improve performance for objects that need frequent allocation and de-allocation, and it is often used in video games.

In buddy allocation, all blocks of a particular size are kept in a linked list or tree. If a smaller size is requested than is available, the smallest available size is selected and split; one of the resulting parts is selected, and the process repeats until the request is complete. When a block is allocated, the allocator will start with the smallest sufficiently large block, to avoid needlessly breaking blocks. When a block is freed, it is compared to its buddy; if both are free, they are combined and placed in the correspondingly larger-sized buddy-block list.

Virtual memory is a method of decoupling the memory organization from the physical hardware; the applications operate on memory via virtual addresses. Each time an attempt to access stored data is made, the virtual memory system translates the virtual address to a physical address. In this way, the addition of virtual memory enables granular control over memory systems and methods of access. In virtual memory systems, the operating system limits how a process can access the memory. Even though the memory allocated for specific processes is normally isolated, processes are sometimes allowed to share portions of it; shared memory is one of the fastest techniques for inter-process communication.
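A minimal sketch of the fixed-size-blocks (pool) scheme described above, with illustrative block counts and sizes: free blocks are threaded onto a singly linked free list, so allocation and deallocation are constant-time and never search or split.

```c
#include <stdio.h>

#define BLOCK_SIZE 64      /* illustrative payload size */
#define NUM_BLOCKS 128     /* illustrative pool capacity */

/* While a block is free, its storage holds the free-list link. */
typedef union Block {
    union Block *next;
    unsigned char payload[BLOCK_SIZE];
} Block;

static Block pool[NUM_BLOCKS];
static Block *free_list;

static void pool_init(void)
{
    for (int i = 0; i < NUM_BLOCKS - 1; i++)   /* thread every block */
        pool[i].next = &pool[i + 1];           /* onto the free list */
    pool[NUM_BLOCKS - 1].next = NULL;
    free_list = &pool[0];
}

static void *pool_alloc(void)
{
    if (!free_list) return NULL;   /* pool exhausted */
    Block *b = free_list;          /* pop the head of the free list */
    free_list = b->next;
    return b;
}

static void pool_free(void *p)
{
    Block *b = p;                  /* push the block back on the list */
    b->next = free_list;
    free_list = b;
}

int main(void)
{
    pool_init();
    void *a = pool_alloc(), *b = pool_alloc();
    printf("a=%p b=%p\n", a, b);
    pool_free(a);
    pool_free(b);
    return 0;
}
```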
6.
Computer data storage
–
Computer data storage, often called storage or memory, is a technology consisting of computer components and recording media used to retain digital data. It is a core function and fundamental component of computers. The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but larger and cheaper options farther away.

In the von Neumann architecture, the CPU consists of two parts: the control unit and the arithmetic logic unit. The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result; it would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data; most modern computers are von Neumann machines.

A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes, with one byte per character.

Data is encoded by assigning a bit pattern to each character, digit, or multimedia object. By adding bits to each encoded unit, redundancy allows the computer both to detect errors in coded data and to correct them based on mathematical algorithms. A random bit flip is typically corrected upon detection. The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection; a detected error is then retried. Data compression methods allow, in many cases, representing a string of bits by a shorter bit string and reconstructing the original string when needed. This utilizes substantially less storage for many types of data, at the cost of more computation. Analysis of the trade-off between the storage cost saving and the costs of related computations and possible delays in data availability is done before deciding whether to keep certain data compressed or not. For security reasons, certain types of data may be encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots.

Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency from the CPU. This traditional division of storage into primary, secondary, tertiary and off-line storage is also guided by cost per bit. In contemporary usage, memory is usually semiconductor storage, i.e. read-write random-access memory, typically DRAM, or other forms of fast but temporary storage.
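To illustrate error detection by cyclic redundancy check, here is a self-contained C sketch of the widely used CRC-32 polynomial (the message text and the single-bit corruption are arbitrary examples):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC-32 (the polynomial used by Ethernet, zip, PNG, ...).
 * Turns a string of bits into a short checksum whose mismatch
 * reveals accidental corruption of the data. */
static uint32_t crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)   /* one bit at a time */
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1)));
    }
    return ~crc;
}

int main(void)
{
    const char *msg = "To be, or not to be";
    uint32_t ok = crc32((const uint8_t *)msg, strlen(msg));

    char corrupted[32];
    strcpy(corrupted, msg);
    corrupted[3] ^= 0x01;                   /* flip a single bit */
    uint32_t bad = crc32((const uint8_t *)corrupted, strlen(corrupted));

    printf("crc(original)  = 0x%08X\n", ok);
    printf("crc(corrupted) = 0x%08X\n", bad);  /* differs: error detected */
    return 0;
}
```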
7.
Embedded system
–
An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device, often including hardware and mechanical parts. Embedded systems control many devices in use today; ninety-eight percent of all microprocessors are manufactured as components of embedded systems. Examples of properties of typical embedded computers, when compared with general-purpose counterparts, are low power consumption, small size, rugged operating ranges, and low per-unit cost. This comes at the price of limited processing resources, which make them more difficult to program. For example, intelligent techniques can be designed to manage the power consumption of embedded systems.

Modern embedded systems are often based on microcontrollers, but ordinary microprocessors are also common. In either case, the processors used may range from general-purpose types to those specialised in a certain class of computations. A common standard class of dedicated processors is the digital signal processor (DSP). Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce the size and cost of the product and increase its reliability. Some embedded systems are mass-produced, benefiting from economies of scale. Complexity varies from low, with a single microcontroller chip, to very high, with multiple units, peripherals and networks mounted inside a large chassis or enclosure.

One of the very first recognizably modern embedded systems was the Apollo Guidance Computer. An early mass-produced embedded system was the Autonetics D-17 guidance computer for the Minuteman missile, released in 1961. When the Minuteman II went into production in 1966, the D-17 was replaced with a new computer that represented the first high-volume use of integrated circuits. Since these early applications in the 1960s, embedded systems have come down in price, and there has been a dramatic rise in processing power. An early microprocessor, for example the Intel 4004, was designed for calculators and other small systems but still required external memory and support chips. By the early 1980s, memory, input and output system components had been integrated into the same chip as the processor, forming a microcontroller. Microcontrollers find applications where a general-purpose computer would be too costly; a comparatively low-cost microcontroller may be programmed to fulfill the same role as a large number of separate components.
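Firmware for such systems commonly drives peripherals through memory-mapped registers. In the C sketch below, plain variables stand in for what on real hardware would be fixed register addresses taken from the microcontroller's data sheet (the register names and pin number are invented for illustration):

```c
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for memory-mapped GPIO registers. On a real MCU these
 * would be fixed addresses from the data sheet, and 'volatile'
 * forces the compiler to perform every hardware access. */
static volatile uint32_t gpio_dir, gpio_out;   /* hypothetical registers */
#define LED_PIN (1u << 5)                      /* hypothetical pin number */

int main(void)
{
    gpio_dir |= LED_PIN;                       /* configure pin as output */
    for (int i = 0; i < 4; i++) {              /* firmware would loop forever */
        gpio_out ^= LED_PIN;                   /* toggle the LED bit */
        printf("LED is %s\n", (gpio_out & LED_PIN) ? "on" : "off");
    }
    return 0;
}
```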
8.
Read-only memory
–
Read-only memory (ROM) is a type of non-volatile memory used in computers and other electronic devices. Data stored in ROM can only be modified slowly, with difficulty, or not at all. Strictly speaking, read-only memory refers to memory that is hard-wired, such as a diode matrix and the later mask ROM, which cannot be changed after manufacture. Although discrete circuits can be altered in principle, integrated circuits cannot, and that such memory can never be changed is a disadvantage in many applications, as bugs and security issues cannot be fixed and new features cannot be added. More recently, ROM has come to include memory that is read-only in normal operation but can still be reprogrammed in some way.

The simplest type of solid-state ROM is as old as the semiconductor technology itself: combinational logic gates can be joined manually to map n-bit address input onto arbitrary values of m-bit data output. With the invention of the integrated circuit came mask ROM, in which the data is physically encoded in the circuit, so it can only be programmed during fabrication. This leads to a number of disadvantages: it is only economical to buy mask ROM in large quantities; the turnaround time between completing the design for a mask ROM and receiving the finished product is long; for the same reason, mask ROM is impractical for R&D work, since designers frequently need to modify the contents of memory as they refine a design; and if a product is shipped with faulty mask ROM, the only way to fix it is to recall the product.

Subsequent developments have addressed these shortcomings. PROM, invented in 1956, allowed users to program its contents exactly once by physically altering its structure with the application of high-voltage pulses. This addressed problems 1 and 2 above, since a company can order a large batch of fresh PROM chips and program them as needed. The 1971 invention of EPROM essentially solved problem 3, since EPROM can be reset to its unprogrammed state by exposure to strong ultraviolet light. All of these technologies improved the flexibility of ROM, but at a significant cost-per-chip. Rewriteable technologies were envisioned as replacements for mask ROM. The most recent development is NAND flash, also invented at Toshiba. As of 2007, NAND has partially achieved this goal by offering throughput comparable to hard disks, higher tolerance of physical shock, extreme miniaturization, and much lower power consumption.

Every stored-program computer may use a form of non-volatile storage to store the initial program that runs when the computer is powered on or otherwise begins execution. Likewise, every non-trivial computer needs some form of mutable memory to record changes in its state as it executes. Forms of read-only memory were employed as non-volatile storage for programs in most early stored-program computers; because ROM needs only a mechanism to read stored values, not to change them, it could consequently be implemented at a lower cost-per-bit than RAM for many years. Most home computers of the 1980s stored a BASIC interpreter or operating system in ROM, as other forms of storage such as magnetic disk drives were too costly.
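Conceptually, a ROM is a fixed mapping from an n-bit address to an m-bit value, exactly like a const lookup table. The C sketch below uses a 7-segment display decoder as a hypothetical example of such a mapping; on many embedded toolchains, const tables like this are in fact placed in ROM or flash:

```c
#include <stdint.h>
#include <stdio.h>

/* A fixed 4-bit-address -> 8-bit-data mapping, the software analogue
 * of a small ROM. Each entry is the segment pattern for one hex digit
 * on a common-cathode 7-segment display. */
static const uint8_t seven_seg_rom[16] = {
    0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07,   /* 0-7 */
    0x7F, 0x6F, 0x77, 0x7C, 0x39, 0x5E, 0x79, 0x71,   /* 8-F */
};

int main(void)
{
    for (unsigned addr = 0; addr < 16; addr++)   /* read every address */
        printf("rom[0x%X] = 0x%02X\n", addr, seven_seg_rom[addr]);
    return 0;
}
```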
9.
Computer architecture
–
In computer engineering, computer architecture is a set of rules and methods that describe the functionality, organization, and implementation of computer systems. Some definitions of architecture define it as describing the capabilities and programming model of a computer but not a particular implementation; in other definitions, computer architecture involves instruction set architecture design, microarchitecture design, logic design, and implementation.

The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace, describing the analytical engine. Later, Lyle R. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos National Laboratory, and Brooks went on to help develop the IBM System/360 line of computers. Later still, computer users came to use the term in many less explicit ways. The earliest computer architectures were designed on paper and then directly built into the final hardware form.

The discipline of computer architecture has three main subcategories. Instruction set architecture (ISA): the ISA defines the machine code that a processor reads and acts upon, as well as the word size, memory address modes, and processor registers. Microarchitecture, or computer organization: describes how a particular processor will implement the ISA; the size of a computer's CPU cache, for instance, is an issue that generally has nothing to do with the ISA. System design: includes all of the other hardware components within a computing system, such as data processing other than the CPU and other issues such as virtualization and multiprocessing.

There are other types of computer architecture. For example, the C, C++, or Java standards define different programmer-visible macroarchitectures. UISA (microcode instruction set architecture): a group of machines with different hardware-level microarchitectures may share a common microcode architecture. Pin architecture: the hardware functions that a microprocessor should provide to a hardware platform, e.g. the x86 pins A20M, FERR/IGNNE or FLUSH, as well as messages that the processor should emit so that external caches can be invalidated. Pin architecture functions are more flexible than ISA functions because external hardware can adapt to new encodings, or change from a pin to a message. The term "architecture" fits, because the functions must be provided for compatible systems, even if the detailed method changes.

The purpose is to design a computer that maximizes performance while keeping power consumption in check, keeps costs low relative to the amount of expected performance, and is also very reliable. For this, many aspects are to be considered, including instruction set design, functional organization, and logic design; the implementation involves integrated circuit design, packaging, power, and cooling. Optimization of the design requires familiarity with topics ranging from compilers and operating systems to logic design and packaging.

An instruction set architecture is the interface between the computer's software and hardware, and can also be viewed as the programmer's view of the machine. Computers do not understand high-level languages such as Java or C++; a processor only understands instructions encoded in some numerical fashion, usually as binary numbers. Software tools, such as compilers, translate those high-level languages into instructions that the processor can understand. Besides instructions, the ISA defines items in the computer that are available to a program, e.g. data types, registers, addressing modes, and memory.
10.
Data segment
–
In computing, a data segment (often denoted .data) is a portion of an object file, or the corresponding address space of a program, that contains initialized static variables. The size of this segment is determined by the size of the values in the program's source code, and does not change at run time. The data segment is read-write, since the values of variables can be altered at run time; uninitialized data, both variables and constants, is instead in the BSS segment.

The Intel 8086 family of CPUs provided four segments: the code segment, the data segment, the stack segment and the extra segment. This allowed a 16-bit address register, which would normally provide access to 64 KiB of memory space, to address a larger memory space. A computer program's memory can be largely categorized into two sections: read-only and read-write. This distinction grew from early systems holding their main program in read-only memory such as mask ROM, PROM or EEPROM. As systems became more complex and programs were loaded from other media into RAM, instead of executing from ROM, the idea that some portions of the memory should not be modified was retained. These became the .text and .rodata segments of the program.

The .data segment contains any global or static variables which have a pre-defined value and can be modified; that is, any variables that are not defined within a function, or that are defined in a function but declared static so they retain their address across subsequent calls. The BSS segment, also known as uninitialized data, is usually adjacent to the data segment. The BSS segment contains all global variables and static variables that are initialized to zero or do not have explicit initialization in source code. For instance, a variable declared as static int i; would be contained in the BSS segment.

The heap area commonly begins at the end of the .bss and .data segments and grows towards larger addresses from there. The heap area is managed by malloc, calloc, realloc, and free, and is shared by all threads, shared libraries, and dynamically loaded modules in a process.

The stack area contains the program stack, a LIFO structure. A stack pointer register tracks the top of the stack; it is adjusted each time a value is pushed onto the stack. The set of values pushed for one function call is termed a stack frame; a stack frame consists at minimum of a return address. Automatic variables are also allocated on the stack. The stack area traditionally adjoined the heap area and they grew towards each other; with large address spaces and virtual memory techniques they tend to be placed more freely, but they still typically grow in a converging direction. On the standard PC x86 architecture the stack grows toward address zero, meaning that more recent items, deeper in the call chain, are at numerically lower addresses. On some other architectures it grows in the opposite direction.

Some interpreted languages offer a similar facility to the data segment, notably Perl and Ruby. In these languages, including the line __DATA__ or __END__ marks the end of the code segment and the start of the data segment.
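As a sketch of these conventions, the C program below marks where a typical toolchain places each kind of object; the comments give the conventional section names, though exact placement depends on the compiler and platform ABI:

```c
#include <stdio.h>
#include <stdlib.h>

int initialized_global = 42;            /* .data: initialized, writable   */
int uninitialized_global;               /* .bss:  zero-initialized        */
const char ro_message[] = "read-only";  /* .rodata: constant data         */

int main(void)                          /* code itself lives in .text     */
{
    static int call_count = 1;          /* .data: static, has initializer */
    int local = 7;                      /* stack: automatic variable      */
    int *dynamic = malloc(sizeof *dynamic);  /* heap: via malloc          */
    *dynamic = 99;

    printf("%d %d %s %d %d %d\n", initialized_global, uninitialized_global,
           ro_message, call_count, local, *dynamic);
    free(dynamic);
    return 0;
}
```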
11.
Institute of Electrical and Electronics Engineers
–
The Institute of Electrical and Electronics Engineers (IEEE) is a professional association with its corporate office in New York City and its operations center in Piscataway, New Jersey. It was formed in 1963 from the amalgamation of the American Institute of Electrical Engineers (AIEE) and the Institute of Radio Engineers (IRE). Today, it is the world's largest association of technical professionals, with more than 400,000 members in chapters around the world. Its objectives are the educational and technical advancement of electrical and electronic engineering, telecommunications, computer engineering and allied disciplines.

IEEE stands for the Institute of Electrical and Electronics Engineers, and the association is chartered under this full legal name. IEEE's membership has long been composed of engineers, scientists, and allied professionals. For this reason the organization no longer goes by the full name, except on legal business documents, and is referred to simply as IEEE. The IEEE is dedicated to advancing technological innovation and excellence. It has about 430,000 members in about 160 countries, slightly less than half of whom reside in the United States.

The major interests of the AIEE were wire communications and light and power systems. The IRE concerned mostly radio engineering, and was formed from two smaller organizations, the Society of Wireless and Telegraph Engineers and the Wireless Institute. After World War II, the two became increasingly competitive, and in 1961 the leadership of both the IRE and the AIEE resolved to consolidate the two organizations. The two organizations merged as the IEEE on January 1, 1963. The IEEE is incorporated under the Not-for-Profit Corporation Law of the state of New York.

The IEEE serves as a major publisher of scientific journals and an organizer of conferences and workshops. IEEE also develops and participates in educational activities such as accreditation of electrical engineering programs in institutes of higher learning. The IEEE logo is a design which illustrates the right-hand grip rule embedded in Benjamin Franklin's kite. IEEE has a dual complementary regional and technical structure, with organizational units based on geography and technical focus. It manages a separate organizational unit (IEEE-USA) which recommends policies and implements programs specifically intended to benefit the members, the profession and the public in the United States. The IEEE includes 39 technical societies, organized around specialized technical fields. The IEEE Standards Association is in charge of the standardization activities of the IEEE.

The IEEE History Center became a partner organization to the Engineering and Technology History Wiki (ETHW). The new ETHW is an effort by various engineering societies to serve as a formal repository of topic articles, oral histories, first-hand histories, and landmarks and milestones. The IEEE History Center is annexed to Stevens Institute of Technology in Hoboken, NJ. In 2016, the IEEE acquired GlobalSpec, adding the for-profit provision of engineering data to its organizational portfolio.
12.
Computer programming
–
Computer programming is a process that leads from an original formulation of a computing problem to executable computer programs. Source code is written in one or more programming languages. The purpose of programming is to find a sequence of instructions that will automate performing a specific task or solving a given problem. The process of programming thus often requires expertise in many different subjects, including knowledge of the application domain, specialized algorithms, and formal logic. Related tasks include testing, debugging, and maintaining the source code, and implementation of the build system.

Software engineering combines engineering techniques with software development practices; within software engineering, programming is regarded as one phase in a software development process. There is an ongoing debate on the extent to which the writing of programs is an art form, a craft, or an engineering discipline. In general, good programming is considered to be the measured application of all three, with the goal of producing an efficient and evolvable software solution. Because the discipline covers many areas, which may or may not include critical applications, in most cases the discipline is self-governed by the entities which require the programming, and sometimes very strict environments are defined.

Another ongoing debate is the extent to which the language used in writing computer programs affects the form that the final program takes. Different language patterns yield different patterns of thought. This idea challenges the possibility of representing the world perfectly with language, because it acknowledges that the mechanisms of any language condition the thoughts of its speaker community.

In the 1880s, Herman Hollerith invented the concept of storing data in machine-readable form. However, it was with the concept of the stored-program computer, introduced in 1949, that both programs and data came to be stored and manipulated in the same way in computer memory. Machine code was the language of early programs, written in the instruction set of the particular machine. Assembly languages were developed that let the programmer specify each instruction in a text format, with abbreviations for each operation code. However, because an assembly language is little more than a different notation for a machine language, any two machines with different instruction sets also have different assembly languages. High-level languages allow the programmer to write programs in terms that are more abstract; they harness the power of computers to make programming easier by allowing programmers to specify calculations by entering a formula directly. Programs were mostly still entered using punched cards or paper tape; see computer programming in the punch card era. By the late 1960s, data storage devices and computer terminals became inexpensive enough that programs could be created by typing directly into the computers, and text editors were developed that allowed changes and corrections to be made much more easily than with punched cards. Whatever the approach to development may be, the final program must satisfy some fundamental properties.