1.
Application software
–
An application program is a computer program designed to perform a group of coordinated functions, tasks, or activities for the benefit of the user. Examples of an application include a word processor, a spreadsheet, an accounting application, a web browser, a media player, or an aeronautical flight simulator. The collective noun application software refers to all applications collectively, and contrasts with system software, which is mainly involved with running the computer. Applications may be bundled with the computer and its system software or published separately. Apps built for mobile platforms are called mobile apps. In information technology, an application is a computer program designed to help people perform an activity; an application thus differs from a system program or a utility. Depending on the activity for which it was designed, an application can manipulate text, numbers, or graphics. Some application packages focus on a single task, such as word processing; others, called integrated software, include several applications. User-written software tailors systems to meet the user's specific needs; it includes templates, word processor macros, scientific simulations, and graphics. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is. The delineation between system software, such as operating systems, and application software is not exact, however, and is occasionally the object of controversy; the GNU/Linux naming controversy is, in part, an example of this. The above definitions may exclude some applications that exist on some computers in large organizations; for an alternative definition of an app, see Application Portfolio Management. The word application, once used as an adjective, is not restricted to the "of or pertaining to application software" meaning. Sometimes a new and popular application arises which only runs on one platform; this is called a killer application or killer app.
There are many different ways to divide up different types of application software. Web apps have greatly increased in popularity for some uses, but the advantages of applications make them unlikely to disappear soon, if ever; furthermore, the two can be complementary, and even integrated. Application software can also be seen as being either horizontal or vertical. Horizontal applications are more popular and widespread because they are general purpose; vertical applications are niche products, designed for a particular type of industry or business, or a department within an organization. Integrated suites of software will try to handle every aspect possible of, for example, manufacturing or banking systems, or accounting.
2.
80-bit floating point format
–
Extended precision refers to floating-point number formats that provide greater precision than the basic floating-point formats. Extended precision formats support a basic format by minimizing roundoff and overflow errors in intermediate values of expressions on the base format. In contrast to extended precision, arbitrary-precision arithmetic refers to implementations of much larger numeric types using special software. The IBM 1130 offered two floating-point formats: a 32-bit standard precision format and a 40-bit extended precision format. The standard precision format contained a 24-bit two's complement significand, while extended precision used a 32-bit two's complement significand; the latter format could make full use of the CPU's 32-bit integer operations. The characteristic in both formats was an 8-bit field containing the power of two biased by 128. Floating-point arithmetic operations were performed by software, and double precision was not supported at all. The extended format occupied three 16-bit words, with the extra space simply ignored. The IBM System/360 supports a 32-bit short floating-point format and a 64-bit long floating-point format. The 360/85 and follow-on System/370 added support for a 128-bit extended format; these formats are still supported in the current design, where they are now called the hexadecimal floating-point formats. The IEEE 754 floating-point standard recommends that implementations provide extended precision formats; the standard specifies the minimum requirements for an extended format but does not specify an encoding. The encoding is the implementor's choice. The IA-32, x86-64, and Itanium processors support an 80-bit double extended precision format with a 64-bit significand.
The Intel 8087 math coprocessor was the first x86 device which supported floating-point arithmetic in hardware. It was designed to support a 32-bit single precision format and a 64-bit double precision format for encoding and interchanging floating-point numbers. To mitigate roundoff and overflow issues in intermediate computations, the internal registers in the 8087 were designed to hold intermediate results in an 80-bit extended precision format, and the floating-point unit on all subsequent x86 processors has supported this format. As a result, software can be developed which takes advantage of the higher precision provided by this format; that kind of gradual evolution towards wider precision was already in view when IEEE Standard 754 for Floating-Point Arithmetic was framed. The Motorola 6888x math coprocessors and the Motorola 68040 and 68060 processors support this same 64-bit significand extended precision type; the follow-on ColdFire processors do not support this 96-bit extended precision format. The x87 and Motorola 68881 80-bit formats meet the requirements of the IEEE 754 double extended format. This 80-bit format uses one bit for the sign of the significand, 15 bits for the exponent field, and 64 bits for the significand. The exponent field is biased by 16383, meaning that 16383 has to be subtracted from the value in the exponent field to compute the power of 2. An exponent field value of 32767 is reserved so as to enable the representation of special states such as infinity. If the exponent field is zero, the value is a denormal number. The m field in the original diagram is the combination of the integer and fraction parts of the significand. In contrast to the single and double-precision formats, this format does not use an implicit/hidden bit; rather, bit 63 contains the integer part of the significand and bits 62-0 hold the fractional part.
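As a rough illustration of the field layout just described, the sketch below decodes the sign, 15-bit exponent, and explicit 64-bit significand from a raw 80-bit pattern. The helper name is ours; special values with exponent field 32767 are omitted, and the final conversion to a Python float necessarily rounds the 64-bit significand to double precision.

```python
def decode_x87_extended(raw):
    """Decode an 80-bit x87 extended-precision bit pattern given as an int."""
    sign = (raw >> 79) & 1
    exp_field = (raw >> 64) & 0x7FFF      # 15-bit exponent field, bias 16383
    significand = raw & ((1 << 64) - 1)   # 64 bits; bit 63 is the explicit integer bit
    # Denormals (exponent field 0) use the minimum exponent; no hidden bit to add.
    exponent = (1 if exp_field == 0 else exp_field) - 16383
    return (-1) ** sign * significand * 2.0 ** (exponent - 63)

# 1.0 is encoded as sign 0, biased exponent 16383, significand with only bit 63 set.
assert decode_x87_extended((16383 << 64) | (1 << 63)) == 1.0
assert decode_x87_extended((16384 << 64) | (1 << 63)) == 2.0
```

Note that for normal numbers bit 63 must be set explicitly; there is no implicit leading 1 as in the single and double formats.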
3.
Half-precision floating-point format
–
In computing, half precision is a binary floating-point computer number format that occupies 16 bits in computer memory. In IEEE 754-2008 the 16-bit base-2 format is referred to as binary16. It is intended for storage of many floating-point values where higher precision is not needed. Nvidia and Microsoft defined the half datatype in the Cg language, released in early 2002, and implemented it in silicon in the GeForce FX, released in late 2002. The hardware-accelerated programmable shading group led by John Airey at SGI invented the s10e5 data type in 1997 as part of the design effort. This is described in a SIGGRAPH 2000 paper and further documented in US patent 7518615. This format is used in several computer graphics environments including OpenEXR, JPEG XR, OpenGL, Cg, and D3DX. The advantage over 8-bit or 16-bit binary integers is that the increased dynamic range allows for more detail to be preserved in highlights. The advantage over the 32-bit single-precision binary format is that it requires half the storage. The F16C extension allows x86 processors to convert half-precision floats to and from single-precision floats. The format has an implicit lead bit, so only 10 bits of the significand appear in the memory format, but the total precision is 11 bits; in IEEE 754 parlance, there are 10 bits of significand. The half-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 15, also known as the exponent bias in the IEEE 754 standard. The stored exponents 00000_2 and 11111_2 are interpreted specially. The minimum strictly positive (subnormal) value is 2^−24 ≈ 5.96 × 10^−8. The minimum positive normal value is 2^−14 ≈ 6.10 × 10^−5. The maximum representable value is (2 − 2^−10) × 2^15 = 65504. Examples are given in the bit representation of the floating-point value, including the sign bit, exponent, and significand; in the rounding example, the bits beyond the rounding point are 0101, which is less than 1/2 of a unit in the last place, so the value rounds down. ARM processors support an alternative half-precision format which does away with the special case for an exponent value of 31. It is almost identical to the IEEE format, but there is no encoding for infinity or NaNs; instead, an exponent of 31 encodes normalized numbers in the range 65536 to 131008.
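These limits can be checked directly in Python, whose struct module implements the IEEE 754 binary16 interchange format via the 'e' format character. This is a sketch; the helper names are ours.

```python
import struct

def bits_to_half(bits):
    """Interpret a 16-bit pattern as an IEEE 754 binary16 value."""
    return struct.unpack('<e', bits.to_bytes(2, 'little'))[0]

def half_to_bits(value):
    """Round a value to binary16 and return its 16-bit pattern."""
    return int.from_bytes(struct.pack('<e', value), 'little')

assert bits_to_half(0x7BFF) == 65504.0     # largest finite value: (2 - 2**-10) * 2**15
assert bits_to_half(0x0400) == 2.0 ** -14  # smallest positive normal value
assert bits_to_half(0x0001) == 2.0 ** -24  # smallest positive subnormal value
assert half_to_bits(1.0) == 0x3C00         # sign 0, biased exponent 15, fraction 0
```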
4.
Single-precision floating-point format
–
Single-precision floating-point format is a computer number format that occupies 4 bytes in computer memory and represents a wide dynamic range of values by using a floating point. In IEEE 754-2008 the 32-bit base-2 format is referred to as binary32; it was called single in IEEE 754-1985. In older computers, different floating-point formats of 4 bytes were used; e.g., GW-BASIC's single-precision data type was the 32-bit MBF floating-point format. One of the first programming languages to provide single- and double-precision floating-point data types was Fortran. Before the widespread adoption of IEEE 754-1985, the representation and properties of floating-point data types depended on the computer manufacturer and computer model. Single-precision binary floating-point is used due to its wider range over fixed point, at the cost of precision. A signed 32-bit integer can have a maximum value of 2^31 − 1 = 2,147,483,647; as an example, the 32-bit integer 2,147,483,647 converts to 2,147,483,648 in IEEE 754 form, the nearest representable value. Single precision is termed REAL in Fortran, float in C, C++, C#, and Java, Float in Haskell, and Single in Object Pascal, Visual Basic, and MATLAB. However, float in Python, Ruby, PHP, and OCaml refers to double precision, and in most implementations of PostScript, the only real precision is single. The IEEE 754 standard specifies a binary32 as having: sign bit, 1 bit; exponent width, 8 bits; significand precision, 24 bits. This gives from 6 to 9 significant decimal digits of precision. The sign bit determines the sign of the number, which is the sign of the significand as well. The exponent is either an 8-bit signed integer from −128 to 127 or an 8-bit unsigned integer from 0 to 255; the latter, biased form is the one accepted in the IEEE 754 binary32 definition. Exponents range from −126 to +127 because the encodings that would mean −127 and +128 are reserved for special numbers. The true significand includes 23 fraction bits to the right of the binary point and an implicit leading bit with value 1, unless the exponent is stored with all zeros.
Thus only 23 fraction bits of the significand appear in the memory format. In the worked example, the significand is 1 + Σ_{i=1..23} b_{23−i}·2^−i = 1 + 1·2^−2 = 1.25, which lies in [1, 2). Thus, value = (+1) × 1.25 × 2^−3 = +0.15625. Note that 1 + 2^−23 ≈ 1.000000119, 2 − 2^−23 ≈ 1.999999881, 2^−126 ≈ 1.17549435 × 10^−38, and 2^+127 ≈ 1.70141183 × 10^+38. The single-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 127. The stored exponents 00H and FFH are interpreted specially. The minimum positive normal value is 2^−126 ≈ 1.18 × 10^−38 and the minimum positive (subnormal) value is 2^−149 ≈ 1.4 × 10^−45. In general, refer to the IEEE 754 standard itself for the conversion of a real number into its equivalent binary32 format.
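The worked example (+1.25 × 2^−3 = 0.15625) can be reproduced by unpacking the three fields by hand. The sketch below handles normal and subnormal values only, leaving out the reserved exponent FFH; the helper name is ours.

```python
import struct

def decode_binary32(bits):
    sign = (bits >> 31) & 1
    exp_field = (bits >> 23) & 0xFF          # 8-bit exponent field, bias 127
    fraction = (bits & 0x7FFFFF) / 2 ** 23   # 23 stored fraction bits
    if exp_field == 0:                       # zero or subnormal: no implicit 1
        return (-1) ** sign * fraction * 2.0 ** -126
    return (-1) ** sign * (1 + fraction) * 2.0 ** (exp_field - 127)

# 0x3E200000: sign 0, exponent field 124 (i.e. 2^-3), significand 1.25.
assert decode_binary32(0x3E200000) == 0.15625
# Cross-check against the platform's native binary32 decoding.
assert decode_binary32(0x3E200000) == struct.unpack('<f', (0x3E200000).to_bytes(4, 'little'))[0]
```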
5.
Double-precision floating-point format
–
Double-precision floating-point format is a computer number format that occupies 8 bytes in computer memory and represents a wide, dynamic range of values by using a floating point. Double-precision floating-point format usually refers to binary64, as specified by the IEEE 754 standard. In older computers, different floating-point formats of 8 bytes were used; e.g., GW-BASIC's double-precision data type was the 64-bit MBF floating-point format. Double-precision binary floating-point is a commonly used format on PCs, due to its wider range over single-precision floating point, in spite of its performance cost. As with single-precision floating-point format, it lacks precision on integer numbers when compared with an integer format of the same size. It is commonly known simply as double. The IEEE 754 standard specifies a binary64 as having: sign bit, 1 bit; exponent, 11 bits; significand precision, 53 bits. This gives 15–17 significant decimal digits of precision. If an IEEE 754 double-precision number is converted to a decimal string with at least 17 significant digits and then converted back to double-precision representation, the final result must match the original number. The format is written with the significand having an implicit integer bit of value 1; with the 52 bits of the fraction significand appearing in the memory format, the total precision is therefore 53 bits. Between 2^52 and 2^53 the representable numbers are exactly the integers; for the next range, from 2^53 to 2^54, everything is multiplied by 2, so the representable numbers are the even ones. Conversely, for the range from 2^51 to 2^52, the spacing is 0.5. The spacing as a fraction of the numbers in the range from 2^n to 2^(n+1) is 2^(n−52); the maximum relative rounding error when rounding a number to the nearest representable one is therefore 2^−53. The 11-bit width of the exponent allows the representation of numbers between 10^−308 and 10^308, with full 15–17 decimal digit precision. By compromising precision, the subnormal representation allows even smaller values, down to about 5 × 10^−324. The double-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 1023.
All bit patterns are valid encodings. Except for the above exceptions, the entire double-precision number is described by: (−1)^sign × 2^(exponent − exponent bias) × 1.fraction. In the case of subnormals, the double-precision number is described by: (−1)^sign × 2^(1 − exponent bias) × 0.fraction. Because there have been many floating-point formats with no network standard representation for them, the XDR standard uses big-endian IEEE 754 as its representation. It may therefore appear strange that the widespread IEEE 754 floating-point standard does not specify endianness. Theoretically, this means that even standard IEEE floating-point data written by one machine might not be readable by another. One area of computing where this is an issue is for parallel code running on GPUs; for example, when using NVIDIA's CUDA platform on video cards designed for gaming, double-precision throughput is substantially reduced compared with single precision. Doubles are implemented in many programming languages in different ways, such as the following.
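The endianness point can be demonstrated with Python's struct module: the same binary64 value, 1.0 = 0x3FF0000000000000, is laid out in opposite byte orders on big- and little-endian targets, which is why interchange standards such as XDR pin the byte order down. A sketch:

```python
import struct

big = struct.pack('>d', 1.0)     # big-endian binary64, the XDR choice
little = struct.pack('<d', 1.0)  # little-endian binary64, as stored on x86
assert big == bytes.fromhex('3ff0000000000000')
assert little == big[::-1]       # same bits, reversed byte order

# Decoding the fields of 1.0: sign 0, biased exponent 1023 (i.e. 2^0), fraction 0.
bits = int.from_bytes(big, 'big')
sign, exp_field, fraction = bits >> 63, (bits >> 52) & 0x7FF, bits & ((1 << 52) - 1)
assert (sign, exp_field, fraction) == (0, 1023, 0)
```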
6.
Quadruple-precision floating-point format
–
That kind of gradual evolution towards wider precision was already in view when IEEE Standard 754 for Floating-Point Arithmetic was framed. In IEEE 754-2008 the 128-bit base-2 format is referred to as binary128. The IEEE 754 standard specifies a binary128 as having: sign bit, 1 bit; exponent width, 15 bits; significand precision, 113 bits. This gives from 33 to 36 significant decimal digits of precision. The format is written with an implicit lead bit with value 1 unless the exponent is stored with all zeros; thus only 112 bits of the significand appear in the memory format, but the total precision is 113 bits. The bits are laid out as follows: 1 sign bit, 15 exponent bits, and 112 fraction bits. By comparison, a binary256 would have a significand precision of 237 bits. The stored exponents 0000_16 and 7FFF_16 are interpreted specially. The minimum strictly positive (subnormal) value is 2^−16494 ≈ 10^−4965 and has a precision of only one bit. The minimum positive normal value is 2^−16382 ≈ 3.3621 × 10^−4932 and has a precision of 113 bits. The maximum representable value is 2^16384 − 2^16271 ≈ 1.1897 × 10^4932. Examples are given in bit representation, in hexadecimal, of the floating-point value; this includes the sign, exponent, and significand. In the rounding example, the bits beyond the rounding point are 0101, which is less than 1/2 of a unit in the last place. A common software technique to implement nearly quadruple precision using pairs of double-precision values is sometimes called double-double arithmetic: a quadruple-precision value q is approximated by the unevaluated sum of a pair of doubles, and that pair is stored in place of q. Note that double-double arithmetic has the following special characteristics. As the magnitude of the value decreases, the amount of extra precision also decreases; therefore, near the bottom of the range the format is no more precise than double precision. The smallest number with full precision is 1000…0_2 × 2^−1074; numbers whose magnitude is smaller than 2^−1021 will not have additional precision compared with double precision.
The actual number of bits of precision can vary. In general, the magnitude of the low-order part of the number is no greater than half an ULP of the high-order part; if the low-order part is less than half an ULP of the high-order part, some of the available precision goes unused. Certain algorithms that rely on having a fixed number of bits in the significand can fail when using 128-bit long double numbers. Because the two components carry independent exponents, it is possible to represent values like 1 + 2^−1074. Related techniques represent a number as a sum of three or four double-precision values (triple-double and quad-double arithmetic); they can represent operations with at least 159/161 and 212/215 bits respectively. A similar technique can be used to produce a double-quad arithmetic, which represents a number as an unevaluated sum of two quadruple-precision values; it can represent operations with at least 226 bits. Quadruple precision is often implemented in software by a variety of such techniques, since direct hardware support for quadruple precision is, as of 2016, less common.
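A minimal sketch of the double-double idea: a value is carried as an unevaluated pair (hi, lo) of doubles, and the error-free two_sum transformation from the literature recovers the rounding error that a plain addition discards. This is an illustration, not a full double-double library.

```python
def two_sum(a, b):
    """Return (s, err) with s = fl(a + b) and s + err == a + b exactly."""
    s = a + b
    v = s - a
    err = (a - (s - v)) + (b - v)
    return s, err

hi, lo = two_sum(0.1, 0.2)
assert hi == 0.1 + 0.2       # the double-precision rounded sum
assert lo != 0.0             # the rounding error, preserved in the low-order part
assert abs(lo) < 2.0 ** -54  # far below the last bit of hi
```

Combining two_sum with an analogous error-free multiplication gives the roughly 106-bit arithmetic described above.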
7.
Octuple-precision floating-point format
–
In computing, octuple precision is a binary floating-point-based computer number format that occupies 32 bytes in computer memory. This 256-bit octuple-precision format is for applications requiring results in higher than quadruple precision; it is rarely used and very few things support it. The format has an implicit lead bit, so only 236 bits of the significand appear in the memory format. The stored exponents 00000_16 and 7FFFF_16 are interpreted specially. The minimum strictly positive (subnormal) value is 2^−262378 ≈ 10^−78984 and has a precision of one bit. The minimum positive normal value is 2^−262142 ≈ 2.4824 × 10^−78913. The maximum representable value is 2^262144 − 2^261907 ≈ 1.6113 × 10^78913. Examples are given in bit representation, in hexadecimal, of the floating-point value; this includes the sign, exponent, and significand. In the rounding example, the bits beyond the rounding point are 0101, which is less than 1/2 of a unit in the last place. Octuple precision is rarely implemented since usage of it is extremely rare. Apple Inc. had an implementation of addition, subtraction, and multiplication of numbers with a 224-bit two's complement significand. One can use general arbitrary-precision arithmetic libraries to obtain octuple precision; there is little to no hardware support for it. Octuple-precision arithmetic is too impractical for most commercial uses. See also: IEEE Standard for Floating-Point Arithmetic; ISO/IEC 10967, Language-independent arithmetic; Primitive data type.
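Since 237 significand bits correspond to about 237 × log10(2) ≈ 71 decimal digits, an arbitrary-precision library can stand in where octuple precision is needed. The sketch below uses Python's stdlib decimal module; note this is a decimal, not binary, emulation, so rounding behaviour differs from a true binary256.

```python
from decimal import Decimal, getcontext

getcontext().prec = 71              # roughly octuple-precision significance, in base 10
third = Decimal(1) / Decimal(3)
assert len(str(third)) == 73        # '0.' followed by 71 significant digits
assert third * 3 != Decimal(1)      # rounding error remains, just far smaller
assert Decimal(1) - third * 3 < Decimal('1e-70')
```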
8.
Decimal floating point
–
Decimal floating-point arithmetic refers to both a representation and operations on decimal floating-point numbers. Working directly with decimal fractions can avoid the rounding errors that otherwise typically occur when converting between decimal fractions and binary fractions. The advantage of decimal floating-point representation over decimal fixed-point and integer representation is that it supports a much wider range of values. Early mechanical uses of decimal floating point are evident in the abacus, slide rule, and the Smallwood calculator; in the case of the mechanical calculators, the exponent is often treated as side information that is accounted for separately. Some computer languages have implementations of decimal floating-point arithmetic, including PL/I, Java with BigDecimal, and Emacs with calc. The lack of a standard encoding was subsequently addressed in IEEE 754-2008, which standardized the encoding of decimal floating-point data, albeit with two different alternative methods. IBM POWER6 includes DFP in hardware, as does the IBM System z9; SilMinds offers SilAx, a configurable vector DFP coprocessor. IEEE 754-2008 defines this in more detail: the standard defines 32-, 64- and 128-bit decimal floating-point representations. Like the binary floating-point formats, the number is divided into a sign, an exponent, and a significand. Unlike binary floating-point, numbers are not necessarily normalized; values with few significant digits have multiple possible representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. When the significand is zero, the exponent can be any value at all. The exponent ranges were chosen so that the range available to normalized values is approximately symmetrical; since this cannot be done exactly with an even number of possible exponent values, the extra value was given to Emax. Two different representations are defined. One, with a binary integer significand field, encodes the significand as a binary integer between 0 and 10^p − 1. This is expected to be convenient for software implementations using a binary ALU.
The other, with a densely packed decimal significand field, encodes decimal digits more directly; this makes conversion to and from decimal form faster, but requires specialized hardware to manipulate efficiently. It is expected to be convenient for hardware implementations. Both alternatives provide exactly the same range of representable values. The most significant two bits of the exponent are limited to the range 0−2, and the most significant 4 bits of the significand are limited to the range 0−9. The 30 possible combinations are encoded in a 5-bit field, along with special forms for infinity and NaN. Thus, it is possible to initialize an array to NaNs by filling it with a single byte value.
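Python's decimal module implements this style of arithmetic (it follows the General Decimal Arithmetic specification that fed into IEEE 754-2008), and shows both the avoided conversion error and the unnormalized representations described above:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 or 0.2 exactly...
assert 0.1 + 0.2 != 0.3
# ...while decimal floating point works in the radix the inputs were written in.
assert Decimal('0.1') + Decimal('0.2') == Decimal('0.3')

# Significands are not normalized: 1E+2 and 100 are one value in two
# representations (a cohort), equal in value but distinct in exponent.
assert Decimal('1E+2') == Decimal('100')
assert str(Decimal('1E+2')) != str(Decimal('100'))
```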
9.
Decimal32 floating-point format
–
In computing, decimal32 is a decimal floating-point computer numbering format that occupies 4 bytes in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial computations; like the binary16 format, it is intended for memory-saving storage. Decimal32 supports 7 decimal digits of significand and an exponent range of −95 to +96. Because the significand is not normalized, most values with fewer than 7 significant digits have multiple possible representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal32 floating point is a new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as with ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative representation methods for decimal32 values, and the standard does not specify how to signify which representation is used. In one representation method, based on binary integer decimal (BID), the significand is represented as a binary-coded positive integer. The other, alternative, representation method is based on densely packed decimal (DPD) for most of the significand. Both alternatives provide exactly the same range of representable numbers: 7 digits of significand and 3 × 2^6 = 192 possible exponent values. The remaining combinations of the combination field encode infinities and NaNs. The BID format uses a binary significand from 0 to 10^7 − 1 = 9999999 = 98967F_16 = 100110001001011001111111_2. The encoding can represent binary significands up to 10 × 2^20 − 1 = 10485759 = 9FFFFF_16 = 100111111111111111111111_2, but values larger than 10^7 − 1 are illegal. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher. If the 2 bits after the sign bit are 11, then the 8-bit exponent field is shifted 2 bits to the right, and there is an implicit leading 3-bit sequence 100 in the true significand; compare having an implicit 1 in the significand of normal values for the binary formats.
Note also that in the other case the 00, 01, or 10 bits after the sign bit are part of the exponent field. In the DPD encoding, the leading digit is between 0 and 9, and the rest of the significand uses the densely packed decimal encoding. The six bits after the combination field are the exponent continuation field, providing the less-significant bits of the exponent; the last 20 bits are the significand continuation field, consisting of two 10-bit declets. Each declet encodes three decimal digits using the DPD encoding; the DPD/3BCD transcoding for the declets is given by the following table, where b9…b0 are the bits of the DPD encoding and d2…d0 are the three BCD digits. The 8 decimal values whose digits are all 8s or 9s have four codings each. The bits marked x in the table above are ignored on input, but will always be 0 in computed results.
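The BID variant's field layout can be decoded with plain bit operations. The following sketch is our own helper, with special values omitted; the decimal32 exponent bias is 101, and it returns the signed integer significand together with the decimal exponent.

```python
def decode_decimal32_bid(bits):
    sign = -1 if (bits >> 31) & 1 else 1
    if (bits >> 29) & 0b11 != 0b11:
        exponent = (bits >> 23) & 0xFF                    # 8-bit exponent field
        significand = bits & 0x7FFFFF                     # 23 stored significand bits
    else:
        exponent = (bits >> 21) & 0xFF                    # exponent shifted right 2 bits
        significand = (0b100 << 21) | (bits & 0x1FFFFF)   # implicit leading 100
    if significand > 9999999:                             # > 10^7 - 1: illegal, reads as 0
        significand = 0
    return sign * significand, exponent - 101             # decimal32 exponent bias is 101

# 1 x 10^0: sign 0, biased exponent 101, significand 1.
assert decode_decimal32_bid((101 << 23) | 1) == (1, 0)
# 7000000 fits in 23 bits, so it uses the ordinary (unshifted) form.
assert decode_decimal32_bid((101 << 23) | 7000000) == (7000000, 0)
```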
10.
Decimal64 floating-point format
–
In computing, decimal64 is a decimal floating-point computer numbering format that occupies 8 bytes in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial computations. Decimal64 supports 16 decimal digits of significand and an exponent range of −383 to +384, i.e. ±0.000000000000000×10^−383 to ±9.999999999999999×10^384. In contrast, the binary64 format, which is the most commonly used type, has an approximate range of ±0.000000000000001×10^−308 to ±1.797693134862315×10^308. Because the significand is not normalized, most values with fewer than 16 significant digits have multiple representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal64 floating point is a new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as with ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative representation methods for decimal64 values. Both alternatives provide exactly the same range of representable numbers: 16 digits of significand and 3 × 2^8 = 768 possible exponent values. In both cases, the most significant 4 bits of the significand are combined with the most significant 2 bits of the exponent to use 30 of the 32 possible values of a 5-bit field; the remaining combinations encode infinities and NaNs. In the cases of infinity and NaN, all other bits of the encoding are ignored; thus, it is possible to initialize an array to infinities or NaNs by filling it with a single byte value. The BID format uses a binary significand from 0 to 10^16 − 1 = 9999999999999999 = 2386F26FC0FFFF_16 = 100011100001101111001001101111110000001111111111111111_2. The encoding, completely stored on 64 bits, can represent binary significands up to 10 × 2^50 − 1 = 11258999068426239 = 27FFFFFFFFFFFF_16, but values larger than 10^16 − 1 are illegal. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher. If the 2 bits after the sign bit are 11, then the 10-bit exponent field is shifted 2 bits to the right.
In this case there is an implicit leading 3-bit sequence 100 for the most significant bits of the true significand; compare having an implicit 1-bit prefix in the significand of normal values for the binary formats. Note also that the 2-bit sequences 00, 01, or 10 after the sign bit are otherwise part of the exponent field. Note that the stored bits of the significand field do not encode the most significant decimal digit on their own: the highest valid significand is 9999999999999999, whose 51 stored bits, after the implicit 100 prefix, are 011100001101111001001101111110000001111111111111111_2. In the DPD encoding, the leading digit is between 0 and 9, and the rest of the significand uses the densely packed decimal encoding. The eight bits after the combination field are the exponent continuation field, providing the less-significant bits of the exponent; the last 50 bits are the significand continuation field, consisting of five 10-bit declets.
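The same decoding pattern applies at 64 bits (bias 398). In particular, the highest valid significand, 9999999999999999, needs 54 bits and therefore must use the shifted form with the implicit 100 prefix. Again this is a sketch of our own with special values omitted.

```python
def decode_decimal64_bid(bits):
    sign = -1 if (bits >> 63) & 1 else 1
    if (bits >> 61) & 0b11 != 0b11:
        exponent = (bits >> 53) & 0x3FF               # 10-bit exponent field
        significand = bits & ((1 << 53) - 1)          # 53 stored significand bits
    else:
        exponent = (bits >> 51) & 0x3FF               # exponent shifted right 2 bits
        significand = (0b100 << 51) | (bits & ((1 << 51) - 1))
    if significand > 10 ** 16 - 1:                    # illegal encodings read as zero
        significand = 0
    return sign * significand, exponent - 398         # decimal64 exponent bias is 398

# Encode 9999999999999999 x 10^0: its top three bits are the implicit 100,
# so only the low 51 bits are stored, behind the 11 marker and the exponent.
sig = 9999999999999999                                # = 0x2386F26FC0FFFF, 54 bits
bits = (0b11 << 61) | (398 << 51) | (sig & ((1 << 51) - 1))
assert decode_decimal64_bid(bits) == (9999999999999999, 0)
```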
11.
Decimal128 floating-point format
–
In computing, decimal128 is a decimal floating-point computer numbering format that occupies 16 bytes in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial computations. Decimal128 supports 34 decimal digits of significand and an exponent range of −6143 to +6144, i.e. ±0.000000000000000000000000000000000×10^−6143 to ±9.999999999999999999999999999999999×10^6144. Therefore, decimal128 has the greatest range of values compared with the other IEEE basic floating-point formats. Because the significand is not normalized, most values with fewer than 34 significant digits have multiple possible representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal128 floating point is a new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as with ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative representation methods for decimal128 values, and the standard does not specify how to signify which representation is used. In one representation method, based on binary integer decimal (BID), the significand is represented as a binary-coded positive integer. The other, alternative, representation method is based on densely packed decimal (DPD) for most of the significand. Both alternatives provide exactly the same range of representable numbers: 34 digits of significand and 3 × 2^12 = 12288 possible exponent values. In both cases, the most significant 4 bits of the significand are combined with the most significant 2 bits of the exponent to use 30 of the 32 possible values of the 5-bit combination field; the remaining combinations encode infinities and NaNs. In the case of infinity and NaN, all other bits of the encoding are ignored; thus, it is possible to initialize an array to infinities or NaNs by filling it with a single byte value.
The BID encoding can represent binary significands up to 10 × 2^110 − 1 = 12980742146337069071326240823050239, but values larger than 10^34 − 1 are illegal. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher. If the 2 bits after the sign bit are 11, then the 14-bit exponent field is shifted 2 bits to the right, and there is an implicit leading 3-bit sequence 100 in the true significand (compare having an implicit 1 in the significand of normal values for the binary formats); otherwise the 00, 01, or 10 bits after the sign bit are part of the exponent field. For the decimal128 format, all significands produced by the shifted form are out of the valid range and are thus decoded as zero, but the pattern is the same as decimal32 and decimal64. In the DPD encoding, the leading digit is between 0 and 9, and the rest of the significand uses the densely packed decimal encoding. The twelve bits after the combination field are the exponent continuation field, providing the less-significant bits of the exponent; the last 110 bits are the significand continuation field, consisting of eleven 10-bit declets. Each declet encodes three decimal digits using the DPD encoding; the DPD/3BCD transcoding for the declets is given by the following table, where b9…b0 are the bits of the DPD encoding and d2…d0 are the three BCD digits. The 8 decimal values whose digits are all 8s or 9s have four codings each.
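The claim that the shifted form is always out of range for decimal128 can be checked with integer arithmetic (a quick sketch):

```python
# The implicit-100 form prepends 100 to 111 stored bits, so the smallest
# significand it can produce is 2^113; since 10^34 - 1 < 2^113, every such
# encoding is out of range for decimal128 and decodes as zero.
smallest_shifted = 0b100 << 111            # = 2^113
largest_valid = 10 ** 34 - 1
assert smallest_shifted > largest_valid

# The largest encodable binary significand is 100 followed by 111 one bits,
# which equals the 10 * 2^110 - 1 figure quoted above.
assert (0b100 << 111) | ((1 << 111) - 1) == 10 * 2 ** 110 - 1
assert 10 * 2 ** 110 - 1 == 12980742146337069071326240823050239
```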
12.
Computing
–
Computing is any goal-oriented activity requiring, benefiting from, or creating a mathematical sequence of steps known as an algorithm, e.g. through computers. The field of computing includes computer engineering, software engineering, computer science, and information systems. The ACM Computing Curricula 2005 defined computing as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers." For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult. Because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The fundamental question underlying all computing is "What can be automated?" The term computing is also synonymous with counting and calculating; in earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. Computing is intimately tied to the representation of numbers, but long before abstractions like the number arose, there were mathematical concepts to serve the purposes of civilization. These concepts include one-to-one correspondence and comparison to a standard. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC. Its original style of usage was by lines drawn in sand with pebbles; abaci of a more modern design are still used as calculation tools today. This was the first known computer and most advanced system of calculation known to date, preceding Greek methods by 2,000 years. The first recorded idea of using electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams.
Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has an executable form that the computer can use directly to execute the instructions. The same program in its human-readable source code form enables a programmer to study and develop the algorithm. Because the instructions can be carried out in different types of computers, a single set of source instructions converts to machine instructions according to the CPU type. The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer; they trigger sequences of simple actions on the executing machine, and those actions produce effects according to the semantics of the instructions. Computer software, or just software, is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more programs and data held in the storage of the computer for some purposes. In other words, software is a set of programs, procedures, algorithms, and their documentation. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software.
13.
Central processing unit
–
The computer industry has used the term central processing unit at least since the early 1960s. The form, design and implementation of CPUs have changed over the course of their history, but their fundamental operation remains much the same. Most modern CPUs are microprocessors, meaning they are contained on a single integrated circuit chip. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer. Some computers employ a multi-core processor, which is a single chip containing two or more CPUs called cores; in that context, one can speak of such single chips as sockets. Array processors or vector processors have multiple processors that operate in parallel, and there also exists the concept of virtual CPUs, which are an abstraction of dynamically aggregated computational resources. Early computers such as the ENIAC had to be physically rewired to perform different tasks. Since the term CPU is generally defined as a device for software execution, the earliest devices that could rightly be called CPUs came with the advent of the stored-program computer. The idea of a stored-program computer was already present in the design of J. Presper Eckert and John William Mauchly's ENIAC, but was initially omitted so that the machine could be finished sooner. On June 30, 1945, before ENIAC was made, mathematician John von Neumann distributed the paper entitled First Draft of a Report on the EDVAC; it was the outline of a stored-program computer that would eventually be completed in August 1949. EDVAC was designed to perform a certain number of instructions of various types. Significantly, the programs written for EDVAC were to be stored in high-speed computer memory rather than specified by the physical wiring of the computer. This overcame a severe limitation of ENIAC, which was the considerable time and effort required to reconfigure the computer to perform a new task. With von Neumann's design, the program that EDVAC ran could be changed simply by changing the contents of the memory. Early CPUs were custom designs used as part of a larger and sometimes distinctive computer; however, this method of designing custom CPUs for a particular application has largely given way to the development of multi-purpose processors produced in large quantities.
This standardization began in the era of discrete transistor mainframes and minicomputers and has accelerated with the popularization of the integrated circuit. The IC has allowed increasingly complex CPUs to be designed and manufactured to tolerances on the order of nanometers. Both the miniaturization and standardization of CPUs have increased the presence of digital devices in modern life far beyond the limited application of dedicated computing machines; modern microprocessors appear in electronic devices ranging from automobiles to cellphones. The so-called Harvard architecture of the Harvard Mark I, which was completed before EDVAC, also utilized a stored-program design using punched paper tape rather than electronic memory. Relays and vacuum tubes were commonly used as switching elements; a useful computer requires thousands or tens of thousands of switching devices, and the overall speed of a system is dependent on the speed of the switches. Tube computers like EDVAC tended to average eight hours between failures, whereas relay computers like the Harvard Mark I failed very rarely. In the end, tube-based CPUs became dominant because the significant speed advantages afforded generally outweighed the reliability problems. Most of these early synchronous CPUs ran at low clock rates compared to modern microelectronic designs; clock signal frequencies ranging from 100 kHz to 4 MHz were very common at this time. The design complexity of CPUs increased as various technologies facilitated building smaller and more reliable electronic devices.
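The stored-program principle described above, where the program lives in the same memory as data and can be changed simply by changing memory contents, can be sketched as a toy fetch-decode-execute loop in Python; the three opcodes are hypothetical:

```python
# Minimal stored-program sketch: instructions reside in memory, and
# reprogramming the machine is just a memory write (the EDVAC idea).
memory = [
    ("LOADI", 10),   # acc = 10
    ("ADDI", 32),    # acc += 32
    ("HALT", None),
]

def run(memory):
    pc, acc = 0, 0            # program counter and accumulator
    while True:
        op, arg = memory[pc]  # fetch
        pc += 1
        if op == "LOADI":     # decode and execute
            acc = arg
        elif op == "ADDI":
            acc += arg
        elif op == "HALT":
            return acc

print(run(memory))  # 42

# Changing the program requires no rewiring, only a change to memory:
memory[1] = ("ADDI", 5)
print(run(memory))  # 15
```

ENIAC, by contrast, required its patch cables and switches to be physically reconfigured to achieve the equivalent of the last two lines.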
14.
Instruction set
–
An instruction set architecture (ISA) includes a specification of the set of opcodes (machine language), the native commands implemented by a particular processor. An instruction set architecture is distinguished from a microarchitecture, which is the set of design techniques used, in a particular processor, to implement the instruction set. Processors with different microarchitectures can share a common instruction set; for example, the Intel Pentium and the AMD Athlon implement nearly identical versions of the x86 instruction set, but have radically different internal designs. The concept of an architecture, distinct from the design of a specific machine, was developed by Fred Brooks at IBM during the design phase of System/360. Prior to NPL, the company's computer designers had been free to honor cost objectives not only by selecting technologies but also by fashioning functional and architectural refinements. The SPREAD compatibility objective, in contrast, postulated a single architecture for a series of five processors spanning a wide range of cost and performance. Virtual machines that translate an ISA in software may execute less frequently used code paths by interpretation; Transmeta implemented the x86 instruction set atop VLIW processors in this fashion. A complex instruction set computer has many specialized instructions, some of which may be rarely used in practical programs. Theoretically important types are the minimal instruction set computer and the one instruction set computer. Another variation is the very long instruction word (VLIW), where the processor receives many instructions encoded and retrieved in one instruction word. Machine language is built up from discrete statements or instructions. Examples of operations common to many instruction sets include: set a register to a constant value; copy data from a memory location to a register, or vice versa, used to store the contents of a register or the result of a computation (often called load and store operations).
Read and write data from hardware devices. Add, subtract, multiply, or divide the values of two registers, placing the result in a register, possibly setting one or more condition codes in a status register. Increment or decrement a register in some ISAs, saving an operand fetch in trivial cases. Perform bitwise operations, e.g. taking the conjunction and disjunction of corresponding bits in a pair of registers, or taking the negation of each bit in a register. Floating-point instructions perform arithmetic on floating-point numbers. Branch to another location in the program and execute instructions there. Conditionally branch to another location if a certain condition holds. Call another block of code, while saving the location of the next instruction as a point to return to. Load or store data to and from a coprocessor, or exchange data with CPU registers. Processors may also include complex instructions in their instruction set.
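The operation classes enumerated above can be illustrated with a toy instruction-set simulator in Python; the mnemonics and the three-register instruction format are invented for illustration, not drawn from any real ISA:

```python
# A hypothetical tiny ISA: register set, load/store, arithmetic,
# bitwise AND, and a conditional branch.
def run(program):
    regs = {"r0": 0, "r1": 0, "r2": 0}  # register file
    mem = [0] * 8                       # data memory
    pc = 0                              # program counter
    while pc < len(program):
        ins = program[pc]
        pc += 1
        op = ins[0]
        if op == "set":                 # set a register to a constant
            regs[ins[1]] = ins[2]
        elif op == "load":              # copy memory -> register
            regs[ins[1]] = mem[ins[2]]
        elif op == "store":             # copy register -> memory
            mem[ins[2]] = regs[ins[1]]
        elif op == "add":               # arithmetic on two registers
            regs[ins[1]] = regs[ins[2]] + regs[ins[3]]
        elif op == "and":               # bitwise conjunction
            regs[ins[1]] = regs[ins[2]] & regs[ins[3]]
        elif op == "bnez":              # conditional branch if non-zero
            if regs[ins[1]] != 0:
                pc = ins[2]
    return regs, mem

# Compute 3 + 4 and store the result at memory address 0.
regs, mem = run([
    ("set", "r0", 3),
    ("set", "r1", 4),
    ("add", "r2", "r0", "r1"),
    ("store", "r2", 0),
])
print(mem[0])  # 7
```

Real ISAs differ mainly in which of these operation classes they offer, how operands are addressed, and how instructions are encoded, not in this basic structure.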
15.
Computer architecture
–
In computer engineering, computer architecture is a set of rules and methods that describe the functionality, organization, and implementation of computer systems. Some definitions of architecture define it as describing the capabilities and programming model of a computer but not a particular implementation; in other definitions computer architecture involves instruction set architecture design, microarchitecture design, logic design, and implementation. The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace, describing the analytical engine. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos National Laboratory. Brooks went on to help develop the IBM System/360 line of computers. Later, computer users came to use the term in many less explicit ways. The earliest computer architectures were designed on paper and then directly built into the final hardware form. The discipline of computer architecture has three main subcategories. Instruction Set Architecture, or ISA: the ISA defines the machine code that a processor reads and acts upon, as well as the word size, memory address modes, and processor registers. Microarchitecture, or computer organization, describes how a particular processor will implement the ISA; the size of a computer's CPU cache, for instance, is an issue that generally has nothing to do with the ISA. System Design includes all of the other hardware components within a computing system, such as data processing other than the CPU (for example, direct memory access) and other issues such as virtualization and multiprocessing. There are other types of computer architecture. For example, the C, C++, or Java standards define different Programmer Visible Macroarchitectures. UISA: a group of machines with different hardware-level microarchitectures may share a common microcode architecture. Pin Architecture: the hardware functions that a microprocessor should provide to a hardware platform, e.g.
the x86 pins A20M, FERR/IGNNE or FLUSH, as well as messages that the processor should emit so that external caches can be invalidated. Pin architecture functions are more flexible than ISA functions because external hardware can adapt to new encodings, or change from a pin to a message. The term architecture fits, because the functions must be provided for compatible systems, even if the detailed method changes. The purpose is to design a computer that maximizes performance while keeping power consumption in check, keeps costs low relative to the amount of expected performance, and is also very reliable. For this, many aspects are to be considered, including Instruction Set Design, Functional Organization, Logic Design, and Implementation. The implementation involves Integrated Circuit Design, Packaging, Power, and Cooling. Optimization of the design requires familiarity with topics ranging from Compilers and Operating Systems to Logic Design. An instruction set architecture is the interface between the computer's software and hardware, and can also be viewed as the programmer's view of the machine. Computers do not understand high-level programming languages such as Java or C++; a processor only understands instructions encoded in some numerical fashion, usually as binary numbers. Software tools, such as compilers, translate those high-level languages into instructions that the processor can understand. Besides instructions, the ISA defines items in the computer that are available to a program, e.g. data types, registers, and addressing modes.
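The point that a processor only understands instructions encoded as binary numbers can be illustrated with a made-up 16-bit instruction encoding in Python; the field widths and opcode value here are purely hypothetical, invented for the example:

```python
# Hypothetical 16-bit instruction word: 4-bit opcode, two 4-bit
# register fields, 4-bit immediate. Real ISAs define their own layouts;
# these widths are invented for illustration.
def encode(opcode, rd, rs, imm):
    return (opcode << 12) | (rd << 8) | (rs << 4) | imm

def decode(word):
    return ((word >> 12) & 0xF,  # opcode
            (word >> 8) & 0xF,   # destination register
            (word >> 4) & 0xF,   # source register
            word & 0xF)          # immediate value

word = encode(0x3, 1, 2, 5)   # something like "addi r1, r2, 5"
print(hex(word))              # 0x3125
assert decode(word) == (0x3, 1, 2, 5)
```

A compiler's job, at the last step, is exactly this `encode` direction: turning each operation of the translated program into such numeric words; the processor's decoder performs the `decode` direction in hardware.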
16.
Computer memory
–
In computing, memory refers to the computer hardware devices used to store information for immediate use in a computer; it is synonymous with the term primary storage. Computer memory operates at a high speed, for example random-access memory (RAM), as a distinction from storage that provides slow-to-access program and data storage. If needed, contents of the computer memory can be transferred to secondary storage. An archaic synonym for memory is store. There are two main kinds of semiconductor memory, volatile and non-volatile. Examples of non-volatile memory are flash memory and ROM, PROM, EPROM and EEPROM memory. Most semiconductor memory is organized into memory cells or bistable flip-flops, each storing one bit; flash memory organization includes both one bit per cell and multiple bits per cell. The memory cells are grouped into words of fixed word length. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. This implies that processor registers normally are not considered as memory, since they only store one word. Typical secondary storage devices are hard disk drives and solid-state drives. In the early 1940s, memory technology often permitted a capacity of only a few bytes. The next significant advance in computer memory came with acoustic delay line memory, developed by J. Presper Eckert in the early 1940s. Delay line memory would be limited to a capacity of up to a few hundred thousand bits to remain efficient. Two alternatives to the delay line, the Williams tube and Selectron tube, originated in 1946, both using electron beams in glass tubes as means of storage. Using cathode ray tubes, Fred Williams would invent the Williams tube, which would prove more capacious than the Selectron tube and less expensive, but also frustratingly sensitive to environmental disturbances. Efforts began in the late 1940s to find non-volatile memory. Jay Forrester, Jan A.
Rajchman and An Wang developed magnetic core memory, which allowed for recall of memory after power loss. Magnetic core memory would become the dominant form of memory until the development of transistor-based memory in the late 1960s. Developments in technology and economies of scale have made possible so-called Very Large Memory computers. The term memory, when used with reference to computers, generally refers to Random Access Memory, or RAM. Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). SRAM retains its contents as long as the power is connected and is easy to interface, but uses six transistors per bit. SRAM is not worthwhile for desktop system memory, where DRAM dominates, but it is commonplace in small embedded systems, which might only need tens of kilobytes or less. Forthcoming volatile memory technologies that aim to replace or compete with SRAM and DRAM include Z-RAM and A-RAM. Non-volatile memory is computer memory that can retain the stored information even when not powered.
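The addressing arithmetic above, where a binary address of N bits selects one of 2^N words, works out as follows; the 10-bit address and 16-bit word length are arbitrary example values:

```python
# An N-bit address selects one of 2**N words; the word length is a
# separate design parameter, independent of the address width.
address_bits = 10
word_length = 16                       # bits per word (example value)

num_words = 2 ** address_bits
print(num_words)                       # 1024 addressable words
print(num_words * word_length // 8)    # total capacity in bytes: 2048
```

This is why adding one address bit doubles the number of addressable words, while widening the word changes capacity without changing the address space.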
17.
Embedded system
–
An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device, often including hardware and mechanical parts. Embedded systems control many devices in common use today; ninety-eight percent of all microprocessors are manufactured as components of embedded systems. Examples of properties of typical embedded computers, when compared with general-purpose counterparts, are low power consumption, small size, rugged operating ranges, and low per-unit cost. This comes at the price of limited processing resources, which make embedded systems more difficult to program. For example, intelligent techniques can be designed to manage the power consumption of embedded systems. Modern embedded systems are often based on microcontrollers, but ordinary microprocessors are also common. In either case, the processor used may range from general purpose to processors specialised in a certain class of computations; a common standard class of dedicated processors is the digital signal processor. Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce the size and cost of the product and increase its reliability. Some embedded systems are mass-produced, benefiting from economies of scale. Complexity varies from low, with a single microcontroller chip, to very high, with multiple units, peripherals and networks mounted inside a large chassis or enclosure. One of the very first recognizably modern embedded systems was the Apollo Guidance Computer; an early mass-produced embedded system was the Autonetics D-17 guidance computer for the Minuteman missile, released in 1961. When the Minuteman II went into production in 1966, the D-17 was replaced with a new computer that represented the first high-volume use of integrated circuits. Since these early applications in the 1960s, embedded systems have come down in price and there has been a rise in processing power.
An early microprocessor, for example the Intel 4004, was designed for calculators and other small systems but still required external memory. By the early 1980s, memory, input and output system components had been integrated into the same chip as the processor, forming a microcontroller. Microcontrollers find applications where a general-purpose computer would be too costly. A comparatively low-cost microcontroller may be programmed to fulfill the same role as a large number of separate components.
18.
Digital signal processor
–
A digital signal processor (DSP) is a specialized microprocessor with its architecture optimized for the operational needs of digital signal processing. The goal of a DSP is usually to measure, filter or compress continuous real-world analog signals. DSPs often use special memory architectures that are able to fetch multiple data items or instructions at the same time. Digital signal processing algorithms typically require a large number of mathematical operations to be performed quickly and repeatedly on a series of data samples. Signals are constantly converted from analog to digital, manipulated digitally, and then converted back to analog form. Many DSP applications have constraints on latency; that is, for the system to work, the DSP operation must be completed within some fixed time, and deferred processing is not viable. A specialized digital signal processor will tend to provide a lower-cost solution, with better performance and lower latency; for example, the SES-12 and SES-14 satellites from operator SES rely on digital signal processing in their payloads. The architecture of a digital signal processor is optimized specifically for digital signal processing, but most also support some of the features of an applications processor or microcontroller, since signal processing is rarely the only task of a system. Some useful features for optimizing DSP algorithms are outlined below. Sometimes various sticky-bit operation modes are available. DSPs can sometimes rely on supporting code to know about cache hierarchies and the associated delays; this is a tradeoff that allows for better performance. In addition, extensive use of DMA is employed. DSPs frequently use multi-tasking operating systems, but have no support for virtual memory or memory protection; operating systems that use virtual memory require more time for context switching among processes, which increases latency. The AMD Am2901 bit-slice chip with its family of components was a very popular choice. There were reference designs from AMD, but very often the specifics of a particular design were application specific.
These bit-slice architectures would sometimes include a peripheral multiplier chip; examples of these multipliers were a series from TRW including the TDC1008 and TDC1010, some of which included an accumulator, providing the requisite multiply–accumulate function. In 1976, Richard Wiggins proposed the Speak & Spell concept to Paul Breedlove, Larry Brantingham, and colleagues at Texas Instruments. Two years later, in 1978, they produced the first Speak & Spell, with the technological centerpiece being the TMS5100, the industry's first digital signal processor. It also set other milestones, being the first chip to use linear predictive coding to perform speech synthesis. In 1978, Intel released the 2920 as an analog signal processor. It had an on-chip ADC/DAC with an internal signal processor, but it was not successful in the market. In 1979, AMI released the S2811; it was designed as a microprocessor peripheral, and it had to be initialized by the host. The S2811 was likewise not successful in the market. In 1980 the first stand-alone, complete DSPs, the NEC µPD7720 and AT&T DSP1, were presented at the International Solid-State Circuits Conference '80.
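The multiply–accumulate function mentioned above is the workhorse operation of DSP code; a sketch of a finite impulse response (FIR) filter in Python shows one multiply–accumulate per coefficient tap (the `fir` helper and the coefficients are illustrative, not taken from any real DSP library):

```python
# A FIR filter: each output sample is a sum of coefficient * sample
# products. Hardware multiply-accumulate units (like those on the TRW
# TDC1010 or the TMS5100's successors) exist to do the inner loop fast.
def fir(coeffs, samples):
    out = []
    for n in range(len(samples)):
        acc = 0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]   # one MAC per tap
        out.append(acc)
    return out

# A 2-tap moving average smooths a step input.
print(fir([0.5, 0.5], [1, 1, 1, 1]))  # [0.5, 1.0, 1.0, 1.0]
```

A dedicated DSP performs each multiply and accumulate in a single cycle, often while fetching the next coefficient and sample in parallel, which is exactly the memory-architecture advantage described earlier in this section.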