1.
Computing
–
Computing is any goal-oriented activity requiring, benefiting from, or creating a mathematical sequence of steps known as an algorithm, for example through computers. The field of computing includes computer engineering, software engineering, computer science, and information systems. The ACM Computing Curricula 2005 defined computing as follows: "In a general way, we can define computing to mean any goal-oriented activity requiring, benefiting from, or creating computers." For example, an information systems specialist will view computing somewhat differently from a software engineer. Regardless of the context, doing computing well can be complicated and difficult. Because society needs people to do computing well, we must think of computing not only as a profession but also as a discipline. The fundamental question underlying all computing is: "What can be automated?" The term computing is also synonymous with counting and calculating; in earlier times it was used in reference to the action performed by mechanical computing machines, and before that, to human computers. Computing is intimately tied to the representation of numbers, but long before abstractions like the number arose, there were mathematical concepts to serve the purposes of civilization. These concepts include one-to-one correspondence and comparison to a standard. The earliest known tool for use in computation was the abacus, thought to have been invented in Babylon circa 2400 BC. Its original style of usage was by lines drawn in sand with pebbles; abaci of a more modern design are still used as calculation tools today. This was the first known computer and the most advanced system of calculation known to date, preceding Greek methods by 2,000 years. The first recorded idea of using electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. 
Claude Shannon's 1938 paper "A Symbolic Analysis of Relay and Switching Circuits" then introduced the idea of using electronics for Boolean algebraic operations. A computer is a machine that manipulates data according to a set of instructions called a computer program. The program has a form that the computer can use directly to execute the instructions; the same program in its source code form enables a programmer to study and develop its algorithm. Because the instructions can be carried out in different types of computers, a single set of source instructions converts to machine instructions according to the CPU type. The execution process carries out the instructions in a computer program. Instructions express the computations performed by the computer, and they trigger sequences of simple actions on the executing machine; those actions produce effects according to the semantics of the instructions. Computer software, or just software, is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more programs and data held in the storage of the computer for some purpose. In other words, software is a set of programs, procedures, algorithms, and their documentation. Program software performs the function of the program it implements, either by directly providing instructions to the computer hardware or by serving as input to another piece of software
2.
Floating-point arithmetic
–
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation so as to support a trade-off between range and precision. A number is, in general, represented approximately to a fixed number of significant digits and scaled using an exponent in some fixed base. For example, 1.2345 = 12345 × 10^−4, with significand 12345, base 10, and exponent −4. The term floating point refers to the fact that a number's radix point can "float": it can be placed anywhere relative to the significant digits of the number. This position is indicated by the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A consequence of this dynamic range is that the numbers that can be represented are not uniformly spaced. Over the years, a variety of floating-point representations have been used in computers; however, since the 1990s, the most commonly encountered representation is that defined by the IEEE 754 standard. A floating-point unit is a part of a computer system designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number, usually as a string of digits, and there are several mechanisms by which strings of digits can represent numbers. In common mathematical notation, the string can be of any length. If the radix point is not specified, then the string implicitly represents an integer. In fixed-point systems, a position in the string is specified for the radix point; so a fixed-point scheme might be to use a string of 8 decimal digits with the point in the middle. The scaling factor, as a power of ten, is then indicated separately at the end of the number. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of: a signed digit string of a given length in a given base. 
This digit string is referred to as the significand or mantissa; the length of the significand determines the precision to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand, often just after or just before the most significant digit, and this article generally follows the convention that the radix point is set just after the most significant digit. It also consists of a signed integer exponent, which modifies the magnitude of the number. Using base 10 as an example, the number 152853.5047, which has ten decimal digits of precision, is represented as the significand 1528535047 together with 5 as the exponent. In storing such a number, the base need not be stored, since it will be the same for the entire range of supported numbers. Symbolically, this value is s ÷ b^(p−1) × b^e, where s is the significand, p is the precision, b is the base, and e is the exponent
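The symbolic value s ÷ b^(p−1) × b^e can be checked numerically. A minimal sketch in Python (the function name is our own, not from the article):

```python
def fp_value(significand: int, base: int, precision: int, exponent: int) -> float:
    """Evaluate s / b**(p - 1) * b**e: a significand with the radix point
    placed just after its most significant digit, scaled by b**e."""
    return significand / base ** (precision - 1) * base ** exponent

# The article's example: significand 1528535047 (10 digits), exponent 5.
print(fp_value(1528535047, 10, 10, 5))   # approximately 152853.5047
```

Note that the result is itself a binary floating-point number, so it only approximates the decimal value being modeled.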
3.
Double-precision floating-point format
–
Double-precision floating-point format is a computer number format that occupies 8 bytes (64 bits) in computer memory and represents a wide, dynamic range of values by using a floating point. Double-precision floating-point format usually refers to binary64, as specified by the IEEE 754 standard; in older computers, different floating-point formats of 8 bytes were used, e.g. GW-BASIC's double-precision data type was the 64-bit MBF floating-point format. Double-precision binary floating-point is a commonly used format on PCs, due to its wider range over single-precision floating point, in spite of its performance and bandwidth cost. As with the single-precision floating-point format, it lacks precision on integer numbers when compared with an integer format of the same size. It is commonly known simply as double. The IEEE 754 standard specifies a binary64 as having: a sign bit (1 bit), an exponent (11 bits), and a significand precision of 53 bits (52 explicitly stored). This gives 15 to 17 significant decimal digits of precision: if an IEEE 754 double-precision number is converted to a decimal string with at least 17 significant digits and then converted back to double-precision representation, the final result must match the original number. The format is written with the significand having an implicit integer bit of value 1; with the 52 bits of the fraction significand appearing in the memory format, the total precision is therefore 53 bits. Between 2^52 and 2^53, the spacing between representable numbers is exactly 1, so they are exactly the integers. For the next range, from 2^53 to 2^54, everything is multiplied by 2, so the representable numbers are the even ones. Conversely, for the range from 2^51 to 2^52, the spacing is 0.5. The spacing in the range from 2^n to 2^(n+1) is 2^(n−52), so as a fraction of the numbers themselves it is 2^−52; the maximum relative rounding error when rounding a number to the nearest representable one is therefore 2^−53. The 11-bit width of the exponent allows the representation of numbers between 10^−308 and 10^308, with full 15 to 17 decimal digits of precision; by compromising precision, the subnormal representation allows even smaller values, down to about 5 × 10^−324. The double-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 1023. 
Except for the exceptions above, all bit patterns are valid encodings, and the entire double-precision number is described by: (−1)^sign × 2^(exponent − exponent bias) × 1.fraction. In the case of subnormals, the double-precision number is described by: (−1)^sign × 2^(1 − exponent bias) × 0.fraction. Because there have been many floating-point formats with no network-standard representation for them, the XDR standard uses big-endian IEEE 754 as its representation. It may therefore appear strange that the widespread IEEE 754 floating-point standard does not specify endianness; theoretically, this means that even standard IEEE floating-point data written by one machine might not be readable by another. One area of computing where this is an issue is parallel code running on GPUs: for example, when using NVIDIA's CUDA platform on video cards designed for gaming, double-precision calculations are substantially slower than single-precision ones. Doubles are implemented in many programming languages in different ways, such as the following
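The sign/exponent/fraction layout of binary64 can be inspected directly; the following sketch uses Python's struct module (the helper name is ours):

```python
import struct

def binary64_fields(x: float):
    """Split an IEEE 754 binary64 value into sign, biased exponent, fraction."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                  # 1 bit
    exponent = (bits >> 52) & 0x7FF    # 11 bits, bias 1023
    fraction = bits & ((1 << 52) - 1)  # 52 explicitly stored bits
    return sign, exponent, fraction

# 1.0 is +1.0 x 2^0, so the stored exponent is the bias itself, 1023:
print(binary64_fields(1.0))    # (0, 1023, 0)
```

The smallest subnormal, about 5 × 10^−324, comes back as sign 0, exponent field 0, fraction 1.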
4.
William Kahan
–
He attended the University of Toronto, where he received his bachelor's degree in 1954, his master's degree in 1956, and his Ph.D. in 1958, all in the field of mathematics. Kahan is now professor of mathematics and of electrical engineering and computer sciences at the University of California, Berkeley. Kahan was the architect behind the IEEE 754-1985 standard for floating-point computation. He has been called "the Father of Floating Point," since he was instrumental in creating the original IEEE 754 specification, and Kahan continued his contributions through the IEEE 754 revision that led to the current IEEE 754 standard. In the 1980s he developed the program "paranoia", a benchmark that tests for a wide range of potential floating-point bugs; it would go on to detect the infamous Pentium division bug. He also developed the Kahan summation algorithm, an important algorithm for minimizing the error introduced when adding a sequence of finite-precision floating-point numbers. He coined the term "the Table-Maker's Dilemma" for the unknown cost of correctly rounding transcendental functions to some preassigned number of digits. When Hewlett-Packard introduced the original HP-35 pocket scientific calculator, its numerical accuracy in evaluating transcendental functions for some arguments was not optimal. Hewlett-Packard worked extensively with Kahan to enhance the accuracy of the algorithms, and this was documented at the time in the Hewlett-Packard Journal. He also contributed substantially to the design of the algorithms in the HP Voyager series
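The Kahan summation algorithm mentioned above is short enough to sketch in full; this Python version is a standard textbook rendering, not Kahan's own code:

```python
def kahan_sum(values):
    """Compensated summation: carry the rounding error of each addition
    forward in a separate variable instead of discarding it."""
    total = 0.0
    compensation = 0.0
    for x in values:
        y = x - compensation             # apply the correction from last time
        t = total + y                    # big + small: low-order bits of y are lost
        compensation = (t - total) - y   # recover the (algebraically zero) loss
        total = t
    return total

# Ten tiny addends that naive left-to-right summation drops entirely:
values = [1.0] + [1e-16] * 10
print(sum(values))        # 1.0 (each 1e-16 is rounded away)
print(kahan_sum(values))  # slightly greater than 1.0
```

Each 1e-16 is below half a unit in the last place of 1.0, so naive addition loses all of them; the compensation variable accumulates them until they are large enough to register.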
5.
Extended precision
–
Extended precision refers to floating-point number formats that provide greater precision than the basic floating-point formats. Extended precision formats support a basic format by minimizing roundoff and overflow errors in intermediate values of expressions on the base format. In contrast to extended precision, arbitrary-precision arithmetic refers to implementations of much larger numeric types using special software. The IBM 1130 offered two floating-point formats: a 32-bit standard precision format and a 40-bit extended precision format. Standard precision format contained a 24-bit two's complement significand, while extended precision utilized a 32-bit two's complement significand; the latter format could make full use of the CPU's 32-bit integer operations. The characteristic in both formats was an 8-bit field containing the power of two biased by 128. Floating-point arithmetic operations were performed by software, and double precision was not supported at all. The extended format occupied three 16-bit words, with the extra space simply ignored. The IBM System/360 supports a 32-bit short floating-point format and a 64-bit long floating-point format. The 360/85 and follow-on System/370 added support for a 128-bit extended format; these formats are still supported in the current design, where they are now called the hexadecimal floating-point formats. The IEEE 754 floating-point standard recommends that implementations provide extended precision formats. The standard specifies the minimum requirements for an extended format but does not specify an encoding; the encoding is the implementor's choice. The IA-32, x86-64, and Itanium processors support an 80-bit double extended precision format with a 64-bit significand. 
The Intel 8087 math coprocessor was the first x86 device which supported floating-point arithmetic in hardware. It was designed to support a 32-bit single precision format and a 64-bit double precision format for encoding and interchanging floating-point numbers. To mitigate the roundoff and overflow problems that arise in intermediate computations, the internal registers in the 8087 were designed to hold intermediate results in an 80-bit extended precision format, and the floating-point unit on all subsequent x86 processors has supported this format. As a result, software can be developed which takes advantage of the higher precision provided by this format, and that kind of gradual evolution towards wider precision was already in view when IEEE Standard 754 for Floating-Point Arithmetic was framed. The Motorola 6888x math coprocessors and the Motorola 68040 and 68060 processors support this same 64-bit significand extended precision type; the follow-on ColdFire processors do not support this 96-bit extended precision format. The x87 and Motorola 68881 80-bit formats meet the requirements of the IEEE 754 double extended format: this 80-bit format uses one bit for the sign of the significand, 15 bits for the exponent field, and 64 bits for the significand. The exponent field is biased by 16383, meaning that 16383 has to be subtracted from the value in the exponent field to compute the actual power of 2. An exponent field value of 32767 is reserved so as to enable the representation of special states such as infinity. If the exponent field is zero, the value is a denormal number. The m field is the combination of the integer and fraction parts in the above diagram: in contrast to the single- and double-precision formats, this format does not utilize an implicit/hidden bit; rather, bit 63 contains the integer part of the significand and bits 62-0 hold the fractional part
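The layout just described (explicit integer bit, 15-bit exponent biased by 16383) can be decoded with plain integer arithmetic. A sketch for finite normal values only, assuming the usual little-endian 10-byte storage order (function name ours):

```python
def decode_x87_extended(raw: bytes) -> float:
    """Decode a 10-byte little-endian x87 double extended value.
    Sketch: handles finite normal numbers only (no subnormals, inf, or NaN)."""
    assert len(raw) == 10
    bits = int.from_bytes(raw, "little")
    sign = bits >> 79                     # 1 bit
    exponent = (bits >> 64) & 0x7FFF      # 15 bits, bias 16383
    significand = bits & ((1 << 64) - 1)  # 64 bits; bit 63 is the explicit integer bit
    value = (significand / 2**63) * 2.0 ** (exponent - 16383)
    return -value if sign else value

# 1.0: exponent field 16383 (the bias), significand 0x8000000000000000
one = ((16383 << 64) | (1 << 63)).to_bytes(10, "little")
print(decode_x87_extended(one))   # 1.0
```

Dividing the 64-bit significand by 2^63 places the radix point just after the explicit integer bit, mirroring the absence of a hidden bit in this format.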
6.
IEEE 754
–
The IEEE Standard for Floating-Point Arithmetic is a technical standard for floating-point computation established in 1985 by the Institute of Electrical and Electronics Engineers. The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably. Many hardware floating-point units now use the IEEE 754 standard, and the international standard ISO/IEC/IEEE 60559:2011 has been approved for adoption through JTC1/SC25 under the ISO/IEEE PSDO Agreement and published. The binary formats in the original standard are included in the new standard along with three new basic formats. To conform to the current standard, an implementation must implement at least one of the basic formats as both an arithmetic format and an interchange format. As of September 2015, the standard was being revised to incorporate clarifications. An IEEE 754 format is a set of representations of numerical values and symbols; a format may also include how the set is encoded. A format comprises finite numbers, which may be either base 2 or base 10, together with two infinities and two kinds of NaN. Each finite number is described by three integers: s = a sign (zero or one), c = a significand, and q = an exponent. The numerical value of a finite number is (−1)^s × c × b^q, where b is the base, also called the radix. For example, if the base is 10, the sign is 1 (indicating negative), the significand is 12345, and the exponent is −3, then the value of the number is −12.345. There are two kinds of NaN: a quiet NaN and a signaling NaN. A NaN may carry a payload that is intended for diagnostic information indicating the source of the NaN; the sign of a NaN has no meaning, but it may be predictable in some circumstances. For a decimal format with seven-digit precision, the smallest non-zero positive number that can be represented is 1×10^−101 and the largest is 9999999×10^90. The numbers −b^(1−emax) and b^(1−emax) are the smallest-magnitude normal numbers; non-zero numbers smaller in magnitude than these are called subnormal numbers. Zero values are finite values with significand 0; these are signed zeros, and the sign bit specifies whether a zero is +0 or −0. 
Some numbers may have several representations in the model that has just been described: for instance, if b=10 and p=7, −12.345 can be represented by −12345×10^−3, −123450×10^−4, and −1234500×10^−5. However, for most operations, such as arithmetic operations, the result does not depend on the representation of the inputs. For the decimal formats, any representation is valid, and the set of representations of a given value is called a cohort; when a result can have several representations, the standard specifies which member of the cohort is chosen. For the binary formats, the representation is made unique by choosing the smallest representable exponent. For numbers with an exponent in the normal range, the leading bit of the significand will always be 1. Consequently, the leading 1 bit can be implied rather than explicitly present in the memory encoding; this rule is called the leading bit convention, the implicit bit convention, or the hidden bit convention
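The cohort behaviour of the decimal formats can be observed in Python's decimal module, which follows the same IEEE 754 decimal arithmetic model:

```python
from decimal import Decimal

# -12.345 as three members of one cohort: equal in value,
# but each keeps its own exponent.
a = Decimal("-12.345")    # -12345   x 10^-3
b = Decimal("-12.3450")   # -123450  x 10^-4
c = Decimal("-12.34500")  # -1234500 x 10^-5
print(a == b == c)        # True: numerically identical
print(a.as_tuple().exponent, b.as_tuple().exponent, c.as_tuple().exponent)  # -3 -4 -5
```

The as_tuple() call exposes the stored exponent, showing that the three equal values really are distinct representations.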
7.
Half-precision floating-point format
–
In computing, half precision is a binary floating-point computer number format that occupies 16 bits in computer memory. In IEEE 754-2008 the 16-bit base-2 format is referred to as binary16. It is intended for storage of many floating-point values where higher precision is not needed. Nvidia and Microsoft defined the half datatype in the Cg language, released in early 2002, and implemented it in silicon in the GeForce FX, released in late 2002. The hardware-accelerated programmable shading group led by John Airey at SGI invented the s10e5 data type in 1997 as part of the design effort; this is described in a SIGGRAPH 2000 paper and further documented in US patent 7518615. The format is used in several computer graphics environments including OpenEXR, JPEG XR, OpenGL, Cg, and D3DX. The advantage over 8-bit or 16-bit binary integers is that the increased dynamic range allows for more detail to be preserved in highlights. The advantage over 32-bit single-precision binary formats is that it requires half the storage and bandwidth. The F16C extension allows x86 processors to convert half-precision floats to and from single-precision floats. The format has an implicit lead bit with value 1 unless the exponent field is stored with all zeros; thus only 10 bits of the significand appear in the memory format, but the total precision is 11 bits. In IEEE 754 parlance, there are 10 bits of significand, but 11 bits of significand precision. The half-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 15, also known as the exponent bias in the IEEE 754 standard. The stored exponents 00000₂ and 11111₂ are interpreted specially. The minimum strictly positive (subnormal) value is 2^−24 ≈ 5.96 × 10^−8. The minimum positive normal value is 2^−14 ≈ 6.10 × 10^−5. The maximum representable value is (2 − 2^−10) × 2^15 = 65504. Examples of the format are given in the bit representation of the floating-point value, including the sign bit, exponent, and significand; in one worked rounding example, the bits beyond the rounding point are 0101, which is less than 1/2 of a unit in the last place, so the value rounds down. ARM processors support an alternative half-precision format, which does away with the special case for an exponent value of 31. It is almost identical to the IEEE format, but there is no encoding for infinity or NaNs; instead, an exponent of 31 encodes normalized numbers in the range 65536 to 131008
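Since Python 3.6, the struct module can pack and unpack binary16 directly (format character 'e'), which makes the constants above easy to verify:

```python
import struct

def binary16_bits(x: float) -> int:
    """Bit pattern of x after rounding to IEEE 754 binary16."""
    (bits,) = struct.unpack(">H", struct.pack(">e", x))
    return bits

print(hex(binary16_bits(1.0)))      # 0x3c00: sign 0, exponent 15 (the bias), fraction 0
print(hex(binary16_bits(65504.0)))  # 0x7bff: the largest finite value, (2 - 2**-10) * 2**15
print(struct.unpack(">e", bytes([0x04, 0x00]))[0])  # 0x0400 -> 2**-14, smallest normal
```

Packing 1.0 shows the offset-binary exponent directly: the stored exponent equals the bias, 15.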
8.
Single-precision floating-point format
–
Single-precision floating-point format is a computer number format that occupies 4 bytes (32 bits) in computer memory and represents a wide dynamic range of values by using a floating point. In IEEE 754-2008 the 32-bit base-2 format is referred to as binary32; it was called single in IEEE 754-1985. In older computers, different floating-point formats of 4 bytes were used, e.g. GW-BASIC's single-precision data type was the 32-bit MBF floating-point format. One of the first programming languages to provide single- and double-precision floating-point data types was Fortran. Before the widespread adoption of IEEE 754-1985, the representation and properties of floating-point data types depended on the computer manufacturer and computer model. Single-precision binary floating-point is used due to its wider range over fixed point of the same bit width, even at the cost of precision. A signed 32-bit integer can have a maximum value of 2^31 − 1 = 2,147,483,647; as an example, the 32-bit integer 2,147,483,647 converts to 2,147,483,648 in IEEE 754 form, the nearest representable value. Single precision is termed REAL in Fortran, float in C, C++, C#, and Java, Float in Haskell, and Single in Object Pascal, Visual Basic, and MATLAB. However, float in Python, Ruby, PHP, and OCaml refers to a double-precision number, and in most implementations of PostScript the only real precision is single. The IEEE 754 standard specifies a binary32 as having: a sign bit (1 bit), an exponent width of 8 bits, and a significand precision of 24 bits (23 explicitly stored). This gives from 6 to 9 significant decimal digits of precision. The sign bit determines the sign of the number, which is the sign of the significand as well. The exponent field can be interpreted either as an 8-bit signed integer from −128 to 127 or as an 8-bit unsigned integer from 0 to 255; the latter is the accepted biased form in the IEEE 754 binary32 definition. Exponents range from −126 to +127 because the values corresponding to −127 (all zeros) and +128 (all ones) are reserved for special numbers. The true significand includes 23 fraction bits to the right of the binary point and an implicit leading bit with value 1, unless the exponent is stored with all zeros. 
Thus only 23 fraction bits of the significand appear in the memory format. In the worked example, where the fraction field has a single 1 bit two places below the binary point and the stored exponent is 124, the significand is 1 + Σ_{i=1}^{23} b_{23−i} · 2^{−i} = 1 + 1·2^{−2} = 1.25 ∈ [1, 2). Thus, value = (−1)^0 × 1.25 × 2^{124−127} = 1.25 × 2^{−3} = +0.15625. Note: 1 + 2^−23 ≈ 1.000000119, 2 − 2^−23 ≈ 1.999999881, 2^−126 ≈ 1.17549435 × 10^−38, and 2^+127 ≈ 1.70141183 × 10^+38. The single-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 127. The stored exponents 00H and FFH are interpreted specially. The minimum positive normal value is 2^−126 ≈ 1.18 × 10^−38 and the minimum positive (subnormal) value is 2^−149 ≈ 1.4 × 10^−45. In general, refer to the IEEE 754 standard itself for the conversion of a real number into its equivalent binary32 format
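The 0.15625 example can be reproduced with the struct module (helper name ours):

```python
import struct

def binary32_fields(x: float):
    """Sign, biased exponent, and fraction bits of x as an IEEE 754 binary32."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

# 0.15625 = +1.25 x 2^-3: stored exponent 124 (= -3 + 127),
# fraction 0.25 -> a single 1 bit two places below the binary point.
print(binary32_fields(0.15625))   # (0, 124, 0x200000)
```

The fraction value 0x200000 is 2^21, i.e. bit b21 set, matching the 1·2^−2 term in the significand sum.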
9.
Decimal32 floating-point format
–
In computing, decimal32 is a decimal floating-point computer numbering format that occupies 4 bytes in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations. Like the binary16 format, it is intended for memory-saving storage. Decimal32 supports 7 decimal digits of significand and an exponent range of −95 to +96. Because the significand is not normalized, most values with fewer than 7 significant digits have multiple possible representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal32 floating point is a new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as with ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative encoding methods for decimal32 values, and the standard does not specify how to signify which representation is used. In one representation method, based on binary integer decimal, the significand is represented as a binary-coded positive integer. The other, alternative, representation method is based on densely packed decimal for most of the significand. Both alternatives provide exactly the same range of representable numbers: 7 digits of significand and 3×2^6 = 192 possible exponent values. The remaining combinations encode infinities and NaNs. The binary-integer method uses a binary significand from 0 to 10^7−1 = 9999999 = 98967F₁₆ = 100110001001011001111111₂. The encoding can represent binary significands up to 10×2^20−1 = 10485759 = 9FFFFF₁₆ = 100111111111111111111111₂. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher. If the 2 bits after the sign bit are 11, then the 8-bit exponent field is shifted 2 bits to the right, and in this case there is an implicit leading 3-bit sequence 100 in the true significand; compare having an implicit 1 in the significand of normal values for the binary formats. 
Note also that the 2-bit sequences 00, 01, or 10 after the sign bit are part of the exponent field. In the densely packed decimal encoding, the leading digit is between 0 and 9, and the rest of the significand uses the DPD encoding. The six bits after that are the exponent continuation field, providing the less-significant bits of the exponent, and the last 20 bits are the significand continuation field, consisting of two 10-bit declets. Each declet encodes three decimal digits using the DPD encoding. The DPD/3BCD transcoding for the declets is given by the following table, where b9…b0 are the bits of the DPD and d2, d1, d0 are the three BCD digits. The 8 decimal values whose digits are all 8s or 9s have four codings each; the bits marked x in the table above are ignored on input, but will always be 0 in computed results
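The numeric constants quoted for decimal32 are easy to re-derive (a consistency check, not an encoder):

```python
# Largest 7-digit significand, and largest binary significand encodable
# in the shifted (implicit-100) form, as quoted in the text:
assert 10**7 - 1 == 0x98967F == 9999999
assert 10 * 2**20 - 1 == 0x9FFFFF == 10485759
# 3 x 2^6 exponent values matches the stated range -95 .. +96:
assert 3 * 2**6 == 192 == 96 - (-95) + 1
print("decimal32 constants check out")
```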
10.
Decimal64 floating-point format
–
In computing, decimal64 is a decimal floating-point computer numbering format that occupies 8 bytes in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations. Decimal64 supports 16 decimal digits of significand and an exponent range of −383 to +384, i.e. ±0.000000000000000×10^−383 to ±9.999999999999999×10^384. In contrast, the corresponding binary format, which is the most commonly used type, has an approximate range of ±0.000000000000001×10^−308 to ±1.797693134862315×10^308. Because the significand is not normalized, most values with fewer than 16 significant digits have multiple representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal64 floating point is a new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as with ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative encoding methods for decimal64 values. Both alternatives provide exactly the same range of representable numbers: 16 digits of significand and 3×2^8 = 768 possible exponent values. In both cases, the most significant 4 bits of the significand are combined with the most significant 2 bits of the exponent to use 30 of the 32 possible values of a 5-bit combination field; the remaining combinations encode infinities and NaNs. In the cases of infinity and NaN, all other bits of the encoding are ignored, so it is possible to initialize an array to infinities or NaNs by filling it with a single byte value. The binary-integer method uses a binary significand from 0 to 10^16−1 = 9999999999999999 = 2386F26FC0FFFF₁₆ = 100011100001101111001001101111110000001111111111111111₂. The encoding, completely stored on 64 bits, can represent binary significands up to 10×2^50−1 = 11258999068426239 = 27FFFFFFFFFFFF₁₆, but values larger than 10^16−1 are illegal. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher. If the 2 bits after the sign bit are 11, then the 10-bit exponent field is shifted 2 bits to the right. 
In this case there is an implicit leading 3-bit sequence 100 for the most significant bits of the true significand; compare having an implicit leading 1 bit in the significand of normal values for the binary formats. Note also that the 2-bit sequences 00, 01, or 10 after the sign bit are part of the exponent field. Note that the leading bits of the significand field do not encode the most significant decimal digit; the highest valid significand is 9999999999999999, whose stored binary encoding is 011100001101111001001101111110000001111111111111111₂ (with the leading 100 implicit). In the densely packed decimal encoding, the leading digit is between 0 and 9, and the rest of the significand uses the DPD encoding. The eight bits after that are the exponent continuation field, providing the less-significant bits of the exponent, and the last 50 bits are the significand continuation field, consisting of five 10-bit declets
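As with decimal32, the decimal64 constants above can be re-derived directly:

```python
# Largest 16-digit significand and largest encodable binary significand:
assert 10**16 - 1 == 0x2386F26FC0FFFF == 9999999999999999
assert 10 * 2**50 - 1 == 0x27FFFFFFFFFFFF == 11258999068426239
# 3 x 2^8 = 768 exponent values matches the stated range -383 .. +384:
assert 3 * 2**8 == 768 == 384 - (-383) + 1
print("decimal64 constants check out")
```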
11.
Decimal128 floating-point format
–
In computing, decimal128 is a decimal floating-point computer numbering format that occupies 16 bytes in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial and tax computations. Decimal128 supports 34 decimal digits of significand and an exponent range of −6143 to +6144, i.e. ±0.000000000000000000000000000000000×10^−6143 to ±9.999999999999999999999999999999999×10^6144. Therefore, decimal128 has the greatest range of values compared with the other IEEE basic floating-point formats. Because the significand is not normalized, most values with fewer than 34 significant digits have multiple possible representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal128 floating point is a new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as with ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative encoding methods for decimal128 values, and the standard does not specify how to signify which representation is used. In one representation method, based on binary integer decimal, the significand is represented as a binary-coded positive integer. The other, alternative, representation method is based on densely packed decimal for most of the significand. Both alternatives provide exactly the same range of representable numbers: 34 digits of significand and 3×2^12 = 12288 possible exponent values. In both cases, the most significant 4 bits of the significand are combined with the most significant 2 bits of the exponent to use 30 of the 32 possible values of the 5-bit combination field; the remaining combinations encode infinities and NaNs. In the case of infinity and NaN, all other bits of the encoding are ignored, so it is possible to initialize an array to infinities or NaNs by filling it with a single byte value. 
The encoding can represent binary significands up to 10×2^110−1 = 12980742146337069071326240823050239, but values larger than 10^34−1 are illegal. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher. If the 2 bits after the sign bit are 11, then the 14-bit exponent field is shifted 2 bits to the right, and in this case there is an implicit leading 3-bit sequence 100 in the true significand; compare having an implicit 1 in the significand of normal values for the binary formats. Note also that the 2-bit sequences 00, 01, or 10 after the sign bit are part of the exponent field. For the decimal128 format, all of these shifted-form significands are out of the valid range and are thus decoded as zero, but the pattern is the same as in decimal32 and decimal64. In the densely packed decimal encoding, the leading digit is between 0 and 9, and the rest of the significand uses the DPD encoding. The twelve bits after that are the exponent continuation field, providing the less-significant bits of the exponent, and the last 110 bits are the significand continuation field, consisting of eleven 10-bit declets. Each declet encodes three decimal digits using the DPD encoding. The DPD/3BCD transcoding for the declets is given by the following table, where b9…b0 are the bits of the DPD and d2, d1, d0 are the three BCD digits. The 8 decimal values whose digits are all 8s or 9s have four codings each
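The decimal128 bounds quoted above can likewise be re-derived:

```python
# Largest encodable binary significand, as quoted in the text:
assert 10 * 2**110 - 1 == 12980742146337069071326240823050239
# Even the smallest significand carrying the implicit 100 prefix, 2^113,
# already exceeds the largest legal 34-digit significand:
assert 2**113 > 10**34 - 1
# 3 x 2^12 exponent values matches the stated range -6143 .. +6144:
assert 3 * 2**12 == 12288 == 6144 - (-6143) + 1
print("decimal128 constants check out")
```

The second assertion is why every shifted-form significand decodes as zero in this format: all such values lie above 10^34−1.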
12.
Octuple-precision floating-point format
–
In computing, octuple precision is a binary floating-point-based computer number format that occupies 32 bytes (256 bits) in computer memory. This 256-bit octuple precision format is for applications requiring results in higher than quadruple precision; it is rarely used, and very few environments support it. The format has an implicit lead bit with value 1 unless the exponent is stored with all zeros; thus only 236 bits of the significand appear in the memory format, but the total precision is 237 bits. The stored exponents 00000₁₆ and 7FFFF₁₆ are interpreted specially. The minimum strictly positive (subnormal) value is 2^−262378 ≈ 10^−78984 and has a precision of only one bit. The minimum positive normal value is 2^−262142 ≈ 2.4824 × 10^−78913. The maximum representable value is 2^262144 − 2^261907 ≈ 1.6113 × 10^78913. Examples of the format are given in bit representation, in hexadecimal, of the floating-point value, including the sign, exponent, and significand; in one worked rounding example, the bits beyond the rounding point are 0101, which is less than 1/2 of a unit in the last place, so the value rounds down. Octuple precision is rarely implemented since usage of it is extremely rare. Apple Inc. had an implementation of addition, subtraction, and multiplication of octuple-precision numbers with a 224-bit two's complement significand. One can use general arbitrary-precision arithmetic libraries to obtain octuple precision; there is little to no hardware support for it, and octuple-precision arithmetic is generally impractical for most commercial applications
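Python's unbounded integers make it easy to sanity-check the octuple-precision magnitudes quoted above:

```python
import math

# Largest finite value: 2^262144 - 2^261907, about 1.6 x 10^78913.
max_octuple = 2**262144 - 2**261907
assert max_octuple.bit_length() == 262144
print(round(262144 * math.log10(2)))    # 78913: decimal exponent of the maximum
# Smallest subnormal, 2^-262378, is about 10^-78984:
print(round(-262378 * math.log10(2)))   # -78984
```

Exact arithmetic on the full integer is cheap here; only the decimal exponents are estimated via log10.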
13.
Minifloat
–
In computing, minifloats are floating-point values represented with very few bits. Predictably, they are not well suited for general-purpose numerical calculations; they are used for special purposes, most often in computer graphics, where iterations are small and precision has aesthetic effects. They are also encountered as a pedagogical tool in computer science courses to demonstrate the properties and structures of floating-point arithmetic. Minifloats with 16 bits are half-precision numbers; there are also minifloats with 8 bits or even fewer. Minifloats can be designed following the principles of the IEEE 754 standard; in this case they must obey the rules for the frontier between subnormal and normal numbers and must have special patterns for infinity and NaN. Normalized numbers are stored with a biased exponent. The new revision of the standard, IEEE 754-2008, has 16-bit binary minifloats. The Radeon R300 and R420 GPUs used an fp24 floating-point format with 7 bits of exponent and 16 bits of mantissa; Full Precision in Direct3D 9.0 is a proprietary 24-bit floating-point format. Microsoft's D3D9 graphics API initially supported both FP24 and FP32 as Full Precision, as well as FP16 as Partial Precision, for vertex and pixel shader computations. In computer graphics minifloats are sometimes used to represent only integral values. If at the same time subnormal values should exist, the least subnormal number has to be 1; this statement can be used to calculate the bias value. The following example demonstrates the calculation as well as the underlying principles. A minifloat in one byte with one sign bit, four exponent bits and three mantissa bits is to be used to represent integral values. All IEEE 754 principles should be valid; the only free value is the exponent bias, which will come out as −2. The unknown exponent is called x for the moment; numbers in a base other than ten are marked with a subscript giving the base.
The bit patterns below have spaces to visualize their parts (sign, exponent, mantissa):

0 0000 000 = 0
The mantissa is extended with 0.:
0 0000 001 = 0.001₂ × 2^x = 0.125 × 2^x = 1
0 0000 111 = 0.111₂ × 2^x = 0.875 × 2^x = 7
The mantissa is extended with 1.:
0 0001 000 = 1.000₂ × 2^x = 1 × 2^x = 8
0 0001 001 = 1.001₂ × 2^x = 1.125 × 2^x = 9
0 0010 000 = 1.000₂ × 2^(x+1) = 1 × 2^(x+1) = 16
0 0010 001 = 1.001₂ × 2^(x+1) = 1.125 × 2^(x+1) = 18
0 1110 000 = 1.000₂ × 2^(x+13) = 1.000 × 2^(x+13) = 65536
0 1110 001 = 1.001₂ × 2^(x+13) = 1.125 × 2^(x+13) = 73728

Since the least subnormal number must be 1, x = 3; therefore the bias has to be −2, that is, every stored exponent has to be decreased by −2 (increased by 2) to get the numerical exponent.
14.
Microsoft Binary Format
–
In computing, Microsoft Binary Format (MBF) was a format for floating-point numbers used in Microsoft's BASIC language products, including MBASIC, GW-BASIC and QuickBASIC prior to version 4.00. In 1975, Bill Gates and Paul Allen were working on Altair BASIC. One thing still missing was code to handle floating-point numbers, needed to support calculations with very big and very small numbers, which would be particularly useful for science and engineering; one of the uses of the Altair was as a scientific calculator. At a dinner at Currier House, a residential house at Harvard, Gates mentioned the problem. One of the diners, Monte Davidoff, told them he had written floating-point routines before and convinced Gates to let him write the package. At the time there was no standard for floating-point numbers, so Davidoff had to come up with his own. He decided 32 bits would allow enough range and precision. When Allen had to demonstrate it to MITS, it was the first time it ran on an actual Altair; but it worked, and when he entered ‘PRINT 2+2’, Davidoff's adding routine gave the right answer. The source code for Altair BASIC was thought to have been lost to history, but resurfaced in 2000; it had been sitting behind the file cabinet of Gates's former tutor and dean Harry Lewis. A comment in the source credits Davidoff as the writer of Altair BASIC's math package. Altair BASIC took off, and soon most early home computers ran some form of Microsoft BASIC. The BASIC port for the 6502 CPU, such as used in the Commodore PET, took up more space due to the lower code density of the 6502; because of this it would not fit in a single ROM chip together with the machine-specific input and output code. Since an extra chip was necessary anyway, extra space was available. Not long afterwards the Z80 ports, such as Level II BASIC for the TRS-80, introduced the 64-bit, double-precision format as a separate data type from the 32-bit, single-precision format.
Even so, for a while MBF became the de facto floating-point format on computers, to the point where people still occasionally encounter legacy files. As early as 1976, Intel was starting the development of a floating-point coprocessor; Intel hoped to be able to sell a chip containing good implementations of all the operations found in the widely varying maths software libraries. John Palmer, who managed the project, contacted William Kahan. The first VAX, the VAX-11/780, had just come out in late 1977, and its floating point was highly regarded. However, seeking to market their chip to the broadest possible market, Intel wanted the best floating point possible. When rumours of Intel's new chip reached its competitors, they started a standardization effort, called IEEE 754, to prevent Intel from gaining too much ground. Kahan got Palmer's permission to participate; he was allowed to explain Intel's design decisions and their underlying reasoning. VAX's floating-point formats differed from MBF only in that the sign was in the most significant bit. It turned out that for double-precision numbers, an 8-bit exponent isn't wide enough for some wanted operations; both Kahan's proposal and a counter-proposal by DEC therefore used 11 bits, like the time-tested 60-bit floating-point format of the CDC 6600 from 1965. The next year DEC had a study done in order to demonstrate that gradual underflow was a bad idea. In 1985 the standard was ratified, but it had already become the de facto standard a year earlier, implemented by many manufacturers.
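For readers who do encounter legacy MBF files, a decoder is short. This is a sketch based on the commonly documented single-precision MBF layout (the function name is ours): the last of the four little-endian bytes is an exponent with bias 128, the top bit of the third byte is the sign, and the remaining 23 bits plus an implicit leading 1 form a significand in [0.5, 1):

```python
def mbf32_to_float(b):
    """Decode a 4-byte little-endian MBF single-precision value.

    Sketch of the commonly documented layout: byte 3 = exponent (bias 128),
    top bit of byte 2 = sign, remaining 23 bits + implicit leading 1 =
    significand in [0.5, 1)."""
    exp = b[3]
    if exp == 0:
        return 0.0                               # zero exponent encodes zero
    sign = -1.0 if b[2] & 0x80 else 1.0
    frac = ((b[2] & 0x7F) << 16) | (b[1] << 8) | b[0]
    return sign * (0.5 + frac / 2.0 ** 24) * 2.0 ** (exp - 128)

assert mbf32_to_float(bytes([0, 0, 0, 0x81])) == 1.0      # 0.5 * 2**1
assert mbf32_to_float(bytes([0, 0, 0x80, 0x81])) == -1.0  # sign bit set
assert mbf32_to_float(bytes([0, 0, 0x20, 0x82])) == 2.5   # 0.625 * 2**2
```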
15.
Exponentiation
–
Exponentiation is a mathematical operation, written as b^n, involving two numbers, the base b and the exponent n. The exponent is usually shown as a superscript to the right of the base. Some common exponents have their own names: the exponent 2 is called the square of b, or b squared; the exponent 3 is called the cube of b, or b cubed. The exponent −1 of b, or 1/b, is called the reciprocal of b. When n is a positive integer and b is not zero, b^−n is naturally defined as 1/b^n, preserving the property b^n × b^m = b^(n+m). The definition of exponentiation can be extended to any real or complex exponent, and exponentiation by integer exponents can also be defined for a variety of algebraic structures. The term power was used by the Greek mathematician Euclid for the square of a line. Archimedes discovered and proved the law of exponents 10^a × 10^b = 10^(a+b), necessary to manipulate powers of 10. In the late 16th century, Jost Bürgi used Roman numerals for exponents. Early in the 17th century, the first form of our modern exponential notation was introduced by René Descartes in his text titled La Géométrie; there, the notation is introduced in Book I. Nicolas Chuquet had used a form of exponential notation in the 15th century. The word exponent was coined in 1544 by Michael Stifel; Samuel Jeake introduced the term indices in 1696. In the 16th century Robert Recorde used the terms square, cube, zenzizenzic, sursolid, zenzicube, and second sursolid. Biquadrate has been used to refer to the fourth power as well. Some mathematicians used exponents only for powers greater than two, preferring to represent squares as repeated multiplication; thus they would write polynomials, for example, as ax + bxx + cx^3 + d. Another historical synonym, involution, is now rare and should not be confused with its more common meaning.
In 1748 Leonhard Euler wrote, "consider exponentials or powers in which the exponent itself is a variable; it is clear that quantities of this kind are not algebraic functions, since in those the exponents must be constant." With this introduction of transcendental functions, Euler laid the foundation for the introduction of the natural logarithm as the inverse function for y = e^x. The expression b^2 = b ⋅ b is called the square of b because the area of a square with side-length b is b^2; the expression b^3 = b ⋅ b ⋅ b is called the cube of b because the volume of a cube with side-length b is b^3. The exponent indicates how many copies of the base are multiplied together. For example, 3^5 = 3 ⋅ 3 ⋅ 3 ⋅ 3 ⋅ 3 = 243: the base 3 appears 5 times in the multiplication, because the exponent is 5.
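The definitions and laws above can be checked with exact arithmetic (Fraction is used so that b^−n = 1/b^n holds exactly rather than in floating point):

```python
from fractions import Fraction

b, n, m = 3, 5, 2
assert b ** n == 3 * 3 * 3 * 3 * 3 == 243        # the exponent counts the factors
assert b ** n * b ** m == b ** (n + m)           # law of exponents
assert 10 ** 2 * 10 ** 3 == 10 ** 5              # Archimedes' 10^a 10^b = 10^(a+b)
assert Fraction(b) ** -n == Fraction(1, b ** n)  # b^-n defined as 1/b^n
```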
16.
0 (number)
–
0 is both a number and the numerical digit used to represent that number in numerals. The number 0 fulfills a central role in mathematics as the additive identity of the integers, real numbers, and many other algebraic structures. As a digit, 0 is used as a placeholder in place value systems. Names for the number 0 in English include zero, nought or naught, nil, or—in contexts where at least one adjacent digit distinguishes it from the letter O—oh or o. Informal or slang terms for zero include zilch and zip; ought and aught, as well as cipher, have also been used historically. The word zero came into the English language via French zéro from Italian zero. In pre-Islamic times the Arabic word ṣifr had the meaning empty; ṣifr evolved to mean zero when it was used to translate śūnya from India. The first known English use of zero was in 1598. The Italian mathematician Fibonacci, who grew up in North Africa, is credited with introducing the decimal system to Europe; the term he used became zefiro in Italian, and was contracted to zero in Venetian. The Italian word zefiro was already in existence and may have influenced the spelling when transcribing Arabic ṣifr. Modern usage: there are different words used for the number or concept of zero depending on the context. For the simple notion of lacking, the words nothing and none are often used; sometimes the words nought, naught and aught are used. Several sports have specific words for zero, such as nil in football and love in tennis, and it is often called oh in the context of telephone numbers. Slang words for zero include zip, zilch and nada; duck egg and goose egg are also slang for zero. Ancient Egyptian numerals were base 10; they used hieroglyphs for the digits and were not positional. By 1740 BC, the Egyptians had a symbol for zero in accounting texts. The symbol nfr, meaning beautiful, was also used to indicate the base level in drawings of tombs and pyramids.
By the middle of the 2nd millennium BC, Babylonian mathematics had a sophisticated sexagesimal positional numeral system; the lack of a positional value was indicated by a space between sexagesimal numerals. By 300 BC, a punctuation symbol was co-opted as a placeholder in the same Babylonian system. In a tablet unearthed at Kish, the scribe Bêl-bân-aplu wrote his zeros with three hooks, rather than two slanted wedges. The Babylonian placeholder was not a true zero because it was not used alone.
17.
Signed zero
–
Signed zero is zero with an associated sign. In ordinary arithmetic, the number 0 does not have a sign, so that −0, +0 and 0 are identical. In computing, however, some number representations allow for two zeros: this occurs in the sign-and-magnitude and ones' complement signed number representations for integers, and in most floating-point number representations. The number 0 is usually encoded as +0, but can be represented by either +0 or −0. The IEEE 754 standard for floating-point arithmetic requires both +0 and −0. Real arithmetic with signed zeros can be considered a variant of the extended real number line such that 1/−0 = −∞ and 1/+0 = +∞; division is only undefined for ±0/±0. Negatively signed zero echoes the mathematical concept of approaching 0 from below as a one-sided limit. The notation −0 may be used informally to denote a negative number that has been rounded to zero. The concept of signed zero also has some theoretical applications in statistical mechanics. On the other hand, the concept of signed zero runs contrary to the assumption made in most mathematical fields that negative zero is the same thing as zero. The widely used two's complement encoding does not allow a negative zero. In a 1+7-bit sign-and-magnitude representation for integers, negative zero is represented by the bit string 10000000; in an 8-bit ones' complement representation, negative zero is represented by the bit string 11111111. In all three encodings, positive zero is represented by 00000000. In IEEE 754 binary floating-point numbers, zero values are represented by the biased exponent and significand both being zero; negative zero has the sign bit set to one. One may obtain negative zero as the result of certain computations, for instance as the result of arithmetic underflow on a negative number, or of −1.0 × 0.0. The IEEE 754 floating-point standard specifies the behavior of positive zero and negative zero under various operations; the outcome may depend on the current IEEE rounding mode settings. In systems that include both signed and unsigned zeros, the notation 0+ and 0− is sometimes used for signed zeros.
Addition and multiplication are commutative, but there are special rules that have to be followed. The = sign below shows the result of the operations. However, x + 0 can be replaced by x with rounding to nearest, except when x is −0. An exception handler is called if enabled for the corresponding flag. According to the IEEE 754 standard, negative zero and positive zero should compare as equal with the usual comparison operators, like the == operator of C.
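These rules are easy to observe from any IEEE 754 environment. A small Python demonstration: the two zeros compare equal, yet the sign bit is observable through copysign, the bit encoding, and sign-sensitive functions such as atan2:

```python
import math
import struct

pz, nz = 0.0, -0.0
assert pz == nz                           # IEEE 754: +0 and -0 compare equal
assert math.copysign(1.0, nz) == -1.0     # but the sign bit is observable
assert struct.pack('>d', nz)[0] == 0x80   # sign bit set in the 64-bit encoding
assert struct.pack('>d', pz)[0] == 0x00

# atan2 distinguishes the one-sided limits that signed zero encodes:
assert math.atan2(pz, -1.0) == math.pi
assert math.atan2(nz, -1.0) == -math.pi
```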
18.
Denormal number
–
In computer science, denormal numbers or denormalized numbers fill the underflow gap around zero in floating-point arithmetic: any non-zero number with magnitude smaller than the smallest normal number is subnormal. In a normal floating-point value, there are no leading zeros in the significand; instead, leading zeros are moved to the exponent, so 0.0123 would be written as 1.23 × 10^−2. Denormal numbers are numbers where this representation would result in an exponent below the minimum exponent; such numbers are represented using leading zeros in the significand. The significand of an IEEE floating-point number is the part that represents the significant digits. For a positive normalised number it can be written as m0.m1m2m3…m(p−2)m(p−1), where p is the precision; notice that for a binary radix, the leading binary digit is always 1. In a denormal number, since the exponent is the least that it can be, zero is the leading significand digit. By filling the underflow gap like this, significant digits are lost, but not as abruptly as with the flush-to-zero-on-underflow approach. Hence the production of a denormal number is sometimes called gradual underflow, because it allows a calculation to lose precision slowly when the result is small. In IEEE 754-2008, denormal numbers are renamed subnormal numbers and are supported in both binary and decimal formats. In binary interchange formats, subnormal numbers are encoded with a biased exponent of 0, but are interpreted with the value of the smallest allowed exponent. In decimal interchange formats they require no special encoding because the format supports unnormalized numbers directly. Mathematically speaking, the normalized floating-point numbers of a given sign are roughly logarithmically spaced, and as such any finite-sized normal float cannot include zero; the denormal floats are a set of values which span the gap between the negative and positive normal floats.
Denormal numbers provide the guarantee that addition and subtraction of floating-point numbers never underflows: without gradual underflow, the subtraction a − b can underflow and produce zero even though the values are not equal. This can, in turn, lead to division-by-zero errors that cannot occur when gradual underflow is used. Denormal numbers were implemented in the Intel 8087 while the IEEE 754 standard was being written. Some implementations of floating-point units do not directly support denormal numbers in hardware, but rather trap to some kind of software support. While this may be transparent to the user, it can result in calculations which produce or consume denormal numbers being much slower than similar calculations on normal numbers. Some systems handle denormal values in hardware, in the same way as normal values; others leave the handling of denormal values to system software, only handling normal values in hardware. Handling denormal values in software always leads to a significant decrease in performance. This speed difference can be a security risk: researchers showed that it provides a timing side channel that allows a malicious web site to extract page content from another site inside a web browser. Some applications need tuned code to avoid denormal numbers, either to maintain accuracy or to avoid the performance penalty in some processors.
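Both the gradual-underflow guarantee and the underflow gap itself can be seen directly with IEEE 754 doubles, where the least normal is 2^−1022 and the least subnormal is 2^−1074:

```python
import sys

tiny = sys.float_info.min               # least positive normal, 2.0**-1022
sub = 2.0 ** -1074                      # least positive subnormal (5e-324)
assert 0.0 < sub < tiny
assert sub / 2 == 0.0                   # below the least subnormal: underflow to zero

# Gradual underflow: a - b is zero only when a == b.
a, b = 1.5 * tiny, tiny
assert a != b
assert a - b != 0.0                     # the difference is subnormal, not zero
assert a - b == 2.0 ** -1023            # exactly half the least normal
```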
19.
Infinity
–
Infinity is an abstract concept describing something without any bound, or larger than any number. In mathematics, infinity is often treated as a number, but it is not the same sort of number as the natural or real numbers. Georg Cantor formalized many ideas related to infinity and infinite sets during the late 19th century. In the theory he developed, there are infinite sets of different sizes: for example, the set of integers is countably infinite, while the set of real numbers is uncountable. Ancient cultures had various ideas about the nature of infinity. The ancient Indians and Greeks did not define infinity in precise formalism as does modern mathematics, and instead approached infinity as a philosophical concept. The earliest recorded idea of infinity comes from Anaximander, a pre-Socratic Greek philosopher who lived in Miletus; he used the word apeiron, which means infinite or limitless. However, the earliest attestable accounts of mathematical infinity come from Zeno of Elea, whom Aristotle called the inventor of the dialectic. He is best known for his paradoxes, described by Bertrand Russell as immeasurably subtle. Recent readings of the Archimedes Palimpsest have found that Archimedes had an understanding about actual infinite quantities. The Jain mathematical text Surya Prajnapti classifies all numbers into three sets: enumerable, innumerable, and infinite. On both physical and ontological grounds, a distinction was made between asaṃkhyāta and ananta, between rigidly bounded and loosely bounded infinities. European mathematicians started using infinite numbers in a systematic fashion in the 17th century. John Wallis first used the notation ∞ for such a number; Euler used the notation i for an infinite number, and exploited it by applying the binomial formula to the i-th power, and infinite products of i factors. In 1699 Isaac Newton wrote about equations with an infinite number of terms in his work De analysi per aequationes numero terminorum infinitas.
The infinity symbol ∞ is a symbol representing the concept of infinity. The symbol is encoded in Unicode at U+221E ∞ infinity and in LaTeX as \infty. It was introduced in 1655 by John Wallis and, since its introduction, has also been used outside mathematics in modern mysticism and literary symbology. Leibniz, one of the co-inventors of infinitesimal calculus, speculated widely about infinite numbers. In real analysis, the symbol ∞, called infinity, is used to denote an unbounded limit: x → ∞ means that x grows without bound, and x → −∞ means the value of x is decreasing without bound. ∑_{i=0}^{∞} f(i) = ∞ means that the sum of the series diverges in the specific sense that the partial sums grow without bound. Infinity can be used not only to define a limit but as a value in the extended real number system.
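The use of ∞ as a value, rather than only a limit, is exactly what IEEE floating point does (relevant to the surrounding articles). A brief demonstration in Python:

```python
import math

inf = math.inf                      # IEEE 754 infinity, also float('inf')
assert inf > 1.7e308                # greater than any finite double
assert inf + 1.0 == inf             # absorbing under addition
assert 1.0 / inf == 0.0             # mirrors the limit 1/x -> 0 as x -> ∞
assert -inf < -1.7e308 and math.isinf(-inf)
assert math.isnan(inf - inf)        # ∞ - ∞ is an indeterminate form
```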
20.
NaN
–
In computing, NaN, standing for not a number, is a numeric data type value representing an undefined or unrepresentable value, especially in floating-point calculations. Systematic use of NaNs was introduced by the IEEE 754 floating-point standard in 1985. Two separate kinds of NaNs are provided, termed quiet NaNs and signaling NaNs. An invalid operation is not the same as an arithmetic overflow or an arithmetic underflow. For example, a bit-wise IEEE floating-point standard single-precision quiet NaN would be s 11111111 1xxxxxxxxxxxxxxxxxxxxxx, where s is the sign. Some bits from x are used to determine the type of NaN; the remaining bits encode a payload. Floating-point operations other than ordered comparisons normally propagate a quiet NaN. A comparison with a NaN always returns an unordered result, even when comparing with itself. The comparison predicates are either signaling or non-signaling; the signaling versions signal the invalid operation exception for such comparisons. The equality and inequality predicates are non-signaling, so x = x returning false can be used to test if x is a quiet NaN. The other standard comparison predicates are all signaling if they receive a NaN operand. The predicate isNaN determines if a value x is a NaN and never signals an exception, even if x is a signaling NaN. The propagation of quiet NaNs through arithmetic operations allows errors to be detected at the end of a sequence of operations without extensive testing during intermediate stages. In section 6.2 of the revised IEEE 754-2008 standard there are two anomalous functions that favor numbers: if just one of the operands is a NaN then the value of the other operand is returned. There are three kinds of operations that can return NaN: operations with a NaN as at least one operand; indeterminate forms, such as the divisions 0/0 and ±∞/±∞;
and the additions ∞ + (−∞) and (−∞) + ∞ and equivalent subtractions. The standard has alternative functions for powers: the standard pow function and the integer exponent pown function define 0^0, 1^∞, and ∞^0 as 1, while the powr function defines all three forms as invalid operations and so returns NaN. The third kind is real operations with complex results: for example, the square root of a negative number, the logarithm of a negative number, and the inverse sine or cosine of a number that is less than −1 or greater than +1. NaNs may also be explicitly assigned to variables, typically as a representation for missing values; prior to the IEEE standard, programmers often used a special value to represent undefined or missing values. NaNs are not necessarily generated in all the above cases: if an operation can produce an exception condition and traps are not masked, then the operation will cause a trap instead. If an operand is a quiet NaN, and there isn't also a signaling NaN operand, then there is no exception condition and the result is a quiet NaN. Explicit assignments will not cause an exception even for signaling NaNs.
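The unordered-comparison behaviour and the bit layout described above can both be checked directly (Python exposes IEEE 754 doubles and singles via the struct module):

```python
import math
import struct

nan = float('nan')
assert nan != nan                      # unordered: never equal, even to itself
assert not (nan < 1.0) and not (nan >= 1.0)   # all ordered comparisons fail
assert math.isnan(nan)                 # the non-signaling isNaN test
assert math.isnan(nan + 1.0)           # quiet NaNs propagate through arithmetic
assert math.isnan(math.inf - math.inf) # an indeterminate form yields NaN

# Single-precision bit pattern: exponent all ones, significand non-zero.
bits = struct.unpack('>I', struct.pack('>f', nan))[0]
assert bits & 0x7F800000 == 0x7F800000
assert bits & 0x007FFFFF != 0
```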
21.
Hexadecimal
–
In mathematics and computing, hexadecimal is a positional numeral system with a radix, or base, of 16. It uses sixteen distinct symbols, most often the symbols 0–9 to represent values zero to nine and A–F (or a–f) to represent values ten to fifteen. Hexadecimal numerals are widely used by computer system designers and programmers. As each hexadecimal digit represents four binary digits, it allows a more human-friendly representation of binary-coded values; one hexadecimal digit represents a nibble, which is half of an octet or byte. For example, a single byte can have values ranging from 00000000 to 11111111 in binary form, which can be written as 00 to FF in hexadecimal. In a non-programming context, a subscript is typically used to give the radix. Several notations are used to support hexadecimal representation of constants in programming languages, usually involving a prefix or suffix: the prefix 0x is used in C and related languages, where this value might be denoted as 0x2AF3. In contexts where the base is not clear, hexadecimal numbers can be ambiguous and confused with numbers expressed in other bases. There are several conventions for expressing values unambiguously. A numerical subscript can give the base explicitly: 159₁₀ is decimal 159; 159₁₆ is hexadecimal 159, which is equal to 345₁₀. Some authors prefer a text subscript, such as 159decimal and 159hex, or 159d and 159h. In URIs, character codes are written as hexadecimal pairs prefixed with %, as in example.com/name%20with%20spaces, where %20 is the space character. In HTML and XML, a numeric character reference gives the code point in hexadecimal: thus &#x2019; represents the right single quotation mark, Unicode code point number 2019 in hex (8217 in decimal). In the Unicode standard, a character value is represented with U+ followed by the hex value. Color references in HTML, CSS and X Window can be expressed with six hexadecimal digits prefixed with # (white, for example, is #FFFFFF); CSS also allows 3-hexdigit abbreviations with one hexdigit per component: #FA3 abbreviates #FFAA33. *nix shells, AT&T assembly language, and likewise the C programming language use the 0x prefix; to output an integer as hexadecimal with the printf function family, the format conversion code %X or %x is used.
In Intel-derived assembly languages and Modula-2, hexadecimal is denoted with a suffixed H or h; some assembly languages use the notation HABCD. Ada and VHDL enclose hexadecimal numerals in based numeric quotes: 16#5A3#; for bit vector constants VHDL uses the notation x"5A3". Verilog represents hexadecimal constants in the form 8'hFF, where 8 is the number of bits in the value. The Smalltalk language uses the prefix 16r: 16r5A3. PostScript and the Bourne shell and its derivatives denote hex with the prefix 16#: 16#5A3; for PostScript, binary data can also be expressed as unprefixed consecutive hexadecimal pairs. In early systems, when a Macintosh crashed, one or two lines of hexadecimal code would be displayed under the Sad Mac to tell the user what went wrong. Common Lisp uses the prefixes #x and #16r; setting the variables *read-base* and *print-base* to 16 can also be used to switch the reader and printer of a Common Lisp system to hexadecimal number representation, so that hexadecimal numbers can be read and printed without the #x or #16r prefix. MSX BASIC, QuickBASIC, FreeBASIC and Visual Basic prefix hexadecimal numbers with &H: &H5A3. BBC BASIC and Locomotive BASIC use & for hex. The TI-89 and 92 series use a 0h prefix: 0h5A3. ALGOL 68 uses the prefix 16r to denote hexadecimal numbers; binary, quaternary and octal numbers can be specified similarly.
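Python follows the C-style conventions described above, which makes it convenient for checking the arithmetic behind these notations:

```python
n = 0x2AF3                                 # C-style 0x prefix from the text
assert n == 2 * 16**3 + 10 * 16**2 + 15 * 16 + 3 == 10995

assert format(n, 'X') == '2AF3'            # printf-style %X, upper-case digits
assert format(n, 'x') == '2af3'            # %x, lower-case digits
assert hex(255) == '0xff'
assert int('5A3', 16) == 1443              # parse a bare hexadecimal string
assert format(0b11111111, '02X') == 'FF'   # one byte is exactly two hex digits
```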
22.
Programming language
–
A programming language is a formal computer language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs to control the behavior of a machine or to express algorithms. From the early 1800s, programs were used to direct the behavior of machines such as Jacquard looms. Thousands of different programming languages have been created, mainly in the computer field. Many programming languages require computation to be specified in an imperative form, while other languages use other forms of program specification, such as the declarative form. The description of a language is usually split into the two components of syntax and semantics. Some languages are defined by a specification document, while other languages have a dominant implementation that is treated as a reference; some languages have both, with the language defined by a standard and extensions taken from the dominant implementation being common. A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term programming language to those languages that can express all possible algorithms. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a specification for a programming language includes a description, possibly idealized, of a machine or processor for that language. In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way. Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. As for expressive power, the theory of computation classifies languages by the computations they are capable of expressing; all Turing complete languages can implement the same set of algorithms.
ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages. Programming languages may, however, share the syntax with markup languages if a computational semantics is defined; XSLT, for example, is a Turing complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset. The term computer language is sometimes used interchangeably with programming language.
23.
Fortran
–
Fortran is a general-purpose, imperative programming language that is especially suited to numeric computation and scientific computing. It is a popular language for high-performance computing and is used for programs that benchmark and rank the world's fastest supercomputers. Fortran encompasses a lineage of versions, each of which evolved to add extensions to the language while usually retaining compatibility with prior versions. The names of earlier versions of the language through FORTRAN 77 were conventionally spelled in all capitals; the capitalization has been dropped in referring to newer versions beginning with Fortran 90, and the official language standards now refer to the language as Fortran rather than all-caps FORTRAN. In late 1953, John W. Backus submitted a proposal to his superiors at IBM to develop a practical alternative to assembly language for programming their IBM 704 mainframe computer. Backus's historic FORTRAN team consisted of programmers Richard Goldberg, Sheldon F. Best, Harlan Herrick, Peter Sheridan, Roy Nutt, Robert Nelson, Irving Ziller, Lois Haibt, and David Sayre. Its concepts included easier entry of equations into a computer, an idea developed by J. Halcombe Laning and demonstrated in the Laning and Zierler system of 1952. A draft specification for The IBM Mathematical Formula Translating System was completed by mid-1954. The first manual for FORTRAN appeared in October 1956, with the first FORTRAN compiler delivered in April 1957. John Backus described the motivation for the project during a 1979 interview with Think, the IBM employee magazine. The language was widely adopted by scientists for writing numerically intensive programs, which encouraged compiler writers to produce compilers that could generate faster and more efficient code. The inclusion of a complex number data type in the language made Fortran especially suited to technical applications such as electrical engineering.
By 1960, versions of FORTRAN were available for the IBM 709, 650 and 1620. Significantly, the increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed. For these reasons, FORTRAN is considered to be the first widely used programming language supported across a variety of computer architectures. The arithmetic IF statement was similar to a three-way branch instruction on the IBM 704. However, the 704 branch instructions all contained only one destination address; an optimizing compiler like FORTRAN would most likely select the more compact and usually faster Transfer instructions instead of the Compare. Also, the Compare considered −0 and +0 to be different values, while the Transfer instructions treated them as the same. The FREQUENCY statement in FORTRAN was used originally to give branch probabilities for the three branch cases of the arithmetic IF statement; the Monte Carlo technique is documented in Backus et al. Many years later, the FREQUENCY statement had no effect on the code and was treated as a comment statement, since the compilers no longer did this kind of compile-time simulation. A similar fate has befallen compiler hints in other programming languages. The first FORTRAN compiler reported diagnostic information by halting the program when an error was found and outputting an error code, which could be looked up by the programmer in an error messages table in the operator's manual, providing them with a brief description of the problem. Before the development of disk files, text editors and terminals, programs were most often entered on a keypunch keyboard onto 80-column punched cards.
24.
GNU Fortran
–
GNU Fortran or GFortran is the name of the GNU Fortran compiler, which is part of the GNU Compiler Collection (GCC). GFortran has replaced the g77 compiler, on which development stopped before GCC version 4.0. It includes full support for the Fortran 95 language and is compatible with most language extensions supported by g77, allowing it to serve as a drop-in replacement in many cases; large parts of Fortran 2003 and Fortran 2008 have also been implemented. An experimental version of GFortran was included in GCC versions 4.0.x, but only since version 4.1 has it been considered user-ready by its developers. Development is ongoing together with the rest of GCC. GFortran forked off from g95 in January 2003, which itself started in early 2000; the two codebases have since diverged significantly, according to GCC developers.
25.
X86
–
x86 is a family of backward-compatible instruction set architectures based on the Intel 8086 CPU and its Intel 8088 variant. The term x86 came into being because the names of several successors to Intel's 8086 processor end in "86". Many additions and extensions have been added to the x86 instruction set over the years, almost consistently with full backward compatibility. The architecture has been implemented in processors from Intel, Cyrix, AMD, VIA and many other companies; there are also open implementations. In the 1980s and early 1990s, when the 8088 and 80286 were still in common use, the term x86 represented any 8086-compatible CPU; today, however, x86 usually implies binary compatibility also with the 32-bit instruction set of the 80386. An 8086 system could include coprocessors such as the 8087 and 8089. There were also the terms iRMX, iSBC and iSBX, all together under the heading Microsystem 80; however, this naming scheme was quite temporary, lasting only a few years during the early 1980s. Today, x86 is ubiquitous in both stationary and portable computers, and is also used in midrange computers, workstations and servers. A large amount of software, including operating systems such as DOS, Windows, Linux, BSD, Solaris and macOS, functions with x86-based hardware. There have been attempts, including by Intel itself, to end the market dominance of the "inelegant" x86 architecture, which descended directly from the first simple 8-bit microprocessors. Examples of this are the iAPX 432, the Intel i960 and the Intel i860; however, the continuous refinement of x86 microarchitectures, circuitry and semiconductor manufacturing would make it hard to replace x86 in many segments. The table below lists processor models and model series implementing variations of the x86 instruction set; each line item is characterized by significantly improved or commercially successful processor microarchitecture designs. 
Such x86 implementations are seldom simple copies but often employ different internal microarchitectures as well as different solutions at the electronic and physical levels. Quite naturally, early compatible microprocessors were 16-bit, while 32-bit designs were developed much later. For the personal computer market, real quantities started to appear around 1990 with i386- and i486-compatible processors. Other companies that designed or manufactured x86 or x87 processors include ITT Corporation, National Semiconductor, ULSI System Technology, and Weitek. Some early versions of these microprocessors had heat dissipation problems. AMD later managed to establish itself as a serious contender with the K6 set of processors, which gave way to the very successful Athlon and Opteron. There were also other contenders, such as Centaur Technology, Rise Technology and VIA Technologies; the energy-efficient C3 and C7 processors, which were designed by the Centaur company, have been sold for many years. Centaur's newest design, the VIA Nano, is their first processor with superscalar execution; it was, perhaps interestingly, introduced at about the same time as Intel's first in-order processor since the P5 Pentium, the Intel Atom. The instruction set architecture has twice been extended to a larger word size. In 1999–2003, AMD extended this 32-bit architecture to 64 bits and referred to it as x86-64 in early documents. Intel soon adopted AMD's architectural extensions under the name IA-32e, later using the name EM64T and finally using Intel 64
26.
X86-64
–
x86-64 is the 64-bit version of the x86 instruction set. It supports vastly larger amounts of virtual memory and physical memory than is possible on its 32-bit predecessors. x86-64 also provides 64-bit general-purpose registers and numerous other enhancements, and it is fully backward compatible with 16-bit and 32-bit x86 code. The original specification, created by AMD and released in 2000, has been implemented by AMD, Intel and VIA. The AMD K8 processor was the first to implement the architecture; this was the first significant addition to the x86 architecture designed by a company other than Intel. Intel was forced to follow suit and introduced a modified NetBurst family which was fully software-compatible with AMD's design. VIA Technologies introduced x86-64 in their VIA Isaiah architecture, with the VIA Nano. The x86-64 specification is distinct from the Intel Itanium architecture, which is not compatible on the native instruction set level with the x86 architecture. AMD64 was created as an alternative to the radically different IA-64 architecture; the first AMD64-based processor, the Opteron, was released in April 2003. AMD's processors implementing the AMD64 architecture include Opteron, Athlon 64, Athlon 64 X2, Athlon 64 FX, Athlon II, Turion 64, Turion 64 X2, Sempron, Phenom, Phenom II, FX, Fusion and Ryzen. The primary defining characteristic of AMD64 is the availability of 64-bit general-purpose processor registers and 64-bit integer arithmetic and logical operations, but the designers took the opportunity to make other improvements as well. Some of the most significant changes are described below. Pushes and pops on the stack default to 8-byte strides, and pointers are 8 bytes wide. Additional registers: in addition to increasing the size of the general-purpose registers, their number is increased from 8 to 16; even so, AMD64 still has fewer registers than many common RISC instruction sets or VLIW-like machines such as the IA-64. 
However, an AMD64 implementation may have far more internal registers than the number of architectural registers exposed by the instruction set. Additional XMM registers: similarly, the number of 128-bit XMM registers is increased from 8 to 16. Larger virtual address space: the AMD64 architecture defines a 64-bit virtual address format, of which the low-order 48 bits are used in current implementations; this allows up to 256 TB of virtual address space. The architecture definition allows this limit to be raised in future implementations to the full 64 bits. This is compared to just 4 GB for the x86. It means that very large files can be operated on by mapping the entire file into the process's address space, rather than having to map regions of the file into and out of the address space. Larger physical address space: the original implementation of the AMD64 architecture implemented 40-bit physical addresses; current implementations extend this to 48-bit physical addresses and therefore can address up to 256 TB of RAM. The architecture permits extending this to 52 bits in the future. For comparison, 32-bit x86 processors are limited to 64 GB of RAM in Physical Address Extension mode, or 4 GB of RAM without PAE mode. Any implementation therefore allows the same physical address limit as under long mode
27.
Itanium
–
Itanium is a family of 64-bit Intel microprocessors that implement the Intel Itanium architecture. Intel markets the processors for servers and high-performance computing systems. The Itanium architecture originated at Hewlett-Packard and was jointly developed by HP and Intel. Itanium-based systems have been produced by HP and several other manufacturers. In 2008, Itanium was the fourth-most deployed microprocessor architecture for enterprise-class systems, behind x86-64, Power Architecture and SPARC. The currently shipping Itanium processor generation, Poulson, was released on November 8, 2012. In February 2017, Intel began releasing its successor, Kittson, to test customers; as Intel has not provided a roadmap beyond it and Hewlett-Packard is the only remaining major Itanium vendor, press and analysts have speculated that it will be the last Itanium generation. In 1989, HP determined that Reduced Instruction Set Computing architectures were approaching a processing limit at one instruction per cycle. HP researchers investigated a new architecture, later named Explicitly Parallel Instruction Computing, that allows the processor to execute multiple instructions in each clock cycle. EPIC implements a form of very long instruction word architecture, in which a single instruction word contains multiple instructions. Intel was willing to undertake a very large development effort on IA-64 in the expectation that the resulting microprocessor would be used by the majority of enterprise systems manufacturers. HP and Intel initiated a joint development effort with a goal of delivering the first product, Merced. Compaq and Silicon Graphics decided to abandon further development of the Alpha and MIPS architectures respectively, migrating to IA-64. Several groups developed operating systems for the architecture, including Microsoft Windows, OpenVMS, Linux, and UNIX variants such as HP-UX, Solaris, Tru64 UNIX, and Monterey/64. 
By 1997, it was apparent that the IA-64 architecture and the compiler were much more difficult to implement than originally thought; technical difficulties included the very high transistor counts needed to support the wide instruction words and the large caches. There were also structural problems within the project, as the two parts of the joint team used different methodologies and had different priorities. Since Merced was the first EPIC processor, the development effort encountered more unanticipated problems than the team was accustomed to. In addition, the EPIC concept depends on compiler capabilities that had never been implemented before, so more research was needed. Intel announced the name of the processor, Itanium, on October 4, 1999. Within hours, the name "Itanic" had been coined on a Usenet newsgroup, a reference to the Titanic. By the time Itanium was released in June 2001, its performance was not superior to competing RISC and CISC processors. Itanium competed at the low end with servers based on x86 processors; Intel repositioned Itanium to focus on high-end business and HPC computing, attempting to duplicate x86's successful "horizontal" market. POWER and SPARC remained strong, while the 32-bit x86 architecture continued to grow into the enterprise space. Only a few thousand systems using the original Merced Itanium processor were sold, due to relatively poor performance, high cost and limited software availability
28.
C (programming language)
–
C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs, and used to re-implement the Unix operating system. C has been standardized by the American National Standards Institute since 1989. C is an imperative procedural language; it provides low-level access to memory and requires minimal run-time support, and therefore was useful for applications that had formerly been coded in assembly language. Despite its low-level capabilities, the language was designed to encourage cross-platform programming: a standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with few changes to its source code. The language has become available on a wide range of platforms. In C, all executable code is contained within subroutines, which are called functions. Function parameters are passed by value; pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements. The C language also exhibits the following characteristics. There is a small, fixed number of keywords, including a full set of flow-of-control primitives: for, if/else, while and switch. User-defined names are not distinguished from keywords by any kind of sigil. There is a large number of arithmetical and logical operators, such as +, +=, ++, &, ~, etc. More than one assignment may be performed in a single statement, and function return values can be ignored when not needed. Typing is static, but weakly enforced: all data has a type. C has no "define" keyword; instead, a statement beginning with the name of a type is taken as a declaration. There is no "function" keyword; instead, a function is indicated by the parentheses of an argument list. User-defined and compound types are possible: heterogeneous aggregate data types (structs) allow related data elements to be accessed and assigned as a unit. Array indexing is a secondary notation, defined in terms of pointer arithmetic. 
Unlike structs, arrays are not first-class objects: they cannot be assigned or compared using single built-in operators. There is no "array" keyword, in use or definition; instead, square brackets indicate arrays syntactically, for example month[11]. Enumerated types are possible with the enum keyword; they are not tagged, and are freely interconvertible with integers. Strings are not a distinct data type, but are conventionally implemented as null-terminated arrays of characters. Low-level access to memory is possible by converting machine addresses to typed pointers
29.
C++
–
C++ is a general-purpose programming language. It has imperative, object-oriented and generic programming features, while also providing facilities for low-level memory manipulation. It was designed with a bias toward system programming and embedded, resource-constrained and large systems, with performance, efficiency and flexibility of use as its design highlights. C++ is a compiled language, with implementations available on many platforms and provided by various organizations, including the Free Software Foundation, LLVM, Microsoft and Intel. C++ is standardized by the International Organization for Standardization, with the latest standard version ratified and published by ISO in December 2014 as ISO/IEC 14882:2014. The C++ programming language was initially standardized in 1998 as ISO/IEC 14882:1998. The current C++14 standard supersedes these and C++11, with new features; the C++17 standard is due in 2017, with the draft largely implemented by some compilers already, and C++20 is the next planned standard thereafter. Many other programming languages have been influenced by C++, including C#, D and Java. In 1979, Bjarne Stroustrup, a Danish computer scientist, began work on "C with Classes". The motivation for creating a new language originated from Stroustrup's experience in programming for his Ph.D. thesis. When Stroustrup started working in AT&T Bell Labs, he had the problem of analyzing the UNIX kernel with respect to distributed computing; remembering his Ph.D. experience, Stroustrup set out to enhance the C language with Simula-like features. C was chosen because it was general-purpose, fast and portable. As well as C's and Simula's influences, other languages also influenced C++, including ALGOL 68, Ada, CLU and ML. Initially, Stroustrup's "C with Classes" added features to the C compiler, Cpre, including classes, derived classes, strong typing and inlining; furthermore, the work included the development of a standalone compiler for C++, Cfront. 
In 1985, the first edition of The C++ Programming Language was released; the first commercial implementation of C++ was released in October of the same year. In 1989, C++ 2.0 was released, followed by the second edition of The C++ Programming Language in 1991. New features in 2.0 included multiple inheritance, abstract classes, static member functions and const member functions. In 1990, The Annotated C++ Reference Manual was published; this work became the basis for the future standard. Later feature additions included templates, exceptions, namespaces and new casts. After a minor C++14 update released in December 2014, various new additions are planned for 2017 and 2020. According to Stroustrup, the name signifies the evolutionary nature of the changes from C. The name is credited to Rick Mascitti and was first used in December 1983; when Mascitti was questioned informally in 1992 about the naming, he indicated that it was given in a tongue-in-cheek spirit
30.
Long double
–
In C and related programming languages, long double refers to a floating-point data type that is often more precise than double precision. As with C's other floating-point types, it may not necessarily map to an IEEE format. Long double constants are floating-point constants suffixed with L or l, e.g. 0.333333333333333333L. Without a suffix, the evaluation depends on FLT_EVAL_METHOD. On the x86 architecture, most C compilers implement long double as the 80-bit extended precision type supported by x86 hardware, as specified in the C99/C11 standards. An exception is Microsoft Visual C++ for x86, which makes long double a synonym for double. The Intel C++ compiler on Microsoft Windows supports extended precision, but requires the /Qlong-double switch for long double to correspond to the hardware's extended precision format. Compilers may also use long double for a 128-bit quadruple precision format; this is the case on HP-UX and on Solaris/SPARC machines, where the format is implemented in software due to lack of hardware support. Otherwise, long double is simply a synonym for double. As of gcc 4.3, quadruple precision is also supported on x86, but as the nonstandard type __float128 rather than long double. Conversely, in extended precision mode, extended precision may be used for intermediate compiler-generated calculations even when the final results are stored at a lower precision. However, it is possible to override this within a program via the FLDCW floating-point load control-word instruction. On x86-64 the BSDs default to 80-bit extended precision. Microsoft Windows with Visual C++ also sets the processor in double-precision mode by default, but this can again be overridden within an individual program. The Intel C++ Compiler for x86, on the other hand, enables extended precision by default. On OS X, long double is 80-bit extended precision (the __float128 type, by contrast, offers quadruple precision without using that name)
31.
GNU Compiler Collection
–
The GNU Compiler Collection is a compiler system produced by the GNU Project supporting various programming languages. GCC is a key component of the GNU toolchain and the standard compiler for most Unix-like operating systems. The Free Software Foundation distributes GCC under the GNU General Public License. GCC has played an important role in the growth of free software, as both a tool and an example. It was originally named the GNU C Compiler, when it handled only the C programming language; it was extended to compile C++ in December 1987. Front ends were later developed for Objective-C, Objective-C++, Fortran, Java, Ada and Go, among others. Version 4.5 of the OpenMP specification is now supported in the C and C++ compilers. By default, the current version supports gnu++14, a superset of C++14, and gnu11, a superset of C11, with strict standard support also available. It also provides support for C++17 and later. GCC has been ported to a variety of instruction set architectures and is also available for most embedded systems, including ARM-based and AMCC platforms; the compiler can target a wide variety of platforms. Versions are also available for Microsoft Windows and other operating systems, and GCC can compile code for Android and iOS. In an effort to bootstrap the GNU operating system, Richard Stallman asked Andrew S. Tanenbaum for permission to use his compiler; when Tanenbaum told him that while the Free University was free, the compiler was not, Stallman decided to write his own. Stallman's initial plan was to rewrite an existing compiler from Lawrence Livermore Laboratory from Pastel to C with some help from Len Tower; none of the Pastel compiler code ended up in GCC, though Stallman did use the C front end he had written. GCC was first released March 22, 1987, available by FTP from MIT. Multiple forks later proved inefficient and unwieldy, however, and the difficulty in getting work accepted by the official GCC project was greatly frustrating for many. 
In 1997, a group of developers formed the Experimental/Enhanced GNU Compiler System (EGCS) to merge several experimental forks into a single project; the basis of the merger was a GCC development snapshot taken between the 2.7 and 2.81 releases. Projects merged included g77, PGCC, many C++ improvements, and many new architectures. With the release of GCC 2.95 in July 1999 the two projects were once again united. GCC has since been maintained by a group of programmers from around the world under the direction of a steering committee. It has been ported to more kinds of processors and operating systems than any other compiler, and is widely deployed as a tool in the development of both free and proprietary software. GCC is also available for most embedded systems, including Symbian, ARM-based and AMCC platforms, and can target a wide variety of platforms, including video game consoles such as the PlayStation 2, the Cell SPE of the PlayStation 3, and the Dreamcast
32.
PowerPC
–
PowerPC is a RISC instruction set architecture created by the 1991 Apple–IBM–Motorola alliance, known as AIM. PowerPC was the cornerstone of AIM's PReP and Common Hardware Reference Platform initiatives in the 1990s. It has since become niche in personal computers, but remains popular as an embedded and high-performance processor; its use in game consoles and embedded applications provided an array of uses. In addition, PowerPC CPUs are still used in AmigaOne and third-party AmigaOS 4 personal computers. The history of RISC began with IBM's 801 research project, on which John Cocke was the lead developer and where he developed the concepts of RISC in 1975–78. 801-based microprocessors were used in a number of IBM embedded products, and the IBM RT was a rapid design implementing the RISC architecture. The result was the POWER instruction set architecture, introduced with the RISC System/6000 in early 1990. The original POWER microprocessor, one of the first superscalar RISC implementations, was a high-performance, multi-chip design. IBM soon realized that a single-chip microprocessor was needed in order to scale its RS/6000 line from lower-end to high-end machines. Work began on a one-chip POWER microprocessor, designated the RSC. In early 1991, IBM realized its design could potentially become a high-volume microprocessor used across the industry. IBM approached Apple with the goal of collaborating on the development of a family of single-chip microprocessors based on the POWER architecture; this three-way collaboration became known as the AIM alliance, for Apple, IBM and Motorola. In 1991, the PowerPC was just one facet of an alliance among these three companies; the PowerPC chip was one of several joint ventures involving the three, in their efforts to counter the growing Microsoft–Intel dominance of personal computing. For Motorola, POWER looked like an unbelievable deal: it allowed them to sell a widely tested and powerful RISC CPU for little design cash on their own part. 
It also maintained ties with an important customer, Apple, and seemed to offer the possibility of adding IBM too, at this point Motorola already had its own RISC design in the form of the 88000 which was doing poorly in the market. Motorola was doing well with their 68000 family and the majority of the funding was focused on this, the 88000 effort was somewhat starved for resources. However, the 88000 was already in production, Data General was shipping 88000 machines, the 88000 had also achieved a number of embedded design wins in telecom applications. The result of various requirements was the PowerPC specification. The differences between the earlier POWER instruction set and PowerPC is outlined in Appendix E of the manual for PowerPC ISA v.2.02, when the first PowerPC products reached the market, they were met with enthusiasm. In addition to Apple, both IBM and the Motorola Computer Group offered systems built around the processors, Microsoft released Windows NT3.51 for the architecture, which was used in Motorolas PowerPC servers, and Sun Microsystems offered a version of its Solaris OS