1.
Application software
–
An application program is a computer program designed to perform a group of coordinated functions, tasks, or activities for the benefit of the user. Examples of an application include a word processor, a spreadsheet, an accounting application, a web browser, a media player, and an aeronautical flight simulator. The collective noun application software refers to all applications collectively; this contrasts with system software, which is mainly involved with running the computer itself. Applications may be bundled with the computer and its system software or published separately. Apps built for mobile platforms are called mobile apps. In information technology, an application is a computer program designed to help people perform an activity. An application thus differs from an operating system or a utility. Depending on the activity for which it was designed, an application can manipulate text, numbers, graphics, or a combination of these elements. Some application packages focus on a single task, such as word processing; others, called integrated software, include several applications. User-written software tailors systems to meet the user's specific needs. User-written software includes templates, word processor macros, scientific simulations, and graphics and animation scripts. Even email filters are a kind of user software; users create this software themselves and often overlook how important it is. The delineation between system software, such as operating systems, and application software is not exact, however, and is occasionally the object of controversy. As one example, the GNU/Linux naming controversy is, in part, a dispute over where this boundary lies. The above definitions may exclude some applications that exist on some computers in large organizations; for an alternative definition of an app, see Application Portfolio Management. The word application, when used as an adjective, is not restricted to the "of or pertaining to application software" meaning. Sometimes a new and popular application arises which only runs on one platform; this is called a killer application or killer app.
There are many different ways to divide up different types of application software. Web apps have indeed greatly increased in popularity for some uses, but the advantages of native applications make them unlikely to disappear soon, if ever. Furthermore, the two can be complementary, and even integrated. Application software can also be seen as being either horizontal or vertical. Horizontal applications are more popular and widespread because they are general purpose; vertical applications are niche products, designed for a particular type of industry or business, or a department within an organization. Integrated suites of software will try to handle every aspect possible of, for example, manufacturing or banking systems, or accounting.
2.
80-bit floating point format
–
Extended precision refers to floating-point number formats that provide greater precision than the basic floating-point formats. Extended precision formats support a basic format by minimizing roundoff and overflow errors in intermediate values of expressions on the base format. In contrast to extended precision, arbitrary-precision arithmetic refers to implementations of much larger numeric types using special software. The IBM 1130 offered two floating-point formats: a 32-bit standard precision format and a 40-bit extended precision format. The standard precision format contained a 24-bit two's complement significand, while extended precision utilized a 32-bit two's complement significand; the latter format could make full use of the CPU's 32-bit integer operations. The characteristic in both formats was an 8-bit field containing the power of two biased by 128. Floating-point arithmetic operations were performed by software, and double precision was not supported at all. The extended format occupied three 16-bit words, with the extra space simply ignored. The IBM System/360 supports a 32-bit short floating-point format and a 64-bit long floating-point format. The 360/85 and follow-on System/370 added support for a 128-bit extended format; these formats are still supported in the current design, where they are now called the hexadecimal floating-point formats. The IEEE 754 floating-point standard recommends that implementations provide extended precision formats; the standard specifies the minimum requirements for an extended format but does not specify an encoding. The encoding is the implementor's choice. The IA-32, x86-64, and Itanium processors support an 80-bit double extended precision format with a 64-bit significand.
The Intel 8087 math coprocessor was the first x86 device which supported floating-point arithmetic in hardware. It was designed to support a 32-bit single precision format and a 64-bit double precision format for encoding and interchanging floating-point numbers. To mitigate roundoff in intermediate computations, the internal registers in the 8087 were designed to hold intermediate results in an 80-bit extended precision format, and the floating-point unit on all subsequent x86 processors has supported this format. As a result, software can be developed which takes advantage of the higher precision provided by this format; that kind of gradual evolution towards wider precision was already in view when IEEE Standard 754 for Floating-Point Arithmetic was framed. The Motorola 6888x math coprocessors and the Motorola 68040 and 68060 processors support this same 64-bit significand extended precision type; the follow-on ColdFire processors do not support this 96-bit extended precision format. The x87 and Motorola 68881 80-bit formats meet the requirements of the IEEE 754 double extended format. This 80-bit format uses one bit for the sign of the significand, 15 bits for the exponent field, and 64 bits for the significand. The exponent field is biased by 16383, meaning that 16383 has to be subtracted from the value in the exponent field to compute the power of 2. An exponent field value of 32767 is reserved so as to enable the representation of special states such as infinity and NaN. If the exponent field is zero, the value is a denormal number. In contrast to the single and double-precision formats, this format does not utilize an implicit/hidden bit; rather, bit 63 contains the integer part of the significand and bits 62–0 hold the fractional part.
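As a sketch of how the layout above decodes, the following Python snippet unpacks the ten bytes of an x87 extended-precision value by hand. The function name and the little-endian byte order are illustrative assumptions, and values outside the range of Python's 64-bit float will over- or underflow:

```python
import struct

def decode_x87_extended(raw: bytes) -> float:
    """Decode a little-endian 80-bit x87 extended-precision value.

    Layout (most significant first): 1 sign bit, 15 exponent bits
    (bias 16383), then a 64-bit significand whose top bit is the
    explicit integer bit; there is no hidden bit in this format.
    """
    assert len(raw) == 10
    lo, hi = struct.unpack('<QH', raw)   # 64-bit significand, 16-bit sign+exponent
    sign = -1.0 if hi >> 15 else 1.0
    exponent = hi & 0x7FFF
    if exponent == 0x7FFF:               # all-ones exponent: infinity or NaN
        return sign * float('inf') if lo == 1 << 63 else float('nan')
    # The significand is an unsigned 64-bit integer scaled by 2**-63;
    # a zero exponent field marks denormals (power 1 - 16383).
    power = (exponent if exponent else 1) - 16383
    return sign * lo * 2.0 ** (power - 63)

# 1.0 encodes as exponent 16383 with only the integer bit set:
one = struct.pack('<QH', 1 << 63, 16383)
print(decode_x87_extended(one))   # 1.0
```

Note that, because there is no hidden bit, an encoding with a nonzero exponent but a clear integer bit is an invalid "unnormal" on modern x86 hardware; the sketch above does not check for that case.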
3.
Half-precision floating-point format
–
In computing, half precision is a binary floating-point computer number format that occupies 16 bits in computer memory. In IEEE 754-2008 the 16-bit base-2 format is referred to as binary16. It is intended for storage of many floating-point values where higher precision is not needed. Nvidia and Microsoft defined the half datatype in the Cg language, released in early 2002, and implemented it in silicon in the GeForce FX, released in late 2002. The hardware-accelerated programmable shading group led by John Airey at SGI invented the s10e5 data type in 1997 as part of their design effort. This is described in a SIGGRAPH 2000 paper and further documented in US patent 7518615. The format is used in several computer graphics environments including OpenEXR, JPEG XR, OpenGL, Cg, and D3DX. The advantage over 8-bit or 16-bit binary integers is that the increased dynamic range allows for more detail to be preserved in highlights. The advantage over 32-bit single-precision binary formats is that it requires half the storage and bandwidth. The F16C extension allows x86 processors to convert half-precision floats to and from single-precision floats. The format uses an implicit lead bit; thus only 10 bits of the significand appear in the memory format, but the total precision is 11 bits. In IEEE 754 parlance, there are 10 bits of significand, but 11 bits of significand precision. The half-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 15, also known as the exponent bias in the IEEE 754 standard. The stored exponent values 00000 and 11111 (binary) are interpreted specially. The minimum strictly positive (subnormal) value is 2^−24 ≈ 5.96 × 10^−8. The minimum positive normal value is 2^−14 ≈ 6.10 × 10^−5. The maximum representable value is (2 − 2^−10) × 2^15 = 65504. Examples are conventionally given in the bit representation of the floating-point value, including the sign bit, exponent, and significand; for instance, when rounding 1/3 to half precision, the bits beyond the rounding point are 0101..., which is less than 1/2 of a unit in the last place, so the value rounds down. ARM processors support an alternative half-precision format, which does away with the special case for an exponent value of 31. It is almost identical to the IEEE format, but there is no encoding for infinity or NaNs; instead, an exponent of 31 encodes normalized numbers in the range 65536 to 131008.
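These limits are easy to check in Python, whose struct module supports the binary16 interchange format directly via the 'e' format code (Python 3.6 and later); a small sketch, with an illustrative helper name:

```python
import struct

def half_bits(x: float) -> int:
    """Round x to binary16 and return the raw 16-bit pattern."""
    return struct.unpack('<H', struct.pack('<e', x))[0]

print(f"{half_bits(1.0):016b}")    # 0011110000000000: sign 0, exponent 01111 (15), fraction 0
print(half_bits(65504.0) == 0x7BFF)           # largest finite half, (2 - 2**-10) * 2**15
smallest = struct.unpack('<e', struct.pack('<H', 0x0001))[0]
print(smallest == 2.0 ** -24)                 # minimum strictly positive (subnormal) value
```

Packing a value larger than 65504 with 'e' raises OverflowError, which mirrors the format's limited range.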
4.
Single-precision floating-point format
–
Single-precision floating-point format is a computer number format that occupies 4 bytes (32 bits) in computer memory and represents a wide dynamic range of values by using a floating radix point. In IEEE 754-2008 the 32-bit base-2 format is referred to as binary32; it was called single in IEEE 754-1985. In older computers, different floating-point formats of 4 bytes were used; e.g., GW-BASIC's single-precision data type was the 32-bit MBF floating-point format. One of the first programming languages to provide single- and double-precision floating-point data types was Fortran. Before the widespread adoption of IEEE 754-1985, the representation and properties of floating-point data types depended on the computer manufacturer and computer model. Single-precision binary floating-point is used due to its wider range over fixed point of the same bit width, even at the cost of precision. A signed 32-bit integer can have a maximum value of 2^31 − 1 = 2,147,483,647. Because the format carries only 24 bits of significand, integers of that size are not always exactly representable: for example, the 32-bit integer 2,147,483,647 converts to 2,147,483,648 in IEEE 754 single-precision form. Single precision is termed REAL in Fortran, float in C, C++, C#, and Java, Float in Haskell, and Single in Object Pascal, Visual Basic, and MATLAB. However, float in Python, Ruby, PHP, and OCaml refers to double precision; in most implementations of PostScript, the only real precision is single. The IEEE 754 standard specifies a binary32 as having: a sign bit (1 bit), an exponent width of 8 bits, and a significand precision of 24 bits (23 explicitly stored). This gives from 6 to 9 significant decimal digits of precision. The sign bit determines the sign of the number, which is the sign of the significand as well. The exponent is either an 8-bit signed integer from −128 to 127 or an 8-bit unsigned integer from 0 to 255; the latter, biased form is the one used in the IEEE 754 binary32 definition. Exponents range from −126 to +127 because the stored exponents for −127 (all zeros) and +128 (all ones) are reserved for special numbers. The true significand includes 23 fraction bits to the right of the binary point and an implicit leading bit with value 1, unless the exponent is stored with all zeros.
Thus only 23 fraction bits of the significand appear in the memory format. As an example, for the encoding of 0.15625 the significand is 1 + Σ_{i=1}^{23} b_{23−i} · 2^{−i} = 1 + 1 · 2^{−2} = 1.25, which lies in [1, 2). Thus, value = (−1)^0 × 1.25 × 2^{−3} = +0.15625. Note: 1 + 2^−23 ≈ 1.000000119, 2 − 2^−23 ≈ 1.999999881, 2^−126 ≈ 1.17549435 × 10^−38, and 2^127 ≈ 1.70141183 × 10^38. The single-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 127. The stored exponents 00H and FFH are interpreted specially. The minimum positive normal value is 2^−126 ≈ 1.18 × 10^−38 and the minimum positive (subnormal) value is 2^−149 ≈ 1.4 × 10^−45. In general, refer to the IEEE 754 standard itself for the conversion of a real number into its equivalent binary32 format.
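The worked example above can be reproduced in a few lines of Python; the helper name is an illustrative choice:

```python
import struct

def float_bits(x: float) -> str:
    """IEEE 754 binary32 pattern of x, split as sign | exponent | fraction."""
    b = struct.unpack('>I', struct.pack('>f', x))[0]
    s = f"{b:032b}"
    return f"{s[0]} {s[1:9]} {s[9:]}"

print(float_bits(0.15625))
# 0 01111100 01000000000000000000000
# exponent field 01111100 = 124, so the power is 124 - 127 = -3;
# significand 1.01 (binary) = 1.25, giving +1.25 * 2**-3 = 0.15625

# 24 bits of significand cannot hold every 32-bit integer exactly:
print(struct.unpack('>f', struct.pack('>f', 2147483647.0))[0])   # 2147483648.0
```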
5.
Double-precision floating-point format
–
Double-precision floating-point format is a computer number format that occupies 8 bytes (64 bits) in computer memory and represents a wide dynamic range of values by using a floating radix point. Double-precision floating-point format usually refers to binary64, as specified by the IEEE 754 standard. In older computers, different floating-point formats of 8 bytes were used; e.g., GW-BASIC's double-precision data type was the 64-bit MBF floating-point format. Double-precision binary floating-point is a commonly used format on PCs, due to its wider range over single-precision floating point, in spite of its performance and bandwidth cost. As with single-precision floating-point format, it lacks precision on integer numbers when compared with an integer format of the same size. It is commonly known simply as double. The IEEE 754 standard specifies a binary64 as having: a sign bit (1 bit), an exponent width of 11 bits, and a significand precision of 53 bits (52 explicitly stored). This gives 15–17 significant decimal digits of precision. If an IEEE 754 double-precision value is converted to a decimal string with at least 17 significant digits and then converted back to double, the result must equal the original number. The format is written with the significand having an implicit integer bit of value 1; with the 52 bits of the fraction significand appearing in the memory format, the total precision is therefore 53 bits. Between 2^52 and 2^53 the representable numbers are exactly the integers; for the next range, from 2^53 to 2^54, everything is multiplied by 2, so the representable numbers are the even ones. Conversely, for the range from 2^51 to 2^52, the spacing is 0.5. The spacing as a fraction of the numbers in the range from 2^n to 2^(n+1) is 2^(n−52); the maximum relative rounding error when rounding a number to the nearest representable one is therefore 2^−53. The 11-bit width of the exponent allows the representation of numbers between 10^−308 and 10^308, with full 15–17 decimal digit precision. By compromising precision, the subnormal representation allows even smaller values, down to about 5 × 10^−324. The double-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 1023.
All bit patterns are valid encodings. Except for the above exceptions, the entire double-precision number is described by (−1)^sign × 2^(exponent − exponent bias) × 1.fraction. In the case of subnormals, the double-precision number is described by (−1)^sign × 2^(1 − exponent bias) × 0.fraction. Because there have been many floating-point formats with no network standard representation for them, the XDR standard uses big-endian IEEE 754 as its representation. It may therefore appear strange that the widespread IEEE 754 floating-point standard does not specify endianness. Theoretically, this means that even standard IEEE floating-point data written by one machine might not be readable by another. One area of computing where this is an issue is for parallel code running on GPUs; for example, on NVIDIA's CUDA platform, calculations with double precision can take several times as long as those done using single precision, depending on the hardware, and on video cards designed for gaming double-precision throughput is reduced further. Doubles are implemented in many programming languages in different ways.
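A short Python check of the bit layout, spacing, and round-trip claims above (math.ulp requires Python 3.9 or later):

```python
import math
import struct

# 1.0 is sign 0, biased exponent 1023 (0x3FF), fraction 0:
bits = struct.unpack('>Q', struct.pack('>d', 1.0))[0]
print(hex(bits))                 # 0x3ff0000000000000

# Spacing of representable doubles in [2**n, 2**(n+1)) is 2**(n-52):
print(math.ulp(2.0 ** 51))       # 0.5
print(math.ulp(2.0 ** 52))       # 1.0
print(math.ulp(2.0 ** 53))       # 2.0

# A round trip through 17 significant decimal digits recovers the double:
x = 0.1
print(float(f"{x:.17g}") == x)   # True
```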
6.
Quadruple-precision floating-point format
–
In computing, quadruple precision is a binary floating-point format that occupies 16 bytes (128 bits) in computer memory; that kind of gradual evolution towards wider precision was already in view when IEEE Standard 754 for Floating-Point Arithmetic was framed. In IEEE 754-2008 the 128-bit base-2 format is referred to as binary128. The IEEE 754 standard specifies a binary128 as having: a sign bit (1 bit), an exponent width of 15 bits, and a significand precision of 113 bits (112 explicitly stored). This gives from 33 to 36 significant decimal digits of precision. The format is written with an implicit lead bit with value 1 unless the exponent is stored with all zeros; thus only 112 bits of the significand appear in the memory format, but the total precision is 113 bits. (By comparison, a binary256 format would have a significand precision of 237 bits.) The stored exponents 0000 and 7FFF (hexadecimal) are interpreted specially. The minimum strictly positive (subnormal) value is 2^−16494 ≈ 10^−4965 and has a precision of only one bit. The minimum positive normal value is 2^−16382 ≈ 3.3621 × 10^−4932 and has a precision of 113 bits. The maximum representable value is 2^16384 − 2^16271 ≈ 1.1897 × 10^4932. Examples are conventionally given in the bit representation, in hexadecimal, of the floating-point value, including the sign, exponent, and significand; for instance, when rounding 1/3 to quadruple precision, the bits beyond the rounding point are 0101..., which is less than 1/2 of a unit in the last place, so the value rounds down. A common software technique to implement nearly quadruple precision using pairs of double-precision values is sometimes called double-double arithmetic. That is, a pair of doubles (hi, lo), with hi + lo approximating q, is stored in place of the quadruple-precision value q. Double-double arithmetic has the following special characteristics: as the magnitude of the value decreases, the amount of extra precision also decreases; therefore, the smallest number in the range is narrower than double precision. The smallest number with full precision is 1000.0 (binary) × 2^−1074; numbers whose magnitude is smaller than 2^−1021 will not have additional precision compared with double precision.
The actual number of bits of precision can vary. In general, the magnitude of the low-order part of the number is no greater than half a ULP of the high-order part; if the low-order part is less than half a ULP of the high-order part, certain algorithms that rely on having a fixed number of bits in the significand can fail when using such 128-bit long double numbers. Because of the property above, it is possible to represent values like 1 + 2^−1074, which a single double cannot. Similar techniques represent a number as a sum of three or four double-precision values respectively; they can represent operations with at least 159/161 and 212/215 bits respectively. A similar technique can be used to produce double-quad arithmetic, which represents a number as a sum of two quadruple-precision values; it can represent operations with at least 226 bits. Quadruple precision is often implemented in software by a variety of techniques, since direct hardware support for quadruple precision is, as of 2016, less common.
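A minimal sketch of the double-double idea, built on Knuth's error-free two-sum transformation; the function names are illustrative, and a real library handles many more cases (normalization, infinities, NaNs):

```python
def two_sum(a: float, b: float):
    """Error-free transformation: returns (s, err) with a + b == s + err exactly."""
    s = a + b
    v = s - a
    err = (a - (s - v)) + (b - v)
    return s, err

def dd_add(x, y):
    """Add two double-double values represented as (hi, lo) pairs.
    A sketch only: no handling of edge cases, infinities, or NaNs."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return two_sum(s, e)

# 1 + 2**-60 rounds to 1.0 in plain double arithmetic, but the pair keeps it:
print(1.0 + 2.0 ** -60 == 1.0)                    # True: the low bits are lost
hi, lo = dd_add((1.0, 0.0), (2.0 ** -60, 0.0))
print(hi == 1.0 and lo == 2.0 ** -60)             # True: preserved in the pair
```

The two_sum identity relies on round-to-nearest IEEE arithmetic, which is the default in CPython.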
7.
Octuple-precision floating-point format
–
In computing, octuple precision is a binary floating-point-based computer number format that occupies 32 bytes (256 bits) in computer memory. This 256-bit octuple-precision format is for applications requiring results in higher than quadruple precision; it is rarely used, and very few environments support it. The format uses an implicit lead bit, so only 236 bits of the significand appear in the memory format, for a total precision of 237 bits. The stored exponents 00000 and 7FFFF (hexadecimal) are interpreted specially. The minimum strictly positive (subnormal) value is 2^−262378 ≈ 10^−78984 and has a precision of only one bit. The minimum positive normal value is 2^−262142 ≈ 2.4824 × 10^−78913. The maximum representable value is 2^262144 − 2^261907 ≈ 1.6113 × 10^78913. Examples are conventionally given in the bit representation, in hexadecimal, of the floating-point value, including the sign, exponent, and significand; for instance, when rounding 1/3 to octuple precision, the bits beyond the rounding point are 0101..., which is less than 1/2 of a unit in the last place, so the value rounds down. Octuple precision is rarely implemented since its usage is extremely rare. Apple Inc. had an implementation of addition, subtraction, and multiplication of numbers with a 224-bit two's complement significand. One can use general arbitrary-precision arithmetic libraries to obtain octuple precision; there is little to no hardware support for it, and octuple-precision arithmetic is too impractical for most commercial uses. See also: IEEE Standard for Floating-Point Arithmetic; ISO/IEC 10967, Language-independent arithmetic; Primitive data type.
8.
Decimal floating point
–
Decimal floating-point arithmetic refers to both a representation of and operations on decimal floating-point numbers. Working directly with decimal fractions can avoid the rounding errors that otherwise typically occur when converting between decimal fractions and binary fractions. The advantage of decimal floating-point representation over decimal fixed-point and integer representation is that it supports a much wider range of values. Early mechanical uses of decimal floating point are evident in the abacus, slide rule, and the Smallwood calculator; in the case of the mechanical calculators, the exponent is often treated as side information that is accounted for separately. Some computer languages have implementations of decimal floating-point arithmetic, including PL/I, Java with BigDecimal, and Emacs with calc. The lack of a standard encoding was subsequently addressed in IEEE 754-2008, which standardized the encoding of decimal floating-point data, albeit with two different alternative methods. IBM POWER6 includes DFP in hardware, as does the IBM System z9; SilMinds offers SilAx, a configurable vector DFP coprocessor. IEEE 754-2008 defines this in more detail: the standard defines 32-, 64- and 128-bit decimal floating-point representations. Like the binary floating-point formats, the number is divided into a sign, an exponent, and a significand. Unlike binary floating-point, numbers are not necessarily normalized; values with few significant digits have multiple possible representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. When the significand is zero, the exponent can be any value at all. The exponent ranges were chosen so that the range available to normalized values is approximately symmetrical; since this cannot be done exactly with an even number of possible exponent values, the extra value was given to Emax. Two different representations are defined. One, with a binary integer significand field, encodes the significand as a binary integer between 0 and 10^p − 1; this is expected to be convenient for software implementations using a binary ALU.
The other, with a densely packed decimal significand field, encodes decimal digits more directly; this makes conversion to and from decimal form faster, but requires specialized hardware to manipulate efficiently, and is expected to be convenient for hardware implementations. Both alternatives provide exactly the same range of representable values. The most significant two bits of the exponent are limited to the range 0−2, and the most significant 4 bits of the significand are limited to the range 0−9. The 30 possible combinations are encoded in a 5-bit field, along with special forms for infinity and NaN. Thus, it is possible to initialize an array to NaNs by filling it with a single byte value.
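Python's decimal module implements this style of arithmetic and demonstrates both the exactness and the non-normalized representations described above:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly; decimal can:
print(0.1 + 0.2 == 0.3)                                     # False
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))    # True

# Values are not normalized: 1E+2 and 100 are equal but encoded differently.
a, b = Decimal('1E+2'), Decimal('100')
print(a == b)            # True
print(a.as_tuple())      # DecimalTuple(sign=0, digits=(1,), exponent=2)
print(b.as_tuple())      # DecimalTuple(sign=0, digits=(1, 0, 0), exponent=0)
```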
9.
Decimal32 floating-point format
–
In computing, decimal32 is a decimal floating-point computer numbering format that occupies 4 bytes (32 bits) in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial computations. Like the binary16 format, it is intended for memory-saving storage. Decimal32 supports 7 decimal digits of significand and an exponent range of −95 to +96. Because the significand is not normalized, most values with fewer than 7 significant digits have multiple possible representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal32 floating point is a relatively new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as with ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative representation methods for decimal32 values, and the standard does not specify how to signify which representation is used. In one representation method, based on binary integer decimal (BID), the significand is represented as a binary-coded positive integer. The other, alternative, representation method is based on densely packed decimal (DPD) for most of the significand. Both alternatives provide exactly the same range of representable numbers: 7 digits of significand and 3 × 2^6 = 192 possible exponent values; the remaining combinations of the combination field encode infinities and NaNs. The BID format uses a binary significand from 0 to 10^7 − 1 = 9999999 = 98967F (hexadecimal) = 100110001001011001111111 (binary). The encoding could represent binary significands up to 10 × 2^20 − 1 = 10485759 = 9FFFFF (hexadecimal) = 100111111111111111111111 (binary), but values larger than 10^7 − 1 are illegal. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher. If the 2 bits after the sign bit are 11, then the 8-bit exponent field is shifted 2 bits to the right, and there is an implicit leading 3-bit sequence 100 in the true significand; compare having an implicit 1 in the significand of normal values for the binary formats.
Note also that the 2 bits 00, 01, or 10 after the sign bit are, in the unshifted case, part of the exponent field. In the DPD format, the leading digit is between 0 and 9, and the rest of the significand uses the densely packed decimal encoding. The six bits after the combination field are the exponent continuation field, providing the less-significant bits of the exponent, and the last 20 bits are the significand continuation field, consisting of two 10-bit declets. Each declet encodes three decimal digits using the DPD encoding. The DPD/3BCD transcoding for the declets is given by a lookup table, where b9…b0 are the bits of the DPD encoding and d2…d0 are the three BCD digits. The 8 decimal values whose digits are all 8s or 9s have four codings each; the bits marked x in such codings are ignored on input, but will always be 0 in computed results.
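The BID variant of this layout can be sketched in a few lines of Python. The function name is illustrative, and NaNs, infinities, and canonicalization are deliberately left out; it only builds the two finite-number forms described above:

```python
def encode_decimal32_bid(sign: int, significand: int, exponent: int) -> int:
    """Sketch of IEEE 754 decimal32 binary-integer-decimal (BID) encoding.

    sign is 0 or 1, significand is 0..9999999, exponent is -95..+96
    (stored with bias 101). Not a full implementation: special values
    and canonical-form checks are omitted.
    """
    assert sign in (0, 1) and 0 <= significand <= 9_999_999
    assert -95 <= exponent <= 96
    e = exponent + 101                      # biased exponent, 0..191
    if significand < 1 << 23:               # fits in 23 bits: plain form
        return (sign << 31) | (e << 23) | significand
    # Larger significands begin with binary 100x_xxxx...: mark the form
    # with '11' after the sign bit, shift the exponent field right by 2,
    # and store only the low 21 bits (the leading 100 is implicit).
    return (sign << 31) | (0b11 << 29) | (e << 21) | (significand & 0x1FFFFF)

print(hex(encode_decimal32_bid(0, 1, 0)))           # 0x32800001 encodes +1
print(hex(encode_decimal32_bid(0, 9_999_999, 90)))  # 0x77f8967f, the largest finite value
```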
10.
Decimal64 floating-point format
–
In computing, decimal64 is a decimal floating-point computer numbering format that occupies 8 bytes (64 bits) in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial computations. Decimal64 supports 16 decimal digits of significand and an exponent range of −383 to +384, i.e. ±0.000000000000000×10^−383 to ±9.999999999999999×10^384. In contrast, the corresponding binary format, which is the most commonly used type, has an approximate range of ±0.000000000000001×10^−308 to ±1.797693134862315×10^308. Because the significand is not normalized, most values with fewer than 16 significant digits have multiple representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal64 floating point is a relatively new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as with ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative representation methods for decimal64 values. Both alternatives provide exactly the same range of representable numbers: 16 digits of significand and 3 × 2^8 = 768 possible exponent values. In both cases, the most significant 4 bits of the significand are combined with the most significant 2 bits of the exponent to use 30 of the 32 possible values of a 5-bit combination field; the remaining combinations encode infinities and NaNs. In the cases of Infinity and NaN, all other bits of the encoding are ignored; thus, it is possible to initialize an array to Infinities or NaNs by filling it with a single byte value. The BID format uses a binary significand from 0 to 10^16 − 1 = 9999999999999999 = 2386F26FC0FFFF (hexadecimal) = 100011100001101111001001101111110000001111111111111111 (binary). The encoding, completely stored on 64 bits, could represent binary significands up to 10 × 2^50 − 1 = 11258999068426239 = 27FFFFFFFFFFFF (hexadecimal), but values larger than 10^16 − 1 are illegal. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher. If the 2 bits after the sign bit are 11, then the 10-bit exponent field is shifted 2 bits to the right.
In this case there is an implicit leading 3-bit sequence 100 for the most significant bits of the true significand; compare the implicit 1-bit prefix in the significand of normal values for the binary formats. Note also that the 2-bit sequences 00, 01, or 10 after the sign bit are, in the unshifted case, part of the exponent field. Note that the leading bits of the significand field do not encode the most significant decimal digit: the highest valid significand is 9999999999999999, whose stored encoding (with the implicit leading 100 removed) is 011100001101111001001101111110000001111111111111111 (binary). In the DPD format, the leading digit is between 0 and 9, and the rest of the significand uses the densely packed decimal encoding. The eight bits after the combination field are the exponent continuation field, providing the less-significant bits of the exponent, and the last 50 bits are the significand continuation field, consisting of five 10-bit declets.
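The significand bounds quoted above are easy to verify directly in Python:

```python
max_sig = 10 ** 16 - 1                # largest valid decimal64 significand
print(hex(max_sig))                   # 0x2386f26fc0ffff
print(max_sig.bit_length())           # 54 bits, so the shifted '11' form is needed
capacity = 10 * 2 ** 50 - 1           # largest value the encoding itself could hold
print(hex(capacity))                  # 0x27ffffffffffff
print(max_sig <= capacity)            # True; anything above 10**16 - 1 is illegal
```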
11.
Decimal128 floating-point format
–
In computing, decimal128 is a decimal floating-point computer numbering format that occupies 16 bytes (128 bits) in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial computations. Decimal128 supports 34 decimal digits of significand and an exponent range of −6143 to +6144, i.e. ±0.000000000000000000000000000000000×10^−6143 to ±9.999999999999999999999999999999999×10^6144. Decimal128 therefore has the greatest range of values of the IEEE basic floating-point formats. Because the significand is not normalized, most values with fewer than 34 significant digits have multiple possible representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal128 floating point is a relatively new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as with ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative representation methods for decimal128 values, and the standard does not specify how to signify which representation is used. In one representation method, based on binary integer decimal (BID), the significand is represented as a binary-coded positive integer. The other, alternative, representation method is based on densely packed decimal (DPD) for most of the significand. Both alternatives provide exactly the same range of representable numbers: 34 digits of significand and 3 × 2^12 = 12288 possible exponent values. In both cases, the most significant 4 bits of the significand are combined with the most significant 2 bits of the exponent to use 30 of the 32 possible values of the 5-bit combination field; the remaining combinations encode infinities and NaNs. In the case of Infinity and NaN, all other bits of the encoding are ignored; thus, it is possible to initialize an array to Infinities or NaNs by filling it with a single byte value.
The BID encoding could represent binary significands up to 10 × 2^110 − 1 = 12980742146337069071326240823050239, but values larger than 10^34 − 1 are illegal. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher. If the 2 bits after the sign bit are 11, then the 14-bit exponent field is shifted 2 bits to the right, and there is an implicit leading 3-bit sequence 100 in the true significand; compare having an implicit 1 in the significand of normal values for the binary formats. Note also that the 2 bits 00, 01, or 10 after the sign bit are, in the unshifted case, part of the exponent field. For the decimal128 format, all of these shifted-form significands are out of the valid range, and are thus decoded as zero, but the pattern is the same as decimal32 and decimal64. In the DPD format, the leading digit is between 0 and 9, and the rest of the significand uses the densely packed decimal encoding. The twelve bits after the combination field are the exponent continuation field, providing the less-significant bits of the exponent, and the last 110 bits are the significand continuation field, consisting of eleven 10-bit declets. Each declet encodes three decimal digits using the DPD encoding. The DPD/3BCD transcoding for the declets is given by a lookup table, where b9…b0 are the bits of the DPD encoding and d2…d0 are the three BCD digits. The 8 decimal values whose digits are all 8s or 9s have four codings each.
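Again, the encoding capacity quoted above, and the reason the shifted forms decode as zero, can be checked directly:

```python
capacity = 10 * 2 ** 110 - 1    # largest binary significand the BID field could hold
print(capacity)                 # 12980742146337069071326240823050239
max_sig = 10 ** 34 - 1          # largest *valid* decimal128 significand
print(max_sig < 2 ** 113)       # True: every valid significand fits in 113 bits,
                                # so the shifted '11' forms are never needed
```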
12.
Computer architecture
–
In computer engineering, computer architecture is a set of rules and methods that describe the functionality, organization, and implementation of computer systems. Some definitions of architecture define it as describing the capabilities and programming model of a computer; in other definitions computer architecture involves instruction set architecture design, microarchitecture design, logic design, and implementation. The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace, describing the analytical engine. Later, Lyle R. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos National Laboratory, and Frederick Brooks went on to develop the IBM System/360 line of computers. Later still, computer users came to use the term in many less-explicit ways. The earliest computer architectures were designed on paper and then directly built into the final hardware form. The discipline of computer architecture has three main subcategories. Instruction set architecture (ISA): the ISA defines the machine code that a processor reads and acts upon, as well as the word size, memory address modes, and processor registers. Microarchitecture, or computer organization: describes how a particular processor will implement the ISA; the size of a computer's CPU cache, for instance, is an issue that generally has nothing to do with the ISA. System design: includes all of the other hardware components within a computing system, such as data processing other than the CPU (for example, direct memory access) and other issues such as virtualization and multiprocessing. There are other types of computer architecture. For example, the C, C++, or Java standards define different programmer-visible macroarchitectures. UISA: a group of machines with different hardware-level microarchitectures may share a common microcode architecture. Pin architecture: the hardware functions that a microprocessor should provide to a hardware platform, e.g.
the x86 pins A20M, FERR/IGNNE, or FLUSH, as well as the messages that the processor should emit so that external caches can be invalidated. Pin architecture functions are more flexible than ISA functions because external hardware can adapt to new encodings or change from a pin to a message. The term "architecture" fits because the functions must be provided for compatible systems, even if the detailed method changes.

The purpose of computer architecture is to design a computer that maximizes performance while keeping power consumption in check, keeps costs low relative to the amount of expected performance, and is also very reliable. Many aspects must be considered, including instruction set design, functional organization, and logic design; the implementation involves integrated circuit design, packaging, power, and cooling. Optimization of the design requires familiarity with topics ranging from compilers and operating systems to logic design and packaging.

An instruction set architecture is the interface between the computer's software and hardware, and can also be viewed as the programmer's view of the machine. Computers do not understand high-level languages such as Java or C++; a processor understands only instructions encoded in some numerical fashion, usually as binary numbers. Software tools, such as compilers, translate those high-level languages into instructions that the processor can understand. Besides instructions, the ISA defines items in the computer that are available to a program, e.g. data types, registers, and addressing modes.
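That translation step can be observed directly in a language runtime: CPython, for instance, compiles source code into numerically encoded bytecode instructions, which its standard `dis` module can list. This is a virtual machine's instruction set rather than a hardware ISA, but the principle is the same: named operations encoded as numbers that the execution engine decodes.

```python
import dis

def add(a, b):
    return a + b

# Each bytecode instruction has a human-readable mnemonic and a numeric
# opcode, just as hardware instructions have mnemonics and binary encodings.
for ins in dis.get_instructions(add):
    print(f"{ins.opcode:3d}  {ins.opname}")
```

The exact opcodes printed vary between CPython versions, which is itself a small illustration of the ISA/implementation split: the source-level behavior is stable while the underlying encoding changes.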
13.
Instruction set architecture
–
An ISA includes a specification of the set of opcodes, the native commands implemented by a particular processor. An instruction set architecture is distinguished from a microarchitecture, which is the set of design techniques used in a particular processor to implement the instruction set. Processors with different microarchitectures can share a common instruction set: for example, the Intel Pentium and the AMD Athlon implement nearly identical versions of the x86 instruction set but have radically different internal designs.

The concept of an architecture, distinct from the design of a specific machine, was developed by Fred Brooks at IBM during the design phase of System/360. Prior to NPL, the company's computer designers had been free to honor cost objectives not only by selecting technologies but also by fashioning functional and architectural refinements. The SPREAD compatibility objective, in contrast, postulated a single architecture for a series of five processors spanning a wide range of cost and performance.

Some virtual machines that use bytecode as their ISA translate commonly used code paths into native machine code; in addition, these virtual machines execute less frequently used code paths by interpretation. Transmeta implemented the x86 instruction set atop VLIW processors in this fashion.

A complex instruction set computer has many specialized instructions, some of which may only rarely be used in practical programs. Theoretically important types are the minimal instruction set computer and the one-instruction set computer. Another variation is the very long instruction word (VLIW), in which the processor receives many instructions encoded and retrieved in one instruction word.

Machine language is built up from discrete statements or instructions. Examples of operations common to many instruction sets include: Set a register to a constant value. Copy data from a memory location to a register, or vice versa, to store the contents of a register or the result of a computation; these are often called load and store operations.
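The one-instruction set computer mentioned above is not just a curiosity: a single subtract-and-branch-if-nonpositive instruction, usually called `subleq`, suffices for general computation. A minimal sketch in Python (the memory layout and the negative-address halting convention are illustrative choices, not a particular historical machine):

```python
def run_subleq(mem):
    """Interpret a subleq program: instructions are triples (a, b, c).
    Semantics: mem[b] -= mem[a]; if the result is <= 0, jump to c,
    otherwise fall through to the next triple. A negative jump target
    halts the machine."""
    pc = 0
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# Add the values at addresses 9 and 10 (2 + 3), leaving the sum at
# address 10; address 11 is a scratch cell holding zero.
mem = [9, 11, 3,   11, 10, 6,   11, 11, -1,   2, 3, 0]
run_subleq(mem)
print(mem[10])  # → 5
```

Even addition takes three `subleq` instructions here, which is why one-instruction machines matter in theory but not in practice.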
Read and write data from hardware devices. Add, subtract, multiply, or divide the values of two registers, placing the result in a register, possibly setting one or more condition codes in a status register. Increment or decrement operations, which in some ISAs save an operand fetch in trivial cases. Perform bitwise operations, e.g. taking the conjunction and disjunction of corresponding bits in a pair of registers, or taking the negation of each bit in a register. Floating-point instructions for arithmetic on floating-point numbers. Branch to another location in the program and execute instructions there. Conditionally branch to another location if a certain condition holds. Call another block of code, while saving the location of the next instruction as a point to return to. Load/store data to and from a coprocessor, or exchange it with CPU registers. Processors may also include complex instructions in their instruction set.
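The interplay between arithmetic instructions and condition codes described above can be sketched in a few lines: an 8-bit add that sets zero and carry flags in a status register, in the style of many real ISAs. The flag names and the two-flag register here are illustrative simplifications, not a specific processor's flag set.

```python
def add8(x, y):
    """8-bit register add: return (result, flags), where the flags hold
    Z (result is zero) and C (carry out of bit 7), as a status register
    in a real processor would."""
    total = x + y
    result = total & 0xFF                          # truncate to the 8-bit word size
    flags = {"Z": result == 0, "C": total > 0xFF}  # condition codes
    return result, flags

print(add8(200, 100))  # 300 wraps to 44, with the carry flag set
print(add8(128, 128))  # 256 wraps to 0: both zero and carry flags set
```

A conditional branch then simply tests one of these flags, which is how "conditionally branch if a certain condition holds" is typically realized in hardware.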
14.
Small-scale integration
–
An integrated circuit, or monolithic integrated circuit, is a set of electronic circuits on one small flat piece of semiconductor material, normally silicon. The IC's mass-production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors. ICs are now used in virtually all electronic equipment and have revolutionized the world of electronics. Computers, mobile phones, and other home appliances, now inextricable parts of the structure of modern societies, are made possible by the small size and low cost of ICs. These advances, roughly following Moore's law, allow a computer chip of 2016 to have millions of times the capacity of the chips of the early 1970s.

ICs have two main advantages over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography rather than being constructed one transistor at a time; furthermore, packaged ICs use much less material than discrete circuits. Performance is high because the IC's components switch quickly and consume comparatively little power because of their small size. The main disadvantage of ICs is the high cost of designing them and fabricating the required photomasks; this high initial cost means ICs are only commercially practical when high production volumes are anticipated.

Circuits meeting this definition can be constructed using many different technologies, including thin-film transistors, thick-film technology, and hybrid integrated circuits. However, in general usage, "integrated circuit" has come to refer to the single-piece circuit construction originally known as a monolithic integrated circuit.

Werner Jacobi of Siemens filed an early patent for an integrated-circuit-like semiconductor amplifying device in 1949; Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent, but an immediate commercial use of it has not been reported. The idea of the integrated circuit itself was conceived by Geoffrey Dummer.
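The "millions of times" figure is consistent with simple Moore's-law arithmetic: a doubling of transistor counts roughly every two years, compounded from the first commercial microprocessors of the early 1970s to 2016, yields a factor in the millions. A back-of-the-envelope check (the 1971 start year and the two-year doubling period are the customary rough assumptions, not exact industry data):

```python
# Moore's law, back of the envelope: counts double about every two years.
start_year, end_year = 1971, 2016
doublings = (end_year - start_year) / 2   # 22.5 doublings over 45 years
growth = 2 ** doublings
print(f"growth factor ≈ {growth:,.0f}")   # on the order of millions
```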
Dummer presented the idea to the public at the Symposium on Progress in Quality Electronic Components in Washington, D.C., in 1952. He gave many public symposia to propagate his ideas, and unsuccessfully attempted to build such a circuit in 1956.

A precursor idea to the IC was to create small ceramic squares, each containing a single miniaturized component; the components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which seemed very promising in 1957, was proposed to the US Army by Jack Kilby. However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC. In his patent application of 6 February 1959, Kilby described his new device as "a body of semiconductor material … wherein all the components of the electronic circuit are completely integrated". The first customer for the new invention was the US Air Force. Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit, and his work was named an IEEE Milestone in 2009.

Half a year after Kilby, Robert Noyce at Fairchild Semiconductor developed his own idea of an integrated circuit, one that solved many practical problems Kilby's had not. Noyce's design was made of silicon, whereas Kilby's chip was made of germanium. Noyce credited Kurt Lehovec of Sprague Electric for the principle of p–n junction isolation, a key concept behind the IC.
15.
Wang Laboratories
–
Wang Laboratories was a computer company founded in 1951 by Dr. An Wang. The company was headquartered first in Cambridge, Massachusetts, and later in Tewksbury, Massachusetts. At its peak in the 1980s, Wang Laboratories had annual revenues of $3 billion, and it was one of the leading companies during the time of the Massachusetts Miracle. The company was directed by Dr. Wang, who was described as an indispensable leader and played a personal role in setting business strategy. Under his direction, the company went through several transitions between different product lines. Wang Laboratories filed for bankruptcy protection in August 1992; after emerging from bankruptcy, the company changed its name to Wang Global.

An Wang took steps to ensure that the Wang family would retain control of the company even after it went public. He created a second class of stock, class B, with higher dividends but only one-tenth the voting power of class C. The public mostly bought class B shares, while the Wang family retained most of the class C shares.

The company's first major project was the Linasec, in 1964. It was a special-purpose computer designed to justify paper tape for use on automated Linotype machines. It was developed under contract to Compugraphic, which manufactured phototypesetters; Compugraphic retained the rights to manufacture the Linasec without royalty, and it exercised these rights, effectively forcing Wang out of the market.

The Wang LOCI-2 Logarithmic Computing Instrument desktop calculator was introduced in January 1965. Using factor combining, it was probably the first desktop calculator capable of computing logarithms; the electronics included 1,275 discrete transistors. It actually performed multiplication by adding logarithms, and roundoff in the conversion was noticeable: 2 times 2 yielded 3.999999999.
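The LOCI-2's 3.999999999 result is easy to reproduce in principle: when a machine multiplies by adding logarithms, the logs must be rounded to the machine's digit capacity, and the antilog then lands slightly off the exact product. A sketch of the effect (the base-10 logs and the 10-digit rounding are illustrative stand-ins, not the LOCI-2's actual internal representation):

```python
import math

def log_multiply(x, y, digits=10):
    """Multiply by adding base-10 logarithms, rounding each log to a
    fixed number of digits as a fixed-precision calculator must."""
    lx = round(math.log10(x), digits)
    ly = round(math.log10(y), digits)
    return 10 ** (lx + ly)

print(log_multiply(2, 2))  # close to, but not exactly, 4
```

The rounding of each logarithm introduces a tiny error in the exponent, which the antilog step turns into a visible deviation in the last displayed digits, exactly the behavior Wang's customers observed.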
From 1965 to about 1971, Wang was a well-regarded calculator company. Wang calculators cost in the mid four figures, used Nixie-tube readouts, performed transcendental functions, had varying degrees of programmability, and exploited magnetic core memory. The 200 and 300 calculator models were available as timeshared "simultaneous" packages, with a central processing unit connected by cables to four individual desktop display/keyboard units. Competition included HP, which introduced the HP 9100A in 1968. One perhaps apocryphal story tells of a banker who spot-checked a Wang calculator against a mortgage table and found a discrepancy: the calculator was right, and the tables were wrong.