1.
Computer memory
–
In computing, memory refers to the computer hardware devices used to store information for immediate use; it is synonymous with the term primary storage. Computer memory operates at high speed, for example random-access memory (RAM), as a distinction from storage that provides slow-to-access program and data storage. If needed, contents of the memory can be transferred to secondary storage. An archaic synonym for memory is store. There are two main kinds of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are flash memory and ROM, PROM, EPROM and EEPROM memory. Most semiconductor memory is organized into memory cells or bistable flip-flops, each storing one bit. Flash memory organization includes both one bit per cell and multiple bits per cell. The memory cells are grouped into words of fixed word length; each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. This implies that processor registers are normally not considered memory, since they store only one word. Typical secondary storage devices are hard disk drives and solid-state drives. In the early 1940s, memory technology often permitted a capacity of only a few bytes. The next significant advance in computer memory came with acoustic delay line memory, developed by J. Presper Eckert in the early 1940s. Delay line memory was limited to a capacity of up to a few hundred thousand bits to remain efficient. Two alternatives to the delay line, the Williams tube and the Selectron tube, originated in 1946, both using electron beams in glass tubes as a means of storage. Using cathode ray tubes, Fred Williams invented the Williams tube, which proved more capacious than the Selectron tube and less expensive, but also frustratingly sensitive to environmental disturbances. Efforts began in the late 1940s to find non-volatile memory. Jay Forrester, Jan A.
Rajchman and An Wang developed magnetic core memory, which allowed recall of memory after power loss. Magnetic core memory became the dominant form of memory until the development of transistor-based memory in the late 1960s. Developments in technology and economies of scale have made possible so-called Very Large Memory computers. The term memory, when used with reference to computers, generally refers to random-access memory (RAM). Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). SRAM retains its contents as long as power is connected and is easy to interface, but uses six transistors per bit. SRAM is not worthwhile for desktop system memory, where DRAM dominates, but it is commonplace in small embedded systems, which might only need tens of kilobytes or less. Forthcoming volatile memory technologies that aim to replace or compete with SRAM and DRAM include Z-RAM and A-RAM. Non-volatile memory is computer memory that can retain the stored information even when not powered.
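The word-addressing arithmetic above (an N-bit address reaching 2^N words) can be sketched in a few lines of Python; the helper names here are illustrative, not from any particular system:

```python
# An N-bit binary address can select any one of 2**N words,
# so capacity in words is 2 raised to the address width.
def addressable_words(address_bits):
    return 2 ** address_bits

# Conversely, the minimum address width for a given number of words.
def address_bits_needed(num_words):
    return (num_words - 1).bit_length()

assert addressable_words(10) == 1024        # a 10-bit address reaches 1024 words
assert address_bits_needed(65536) == 16     # 64 Ki words need a 16-bit address
```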
2.
Dynamic range
–
Dynamic range, abbreviated DR, DNR, or DYR, is the ratio between the largest and smallest values that a certain quantity can assume. It is often used in the context of signals, like sound and light. It is measured either as a ratio or as a base-10 (decibel) or base-2 logarithmic value of the difference between the smallest and largest signal values. The human senses of sight and hearing have a high dynamic range. A human is capable of hearing anything from a quiet murmur in a room to the sound of the loudest heavy metal concert; such a difference can exceed 100 dB, which represents a factor of 100,000 in amplitude. However, a human cannot perform these feats of perception at both extremes of the scale at the same time. The eyes take time to adjust to different light levels, and the instantaneous dynamic range of human audio perception is similarly subject to masking, so that, for example, a whisper cannot be heard in loud surroundings. In practice, it is difficult to achieve the full dynamic range experienced by humans using electronic equipment. For example, a good quality LCD has a range of around 1000:1, and paper reflectance can achieve a range of about 100:1. A professional ENG camcorder such as the Sony Digital Betacam achieves a range of greater than 90 dB in audio recording. A nighttime scene will usually contain duller colours and will often be lit with blue lighting. The dynamic range of human hearing is roughly 140 dB, varying with frequency, from the threshold of hearing to the threshold of pain, while the dynamic range of music as normally perceived in a concert hall does not exceed 80 dB. The dynamic range differs from the ratio of the maximum to minimum amplitude a given device can record, as a properly dithered recording device can record signals well below the noise RMS amplitude. Digital audio with undithered 20-bit digitization is theoretically capable of 120 dB dynamic range; 24-bit digital audio calculates to 144 dB dynamic range.
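The 120 dB and 144 dB figures follow from the amplitude ratio of full scale to one quantization step, 2^bits, converted to decibels (roughly 6.02 dB per bit); a quick Python check, for illustration only:

```python
import math

# Theoretical dynamic range of undithered n-bit linear quantization:
# amplitude ratio 2**n, expressed in dB as 20 * log10(ratio).
def dynamic_range_db(bits):
    return 20 * math.log10(2 ** bits)

assert round(dynamic_range_db(20)) == 120   # 20-bit audio: ~120 dB
assert round(dynamic_range_db(24)) == 144   # 24-bit audio: ~144 dB
```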
Multiple noise processes determine the noise floor of a system. Noise can be picked up from microphone self-noise, preamp noise, wiring and interconnection noise, media noise, etc. Early 78 rpm phonograph discs had a range of up to 40 dB, soon reduced to 30 dB. Ampex tape recorders in the 1950s achieved 60 dB in practical usage. The peak of professional analog magnetic recording tape technology reached 90 dB dynamic range in the midband frequencies at 3% distortion, or about 80 dB in practical broadband applications. Dolby SR noise reduction gave a further 20 dB of range, resulting in 110 dB in the midband frequencies at 3% distortion. Specialized bias and record head improvements by Nakamichi and Tandberg, combined with Dolby C noise reduction, yielded 72 dB dynamic range for the cassette.
3.
Floating-point arithmetic
–
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation, so as to support a trade-off between range and precision. A number is, in general, represented approximately to a fixed number of significant digits and scaled using an exponent in some fixed base. For example, 1.2345 = 12345 × 10^−4, where 12345 is the significand, 10 is the base, and −4 is the exponent. The term floating point refers to the fact that a number's radix point can float; that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated by the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A consequence of this dynamic range is that the numbers that can be represented are not uniformly spaced. Over the years, a variety of floating-point representations have been used in computers; however, since the 1990s, the most commonly encountered representation is that defined by the IEEE 754 Standard. A floating-point unit (FPU) is a part of a computer system designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number, usually as a string of digits. There are several mechanisms by which strings of digits can represent numbers. In common mathematical notation, the string can be of any length. If the radix point is not specified, then the string implicitly represents an integer. In fixed-point systems, a position in the string is specified for the radix point; a fixed-point scheme might be to use a string of 8 decimal digits with the point in the middle. In scientific notation, the scaling factor, as a power of ten, is indicated separately at the end of the number. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of: a signed digit string of a given length in a given base.
This digit string is referred to as the significand, or mantissa. The length of the significand determines the precision to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand, often just after or just before the most significant digit; this article generally follows the convention that the radix point is set just after the most significant digit. The second component is a signed integer exponent, which modifies the magnitude of the number. Using base-10 as an example, the number 152853.5047, which has ten decimal digits of precision, is represented as the significand 1528535047 together with 5 as the exponent. In storing such a number, the base need not be stored, since it is the same for the entire range of supported numbers. Symbolically, this value is s ÷ b^(p−1) × b^e, where s is the significand, p is the precision, b is the base, and e is the exponent.
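The closing formula can be checked numerically; the Python sketch below (with an illustrative helper name) evaluates s ÷ b^(p−1) × b^e for the worked base-10 example:

```python
# value = s / b**(p - 1) * b**e: significand s read as an integer of
# p digits, base b, exponent e (radix point after the first digit).
def floating_value(s, p, b, e):
    return s / b ** (p - 1) * b ** e

# The example from the text: significand 1528535047, ten digits,
# base 10, exponent 5 gives 152853.5047 (up to float rounding error).
v = floating_value(1528535047, 10, 10, 5)
assert abs(v - 152853.5047) < 1e-6
```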
4.
Integer
–
An integer is a number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75 and 5 1⁄2 are not. The set of integers consists of zero, the positive natural numbers, also called whole numbers or counting numbers, and their additive inverses. This set is often denoted by a boldface Z or blackboard bold ℤ, standing for the German word Zahlen. Z is a subset of the sets of rational and real numbers and, like the natural numbers, is countably infinite. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes called rational integers to distinguish them from the more general algebraic integers. In fact, the (rational) integers are the algebraic integers that are also rational numbers. Like the natural numbers, Z is closed under the operations of addition and multiplication. Moreover, with the inclusion of the negative natural numbers, and, importantly, 0, Z is also closed under subtraction. The integers form a ring which is the most basic one, in the following sense: for any unital ring, there is a unique ring homomorphism from the integers into this ring. This universal property, namely to be an initial object in the category of rings, characterizes the ring Z. Z is not closed under division, since the quotient of two integers need not be an integer. Although the natural numbers are closed under exponentiation, the integers are not. The following lists some of the properties of addition and multiplication for any integers a, b and c. In the language of algebra, the first five properties listed above for addition say that Z under addition is an abelian group. As a group under addition, Z is a cyclic group; in fact, Z under addition is the only infinite cyclic group, in the sense that any infinite cyclic group is isomorphic to Z. The first four properties listed above for multiplication say that Z under multiplication is a commutative monoid. However, not every integer has a multiplicative inverse; e.g., there is no integer x such that 2x = 1, because the left-hand side is even.
This means that Z under multiplication is not a group. All the rules from the above property table, except for the last, taken together say that Z together with addition and multiplication is a commutative ring with unity. It is the prototype of all objects of that algebraic structure. Only those equalities of expressions are true in Z for all values of variables which are true in any unital commutative ring. Note that certain non-zero integers map to zero in certain rings. The lack of zero divisors in the integers means that the commutative ring Z is an integral domain.
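The closure properties described above are easy to spot-check; the short Python sketch below does so for a pair of sample integers:

```python
from fractions import Fraction

# Z is closed under addition, subtraction and multiplication:
a, b = 7, -3
assert all(isinstance(r, int) for r in (a + b, a - b, a * b))

# ... but not under division: 7 / -3 is not an integer.
q = Fraction(a, b)
assert q.denominator != 1

# And not every integer has a multiplicative inverse: no integer x
# satisfies 2x = 1, because the left-hand side is always even.
assert not any(2 * x == 1 for x in range(-1000, 1001))
```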
5.
IEEE 754
–
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point computation established in 1985 by the Institute of Electrical and Electronics Engineers. The standard addressed many problems found in the diverse floating-point implementations of the time that made them difficult to use reliably and portably. Many hardware floating-point units now use the IEEE 754 standard. The international standard ISO/IEC/IEEE 60559:2011 has been approved for adoption through JTC1/SC25 under the ISO/IEEE PSDO Agreement and published. The binary formats in the original standard are included in the new standard along with three new basic formats. To conform to the current standard, an implementation must implement at least one of the basic formats as both an arithmetic format and an interchange format. As of September 2015, the standard is being revised to incorporate clarifications. An IEEE 754 format is a set of representations of numerical values and symbols; a format may also include how the set is encoded. A format comprises: finite numbers, which may be either base 2 or base 10, where each finite number is described by three integers: s = a sign (zero or one), c = a significand, and q = an exponent; two infinities; and two kinds of NaN, a quiet NaN (qNaN) and a signaling NaN (sNaN). The numerical value of a finite number is (−1)^s × c × b^q, where b is the base, also called the radix. For example, if the base is 10, the sign is 1 (indicating negative), the significand is 12345, and the exponent is −3, then the value of the number is −12.345. A NaN may carry a payload that is intended for diagnostic information indicating the source of the NaN; the sign of a NaN has no meaning, but it may be predictable in some circumstances. Hence the smallest non-zero positive number that can be represented is 1×10^−101 and the largest is 9999999×10^90. The numbers −b^(1−emax) and b^(1−emax) are the smallest (in magnitude) normal numbers; non-zero numbers between these smallest numbers are called subnormal numbers. Zero values are finite values with significand 0; these are signed zeros, and the sign bit specifies whether a zero is +0 or −0.
Some numbers may have several representations in the model that has just been described. For instance, if b = 10 and p = 7, −12.345 can be represented by −12345×10^−3, −123450×10^−4, and −1234500×10^−5. However, for most operations, such as arithmetic operations, the result does not depend on the representation of the inputs. For the decimal formats, any representation is valid, and the set of these representations is called a cohort. When a result can have several representations, the standard specifies which member of the cohort is chosen. For the binary formats, the representation is made unique by choosing the smallest representable exponent. For numbers with an exponent in the normal range, the leading bit of the significand will always be 1. Consequently, the leading 1 bit can be implied rather than explicitly present in the memory encoding; this rule is called the leading bit convention, implicit bit convention, or hidden bit convention.
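Python's decimal module follows the same general decimal arithmetic model, so the cohort behaviour described above can be observed directly; a small illustrative sketch, not part of the standard itself:

```python
from decimal import Decimal

# -12.345 has several valid decimal representations (a "cohort"):
# -12345E-3, -123450E-4, ... Python's Decimal preserves the distinct
# representations while treating them as numerically equal.
a = Decimal("-12.345")   # coefficient -12345, exponent -3
b = Decimal("-12.3450")  # coefficient -123450, exponent -4

assert a == b                       # same numerical value
assert str(a) != str(b)             # different members of the cohort
assert a.as_tuple().exponent == -3
assert b.as_tuple().exponent == -4
```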
6.
Standardization
–
Standardization can also facilitate the commoditization of formerly custom processes. This view includes the case of spontaneous standardization processes that produce de facto standards. Standard weights and measures were developed by the Indus Valley Civilisation; weights existed in multiples of a standard weight and in categories. Technical standardisation enabled gauging devices to be used in angular measurement and measurement for construction. Uniform units of length were used in the planning of towns such as Lothal, Surkotada, Kalibangan, Dolavira and Harappa. The weights and measures of the Indus civilisation also reached Persia and Central Asia. Standardisation is also related to processes. In view of large variations in units related to civil, electrical and other engineering streams, engineers united to overcome the situation, and this association later gave birth to ISO, established in 1947. ISO stands for International Organisation for Standardisation; this voluntary organisation is solely dedicated to standardisation and makes standards related to it. Certification as per ISO norms is popular all across the world. Henry Maudslay developed the first industrially practical screw-cutting lathe in 1800, which allowed for the standardisation of screw thread sizes for the first time. Before this, screw threads were usually made by chipping and filing. Nuts were rare; metal screws, when made at all, were usually for use in wood, and metal bolts passing through wood framing to a metal fastening on the other side were usually fastened in non-threaded ways. This was an advance in workshop technology. Maudslay's work, as well as the contributions of other engineers, accomplished a modest amount of industry standardization. Joseph Whitworth's screw thread measurements were adopted as the first national standard by companies around the country in 1841. It came to be known as the British Standard Whitworth, and was adopted in other countries. This new standard specified a 55° thread angle and a thread depth of 0.
640327p and a radius of 0.137329p, where p is the pitch. The thread pitch increased with diameter in steps specified on a chart. An example of the use of the Whitworth thread is the Royal Navy's Crimean War gunboats; these were the first instance of mass-production techniques being applied to marine engineering. The American Unified Coarse thread was originally based on almost the same imperial fractions. The Unified thread angle is 60° and has flattened crests. Thread pitch is the same in both systems except that the thread pitch for the 1⁄2 in bolt is 12 threads per inch in BSW versus 13 tpi in the UNC.
7.
IEEE 754-1985
–
IEEE 754-1985 was an industry standard for representing floating-point numbers in computers, officially adopted in 1985 and superseded in 2008 by IEEE 754-2008. During its 23 years, it was the most widely used format for floating-point computation. It was implemented in software, in the form of floating-point libraries, and in hardware, in the instructions of many CPUs and FPUs. The first integrated circuit to implement the draft of what was to become IEEE 754-1985 was the Intel 8087. Floating-point numbers in IEEE 754 format consist of three fields: a sign bit, a biased exponent, and a fraction. The following example illustrates the meaning of each. The decimal number 0.15625 represented in binary is 0.00101 (that is, 1.01 × 2^−3). The three fields in the IEEE 754 representation of this number are: sign = 0, because the number is positive; biased exponent = −3 + the bias. In single precision, the bias is 127, so in this example the biased exponent is 124; in double precision, the bias is 1023, so the biased exponent in this example is 1020. IEEE 754 adds a bias to the exponent so that numbers can in many cases be compared conveniently by the same hardware that compares signed 2's-complement integers. Using a biased exponent, the lesser of two positive floating-point numbers will come out less than the greater one, following the same ordering as for sign-and-magnitude integers. If two floating-point numbers have different signs, the sign-and-magnitude comparison also works with biased exponents. However, if both biased-exponent floating-point numbers are negative, then the ordering must be reversed. If the exponent were represented as, say, a 2's-complement number, comparison would not be as convenient. The leading 1 bit is omitted: since all normalized numbers except zero start with a leading 1, the leading 1 is implicit and doesn't actually need to be stored, which gives an extra bit of precision for free.
The number zero is represented specially: sign = 0 for positive zero, 1 for negative zero, with biased exponent and fraction both all 0 bits. The number representations described above are called normalized, meaning that the implicit leading binary digit is a 1. A denormal number is represented with a biased exponent of all 0 bits; in contrast, the smallest biased exponent representing a normalized number is 1. The biased-exponent field is filled with all 1 bits to indicate either infinity or an invalid result of a computation. Positive and negative infinity are represented thus: sign = 0 for positive infinity, 1 for negative infinity; biased exponent = all 1 bits; fraction = all 0 bits. Some operations of floating-point arithmetic are invalid, such as taking the square root of a negative number. The act of reaching an invalid result is called a floating-point exception. An exceptional result is represented by a special code called a NaN.
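The 0.15625 example above can be verified by unpacking the raw bits of a single-precision value, for instance with Python's struct module:

```python
import struct

# Reinterpret 0.15625 as the 32 raw bits of an IEEE 754 single.
bits = struct.unpack(">I", struct.pack(">f", 0.15625))[0]

sign     = bits >> 31            # 1 bit
biased_e = (bits >> 23) & 0xFF   # 8 bits
fraction = bits & 0x7FFFFF       # 23 bits

assert sign == 0              # the number is positive
assert biased_e == 124        # true exponent -3 plus the bias of 127
assert fraction == 0x200000   # binary 01000...0; the leading 1 is implicit
```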
8.
Double-precision floating-point format
–
Double-precision floating-point format is a computer number format that occupies 8 bytes (64 bits) in computer memory and represents a wide dynamic range of values by using a floating radix point. Double-precision floating-point format usually refers to binary64, as specified by the IEEE 754 standard; in older computers, other floating-point formats of 8 bytes were used, e.g. GW-BASIC's double-precision data type was the 64-bit MBF floating-point format. Double-precision binary floating-point is a commonly used format on PCs, due to its wider range over single-precision floating point, in spite of its performance cost. As with single-precision floating-point format, it lacks precision on integer numbers when compared with an integer format of the same size. It is commonly known simply as double. The IEEE 754 standard specifies a binary64 as having: sign bit, 1 bit; exponent, 11 bits; significand precision, 53 bits. This gives 15–17 significant decimal digits of precision. If an IEEE 754 double-precision number is converted to a decimal string with at least 17 significant digits and then converted back to double, the result must match the original number. The format is written with the significand having an implicit integer bit of value 1; with the 52 bits of the fraction significand appearing in the memory format, the total precision is therefore 53 bits. Between 2^52 and 2^53, the representable numbers are exactly the integers; for the next range, from 2^53 to 2^54, everything is multiplied by 2, so the representable numbers are the even ones. Conversely, for the range from 2^51 to 2^52, the spacing is 0.5. The spacing as a fraction of the numbers in the range from 2^n to 2^(n+1) is 2^(n−52); the maximum relative rounding error when rounding a number to the nearest representable one is therefore 2^−53. The 11-bit width of the exponent allows the representation of numbers between 10^−308 and 10^308, with full 15–17 decimal digits of precision. By compromising precision, the subnormal representation allows even smaller values, down to about 5 × 10^−324. The double-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 1023.
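The spacing and round-trip claims above can be checked directly in Python (math.ulp needs Python 3.9 or later); a small illustrative sketch:

```python
import math
import sys

# Spacing of doubles in the range [2**n, 2**(n+1)) is 2**(n - 52).
assert math.ulp(2.0 ** 52) == 1.0   # only integers between 2**52 and 2**53
assert math.ulp(2.0 ** 53) == 2.0   # even numbers in the next range up
assert math.ulp(2.0 ** 51) == 0.5   # spacing 0.5 one range down
assert math.ulp(1.0) == 2.0 ** -52  # machine epsilon
assert sys.float_info.epsilon == 2.0 ** -52

# 17 significant decimal digits are enough to round-trip any double.
x = 0.1
assert float(format(x, ".17g")) == x
```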
All bit patterns are valid encodings. Except for the above exceptions, the entire double-precision number is described by: (−1)^sign × 2^(exponent − exponent bias) × 1.fraction. In the case of subnormals, the double-precision number is described by: (−1)^sign × 2^(1 − exponent bias) × 0.fraction. Because there have been many floating-point formats with no network standard representation for them, the XDR standard uses big-endian IEEE 754 as its representation. It may therefore appear strange that the widespread IEEE 754 floating-point standard does not specify endianness. Theoretically, this means that even standard IEEE floating-point data written by one machine might not be readable by another. One area of computing where this is an issue is parallel code running on GPUs; for example, when using NVIDIA's CUDA platform on video cards designed for gaming, double-precision performance is substantially lower than single-precision performance. Doubles are implemented in many programming languages in different ways, such as the following.
9.
Programming language
–
A programming language is a formal computer language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs to control the behavior of a machine or to express algorithms. From the early 1800s, programs were used to direct the behavior of machines such as Jacquard looms. Thousands of different programming languages have been created, mainly in the computer field. Many programming languages require computation to be specified in an imperative form, while other languages use other forms of program specification such as the declarative form. The description of a programming language is usually split into the two components of syntax and semantics. Some languages are defined by a specification document, while other languages have a dominant implementation that is treated as a reference. Some languages have both, with the basic language defined by a standard and extensions taken from the dominant implementation being common. A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term programming language to those languages that can express all possible algorithms. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language. In most practical contexts, a programming language involves a computer. Abstractions: programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. Expressive power: the theory of computation classifies languages by the computations they are capable of expressing; all Turing complete languages can implement the same set of algorithms.
ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages. Programming languages may, however, share the syntax with markup languages if a computational semantics is defined; XSLT, for example, is a Turing complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing complete subset. The term computer language is sometimes used interchangeably with programming language.
10.
Fortran
–
Fortran is a general-purpose, imperative programming language that is especially suited to numeric computation and scientific computing. It is a popular language for high-performance computing and is used for programs that benchmark and rank the world's fastest supercomputers. Fortran encompasses a lineage of versions, each of which evolved to add extensions to the language while usually retaining compatibility with prior versions. The names of earlier versions of the language through FORTRAN 77 were conventionally spelled in all-capitals. The capitalization has been dropped in referring to newer versions beginning with Fortran 90; the official language standards now refer to the language as Fortran rather than all-caps FORTRAN. In late 1953, John W. Backus submitted a proposal to his superiors at IBM to develop a more practical alternative to assembly language for programming their IBM 704 mainframe computer. Backus' historic FORTRAN team consisted of programmers Richard Goldberg, Sheldon F. Best, Harlan Herrick, Peter Sheridan, Roy Nutt, Robert Nelson, Irving Ziller, Lois Haibt, and David Sayre. Its concepts included easier entry of equations into a computer, an idea developed by J. Halcombe Laning and demonstrated in the Laning and Zierler system of 1952. A draft specification for The IBM Mathematical Formula Translating System was completed by mid-1954. The first manual for FORTRAN appeared in October 1956, with the first FORTRAN compiler delivered in April 1957. John Backus discussed the project during a 1979 interview with Think, the IBM employee magazine. The language was widely adopted by scientists for writing numerically intensive programs, which encouraged compiler writers to produce compilers that could generate faster and more efficient code. The inclusion of a complex number data type in the language made Fortran especially suited to technical applications such as electrical engineering.
By 1960, versions of FORTRAN were available for the IBM 709, 650, and 1620, among other computers. Significantly, the increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed. For these reasons, FORTRAN is considered to be the first widely used programming language supported across a variety of computer architectures. The arithmetic IF statement was similar to a three-way branch instruction on the IBM 704. However, the 704 branch instructions all contained only one destination address; an optimizing compiler like FORTRAN would most likely select the more compact and usually faster Transfer instructions instead of the Compare. Also, the Compare considered −0 and +0 to be different values, while the Transfer Zero instruction treated them as equal. The FREQUENCY statement in FORTRAN was used originally to give branch probabilities for the three branch cases of the arithmetic IF statement. The Monte Carlo technique it relied on is documented in Backus et al. Many years later, the FREQUENCY statement had no effect on the code and was treated as a comment statement, since the compilers no longer did this kind of compile-time simulation. A similar fate has befallen compiler hints in other programming languages. The first FORTRAN compiler reported diagnostic information by halting the program when an error was found and outputting an error code; that code could be looked up by the programmer in an error messages table in the operator's manual, providing a brief description of the problem. Before the development of disk files, text editors and terminals, programs were most often entered on a keypunch keyboard onto 80-column punched cards.
11.
GW-BASIC
–
GW-BASIC is a dialect of the BASIC programming language developed by Microsoft from BASICA, originally for Compaq. It is otherwise identical to Microsoft/IBM BASICA, but is a fully self-contained executable. It was bundled with MS-DOS operating systems on IBM PC compatibles by Microsoft. Microsoft also sold a BASIC compiler, BASCOM, compatible with GW-BASIC. The language is suitable for simple games, business programs and the like. Since it was included with most versions of MS-DOS, it was also a low-cost way for aspiring programmers to learn the fundamentals of computer programming. With the release of MS-DOS 5.0, GW-BASIC's place was taken by QBasic. IBM BASICA and GW-BASIC are direct ports of Microsoft's BASIC-80 designed for 8080/Z80 machines; MBASIC programs not using PEEK/POKE statements would run under GW-BASIC. BASICA added a number of features for the IBM PC such as sound and graphics. Microsoft did not offer a generic version of MS-DOS until v3.20 in 1986; before then, the operating system was available only through OEMs. Depending on the OEM, BASIC was distributed as either BASICA.EXE or GWBASIC.EXE; the former should not be confused with IBM BASICA, which always came as a .COM file. Some variants of BASIC had extra features to support a particular machine. The initial version of GW-BASIC was the one included with Compaq DOS 1.13 and was analogous to IBM BASICA 1.10. It used the CP/M-derived file control blocks for disk access and did not support subdirectories; later versions added this feature and improved graphics and other capabilities. GW-BASIC 3.20 added EGA graphics support and was in effect the last new version released before it was superseded by QBasic. Buyers of Hercules Graphics Cards received a version of GW-BASIC on the card's utility disk that was called HBASIC. GW-BASIC has a command line-based integrated development environment based on Dartmouth BASIC; using the cursor movement keys, any line displayed on screen can be edited.
It also includes function key shortcuts at the bottom of the screen. All program lines must be numbered; all non-numbered lines are considered to be commands in direct mode, to be executed immediately. Program source files are saved in a binary compressed format with tokens replacing commands. The GW-BASIC command-line environment has commands to RUN, LOAD, SAVE, or LIST the current program, or quit to the operating SYSTEM. There is little support for structured programming in GW-BASIC: all IF/THEN/ELSE conditional statements must be written on one line, although WHILE/WEND statements may group multiple lines, and functions can only be defined using the single-line DEF FNf(x)=<mathematical function of x> statement.
12.
C (programming language)
–
C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs, and used to re-implement the Unix operating system. C has been standardized by the American National Standards Institute (ANSI) since 1989. C is an imperative procedural language whose constructs map efficiently to typical machine instructions; therefore, C was useful for applications that had formerly been coded in assembly language. Despite its low-level capabilities, the language was designed to encourage cross-platform programming. A standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with few changes to its source code, and the language has become available on a very wide range of platforms. In C, all executable code is contained within subroutines, which are called functions. Function parameters are passed by value; pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements. The C language also exhibits the following characteristics. There is a small, fixed number of keywords, including a full set of flow-of-control primitives: for, if/else, while, switch. User-defined names are not distinguished from keywords by any kind of sigil. There are a large number of arithmetical and logical operators, such as +, +=, ++, &, ~, etc. More than one assignment may be performed in a single statement, and function return values can be ignored when not needed. Typing is static, but weakly enforced: all data has a type, but implicit conversions are possible. C has no "define" keyword; instead, a statement beginning with the name of a type is taken as a declaration. There is no "function" keyword; instead, a function is indicated by the parentheses of an argument list. User-defined and compound types are possible. Heterogeneous aggregate data types (struct) allow related data elements to be accessed and assigned as a unit. Array indexing is a secondary notation, defined in terms of pointer arithmetic.
Unlike structs, arrays are not first-class objects: they cannot be assigned or compared using single built-in operators. There is no "array" keyword, in use or definition; instead, square brackets indicate arrays syntactically, for example month[11]. Enumerated types are possible with the enum keyword; they are not tagged, and are freely interconvertible with integers. Strings are not a distinct data type, but are conventionally implemented as null-terminated arrays of characters. Low-level access to memory is possible by converting machine addresses to typed pointers
13.
C++
–
C++ is a general-purpose programming language. It has imperative, object-oriented and generic programming features, while also providing facilities for low-level memory manipulation. It was designed with a bias toward system programming and embedded, resource-constrained and large systems, with performance, efficiency and flexibility of use as its design highlights. C++ is a compiled language, with implementations available on many platforms and provided by various organizations, including the Free Software Foundation, LLVM, Microsoft and Intel. C++ is standardized by the International Organization for Standardization, with the latest standard version ratified and published by ISO in December 2014 as ISO/IEC 14882:2014. The C++ programming language was initially standardized in 1998 as ISO/IEC 14882:1998. The current C++14 standard supersedes these and C++11, with new features; the C++17 standard is due in 2017, with the draft largely implemented by some compilers already, and C++20 is the next planned standard thereafter. Many other programming languages have been influenced by C++, including C#, D and Java. In 1979, Bjarne Stroustrup, a Danish computer scientist, began work on "C with Classes"; the motivation for creating a new language originated from Stroustrup's experience in programming for his Ph.D. thesis. When Stroustrup started working at AT&T Bell Labs, he had the problem of analyzing the UNIX kernel with respect to distributed computing; remembering his Ph.D. experience, Stroustrup set out to enhance the C language with Simula-like features. C was chosen because it was general-purpose, fast and portable. As well as C's and Simula's influences, other languages also influenced C++, including ALGOL 68, Ada, CLU and ML. Initially, Stroustrup's "C with Classes" added features to the C compiler, Cpre, including classes, derived classes, strong typing and inlining; this was followed by the development of a standalone compiler for C++, Cfront. 
In 1985, the first edition of The C++ Programming Language was released; the first commercial implementation of C++ was released in October of the same year. In 1989, C++ 2.0 was released, followed by the second edition of The C++ Programming Language in 1991. New features in 2.0 included multiple inheritance, abstract classes, static member functions and const member functions. In 1990, The Annotated C++ Reference Manual was published; this work became the basis for the future standard. Later feature additions included templates, exceptions, namespaces and new casts. After a minor C++14 update released in December 2014, various new additions are planned for 2017 and 2020. According to Stroustrup, the name signifies the evolutionary nature of the changes from C. The name is credited to Rick Mascitti and was first used in December 1983; when Mascitti was questioned informally in 1992 about the naming, he indicated that it was given in a tongue-in-cheek spirit
14.
Haskell (programming language)
–
Haskell /ˈhæskəl/ is a standardized, general-purpose purely functional programming language, with non-strict semantics and strong static typing. It is named after logician Haskell Curry. The latest standard of Haskell is Haskell 2010; as of May 2016, a group is working on the next version. Haskell features a type system with type inference and lazy evaluation. Type classes first appeared in the Haskell programming language. Its main implementation is the Glasgow Haskell Compiler. Haskell is based on the semantics, but not the syntax, of the language Miranda. Haskell is used widely in academia and also used in industry. Following the release of Miranda by Research Software Ltd in 1985, by 1987 more than a dozen non-strict, purely functional programming languages existed. Of these, Miranda was used most widely, but it was proprietary software; a committee was therefore formed, whose purpose was to consolidate the existing functional languages into a common one that would serve as a basis for future research in functional-language design. The first version of Haskell was defined in 1990, and the committee's efforts resulted in a series of language definitions. The committee expressly welcomed creating extensions and variants of Haskell 98 via adding and incorporating experimental features. In February 1999, the Haskell 98 language standard was originally published as The Haskell 98 Report. In January 2003, a revised version was published as Haskell 98 Language and Libraries. The language continues to evolve rapidly, with the Glasgow Haskell Compiler implementation representing the current de facto standard. In early 2006, the process of defining a successor to the Haskell 98 standard, informally named Haskell Prime, began; this was intended to be an ongoing incremental process to revise the language definition, producing a new revision up to once per year. 
The first revision, named Haskell 2010, was announced in November 2009; it introduces the language-pragma syntax extension, which allows code to designate a Haskell source as Haskell 2010 or to require certain extensions to the Haskell language. Haskell features lazy evaluation, pattern matching, list comprehension and type classes. It is a purely functional language, which means that in general, functions in Haskell have no side effects. A distinct construct exists to represent side effects, orthogonal to the type of functions: a pure function may return a side effect which is subsequently executed, modeling the impure functions of other languages. Haskell has a strong, static type system based on Hindley–Milner type inference. Haskell's principal innovation in this area is to add type classes, originally conceived as a principled way to add overloading to the language, but since finding many more uses. The construct which represents side effects is an example of a monad: monads are a general framework which can model different kinds of computation, including error handling, nondeterminism, parsing, and software transactional memory. Monads are defined as ordinary datatypes, but Haskell provides some syntactic sugar for their use. Haskell has an open, published specification, and multiple implementations exist
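A minimal sketch of several features named above (lazy evaluation, pattern matching, a list comprehension, and IO as the side-effect construct); the definitions fibs, describe and evens are illustrative, not from any standard library:

```haskell
-- Lazy evaluation: an infinite list of Fibonacci numbers is fine,
-- because only the demanded prefix is ever computed.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- Pattern matching on the list constructors.
describe :: [a] -> String
describe []    = "empty"
describe [_]   = "one element"
describe (_:_) = "two or more"

-- A list comprehension.
evens :: [Int] -> [Int]
evens xs = [x | x <- xs, even x]

-- Side effects live in the IO monad; main is an IO action.
main :: IO ()
main = print (take 6 fibs, describe [1, 2, 3], evens [1 .. 10])
-- prints ([0,1,1,2,3,5],"two or more",[2,4,6,8,10])
```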
15.
Object Pascal
–
Object Pascal refers to a branch of object-oriented derivatives of Pascal, mostly known as the primary programming language of Embarcadero Delphi. Object Pascal is an extension of the Pascal language that was developed at Apple Computer by a team led by Larry Tesler in consultation with Niklaus Wirth. It is descended from an earlier object-oriented version of Pascal called Clascal. Object Pascal was needed in order to support MacApp, an expandable Macintosh application framework that would now be called a class library. The Object Pascal extensions, and MacApp itself, were developed by Barry Haynes, Ken Doyle and Larry Rosenstein; Larry Tesler oversaw the project, which began very early in 1985 and became a product in 1986. An Object Pascal extension was also implemented in the Think Pascal IDE, which included a compiler, an editor with syntax highlighting and checking, and a powerful debugger. Many developers preferred Think Pascal over Apple's implementation of Object Pascal because Think Pascal offered a tight integration of its tools. Development stopped after the 4.01 version because the company was bought by Symantec, and the developers then left the project. Apple dropped support for Object Pascal when it moved from Motorola 68K chips to IBM's PowerPC architecture in 1994; MacApp 3.0, for this platform, was re-written in C++. In 1986, Borland introduced similar extensions, also called Object Pascal, to the Turbo Pascal product for the Macintosh, and in 1989 for Turbo Pascal 5.5 for DOS. When Borland refocused from DOS to Windows in 1994, they created a successor to Turbo Pascal, called Delphi; development of Delphi started in 1993 and Delphi 1.0 was officially released in the United States on 14 February 1995. Delphi's extensions were inspired by the ISO working draft for object-oriented extensions, and the Delphi language has continued to evolve over the years to support constructs such as dynamic arrays, generics and anonymous methods. 
Borland used the name Object Pascal for the language in the first versions of Delphi. However, compilers that claim to be compatible with Object Pascal are often actually trying to be compatible with Delphi source code; because Delphi is trademarked, compatible compilers have continued using the name Object Pascal. The Oxygene programming language developed by RemObjects Software targets the Common Language Infrastructure. The first version of Free Pascal for the iPhone SDK 2.x was announced on January 17, 2009, and there is now support for the ARM ISA as well. The Smart Pascal programming language targets JavaScript/ECMAScript and is used in Smart Mobile Studio, written by Jon Lennart Aasenden and published by Optimale Systemer. The language greatly simplifies HTML5 development through OOP and RAD approaches. Smart Pascal integrates tightly with established technologies such as node.js, Embarcadero DataSnap and RemObjects SDK to deliver high-performance client/server web applications, and allows for the creation of visual components and re-usable libraries. Smart Pascal introduces true inheritance, classes, partial classes and interfaces. MIDletPascal is aimed at the Java byte-code platform
16.
Delphi (programming language)
–
Embarcadero Delphi is a programming language and software development kit for desktop, mobile, web, and console applications. Delphi's compilers use their own Object Pascal dialect of Pascal and generate code for several platforms: Windows, OS X and iOS. It is not unusual for a Delphi project of a million lines to compile in a few seconds; one benchmark gave 170,000 lines per second. It is under active development, with releases every six months and new platforms being added approximately every second release. Delphi was originally developed by Borland as an application development tool for Windows, and as the successor of Turbo Pascal. In 2007, the products were released jointly as RAD Studio; RAD Studio is a shared host for Delphi and C++Builder, and can be purchased with either or both. In 2006, Borland's developer tools section was transferred from Borland to a wholly owned subsidiary known as CodeGear, and in 2015 Embarcadero was purchased by Idera, but the Embarcadero mark was retained for the developer tools division. Among the features supporting RAD are the application framework and visual window layout designer. Delphi uses the Pascal-based programming language called Object Pascal introduced by Borland, and compiles Delphi source code into native x86 code. It includes the VCL and support for COM independent interfaces with reference-counted class implementations; interface implementations can be delegated to fields or properties of classes. Message handlers are implemented by tagging a method of a class with the constant of the message to handle. Database connectivity is well supported, and Delphi supplies several database components; the VCL includes many database-aware and database-access components. Later versions have included upgraded and enhanced runtime library routines provided by the community group FastCode. Delphi is a strongly typed high-level programming language, intended to be easy to use and originally based on the earlier Object Pascal language. 
Turbo Pascal and its descendants, including Delphi, support access to hardware and low-level programming, with the facility to incorporate code written in assembly language. Delphi's object orientation features only class- and interface-based polymorphism. There are dedicated reference-counted string types, and also null-terminated strings; strings can be concatenated by using the + operator, rather than using functions. For dedicated string types, Delphi handles memory management without programmer intervention, and since Borland Developer Studio 2006 there are functions to locate memory leaks. The Delphi products all ship with a Visual Component Library (VCL), including most of its source code. Third-party components and tools to enhance the IDE or to support other Delphi-related development tasks are available, some free of charge. The IDE includes a GUI for localization and translation of created programs that may be deployed to a translator. The VCL framework maintains a high level of source compatibility between versions, which simplifies updating existing source code to a newer Delphi version; third-party libraries may need updates from the vendor but, if source code is supplied, recompilation with the newer version may suffice. The VCL was an early adopter of dependency injection or inversion of control; it uses a re-usable component model, and with class helpers new functionality can be introduced to core RTL and VCL classes without changing the original source code of the RTL or VCL
17.
Visual Basic
–
Microsoft intended Visual Basic to be relatively easy to learn and use. A programmer can create an application using the components provided by the Visual Basic program itself, and over time the community of programmers developed third-party components. Programs written in Visual Basic can also use the Windows API. The final release was version 6 in 1998; on April 8, 2008, Microsoft stopped supporting the Visual Basic 6.0 IDE. In 2014, some software developers still preferred Visual Basic 6.0 over its successor, Visual Basic .NET, and some lobbied for a new version of Visual Basic 6.0. In 2016, Visual Basic 6.0 won the technical impact award at The 19th Annual D.I.C.E. Awards. A dialect of Visual Basic, Visual Basic for Applications, is used as a macro or scripting language within several Microsoft applications. Like the BASIC programming language, Visual Basic was designed for ease of learning: programmers can create both simple and complex GUI applications. Since VB defines default attributes and actions for the components, a programmer can develop a simple program without writing much code. Programs built with earlier versions suffered performance problems, but faster computers and native-code compilation have made this less of an issue. Though VB programs can be compiled into native code executables from version 5 on, they still require the presence of around 1 MB of runtime libraries. Core runtime libraries are included by default in Windows 2000 and later; earlier versions of Windows require that the runtime libraries be distributed with the executable. Forms are created using drag-and-drop techniques; a tool is used to place controls on the form. Controls have attributes and event handlers associated with them; default values are provided when the control is created, but may be changed by the programmer. Many attribute values can be modified during run time based on user actions or changes in the environment. 
For example, code can be inserted into the form-resize event handler to reposition a control so that it remains centered on the form, expands to fill the form, etc. Visual Basic can create executables, ActiveX controls, or DLL files. Dialog boxes with less functionality can be used to provide pop-up capabilities. Controls provide the basic functionality of the application, while programmers can insert additional logic within the appropriate event handlers. For example, a drop-down combination box automatically displays a list; when the user selects an element, an event handler is called that executes code that the programmer created to perform the action for that list item. Alternatively, a Visual Basic component can have no user interface; this allows for server-side processing or an add-in module
18.
MATLAB
–
MATLAB is a multi-paradigm numerical computing environment and fourth-generation programming language. Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, and an additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and embedded systems. In 2004, MATLAB had around one million users across industry and academia. MATLAB users come from various backgrounds of engineering, science, and economics. Cleve Moler, the chairman of the computer science department at the University of New Mexico, started developing MATLAB in the late 1970s. He designed it to give his students access to LINPACK and EISPACK without them having to learn Fortran, and it soon spread to other universities and found a strong audience within the applied mathematics community. Jack Little, an engineer, was exposed to it during a visit Moler made to Stanford University in 1983; recognizing its commercial potential, he joined with Moler and Steve Bangert. They rewrote MATLAB in C and founded MathWorks in 1984 to continue its development; these rewritten libraries were known as JACKPAC. In 2000, MATLAB was rewritten to use a newer set of libraries for matrix manipulation, LAPACK. MATLAB was first adopted by researchers and practitioners in control engineering, Little's specialty, and is now also used in education, in particular the teaching of linear algebra and numerical analysis, and is popular amongst scientists involved in image processing. The MATLAB application is built around the MATLAB scripting language; common usage involves using the Command Window as an interactive mathematical shell or executing text files containing MATLAB code. Variables are defined using the assignment operator, =. MATLAB is a weakly typed programming language because types are implicitly converted. It is an inferred typed language because variables can be assigned without declaring their type, except if they are to be treated as symbolic objects. 
Values can come from constants or from computation involving values of other variables. A simple array is defined using the colon syntax: init:increment:terminator. For instance, array = 1:2:9 defines a variable named array which is an array consisting of the values 1, 3, 5, 7 and 9: the array starts at 1, increments with each step from the previous value by 2, and stops once it reaches 9. The increment value can be left out of this syntax, in which case it defaults to 1: ari = 1:5 assigns to the variable named ari an array with the values 1, 2, 3, 4 and 5. Indexing is one-based, which is the convention for matrices in mathematics, although not for some programming languages such as C and C++. Matrices can be defined by separating the elements of a row with blank space or comma and using a semicolon to terminate each row; the list of elements should be surrounded by square brackets. Parentheses are used to access elements and subarrays; sets of indices can be specified by expressions such as 2:4, which evaluates to [2, 3, 4]
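The colon syntax and indexing rules described above can be sketched as follows; this fragment should behave the same in MATLAB and GNU Octave, and the variable names are illustrative:

```matlab
% Colon syntax: init:increment:terminator
array = 1:2:9;        % [1 3 5 7 9]
ari   = 1:5;          % [1 2 3 4 5], default increment of 1

% A matrix: spaces or commas separate columns, semicolons separate rows.
A = [1 2 3; 4 5 6; 7 8 9];

% One-based indexing with parentheses; 2:4 is the index set [2 3 4].
A(1, 2)               % ans = 2
A(2, :)               % the whole second row: [4 5 6]
array(2:4)            % [3 5 7]
```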
19.
Python (programming language)
–
Python is a widely used high-level programming language for general-purpose programming, created by Guido van Rossum and first released in 1991. The language provides constructs intended to enable writing clear programs on both a small and large scale, and it has a large and comprehensive standard library. Python interpreters are available for many operating systems, allowing Python code to run on a wide variety of systems. CPython, the reference implementation of Python, is open-source software and has a community-based development model; it is managed by the non-profit Python Software Foundation. About the origin of Python, Van Rossum wrote in 1996: "Over six years ago, in December 1989, I was looking for a hobby programming project that would keep me occupied during the week around Christmas. My office would be closed, but I had a home computer, and not much else on my hands. I decided to write an interpreter for the new scripting language I had been thinking about lately. I chose Python as a working title for the project, being in a slightly irreverent mood." Python 2.0 was released on 16 October 2000 and had major new features, including a cycle-detecting garbage collector. With this release the development process was changed and became more transparent. Python 3.0, a major, backwards-incompatible release, was released on 3 December 2008 after a long period of testing; many of its features have been backported to the backwards-compatible Python 2.6.x and 2.7.x version series. The end-of-life date for Python 2.7 was initially set at 2015. Python is a multi-paradigm language, and many other paradigms are supported via extensions, including design by contract and logic programming. Python uses dynamic typing and a mix of reference counting and a garbage collector for memory management. An important feature of Python is dynamic name resolution, which binds method and variable names during program execution. The design of Python offers some support for functional programming in the Lisp tradition. 
The language has map, reduce and filter functions, as well as list comprehensions, dictionaries, and sets; the standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python can also be embedded in existing applications that need a programmable interface. While offering choice in coding methodology, the Python philosophy rejects exuberant syntax, such as in Perl, in favor of a sparser, less-cluttered grammar. As Alex Martelli put it: "To describe something as clever is not considered a compliment in the Python culture." Python's philosophy rejects the Perl "there is more than one way to do it" approach to language design in favor of "there should be one—and preferably only one—obvious way to do it". Python's developers strive to avoid premature optimization, and moreover reject patches to non-critical parts of CPython that would offer an increase in speed at the cost of clarity
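A small sketch of the functional tools named above (map, filter, a list comprehension, and reduce, which lives in the functools module in Python 3); the variable names are illustrative:

```python
from functools import reduce

nums = [1, 2, 3, 4, 5]

# map and filter return lazy iterators in Python 3.
squares = list(map(lambda x: x * x, nums))          # [1, 4, 9, 16, 25]
evens = list(filter(lambda x: x % 2 == 0, nums))    # [2, 4]

# The same filtering as a list comprehension, often considered the
# "one obvious way" in Python style.
evens_comp = [x for x in nums if x % 2 == 0]

# reduce folds a sequence down to a single value.
total = reduce(lambda acc, x: acc + x, nums, 0)     # 15
```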
20.
Ruby (programming language)
–
Ruby is a dynamic, reflective, object-oriented, general-purpose programming language. It was designed and developed in the mid-1990s by Yukihiro "Matz" Matsumoto in Japan. According to its creator, Ruby was influenced by Perl, Smalltalk, Eiffel, Ada, and Lisp. It supports multiple programming paradigms, including functional, object-oriented and imperative, and it also has a dynamic type system and automatic memory management. Ruby was conceived on February 24, 1993. Matsumoto later described his motivation: "I knew Perl, but I didn't like it really, because it had the smell of a toy language. The object-oriented language seemed very promising, but I didn't like it, because I didn't think it was a true object-oriented language — OO features appeared to be add-on to the language. As a language maniac and OO fan for 15 years, I really wanted a genuine object-oriented, easy-to-use scripting language. I looked for but couldn't find one." The name Ruby originated during a chat session between Matsumoto and Keiju Ishitsuka on February 24, 1993, before any code had been written for the language. Initially two names were proposed, Coral and Ruby; Matsumoto chose the latter in a later e-mail to Ishitsuka. Matsumoto later noted a factor in choosing the name Ruby: it was the birthstone of one of his colleagues. The first public release, Ruby 0.95, was announced on Japanese domestic newsgroups on December 21, 1995. Subsequently, three more versions of Ruby were released in two days. The release coincided with the launch of the Japanese-language ruby-list mailing list. Later, Matsumoto was hired by netlab.jp to work on Ruby as a full-time developer. In 1998, the Ruby Application Archive was launched by Matsumoto. In 1999, the first English-language mailing list, ruby-talk, began, which signaled a growing interest in the language outside Japan. In this same year, Matsumoto and Keiju Ishitsuka wrote the first book on Ruby, The Object-oriented Scripting Language Ruby; it would be followed in the early 2000s by around 20 books on Ruby published in Japanese. 
By 2000, Ruby was more popular than Python in Japan. In September 2000, the first English-language book, Programming Ruby, was printed; it was later freely released to the public, further widening the adoption of Ruby amongst English speakers. In early 2002, the English-language ruby-talk mailing list was receiving more messages than the Japanese-language ruby-list. Ruby 1.8 was initially released in August 2003, was stable for a long time, and was retired in June 2013. Although deprecated, there is still code based on it. Ruby 1.8 is only partially compatible with Ruby 1.9. Ruby 1.8 has been the subject of several industry standards
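A minimal sketch of the dynamic, object-oriented character described above (blocks, duck typing, and open classes); the method names shout and double are our own:

```ruby
# In Ruby everything is an object, including integer literals.
puts 3.times.map { |i| i * 2 }.inspect   # blocks and method chaining

# Dynamic typing: a method works with any object that responds to
# the messages it sends (duck typing).
def shout(thing)
  thing.to_s.upcase + "!"
end

puts shout("hello")   # HELLO!
puts shout(42)        # 42!

# Classes are open: existing classes can be extended at runtime.
class Integer
  def double
    self * 2
  end
end

puts 21.double        # 42
```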
21.
PHP
–
PHP is a server-side scripting language designed primarily for web development but also used as a general-purpose programming language. Originally created by Rasmus Lerdorf in 1994, the PHP reference implementation is now produced by The PHP Development Team. PHP originally stood for Personal Home Page, but it now stands for the recursive acronym PHP: Hypertext Preprocessor. PHP code may be embedded into HTML or HTML5 markup, or it can be used in combination with various web template systems, web content management systems and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface (CGI) executable. The web server combines the results of the interpreted and executed PHP code with the generated web page. PHP code may also be executed with a command-line interface and can be used to implement standalone graphical applications. The standard PHP interpreter, powered by the Zend Engine, is free software released under the PHP License. PHP has been widely ported and can be deployed on most web servers on almost every operating system and platform, free of charge. The PHP language evolved without a formal specification or standard until 2014; since then, work has gone on to create a formal PHP specification. PHP development began when Rasmus Lerdorf wrote several Common Gateway Interface programs in C, which he used to maintain his personal homepage. He extended them to work with web forms and to communicate with databases; this implementation, PHP/FI, could be used to build simple, dynamic web applications. This early release already had the basic functionality that PHP has as of 2013: it included Perl-like variables, form handling, and the ability to embed HTML, and the syntax resembled that of Perl but was simpler, more limited and less consistent. 
A development team began to form and, after months of work and beta testing, officially released PHP/FI 2. The fact that PHP lacked an original overall design but instead developed organically has led to inconsistent naming of functions and inconsistent ordering of their parameters. Zeev Suraski and Andi Gutmans rewrote the parser in 1997; this formed the base of PHP 3, and the name was changed to the recursive acronym PHP. Afterwards, public testing of PHP 3 began, and the official launch came in June 1998. Suraski and Gutmans then started a new rewrite of PHP's core, and they also founded Zend Technologies in Ramat Gan, Israel. On May 22, 2000, PHP 4, powered by the Zend Engine 1.0, was released; as of August 2008 this branch had reached version 4.4.9. PHP 4 is no longer under development, nor will any security updates be released. On July 13, 2004, PHP 5 was released, powered by the new Zend Engine II. PHP 5 included new features such as improved support for object-oriented programming and the PHP Data Objects extension. In 2008, PHP 5 became the only stable version under development
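A minimal sketch of the embedding style described above: PHP code placed between <?php ... ?> tags inside ordinary HTML, with each block replaced by its output when the interpreter processes the page (the file name is illustrative):

```php
<!-- index.php: the server runs the PHP blocks and sends the
     resulting HTML to the browser. -->
<!DOCTYPE html>
<html>
  <body>
    <h1><?php echo "Hello from PHP"; ?></h1>
    <p>2 + 3 = <?php echo 2 + 3; ?></p>
  </body>
</html>
```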
22.
OCaml
–
A member of the ML language family, OCaml extends the core Caml language with object-oriented programming constructs. OCaml's toolset includes an interactive top-level interpreter, a compiler, a reversible debugger and a package manager. OCaml is the successor to Caml Light; the acronym CAML originally stood for Categorical Abstract Machine Language, although OCaml omits this abstract machine. OCaml is a free and open-source software project managed and principally maintained by the French Institute for Research in Computer Science and Automation (INRIA). In the early 2000s, many new languages adopted elements from OCaml, most notably F# and Scala. ML-derived languages are best known for their static type systems and type-inferring compilers. OCaml unifies functional, imperative, and object-oriented programming under an ML-like type system; thus, programmers need not be highly familiar with the pure functional language paradigm to use OCaml. OCaml's static type system can help eliminate problems at runtime; however, it also forces the programmer to conform to the constraints of the type system, which can require careful thought and close attention. A type-inferring compiler greatly reduces the need for manual type annotations; for example, the data type of variables and the signature of functions usually need not be declared explicitly, as they do in Java. Nonetheless, effective use of OCaml's type system can require some sophistication on the part of a programmer. OCaml is perhaps most distinguished from other languages with origins in academia by its emphasis on performance: its static type system obviates most runtime safety checks, except for a few type-unsafe features, and these are rare enough that avoiding them is possible in practice. Aside from type-checking overhead, functional programming languages are, in general, challenging to compile to efficient machine language code. Xavier Leroy has stated that OCaml delivers at least 50% of the performance of a decent C compiler, but a direct comparison is impossible. 
Some functions in the OCaml standard library are implemented with faster algorithms than equivalent functions in the standard libraries of other languages. OCaml is notable for extending ML-style type inference to an object system in a general-purpose language. A foreign function interface for linking to C primitives is provided, and portability is achieved through native code generation support for major architectures: IA-32, X86-64, Power, SPARC, ARM, and ARM64. OCaml bytecode and native code programs can be written in a multithreaded style; however, because the garbage collector of the INRIA OCaml system is not designed for concurrency, symmetric multiprocessing is unsupported, and OCaml threads in the same process execute by time sharing only. There are, however, several libraries for distributed computing, such as Functory and ocamlnet/Plasma. Ocamlcc is a compiler from OCaml to C, to complement the native compiler for unsupported platforms. OCamlJava, developed by INRIA, is a compiler from OCaml to the Java virtual machine, and OCaPic, developed by Lip6, is a compiler from OCaml to PIC microcontrollers
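A short sketch of the unification described above, with inferred types and functional, imperative and object-oriented styles side by side; all names are illustrative:

```ocaml
(* Type inference: no annotations needed; square is inferred
   to have type int -> int. *)
let square x = x * x

(* Functional style: a fold over a list. *)
let sum xs = List.fold_left (+) 0 xs

(* Imperative style: a mutable reference and a for loop. *)
let sum_imperative n =
  let total = ref 0 in
  for i = 1 to n do
    total := !total + i
  done;
  !total

(* Object-oriented style: an immediate object with an inferred type. *)
let counter =
  object
    val mutable n = 0
    method incr = n <- n + 1
    method value = n
  end

let () =
  counter#incr;
  Printf.printf "%d %d %d %d\n"
    (square 5) (sum [1; 2; 3]) (sum_imperative 4) counter#value
(* prints "25 6 10 1" *)
```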
23.
GNU Octave
–
GNU Octave is software featuring a high-level programming language, primarily intended for numerical computations. Octave helps in solving linear and nonlinear problems numerically, and in performing other numerical experiments using a language that is largely compatible with Matlab. It may also be used as a batch-oriented language. Since it is part of the GNU Project, it is free software under the terms of the GNU General Public License. Octave is one of the major free alternatives to Matlab, others being FreeMat and Scilab; Scilab, however, puts less emphasis on syntactic compatibility with Matlab than Octave does. The project was conceived around 1988; at first it was intended to be a companion to a chemical reactor design course. Real development was started by John W. Eaton in 1992. The first alpha release dates back to January 4, 1993, and version 1.0 was released on February 17, 1994. Version 4.0.0 was released on May 29, 2015. The program is named after Octave Levenspiel, a former professor of the principal author; Levenspiel is known for his ability to perform quick back-of-the-envelope calculations. In addition to use on desktops for personal scientific computing, Octave is used in academia and industry. For example, Octave was used on a parallel computer at the Pittsburgh Supercomputing Center to find vulnerabilities related to guessing social security numbers. Octave is written in C++ using the C++ standard library. Octave uses an interpreter to execute the Octave scripting language, and is extensible using dynamically loadable modules. The Octave interpreter has an OpenGL-based graphics engine to create plots, graphs and charts and to save or print them; alternatively, gnuplot can be used for the same purpose. Octave includes a graphical user interface in addition to the traditional command-line interface. The Octave language is an interpreted programming language. It is a structured programming language, similar to C, and supports many common C standard library functions. 
However, it does not support passing arguments by reference. Octave programs consist of a list of function calls or a script. The syntax is matrix-based and provides functions for matrix operations. It supports various data structures and allows object-oriented programming. Its syntax is very similar to Matlab's, and careful programming of a script will allow it to run on both Octave and Matlab.
24.
PostScript
–
PostScript is a page description language used in the electronic publishing and desktop publishing business. It is a dynamically typed, concatenative programming language and was created at Adobe Systems by John Warnock, Charles Geschke, Doug Brotz and Ed Taft. The concepts of the PostScript language were seeded in 1976 when John Warnock was working at Evans & Sutherland; at that time he was developing an interpreter for a large three-dimensional graphics database of New York Harbor. Warnock conceived the Design System language to process the graphics. Concurrently, researchers at Xerox PARC had developed the first laser printer and had recognized the need for a standard means of defining page images. In 1975-76 Bob Sproull and William Newman developed the Press format, but Press, a data format rather than a language, lacked flexibility, and PARC mounted the Interpress effort to create a successor. In 1978 Evans & Sutherland asked Warnock to move from the San Francisco Bay Area to their headquarters in Utah. He instead joined Xerox PARC to work with Martin Newell; they rewrote Design System to create J & M, which was used for VLSI design and the investigation of type and graphics printing. This work later evolved and expanded into the Interpress language. Warnock left with Chuck Geschke and founded Adobe Systems in December 1982. They, together with Doug Brotz, Ed Taft and Bill Paxton, created a language similar to Interpress, called PostScript. At about this time they were visited by Steve Jobs, who urged them to adapt PostScript to be used as the language for driving laser printers. In March 1985, the Apple LaserWriter became the first printer to ship with PostScript, and the combination of technical merits and widespread availability made PostScript a language of choice for graphical output in printing applications. For a time an interpreter for the PostScript language was a common component of laser printers. 
However, the cost of implementation was high: computers output raw PS code that would be interpreted by the printer into a raster image at the printer's natural resolution. This required high-performance microprocessors and ample memory; the LaserWriter used a 12 MHz Motorola 68000, making it faster than any of the Macintosh computers to which it attached. When the laser printer engines themselves cost over a thousand dollars, the added cost of PS was marginal. The first version of the PostScript language was released to the market in 1984; the term Level 1 was added when Level 2 was introduced. PostScript 3 came at the end of 1997 and, along with many new dictionary-based versions of older operators, introduced better color handling and new filters. Prior to the introduction of PostScript, printers were designed to print character output given the text, typically in ASCII, as input. This changed to some degree with the increasing popularity of dot matrix printers, on which characters were drawn as a series of dots; dot matrix printers also introduced the ability to print raster graphics.
25.
Embedded system
–
An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device, often including hardware and mechanical parts. Embedded systems control many devices in common use today; ninety-eight percent of all microprocessors are manufactured as components of embedded systems. Examples of properties of typical embedded computers, when compared with their general-purpose counterparts, are low power consumption, small size, rugged operating ranges, and low per-unit cost. This comes at the price of limited processing resources, which make them more difficult to program. For example, intelligent techniques can be designed to manage the power consumption of embedded systems. Modern embedded systems are often based on microcontrollers, but ordinary microprocessors are also common. In either case, the processor used may range from general purpose to those specialised in a certain class of computations. A common standard class of dedicated processors is the digital signal processor. Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce the size and cost of the product and increase its reliability. Some embedded systems are mass-produced, benefiting from economies of scale. Complexity varies from low, with a single microcontroller chip, to very high, with multiple units, peripherals and networks mounted inside a large chassis or enclosure. One of the very first recognizably modern embedded systems was the Apollo Guidance Computer; an early mass-produced embedded system was the Autonetics D-17 guidance computer for the Minuteman missile, released in 1961. When the Minuteman II went into production in 1966, the D-17 was replaced with a new computer that represented the first high-volume use of integrated circuits. Since these early applications in the 1960s, embedded systems have come down in price and there has been a rise in processing power. 
An early microprocessor, the Intel 4004, was designed for calculators and other small systems but still required external memory. By the early 1980s, memory, input and output system components had been integrated into the same chip as the processor, forming a microcontroller. Microcontrollers find applications where a general-purpose computer would be too costly. A comparatively low-cost microcontroller may be programmed to fulfill the same role as a large number of separate components.
26.
Half-precision floating-point format
–
In computing, half precision is a binary floating-point computer number format that occupies 16 bits in computer memory. In IEEE 754-2008 the 16-bit base-2 format is referred to as binary16. It is intended for storage of floating-point values in applications where higher precision is not needed. Nvidia and Microsoft defined the half datatype in the Cg language, released in early 2002, and implemented it in silicon in the GeForce FX, released in late 2002. The hardware-accelerated programmable shading group led by John Airey at SGI invented the s10e5 data type in 1997 as part of the design effort. This is described in a SIGGRAPH 2000 paper and further documented in US patent 7518615. The format is used in several computer graphics environments including OpenEXR, JPEG XR, OpenGL, Cg, and D3DX. The advantage over 8-bit or 16-bit binary integers is that the increased dynamic range allows for more detail to be preserved in highlights. The advantage over the 32-bit single-precision binary format is that it halves the storage requirement. The F16C extension allows x86 processors to convert half-precision floats to and from single-precision floats. The format is written with an implicit lead bit with value 1 unless the exponent is stored with all zeros; thus only 10 bits of the significand appear in the memory format, but the total precision is 11 bits. In IEEE 754 parlance, there are 10 bits of significand. The half-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 15, also known as the exponent bias in the IEEE 754 standard. The stored exponents 00000₂ and 11111₂ are interpreted specially. The minimum strictly positive (subnormal) value is 2^−24 ≈ 5.96 × 10^−8. The minimum positive normal value is 2^−14 ≈ 6.10 × 10^−5. The maximum representable value is (2 − 2^−10) × 2^15 = 65504. These examples are given in the bit representation of the floating-point value, including the sign bit, exponent, and significand; in the rounding example, the bits beyond the rounding point are 0101, which is less than 1/2 of a unit in the last place. ARM processors support an alternative half-precision format, which does away with the special case for an exponent value of 31. It is almost identical to the IEEE format, but there is no encoding for infinities or NaNs; instead, an exponent of 31 encodes normalized numbers in the range 65536 to 131008.
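The numerical limits quoted above can be checked in Python, whose struct module supports the IEEE 754 binary16 interchange format through the 'e' format character. The helper names below are illustrative, not part of any standard API:

```python
import struct

def half_bits(x):
    """Return the 16-bit pattern of x encoded as IEEE 754 binary16."""
    return struct.unpack('<H', struct.pack('<e', x))[0]

def half_value(bits):
    """Decode a 16-bit pattern as binary16 and return a Python float."""
    return struct.unpack('<e', struct.pack('<H', bits))[0]

# Maximum representable value: (2 - 2**-10) * 2**15 = 65504
print(hex(half_bits(65504.0)))         # 0x7bff (sign 0, exponent 11110, significand all ones)
# Minimum positive normal value: 2**-14
print(half_value(0x0400) == 2.0**-14)  # True
# Minimum strictly positive (subnormal) value: 2**-24
print(half_value(0x0001) == 2.0**-24)  # True
```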
27.
Decimal32 floating-point format
–
In computing, decimal32 is a decimal floating-point computer numbering format that occupies 4 bytes in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial applications. Like the binary16 format, it is intended for memory-saving storage. Decimal32 supports 7 decimal digits of significand and an exponent range of −95 to +96. Because the significand is not normalized, most values with less than 7 significant digits have multiple possible representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal32 floating point is a relatively new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as in ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative representation methods for decimal32 values, and the standard does not specify how to signify which representation is used. In one representation method, based on binary integer decimal, the significand is represented as a binary coded positive integer. The other, alternative, representation method is based on densely packed decimal for most of the significand. Both alternatives provide exactly the same range of representable numbers: 7 digits of significand and 3×2^6 = 192 possible exponent values. The remaining combinations encode infinities and NaNs. The binary-integer-decimal format uses a binary significand from 0 to 10^7−1 = 9999999 = 98967F₁₆ = 100110001001011001111111₂. The encoding can represent binary significands up to 10×2^20−1 = 10485759 = 9FFFFF₁₆ = 100111111111111111111111₂, but values larger than 10^7−1 are illegal. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher. If the 2 bits after the sign bit are 11, then the 8-bit exponent field is shifted 2 bits to the right, and in this case there is an implicit leading 3-bit sequence 100 in the true significand; compare the implicit 1 in the significand of normal values in the binary formats. 
Note also that the 2-bit sequences 00, 01, or 10 after the sign bit are part of the exponent field. In the densely packed decimal representation, the leading digit is between 0 and 9, and the rest of the significand uses the densely packed decimal (DPD) encoding. The six bits after that are the exponent continuation field, providing the less-significant bits of the exponent, and the last 20 bits are the significand continuation field, consisting of two 10-bit declets. Each declet encodes three decimal digits using the DPD encoding. The DPD/3BCD transcoding for the declets is given by a lookup table, where b9…b0 are the bits of the DPD encoding and d2…d0 are the three BCD digits. The 8 decimal values whose digits are all 8s or 9s have four codings each. The bits marked x in that table are ignored on input, but will always be 0 in computed results.
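The unnormalized significand described above can be illustrated without committing to either bit-level encoding: a decimal32 value is simply a sign, a 7-digit coefficient, and an exponent. The following Python sketch (the function name and range checks are illustrative) shows why one number has several representations and evaluates the largest finite decimal32 value:

```python
from decimal import Decimal

def decimal32_value(sign, coefficient, exponent):
    """Reconstruct the numeric value of a decimal32 (sign, coefficient,
    exponent) triple. The coefficient is an unnormalized 7-digit integer;
    the integer-coefficient exponent range -101..90 corresponds to the
    -95..+96 range quoted for a normalized d.dddddd view."""
    assert 0 <= coefficient <= 9999999
    assert -101 <= exponent <= 90
    return Decimal(coefficient).scaleb(exponent) * (-1) ** sign

# One value, three encodings: 1e2 == 10e1 == 100e0
print(decimal32_value(0, 1, 2) == decimal32_value(0, 10, 1) == decimal32_value(0, 100, 0))  # True
# Largest finite decimal32 value: 9999999 * 10**90
print(decimal32_value(0, 9999999, 90))  # 9.999999E+96
```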
28.
Decimal64 floating-point format
–
In computing, decimal64 is a decimal floating-point computer numbering format that occupies 8 bytes in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial applications. Decimal64 supports 16 decimal digits of significand and an exponent range of −383 to +384, i.e. ±0.000000000000000×10^−383 to ±9.999999999999999×10^384. In contrast, the corresponding binary format, which is the most commonly used type, has an approximate range of ±0.000000000000001×10^−308 to ±1.797693134862315×10^308. Because the significand is not normalized, most values with less than 16 significant digits have multiple representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal64 floating point is a relatively new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as in ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative representation methods for decimal64 values. Both alternatives provide exactly the same range of representable numbers: 16 digits of significand. In both cases, the most significant 4 bits of the significand are combined with the most significant 2 bits of the exponent to use 30 of the 32 possible values of a 5-bit field; the remaining combinations encode infinities and NaNs. In the cases of Infinity and NaN, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to Infinities or NaNs by filling it with a single byte value. The binary-integer-decimal format uses a binary significand from 0 to 10^16−1 = 9999999999999999 = 2386F26FC0FFFF₁₆ = 100011100001101111001001101111110000001111111111111111₂. The encoding, completely stored on 64 bits, can represent binary significands up to 10×2^50−1 = 11258999068426239 = 27FFFFFFFFFFFF₁₆, but values larger than 10^16−1 are illegal. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher. If the 2 bits after the sign bit are 11, then the 10-bit exponent field is shifted 2 bits to the right. 
In this case there is an implicit leading 3-bit sequence 100 for the most significant bits of the true significand; compare the implicit 1-bit prefix in the significand of normal values in the binary formats. Note also that the 2-bit sequences 00, 01, or 10 after the sign bit are part of the exponent field. Note that the leading bits of the significand field do not encode the most significant decimal digit. The highest valid significand is 9999999999999999, whose 51 stored bits are 011100001101111001001101111110000001111111111111111₂, the leading 100 being implicit. In the densely packed decimal representation, the leading digit is between 0 and 9, and the rest of the significand uses the densely packed decimal encoding. The eight bits after that are the exponent continuation field, providing the less-significant bits of the exponent, and the last 50 bits are the significand continuation field, consisting of five 10-bit declets.
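The field widths quoted above can be checked with a little integer arithmetic; the sketch below (plain Python, no decimal64 library involved) confirms that the largest 16-digit coefficient occupies 54 bits, whose top three are the implicit 100:

```python
# Largest valid decimal64 coefficient: sixteen decimal nines.
max_coeff = 10**16 - 1
print(hex(max_coeff))          # 0x2386f26fc0ffff
print(max_coeff.bit_length())  # 54 bits in total ...
print(bin(max_coeff)[2:5])     # '100': the top 3 bits are implied by the
                               # combination field, so only 51 bits are stored
# Largest coefficient the 51-bit field with implicit leading 100 could
# physically hold (declared illegal, since it exceeds 10**16 - 1):
print(10 * 2**50 - 1)          # 11258999068426239 (= 27FFFFFFFFFFFF in hex)
```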
29.
Quadruple-precision floating-point format
–
That kind of gradual evolution towards wider precision was already in view when the IEEE 754 Standard for Floating-Point Arithmetic was framed. In IEEE 754-2008 the 128-bit base-2 format is referred to as binary128. The IEEE 754 standard specifies a binary128 as having: a sign bit (1 bit), an exponent width of 15 bits, and a significand precision of 113 bits. This gives 33 to 36 significant decimal digits of precision. The format is written with an implicit lead bit with value 1 unless the exponent is stored with all zeros; thus only 112 bits of the significand appear in the memory format, but the total precision is 113 bits. By comparison, a binary256 would have a significand precision of 237 bits. The stored exponents 0000₁₆ and 7FFF₁₆ are interpreted specially. The minimum strictly positive (subnormal) value is 2^−16494 ≈ 10^−4965 and has a precision of only one bit. The minimum positive normal value is 2^−16382 ≈ 3.3621 × 10^−4932 and has a precision of 113 bits. The maximum representable value is 2^16384 − 2^16271 ≈ 1.1897 × 10^4932. These examples are given in the bit representation, in hexadecimal, of the floating-point value, including the sign, exponent, and significand; in the rounding example, the bits beyond the rounding point are 0101, which is less than 1/2 of a unit in the last place. A common software technique to implement nearly quadruple precision using pairs of double-precision values is sometimes called double-double arithmetic: a pair of doubles is stored in place of the quadruple-precision value q. Note that double-double arithmetic has the following special characteristics. As the magnitude of the value decreases, the amount of extra precision also decreases; therefore, the smallest number in the range is narrower than double precision. The smallest number with full precision is 1000…0₂ × 2^−1074, and numbers whose magnitude is smaller than 2^−1021 will not have additional precision compared with double precision. 
The actual number of bits of precision can vary. In general, the magnitude of the low-order part of the number is no greater than half a ULP of the high-order part. Certain algorithms that rely on having a fixed number of bits in the significand can fail when using 128-bit long double numbers. Because of the reason above, it is possible to represent values like 1 + 2^−1074. Similarly, triple-double and quad-double arithmetic represent numbers as a sum of three or four double-precision values respectively; they can represent operations with at least 159/161 and 212/215 bits respectively. A similar technique can be used to produce double-quad arithmetic, which represents numbers as a sum of two quadruple-precision values and can represent operations with at least 226 bits. Quadruple precision is often implemented in software by a variety of such techniques, since direct hardware support for quadruple precision is, as of 2016, uncommon.
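The double-double representation mentioned above rests on error-free transformations such as Knuth's two-sum, which recovers exactly the rounding error of a double-precision addition. The sketch below is a minimal illustration, not a full double-double arithmetic package:

```python
def two_sum(a, b):
    """Return (s, e) with s = fl(a + b) and a + b == s + e exactly
    (Knuth's branch-free two-sum)."""
    s = a + b
    bv = s - a
    e = (a - (s - bv)) + (b - bv)
    return s, e

# In plain double precision the small addend vanishes entirely:
print(1.0 + 2.0**-60 == 1.0)         # True
# A double-double pair keeps it as the low-order component:
hi, lo = two_sum(1.0, 2.0**-60)
print(hi == 1.0 and lo == 2.0**-60)  # True
```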
30.
Decimal128 floating-point format
–
In computing, decimal128 is a decimal floating-point computer numbering format that occupies 16 bytes in computer memory. It is intended for applications where it is necessary to emulate decimal rounding exactly, such as financial applications. Decimal128 supports 34 decimal digits of significand and an exponent range of −6143 to +6144, i.e. ±0.000000000000000000000000000000000×10^−6143 to ±9.999999999999999999999999999999999×10^6144. Therefore, decimal128 has the greatest range of values compared with the other IEEE basic floating point formats. Because the significand is not normalized, most values with less than 34 significant digits have multiple possible representations: 1×10^2 = 0.1×10^3 = 0.01×10^4, etc. Decimal128 floating point is a relatively new decimal floating-point format, formally introduced in the 2008 version of IEEE 754 as well as in ISO/IEC/IEEE 60559:2011. IEEE 754 allows two alternative representation methods for decimal128 values, and the standard does not specify how to signify which representation is used. In one representation method, based on binary integer decimal, the significand is represented as a binary coded positive integer. The other, alternative, representation method is based on densely packed decimal for most of the significand. Both alternatives provide exactly the same range of representable numbers: 34 digits of significand and 3×2^12 = 12288 possible exponent values. In both cases, the most significant 4 bits of the significand are combined with the most significant 2 bits of the exponent to use 30 of the 32 possible values of the 5-bit combination field; the remaining combinations encode infinities and NaNs. In the case of Infinity and NaN, all other bits of the encoding are ignored. Thus, it is possible to initialize an array to Infinities or NaNs by filling it with a single byte value. 
The encoding can represent binary significands up to 10×2^110−1 = 12980742146337069071326240823050239. As described above, the encoding varies depending on whether the most significant 4 bits of the significand are in the range 0 to 7, or higher. If the 2 bits after the sign bit are 11, then the 14-bit exponent field is shifted 2 bits to the right, and in this case there is an implicit leading 3-bit sequence 100 in the true significand; compare the implicit 1 in the significand of normal values in the binary formats. Note also that the 2-bit sequences 00, 01, or 10 after the sign bit are part of the exponent field. For the decimal128 format, all of these larger significands are out of the valid range and are thus decoded as zero, but the pattern is the same as in decimal32 and decimal64. In the densely packed decimal representation, the leading digit is between 0 and 9, and the rest of the significand uses the densely packed decimal encoding. The twelve bits after that are the exponent continuation field, providing the less-significant bits of the exponent, and the last 110 bits are the significand continuation field, consisting of eleven 10-bit declets. Each declet encodes three decimal digits using the DPD encoding. The DPD/3BCD transcoding for the declets is given by a lookup table, where b9…b0 are the bits of the DPD encoding and d2…d0 are the three BCD digits. The 8 decimal values whose digits are all 8s or 9s have four codings each.
31.
Octuple-precision floating-point format
–
In computing, octuple precision is a binary floating-point-based computer number format that occupies 32 bytes in computer memory. This 256-bit octuple-precision format is for applications requiring results in higher than quadruple precision; it is rarely used and very little software or hardware supports it. Only 236 bits of the significand appear in the memory format, the leading bit being implicit. The stored exponents 00000₁₆ and 7FFFF₁₆ are interpreted specially. The minimum strictly positive (subnormal) value is 2^−262378 ≈ 10^−78984 and has a precision of only one bit. The minimum positive normal value is 2^−262142 ≈ 2.4824 × 10^−78913. The maximum representable value is 2^262144 − 2^261907 ≈ 1.6113 × 10^78913. These examples are given in the bit representation, in hexadecimal, of the floating-point value, including the sign, exponent, and significand; in the rounding example, the bits beyond the rounding point are 0101, which is less than 1/2 of a unit in the last place. Octuple precision is rarely implemented since usage of it is extremely rare. Apple Inc. had an implementation of addition, subtraction and multiplication of numbers with a 224-bit two's complement significand. One can use general arbitrary-precision arithmetic libraries to obtain octuple precision; there is little to no hardware support for it, and octuple-precision arithmetic is too impractical for most commercial uses. See also: IEEE Standard for Floating-Point Arithmetic; ISO/IEC 10967, Language-independent arithmetic; primitive data type.
32.
Extended precision
–
Extended precision refers to floating point number formats that provide greater precision than the basic floating point formats. Extended precision formats support a basic format by minimizing roundoff and overflow errors in intermediate values of expressions on the base format. In contrast to extended precision, arbitrary-precision arithmetic refers to implementations of much larger numeric types using special software. The IBM 1130 offered two floating point formats: a 32-bit standard precision format and a 40-bit extended precision format. Standard precision format contained a 24-bit two's complement significand, while extended precision utilized a 32-bit two's complement significand; the latter format could make full use of the CPU's 32-bit integer operations. The characteristic in both formats was an 8-bit field containing the power of two biased by 128. Floating-point arithmetic operations were performed by software, and double precision was not supported at all. The extended format occupied three 16-bit words, with the extra space simply ignored. The IBM System/360 supports a 32-bit short floating point format and a 64-bit long floating point format. The 360/85 and follow-on System/370 added support for a 128-bit extended format. These formats are still supported in the current design, where they are now called the hexadecimal floating point formats. The IEEE 754 floating point standard recommends that implementations provide extended precision formats; the standard specifies the minimum requirements for an extended format but does not specify an encoding, which is the implementor's choice. The IA-32, x86-64 and Itanium processors support an 80-bit double extended precision format with a 64-bit significand. 
The Intel 8087 math coprocessor was the first x86 device which supported floating point arithmetic in hardware. It was designed to support a 32-bit single precision format and a 64-bit double precision format for encoding and interchanging floating point numbers. To mitigate roundoff issues in intermediate computations, the internal registers in the 8087 were designed to hold intermediate results in an 80-bit extended precision format, and the floating-point unit on all subsequent x86 processors has supported this format. As a result, software can be developed which takes advantage of the higher precision provided by this format; that kind of gradual evolution towards wider precision was already in view when the IEEE 754 Standard for Floating-Point Arithmetic was framed. The Motorola 6888x math coprocessors and the Motorola 68040 and 68060 processors support this same 64-bit significand extended precision type; the follow-on ColdFire processors do not support this 96-bit extended precision format. The x87 and Motorola 68881 80-bit formats meet the requirements of the IEEE 754 double extended format. The 80-bit format uses one bit for the sign of the significand, 15 bits for the exponent field and 64 bits for the significand. The exponent field is biased by 16383, meaning that 16383 has to be subtracted from the value in the exponent field to compute the power of 2. An exponent field value of 32767 is reserved so as to enable the representation of special states such as infinity. If the exponent field is zero, the value is a denormal number. The significand field m is the combination of the integer and fraction parts. In contrast to the single and double-precision formats, this format does not utilize an implicit/hidden bit; rather, bit 63 contains the integer part of the significand and bits 62-0 hold the fractional part.
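The 80-bit layout just described (sign, 15-bit biased exponent, 64-bit significand with an explicit integer bit) can be decoded in a few lines. The following Python sketch handles normal numbers only, ignoring zeros, denormals, infinities and NaNs:

```python
def decode_x87(raw):
    """Decode a little-endian 10-byte x87 double extended value
    (normal numbers only; an illustrative sketch, not production code)."""
    assert len(raw) == 10
    significand = int.from_bytes(raw[:8], 'little')  # 64 bits, explicit integer bit
    exp_word = int.from_bytes(raw[8:], 'little')     # sign bit + 15-bit exponent
    sign = -1.0 if exp_word & 0x8000 else 1.0
    exponent = (exp_word & 0x7FFF) - 16383           # remove the bias of 16383
    # bit 63 is the integer part of the significand; bits 62..0 the fraction
    return sign * (significand / 2**63) * 2.0**exponent

# 1.0 is stored as exponent 16383 and significand 0x8000000000000000:
one = (0x8000000000000000).to_bytes(8, 'little') + (16383).to_bytes(2, 'little')
print(decode_x87(one))  # 1.0
```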
33.
Minifloat
–
In computing, minifloats are floating point values represented with very few bits. Predictably, they are not well suited for general purpose numerical calculations; they are used for special purposes, most often in computer graphics, where iterations are small and precision has aesthetic effects. Additionally they are encountered as a pedagogical tool in computer science courses to demonstrate the properties and structures of floating point arithmetic. Minifloats with 16 bits are half-precision numbers; there are also minifloats with 8 bits or even fewer. Minifloats can be designed following the principles of the IEEE 754 standard; in this case they must obey the rules for the frontier between subnormal and normal numbers, must have special patterns for infinity and NaN, and store normalized numbers with a biased exponent. The new revision of the standard, IEEE 754-2008, has 16-bit binary minifloats. The Radeon R300 and R420 GPUs used an fp24 floating-point format with 7 bits of exponent and 16 bits of mantissa. Full Precision in Direct3D 9.0 is a proprietary 24-bit floating point format; Microsoft's D3D9 graphics API initially supported both FP24 and FP32 as Full Precision, as well as FP16 as Partial Precision, for vertex and pixel shading. In computer graphics minifloats are sometimes used to represent only integral values. If at the same time subnormal values should exist, the least subnormal number has to be 1; this statement can be used to calculate the bias value. The following example demonstrates the calculation as well as the underlying principles: a minifloat in one byte with one sign bit, four exponent bits and three mantissa bits is to be used to represent integral values. All IEEE 754 principles should be valid, and the only free value is the exponent bias, which will come out as −2. The as-yet-unknown exponent is called x for the moment; numbers in a different base are marked with the base as a subscript (e.g. 111₂). 
The bit patterns have spaces to visualize their parts:

0 0000 000 = 0

The mantissa is extended with 0.:
0 0000 001 = 0.001₂ × 2^x = 0.125 × 2^x = 1 (least subnormal number)
…
0 0000 111 = 0.111₂ × 2^x = 0.875 × 2^x = 7 (greatest subnormal number)

The mantissa is extended with 1.:
0 0001 000 = 1.000₂ × 2^x = 1 × 2^x = 8 (least normalized number)
0 0001 001 = 1.001₂ × 2^x = 1.125 × 2^x = 9
…
0 0010 000 = 1.000₂ × 2^(x+1) = 1 × 2^(x+1) = 16
0 0010 001 = 1.001₂ × 2^(x+1) = 1.125 × 2^(x+1) = 18
…
0 1110 000 = 1.000₂ × 2^(x+13) = 1 × 2^(x+13) = 65536
0 1110 001 = 1.001₂ × 2^(x+13) = 1.125 × 2^(x+13) = 73728

For the least subnormal number to equal 1, x must be 3, therefore the bias has to be −2; that is, every stored exponent has to be decreased by −2, or increased by 2, to get the numerical exponent.
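A decoder for this 1-4-3 layout makes the bias of −2 concrete; this Python sketch (the function name is illustrative) reproduces the values derived above:

```python
def decode_143(byte):
    """Decode the 8-bit minifloat described above: 1 sign bit,
    4 exponent bits, 3 mantissa bits, exponent bias -2."""
    sign = -1 if byte & 0x80 else 1
    e = (byte >> 3) & 0xF          # 4-bit exponent field
    m = byte & 0x7                 # 3-bit mantissa field
    if e == 0:                     # subnormal: 0.mmm * 2**(1 - bias) = 0.mmm * 2**3
        return sign * (m / 8) * 2.0**3
    if e == 15:                    # reserved for infinities and NaNs
        raise ValueError('infinity or NaN')
    return sign * (1 + m / 8) * 2.0**(e + 2)   # exponent = e - bias = e + 2

print(decode_143(0b00000001))  # 1.0  (least subnormal number)
print(decode_143(0b00001000))  # 8.0  (least normalized number)
print(decode_143(0b00010001))  # 18.0
print(decode_143(0b01110000))  # 65536.0
```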
34.
Microsoft Binary Format
–
In computing, Microsoft Binary Format (MBF) was a format for floating point numbers used in Microsoft's BASIC language products, including MBASIC, GW-BASIC and QuickBASIC prior to version 4.00. In 1975, Bill Gates and Paul Allen were working on Altair BASIC. One thing still missing was code to handle floating point numbers, needed to support calculations with very big and very small numbers, which would be particularly useful for science and engineering; one of the envisioned uses of the Altair was as a scientific calculator. At a dinner at Currier House, a residential house at Harvard, Gates mentioned the problem to his dinner companions. One of them, Monte Davidoff, told them he had written floating point routines before and convinced Gates he could write the package. At the time there was no standard for floating point numbers, so Davidoff had to come up with his own. He decided 32 bits would allow enough range and precision. When Allen had to demonstrate it to MITS, it was the first time the software ran on an actual Altair, but it worked, and when he entered 'PRINT 2+2', Davidoff's adding routine gave the right answer. The source code for Altair BASIC was thought to have been lost to history, but resurfaced in 2000; it had been sitting behind the file cabinet of Gates's former tutor and dean Harry Lewis. A comment in the source credits Davidoff as the writer of Altair BASIC's math package. Altair BASIC took off, and soon most early home computers ran some form of Microsoft BASIC. The BASIC port for the 6502 CPU, such as used in the Commodore PET, took up more space due to the lower code density of the 6502; because of this it would not fit in a single ROM chip together with the machine-specific input and output code. Since an extra chip was necessary anyway, extra space was available. Not long afterwards the Z80 ports, such as Level II BASIC for the TRS-80, introduced the 64-bit, double precision format as a separate data type from 32-bit, single precision. 
Even so, for a while MBF became the de facto floating point format on home computers, to the point where people still occasionally encounter legacy files in the format. As early as 1976, Intel was starting the development of a floating point coprocessor; Intel hoped to be able to sell a chip containing good implementations of all the operations found in the widely varying maths software libraries. John Palmer, who managed the project, contacted William Kahan. The first VAX, the VAX-11/780, had just come out in late 1977, and its floating point was highly regarded. However, seeking to market their chip to the broadest possible market, Intel wanted the best floating point possible. When rumours of Intel's new chip reached its competitors, they started a standardization effort, called IEEE 754, to prevent Intel from gaining too much ground. Kahan got Palmer's permission to participate; he was allowed to explain Intel's design decisions and their underlying reasoning. VAX's floating point formats differed from MBF only in that it had the sign in the most significant bit. It turned out that for double precision numbers, an 8-bit exponent isn't wide enough for some wanted operations, so both Kahan's proposal and a counter-proposal by DEC used 11 bits, like the time-tested 60-bit floating point format of the CDC 6600 from 1965. The next year DEC had a study done in order to demonstrate that gradual underflow was a bad idea. In 1985 the standard was ratified, but it had already become the de facto standard a year earlier, implemented by many manufacturers.