1.
File comparison
–
In computing, file comparison is the calculation and display of the differences and similarities between data objects, typically text files such as source code. The methods, implementations, and results are typically called a diff. The output may be presented in a user interface or used as part of larger tasks in networking and file systems. Some widely used file comparison programs are diff, cmp, FileMerge, WinMerge, and Beyond Compare; many text editors and word processors also perform file comparison to highlight the changes to a document. Most file comparison tools find the longest common subsequence between two files; any data not in the longest common subsequence is presented as an insertion or a deletion. In 1978, Paul Heckel published an algorithm that identifies most moved blocks of text; it is used in the IBM History Flow tool. Other file comparison programs find block moves, and some specialized tools find the longest increasing subsequence between two files. The rsync protocol uses a hash function to compare two files on two distant computers with low communication overhead. File comparison in word processors is typically at the word level, while byte- or character-level comparison is useful in some specialized applications. The display of a file comparison varies, with the main approaches being either showing the two files side by side, or showing a single file with markup indicating the changes from one file to the other. In either case, particularly with side-by-side viewing, code folding or text folding may be used to hide unchanged portions of the file. Comparison tools are used for various reasons: when comparing binary files, byte-level comparison is usually most appropriate, while text files and computer programs are better served by line- or word-level comparison. File comparison is an important, and most likely integral, part of file synchronization and backup. In backup methodologies, the issue of corruption is an important one.
Corruption occurs without warning and without our knowledge, usually undiscovered until it is too late to recover the missing parts; often the only way to know for sure that a file has become corrupted is when it is next used or opened. Barring that, one must use a tool to at least recognize that a difference has occurred. Therefore, all file synchronization or backup programs must include file comparison if they are to be actually useful. Prior to file comparison software, machines existed to compare magnetic tapes or punched cards. The IBM 519 Card Reproducer could determine whether a deck of punched cards was equivalent. In 1957, John Van Gardner developed a system to compare the checksums of loaded sections of Fortran programs to debug compilation problems on the IBM 704.
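As a concrete illustration of the longest-common-subsequence approach described above, Python's standard difflib module can produce a unified diff of two line sequences. This is a sketch with hypothetical file contents; difflib's SequenceMatcher uses a longest-matching-blocks heuristic that is closely related to, but not identical with, a pure LCS computation.

```python
# Sketch of LCS-style file comparison using Python's standard difflib.
# Lines that fall outside the matching blocks are reported as insertions
# (prefixed "+") or deletions (prefixed "-"), as described above.
import difflib

old = ["apple\n", "banana\n", "cherry\n"]          # hypothetical old file
new = ["apple\n", "blueberry\n", "cherry\n", "date\n"]  # hypothetical new file

diff = list(difflib.unified_diff(old, new, fromfile="old.txt", tofile="new.txt"))
print("".join(diff))
```

The same module's SequenceMatcher can also report matching blocks directly, which is closer to how block-move-aware tools present their results.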
2.
Computer science
–
Computer science is the study of the theory, experimentation, and engineering that form the basis for the design and use of computers. An alternate, more succinct definition of computer science is the study of automating algorithmic processes that scale. A computer scientist specializes in the theory of computation and the design of computational systems, and the field can be divided into a variety of theoretical and practical disciplines. Some fields, such as computational complexity theory, are highly abstract, while other fields focus on the challenges of implementing computation. Human–computer interaction considers the challenges in making computers and computations useful and usable. The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks, such as the abacus, have existed since antiquity; further, algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment. Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623, and in 1673 Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist for, among other reasons, documenting the binary number system. Charles Babbage started developing his Analytical Engine in 1834, and in less than two years he had sketched out many of the salient features of the modern computer. A crucial step was the adoption of a punched-card system derived from the Jacquard loom, making it infinitely programmable. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; when the machine was finished, some hailed it as Babbage's dream come true.
During the 1940s, as new and more powerful computing machines were developed and it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. Computer science began to be established as a distinct academic discipline in the 1950s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge in 1953; the first computer science program in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own right, and it is the now well-known IBM brand that formed part of the computer science revolution during this time. IBM released the IBM 704 and later the IBM 709 computers; still, working with an IBM computer was frustrating: if you had misplaced as much as one letter in one instruction, the program would crash, and you would have to start the whole process over again. During the late 1950s, the computer science discipline was very much in its developmental stages. Time has seen significant improvements in the usability and effectiveness of computing technology, and modern society has seen a significant shift in the users of computer technology, from usage only by experts and professionals to a near-ubiquitous user base.
3.
Programming language
–
A programming language is a formal computer language designed to communicate instructions to a machine, particularly a computer. Programming languages can be used to create programs that control the behavior of a machine or to express algorithms. From the early 1800s, programs were used to direct the behavior of machines such as Jacquard looms. Thousands of different programming languages have been created, mainly in the computer field. Many programming languages require computation to be specified in an imperative form, while other languages use other forms of program specification, such as the declarative form. The description of a language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document, while other languages have a dominant implementation that is treated as a reference. Some languages have both, with the language defined by a standard and extensions taken from the dominant implementation being common. A programming language is a notation for writing programs, which are specifications of a computation or algorithm. Some, but not all, authors restrict the term programming language to those languages that can express all possible algorithms. For example, PostScript programs are frequently created by another program to control a computer printer or display. More generally, a programming language may describe computation on some, possibly abstract, machine. It is generally accepted that a complete specification for a programming language includes a description, possibly idealized, of a machine or processor for that language. In most practical contexts, a programming language involves a computer; consequently, programming languages are usually defined and studied this way. Programming languages usually contain abstractions for defining and manipulating data structures or controlling the flow of execution. The theory of computation classifies languages by the computations they are capable of expressing; all Turing-complete languages can implement the same set of algorithms.
ANSI/ISO SQL-92 and Charity are examples of languages that are not Turing complete. Markup languages like XML, HTML, or troff, which define structured data, are not usually considered programming languages. Programming languages may, however, share the syntax with markup languages if a computational semantics is defined; XSLT, for example, is a Turing-complete XML dialect. Moreover, LaTeX, which is mostly used for structuring documents, also contains a Turing-complete subset. The term computer language is sometimes used interchangeably with programming language.
4.
Inequality (mathematics)
–
In mathematics, an inequality is a relation that holds between two values when they are different. The notation a ≠ b means that a is not equal to b; it does not say that one is greater than the other, or even that they can be compared in size. If the values in question are elements of an ordered set, such as the integers or the real numbers, then they can be compared in size. The notation a < b means that a is less than b, and the notation a > b means that a is greater than b; in either case, a is not equal to b, and these relations are known as strict inequalities. The notation a < b may also be read as a is strictly less than b. The notation a ≤ b means that a is less than or equal to b, and a ≥ b means that a is greater than or equal to b; these are known as non-strict inequalities. Not less than can also be represented by the symbol for less than bisected by a vertical line: a ≮ b means that a is not less than b. In the engineering sciences, a common use of the notation is to state that one quantity is much greater than another: the notation a ≪ b means that a is much less than b, and a ≫ b means that a is much greater than b. Inequalities are governed by the following properties. All of these properties also hold if all of the non-strict inequalities are replaced by their corresponding strict inequalities and monotonic functions are limited to strictly monotonic functions. The transitive property of inequality states that, for any real numbers a, b, c: if a ≥ b and b ≥ c, then a ≥ c; if a ≤ b and b ≤ c, then a ≤ c. If either of the premises is a strict inequality, then the conclusion is a strict inequality: e.g. if a ≥ b and b > c, then a > c. An equality is of course a special case of a non-strict inequality: e.g. if a = b and b > c, then a > c. The relations ≤ and ≥ are each other's converse: for any real numbers a and b, if a ≤ b, then b ≥ a. A common constant c may be added to both sides: if a ≤ b, then a + c ≤ b + c. If a ≤ b and c > 0, then ac ≤ bc and a/c ≤ b/c. If c is negative, then multiplying or dividing by c inverts the inequality: if a ≥ b and c < 0, then ac ≤ bc; if a ≤ b and c < 0, then ac ≥ bc and a/c ≥ b/c. More generally, this applies for an ordered field; see below.
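The addition, multiplication, and sign-reversal properties above can be spot-checked numerically. The following is an illustrative Python sketch with arbitrary sample values, not a proof:

```python
# Spot-check of the inequality properties listed above, for sample values.
a, b, c = 2.0, 5.0, 3.0    # here a <= b and c > 0

assert a <= b
assert a + c <= b + c       # adding a common constant preserves the inequality
assert a * c <= b * c       # multiplying by a positive constant preserves it
assert a / c <= b / c

d = -3.0                    # a negative constant
assert a * d >= b * d       # multiplying by a negative constant reverses it
assert a / d >= b / d

# Transitivity: a <= b and b <= 6 together imply a <= 6.
assert a <= b <= 6.0 and a <= 6.0
print("all inequality properties hold for these sample values")
```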
5.
Pascal (programming language)
–
Pascal, named in honor of the French mathematician and philosopher Blaise Pascal, was developed by Niklaus Wirth; a derivative known as Object Pascal, designed for object-oriented programming, was developed in 1985. Before his work on Pascal, Wirth had developed Euler and ALGOL W, and he later went on to develop the Pascal-like languages Modula-2 and Oberon. Initially, Pascal was largely, but not exclusively, intended to teach structured programming. A generation of students used Pascal as an introductory language in undergraduate courses. Variants of Pascal have also frequently been used for everything from research projects to PC games, and newer Pascal compilers exist which are widely used. Pascal was the primary high-level language used for development on the Apple Lisa and in the early years of the Macintosh; parts of the original Macintosh operating system were hand-translated into Motorola 68000 assembly language from the Pascal sources. Apollo Computer used Pascal as the systems programming language for its operating systems beginning in 1980. Object Pascal is still used for developing Windows applications, but also has the ability to compile the same code to Mac and iOS. Another cross-platform version called Free Pascal, with the Lazarus IDE, is popular with Linux users since it also offers write once, compile anywhere development; CodeTyphon is a Lazarus distribution with more preinstalled packages and cross compilers. Wirth's intention was to create an efficient language based on structured programming; important features included for this were records, enumerations, subranges, dynamically allocated variables with associated pointers, and sets. To make this possible and meaningful, Pascal has strong typing on all objects; similar mechanisms are standard in many programming languages today. This enables a very simple and coherent syntax where a complete program is syntactically nearly identical to a single procedure or function.
The first Pascal compiler was designed in Zürich for the CDC 6000 series mainframe computer family. Niklaus Wirth reports that a first attempt to implement it in Fortran in 1969 was unsuccessful due to Fortran's inadequacy for expressing complex data structures. The second attempt was implemented in a C-like language and then translated by hand into Pascal itself for bootstrapping; most subsequent Pascal compilers have likewise been written in Pascal, the GNU Pascal compiler being one notable exception, written in C. The first successful port of the CDC Pascal compiler to another mainframe was completed by Welsh; the target was the ICL 1900 series. This compiler, in turn, was the parent of the Pascal compiler for the Information Computer Systems Multum minicomputer; it is thought that Multum Pascal, which was completed in the summer of 1973, may have been the first 16-bit implementation. A completely new compiler was completed by Welsh et al. at QUB in 1977; it offered a source-language diagnostic feature that was implemented by Findlay and Watt at Glasgow University. This implementation was ported in 1980 to the ICL 2900 series by a team based at Southampton University. The first Pascal compiler written in North America was constructed at the University of Illinois under Donald B. Gillies.
6.
Ada (programming language)
–
Ada is a structured, statically typed, imperative, wide-spectrum, and object-oriented high-level computer programming language, extended from Pascal and other languages. It has built-in language support for design-by-contract, extremely strong typing, and explicit concurrency, offering tasks, synchronous message passing, and protected objects. Ada improves code safety and maintainability by using the compiler to find errors at compile time in favor of runtime errors. Ada is an international standard; the current version is defined by ISO/IEC 8652:2012. Ada was named after Ada Lovelace, who is credited with being the first computer programmer. Ada was originally targeted at embedded and real-time systems. The Ada 95 revision, designed by S. Tucker Taft of Intermetrics between 1992 and 1995, improved support for systems, numerical, financial, and object-oriented programming. Features of Ada include strong typing, modularity mechanisms, run-time checking, parallel processing, and exception handling; Ada 95 added support for object-oriented programming, including dynamic dispatch. The syntax of Ada minimizes choices of ways to perform basic operations. Ada uses the basic arithmetical operators +, -, *, and /, but avoids using other symbols. Code blocks are delimited by words such as declare, begin, and end; in the case of conditional blocks, this avoids a dangling else that could pair with the wrong nested if-expression in other languages like C or Java. Ada is designed for the development of large software systems. Ada packages can be compiled separately, and Ada package specifications can also be compiled separately, without the implementation, to check for consistency. This makes it possible to detect problems early, during the design phase. For example, the syntax requires explicitly named closing of blocks to prevent errors due to mismatched end tokens. The adherence to strong typing allows detection of many common software errors either at compile time or, failing that, at run time.
As concurrency is part of the language specification, the compiler can in some cases detect potential deadlocks. Compilers also commonly check for misspelled identifiers, visibility of packages, redundant declarations, etc. and can provide warnings and useful suggestions on how to fix the error. Ada also supports run-time checks to protect against access to unallocated memory, buffer overflow errors, range violations, off-by-one errors, and array access errors. These checks can be disabled in the interest of runtime efficiency, but can often be compiled efficiently. Ada also includes facilities to help program verification. For these reasons, Ada is widely used in critical systems, where any anomaly might lead to very serious consequences, e.g. accidental death, injury, or severe financial loss. Examples of systems where Ada is used include avionics, air traffic control, railways, banking, and the military. Ada's dynamic memory management is high-level and type-safe. Ada does not have generic or untyped pointers, nor does it implicitly declare any pointer type; instead, all dynamic memory allocation and deallocation must take place through explicitly declared access types.
7.
C (programming language)
–
C was originally developed by Dennis Ritchie between 1969 and 1973 at Bell Labs, and used to re-implement the Unix operating system. C has been standardized by the American National Standards Institute (ANSI) since 1989. C is an imperative procedural language, designed to provide low-level access to memory and constructs that map efficiently to machine instructions; therefore, C was useful for applications that had formerly been coded in assembly language. Despite its low-level capabilities, the language was designed to encourage cross-platform programming: a standards-compliant and portably written C program can be compiled for a very wide variety of computer platforms and operating systems with few changes to its source code, and the language has become available on a wide range of platforms. In C, all executable code is contained within subroutines, which are called functions. Function parameters are passed by value; pass-by-reference is simulated in C by explicitly passing pointer values. C program source text is free-format, using the semicolon as a statement terminator and curly braces for grouping blocks of statements. The C language also exhibits the following characteristics. There is a small, fixed number of keywords, including a full set of flow-of-control primitives: for, if/else, while, switch. User-defined names are not distinguished from keywords by any kind of sigil. There is a large number of arithmetical and logical operators, such as +, +=, ++, &, ~, etc. More than one assignment may be performed in a single statement, and function return values can be ignored when not needed. Typing is static, but weakly enforced: all data has a type, but implicit conversions are possible. C has no define keyword; instead, a statement beginning with the name of a type is taken as a declaration. There is no function keyword; instead, a function is indicated by the parentheses of an argument list. User-defined and compound types are possible. Heterogeneous aggregate data types (structs) allow related data elements to be accessed and assigned as a unit. Array indexing is a secondary notation, defined in terms of pointer arithmetic.
Unlike structs, arrays are not first-class objects: they cannot be assigned or compared using single built-in operators. There is no array keyword, in use or definition; instead, square brackets indicate arrays syntactically. Enumerated types are possible with the enum keyword; they are not tagged, and are freely interconvertible with integers. Strings are not a distinct data type, but are conventionally implemented as null-terminated arrays of characters. Low-level access to memory is possible by converting machine addresses to typed pointers.
8.
Inheritance (object-oriented programming)
–
Such an inherited class is called a subclass of its parent class or super class. Inheritance is a mechanism for code reuse and allows independent extensions of the original software via public classes and interfaces. The relationships of objects or classes through inheritance give rise to a hierarchy. Inheritance was invented in 1967 for Simula. Inheritance should not be confused with subtyping; to distinguish these concepts, subtyping is also known as interface inheritance, whereas inheritance as defined here is known as implementation inheritance or code inheritance. Still, inheritance is a commonly used mechanism for establishing subtype relationships. Inheritance is contrasted with object composition, where one object contains another object (see composition over inheritance); composition implements a has-a relationship, in contrast to the is-a relationship of subtyping. There are various types of inheritance, based on paradigm and specific language. Single inheritance: subclasses inherit the features of one superclass; a class acquires the properties of another class. Multiple inheritance: one class can have more than one superclass and inherit features from all parent classes. Multilevel inheritance: a subclass is inherited from another subclass. It is not uncommon that a class is derived from another derived class, as shown in the figure Multilevel inheritance: the class A serves as a base class for the derived class B, and the class B is known as an intermediate base class because it provides a link for the inheritance between A and C; the chain A-B-C is known as the inheritance path. This chaining can be extended to any number of levels. Hierarchical inheritance: one class serves as a superclass for more than one subclass. Hybrid inheritance: a mix of two or more of the above types of inheritance.
A subclass, derived class, heir class, or child class is a modular, derivative class that inherits properties from one or more other classes. The semantics of class inheritance vary from language to language, but commonly the subclass automatically inherits the instance variables and member functions of its superclasses. In C++-style declarations, the colon indicates that the derived-class-name is derived from the base-class-name. The visibility-mode is optional and, if present, may be private or public; the visibility mode specifies whether the features of the base class are privately derived or publicly derived. Some languages support the inheritance of other constructs as well; for example, in Eiffel, contracts that define the specification of a class are also inherited by heirs.
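The multilevel chain described above, with A as the base class, B as the intermediate base class, and C as the most derived class, can be rendered as a minimal Python sketch (the class and method names here are hypothetical illustrations, not from the article):

```python
# Multilevel inheritance: the inheritance path is A -> B -> C.

class A:                    # base class
    def greet(self):
        return "hello from A"

class B(A):                 # intermediate base class, derived from A
    def shout(self):
        return self.greet().upper()

class C(B):                 # derived from B; transitively inherits from A
    pass

c = C()
print(c.greet())            # method inherited from A, through B
print(c.shout())            # method inherited from B
# Method resolution order follows the inheritance path: C, B, A, object.
print([cls.__name__ for cls in C.__mro__])
```

In single-inheritance chains like this, the language simply searches up the path A-B-C when resolving a member; multiple inheritance complicates that search order.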
9.
Identity (mathematics)
–
In mathematics, an identity is an equality that holds for all values of its variables. In other words, A = B is an identity if A and B define the same functions; an identity is an equality between functions that are differently defined. For example, (a + b)² = a² + 2ab + b² and cos²θ + sin²θ = 1 are identities. Identities are sometimes indicated by the triple bar symbol ≡ instead of =. Geometrically, trigonometric identities are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities involving both angles and side lengths of a triangle; only the former are covered in this article. These identities are useful whenever expressions involving trigonometric functions need to be simplified. An equation, by contrast, need not hold for all values of its variables: for example, an equation such as cos θ = 1 is true when θ = 0 but false when θ = 2. The following identities hold for all integer exponents, provided that the base is non-zero. Exponentiation is neither commutative nor associative; this contrasts with addition and multiplication, which are. For example, 2 + 3 = 3 + 2 = 5 and 2 · 3 = 3 · 2 = 6, but 2³ = 8, whereas 3² = 9. Likewise, (2 + 3) + 4 = 2 + (3 + 4) = 9 and (2 · 3) · 4 = 2 · (3 · 4) = 24, but (2³)⁴ = 8⁴ = 4,096, whereas 2^(3⁴) = 2⁸¹ = 2,417,851,639,229,258,349,412,352. Without parentheses to modify the order of calculation, by convention the order in a tower of exponents is top-down, not bottom-up. Several important formulas, sometimes called logarithmic identities or log laws, relate logarithms to one another. The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the p-th power of a number is p times the logarithm of the number itself; and the logarithm of a p-th root is the logarithm of the number divided by p. The following table lists these identities with examples. Each of the identities can be derived after substitution of the logarithm definitions x = b^(log_b x) and/or y = b^(log_b y) in the left-hand sides. The logarithm log_b x can be computed from the logarithms of x and b with respect to an arbitrary base k using the formula log_b x = log_k x / log_k b. Typical scientific calculators calculate the logarithms to bases 10 and e.
Logarithms with respect to any base b can be determined using either of these two logarithms by the formula log_b x = log₁₀ x / log₁₀ b = log_e x / log_e b. Given a number x and its logarithm log_b x to an unknown base b, the base is given by b = x^(1/log_b x). The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. The Gudermannian function gives a direct relationship between the circular functions and the hyperbolic ones that does not involve complex numbers.
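The product, power, root, and change-of-base identities above can be spot-checked numerically with Python's math module; the values below are arbitrary sample inputs, and floating-point comparison is done with math.isclose:

```python
# Numeric spot-check of the logarithmic identities listed above.
import math

x, y, b, p = 8.0, 32.0, 2.0, 3.0

# log_b(x*y) = log_b(x) + log_b(y)  -- logarithm of a product
assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))

# log_b(x**p) = p * log_b(x)        -- logarithm of a p-th power
assert math.isclose(math.log(x ** p, b), p * math.log(x, b))

# log_b(x**(1/p)) = log_b(x) / p    -- logarithm of a p-th root
assert math.isclose(math.log(x ** (1 / p), b), math.log(x, b) / p)

# Change of base: log_b(x) = log_k(x) / log_k(b), with k = 10 and k = e.
assert math.isclose(math.log(x, b), math.log10(x) / math.log10(b))
assert math.isclose(math.log(x, b), math.log(x) / math.log(b))
print("all logarithmic identities verified for these sample values")
```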
10.
Reflexive relation
–
In mathematics, a binary relation R over a set X is reflexive if every element of X is related to itself. In mathematical notation, this is: ∀a ∈ X: a R a. An example of a reflexive relation is the relation "is equal to" on the set of real numbers. A reflexive relation is said to have the reflexive property or is said to possess reflexivity. A relation that is irreflexive, or anti-reflexive, is a relation on a set where no element is related to itself; an example is the "greater than" relation on the real numbers. Note that not every relation which is not reflexive is irreflexive: it is possible to define relations where some elements are related to themselves but others are not. A relation ~ on a set S is called quasi-reflexive if every element that is related to some element is also related to itself; formally: ∀x, y ∈ S: x ~ y ⇒ (x ~ x ∧ y ~ y). The reflexive closure ≃ of a binary relation ~ on a set S is the smallest reflexive relation on S that is a superset of ~. Equivalently, it is the union of ~ and the identity relation on S; formally, ≃ = ~ ∪ {(x, x) | x ∈ S}. For example, the reflexive closure of x < y is x ≤ y. The reflexive reduction, or irreflexive kernel, of a binary relation ~ on a set S is the smallest relation ≆ such that ≆ shares the same reflexive closure as ~; it can be seen in a way as the opposite of the reflexive closure. It is equivalent to the complement of the identity relation on S with regard to ~; formally, ≆ = ~ \ {(x, x) | x ∈ S}. That is, it is equivalent to ~ except where x ~ x is true. For example, the reflexive reduction of x ≤ y is x < y. Authors in philosophical logic often use deviating designations: a reflexive relation and a quasi-reflexive relation in the mathematical sense are called a totally reflexive relation and a reflexive relation, respectively, in the philosophical-logic sense.
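The reflexive closure and reduction defined above can be computed mechanically for a finite relation represented as a set of ordered pairs; the following is a toy Python sketch over the hypothetical set S = {1, 2, 3}:

```python
# Reflexive closure and reduction of a small relation on S = {1, 2, 3}.
S = {1, 2, 3}
less_than = {(a, b) for a in S for b in S if a < b}     # x < y, irreflexive

# Closure: union with the identity relation {(x, x) | x in S}.
closure = less_than | {(x, x) for x in S}
# Reduction: remove the identity relation (strip the diagonal).
reduction = {(a, b) for (a, b) in closure if a != b}

# As stated above: the closure of x < y is x <= y, and its reduction
# recovers x < y.
assert closure == {(a, b) for a in S for b in S if a <= b}
assert reduction == less_than

def is_reflexive(R, S):
    """A relation R on S is reflexive iff (x, x) is in R for every x in S."""
    return all((x, x) in R for x in S)

assert is_reflexive(closure, S)
assert not is_reflexive(less_than, S)    # < is irreflexive
print("closure and reduction behave as described")
```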
11.
Symmetry
–
Symmetry in everyday language refers to a sense of harmonious and beautiful proportion and balance. In mathematics, symmetry has a more precise definition: an object is symmetric if it is invariant under any of various transformations. Although these two meanings of symmetry can sometimes be told apart, they are related, so they are discussed together here. The opposite of symmetry is asymmetry. A geometric shape or object is symmetric if it can be divided into two or more identical pieces that are arranged in an organized fashion. This means that an object is symmetric if there is a transformation that moves individual pieces of the object but does not change the overall shape. An object has rotational symmetry if it can be rotated about a fixed point without changing the overall shape. An object has translational symmetry if it can be translated without changing its overall shape. An object has helical symmetry if it can be simultaneously translated and rotated in three-dimensional space along a line known as a screw axis. An object has scale symmetry if it does not change shape when it is expanded or contracted. Fractals also exhibit a form of scale symmetry, where small portions of the fractal are similar in shape to large portions. Other symmetries include glide reflection symmetry and rotoreflection symmetry. In logic, a dyadic relation R is symmetric if and only if, whenever it is true that Rab, it is true that Rba. Thus, "is the same age as" is symmetric, for if Paul is the same age as Mary, then Mary is the same age as Paul. Symmetric binary logical connectives are and, or, biconditional, nand, and xor. The set of operations that preserve a given property of an object form a group. In general, every kind of structure in mathematics will have its own kind of symmetry; examples include even and odd functions in calculus, the symmetric group in abstract algebra, symmetric matrices in linear algebra, and the Galois group in Galois theory.
In statistics, symmetry appears as symmetric probability distributions, and as skewness, the asymmetry of distributions. Symmetry in physics has been generalized to mean invariance, that is, lack of change, under any kind of transformation, for example arbitrary coordinate transformations. This concept has become one of the most powerful tools of theoretical physics. See Noether's theorem, and also Wigner's classification, which says that the symmetries of the laws of physics determine the properties of the particles found in nature. Important symmetries in physics include continuous symmetries and discrete symmetries of spacetime, and internal symmetries of particles. In biology, the notion of symmetry is mostly used explicitly to describe body shapes. Bilateral animals, including humans, are more or less symmetric with respect to the plane which divides the body into left and right halves.
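The relational sense of symmetry above, that Rab implies Rba, can be checked mechanically for any finite relation. A toy Python sketch, using the article's "is the same age as" example and a hypothetical counterexample:

```python
# A finite dyadic relation is symmetric iff (b, a) is present whenever
# (a, b) is.
def is_symmetric(R):
    return all((b, a) in R for (a, b) in R)

# "is the same age as": symmetric, as argued above.
same_age = {("paul", "mary"), ("mary", "paul"),
            ("paul", "paul"), ("mary", "mary")}

# "is older than": a hypothetical counterexample, not symmetric.
older_than = {("paul", "mary")}

assert is_symmetric(same_age)
assert not is_symmetric(older_than)
print("same_age is symmetric; older_than is not")
```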
12.
NaN
–
In computing, NaN, standing for not a number, is a numeric data type value representing an undefined or unrepresentable value, especially in floating-point calculations. Systematic use of NaNs was introduced by the IEEE 754 floating-point standard in 1985, in which two separate kinds of NaNs are provided, termed quiet NaNs and signaling NaNs. An invalid operation is not the same as an arithmetic overflow or an arithmetic underflow. For example, a bit-wise IEEE single-precision NaN has the pattern s111 1111 1xxx xxxx xxxx xxxx xxxx xxxx, where s is the sign. Some bits from x are used to determine the type of NaN; the remaining bits encode a payload. Floating-point operations other than ordered comparisons normally propagate a quiet NaN. A comparison with a NaN always returns an unordered result, even when comparing a NaN with itself. The comparison predicates are either signaling or non-signaling; the signaling versions signal the invalid-operation exception for such comparisons. The equality and inequality predicates are non-signaling, so x = x returning false can be used to test if x is a quiet NaN. The other standard comparison predicates are all signaling if they receive a NaN operand. The predicate isNaN determines whether a value is a NaN and never signals an exception, even if x is a signaling NaN. The propagation of quiet NaNs through arithmetic operations allows errors to be detected at the end of a sequence of operations without extensive testing during intermediate stages. In section 6.2 of the revised IEEE 754-2008 standard there are two anomalous functions that favor numbers: if just one of the operands is a NaN, then the value of the other operand is returned. There are three kinds of operations that can return NaN: operations with a NaN as at least one operand; indeterminate forms, such as the divisions 0/0 and ±∞/±∞;
the additions ∞ + (−∞) and (−∞) + ∞, and equivalent subtractions; and real operations with complex results. For powers, the standard has alternative functions: the standard pow function and the integer-exponent pown function define 0⁰, 1^∞, and ∞⁰ as 1, while the powr function defines all three forms as invalid operations and so returns NaN. Real operations with complex results include, for example: the square root of a negative number; the logarithm of a negative number; and the inverse sine or cosine of a number that is less than −1 or greater than +1. NaNs may also be explicitly assigned to variables, typically as a representation for missing values. Prior to the IEEE standard, programmers often used a special value to represent undefined or missing values. NaNs are not necessarily generated in all the above cases: if an operation can produce an exception condition and traps are not masked, then the operation will cause a trap instead. If an operand is a quiet NaN, and there isn't also a signaling NaN operand, then there is no exception condition. Explicit assignments will not cause an exception, even for signaling NaNs.
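Python's float is an IEEE 754 double on virtually all platforms, so the quiet-NaN behavior described above can be observed directly. Note that in Python the division 0.0/0.0 raises ZeroDivisionError rather than returning NaN, so the indeterminate form ∞ + (−∞) is used below instead:

```python
# Observable IEEE 754 quiet-NaN behavior in Python.
import math

nan = float("nan")
inf = float("inf")

# A comparison with NaN is unordered: even x == x is false.
assert not (nan == nan)
assert nan != nan

# Quiet NaNs propagate through arithmetic...
assert math.isnan(nan + 1.0)
assert math.isnan(0.0 * nan)

# ...and arise from indeterminate forms such as inf + (-inf).
assert math.isnan(inf + (-inf))

# math.isnan is the isNaN predicate: it tests for NaN without
# relying on the x != x trick.
assert math.isnan(nan)
print("NaN behaves as an unordered, propagating value")
```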
13.
String (computer science)
–
In computer programming, a string is traditionally a sequence of characters, either as a literal constant or as some kind of variable. The latter may allow its elements to be mutated and the length changed. A string is generally understood as a data type and is often implemented as an array of bytes that stores a sequence of elements, typically characters, using some character encoding. The term may also denote more general arrays or other sequence data types and structures. When a string appears literally in source code, it is known as a string literal or an anonymous string. Strings are also studied in formal languages, which are used in mathematical logic and theoretical computer science. Let Σ be a non-empty finite set of symbols, called the alphabet; no assumption is made about the nature of the symbols. A string over Σ is any finite sequence of symbols from Σ. For example, if Σ = {0, 1}, then 01011 is a string over Σ. The length of a string s is the number of symbols in s and can be any non-negative integer; it is often denoted |s|. The empty string is the unique string over Σ of length 0, and is denoted ε. The set of all strings over Σ of length n is denoted Σ^n; for example, if Σ = {0, 1}, then Σ^2 = {00, 01, 10, 11}. Note that Σ^0 = {ε} for any alphabet Σ. The set of all strings over Σ of any length is the Kleene closure of Σ and is denoted Σ*. In terms of Σ^n, Σ* = ⋃_{n ∈ ℕ ∪ {0}} Σ^n. Although the set Σ* itself is countably infinite, each element of Σ* is a string of finite length. A set of strings over Σ is called a language over Σ. For example, if Σ = {0, 1}, the set of strings with an even number of zeros is a formal language over Σ. Concatenation is an important binary operation on Σ*: for any two strings s and t in Σ*, their concatenation, denoted st, is defined as the sequence of symbols in s followed by the sequence of symbols in t. For example, if s = bear and t = hug, then st = bearhug. String concatenation is an associative, but non-commutative, operation.
The empty string ε serves as the identity element: for any string s, εs = sε = s. Therefore, the set Σ* and the concatenation operation form a monoid, the free monoid generated by Σ. In addition, the length function defines a monoid homomorphism from Σ* to the non-negative integers, since |st| = |s| + |t|. A string s is said to be a substring or factor of t if there exist strings u and v such that t = usv.
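The monoid structure of Σ* under concatenation can be checked directly in Python, treating strings as sequences of characters (a small illustrative sketch):

```python
from itertools import product

# Alphabet Σ = {"0", "1"}; Σ^n is the set of all strings of length n.
sigma = ["0", "1"]
sigma2 = {"".join(p) for p in product(sigma, repeat=2)}
print(sorted(sigma2))  # ['00', '01', '10', '11']

# Concatenation is associative but not commutative.
s, t = "bear", "hug"
print(s + t)                           # bearhug
assert (s + t) + "s" == s + (t + "s")  # associative
assert s + t != t + s                  # not commutative for these strings

# The empty string is the identity element, and length is a
# monoid homomorphism: |st| = |s| + |t|.
assert "" + s == s + "" == s
assert len(s + t) == len(s) + len(t)
```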
14.
Fraction (mathematics)
–
A fraction represents a part of a whole or, more generally, any number of equal parts. When spoken in everyday English, a fraction describes how many parts of a certain size there are, for example, one-half, eight-fifths, three-quarters. A common, vulgar, or simple fraction consists of an integer numerator displayed above a line and a non-zero integer denominator displayed below that line. Numerators and denominators are also used in fractions that are not common, including compound fractions, complex fractions, and mixed numerals. The numerator represents a number of equal parts, and the denominator indicates how many of those parts make up a unit or a whole. For example, in the fraction 3/4, the numerator, 3, tells us that the fraction represents 3 equal parts, and the denominator, 4, tells us that 4 parts make up a whole; the picture to the right illustrates 3/4 of a cake. Fractional numbers can also be written without explicit numerators or denominators, by using decimals, percent signs, or negative exponents. An integer such as the number 7 can be thought of as having an implicit denominator of one: 7 equals 7/1. Fractions are also used to represent ratios and division; thus the fraction 3/4 is also used to represent the ratio 3:4 and the division 3 ÷ 4. The test for a number being a rational number is that it can be written in that form, i.e., as a ratio of two integers. In a fraction, the number of parts being described is the numerator, and the type or variety of the parts is the denominator. Informally, they may be distinguished by placement alone, but in formal contexts they are separated by a fraction bar. The fraction bar may be horizontal, oblique, or diagonal; these marks are respectively known as the horizontal bar, the slash or stroke, the division slash, and the fraction slash. In typography, horizontal fractions are known as en or nut fractions and diagonal fractions as em fractions. The denominators of English fractions are expressed as ordinal numbers. When the denominator is 1, the fraction may be expressed in terms of wholes, but the denominator is commonly ignored.
When the numerator is one, it may be omitted. A fraction may be expressed as a single composition, in which case it is hyphenated, or as a number of fractions with a numerator of one, in which case they are not. Fractions should always be hyphenated when used as adjectives. Alternatively, a fraction may be described by reading it out as the numerator over the denominator, with the denominator expressed as a cardinal number. The term over is used even in the case of solidus fractions. Fractions with large denominators that are not powers of ten are often rendered in this fashion, while those with denominators divisible by ten are typically read in the normal ordinal fashion. A simple fraction is a rational number written as a/b or with a horizontal fraction bar, where a and b are integers and b is not zero.
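Python's standard fractions module models common fractions as numerator/denominator pairs and illustrates several of the points above, including the implicit denominator of one and the fraction as a division:

```python
from fractions import Fraction

# 3/4 as a common fraction: numerator 3, denominator 4.
f = Fraction(3, 4)
print(f.numerator, f.denominator)  # 3 4

# An integer has an implicit denominator of one: 7 = 7/1.
print(Fraction(7))      # 7

# The fraction 3/4 also represents the division 3 ÷ 4.
print(float(f))         # 0.75

# Fractions are kept in lowest terms automatically.
print(Fraction(8, 5))   # 8/5  (eight-fifths)
print(Fraction(6, 8))   # 3/4
```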
15.
Floating-point arithmetic
–
In computing, floating-point arithmetic is arithmetic using a formulaic representation of real numbers as an approximation, so as to support a trade-off between range and precision. A number is, in general, represented approximately to a fixed number of significant digits and scaled using an exponent in some fixed base. For example, 1.2345 = 12345 × 10^−4, where 12345 is the significand, 10 is the base, and −4 is the exponent. The term floating point refers to the fact that a number's radix point can float, that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated by the exponent component, and thus the floating-point representation can be thought of as a kind of scientific notation. A result of this dynamic range is that the numbers that can be represented are not uniformly spaced. Over the years, a variety of floating-point representations have been used in computers; however, since the 1990s, the most commonly encountered representation is that defined by the IEEE 754 Standard. A floating-point unit is a part of a computer system designed to carry out operations on floating-point numbers. A number representation specifies some way of encoding a number, usually as a string of digits, and there are several mechanisms by which strings of digits can represent numbers. In common mathematical notation, the digit string can be of any length. If the radix point is not specified, then the string implicitly represents an integer. In fixed-point systems, a position in the string is specified for the radix point; a fixed-point scheme might be to use a string of 8 decimal digits with the point in the middle. The scaling factor, as a power of ten, is then indicated separately at the end of the number. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of a signed digit string of a given length in a given base.
This digit string is referred to as the significand, or mantissa. The length of the significand determines the precision to which numbers can be represented. The radix point position is assumed always to be somewhere within the significand, often just after or just before the most significant digit; this article generally follows the convention that the radix point is set just after the most significant digit. A signed integer exponent modifies the magnitude of the number. Using base-10 as an example, the number 152853.5047, which has ten decimal digits of precision, is represented as the significand 1528535047 together with 5 as the exponent. In storing such a number, the base need not be stored, since it is the same for the entire range of supported numbers. Symbolically, this value is (s / b^(p−1)) × b^e, where s is the significand, p is the precision (the number of digits in the significand), b is the base, and e is the exponent.
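The base-10 example above can be reproduced with Python's standard decimal module; this sketch assumes the article's convention of placing the radix point just after the most significant digit, which is why the stored exponent is 5:

```python
from decimal import Decimal

# significand s = 1528535047, precision p = 10 digits, exponent e = 5
s, p, e = 1528535047, 10, 5

# (s / b**(p-1)) * b**e with base b = 10 (fixed by Decimal):
# the radix point sits just after the most significant digit,
# giving 1.528535047 × 10^5.
value = Decimal(s).scaleb(-(p - 1)).scaleb(e)
print(value)  # 152853.5047

# Decimal exposes the same (sign, digits, exponent) decomposition,
# here with the significand read as an integer scaled by 10^-4.
sign, digits, exponent = Decimal("152853.5047").as_tuple()
print("".join(map(str, digits)), exponent)  # 1528535047 -4
```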
16.
IEEE 754
–
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point computation established in 1985 by the Institute of Electrical and Electronics Engineers. The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably. Many hardware floating-point units now use the IEEE 754 standard. The international standard ISO/IEC/IEEE 60559:2011 has been approved for adoption through JTC1/SC25 under the ISO/IEEE PSDO Agreement and published. The binary formats in the original standard are included in the new standard along with three new basic formats. To conform to the current standard, an implementation must implement at least one of the basic formats as both an arithmetic format and an interchange format. As of September 2015, the standard was being revised to incorporate clarifications. An IEEE 754 format is a set of representations of numerical values and symbols; a format may also include how the set is encoded. A format comprises finite numbers, which may be either base 2 or base 10, together with two infinities and two kinds of NaN. Each finite number is described by three integers: s = a sign (zero or one), c = a significand, and q = an exponent. The numerical value of a finite number is (−1)^s × c × b^q, where b is the base, also called the radix. For example, if the base is 10, the sign is 1 (indicating negative), the significand is 12345, and the exponent is −3, then the value is −12.345. The two kinds of NaN are a quiet NaN and a signaling NaN. A NaN may carry a payload that is intended for diagnostic information indicating the source of the NaN. The sign of a NaN has no meaning, but it may be predictable in some circumstances. In a decimal format with precision p = 7 and emax = 96, the smallest non-zero positive number that can be represented is 1×10^−101 and the largest is 9999999×10^90. The numbers ±b^(1−emax) are the smallest (in magnitude) normal numbers; non-zero numbers smaller in magnitude than these are called subnormal numbers. Zero values are finite values with significand 0; these are signed zeros, and the sign bit specifies whether a zero is +0 or −0.
Some numbers may have several representations in the model that has just been described. For instance, if b = 10 and p = 7, then −12.345 can be represented by −12345×10^−3, −123450×10^−4, and −1234500×10^−5. However, for most operations, such as arithmetic operations, the result does not depend on the representation of the inputs. For the decimal formats, any representation is valid, and the set of these representations is called a cohort. When a result can have several representations, the standard specifies which member of the cohort is chosen. For the binary formats, the representation is made unique by choosing the smallest representable exponent. For numbers with an exponent in the normal range, the leading bit of the significand will always be 1. Consequently, the leading 1 bit can be implied rather than explicitly present in the memory encoding; this rule is called the leading bit convention, implicit bit convention, or hidden bit convention.
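The single-precision binary encoding, including the hidden-bit convention and signed zeros, can be inspected in Python with the standard struct module (bits32 is a hypothetical helper name for this sketch):

```python
import struct

def bits32(x: float) -> str:
    # Raw big-endian bit pattern of a single-precision (binary32) float.
    return format(struct.unpack(">I", struct.pack(">f", x))[0], "032b")

# 1.0: sign 0, biased exponent 01111111, significand bits all 0 --
# the leading 1 bit of the significand is implied (hidden bit).
print(bits32(1.0))   # 00111111100000000000000000000000

# Signed zeros differ only in the sign bit.
print(bits32(0.0))   # 00000000000000000000000000000000
print(bits32(-0.0))  # 10000000000000000000000000000000
```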
17.
JavaScript
–
JavaScript is a high-level, dynamic, untyped, and interpreted programming language. It has been standardized in the ECMAScript language specification. JavaScript is prototype-based with first-class functions, making it a multi-paradigm language supporting object-oriented, imperative, and functional programming styles. JavaScript was influenced by programming languages such as Self and Scheme. JavaScript is also used in environments that are not Web-based, such as PDF documents, site-specific browsers, and desktop widgets. Newer and faster JavaScript virtual machines and the platforms built upon them have also increased the popularity of JavaScript for server-side Web applications. On the client side, developers have traditionally implemented JavaScript as an interpreted language. Programmers also use JavaScript in video-game development, in crafting desktop and mobile applications, and in server-side network programming with run-time environments such as Node.js. In 1994, a company called Mosaic Communications was founded in Mountain View, California; however, it intentionally shared no code with NCSA Mosaic. The internal codename for the company's browser was Mozilla, which stood for "Mosaic killer". The first version of the Web browser, Mosaic Netscape 0.9, was released in late 1994. Within four months it had already taken three-quarters of the browser market and became the main browser for the Internet in the 1990s. To avoid trademark problems with the NCSA, the browser was subsequently renamed Netscape Navigator in the same year. Netscape Communications realized that the Web needed to become more dynamic. In 1995, the company recruited Brendan Eich with the goal of embedding the Scheme programming language into its Netscape Navigator. To defend the idea of JavaScript against competing proposals, the company needed a prototype.
Eich wrote one in 10 days, in May 1995. There is a common misconception that JavaScript was influenced by an earlier Web page scripting language developed by Nombas named C--. Brendan Eich, however, had never heard of C-- before he created LiveScript. Nombas did pitch their embedded Web page scripting to Netscape, though Web page scripting was not a new concept, as shown by the ViolaWWW Web browser. Nombas later switched to offering JavaScript instead of C-- in their ScriptEase product and was part of the TC39 group that standardized ECMAScript. In December 1995, soon after releasing JavaScript for browsers, Netscape introduced an implementation of the language for server-side scripting with Netscape Enterprise Server. Since the mid-2000s, additional server-side JavaScript implementations have been introduced. Microsoft script technologies including VBScript and JScript were released in 1996. JScript, an implementation of Netscape's JavaScript, was part of Internet Explorer 3. JScript was also available for server-side scripting in Internet Information Server. JavaScript began to acquire a reputation for being one of the roadblocks to a cross-platform and standards-driven Web. Some developers took on the task of trying to make their sites work in both major browsers, but many could not afford the time.
18.
PHP
–
PHP is a server-side scripting language designed primarily for web development but also used as a general-purpose programming language. Originally created by Rasmus Lerdorf in 1994, the PHP reference implementation is now produced by The PHP Development Team. PHP originally stood for Personal Home Page, but it now stands for the recursive acronym PHP: Hypertext Preprocessor. PHP code may be embedded into HTML or HTML5 code, or it can be used in combination with various web template systems, web content management systems, and web frameworks. PHP code is usually processed by a PHP interpreter implemented as a module in the web server or as a Common Gateway Interface (CGI) executable. The web server combines the results of the interpreted and executed PHP code with the generated web page. PHP code may also be executed with a command-line interface and can be used to implement standalone graphical applications. The standard PHP interpreter, powered by the Zend Engine, is free software released under the PHP License. PHP has been widely ported and can be deployed on most web servers on almost every operating system and platform, free of charge. The PHP language evolved without a formal specification or standard until 2014; since then, work has gone on to create a formal PHP specification. PHP development began in 1994 when Rasmus Lerdorf wrote several Common Gateway Interface programs in C, which he used to maintain his personal homepage. He extended them to work with web forms and to communicate with databases; the result, PHP/FI, could help to build simple, dynamic web applications. This release already had the basic functionality that PHP has as of 2013, including Perl-like variables, form handling, and the ability to embed HTML. The syntax resembled that of Perl but was simpler, more limited, and less consistent.
A development team began to form and, after months of work and beta testing, officially released PHP/FI 2 in November 1997. The fact that PHP lacked an original overall design but instead developed organically has led to inconsistent naming of functions and inconsistent ordering of their parameters. Zeev Suraski and Andi Gutmans rewrote the parser in 1997; this formed the base of PHP 3, and the name was changed to the recursive acronym PHP: Hypertext Preprocessor. Afterwards, public testing of PHP 3 began, and the official launch came in June 1998. Suraski and Gutmans then started a new rewrite of PHP's core, and they also founded Zend Technologies in Ramat Gan, Israel. On May 22, 2000, PHP 4, powered by the Zend Engine 1.0, was released; as of August 2008 this branch had reached version 4.4.9. PHP 4 is no longer under development, nor will any security updates be released. On July 13, 2004, PHP 5 was released, powered by the new Zend Engine II. PHP 5 included new features such as improved support for object-oriented programming and the PHP Data Objects extension. In 2008 PHP 5 became the only stable version under development.
19.
Lexicographical order
–
In mathematics, the lexicographic or lexicographical order is a generalization of the way the alphabetical order of words is based on the alphabetical order of their component letters. This generalization consists primarily in defining a total order over the sequences of elements of a totally ordered set. There are several variants and generalizations of the lexicographical ordering. One generalization defines an order on a Cartesian product of partially ordered sets; this order is a total order if and only if the factors of the Cartesian product are totally ordered. The word lexicographic is derived from lexicon, the set of words that are used in a language and appear in dictionaries. The lexicographic order has thus been introduced for sorting the entries of dictionaries, and this has been formalized in the following way. Consider a finite set A, often called the alphabet, which is totally ordered; in dictionaries, this is the common alphabet, ordered by the alphabetical order. In book indexes, the alphabet is generally extended to all alphanumeric characters. The lexicographic order is a total order on the sequences of elements of A, often called words on A, which is defined as follows. Given two different sequences of the same length, a1a2...ak and b1b2...bk, the first one is smaller than the second one for the lexicographical order if ai < bi for the first index i where ai and bi differ. To compare sequences of different lengths, the shorter sequence is usually padded at the end with enough blanks. This way of comparing sequences of different lengths is always used in dictionaries. However, in combinatorics, another convention is frequently used, whereby a shorter sequence is always smaller than a longer sequence. This variant of the lexicographical order is sometimes called the shortlex order.
In dictionary order, the word Thomas appears before Thompson because the letter a comes before the letter p in the alphabet; the 5th letter is the first that is different in the two words, the first four letters being Thom in both. Because it is the first difference, the 5th letter is the most significant difference for the alphabetical ordering. An important property of the lexicographical order on words of a fixed length on a finite alphabet is that it is a well-order. The lexicographical order is used not only in dictionaries, but also commonly for numbers. One of the drawbacks of the Roman numeral system is that it is not always immediately obvious which of two numbers is the smaller. When negative numbers are considered, one has to reverse the order for comparing negative numbers. This is not usually a problem for humans, but it may be for computers, and this is one of the reasons for adopting two's complement representation for representing signed integers in computers. Another example of a use of lexicographical ordering appears in the ISO 8601 standard for dates.
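Python compares strings in exactly this letter-by-letter fashion, and the shortlex variant described above can be obtained with a sort key (a small sketch):

```python
# Python compares strings lexicographically, symbol by symbol.
assert "Thomas" < "Thompson"  # 'a' < 'p' at the 5th letter
assert "bear" < "bears"       # a prefix precedes its extensions

# Shortlex: shorter strings come first; ties of equal length are
# broken lexicographically.
words = ["ba", "a", "abc", "b", "ab"]
print(sorted(words, key=lambda w: (len(w), w)))
# ['a', 'b', 'ab', 'ba', 'abc']
```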
20.
Sorting algorithm
–
A sorting algorithm is an algorithm that puts elements of a list in a certain order. The most-used orders are numerical order and lexicographical order. More formally, the output must satisfy two conditions: the output is in nondecreasing order, and the output is a permutation of the input. Since the dawn of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. For example, bubble sort was analyzed as early as 1956. Comparison sorting algorithms have a fundamental requirement of Ω(n log n) comparisons; algorithms not based on comparisons, such as counting sort, can have better performance. Sorting algorithms are classified by several criteria. Computational complexity in terms of the size n of the list: for typical serial sorting algorithms, good behavior is O(n log n), with parallel sort in O(log² n), and bad behavior is O(n²); ideal behavior for a serial sort is O(n), but this is not possible in the average case, and optimal parallel sorting is O(log n). Comparison-based sorting algorithms need at least O(n log n) comparisons for most inputs. Memory usage: in particular, some sorting algorithms are in-place; strictly, an in-place sort needs only O(1) memory beyond the items being sorted. Recursion: some algorithms are either recursive or non-recursive, while others may be both. Stability: stable sorting algorithms maintain the relative order of records with equal keys. Whether or not they are a comparison sort: a comparison sort examines the data only by comparing two elements with a comparison operator. General method: insertion, exchange, selection, merging, etc.; exchange sorts include bubble sort and quicksort, and selection sorts include shaker sort and heapsort. Also whether the algorithm is serial or parallel; the remainder of this discussion almost exclusively concentrates upon serial algorithms. Adaptability: whether or not the presortedness of the input affects the running time; algorithms that take this into account are known to be adaptive.
When sorting some kinds of data, only part of the data is examined when determining the sort order. For example, in the card-sorting example to the right, the cards are being sorted by their rank, and their suit is being ignored. This allows the possibility of multiple different correctly sorted versions of the original list. More formally, the data being sorted can be represented as a record or tuple of values, and the part of the data that is used for sorting is called the key. In the card example, cards are represented as a record (rank, suit), and the key is the rank. A sorting algorithm is stable if, whenever there are two records R and S with the same key and R appears before S in the original list, R will appear before S in the sorted list.
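The stability property can be demonstrated with Python's built-in sort, which is guaranteed stable; the records below mirror the card example, with each card a (rank, suit) tuple:

```python
# Records are (rank, suit); we sort by rank only, ignoring suit.
cards = [(5, "hearts"), (3, "spades"), (5, "clubs"), (3, "diamonds")]

# A stable sort keeps records with equal keys in their original
# relative order, so (5, "hearts") stays before (5, "clubs").
cards.sort(key=lambda card: card[0])
print(cards)
# [(3, 'spades'), (3, 'diamonds'), (5, 'hearts'), (5, 'clubs')]
```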
21.
Boolean algebra
–
In mathematics and mathematical logic, Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false, usually denoted 1 and 0 respectively. It is thus a formalism for describing logical relations in the same way that ordinary algebra describes numeric relations. Boolean algebra was introduced by George Boole in his first book, The Mathematical Analysis of Logic. According to Huntington, the term Boolean algebra was first suggested by Sheffer in 1913. Boolean algebra has been fundamental in the development of digital electronics, and it is also used in set theory and statistics. Boole's algebra predated the modern developments in abstract algebra and mathematical logic. In an abstract setting, Boolean algebra was perfected in the late 19th century by Jevons, Schröder, and Huntington; in fact, M. H. Stone proved in 1936 that every Boolean algebra is isomorphic to a field of sets. Shannon already had at his disposal the abstract mathematical apparatus, and thus he cast his switching algebra as the two-element Boolean algebra. In circuit engineering settings today, there is little need to consider other Boolean algebras, so switching algebra and Boolean algebra are often used interchangeably. Efficient implementation of Boolean functions is a fundamental problem in the design of combinational logic circuits. Logic sentences that can be expressed in classical propositional calculus have an equivalent expression in Boolean algebra; thus, Boolean logic is sometimes used to denote propositional calculus performed in this way. Boolean algebra is not sufficient to capture logic formulas using quantifiers. The closely related model of computation known as a Boolean circuit relates time complexity to circuit complexity. Whereas in elementary algebra expressions denote mainly numbers, in Boolean algebra they denote the truth values false and true; these values are represented with the bits 0 and 1.
Addition and multiplication then play the Boolean roles of XOR and AND, respectively. Boolean algebra also deals with functions which have their values in the set {0, 1}; a sequence of bits is a commonly used such function. Another common example is the subsets of a set E: to a subset F of E is associated the indicator function that takes the value 1 on F and 0 outside F. The most general example is the elements of a Boolean algebra. As with elementary algebra, the purely equational part of the theory may be developed without considering explicit values for the variables. The basic operations of Boolean calculus are as follows. AND, denoted x∧y, satisfies x∧y = 1 if x = y = 1 and x∧y = 0 otherwise. OR, denoted x∨y, satisfies x∨y = 0 if x = y = 0 and x∨y = 1 otherwise. NOT, denoted ¬x, satisfies ¬x = 0 if x = 1 and ¬x = 1 if x = 0. Alternatively, the values of x∧y, x∨y, and ¬x can be expressed by tabulating their values with truth tables. The operation x → y, or Cxy, is called material implication: if x is true, then the value of x → y is taken to be that of y; if x is false, then x → y is true regardless of y.
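The basic operations can be tabulated in a few lines of Python, which also checks the correspondence of multiplication with AND and of addition modulo 2 with XOR:

```python
# Truth tables for the basic Boolean operations over {0, 1}.
AND = lambda x, y: x & y   # x∧y = 1 iff x = y = 1
OR  = lambda x, y: x | y   # x∨y = 0 iff x = y = 0
NOT = lambda x: 1 - x      # ¬x
XOR = lambda x, y: x ^ y   # addition modulo 2

for x in (0, 1):
    for y in (0, 1):
        print(x, y, AND(x, y), OR(x, y), XOR(x, y))

# Multiplication coincides with AND; addition mod 2 with XOR.
assert all(x * y == AND(x, y) for x in (0, 1) for y in (0, 1))
assert all((x + y) % 2 == XOR(x, y) for x in (0, 1) for y in (0, 1))
```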
22.
Logical connective
–
The most common logical connectives are binary connectives, which join two sentences that can be thought of as the function's operands. Also commonly, negation is considered to be a unary connective. Logical connectives, along with quantifiers, are the two main types of logical constants used in formal systems such as propositional logic and predicate logic. The semantics of a logical connective is often, but not always, truth-functional; a logical connective is similar to, but not equivalent to, a conditional operator. In the grammar of natural languages, two sentences may be joined by a grammatical conjunction to form a compound sentence. Some, but not all, such grammatical conjunctions are truth functions. For example, consider the following sentences: (A) Jack went up the hill. (B) Jill went up the hill. (C) Jack went up the hill and Jill went up the hill. (D) Jack went up the hill so Jill went up the hill. The words and and so are grammatical conjunctions joining the sentences (A) and (B) to form the compound sentences (C) and (D). The and in (C) is a logical connective, since the truth of (C) is completely determined by (A) and (B): it would make no sense to affirm (A) and (B) but deny (C). The so in (D), by contrast, is not truth-functional. Various English words and word pairs express logical connectives, and some of them are synonymous. In formal languages, truth functions are represented by unambiguous symbols; these symbols are called logical connectives, logical operators, propositional operators, or, in classical logic, truth-functional connectives. See well-formed formula for the rules which allow new well-formed formulas to be constructed by joining other well-formed formulas using truth-functional connectives. Logical connectives can be used to join more than two statements, so one can speak of n-ary logical connectives. For true, the symbol 1 comes from Boole's interpretation of logic as an elementary algebra over the two-element Boolean algebra.
For false, the symbol 0 comes also from Boole's interpretation of logic as a ring. Some authors used letters for connectives at some time in history: u. for conjunction and o. for disjunction. Such a logical connective as converse implication ← is actually the same as the material conditional with swapped arguments; thus, in some logical calculi, certain essentially different compound statements are logically equivalent. A less trivial example of a redundancy is the classical equivalence between ¬P ∨ Q and P → Q. There are sixteen Boolean functions associating the input truth values P and Q with four-digit binary outputs; these correspond to the possible choices of binary logical connectives for classical logic. Different implementations of classical logic can choose different functionally complete subsets of connectives. One approach is to choose a minimal set and define other connectives by some logical form, as in the example with the material conditional above.
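The equivalence between ¬P ∨ Q and P → Q can be verified by exhaustively enumerating the four truth-value assignments (implies is a hypothetical helper name used only for this sketch):

```python
# Check the equivalence of ¬P ∨ Q and P → Q over all assignments
# of truth values, i.e. a two-variable truth table.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

for p in (False, True):
    for q in (False, True):
        # P → Q is false only when P is true and Q is false.
        assert implies(p, q) == (not (p and not q))
print("¬P ∨ Q and P → Q agree on all four assignments")
```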
23.
Fortran
–
Fortran is a general-purpose, imperative programming language that is especially suited to numeric computation and scientific computing. It is a popular language for high-performance computing and is used for programs that benchmark and rank the world's fastest supercomputers. Fortran encompasses a lineage of versions, each of which evolved to add extensions to the language while usually retaining compatibility with prior versions. The names of earlier versions of the language through FORTRAN 77 were conventionally spelled in all-capitals; the capitalization has been dropped in referring to newer versions beginning with Fortran 90, and the official language standards now refer to the language as Fortran rather than all-caps FORTRAN. In late 1953, John W. Backus submitted a proposal to his superiors at IBM to develop a more practical alternative to assembly language for programming their IBM 704 mainframe computer. Backus's historic FORTRAN team consisted of programmers Richard Goldberg, Sheldon F. Best, Harlan Herrick, Peter Sheridan, Roy Nutt, Robert Nelson, Irving Ziller, Lois Haibt, and David Sayre. Its concepts included easier entry of equations into a computer, an idea developed by J. Halcombe Laning and demonstrated in the Laning and Zierler system. A draft specification for The IBM Mathematical Formula Translating System was completed by mid-1954. The first manual for FORTRAN appeared in October 1956, with the first FORTRAN compiler delivered in April 1957. John Backus recalled the project's motivations during a 1979 interview with Think, the IBM employee magazine. The language was widely adopted by scientists for writing numerically intensive programs, which encouraged compiler writers to produce compilers that could generate faster and more efficient code. The inclusion of a complex number data type in the language made Fortran especially suited to technical applications such as electrical engineering.
By 1960, versions of FORTRAN were available for the IBM 709, 650, 1620, and 7090 computers. Significantly, the increasing popularity of FORTRAN spurred competing computer manufacturers to provide FORTRAN compilers for their machines, so that by 1963 over 40 FORTRAN compilers existed. For these reasons, FORTRAN is considered to be the first widely used programming language supported across a variety of computer architectures. The arithmetic IF statement was similar to a three-way branch instruction on the IBM 704. However, the 704 branch instructions all contained only one destination address; an optimizing compiler like FORTRAN would most likely select the more compact and usually faster Transfer instructions instead of the Compare. Also, the Compare considered −0 and +0 to be different values, while the Transfer Zero did not. The FREQUENCY statement in FORTRAN was used originally to give branch probabilities for the three branch cases of the arithmetic IF statement. The Monte Carlo technique is documented in Backus et al. Many years later, the FREQUENCY statement had no effect on the code and was treated as a comment statement, since the compilers no longer did this kind of compile-time simulation. A similar fate has befallen compiler hints in other programming languages. The first FORTRAN compiler reported diagnostic information by halting the program when an error was found and outputting an error code; that code could be looked up by the programmer in an error messages table in the operator's manual, providing them with a brief description of the problem. Before the development of disk files, text editors, and terminals, programs were most often entered on a keypunch keyboard onto 80-column punched cards.
24.
ALGOL 68
–
ALGOL 68 was designed by the IFIP Working Group 2.1, and on December 20, 1968, the language was adopted by the Working Group. ALGOL 68 was defined using a grammar formalism invented by Adriaan van Wijngaarden. ALGOL 68 has been criticized, most prominently by some members of its design committee such as C. A. R. Hoare. In 1970, ALGOL 68-R became the first working compiler for ALGOL 68. In the 1973 revision, certain features (such as proceduring, gommas, and formal bounds) were omitted; cf. the language of the unrevised report. Steve Bourne, who was on the ALGOL 68 revision committee, took some of its ideas to his Bourne shell and to C. The complete history of the project can be found in C. H. Lindsey's A History of ALGOL 68. For a full-length treatment of the language, see Programming Algol 68 Made Easy by Dr. Sian Mountbatten, or Learning Algol 68 Genie by Dr. Marcel van der Veer, which includes the Revised Report. A shorter history of ALGOL 68, the third-generation ALGOL: Mar. 1968, Draft Report on the Algorithmic Language ALGOL 68, edited by A. van Wijngaarden, B. J. Mailloux, J. E. L. Peck, and C. H. A. Koster. Oct. 1968, Penultimate Draft Report on the Algorithmic Language ALGOL 68 (Chapters 1-9 and Chapters 10-12), same editors. Dec. 1968, Report on the Algorithmic Language ALGOL 68, offprint from Numerische Mathematik, 14, 79-218, same editors, with WG 2.1 members active in the design of ALGOL 68. 1973, Revised Report on the Algorithmic Language Algol 68, Springer-Verlag 1976, same editors. On December 20, 1968, the Final Report was adopted by the Working Group; translations of the standard were made into Russian, German, French, and Bulgarian, and later Japanese and Chinese, and the standard was also made available in Braille. 1984, TC97 considered ALGOL 68 for standardisation as New Work Item TC97/N1642. 1988, subsequently ALGOL 68 became one of the GOST standards in Russia.
The basic language construct is the unit. A unit may be a formula, an enclosed clause, a routine text, or one of several technically needed constructs. The technical term enclosed clause unifies some of the inherently bracketing constructs known as block, do statement, and the like. When keywords are used, generally the reversed character sequence of the introducing keyword is used for terminating the enclosure, e.g. if ... fi, case ... esac, do ... od
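The reversed-keyword convention above means each enclosure carries its own distinctive closing bracket. As a rough illustration of how such pairings nest (a Python sketch, not part of any ALGOL 68 implementation; the keyword set shown is a small assumed subset):

```python
# ALGOL 68 terminates a bracketing construct with the reversed spelling
# of its opening keyword: if/fi, case/esac, do/od. This checker verifies
# that a token stream nests those enclosures correctly.
OPENERS = {"if": "fi", "case": "esac", "do": "od"}

def enclosures_balanced(tokens):
    """Return True if every opener is closed by its reversed keyword."""
    expected = []                      # stack of closers we still owe
    for tok in tokens:
        if tok in OPENERS:
            expected.append(OPENERS[tok])
        elif tok in OPENERS.values():
            if not expected or expected.pop() != tok:
                return False           # wrong closer, or closer with no opener
    return not expected                # every enclosure was closed

print(enclosures_balanced("if do od fi".split()))   # True: do/od nests in if/fi
print(enclosures_balanced("if od".split()))         # False: od cannot close if
```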
25.
APL (programming language)
–
APL is a programming language developed in the 1960s by Kenneth E. Iverson. Its central datatype is the multidimensional array, and it uses a large range of special graphic symbols to represent most functions and operators, leading to very concise code. It has been an important influence on the development of concept modeling, spreadsheets, and functional programming, and it has also inspired several other programming languages. It is still used today for certain applications. The preface of Iverson's book states its premise: Applied mathematics is largely concerned with the design and analysis of explicit procedures for calculating the exact or approximate values of various functions. Such explicit procedures are called algorithms or programs; because an effective notation for the description of programs exhibits considerable syntactic structure, it is called a programming language. In 1960, Iverson began work for IBM, working with Adin Falkoff. Students tested their code in Hellerman's lab, and this implementation of a portion of the notation was called PAT. After this was published, the team turned their attention to an implementation of the notation on a computer system. One of the motivations for this focus on implementation was the interest of John L. Lawrence, who had new duties with Science Research Associates; Lawrence asked Iverson and his group to help utilize the language as a tool for the development and use of computers in education. After Lawrence M. Breed and Philip S. Abrams joined the effort, this work was finished in late 1965 and later known as IVSYS. The basis of this implementation was described in detail by Abrams in a Stanford University Technical Report; this work was formally supervised by Niklaus Wirth.
Like Hellerman's PAT system earlier, this implementation did not include the APL character set but used special English reserved words for functions. It was used on paper printing terminal workstations built on the Selectric typewriter and typeball mechanism, such as the IBM 1050 and IBM 2741 terminals. Keycaps could be placed over the keys to show which APL characters would be entered and typed when that key was struck. For the first time, a programmer could actually type in and see real APL characters as used in Iverson's notation. Falkoff and Iverson had the special APL Selectric typeballs, 987 and 988, designed in late 1964, although no APL computer system was available to use them. Iverson cited Falkoff as the inspiration for the idea of using an IBM Selectric typeball for the APL character set. Some APL symbols, even with the APL characters on the typeball, still had to be typed in by over-striking two existing typeball characters. An example would be the grade up character, which had to be made from a delta and another character; this was necessary because the APL character set was larger than the 88 characters allowed on the Selectric typeball. IBM was chiefly responsible for the introduction of APL to the marketplace. APL was first available in 1967 for the IBM 1130 as APL\1130; it would run in as little as 8k 16-bit words of memory. Somewhat later, as suitably performing hardware was finally becoming available in the mid- to late-1980s, many users migrated their applications to the personal computer environment
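The conciseness noted above comes from APL's whole-array primitives: for example, the three-character expression +/⍳10 sums the first ten integers. A rough Python sketch of the two primitives involved (for illustration only; real APL semantics are far richer):

```python
# Two APL primitives sketched in Python:
#   monadic iota  ⍳n   -> the vector 1 2 ... n
#   plus-reduce   +/v  -> the sum of the elements of v

def iota(n):
    """APL ⍳n (with index origin 1): the vector 1, 2, ..., n."""
    return list(range(1, n + 1))

def plus_reduce(v):
    """APL +/v: fold the vector v with addition."""
    total = 0
    for x in v:
        total += x
    return total

print(plus_reduce(iota(10)))   # 55, matching the APL expression +/⍳10
```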
26.
BASIC
–
BASIC is a family of general-purpose, high-level programming languages whose design philosophy emphasizes ease of use. In 1964, John G. Kemeny and Thomas E. Kurtz designed the original BASIC language at Dartmouth College in the U.S. state of New Hampshire; they wanted to enable students in fields other than science and mathematics to use computers. At the time, nearly all use of computers required writing custom software. Versions of BASIC became widespread on microcomputers in the mid-1970s and 1980s. Microcomputers usually shipped with BASIC, often in the machine's firmware; having an easy-to-learn language on these early personal computers allowed small business owners, professionals, hobbyists, and consultants to develop custom software on computers they could afford. In the 2010s, BASIC remains popular in many computing dialects and in new languages influenced by BASIC. Before the mid-1960s, the only computers were huge mainframe computers. Users submitted jobs on punched cards or similar media to specialist computer operators; the computer stored these, then used a batch processing system to run this queue of jobs one after another, allowing very high levels of utilization of these expensive machines. As the performance of computing hardware rose through the 1960s, multi-processing was developed. This allowed a mix of batch jobs to be run together, but the real revolution was the development of time-sharing. The original BASIC language was released on May 1, 1964 by John G. Kemeny and Thomas E. Kurtz; the acronym BASIC comes from the name of an unpublished paper by Thomas Kurtz. BASIC was designed to allow students to write computer programs for the Dartmouth Time-Sharing System. It was intended specifically for less technical users who did not have or want the mathematical background previously expected.
Being able to use a computer to support teaching and research was quite novel at the time. The language was based on FORTRAN II, with some influences from ALGOL 60 and with additions to make it suitable for timesharing. Wanting use of the language to become widespread, its designers made the compiler available free of charge. They also made it available to schools in the Hanover area. In the following years, as dialects of BASIC appeared, Kemeny and Kurtz's original came to be known as Dartmouth BASIC. A version was a part of the Pick operating system from 1973 onward. During this period a number of computer games were written in BASIC. A number of these were collected by DEC employee David H. Ahl, who later collected them into book form as 101 BASIC Computer Games, published in 1973. During the same period, Ahl was involved in the creation of a computer for education use