1.
Wilhelm Ackermann
–
Wilhelm Friedrich Ackermann was a German mathematician best known for the Ackermann function, an important example in the theory of computation. Ackermann was born in the municipality of Herscheid, Germany, and was awarded a Ph.D. by the University of Göttingen. From 1929 until 1948 he taught at the Arnoldinum Gymnasium in Burgsteinfurt, and then at Lüdenscheid until 1961. He was also a member of the Akademie der Wissenschaften in Göttingen. In 1928, Ackermann helped David Hilbert turn his 1917–22 lectures on mathematical logic into a text, which contained the first exposition ever of first-order logic and posed the problem of its completeness. Ackermann went on to construct consistency proofs for set theory, full arithmetic, and type-free logic, and gave a new axiomatization of set theory. See also: Ackermann coding, the Ackermann ordinal, Ackermann set theory, the Ackermann function, and the inverse Ackermann function.
2.
Partial function
–
In mathematics, a partial function from X to Y is a function f: X′ → Y for some subset X′ of X. It generalizes the concept of a function f: X → Y by not forcing f to map every element of X to an element of Y. If X′ = X, then f is called a total function and is equivalent to an ordinary function. Partial functions are used when the exact domain of definition, X′, is not known. Specifically, for any x ∈ X, either f(x) is defined or f(x) is undefined. For example, consider the square root function restricted to the integers: g: Z → Z, g(n) = √n. Then g is defined only for those n that are perfect squares. So g(25) = 5, but g(26) is undefined. There are two distinct meanings in current mathematical usage for the notion of the domain of a partial function. Most mathematicians, including recursion theorists, use the term domain of f for the set of all values x such that f(x) is defined. But some, particularly category theorists, consider the domain of a partial function f: X → Y to be X. Similarly, the range can refer to either the codomain or the image of a function. Occasionally, a partial function with domain X and codomain Y is written as f: X ⇸ Y. A partial function is said to be injective or surjective when the function given by the restriction of the partial function to its domain of definition is injective or surjective, respectively. A partial function may be both injective and surjective; because a function is trivially surjective when restricted to its image, the term partial bijection denotes a partial function which is injective. An injective partial function may be inverted to an injective partial function. Furthermore, a total function which is injective may be inverted to an injective partial function. The notion of transformation can be generalized to partial functions as well: a partial transformation is a function f: A → B, where both A and B are subsets of some set X. Total function is a synonym for function.
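The integer square root example g from the text can be sketched in Python, using None as the conventional marker for "undefined" (this encoding is one common idiom, not the only one):

```python
from math import isqrt
from typing import Optional

def g(n: int) -> Optional[int]:
    """The partial function g: Z -> Z from the text, g(n) = sqrt(n).
    Defined only where n is a perfect square; None marks 'undefined'."""
    if n < 0:
        return None
    r = isqrt(n)  # floor of the real square root
    return r if r * r == n else None
```

Here the domain of definition is the set of perfect squares: g(25) returns 5, while g(26) returns None.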
3.
David Hilbert
–
David Hilbert was a German mathematician. He is recognized as one of the most influential and universal mathematicians of the 19th and early 20th centuries. Hilbert discovered and developed a broad range of fundamental ideas in many areas, including invariant theory and the axiomatization of geometry. He also formulated the theory of Hilbert spaces, one of the foundations of functional analysis. Hilbert adopted and warmly defended Georg Cantor's set theory and transfinite numbers. A famous example of his leadership in mathematics is his 1900 presentation of a collection of problems that set the course for much of the mathematical research of the 20th century. Hilbert and his students contributed significantly to establishing rigor and developed important tools used in mathematical physics. Hilbert is known as one of the founders of proof theory and mathematical logic. In late 1872, Hilbert entered the Friedrichskolleg Gymnasium, but, after an unhappy period, he transferred to the more science-oriented Wilhelm Gymnasium. Upon graduation, in autumn 1880, Hilbert enrolled at the University of Königsberg. In early 1882, Hermann Minkowski returned to Königsberg and entered the university. Hilbert knew his luck when he saw it: in spite of his father's disapproval, he soon became friends with the shy, gifted Minkowski. In 1884, Adolf Hurwitz arrived from Göttingen as an Extraordinarius. Hilbert obtained his doctorate in 1885, with a dissertation, written under Ferdinand von Lindemann, titled Über invariante Eigenschaften spezieller binärer Formen, insbesondere der Kugelfunktionen. Hilbert remained at the University of Königsberg as a Privatdozent from 1886 to 1895. In 1895, as a result of intervention on his behalf by Felix Klein, he obtained the position of Professor of Mathematics at the University of Göttingen. During the Klein and Hilbert years, Göttingen became the preeminent institution in the mathematical world, and he remained there for the rest of his life.
Among Hilbert's students were Hermann Weyl, chess champion Emanuel Lasker, and Ernst Zermelo; John von Neumann was his assistant. At the University of Göttingen, Hilbert was surrounded by a circle of some of the most important mathematicians of the 20th century, such as Emmy Noether. Between 1902 and 1939 Hilbert was editor of the Mathematische Annalen. On hearing that one of his students had dropped out to study poetry, he is said to have remarked, "Good, he did not have enough imagination to become a mathematician." Hilbert lived to see the Nazis purge many of the prominent faculty members at the University of Göttingen in 1933; those forced out included Hermann Weyl, Emmy Noether, and Edmund Landau. One who had to leave Germany, Paul Bernays, had collaborated with Hilbert in mathematical logic, and the two co-authored the book Grundlagen der Mathematik; this was a sequel to the Hilbert–Ackermann book Principles of Mathematical Logic from 1928. Hermann Weyl's successor was Helmut Hasse. About a year later, Hilbert attended a banquet and was seated next to the new Minister of Education, Bernhard Rust.
4.
Phi
–
Phi is the 21st letter of the Greek alphabet. In Ancient Greek, it represented an aspirated voiceless bilabial plosive. In Modern Greek, it represents a voiceless labiodental fricative and is correspondingly romanized as f. Its origin is uncertain, but it may be that phi originated as the letter qoppa. In traditional Greek numerals, phi has a value of 500 or 500,000. The Cyrillic letter Ef descends from phi. Phi is also used as a symbol for the golden ratio and on other occasions in math and science. This use is separately encoded as the Unicode glyph ϕ. The modern Greek pronunciation of the letter is sometimes encountered in English when the letter is being used in this sense. The lower-case letter φ is often used to represent the following:
- Magnetic flux in physics
- The golden ratio (1 + √5)/2 ≈ 1.618033988749894848204586834… in mathematics and art
- Euler's totient function φ(n) in number theory, also called Euler's phi function
- The cyclotomic polynomial functions Φn(x) of algebra
- In algebra, group or ring homomorphisms
- In probability theory, φ(x) = (2π)^(−1/2) e^(−x²/2), the probability density function of the standard normal distribution
- In probability theory, φ_X(t) = E[e^(itX)], the characteristic function of a random variable X
- An angle, typically the second angle mentioned, after θ; especially the argument of a complex number
- The phase of a wave in signal processing
- In spherical coordinates: mathematicians usually refer to phi as the polar angle, while the convention in physics is to use phi as the azimuthal angle
- One of the dihedral angles in the backbones of proteins in a Ramachandran plot
- Internal or effective angle of friction
- The work function of a surface, in solid-state physics
- A shorthand representation for an aromatic functional group in organic chemistry
- The ratio of free energy destabilizations of protein mutants in phi value analysis
- In cartography, geodesy, and navigation, latitude
- In aircraft flight mechanics, the symbol for bank angle
- In combustion engineering, the fuel–air equivalence ratio: the ratio of the actual fuel–air ratio to the stoichiometric fuel–air ratio
- The Veblen function in set theory
- Porosity in geology and hydrology
- Strength reduction factor in structural engineering, used to account for statistical variabilities in materials and construction methods
- The symbol for a voiceless bilabial fricative in the International Phonetic Alphabet
- In economics
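One of the values listed above, the golden ratio, can be computed directly from its closed form; the check that φ is a root of x² = x + 1 is its defining property:

```python
from math import sqrt

# The golden ratio phi = (1 + sqrt(5)) / 2, one of the quantities
# the letter phi commonly denotes.
phi = (1 + sqrt(5)) / 2

# Defining property: phi is the positive root of x**2 = x + 1.
residual = phi**2 - (phi + 1)
```

The residual is zero up to floating-point rounding, confirming φ² = φ + 1.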
5.
Addition
–
Addition is one of the four basic operations of arithmetic, with the others being subtraction, multiplication, and division. The addition of two numbers is the total amount of those quantities combined. For example, in the picture on the right, there is a combination of three apples and two apples together, making a total of five apples. This observation is equivalent to the mathematical expression 3 + 2 = 5, i.e. "3 add 2 is equal to 5". Besides counting fruits, addition can also represent combining other physical objects. In arithmetic, rules for addition involving fractions and negative numbers, amongst others, have been devised; in algebra, addition is studied more abstractly. Addition is commutative, meaning that order does not matter, and it is associative. Repeated addition of 1 is the same as counting, and addition of 0 does not change a number. Addition also obeys predictable rules concerning related operations such as subtraction and multiplication. Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months and even some members of other animal species. In primary education, students are taught to add numbers in the decimal system, starting with single digits. Mechanical aids range from the ancient abacus to the modern computer. Addition is written using the plus sign + between the terms, that is, in infix notation. The result is expressed with an equals sign. There are also situations where addition is understood even though no symbol appears; for example, 3½ = 3 + ½ = 3.5. This notation can cause confusion, since in most other contexts juxtaposition denotes multiplication instead. The sum of a series of related numbers can be expressed through capital sigma notation, which compactly denotes iteration. For example, ∑_{k=1}^{5} k² = 1² + 2² + 3² + 4² + 5² = 55. The numbers or the objects to be added are collectively referred to as the terms, the addends, or the summands.
This is to be distinguished from factors, which are multiplied. Some authors call the first addend the augend. In fact, during the Renaissance, many authors did not consider the first addend an addend at all; today, due to the commutative property of addition, augend is rarely used, and both terms are generally called addends. All of the above terminology derives from Latin. Using the gerundive suffix -nd results in addend, "thing to be added". Likewise, from augere, "to increase", one gets augend, "thing to be increased". Sum and summand derive from the Latin noun summa, "the highest, the top", and the associated verb summare.
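The capital-sigma example above translates directly into code; a generator expression plays the role of the iteration that the sigma notation compactly denotes:

```python
# The capital-sigma example from the text:
# sum of k**2 for k = 1..5, i.e. 1 + 4 + 9 + 16 + 25.
total = sum(k ** 2 for k in range(1, 6))
```

Note that range(1, 6) runs k from 1 through 5, matching the bounds under and over the sigma.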
6.
Multiplication
–
Multiplication is one of the four elementary mathematical operations of arithmetic, with the others being addition, subtraction, and division. Multiplication can be visualized as counting objects arranged in a rectangle or as finding the area of a rectangle whose sides have given lengths. The area of a rectangle does not depend on which side is measured first, which illustrates the commutative property. The product of two measurements is a new type of measurement; for instance, multiplying the lengths of the two sides of a rectangle gives its area. This is the subject of dimensional analysis. The inverse operation of multiplication is division. For example, since 4 multiplied by 3 equals 12, 12 divided by 3 equals 4. Multiplication by 3, followed by division by 3, yields the original number. Multiplication is also defined for other types of numbers, such as complex numbers, and for more abstract constructs, like matrices. For these more abstract constructs, the order in which the operands are multiplied sometimes does matter. A listing of the many different kinds of products that are used in mathematics is given in the product page. In arithmetic, multiplication is often written using the sign × between the terms, that is, in infix notation. There are other mathematical notations for multiplication: it is also denoted by dot signs, usually a middle-position dot, as in 5 ⋅ 2. The middle dot notation, encoded in Unicode as U+22C5 ⋅ (dot operator), is standard in the United States and the United Kingdom; when the dot operator character is not accessible, the interpunct (·) is used. Other countries use a comma as a decimal mark. In algebra, multiplication involving variables is often written as a juxtaposition, and the notation can also be used for quantities that are surrounded by parentheses. In vector multiplication, there is a distinction between the cross and the dot symbols.
The cross symbol generally denotes taking the cross product of two vectors, yielding a vector as the result, while the dot denotes taking the dot product of two vectors, resulting in a scalar. In computer programming, the asterisk is still the most common notation. This is due to the fact that most computers historically were limited to small character sets that lacked a multiplication sign, while the asterisk appeared on every keyboard. This usage originated in the FORTRAN programming language. The numbers to be multiplied are generally called the factors. The number to be multiplied is called the multiplicand, while the number of times the multiplicand is to be multiplied is called the multiplier. Usually the multiplier is placed first and the multiplicand second; however, sometimes the first factor is the multiplicand. Additionally, there are some sources in which the term multiplicand is regarded as a synonym for factor. In algebra, a number that is the multiplier of a variable or expression is called a coefficient. The result of a multiplication is called a product. A product of integers is a multiple of each factor; for example, 15 is the product of 3 and 5, and is both a multiple of 3 and a multiple of 5.
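The rectangle picture and the inverse relation with division can both be checked in a few lines (a toy illustration, not an implementation of multiplication):

```python
# Commutativity via the rectangle picture: counting a 3-by-5 grid
# row-by-row or column-by-column yields the same area.
rows, cols = 3, 5
area_by_rows = sum(cols for _ in range(rows))   # 5 + 5 + 5
area_by_cols = sum(rows for _ in range(cols))   # 3 + 3 + 3 + 3 + 3

# Division is the inverse operation: 4 * 3 == 12, so 12 / 3 == 4.
product = 4 * 3
recovered = product // 3
```

Both area computations give 15, and dividing the product by one factor recovers the other.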
7.
Exponentiation
–
Exponentiation is a mathematical operation, written as b^n, involving two numbers, the base b and the exponent n. The exponent is usually shown as a superscript to the right of the base. Some common exponents have their own names: the exponent 2 is called the square of b, or b squared; the exponent 3 is called the cube of b, or b cubed. The exponent −1 of b, or 1/b, is called the reciprocal of b. When n is a positive integer and b is not zero, b^(−n) is naturally defined as 1/b^n, preserving the property b^n × b^m = b^(n+m). The definition of exponentiation can be extended to any real or complex exponent. Exponentiation by integer exponents can also be defined for a variety of algebraic structures. The term power was used by the Greek mathematician Euclid for the square of a line. Archimedes discovered and proved the law of exponents, 10^a · 10^b = 10^(a+b), necessary to manipulate powers of 10. In the late 16th century, Jost Bürgi used Roman numerals for exponents. Early in the 17th century, the first form of our modern exponential notation was introduced by René Descartes in his text titled La Géométrie; there, the notation is introduced in Book I. Nicolas Chuquet used a form of exponential notation in the 15th century. The word exponent was coined in 1544 by Michael Stifel. Samuel Jeake introduced the term indices in 1696. In the 16th century, Robert Recorde used the terms square, cube, zenzizenzic, sursolid, zenzicube, and second sursolid for successive powers. Biquadrate has been used to refer to the fourth power as well. Some mathematicians used exponents only for powers greater than two, preferring to represent squares as repeated multiplication. Thus they would write polynomials, for example, as ax + bxx + cx³ + d. Another historical synonym, involution, is now rare and should not be confused with its more common meaning.
In 1748 Leonhard Euler wrote "consider exponentials or powers in which the exponent itself is a variable; it is clear that quantities of this kind are not algebraic functions, since in those the exponents must be constant." With this introduction of transcendental functions, Euler laid the foundation for the introduction of the natural logarithm as the inverse function for y = e^x. The expression b² = b ⋅ b is called the square of b because the area of a square with side-length b is b². The expression b³ = b ⋅ b ⋅ b is called the cube of b because the volume of a cube with side-length b is b³. The exponent indicates how many copies of the base are multiplied together. For example, 3⁵ = 3 ⋅ 3 ⋅ 3 ⋅ 3 ⋅ 3 = 243: the base 3 appears 5 times in the multiplication, because the exponent is 5.
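Integer exponentiation can be sketched via binary (square-and-multiply) exponentiation, with negative exponents handled by the reciprocal rule b^(−n) = 1/b^n from the text (a sketch; Python's built-in ** already does this):

```python
def power(b: float, n: int) -> float:
    """Compute b**n for integer n by square-and-multiply.
    Negative exponents use b**(-n) = 1 / b**n."""
    if n < 0:
        return 1.0 / power(b, -n)
    result = 1.0
    while n:
        if n & 1:          # multiply in the current power of b
            result *= b    # when the low bit of n is set
        b *= b             # square the base
        n >>= 1            # shift to the next bit of the exponent
    return result
```

For example, power(3, 5) reproduces the 3⁵ = 243 computation above while using only three squarings and two multiplications.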
8.
Raphael M. Robinson
–
Raphael Mitchel Robinson was an American mathematician. Born in National City, California, Robinson was the youngest of four children of a lawyer. He was awarded the BA, MA, and Ph.D. in mathematics from the University of California, Berkeley. His Ph.D. thesis, on complex analysis, was titled Some results in the theory of Schlicht functions. In 1941, Robinson married his former student Julia Bowman; she became his Berkeley colleague and the first woman president of the American Mathematical Society. Robinson worked on logic, set theory, geometry, and number theory. In 1937 he set out a simpler and more conventional version of John von Neumann's 1923 axiomatic set theory. In 1950 Robinson proved that an essentially undecidable theory need not have an infinite number of axioms by coming up with a counterexample: Robinson arithmetic Q. Q is finitely axiomatizable because it lacks Peano arithmetic's axiom schema of induction; nevertheless Q, like Peano arithmetic, is incomplete. Robinson worked in number theory, even employing very early computers to obtain results. For example, he coded the Lucas–Lehmer primality test to determine, on a SWAC, whether 2^n − 1 was prime for all prime n < 2304. In 1952, he showed that these Mersenne numbers were all composite except for 17 values of n = 2, 3, 5, 7, 13, 17, 19, 31, 61, 89, 107, 127, 521, 607, 1279, 2203, 2281. He discovered the last five of these Mersenne primes, the largest ones known at the time.
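The Lucas–Lehmer test Robinson ran on the SWAC is short enough to sketch in full (for odd prime p; p = 2 is a trivial special case handled separately):

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer primality test for the Mersenne number M = 2**p - 1,
    where p is an odd prime. Start s = 4 and iterate s -> s*s - 2 (mod M)
    p - 2 times; M is prime exactly when the final s is 0."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0
```

For instance, lucas_lehmer(13) confirms that 2¹³ − 1 = 8191 is prime, while lucas_lehmer(11) shows 2¹¹ − 1 = 2047 = 23 × 89 is composite.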
9.
Recursion
–
Recursion occurs when a thing is defined in terms of itself or of its type. Recursion is used in a variety of disciplines ranging from linguistics to logic. The most common application of recursion is in mathematics and computer science, where a function being defined is applied within its own definition. While this apparently defines an infinite number of instances, it is often done in such a way that no infinite loop or infinite chain of references can occur. The ancestors of one's ancestors are also one's ancestors. The Fibonacci sequence is a classic example of recursion: Fib(0) = 0 as base case 1, Fib(1) = 1 as base case 2, and for all integers n > 1, Fib(n) = Fib(n − 1) + Fib(n − 2). Many mathematical axioms are based upon recursive rules. For example, the formal definition of the natural numbers by the Peano axioms can be described as: 0 is a natural number, and each natural number has a successor, which is also a natural number. By this base case and recursive rule, one can generate the set of all natural numbers. Recursively defined mathematical objects include functions, sets, and especially fractals. There are various more tongue-in-cheek definitions of recursion; see recursive humor. Recursion is the process a procedure goes through when one of the steps of the procedure involves invoking the procedure itself. A procedure that goes through recursion is said to be recursive. To understand recursion, one must recognize the distinction between a procedure and the running of a procedure. A procedure is a set of steps based on a set of rules, while the running of a procedure involves actually following the rules and performing the steps. An analogy: a procedure is like a recipe; running a procedure is like actually preparing the meal. Recursion is related to, but not the same as, a reference within the specification of a procedure to the execution of some other procedure.
For instance, a recipe might refer to cooking vegetables, which is another procedure that in turn requires heating water, and so forth. A recursive procedure, by contrast, invokes a new instance of itself in one of its steps; such recursive definitions are very rare in everyday situations. An example could be the following procedure to find a way through a maze: proceed forward until reaching either an exit or a branching point. If the point reached is an exit, terminate. Otherwise try each branch in turn, using the procedure recursively; if every trial fails by reaching only dead ends, return on the path that led to this branching point. Whether this actually defines a terminating procedure depends on the nature of the maze. In any case, executing the procedure requires carefully recording all currently explored branching points, and which of their branches have already been exhaustively tried. Recursion in language can be understood in terms of a recursive definition of a syntactic category. A sentence can have a structure in which what follows the verb is another sentence ("Dorothy thinks witches are dangerous"), so a sentence can be defined recursively as something with a structure that includes a noun phrase, a verb, and optionally another sentence.
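The Fibonacci definition given earlier in this entry maps directly onto a recursive function, with the two base cases stopping the chain of self-references:

```python
def fib(n: int) -> int:
    """Fibonacci numbers by the recursive definition in the text:
    fib(0) = 0 and fib(1) = 1 are the base cases; otherwise
    fib(n) = fib(n - 1) + fib(n - 2)."""
    if n < 2:        # covers both base cases
        return n
    return fib(n - 1) + fib(n - 2)
```

This direct translation recomputes subproblems and so runs in exponential time; memoization or an iterative loop is the usual remedy, but the point here is the shape of the recursion itself.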
10.
Lexicographical order
–
In mathematics, the lexicographic or lexicographical order is a generalization of the way the alphabetical order of words is based on the alphabetical order of their component letters. This generalization consists primarily in defining a total order over the sequences of elements of a totally ordered set. There are several variants and generalizations of the lexicographical ordering. One generalization defines an order on a Cartesian product of partially ordered sets; this order is a total order if and only if the factors of the Cartesian product are totally ordered. The word lexicographic is derived from lexicon, the set of words that are used in a language and appear in dictionaries. The lexicographic order has thus been introduced for sorting the entries of dictionaries, and this has been formalized in the following way. Consider a finite set A, often called the alphabet, which is totally ordered. In dictionaries, this is the common alphabet, ordered by the alphabetical order. In book indexes, the alphabet is generally extended to all alphanumeric characters. The lexicographic order is a total order on the sequences of elements of A, often called words on A, which is defined as follows. Given two different sequences of the same length, a1a2…ak and b1b2…bk, the first one is smaller than the second one for the lexicographical order if ai < bi for the first index i at which ai and bi differ. To compare sequences of different lengths, the shorter sequence is usually padded at the end with enough blanks. This way of comparing sequences of different lengths is always used in dictionaries. However, in combinatorics, another convention is frequently used, whereby a shorter sequence is always smaller than a longer sequence. This variant of the lexicographical order is sometimes called the shortlex order.
In dictionary order, the word Thomas appears before Thompson because the letter a comes before the letter p in the alphabet: the 5th letter is the first that is different in the two words, the first four letters being Thom in both. Because it is the first difference, the 5th letter is the most significant difference for the alphabetical ordering. An important property of the lexicographical order on words of a fixed length over a finite alphabet is that it is a well-order. The lexicographical order is used not only in dictionaries, but also commonly for numbers. One of the drawbacks of the Roman numeral system is that it is not always immediately obvious which of two numbers is the smaller. When negative numbers are considered, one has to reverse the order for comparing them. This is not usually a problem for humans, but it may be for computers; this is one of the reasons for adopting two's complement representation for signed integers in computers. Another example of a use of lexicographical ordering appears in the ISO 8601 standard for dates.
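The dictionary-style comparison described above, where the first differing position decides and a proper prefix precedes the longer word (as if padded with blanks), can be sketched directly:

```python
def lex_less(a: str, b: str) -> bool:
    """True if a strictly precedes b in dictionary-style
    lexicographic order: the first differing position decides;
    if one word is a prefix of the other, the shorter comes first."""
    for x, y in zip(a, b):
        if x != y:
            return x < y      # first difference is most significant
    return len(a) < len(b)    # prefix precedes the longer word
```

With this, lex_less("Thomas", "Thompson") is True because the 5th letters differ and "a" < "p", matching the worked example in the text.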
11.
Exponential growth
–
Exponential growth is exhibited when the rate of change of a quantity is proportional to its current value; exponential decay occurs in the same way when the growth rate is negative. In the case of a discrete domain of definition with equal intervals, it is also called geometric growth or geometric decay. In either exponential growth or exponential decay, the ratio of the rate of change of the quantity to its current size remains constant over time. The formula for growth of a variable x at the growth rate r, as time t goes on in discrete intervals, is x_t = x_0 (1 + r)^t, where x_0 is the value of x at time 0. This formula is transparent when the exponents are converted to multiplication: with a rate of five percent, each increase in the exponent by a full interval can be seen to increase the previous total by another five percent. Since the time variable, which is the input to this function, occurs as the exponent, this is an exponential function. Biology: the number of microorganisms in a culture will increase exponentially until an essential nutrient is exhausted; typically the first organism splits into two daughter organisms, which then each split to form four, which split to form eight, and so on. Because exponential growth indicates constant growth rate, it is assumed that exponentially growing cells are at a steady-state; however, cells can grow exponentially at a constant rate while remodelling their metabolism. A virus typically will spread exponentially at first, if no artificial immunization is available, since each infected person can infect multiple new people. Human population: if the number of births and deaths per person per year were to remain at current levels, the population would grow exponentially; this means that the doubling time of the American population is approximately 50 years. Physics: avalanche breakdown within a dielectric material, in which a free electron becomes sufficiently accelerated by an externally applied electrical field that it frees up additional electrons as it collides with atoms or molecules of the dielectric media. These secondary electrons also are accelerated, creating larger numbers of free electrons; the resulting exponential growth of electrons and ions may rapidly lead to complete dielectric breakdown of the material.
Nuclear chain reaction: each uranium nucleus that undergoes fission produces multiple neutrons, each of which can be absorbed by adjacent uranium atoms. Due to the exponential rate of increase, at any point in the chain reaction 99% of the energy will have been released in the last 4.6 generations; it is an approximation to think of the first 53 generations as a latency period leading up to the actual explosion. Economics: economic growth is expressed in percentage terms, implying exponential growth. For example, U.S. GDP per capita has grown at a rate of approximately two percent since World War II. Finance: compound interest at a constant interest rate provides exponential growth of the capital. Pyramid schemes or Ponzi schemes also show this type of growth, resulting in high profits for a few initial investors and losses among great numbers of investors.
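The discrete growth formula x_t = x_0 (1 + r)^t from the start of this entry can be tabulated for a few intervals at the five-percent rate used in the text (the starting value 100 is an arbitrary illustration):

```python
# Discrete exponential growth x_t = x0 * (1 + r)**t, at r = 5% per interval.
x0, r = 100.0, 0.05
values = [x0 * (1 + r) ** t for t in range(4)]
# t = 0, 1, 2, 3 gives 100, 105, 110.25, 115.7625:
# each full interval multiplies the previous total by 1.05.
```

The ratio between consecutive entries is constant (1.05), which is exactly the "constant growth rate" property the entry describes.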
12.
Exponential function
–
In mathematics, an exponential function is a function of the form f(x) = b^x, in which the input variable x occurs as an exponent. A function of the form f(x) = b^(x+c), where c is a constant, is also an exponential function, since it can be rewritten as b^(x+c) = (b^c) b^x. As functions of a real variable, exponential functions are uniquely characterized by the fact that the growth rate of such a function is directly proportional to the value of the function. The constant of proportionality of this relationship is the natural logarithm of the base b. The argument of the exponential function can be any real or complex number, or even an entirely different kind of mathematical object. Its ubiquitous occurrence in pure and applied mathematics has led mathematician W. Rudin to opine that the exponential function is "the most important function in mathematics". In applied settings, exponential functions model a relationship in which a constant change in the independent variable gives the same proportional change in the dependent variable. The graph of y = e^x is upward-sloping, and increases faster as x increases. The graph always lies above the x-axis but can get arbitrarily close to it for negative x; thus, the x-axis is a horizontal asymptote. The slope of the tangent to the graph at each point is equal to its y-coordinate at that point, as implied by its derivative function. Its inverse function is the natural logarithm, denoted log, ln, or log_e. The exponential function exp: C → C can be characterized in a variety of equivalent ways; the constant e is then defined as e = exp(1) = ∑_{k=0}^{∞} 1/k!. The exponential function arises whenever a quantity grows or decays at a rate proportional to its current value. One such situation is continuously compounded interest, and in fact it was this observation that led Jacob Bernoulli in 1683 to the number lim_{n→∞} (1 + 1/n)^n, now known as e. Later, in 1697, Johann Bernoulli studied the calculus of the exponential function.
If instead interest is compounded daily, this becomes (1 + x/365)^365. Letting the number of time intervals per year grow without bound leads to the limit definition of the exponential function, exp(x) = lim_{n→∞} (1 + x/n)^n, first given by Euler. This is one of a number of characterizations of the exponential function; from any of these definitions it can be shown that the exponential function obeys the basic exponentiation identity exp(x + y) = exp(x) ⋅ exp(y), which is why it can be written as e^x. The derivative of the exponential function is the exponential function itself. More generally, a function with a rate of change proportional to the function itself is expressible in terms of the exponential function; this property leads to exponential growth and exponential decay. The exponential function extends to a function on the complex plane. Euler's formula relates its values at purely imaginary arguments to trigonometric functions. The exponential function also has analogues for which the argument is a matrix, or even an element of a Banach algebra or a Lie algebra.
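Euler's limit definition can be checked numerically: as the number of compounding intervals n grows, (1 + x/n)^n approaches exp(x):

```python
import math

def exp_limit(x: float, n: int) -> float:
    """Approximate exp(x) by the limit definition (1 + x/n)**n,
    the compound-interest formula with n compounding intervals."""
    return (1 + x / n) ** n
```

With x = 1, n = 365 (daily compounding) already gives about 2.7146, and n = 10**6 agrees with math.e to roughly six decimal places; the error shrinks on the order of 1/n.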
13.
Factorial
–
In mathematics, the factorial of a non-negative integer n, denoted by n!, is the product of all positive integers less than or equal to n. For example, 5! = 5 × 4 × 3 × 2 × 1 = 120. The value of 0! is 1, according to the convention for an empty product. The factorial operation is encountered in many areas of mathematics, notably in combinatorics and algebra. Its most basic occurrence is the fact that there are n! ways to arrange n distinct objects into a sequence. This fact was known at least as early as the 12th century. Fabian Stedman, in 1677, described factorials as applied to change ringing, giving a statement of a factorial after describing a recursive approach. The factorial function is formally defined by the product n! = ∏_{k=1}^{n} k, or by the recurrence n! = 1 if n = 0 and n! = n × (n − 1)! if n > 0. All of the above definitions incorporate the instance 0! = 1, in the first case by the convention that the product of no numbers at all is 1. This is convenient for several reasons. There is exactly one permutation of zero objects. The recurrence n! = n × (n − 1)!, valid for n > 0, extends to n = 0. It allows for the expression of many formulae, such as the exponential function, as a power series. It makes many identities in combinatorics valid for all applicable sizes; for instance, the number of ways to choose 0 elements from the empty set is (0 choose 0) = 0!/(0! 0!) = 1, and more generally, the number of ways to choose all n elements among a set of n is (n choose n) = n!/(n! 0!) = 1. The factorial function can also be defined for non-integer values using more advanced mathematics, detailed in the section below. This more generalized definition is used by advanced calculators and mathematical software such as Maple or Mathematica. Although the factorial function has its roots in combinatorics, formulas involving factorials occur in many areas of mathematics. There are n!
different ways of arranging n distinct objects into a sequence. Factorials often appear in the denominator of a formula to account for the fact that ordering is to be ignored. A classical example is counting k-combinations from a set with n elements. One can obtain such a combination by choosing a k-permutation: successively selecting and removing an element of the set, k times, for a total of n(n − 1)(n − 2)⋯(n − k + 1) possibilities. This however produces the k-combinations in a particular order that one wishes to ignore; since each k-combination is obtained in k! different ways, the number of k-combinations is n(n − 1)⋯(n − k + 1)/k! = n!/(k!(n − k)!). This number is known as the binomial coefficient (n choose k), because it is also the coefficient of X^k in (1 + X)^n.
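The binomial-coefficient formula derived above can be written out directly with factorials and checked against Python's built-in combination count:

```python
from math import comb, factorial

def binomial(n: int, k: int) -> int:
    """Binomial coefficient via the factorial formula in the text:
    C(n, k) = n! / (k! * (n - k)!)."""
    return factorial(n) // (factorial(k) * factorial(n - k))
```

For example, binomial(5, 2) = 120 / (2 × 6) = 10, and the convention 0! = 1 is what makes the edge cases binomial(n, 0) = binomial(n, n) = 1 come out right.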
14.
Turing machine
–
Despite the model's simplicity, given any computer algorithm, a Turing machine can be constructed that is capable of simulating that algorithm's logic. The machine operates on an infinite memory tape divided into discrete cells; the machine positions its head over a cell and reads the symbol there. The Turing machine was invented in 1936 by Alan Turing, who called it an a-machine (automatic machine). Turing machines prove fundamental limitations on the power of mechanical computation. Turing completeness is the ability of a system of instructions to simulate a Turing machine. A Turing machine is a general example of a CPU that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. More specifically, it is a machine capable of enumerating some arbitrary subset of valid strings of an alphabet. Assuming a black box, the Turing machine cannot know whether it will eventually enumerate any one specific string of the subset with a given program; this is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing. The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of robustly evaluating first-order logic in an infinite number of ways; this is famously demonstrated through lambda calculus. A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine. The Church–Turing thesis states that Turing machines indeed capture the notion of effective methods in logic and mathematics. Studying their abstract properties yields many insights into computer science and complexity theory. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol; however, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine.
Any symbol on the tape may therefore eventually have an innings. The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time. In the original article, Turing imagines not a mechanism, but a person whom he calls the computer, who executes these deterministic mechanical rules slavishly. If δ is not defined on the current state and the current tape symbol, then the machine halts. Q0 ∈ Q is the initial state, and F ⊆ Q is the set of final or accepting states. The initial tape contents are said to be accepted by M if it eventually halts in a state from F. Anything that operates according to these specifications is a Turing machine. The 7-tuple for the 3-state busy beaver looks like this: Q = {A, B, C, HALT}, Γ = {0, 1}, b = 0 (the blank symbol), Σ = {1}, q0 = A, F = {HALT}, δ = see state-table below. Initially all tape cells are marked with 0. In the words of van Emde Boas (p. 6), the set-theoretical object provides only partial information on how the machine will behave and what its computations will look like. For instance, there will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
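The 3-state busy beaver's behavior can be sketched with a minimal simulator; the transition table below is the standard published one, and the convention that an undefined δ halts the machine is taken directly from the definition above (the dictionary-based tape, defaulting to the blank symbol 0, is an implementation choice):

```python
# delta maps (state, scanned symbol) -> (symbol to write, head move, next state)
delta = {
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "C"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "B"),
    ("C", 0): (1, -1, "B"), ("C", 1): (1, +1, "HALT"),
}

def run(delta, start="A", blank=0):
    tape, head, state, steps = {}, 0, start, 0
    # if delta is not defined on (state, scanned symbol), the machine halts
    while (state, tape.get(head, blank)) in delta:
        write, move, state = delta[(state, tape.get(head, blank))]
        tape[head] = write
        head += move
        steps += 1
    return tape, steps

tape, steps = run(delta)
print(steps, sum(tape.values()))  # halts after 13 steps with six 1s on the tape
```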
15.
Exponential object
–
In mathematics, specifically in category theory, an exponential object or map object is the categorical generalization of a function space in set theory. Categories with all finite products and exponential objects are called cartesian closed categories. Categories without adjoined products may still have an exponential law. Let C be a category with binary products and let Z and Y be objects of C. For this reason, the morphisms λg and g are sometimes called exponential adjoints of one another. In the category of sets, an exponential object Z^Y is the set of all functions Y → Z. The map eval : Z^Y × Y → Z is just the evaluation map, and for any map g : X × Y → Z the map λg : X → Z^Y is the curried form of g: λg(x)(y) = g(x, y). A Heyting algebra H is just a bounded lattice that has all exponential objects. Heyting implication, Y ⇒ Z, is an alternative notation for Z^Y. The above adjunction results translate to implication being right adjoint to meet; this adjunction can be written as (− ∧ Y) ⊣ (Y ⇒ −). In the category of topological spaces, the exponential object Z^Y exists provided that Y is a locally compact Hausdorff space. In that case, the space Z^Y is the set of all continuous functions from Y to Z together with the compact-open topology. The evaluation map is the same as in the category of sets; if Y is not locally compact Hausdorff, the exponential object may not exist. For this reason the category of topological spaces fails to be cartesian closed. However, the category of locally compact topological spaces is not cartesian closed either, since Z^Y need not be locally compact for locally compact spaces Z and Y. A cartesian closed category of spaces is, for example, given by the full subcategory spanned by the compactly generated Hausdorff spaces. In functional programming languages, the morphism eval is often called apply. The morphism eval here must not be confused with the eval function in some programming languages, which evaluates quoted expressions.
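In the category of sets this adjunction is exactly currying, which can be sketched in Python (the names `ev` and `curry` are illustrative helpers, not library functions):

```python
def ev(f, y):
    """The evaluation map eval : Z^Y x Y -> Z."""
    return f(y)

def curry(g):
    """Send g : X x Y -> Z to its exponential adjoint lambda g : X -> Z^Y."""
    return lambda x: (lambda y: g(x, y))

g = lambda x, y: x + 2 * y           # a map X x Y -> Z
h = curry(g)                         # h(x) is itself a function Y -> Z
assert ev(h(3), 4) == g(3, 4) == 11  # eval(lambda g(x), y) = g(x, y)
```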
Closed monoidal category. Adámek, Jiří; Herrlich, Horst; Strecker, George. Oxford New York: Oxford University Press. Interactive Web page which generates examples of exponential objects and other categorical constructions
16.
Function composition
–
In mathematics, function composition is the pointwise application of one function to the result of another to produce a third function. The resulting composite function is denoted g ∘ f : X → Z. The notation g ∘ f is read as "g circle f", "g round f", "g composed with f", "g after f", "g following f", "g of f", or "g on f". Intuitively, composing two functions is a chaining process in which the output of the inner function becomes the input of the outer function. The composition of functions is a special case of the composition of relations, but the composition of functions has some additional properties. The composition of functions is always associative, a property inherited from the composition of relations. Since there is no distinction between the choices of placement of parentheses, they may be left off without causing any ambiguity. In a strict sense, the composition g ∘ f can be built only if f's codomain equals g's domain; in a wider sense it is sufficient that the former is a subset of the latter. The functions g and f are said to commute with each other if g ∘ f = f ∘ g. Commutativity is a special property, attained only by particular functions, and often in special circumstances. For example, |x| + 3 = |x + 3| only when x ≥ 0. The composition of one-to-one functions is always one-to-one. Similarly, the composition of two onto functions is always onto. It follows that the composition of two bijections is also a bijection. The inverse function of a composition has the property that (g ∘ f)⁻¹ = f⁻¹ ∘ g⁻¹. Derivatives of compositions involving differentiable functions can be found using the chain rule. Higher derivatives of such functions are given by Faà di Bruno's formula. Suppose one has two functions f : X → X and g : X → X having the same domain and codomain. Then one can form chains of transformations composed together, such as f ∘ f ∘ g ∘ f. Such chains have the algebraic structure of a monoid, called a transformation monoid or composition monoid.
In general, transformation monoids can have remarkably complicated structure; one particularly notable example is the de Rham curve. The set of all functions f : X → X is called the full transformation semigroup or symmetric semigroup on X. If the transformations are bijective, then the set of all possible combinations of these functions forms a transformation group
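A minimal sketch of composition, its non-commutativity, and its associativity in Python (`compose` is an illustrative helper, not a built-in):

```python
def compose(g, f):
    """Return g after f: (g o f)(x) = g(f(x))."""
    return lambda x: g(f(x))

f = lambda x: x + 3
g = lambda x: abs(x)

# g o f and f o g generally differ: |x| + 3 == |x + 3| only when x >= 0
assert compose(g, f)(-5) == 2      # |(-5) + 3| = 2
assert compose(f, g)(-5) == 8      # |-5| + 3 = 8

# Associativity: (h o g) o f equals h o (g o f) pointwise
h = lambda x: 2 * x
for x in range(-3, 4):
    assert compose(compose(h, g), f)(x) == compose(h, compose(g, f))(x)
```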
17.
Graham's number
–
Graham's number is an unimaginably large number that is a proven upper bound to the solution of a certain problem in Ramsey theory. Gardner described the number in Scientific American in 1977, introducing it to the general public; the number was published in the 1980 Guinness Book of World Records, which added to the popular interest in it. Graham's number, although smaller than TREE(3), is larger than many other large numbers such as Skewes' number and Moser's number. Even power towers of the form a^b^c^… are insufficient for this purpose, although it can be described by recursive formulas using Knuth's up-arrow notation or equivalent. Though too large to be computed in full, many of the last digits of Graham's number can be derived through simple algorithms; the last 12 digits are 262464195387. Graham's number is connected to the following problem in Ramsey theory: consider an n-dimensional hypercube, and connect each pair of its geometric vertices to obtain a complete graph on 2^n vertices. Colour each of the edges of this graph either red or blue. What is the smallest value of n for which every such colouring contains at least one single-coloured complete subgraph on four coplanar vertices? The lower bound of 6 was improved to 11 by Geoff Exoo in 2003, and to 13 by Jerome Barkley in 2008. The upper bound was reduced in 2014, via upper bounds on the Hales–Jewett number, to N′ = 2 ↑↑↑ 6. Thus, the best known bounds for N* are 13 ≤ N* ≤ N′. Graham's number, G, is much larger than N: it is f^64(4), where f(n) = 3 ↑ⁿ 3. This weaker upper bound for the problem, attributed to unpublished work of Graham, was eventually published, and named Graham's number, by Martin Gardner in Scientific American in November 1977. The 1980 Guinness Book of World Records repeated Gardner's claim, adding to the popular interest in this number. According to physicist John Baez, Graham invented the quantity now known as Graham's number in conversation with Gardner. Because the number which Graham described to Gardner is larger than the number in the paper itself, both are valid upper bounds for the solution to the problem studied by Graham and Rothschild.
Equivalently, G = f^64(4), where f(n) = 3 ↑ⁿ 3, and the superscript on f indicates an iteration of the function, e.g. f^4(n) = f(f(f(f(n)))). Expressed in terms of the family of hyperoperations H0, H1, H2, ⋯, the function f is the particular sequence f(n) = H_{n+2}(3, 3). First, in terms of tetration (↑↑) alone: g1 = 3 ↑↑↑↑ 3 = 3 ↑↑↑ (3 ↑↑↑ 3) = 3 ↑↑ (3 ↑↑ (3 ↑↑ ⋯ (3 ↑↑ 3) ⋯)), where the number of towers of 3s in the expression on the right is n = 3 ↑↑↑ 3 = 3 ↑↑ (3 ↑↑ 3). Note that the result of calculating the third tower, 3 ↑↑ 3 = 7625597484987, gives the height of the next tower, so n = 3 ↑↑ 7625597484987. The magnitude of this first term, g1, is so large that it is practically incomprehensible, even though the above display is relatively easy to comprehend. Even n, the number of towers in this formula for g1, is far greater than the number of Planck volumes into which one can imagine subdividing the observable universe. And after this first term, still another 63 terms remain in the rapidly growing g sequence before Graham's number G = g64 is reached.
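The "simple algorithm" for the last digits mentioned above can be sketched as follows. Since 3 is coprime to 10^12 and λ(10^12) divides 10^12 (where λ is the Carmichael function), iterating x ← 3^x mod 10^12 tracks the towers 3, 3^3, 3^3^3, …; because every sufficiently tall tower of 3s shares the same final 12 digits, the iteration reaches a fixed point equal to the final digits of Graham's number:

```python
MOD = 10 ** 12

# Iterate x <- 3^x (mod 10^12). Because lambda(10^12) divides 10^12, each
# iterate agrees with the corresponding power tower of 3s in its last 12
# digits, and the sequence stabilizes long before 200 levels, let alone
# the tower heights appearing in the g sequence.
x = 3
for _ in range(200):
    x = pow(3, x, MOD)

print(x)  # the last 12 digits quoted above: 262464195387
```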
18.
Inverse function
–
That is, f(x) = y if and only if g(y) = x. As a simple example, consider the function of a real variable given by f(x) = 5x − 7. Thinking of this as a step-by-step procedure (multiply by 5, then subtract 7), to reverse this and get x back from some output value, say y, we undo each step in reverse order: in this case that means we should add 7 to y and then divide the result by 5. In functional notation this inverse function would be given by g(y) = (y + 7)/5. With y = 5x − 7 we have that f(x) = y and g(y) = x. Not all functions have inverse functions. In order for a function f : X → Y to have an inverse, it must have the property that for every y in Y there is one, and only one, x in X so that f(x) = y. This property ensures that a function g : Y → X will exist having the necessary relationship with f. Let f be a function whose domain is the set X, and whose image is the set Y. Then f is invertible if there exists a function g with domain Y and image X, with the property that f(x) = y if and only if g(y) = x. If f is invertible, the function g is unique, which means that there is exactly one function g satisfying this property. That function g is called the inverse of f, and is usually denoted as f⁻¹. Stated otherwise, a function is invertible if and only if its inverse relation is a function on the range Y. Not all functions have an inverse. For a function to have an inverse, each element y ∈ Y must correspond to no more than one x ∈ X; a function f with this property is called one-to-one or an injection. If f⁻¹ is to be a function on Y, then each element y ∈ Y must correspond to some x ∈ X. Functions with this property are called surjections. This property is satisfied by definition if Y is the image of f. To be invertible, a function must be both an injection and a surjection. If a function f is invertible, then both it and its inverse function f⁻¹ are bijections. There is another convention used in the definition of functions.
This can be referred to as the set-theoretic or graph definition using ordered pairs, in which a codomain is never referred to. Under this convention all functions are surjections, and so being a bijection simply means being an injection. Authors using this convention may use the phrasing that a function is invertible if and only if it is an injection. The two conventions need not cause confusion, as long as it is remembered that in this alternate convention the codomain of a function is always taken to be the range of the function. If a function maps two different inputs to the same output, it is impossible to deduce an input from its output; such a function is called non-injective or, in some applications, information-losing
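The worked example f(x) = 5x − 7 and its inverse can be sketched directly:

```python
def f(x):
    return 5 * x - 7

def g(y):               # the inverse: undo the steps of f in reverse order
    return (y + 7) / 5  # add 7 back, then divide by 5

# f(x) = y if and only if g(y) = x
for x in [-2, 0, 3.5, 10]:
    y = f(x)
    assert g(y) == x    # g(f(x)) = x
    assert f(g(y)) == y # f(g(y)) = y
```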
19.
Computational complexity theory
–
A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying the amount of resources needed to solve them, such as time and storage. Other complexity measures are also used, such as the amount of communication and the number of gates in a circuit. One of the roles of computational complexity theory is to determine the practical limits on what computers can and cannot do. Closely related fields in theoretical computer science are the analysis of algorithms and computability theory. More precisely, computational complexity theory tries to classify problems that can or cannot be solved with appropriately restricted resources. A computational problem can be viewed as an infinite collection of instances together with a solution for every instance. The input string for a computational problem is referred to as a problem instance. In computational complexity theory, a problem refers to the abstract question to be solved; in contrast, an instance of this problem is a rather concrete utterance. For example, consider the problem of primality testing: the instance is a number and the solution is "yes" if the number is prime, and "no" otherwise. Stated another way, the instance is a particular input to the problem, and the solution is the output corresponding to the given input. For this reason, complexity theory addresses computational problems and not particular problem instances. When considering computational problems, a problem instance is a string over an alphabet. Usually, the alphabet is taken to be the binary alphabet, as in a real-world computer. Mathematical objects other than bitstrings must be suitably encoded; for example, integers can be represented in binary notation, and graphs can be encoded directly via their adjacency matrices. One tries to keep the discussion abstract enough to be independent of the choice of encoding; this can be achieved by ensuring that different representations can be transformed into each other efficiently.
Decision problems are one of the central objects of study in computational complexity theory. A decision problem is a special type of computational problem whose answer is either yes or no. A decision problem can be viewed as a formal language, where the members of the language are instances whose output is yes, and the non-members are those instances whose output is no. The objective is to decide, with the aid of an algorithm, whether a given input string is a member of the language. If the algorithm deciding this problem returns the answer yes, the algorithm is said to accept the input string; otherwise it is said to reject the input. An example of a decision problem is the following
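Primality testing, viewed as a decision problem, can be sketched as a membership test for the language of binary strings that encode primes (trial division is used here only for simplicity; the `decide` helper is illustrative):

```python
def is_prime(n):
    """Solve the primality instance n: the answer is yes (True) or no (False)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def decide(instance):
    """Accept or reject a problem instance given as a binary string."""
    return "yes" if is_prime(int(instance, 2)) else "no"

assert decide("10011") == "yes"  # 19 is prime: the string is in the language
assert decide("10100") == "no"   # 20 is composite: the string is rejected
```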
20.
Algorithm
–
In mathematics and computer science, an algorithm is a self-contained sequence of actions to be performed. Algorithms can perform calculation, data processing and automated reasoning tasks. An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input. Giving a formal definition of algorithms, corresponding to the intuitive notion, remains a challenging problem. In English, the word was first used in about 1230 and then by Chaucer in 1391. English adopted the French term, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus: Haec algorismus ars praesens dicitur, in qua / Talibus Indorum fruimur bis quinque figuris, which translates as: Algorism is the art by which at present we use those Indian figures, which number two times five. The poem is a few hundred lines long and summarizes the art of calculating with the new style of Indian dice, or Talibus Indorum, or Hindu numerals. An informal definition could be "a set of rules that precisely defines a sequence of operations", which would include all computer programs, including programs that do not perform numeric calculations. Generally, a program is only an algorithm if it stops eventually. But humans can do something equally useful, in the case of certain enumerably infinite sets: they can give explicit instructions for determining the nth member of the set, for arbitrary finite n. An enumerably infinite set is one whose elements can be put into one-to-one correspondence with the integers. The concept of algorithm is also used to define the notion of decidability.
That notion is central for explaining how formal systems come into being, starting from a small set of axioms and rules. In logic, the time that an algorithm requires to complete cannot be measured. From such uncertainties, which characterize ongoing work, stems the unavailability of a definition of algorithm that suits both concrete and abstract usage of the term. Algorithms are essential to the way computers process data. Thus, an algorithm can be considered to be any sequence of operations that can be simulated by a Turing-complete system. Although this may seem extreme, the arguments in its favor are hard to refute. Gurevich: Turing's informal argument in favor of his thesis justifies a stronger thesis: every algorithm can be simulated by a Turing machine. According to Savage, an algorithm is a computational process defined by a Turing machine. Typically, when an algorithm is associated with processing information, data can be read from an input source and written to an output device. Stored data are regarded as part of the internal state of the entity performing the algorithm. In practice, the state is stored in one or more data structures. For some such computational process, the algorithm must be rigorously defined: specified in the way it applies in all possible circumstances that could arise. That is, any conditional steps must be systematically dealt with, case by case.
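The notion of giving explicit instructions for the nth member of an enumerably infinite set can be sketched as a simple procedure, here for the set of perfect squares (the function names are illustrative):

```python
def nth_square(n):
    """Explicit instructions for the nth member of the set (n = 1, 2, 3, ...)."""
    return n * n

def enumerate_squares(limit):
    """List the set's members in one-to-one correspondence with 1..limit."""
    return [nth_square(n) for n in range(1, limit + 1)]

assert enumerate_squares(5) == [1, 4, 9, 16, 25]
```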
21.
Disjoint-set data structure
–
It supports two useful operations. Find: determine which subset a particular element is in; Find typically returns an item from this set that serves as its representative. Union: join two subsets into a single subset. The other important operation, MakeSet, which makes a set containing only a given element, is generally trivial. With these three operations, many practical partitioning problems can be solved. In order to define these operations more precisely, some way of representing the sets is needed. One common approach is to select a fixed element of each set, called its representative. Then Find(x) returns the representative of the set that x belongs to. A simple disjoint-set data structure uses a linked list for each set. The element at the head of each list is chosen as its representative; MakeSet creates a list of one element. Union appends the two lists, a constant-time operation if each list carries a pointer to its tail. The drawback of this implementation is that Find requires O(n), or linear, time to traverse the list backwards from an element to the head of the list. This can be avoided by including in each linked list node a pointer to the head of the list; then Find takes constant time, since this pointer refers directly to the set representative. However, Union now has to update each element of the list being appended to make it point to the head of the new combined list. When the length of each list is tracked, the required time can be improved by always appending the smaller list to the longer. Using this weighted-union heuristic, a sequence of m MakeSet, Union, and Find operations on n elements takes O(m + n log n) time. For asymptotically faster operations, a different data structure is needed. We now explain the O(n log n) bound above. Suppose you have a collection of lists, and each node of each list contains an object, the name of the list to which it belongs, and the number of elements in that list. Also assume that the total number of elements in all lists is n.
We wish to be able to merge any two of these lists, and update all of their nodes so that they still contain the name of the list to which they belong. The rule for merging the lists A and B is that if A is larger than B, then merge the elements of B into A and update the elements that used to belong to B. Choose an arbitrary element of list L, say x; we wish to count how many times in the worst case x will need to have the name of the list to which it belongs updated
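The linked-list representation with head pointers and the weighted-union (smaller-into-larger) rule described above can be sketched as follows (class and method names are illustrative):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.head = None          # pointer to the set's representative

class DisjointSets:
    def __init__(self):
        self.members = {}         # representative node -> list of its nodes

    def make_set(self, value):
        node = Node(value)
        node.head = node          # a singleton is its own representative
        self.members[node] = [node]
        return node

    def find(self, node):
        return node.head          # constant time via the head pointer

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra is rb:
            return
        if len(self.members[ra]) < len(self.members[rb]):
            ra, rb = rb, ra       # always merge the smaller list into the larger
        for node in self.members[rb]:
            node.head = ra        # update names of the nodes that belonged to rb
        self.members[ra] += self.members.pop(rb)

ds = DisjointSets()
x, y, z = (ds.make_set(v) for v in "xyz")
ds.union(x, y)
ds.union(y, z)
assert ds.find(x) is ds.find(z)   # all three now share one representative
```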
22.
Bernard Chazelle
–
Bernard Chazelle is a French-American computer scientist. He is currently the Eugene Higgins Professor of Computer Science at Princeton University. He is also known for his invention of the soft heap data structure and the most asymptotically efficient known algorithm for finding minimum spanning trees. Chazelle was born in Clamart, France, the son of Marie-Claire, and he grew up in Paris, France, where he received his bachelor's degree and master's degree in applied mathematics at the École des mines de Paris in 1977. Then, at the age of 21, he attended Yale University in the United States. He is the father of director Damien Chazelle, the youngest person in history to win an Academy Award for Best Director, and of Anna Chazelle, an entertainer. He is a fellow of the ACM, the American Academy of Arts and Sciences, and the John Simon Guggenheim Memorial Foundation. He has also written essays about music and politics. The Discrepancy Method: Randomness and Complexity
23.
Minimum spanning tree
–
That is, it is a spanning tree whose sum of edge weights is as small as possible. More generally, any edge-weighted undirected graph has a minimum spanning forest. There are quite a few use cases for minimum spanning trees. One example would be a telecommunications company trying to lay out cables in a new neighborhood. If it is constrained to bury the cable only along certain paths, some of those paths might be more expensive, because they are longer, or require the cable to be buried deeper; these paths would be represented by edges with larger weights. Currency is an acceptable unit for edge weight; there is no requirement for edge lengths to obey normal rules of geometry such as the triangle inequality. A spanning tree for that graph would be a subset of those paths that has no cycles but still connects every house. A minimum spanning tree would be one with the lowest total cost. If there are n vertices in the graph, then each spanning tree has n − 1 edges. There may be several spanning trees of the same weight; in particular, if all the edge weights of a given graph are the same, then every spanning tree is minimum. If each edge has a distinct weight, then there will be only one, unique minimum spanning tree. This is true in many realistic situations, such as the telecommunications company example above, where it's unlikely any two paths have exactly the same cost. This generalizes to spanning forests as well. Proof: Assume the contrary, that there are two different MSTs A and B. Since A and B differ despite containing the same nodes, there is at least one edge that belongs to one but not the other. Among such edges, let e1 be the one with least weight; this choice is unique because the edge weights are all distinct. Without loss of generality, assume e1 is in A. As B is an MST, {e1} ∪ B must contain a cycle C. As a tree, A contains no cycles; therefore C must have an edge e2 that is not in A. Since e1 was chosen as the unique lowest-weight edge among those belonging to exactly one of A and B, the weight of e2 must be greater than that of e1.
Replacing e2 with e1 in B therefore yields a spanning tree with a smaller weight, which contradicts the assumption that B is an MST. More generally, if the edge weights are not all distinct, then only the (multi-)set of weights in minimum spanning trees is certain to be unique. If the weights are positive, then a minimum spanning tree is in fact a minimum-cost subgraph connecting all vertices, since subgraphs containing cycles necessarily have more total weight. For any cycle C in the graph, if the weight of an edge e of C is larger than the weights of all other edges of C, then e cannot belong to a minimum spanning tree
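A minimum spanning tree can be sketched with Kruskal's algorithm, one of the classical methods: greedily take the cheapest edge that does not close a cycle, using a tiny union-find helper to detect cycles (the edge weights below are the hypothetical cable costs of the neighborhood example):

```python
def kruskal(n, edges):
    """edges: list of (weight, u, v) tuples over vertices 0..n-1."""
    parent = list(range(n))

    def find(v):                      # root of v's component
        while parent[v] != v:
            v = parent[v]
        return v

    tree, total = [], 0
    for w, u, v in sorted(edges):     # consider edges cheapest-first
        ru, rv = find(u), find(v)
        if ru != rv:                  # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v))
            total += w
    return tree, total

# 4 houses; hypothetical cable costs along the allowed paths
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
tree, cost = kruskal(4, edges)
assert len(tree) == 4 - 1             # a spanning tree has n - 1 edges
assert cost == 6                      # the edges of weight 1, 2 and 3 are kept
```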
24.
Floor function
–
In mathematics and computer science, the floor and ceiling functions map a real number to the greatest preceding or the least succeeding integer, respectively. More precisely, floor(x) = ⌊x⌋ is the greatest integer less than or equal to x, and ceiling(x) = ⌈x⌉ is the least integer greater than or equal to x. Carl Friedrich Gauss introduced the square bracket notation [x] for the floor function in his third proof of quadratic reciprocity. This remained the standard in mathematics until Kenneth E. Iverson introduced the names "floor" and "ceiling"; both notations are now used in mathematics, and this article follows Iverson. The floor function is also called the integer part, i.e. the value of x rounded to an integer towards 0 (for non-negative x). The language APL uses ⌊x; other computer languages commonly use notations like entier(x), INT(x), or floor(x). In mathematics, it can also be written with boldface or double brackets [[x]]. The ceiling function is usually denoted by ceil(x) or ceiling(x) in non-APL computer languages that have a notation for this function. The J Programming Language, a follow-on to APL that is designed to use standard keyboard symbols, uses >. for ceiling. In mathematics, there is another notation with reversed boldface or double brackets ]]x[[, or even plain reversed brackets ]x[. The fractional part is the sawtooth function, denoted by {x} for real x and defined by the formula {x} = x − ⌊x⌋. HTML 4.0 uses the entity names &lfloor;, &rfloor;, &lceil;, and &rceil;. Unicode contains codepoints for these symbols at U+2308–U+230B: ⌈x⌉, ⌊x⌋. In the following formulas, x and y are real numbers, k, m, and n are integers, and Z is the set of integers. Floor and ceiling may be defined by the set equations ⌊x⌋ = max{m ∈ Z | m ≤ x} and ⌈x⌉ = min{n ∈ Z | n ≥ x}. Since there is exactly one integer in a half-open interval of length one, for any real x there are unique integers m and n satisfying x − 1 < m ≤ x ≤ n < x + 1. Then ⌊x⌋ = m and ⌈x⌉ = n may also be taken as the definition of floor and ceiling. These formulas can be used to simplify expressions involving floors and ceilings. In the language of order theory, the floor function is a residuated mapping. Adding an integer n to the argument shifts the functions: ⌊x + n⌋ = ⌊x⌋ + n and ⌈x + n⌉ = ⌈x⌉ + n. Negating the argument complements the fractional part: {x} + {−x} = 0 if x ∈ Z, and 1 if x ∉ Z.
The floor, ceiling, and fractional part functions are idempotent, and the result of nested floor or ceiling functions is the innermost function: ⌊⌈x⌉⌋ = ⌈x⌉ and ⌈⌊x⌋⌉ = ⌊x⌋. If m and n are integers and n ≠ 0, then 0 ≤ {m/n} ≤ 1 − 1/|n|. If n is a positive integer, then ⌊(x + m)/n⌋ = ⌊(⌊x⌋ + m)/n⌋ and ⌈(x + m)/n⌉ = ⌈(⌈x⌉ + m)/n⌉. For m = 2 these imply n = ⌊n/2⌋ + ⌈n/2⌉
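These identities are easy to spot-check numerically; exact rationals via `fractions.Fraction` avoid floating-point error (the `frac` helper is illustrative):

```python
import math
from fractions import Fraction

floor, ceil = math.floor, math.ceil
frac = lambda x: x - floor(x)                  # fractional (sawtooth) part {x}

x = Fraction(7, 3)                             # an exact non-integer rational
assert floor(x) == 2 and ceil(x) == 3

# negation complements the fractional part
assert frac(x) + frac(-x) == 1                 # x not an integer
assert frac(Fraction(4)) + frac(Fraction(-4)) == 0

# idempotence / nesting: the innermost function wins
assert floor(ceil(x)) == ceil(x) and ceil(floor(x)) == floor(x)

# n = floor(n/2) + ceil(n/2) for every integer n
for n in range(-5, 6):
    assert n == floor(Fraction(n, 2)) + ceil(Fraction(n, 2))
```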
26.
Compiler
–
A compiler is a computer program that transforms source code written in one programming language into another computer language, with the latter often having a binary form known as object code. The most common reason for converting source code is to create an executable program. The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a lower-level language. If the compiled program can run on a computer whose CPU or operating system is different from the one on which the compiler runs, the compiler is known as a cross-compiler. More generally, compilers are a specific type of translator. While all programs that take a set of programming specifications and translate them into a new language are translators, a program that translates from a low-level language to a higher-level one is a decompiler. A program that translates between high-level languages is called a source-to-source compiler or transpiler. A language rewriter is usually a program that translates the form of expressions without a change of language. The term compiler-compiler is sometimes used to refer to a parser generator, a tool often used to help create the lexer and parser. A compiler is likely to perform many or all of the following operations: lexical analysis, preprocessing, parsing, semantic analysis, code optimization, and code generation. Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementers invest significant effort to ensure compiler correctness. Software for early computers was primarily written in assembly language. The notion of a high-level programming language dates back to 1943; no actual implementation occurred until the 1970s, however. The first actual compilers date from the 1950s. Identifying the very first is hard, because there is subjectivity in deciding when programs become advanced enough to count as the full concept rather than a precursor. 1952 saw two important advances. Grace Hopper wrote the compiler for the A-0 programming language, though the A-0 functioned more as a loader or linker than the modern notion of a full compiler.
Also in 1952, the first autocode compiler was developed by Alick Glennie for the Mark 1 computer at the University of Manchester; this is considered by some to be the first compiled programming language. The FORTRAN team led by John Backus at IBM is generally credited as having introduced the first unambiguously complete compiler, in 1957. COBOL was an early language to be compiled on multiple architectures, in 1960. In many application domains the idea of using a higher-level language quickly caught on. Because of the expanding functionality supported by newer programming languages and the increasing complexity of computer architectures, compilers have become more complex. Early compilers were written in assembly language. The first self-hosting compiler, capable of compiling its own source code in a high-level language, was created in 1962 for the Lisp programming language by Tim Hart and Mike Levin at MIT. Since the 1970s, it has become common practice to implement a compiler in the language it compiles
27.
Recursion (computer science)
–
Recursion in computer science is a method where the solution to a problem depends on solutions to smaller instances of the same problem. The approach can be applied to many types of problems. The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement. In the same manner, an infinite number of computations can be described by a finite recursive program. Most computer programming languages support recursion by allowing a function to call itself within the program text; some functional programming languages do not define any looping constructs but rely solely on recursion to repeatedly call code. A common computer programming tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. For example, the factorial function can be defined recursively by the equations 0! = 1 and, for all n > 0, n! = n·(n−1)!. Neither equation by itself constitutes a complete definition: the first is the base case, and the second is the recursive case. Because the base case breaks the chain of recursion, it is also called the terminating case. The job of the recursive cases can be seen as breaking down complex inputs into simpler ones. In a properly designed recursive function, each recursive call must simplify the input so that the base case is eventually reached; neglecting to write a base case, or testing for it incorrectly, can cause an infinite loop. For some functions there is not an obvious base case implied by the input data. Many computer programs must process or generate an arbitrarily large quantity of data. Recursion is one technique for representing data whose exact size the programmer does not know. There are two types of self-referential definitions: inductive and coinductive definitions. An inductively defined recursive data definition is one that specifies how to construct instances of the data. For example, linked lists can be defined inductively: such a definition specifies a list of strings to be either empty or a structure that contains a string and a list of strings. 
The self-reference in the definition permits the construction of lists of any number of strings. Another example of an inductive definition is the natural numbers: a natural number is either 1 or n+1, where n is a natural number. Similarly, recursive definitions are often used to model the structure of expressions and statements in programming languages
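Both ideas above, a recursive function with a base case and an inductively defined data structure, can be sketched together. The factorial definition follows the equations in the text; the Node class is one possible rendering of the inductive list-of-strings definition, with None standing for the empty list (the names are illustrative).

```python
from dataclasses import dataclass
from typing import Optional

def factorial(n: int) -> int:
    """0! = 1 is the base case; n! = n * (n-1)! is the recursive case."""
    return 1 if n == 0 else n * factorial(n - 1)

@dataclass
class Node:
    """Inductive list of strings: empty (None) or a string followed by a smaller list."""
    value: str
    rest: Optional["Node"] = None

def length(lst: Optional[Node]) -> int:
    # The empty list is the base case; each call recurses on a strictly smaller list.
    return 0 if lst is None else 1 + length(lst.rest)
```

Each recursive call operates on a simpler input (a smaller number, a shorter list), so the base case is always reached and the recursion terminates.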
28.
International Standard Book Number
–
The International Standard Book Number (ISBN) is a unique numeric commercial book identifier. An ISBN is assigned to each edition and variation of a book; for example, an e-book, a paperback, and a hardcover edition of the same book would each have a different ISBN. The ISBN is 13 digits long if assigned on or after 1 January 2007, and 10 digits long if assigned before 2007. The method of assigning an ISBN is nation-based and varies from country to country, often depending on how large the publishing industry is within a country. The ISBN was conceived in 1967, based upon the 9-digit Standard Book Numbering (SBN) created in 1966; it was devised in the United Kingdom by David Whitaker and, in 1968, in the US by Emery Koltay. The 10-digit ISBN format was developed by the International Organization for Standardization and was published in 1970 as international standard ISO 2108; the United Kingdom continued to use the 9-digit SBN code until 1974. Occasionally, a book may appear without a printed ISBN if it is printed privately or the author does not follow the usual ISBN procedure; however, this can be rectified later. Another identifier, the International Standard Serial Number (ISSN), identifies periodical publications such as magazines. The ISO online facility only refers back to 1978. An SBN may be converted to an ISBN by prefixing the digit 0. For example, the edition of Mr. J. G. Reeder Returns, published by Hodder in 1965, has SBN 340-01381-8: 340 indicating the publisher, 01381 their serial number, and 8 the check digit. This can be converted to ISBN 0-340-01381-8; the check digit does not need to be re-calculated. Since 1 January 2007, ISBNs have contained 13 digits, a format that is compatible with Bookland European Article Number EAN-13s. 
A 13-digit ISBN can be separated into its parts, and when this is done it is customary to separate the parts with hyphens or spaces. Separating the parts of a 10-digit ISBN is also done with either hyphens or spaces. Figuring out how to correctly separate a given ISBN is complicated, because most of the parts do not use a fixed number of digits. ISBN issuance is country-specific, in that ISBNs are issued by the ISBN registration agency that is responsible for that country or territory, regardless of the publication language. Some ISBN registration agencies are based in national libraries or within ministries of culture; in other cases, the ISBN registration service is provided by organisations such as bibliographic data providers that are not government funded. In Canada, ISBNs are issued at no cost with the purpose of encouraging Canadian culture. In the United Kingdom, United States, and some other countries, where the service is provided by non-government-funded organisations, issuing ISBNs requires payment of a fee. In Australia, ISBNs are issued by the library services agency Thorpe-Bowker
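The SBN-to-ISBN conversion described above, and the reason the check digit survives it unchanged, can be sketched as follows. For a 10-digit ISBN the check digit is the weighted sum of the first nine digits modulo 11 (with 10 written as 'X'), so a leading 0 contributes nothing to the sum. The function names below are illustrative.

```python
def isbn10_check_digit(first9: str) -> str:
    """Check digit for a 10-digit ISBN: sum of position * digit over the
    first nine digits, taken mod 11; a remainder of 10 is written as 'X'."""
    total = sum(i * int(d) for i, d in enumerate(first9, start=1))
    r = total % 11
    return "X" if r == 10 else str(r)

def sbn_to_isbn10(sbn: str) -> str:
    """A 9-digit SBN becomes an ISBN-10 by prefixing '0'; since the new
    leading digit is 0, the check digit does not change."""
    return "0" + sbn
```

Applied to the example in the text, the first nine digits 034001381 give a weighted sum of 118, and 118 mod 11 = 8, matching the printed check digit of ISBN 0-340-01381-8.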
29.
Wayback Machine
–
The Internet Archive launched the Wayback Machine in October 2001. It was set up by Brewster Kahle and Bruce Gilliat, and is maintained with content from Alexa Internet. The service enables users to see archived versions of web pages across time, which the archive calls a three-dimensional index. Since 1996, the Wayback Machine has been archiving cached pages of websites onto its large cluster of Linux nodes; it revisits sites every few weeks or months and archives a new version. Sites can also be captured on the fly by visitors who enter a site's URL into a search box. The intent is to capture and archive content that otherwise would be lost whenever a site is changed or closed down. The overall vision of the machine's creators is to archive the entire Internet. The name Wayback Machine was chosen as a reference to the WABAC machine, a time-traveling device used by the characters Mr. Peabody and Sherman in The Rocky and Bullwinkle Show, an animated cartoon. The crawlers respect the robots exclusion standard for websites whose owners opt for them not to appear in search results or be cached; to overcome inconsistencies in partially cached websites, the Internet Archive developed the Archive-It service. Information had been kept on digital tape for five years, with Kahle occasionally allowing researchers and scientists access to the data. When the archive reached its fifth anniversary, it was unveiled and opened to the public in a ceremony at the University of California, Berkeley. Snapshots usually become available more than six months after they are archived or, in some cases, even later. The frequency of snapshots is variable, so not all tracked website updates are recorded; sometimes there are intervals of several weeks or years between snapshots. After August 2008 sites had to be listed on the Open Directory in order to be included. 
As of 2009, the Wayback Machine contained approximately three petabytes of data and was growing at a rate of 100 terabytes each month; the growth rate reported in 2003 was 12 terabytes/month. The data is stored on PetaBox rack systems manufactured by Capricorn Technologies. In 2009, the Internet Archive migrated its customized storage architecture to Sun Open Storage. In 2011 a new, improved version of the Wayback Machine, with an updated interface and fresher index of archived content, was made available for public testing. The index driving the classic Wayback Machine has only a bit of material past 2008. In January 2013, the company announced a ground-breaking milestone of 240 billion URLs. In October 2013, the company announced the Save a Page feature, which allows any Internet user to archive the contents of a URL. This feature later became a vector for abuse, with the service being used to host malicious binaries. As of December 2014, the Wayback Machine contained almost nine petabytes of data and was growing at a rate of about 20 terabytes each week. Between October 2013 and March 2015 the website's global Alexa rank changed from 162 to 208. In a 2009 case, Netbula, LLC v. Chordiant Software Inc., defendant Chordiant filed a motion to compel Netbula to disable the robots.txt file. Netbula objected to the motion on the ground that defendants were asking to alter Netbula's website. In an October 2004 case, Telewizja Polska USA, Inc. v. Echostar Satellite, No. 02 C 3293, 65 Fed. 673, a litigant attempted to use the Wayback Machine archives as a source of admissible evidence; Telewizja Polska is the provider of TVP Polonia and EchoStar operates the Dish Network
30.
Solomon Marcus
–
Solomon Marcus was a Romanian mathematician, member of the Mathematical Section of the Romanian Academy and emeritus professor of the University of Bucharest's Faculty of Mathematics. He was born in Bacău, Romania, to Sima and Alter Marcus. From an early age he had to live through dictatorships, war, infringements on free speech and free thinking, as well as anti-Semitism. At the age of 16 or 17 he started tutoring younger pupils in order to help his family financially. He graduated from Ferdinand I High School in 1944, and completed his studies at the University of Bucharest's Faculty of Science, Department of Mathematics, in 1949. He obtained his PhD in Mathematics in 1956, with a thesis on monotonic functions of two variables, written under the direction of Miron Nicolescu. He was appointed Lecturer in 1955, Associate Professor in 1964, and became a Professor in 1966. Marcus is featured in People and Ideas in Theoretical Computer Science. A collection of his papers in English, followed by some interviews and a brief autobiography, was published in 2007 as Words and Languages Everywhere; it also contains a longer autobiography. He died of cardiac problems at the Fundeni Clinical Institute in Bucharest. Grigore C. Moisil: A Life Becoming a Myth, by Solomon Marcus; Marcus's articles on semiotics at Potlatch; Solomon Marcus at the University of Bucharest
31.
Bulletin of the American Mathematical Society
–
The Bulletin of the American Mathematical Society is a quarterly mathematical journal published by the American Mathematical Society. It publishes surveys on contemporary research topics, written at a level accessible to non-experts. It also publishes, by invitation only, book reviews and short Mathematical Perspectives articles. It began as the Bulletin of the New York Mathematical Society. The Bulletin's function has changed over the years; its original function was to serve as a research journal for its members. The Bulletin is indexed in Mathematical Reviews, Science Citation Index, ISI Alerting Services, CompuMath Citation Index, and Current Contents/Physical, Chemical & Earth Sciences