1.
Consensus decision-making
–
Consensus decision-making is a group decision-making process in which group members develop, and agree to support, a decision in the best interest of the whole. Consensus may be defined professionally as an acceptable resolution, one that can be supported, even if not the first choice of each individual. Consensus is defined by Merriam-Webster as, first, general agreement; the word has its origin in the Latin cōnsēnsus, from cōnsentiō, meaning literally "feel together". It refers both to the decision and to the process of reaching a decision. Consensus decision-making is thus concerned with the process of deliberating and finalizing a decision. As a decision-making process, consensus decision-making aims to be: agreement seeking (a consensus decision-making process attempts to generate as much agreement as possible); collaborative (participants contribute to a shared proposal and shape it into a decision that meets the concerns of all group members as much as possible); cooperative (participants in a consensus process should strive to reach the best possible decision for the group and all of its members, rather than compete for personal preferences); egalitarian (all members of a consensus decision-making body should be afforded, as much as possible, equal input into the process, and all members have the opportunity to present and amend proposals); inclusive (as many stakeholders as possible should be involved in the consensus decision-making process); and participatory (the consensus process should actively solicit the input and participation of all decision-makers). Consensus decision-making is an alternative to commonly practiced group decision-making processes. Robert's Rules of Order, for instance, is a guide book used by many organizations; it allows the structuring of debate and the passage of proposals that can be approved through majority vote, and it does not emphasize the goal of full agreement. 
Critics of such a process believe that it can involve adversarial debate, and that these dynamics may harm group member relationships and undermine the ability of a group to cooperatively implement a contentious decision. Consensus decision-making attempts to address such problems. Proponents claim that outcomes of the consensus process include: better decisions (through including the input of all stakeholders, the resulting proposals may better address all potential concerns); better implementation (a process that includes and respects all parties sets the stage for greater cooperation in carrying out the resulting decisions); and better group relationships (a cooperative, collaborative group atmosphere can foster greater group cohesion and interpersonal connection). The level of agreement necessary to finalize a decision is known as a decision rule. Some groups require unanimous agreement, and these groups use the term consensus to denote both the discussion process and the decision rule. Other groups use a consensus process to generate as much agreement as possible, then finalize the decision with a lower threshold; in this case, someone who has a strong objection must live with the decision. Giving consent does not necessarily mean that the proposal being considered is one's first choice. Group members can vote their consent to a proposal because they choose to cooperate with the direction of the group; sometimes the vote on a proposal is framed as, "Is this proposal something you can live with?" This relaxed threshold for a yes vote can achieve full consent.
2.
Algebra
–
Algebra is one of the broad parts of mathematics, together with number theory, geometry and analysis. In its most general form, algebra is the study of mathematical symbols and the rules for manipulating them; as such, it includes everything from elementary equation solving to the study of abstractions such as groups, rings, and fields. The more basic parts of algebra are called elementary algebra; the more abstract parts are called abstract algebra or modern algebra. Elementary algebra is generally considered to be essential for any study of mathematics, science, or engineering, as well as such applications as medicine; abstract algebra is a major area in advanced mathematics, studied primarily by professional mathematicians. Elementary algebra differs from arithmetic in the use of abstractions, such as using letters to stand for numbers that are unknown or allowed to take on many values. For example, in x + 2 = 5 the letter x is unknown; in E = mc², the letters E and m are variables, and the letter c is a constant, the speed of light in a vacuum. Algebra gives methods for solving equations and expressing formulas that are much easier than the older method of writing everything out in words. The word algebra is also used in certain specialized ways. A special kind of object in abstract algebra is called an algebra. A mathematician who does research in algebra is called an algebraist. The word algebra comes from the Arabic الجبر, from the title of the book Ilm al-jabr wa'l-muḳābala by the Persian mathematician and astronomer al-Khwarizmi. The word entered the English language during the fifteenth century, from either Spanish, Italian, or Medieval Latin. It originally referred to the procedure of setting broken or dislocated bones. The mathematical meaning was first recorded in the sixteenth century. The word algebra has several related meanings in mathematics, as a single word or with qualifiers. 
As a single word without an article, algebra names a broad part of mathematics. As a single word with an article or in the plural, an algebra or algebras denotes a specific mathematical structure, whose precise definition depends on the author. Usually the structure has an addition, a multiplication, and a scalar multiplication; when some authors use the term algebra, they assume a subset of the following additional properties: associative, commutative, unital, and/or finite-dimensional. In universal algebra, the word algebra refers to a generalization of the above concept. With a qualifier, there is the same distinction: without an article, it means a part of algebra, such as linear algebra or elementary algebra; with an article, it means an instance of some abstract structure, like a Lie algebra. Sometimes both meanings exist for the same qualifier, as in the sentence: Commutative algebra is the study of commutative rings, which are commutative algebras over the integers.
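The elementary example above, x + 2 = 5, illustrates the general algebraic procedure for a linear equation a·x + b = c: subtract b from both sides, then divide by a. A minimal sketch (the helper name solve_linear is illustrative, not from any library):

```python
def solve_linear(a, b, c):
    """Solve a*x + b = c for x by the usual algebraic steps:
    subtract b from both sides, then divide by a (requires a != 0)."""
    if a == 0:
        raise ValueError("not a linear equation in x")
    return (c - b) / a

print(solve_linear(1, 2, 5))  # x + 2 = 5  ->  3.0
```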
3.
Set theory
–
Set theory is a branch of mathematical logic that studies sets, which informally are collections of objects. Although any type of object can be collected into a set, set theory is applied most often to objects that are relevant to mathematics; the language of set theory can be used in the definitions of nearly all mathematical objects. The modern study of set theory was initiated by Georg Cantor. Set theory is commonly employed as a foundational system for mathematics, particularly in the form of Zermelo–Fraenkel set theory with the axiom of choice. Beyond its foundational role, set theory is a branch of mathematics in its own right; contemporary research into set theory includes a diverse collection of topics, ranging from the structure of the real number line to the study of the consistency of large cardinals. Mathematical topics typically emerge and evolve through interactions among many researchers. Set theory, however, was founded by a single paper in 1874 by Georg Cantor: On a Property of the Collection of All Real Algebraic Numbers. Since the 5th century BC, beginning with the Greek mathematician Zeno of Elea in the West and early Indian mathematicians in the East, mathematicians had struggled with the concept of infinity; especially notable is the work of Bernard Bolzano in the first half of the 19th century. Modern understanding of infinity began in 1867–71, with Cantor's work on number theory; an 1872 meeting between Cantor and Richard Dedekind influenced Cantor's thinking and culminated in Cantor's 1874 paper. Cantor's work initially polarized the mathematicians of his day: while Karl Weierstrass and Dedekind supported Cantor, Leopold Kronecker, now seen as a founder of mathematical constructivism, did not. The utility of set theory led to the article Mengenlehre, contributed in 1898 by Arthur Schoenflies to Klein's encyclopedia; in 1899 Cantor had himself posed the question, "What is the cardinal number of the set of all sets?" 
Russell used his paradox as a theme in his 1903 review of continental mathematics in his The Principles of Mathematics. In 1906 English readers gained the book Theory of Sets of Points by William Henry Young and his wife Grace Chisholm Young, published by Cambridge University Press. The momentum of set theory was such that debate on the paradoxes did not lead to its abandonment. The work of Zermelo in 1908 and of Abraham Fraenkel in 1922 resulted in the set of axioms ZFC, which became the most commonly used set of axioms for set theory. The work of analysts such as Henri Lebesgue demonstrated the great mathematical utility of set theory. Set theory is commonly used as a foundational system, although in some areas category theory is thought to be a preferred foundation. Set theory begins with a fundamental binary relation between an object o and a set A. If o is a member of A, the notation o ∈ A is used; since sets are objects, the membership relation can relate sets as well. A derived binary relation between two sets is the subset relation, also called set inclusion. If all the members of set A are also members of set B, then A is a subset of B. For example, {1, 2} is a subset of {1, 2, 3}, and so is {2}, but {1, 4} is not. As implied by this definition, a set is a subset of itself; for cases where this possibility is unsuitable, or where it would make sense to reject it, the term proper subset is defined.
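The membership and inclusion relations described above are mirrored directly by Python's built-in sets, where `in` plays the role of ∈ and `<=` plays the role of ⊆ (a small illustration, not part of the original text):

```python
A = {1, 2}
B = {1, 2, 3}

print(1 in B)   # membership: 1 ∈ B
print(A <= B)   # inclusion: A ⊆ B
print(B <= B)   # every set is a subset of itself
print(A < B)    # proper subset: A ⊆ B and A ≠ B
```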
4.
Mathematical analysis
–
Mathematical analysis is the branch of mathematics dealing with limits and related theories, such as differentiation, integration, measure, infinite series, and analytic functions. These theories are usually studied in the context of real and complex numbers. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness or of specific distances between objects. Mathematical analysis formally developed in the 17th century during the Scientific Revolution, but early results in analysis were implicitly present in the early days of ancient Greek mathematics. For instance, an infinite geometric sum is implicit in Zeno's paradox of the dichotomy. The explicit use of infinitesimals appears in Archimedes' The Method of Mechanical Theorems. In Asia, the Chinese mathematician Liu Hui used the method of exhaustion in the 3rd century AD to find the area of a circle. Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere in the 5th century, and the Indian mathematician Bhāskara II gave examples of the derivative and used what is now known as Rolle's theorem in the 12th century. In the 14th century, Madhava of Sangamagrama developed infinite series expansions, like the power series, and his followers at the Kerala school of astronomy and mathematics further expanded his works, up to the 16th century. The modern foundations of analysis were established in 17th century Europe. During this period, calculus techniques were applied to approximate discrete problems by continuous ones. In the 18th century, Euler introduced the notion of mathematical function. Real analysis began to emerge as an independent subject when Bernard Bolzano introduced the modern definition of continuity in 1816. 
In 1821, Cauchy began to put calculus on a firm logical foundation by rejecting the principle of the generality of algebra widely used in earlier work; instead, Cauchy formulated calculus in terms of geometric ideas and infinitesimals. Thus, his definition of continuity required an infinitesimal change in x to correspond to an infinitesimal change in y. He also introduced the concept of the Cauchy sequence, and started the formal theory of complex analysis. Poisson, Liouville, Fourier and others studied partial differential equations. The contributions of these mathematicians and others, such as Weierstrass, developed the (ε, δ)-definition of limit approach, thus founding the modern field of mathematical analysis. In the middle of the 19th century Riemann introduced his theory of integration. The last third of the century saw the arithmetization of analysis by Weierstrass, who thought that geometric reasoning was inherently misleading, and introduced the epsilon-delta definition of limit. Then, mathematicians started worrying that they were assuming the existence of a continuum of real numbers without proof. Around that time, the attempts to refine the theorems of Riemann integration led to the study of the size of the set of discontinuities of real functions; also, "monsters" began to be investigated.
5.
Computer program
–
A computer program is a collection of instructions that performs a specific task when executed by a computer. A computer requires programs to function, and typically executes the program's instructions in a central processing unit. A computer program is written by a computer programmer in a programming language. From the program in its human-readable form of source code, a compiler can derive machine code—a form consisting of instructions that the computer can directly execute. Alternatively, a program may be executed with the aid of an interpreter. A part of a program that performs a well-defined task is known as an algorithm. A collection of programs, libraries, and related data is referred to as software. Computer programs may be categorized along functional lines, such as application software or system software. The earliest programmable machines preceded the invention of the digital computer. In 1801, Joseph-Marie Jacquard devised a loom that would weave a pattern by following a series of perforated cards. Patterns could be woven and repeated by arranging the cards. In 1837, Charles Babbage was inspired by Jacquard's loom to attempt to build the Analytical Engine. The names of the components of the device were borrowed from the textile industry. In the textile industry, yarn was brought from the store to be milled. The device would have had a store—memory to hold 1,000 numbers of 40 decimal digits each. Numbers from the store would then have been transferred to the mill for processing. It was to be programmed using two sets of perforated cards—one to direct the operation and the other for the input variables. However, after more than 17,000 pounds of the British government's money, the thousands of cogged wheels and gears never fully worked together. During a nine-month period in 1842–43, Ada Lovelace translated the memoir of Italian mathematician Luigi Menabrea; the memoir covered the Analytical Engine. 
The translation contained Note G, which completely detailed a method for calculating Bernoulli numbers using the Analytical Engine; this note is recognized by some historians as the world's first written computer program. In 1936, Alan Turing introduced the Universal Turing machine—a theoretical device that can model every computation that can be performed on a Turing complete computing machine. It is a finite-state machine that has an infinitely long read/write tape. The machine can move the tape back and forth, changing its contents as it performs an algorithm.
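The tape-and-state model described above can be sketched in a few lines. The following is an illustrative single-tape simulator (the function and table names are this example's own, not from any standard library); the sample machine flips every bit of a binary input and halts when it reads a blank:

```python
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=1000):
    """Simulate a single-tape Turing machine.

    transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right). The simulation halts when no
    transition is defined for the current (state, symbol) pair.
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break  # halt: no rule applies
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    # Read back the used portion of the tape, left to right.
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape[i] for i in cells if tape.get(i, blank) != blank)

# Transition table: flip bits while moving right, halt on blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_turing_machine(flip, "10110"))  # -> 01001
```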
6.
Natural number
–
In mathematics, the natural numbers are those used for counting and ordering. In common language, words used for counting are cardinal numbers, and words used for ordering are ordinal numbers. Texts that exclude zero from the natural numbers sometimes refer to the natural numbers together with zero as the whole numbers, but in other writings, that term is used instead for the integers. These chains of extensions make the natural numbers canonically embedded in the other number systems. Properties of the natural numbers, such as divisibility and the distribution of prime numbers, are studied in number theory. Problems concerning counting and ordering, such as partitioning and enumerations, are studied in combinatorics. The most primitive method of representing a natural number is to put down a mark for each object. Later, a set of objects could be tested for equality, excess or shortage by striking out a mark. The first major advance in abstraction was the use of numerals to represent numbers. This allowed systems to be developed for recording large numbers. The ancient Egyptians developed a powerful system of numerals with distinct hieroglyphs for 1, 10, and all the powers of 10 up to over 1 million. A stone carving from Karnak, dating from around 1500 BC and now at the Louvre in Paris, depicts 276 as 2 hundreds, 7 tens, and 6 ones, and similarly for the number 4,622. A much later advance was the development of the idea that 0 can be considered as a number, with its own numeral. The use of a 0 digit in place-value notation dates back as early as 700 BC by the Babylonians. The Olmec and Maya civilizations used 0 as a separate number as early as the 1st century BC, but this usage did not spread beyond Mesoamerica. The use of a numeral 0 in modern times originated with the Indian mathematician Brahmagupta in 628. The first systematic study of numbers as abstractions is usually credited to the Greek philosophers Pythagoras and Archimedes. 
Some Greek mathematicians treated the number 1 differently than larger numbers. Independent studies also occurred at around the same time in India, China, and Mesoamerica. In 19th century Europe, there was mathematical and philosophical discussion about the exact nature of the natural numbers. A school of Naturalism stated that the natural numbers were a direct consequence of the human psyche. Henri Poincaré was one of its advocates, as was Leopold Kronecker, who summarized his belief as "God made the integers, all else is the work of man". In opposition to the Naturalists, the constructivists saw a need to improve the logical rigor in the foundations of mathematics. In the 1860s, Hermann Grassmann suggested a recursive definition for natural numbers, thus stating they were not really natural. Later, two classes of such formal definitions were constructed, which were subsequently shown to be equivalent in most practical applications. The second class of definitions was introduced by Giuseppe Peano and is now called Peano arithmetic. It is based on an axiomatization of the properties of ordinal numbers: each natural number has a successor and every non-zero natural number has a unique predecessor. Peano arithmetic is equiconsistent with several systems of set theory.
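The successor-based picture of the natural numbers can be made concrete with a small sketch. The class names Zero and Succ and the helpers below are this example's own invention, modeling Grassmann's recursive definition of addition (a + 0 = a, a + S(b) = S(a + b)):

```python
class Zero:
    """The natural number 0, with no predecessor."""
    def __repr__(self):
        return "0"

class Succ:
    """The successor S(n) of a natural number n."""
    def __init__(self, pred):
        self.pred = pred  # the unique predecessor of this number
    def __repr__(self):
        return f"S({self.pred!r})"

def add(a, b):
    """Grassmann-style recursive addition: a + 0 = a, a + S(b) = S(a + b)."""
    if isinstance(b, Zero):
        return a
    return Succ(add(a, b.pred))

def to_int(n):
    """Count successor applications to recover an ordinary integer."""
    count = 0
    while isinstance(n, Succ):
        n = n.pred
        count += 1
    return count

two = Succ(Succ(Zero()))
three = Succ(two)
print(to_int(add(two, three)))  # -> 5
```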
7.
Polynomial
–
In mathematics, a polynomial is an expression consisting of variables and coefficients that involves only the operations of addition, subtraction, multiplication, and non-negative integer exponents of variables. An example of a polynomial of a single indeterminate x is x² − 4x + 7; an example in three variables is x³ + 2xyz² − yz + 1. Polynomials appear in a wide variety of areas of mathematics and science. In advanced mathematics, polynomials are used to construct polynomial rings and algebraic varieties, central concepts in algebra. The word polynomial joins two diverse roots: the Greek poly, meaning "many", and the Latin nomen, or "name". It was derived from the term binomial by replacing the Latin root bi- with the Greek poly-. The word polynomial was first used in the 17th century. The x occurring in a polynomial is commonly called either a variable or an indeterminate. When the polynomial is considered as an expression, x is a fixed symbol which does not have any value; it is thus correct to call it an indeterminate. However, when one considers the function defined by the polynomial, then x represents the argument of the function. Many authors use these two words interchangeably. It is a common convention to use uppercase letters for the indeterminates. However, one may evaluate a polynomial over any domain where addition and multiplication are defined; in particular, when a is the indeterminate x, then the image of x by this function is the polynomial P itself. This equality allows writing "let P be a polynomial" as a shorthand for "let P be a polynomial in the indeterminate x". A polynomial is an expression that can be built from constants and indeterminates; the word indeterminate means that x represents no particular value, although any value may be substituted for it. The mapping that associates the result of this substitution to the substituted value is a function. A polynomial can be expressed concisely by using summation notation: ∑_{k=0}^{n} a_k x^k. 
Each term consists of the product of a number—called the coefficient of the term—and a finite number of indeterminates. Because x = x¹, the degree of an indeterminate without a written exponent is one. A term with no indeterminates and a polynomial with no indeterminates are called, respectively, a constant term and a constant polynomial. The degree of a constant term and of a nonzero constant polynomial is 0. The degree of the zero polynomial, 0, is generally treated as not defined.
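The summation form ∑ a_k x^k above translates directly into code. A minimal sketch of polynomial evaluation via Horner's rule, under the assumption (specific to this example) that the coefficients a_k are listed from the constant term upward:

```python
def evaluate(coeffs, x):
    """Evaluate sum(a_k * x**k) using Horner's rule:
    (((a_n * x + a_{n-1}) * x + ...) * x + a_0)."""
    result = 0
    for a in reversed(coeffs):  # start from the highest-degree term
        result = result * x + a
    return result

# p(x) = x^2 - 4x + 7, so coeffs = [7, -4, 1]
print(evaluate([7, -4, 1], 3))  # -> 4  (9 - 12 + 7)
```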
8.
Ring (mathematics)
–
In mathematics, a ring is one of the fundamental algebraic structures used in abstract algebra. It consists of a set equipped with two binary operations that generalize the arithmetic operations of addition and multiplication. Through this generalization, theorems from arithmetic are extended to non-numerical objects such as polynomials, series, and matrices. The conceptualization of rings started in the 1870s and was completed in the 1920s. Key contributors include Dedekind, Hilbert, Fraenkel, and Noether. Rings were first formalized as a generalization of Dedekind domains that occur in number theory, and of polynomial rings and rings of invariants that occur in algebraic geometry and invariant theory. Afterward, they proved to be useful in other branches of mathematics such as geometry. A ring is an abelian group with a second binary operation that is associative and is distributive over the abelian group operation. By extension from the integers, the abelian group operation is called addition. Whether a ring is commutative or not has profound implications on its behavior as an abstract object; as a result, commutative ring theory, commonly known as commutative algebra, is a key topic in ring theory. Its development has been greatly influenced by problems and ideas occurring naturally in algebraic number theory. The most familiar example of a ring is the set of all integers, Z, consisting of the numbers …, −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, …. The familiar properties for addition and multiplication of integers serve as a model for the axioms for rings. A ring is a set R equipped with two binary operations + and · satisfying the following three sets of axioms, called the ring axioms. 1. R is an abelian group under addition, meaning that: (a + b) + c = a + (b + c) for all a, b, c in R; a + b = b + a for all a, b in R; there is an element 0 in R such that a + 0 = a for all a in R; and for each a in R there exists −a in R such that a + (−a) = 0. 2. R is a monoid under multiplication, meaning that: (a · b) · c = a · (b · c) for all a, b, c in R. 
There is an element 1 in R such that a · 1 = a and 1 · a = a for all a in R. 3. Multiplication is distributive with respect to addition: a · (b + c) = (a · b) + (a · c) for all a, b, c in R; and (b + c) · a = (b · a) + (c · a) for all a, b, c in R. As explained in § History below, many authors follow an alternative convention in which a ring is not defined to have a multiplicative identity. This article adopts the convention that, unless otherwise stated, a ring is assumed to have such an identity.
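For a finite carrier set, the ring axioms above can be checked mechanically by brute force. The sketch below (is_ring is an illustrative helper, not a library function) verifies them for the integers modulo 6:

```python
def is_ring(elements, add, mul, zero, one):
    """Brute-force check of the ring axioms over a finite carrier set."""
    elements = list(elements)
    for a in elements:
        if add(a, zero) != a or mul(a, one) != a or mul(one, a) != a:
            return False  # identity axioms fail
        if not any(add(a, b) == zero for b in elements):
            return False  # missing additive inverse
        for b in elements:
            if add(a, b) != add(b, a):  # addition commutes
                return False
            for c in elements:
                if add(add(a, b), c) != add(a, add(b, c)):  # + associative
                    return False
                if mul(mul(a, b), c) != mul(a, mul(b, c)):  # · associative
                    return False
                if mul(a, add(b, c)) != add(mul(a, b), mul(a, c)):  # left distributivity
                    return False
                if mul(add(b, c), a) != add(mul(b, a), mul(c, a)):  # right distributivity
                    return False
    return True

n = 6
print(is_ring(range(n), lambda a, b: (a + b) % n, lambda a, b: (a * b) % n, 0, 1))  # -> True
```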
9.
Binomial theorem
–
In elementary algebra, the binomial theorem describes the algebraic expansion of powers of a binomial. For example, (x + y)⁴ = x⁴ + 4x³y + 6x²y² + 4xy³ + y⁴. The coefficient a in the term a x^b y^c is known as the binomial coefficient. These coefficients for varying n and b can be arranged to form Pascal's triangle, and these numbers also arise in combinatorics, where the binomial coefficient gives the number of different combinations of b elements that can be chosen from an n-element set. Special cases of the binomial theorem were known from ancient times. The Greek mathematician Euclid mentioned the special case of the binomial theorem for exponent 2. There is evidence that the binomial theorem for cubes was known by the 6th century in India. Binomial coefficients, as combinatorial quantities expressing the number of ways of selecting k objects out of n without replacement, were of interest to the ancient Hindus. The earliest known reference to this combinatorial problem is the Chandaḥśāstra by the Hindu lyricist Pingala. The commentator Halayudha from the 10th century A.D. explains this method using what is now known as Pascal's triangle. By the 6th century A.D. the Hindu mathematicians probably knew how to express this as a quotient n!/((n − k)! k!). The binomial theorem as such can be found in the work of the 11th-century Persian mathematician Al-Karaji, who described the triangular pattern of the binomial coefficients. He also provided a proof of both the binomial theorem and Pascal's triangle, using a primitive form of mathematical induction. The Persian poet and mathematician Omar Khayyam was probably familiar with the formula to higher orders. The binomial expansions of small degrees were known in the 13th century mathematical works of Yang Hui and also Chu Shih-Chieh. Yang Hui attributes the method to a much earlier 11th century text of Jia Xian. In 1544, Michael Stifel introduced the term binomial coefficient and showed how to use these coefficients to express (1 + a)^n in terms of (1 + a)^(n−1), via Pascal's triangle. 
Blaise Pascal studied the eponymous triangle comprehensively in the treatise Traité du triangle arithmétique; however, the pattern of numbers was already known to the European mathematicians of the late Renaissance, including Stifel, Niccolò Fontana Tartaglia, and Simon Stevin. Isaac Newton is generally credited with the generalized binomial theorem, valid for any rational exponent. This formula is also referred to as the binomial formula or the binomial identity. Using summation notation, it can be written as (x + y)^n = ∑_{k=0}^{n} C(n, k) x^(n−k) y^k = ∑_{k=0}^{n} C(n, k) x^k y^(n−k). A simple variant of the binomial formula is obtained by substituting 1 for y.
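The summation form of the theorem can be checked numerically; the sketch below computes ∑ C(n, k) x^(n−k) y^k with Python's math.comb for the binomial coefficients and compares it against (x + y)^n directly:

```python
import math

def binomial_expansion(x, y, n):
    """Sum of C(n, k) * x**(n-k) * y**k for k = 0..n."""
    return sum(math.comb(n, k) * x**(n - k) * y**k for k in range(n + 1))

x, y, n = 3, 2, 4
print(binomial_expansion(x, y, n) == (x + y)**n)  # -> True
```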
10.
Differential calculus
–
In mathematics, differential calculus is a subfield of calculus concerned with the study of the rates at which quantities change. It is one of the two traditional divisions of calculus, the other being integral calculus. The primary objects of study in differential calculus are the derivative of a function and related notions such as the differential. The derivative of a function at a chosen input value describes the rate of change of the function near that input value. The process of finding a derivative is called differentiation. Geometrically, the derivative at a point is the slope of the tangent line to the graph of the function at that point, provided that the derivative exists and is defined at that point. For a real-valued function of a single real variable, the derivative of the function at a point generally determines the best linear approximation to the function at that point. Differential calculus and integral calculus are connected by the fundamental theorem of calculus. Differentiation has applications to nearly all quantitative disciplines. For example, in physics, the derivative of the displacement of a moving body with respect to time is the velocity of the body, and the derivative of velocity with respect to time is acceleration. The derivative of the momentum of a body equals the force applied to the body. The reaction rate of a chemical reaction is a derivative. In operations research, derivatives determine the most efficient ways to transport materials. Derivatives are frequently used to find the maxima and minima of a function. Equations involving derivatives are called differential equations and are fundamental in describing natural phenomena. Derivatives and their generalizations appear in many fields of mathematics, such as complex analysis, functional analysis, differential geometry, measure theory, and abstract algebra. Suppose that x and y are real numbers and that y is a function of x; this relationship can be written as y = f(x). 
If f(x) is the equation for a straight line, then there are two real numbers m and b such that y = mx + b. In this slope-intercept form, the term m is called the slope and can be determined from the formula m = (change in y)/(change in x) = Δy/Δx. It follows that Δy = m Δx. A general function is not a line, so it does not have a slope. Geometrically, the derivative of f at the point x = a is the slope of the tangent line to the function f at the point a; this is often denoted f′(a) in Lagrange's notation or dy/dx|x = a in Leibniz's notation. Since the derivative is the slope of the linear approximation to f at the point a, the derivative (together with the value of f at a) determines the best linear approximation of f near a. If every point a in the domain of f has a derivative, there is a function sending every point a to the derivative of f at a. For example, if f(x) = x², then the derivative function is f′(x) = dy/dx = 2x.
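The Δy/Δx picture above suggests a simple numerical approximation of the derivative: shrink Δx to a small step h and form the symmetric difference quotient (the step size 1e-6 is an arbitrary choice for this sketch):

```python
def derivative(f, x, h=1e-6):
    """Approximate f'(x) by the central difference (f(x+h) - f(x-h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

f = lambda x: x**2
print(derivative(f, 3.0))  # close to f'(3) = 2*3 = 6
```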
11.
Limit (mathematics)
–
In mathematics, a limit is the value that a function or sequence approaches as the input or index approaches some value. Limits are essential to calculus and are used to define continuity, derivatives, and integrals. The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory. In formulas, a limit is usually written as lim_{n→c} f(n) = L and is read as "the limit of f of n as n approaches c equals L". Here lim indicates limit, and the fact that the function f approaches the limit L as n approaches c is represented by the right arrow. Suppose f is a real-valued function and c is a real number. Intuitively speaking, the expression lim_{x→c} f(x) = L means that f(x) can be made to be as close to L as desired by making x sufficiently close to c. The first inequality, 0 < |x − c|, means that the distance between x and c is greater than 0 and that x ≠ c, while the second, |x − c| < δ, indicates that x is within distance δ of c. Note that the definition of a limit can hold even if f(c) ≠ L; indeed, the function f need not even be defined at c. Now since x + 1 is continuous in x at 1, we can now plug in 1 for x to evaluate the limit. In addition to limits at finite values, functions can also have limits at infinity. For example, consider f(x) = (2x − 1)/x; in this case, the limit of f(x) as x approaches infinity is 2. In mathematical notation, lim_{x→∞} (2x − 1)/x = 2. Consider the following sequence: 1.79, 1.799, 1.7999, … It can be observed that the numbers are approaching 1.8, the limit of the sequence. Formally, suppose a1, a2, … is a sequence of real numbers. Intuitively, L is the limit of the sequence if eventually all elements of the sequence get arbitrarily close to the limit, since the absolute value |an − L| is the distance between an and L. Not every sequence has a limit; if it does, it is called convergent. One can show that a convergent sequence has only one limit. 
The limit of a sequence and the limit of a function are closely related. On one hand, the limit as n goes to infinity of a sequence (an) is simply the limit at infinity of a function a(n) defined on the natural numbers. On the other hand, a limit L of a function f(x) as x goes to infinity, if it exists, is the same as the limit of any sequence (an) that approaches L; note that one such sequence would be L + 1/n. In non-standard analysis, the limit of a sequence can be expressed as the standard part of the value a_H of the natural extension of the sequence at an infinite hypernatural index n = H. Thus, lim_{n→∞} an = st(a_H), where the standard part function st rounds off each finite hyperreal number to the nearest real number. This formalizes the intuition that for very large values of the index, the terms in the sequence are extremely close to the limit. Conversely, the standard part of a hyperreal a = [an] represented in the ultrapower construction by a Cauchy sequence (an) is simply the limit of that sequence.
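The limit at infinity above, lim_{x→∞} (2x − 1)/x = 2, can be probed numerically: evaluating at increasingly large inputs shows the values approaching 2. This is an illustration, not a proof, and the sample points are arbitrary:

```python
f = lambda x: (2 * x - 1) / x

# Evaluate at x = 10, 100, ..., 100000: values approach 2.
values = [f(10**k) for k in range(1, 6)]
print(values)               # 1.9, 1.99, 1.999, ...
print(abs(f(1e9) - 2))      # already within 1e-8 of the limit
```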
12.
Classification of discontinuities
–
Continuous functions are of utmost importance in mathematics, functions and applications. However, not all functions are continuous. If a function is not continuous at a point in its domain, one says that it has a discontinuity there. The set of all points of discontinuity of a function may be a discrete set. This article describes the classification of discontinuities in the simplest case of functions of a single real variable taking real values. For each of the following, consider a real valued function f of a real variable x. Consider the function f(x) = x² for x < 1, f(x) = 0 for x = 1, and f(x) = 2 − x for x > 1. The point x0 = 1 is a removable discontinuity. In other words, since the two one-sided limits exist and are equal, the limit L of f(x) as x approaches x0 exists and is equal to this same value; if the actual value of f(x0) is not equal to L, then x0 is called a removable discontinuity. This discontinuity can be removed to make f continuous at x0, or more precisely, redefined so that the new function is continuous there. This use of the term is abusive when applied to a point not in the domain, because continuity and discontinuity of a function are concepts defined only for points in the function's domain; such a point not in the domain is named a removable singularity. Consider the function f(x) = x² for x < 1, f(x) = 0 for x = 1, and f(x) = 2 − (x − 1)² for x > 1. Then the point x0 = 1 is a jump discontinuity. In this case, a single limit does not exist because the one-sided limits, L− and L+, exist and are finite, but are not equal: L− ≠ L+. Then x0 is called a jump discontinuity or step discontinuity; for this type of discontinuity, the function f may have any value at x0. For an essential discontinuity, only one of the two one-sided limits needs not exist or be infinite. Consider the function f(x) = sin(5/(x − 1)) for x < 1, f(x) = 0 for x = 1, and f(x) = 1/(x − 1) for x > 1. Then the point x0 = 1 is an essential discontinuity. In this case, one or both of the limits L− and L+ does not exist or is infinite, so x0 is an essential discontinuity, also called an infinite discontinuity. 
The set of points at which a function is continuous is always a Gδ set, and the set of discontinuities is an Fσ set. The set of discontinuities of a monotone function is at most countable. Thomae's function is discontinuous at every rational point, but continuous at every irrational point. The indicator function of the rationals, also known as the Dirichlet function, is discontinuous everywhere. See also: removable singularity, mathematical singularity, extension by continuity
13.
Continuous function
–
In mathematics, a continuous function is a function for which sufficiently small changes in the input result in arbitrarily small changes in the output. Otherwise, a function is said to be a discontinuous function. A continuous function with a continuous inverse function is called a homeomorphism. Continuity of functions is one of the core concepts of topology. The introductory portion of this article focuses on the special case where the inputs and outputs of functions are real numbers. In addition, this article discusses the definition for the general case of functions between two metric spaces. In order theory, especially in domain theory, one considers a notion of continuity known as Scott continuity. Other forms of continuity do exist but they are not discussed in this article. As an example, consider the function h(t), which describes the height of a growing flower at time t; this function is continuous. By contrast, if M(t) denotes the amount of money in a bank account at time t, then the function jumps at each point in time when money is deposited or withdrawn, so M(t) is discontinuous. A form of the epsilon-delta definition of continuity was first given by Bernard Bolzano in 1817. Cauchy defined continuity in terms of infinitesimal changes, and defined infinitely small quantities in terms of variable quantities. The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s; all three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872. The intuitive description of a continuous function as one whose graph can be drawn without lifting the pen is not a definition of continuity, since the function f(x) = 1/x is continuous on its whole domain of R ∖ {0} yet its graph cannot be drawn in one stroke. Informally, a function is continuous at a point if its graph does not have a hole or jump there; the graph has a "hole" or "jump" at a point c if the value of the function at c differs from its limiting value along points that are nearby.
Such a point is called a discontinuity. A function is then continuous if it has no holes or jumps, that is, if it is continuous at every point of its domain; otherwise, the function is discontinuous at the points where its value differs from its limiting value. There are several ways to make this definition mathematically rigorous. These definitions are equivalent to one another, so the most convenient definition can be used to determine whether a given function is continuous or not. In the definitions below, f : I → R is a function defined on a subset I of the set R of real numbers, and this subset I is referred to as the domain of f
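The "small changes in, small changes out" idea can be sketched with a brute-force ε–δ check; the function f(x) = x², the point c = 2, and the tolerances here are arbitrary choices for illustration:

```python
# Brute-force epsilon-delta check of continuity of f(x) = x*x at c = 2.
# eps, delta, and the sampling grid are arbitrary choices for this sketch.
def f(x):
    return x * x

c, eps = 2.0, 1e-3
delta = eps / 10  # small enough here, since |f(x) - f(c)| is about 4|x - c| near c

samples = [c + delta * (k / 50 - 1) for k in range(101)]  # x in [c - delta, c + delta]
print(all(abs(f(x) - f(c)) < eps for x in samples))  # True
```

Every sampled input within δ of c produces an output within ε of f(c); a genuine proof would, of course, bound all inputs in the interval rather than a finite sample.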
14.
Complex domain
–
A complex number is a number that can be expressed in the form a + bi, where a and b are real numbers and i is the imaginary unit, satisfying the equation i² = −1. In this expression, a is the real part and b is the imaginary part of the complex number. If z = a + bi, then ℜz = a and ℑz = b. Complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. The complex number a + bi can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose imaginary part is zero is a real number. In this way, the complex numbers are a field extension of the ordinary real numbers. As well as their use within mathematics, complex numbers have practical applications in many fields, including physics, chemistry, biology, economics, and electrical engineering. The Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers; he called them "fictitious" during his attempts to find solutions to cubic equations in the 16th century. Complex numbers allow solutions to certain equations that have no solutions in real numbers. For example, the equation (x + 1)² = −9 has no real solution, since the square of a real number cannot be negative; complex numbers provide a solution to this problem. The idea is to extend the real numbers with the imaginary unit i, where i² = −1. According to the fundamental theorem of algebra, all polynomial equations with real or complex coefficients in a single variable have a solution in complex numbers. A complex number is a number of the form a + bi; for example, −3.5 + 2i is a complex number. The real number a is called the real part of the complex number a + bi, and the real number b is called its imaginary part. By this convention the imaginary part does not include the imaginary unit: b, not bi, is the imaginary part. The real part of a complex number z is denoted by Re(z) or ℜ(z), and the imaginary part by Im(z) or ℑ(z). For example, Re(−3.5 + 2i) = −3.5 and Im(−3.5 + 2i) = 2. Hence, in terms of its real and imaginary parts, a complex number z is equal to Re(z) + Im(z)·i.
This expression is known as the Cartesian form of z. A real number a can be regarded as a complex number a + 0i, whose imaginary part is 0
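Python's built-in complex type mirrors these definitions; the sample value −3.5 + 2i is the one used in the text:

```python
z = complex(-3.5, 2)          # the complex number -3.5 + 2i
print(z.real, z.imag)         # Re(z) = -3.5, Im(z) = 2.0 (no unit i attached)

print(1j * 1j)                # the imaginary unit satisfies i^2 = -1

print(complex(4, 0) == 4)     # a real number is a complex number with Im = 0
```

Note that `z.imag` returns the real number 2.0, not 2i, matching the convention that the imaginary part does not include the unit.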
15.
Complex logarithm
–
In complex analysis, a complex logarithm function is an inverse of the complex exponential function, just as the real natural logarithm ln x is the inverse of the real exponential function e^x. Thus, a logarithm of a complex number z is a complex number w such that e^w = z. The notation for such a w is ln z or log z. Since every nonzero complex number z has infinitely many logarithms, care is required to give such notation an unambiguous meaning. If z = re^(iθ) with r > 0, then w = ln r + iθ is one logarithm of z. For a function to have an inverse, it must map distinct values to distinct values, i.e. be injective. But the complex exponential function is not injective, because e^(w+2πi) = e^w for any w. So the exponential function does not have an inverse function in the standard sense. There are two solutions to this problem: one can restrict the exponential to a region on which it is injective and invert that restriction, obtaining a branch of the logarithm, or one can regard the logarithm as a function on a Riemann surface. Restricting is analogous to the definition of arcsin x on [−1, 1] as the inverse of the restriction of sin θ to the interval [−π/2, π/2]: there are infinitely many real numbers θ with sin θ = x, but one chooses the one in [−π/2, π/2]. Branches have the advantage that they can be evaluated at individual complex numbers; on the other hand, the function on the Riemann surface is elegant in that it packages together all branches of log z and does not require an arbitrary choice as part of its definition. For each nonzero complex number z = x + yi, the principal value Log z is the logarithm whose imaginary part lies in the interval (−π, π]. The expression Log 0 is left undefined since there is no complex number w satisfying e^w = 0. The principal value can be described also in a few other ways. To give a formula for Log z, begin by expressing z in polar form, z = re^(iθ) with r > 0 and −π < θ ≤ π; then Log z = ln r + iθ. For example, Log(−3i) = ln 3 − πi/2. Another way to describe Log z is as the inverse of a restriction of the complex exponential function, as in the previous section: the exponential function maps the strip S of complex numbers whose imaginary part lies in (−π, π] bijectively to the punctured complex plane C× = C ∖ {0}. The conformal mapping section below explains the properties of this map in more detail.
When the notation log z appears without any particular logarithm having been specified, it is generally best to assume that the principal value is intended; in particular, this gives a value consistent with the real value of ln z when z is a positive real number. The capitalization in the notation Log is used by some authors to distinguish the principal value from other logarithms of z. Not all identities satisfied by ln extend to complex numbers: it is true that e^(Log z) = z for all z ≠ 0, but the identity Log e^z = z fails for z outside the strip S. For this reason, one cannot always apply Log to both sides of an identity e^z = e^w to deduce z = w. The function Log z is discontinuous at each negative real number, but continuous everywhere else in C×. To explain the discontinuity, consider what happens to Arg z as z approaches a negative real number a. If z approaches a from above, then Arg z approaches π, which is also the value of Arg a itself; but if z approaches a from below, then Arg z approaches −π
16.
Cauchy
–
Baron Augustin-Louis Cauchy FRS FRSE was a French mathematician who made pioneering contributions to analysis. He was one of the first to state and prove theorems of calculus rigorously, and he almost singlehandedly founded complex analysis and the study of permutation groups in abstract algebra. A profound mathematician, Cauchy had a great influence over his contemporaries and successors. His writings range widely in mathematics and mathematical physics; more concepts and theorems have been named for Cauchy than for any other mathematician. Cauchy was a prolific writer; he wrote approximately eight hundred research articles. Cauchy was the son of Louis-François Cauchy and Marie-Madeleine Desestre. Cauchy married Aloïse de Bure in 1818; she was a relative of the publisher who published most of Cauchy's works. By her he had two daughters, Marie Françoise Alicia and Marie Mathilde. Cauchy's father was a high official in the Parisian police of the Ancien Régime. He lost his position because of the French Revolution, which broke out one month before Augustin-Louis was born. The Cauchy family survived the revolution and the following Reign of Terror by escaping to Arcueil, where Cauchy received his first education, from his father. After the execution of Robespierre, it was safe for the family to return to Paris. There Louis-François Cauchy found himself a new bureaucratic job, and quickly moved up the ranks. When Napoleon Bonaparte came to power, Louis-François Cauchy was further promoted. The famous mathematician Lagrange was also a friend of the Cauchy family. On Lagrange's advice, Augustin-Louis was enrolled in the École Centrale du Panthéon, where most of the curriculum consisted of classical languages; the young and ambitious Cauchy, being a brilliant student, won many prizes in Latin and the humanities. In spite of these successes, Augustin-Louis chose an engineering career.
In 1805 he placed second out of 293 applicants on the entrance exam for the École Polytechnique; one of the main purposes of this school was to give future civil and military engineers a high-level scientific and mathematical education. The school functioned under military discipline, which caused the young Cauchy some problems in adapting. Nevertheless, he finished the Polytechnique in 1807, at the age of 18, and went on to the École des Ponts et Chaussées. He graduated in civil engineering, with the highest honors. After finishing school in 1810, Cauchy accepted a job as an engineer in Cherbourg. Cauchy's first two manuscripts were accepted; the third one was rejected
17.
Donald Knuth
–
Donald Ervin Knuth is an American computer scientist, mathematician, and professor emeritus at Stanford University. He is the author of the multi-volume work The Art of Computer Programming, and he contributed to the development of the rigorous analysis of the computational complexity of algorithms and systematized formal mathematical techniques for it. In the process he also popularized the asymptotic notation. Knuth strongly opposes granting software patents, having expressed his opinion to the United States Patent and Trademark Office and the European Patent Organisation. Knuth was born in Milwaukee, Wisconsin, to German-Americans Ervin Henry Knuth and Louise Marie Bohning. His father had two jobs, running a small printing company and teaching bookkeeping at Milwaukee Lutheran High School. Donald, a student at Milwaukee Lutheran High School, received academic accolades there; for example, in eighth grade, he entered a contest to find the number of words that the letters in "Ziegler's Giant Bar" could be rearranged to create. Although the judges only had 2,500 words on their list, Donald found 4,500 words. As prizes, the school received a new television and enough candy bars for all of his schoolmates to eat. Knuth had a difficult time choosing physics over music as his major at Case Institute of Technology. He also joined the Beta Nu Chapter of the Theta Chi fraternity. While studying physics at the Case Institute of Technology, Knuth was introduced to the IBM 650, one of the early mainframes. After reading the manual, Knuth decided to rewrite the assembly and compiler code for the machine used in his school. In 1958, Knuth created a program to help his school's basketball team win their games: he assigned values to players in order to gauge their probability of getting points, a novel approach that Newsweek and CBS Evening News later reported on. Knuth was one of the editors of the Engineering and Science Review.
In 1963, with mathematician Marshall Hall as his adviser, he earned a PhD in mathematics from the California Institute of Technology. After receiving his PhD, Knuth joined Caltech's faculty as an associate professor. He accepted a commission to write a book on computer programming language compilers, and he originally planned to publish this as a single book; as Knuth developed his outline for the book, he concluded that he required six volumes. He published the first volume in 1968. Knuth then left this position to join the Stanford University faculty. Knuth is a prolific writer as well as a computer scientist, and has been called the "father of the analysis of algorithms". In the 1970s, Knuth described computer science as "a totally new field with no real identity", where the standard of available publications was not that high: "A lot of the papers coming out were quite simply wrong. So one of my motivations was to put straight a story that had been very badly told." By 2013, the first three volumes and part one of volume four of his series had been published
18.
IEEE 754-2008
–
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point computation established in 1985 by the Institute of Electrical and Electronics Engineers. The standard addressed many problems found in the diverse floating-point implementations that made them difficult to use reliably and portably. Many hardware floating-point units now use the IEEE 754 standard. The international standard ISO/IEC/IEEE 60559:2011 has been approved for adoption through JTC1/SC25 under the ISO/IEEE PSDO Agreement and published. The binary formats in the original standard are included in the new standard along with three new basic formats. To conform to the current standard, an implementation must implement at least one of the basic formats as both an arithmetic format and an interchange format. As of September 2015, the standard is being revised to incorporate clarifications. An IEEE 754 format is a set of representations of numerical values and symbols; a format may also include how the set is encoded. A format comprises finite numbers, which may be either base 2 or base 10; each finite number is described by three integers: s = a sign, c = a significand, and q = an exponent. The numerical value of a finite number is (−1)^s × c × b^q, where b is the base, also called the radix. For example, if the base is 10, the sign is 1 (indicating negative), the significand is 12345, and the exponent is −3, then the represented value is −12345 × 10^−3 = −12.345. A format also comprises two kinds of NaN: a quiet NaN and a signaling NaN. A NaN may carry a payload that is intended for diagnostic information indicating the source of the NaN; the sign of a NaN has no meaning, but it may be predictable in some circumstances. In the decimal32 format, for example, the smallest non-zero positive number that can be represented is 1 × 10^−101 and the largest is 9999999 × 10^90. The numbers ±b^(1−emax) are the smallest normal numbers; non-zero numbers between these smallest numbers are called subnormal numbers. Zero values are finite values with significand 0; these are signed zeros, and the sign bit specifies whether a zero is +0 or −0.
Some numbers may have several representations in the model that has just been described. For instance, if b = 10 and p = 7, then −12.345 can be represented by −12345 × 10^−3, −123450 × 10^−4, and −1234500 × 10^−5. However, for most operations, such as arithmetic operations, the result does not depend on the representation of the inputs. For the decimal formats, any representation is valid, and the set of these representations is called a cohort. When a result can have several representations, the standard specifies which member of the cohort is chosen. For the binary formats, the representation is made unique by choosing the smallest representable exponent. For numbers with an exponent in the normal range, the leading bit of the significand will always be 1. Consequently, the leading 1 bit can be implied rather than explicitly present in the memory encoding; this rule is called the leading bit convention, implicit bit convention, or hidden bit convention
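A hedged sketch in Python (binary64 is chosen only as a convenient example format): unpacking a double with the struct module exposes the sign, biased exponent, and significand fields, including the implicit leading 1 bit described above.

```python
import struct

def decode_binary64(x):
    # Reinterpret the 64 bits of an IEEE 754 double: 1 sign bit,
    # 11 exponent bits (bias 1023), 52 explicit significand bits.
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    s = bits >> 63
    biased = (bits >> 52) & 0x7FF
    frac = bits & ((1 << 52) - 1)
    # Normal numbers carry an implicit (hidden) leading 1 bit.
    c = (1 << 52) | frac if biased != 0 else frac
    return s, biased - 1023, c

s, q, c = decode_binary64(-12.345)
# Reconstruct the value as (-1)^s * c * 2^(q - 52):
print((-1) ** s * c * 2.0 ** (q - 52))  # -12.345
```

Because the reconstruction multiplies the integer significand by an exact power of two, it reproduces the stored double bit-for-bit.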
19.
NaN
–
In computing, NaN, standing for "not a number", is a numeric data type value representing an undefined or unrepresentable value, especially in floating-point calculations. Systematic use of NaNs was introduced by the IEEE 754 floating-point standard in 1985; two separate kinds of NaNs are provided, termed quiet NaNs and signaling NaNs. An invalid operation is not the same as an arithmetic overflow or an arithmetic underflow. For example, a bit-wise IEEE floating-point standard single-precision NaN would be s111 1111 1xxx xxxx xxxx xxxx xxxx xxxx, where s is the sign. Some bits from x are used to determine the type of NaN; the remaining bits encode a payload. Floating-point operations other than ordered comparisons normally propagate a quiet NaN, and a comparison with a NaN always returns an unordered result, even when comparing with itself. The comparison predicates are either signaling or non-signaling; the signaling versions signal the invalid operation exception for such comparisons. The equality and inequality predicates are non-signaling, so x = x returning false can be used to test if x is a quiet NaN. The other standard comparison predicates are all signaling if they receive a NaN operand. The predicate isNaN(x) determines whether a value is a NaN and never signals an exception, even if x is a signaling NaN. The propagation of quiet NaNs through arithmetic operations allows errors to be detected at the end of a sequence of operations without extensive testing during intermediate stages. In section 6.2 of the revised IEEE 754-2008 standard there are two anomalous functions (the maxNum and minNum functions) that favor numbers over quiet NaNs: if just one of the operands is a NaN, then the value of the other operand is returned. There are three kinds of operations that can return NaN: operations with a NaN as at least one operand, and indeterminate forms, including the divisions 0/0 and ±∞/±∞.
The additions ∞ + (−∞) and (−∞) + ∞, and equivalent subtractions, are further indeterminate forms. The standard has alternative functions for powers: the standard pow function and the integer-exponent pown function define 0^0, 1^∞, and ∞^0 as 1, while the powr function defines all three of these forms as invalid operations and so returns NaN. The third kind is real operations with complex results, for example: the square root of a negative number; the logarithm of a negative number; the inverse sine or cosine of a number that is less than −1 or greater than +1. NaNs may also be explicitly assigned to variables, typically as a representation for missing values. Prior to the IEEE standard, programmers often used a special value to represent undefined or missing values. NaNs are not necessarily generated in all the above cases: if an operation can produce an exception condition and traps are not masked, then the operation will cause a trap instead. If an operand is a quiet NaN, and there isn't also a signaling NaN operand, then there is no exception condition. Explicit assignments will not cause an exception even for signaling NaNs
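Python floats follow IEEE 754, so the quiet-NaN behavior described above is easy to observe (the sample expressions are arbitrary choices):

```python
import math

nan = float('nan')

print(nan == nan)                 # False: NaN compares unordered, even to itself
print(nan < 1.0, nan > 1.0)       # False False
print(nan != nan)                 # True: the "x = x returns false" NaN test
print(math.isnan(nan))            # True: isNaN never signals

print(math.isnan(nan + 1.0))      # True: quiet NaNs propagate through arithmetic
print(math.inf - math.inf)        # nan: an indeterminate form
```

The self-comparison trick works precisely because NaN is the only floating-point value that is not equal to itself.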
20.
C99
–
C99 is an informal name for ISO/IEC 9899:1999, a past version of the C programming language standard; it has been superseded by C11, the version of the standard published in 2011. Normative Amendment 1 created a new standard for C in 1995, but only to correct some details of the 1989 standard and to add more extensive support for international character sets. The standard underwent further revision in the late 1990s, leading to the publication of ISO/IEC 9899:1999 in 1999; the language defined by that version of the standard is commonly referred to as C99. The international C standard is maintained by the working group ISO/IEC JTC1/SC22/WG14. C99 is, for the most part, backward compatible with C89, but it is stricter in some ways. In particular, a declaration that lacks a type specifier no longer has int implicitly assumed; the C standards committee decided that it was of more value for compilers to diagnose inadvertent omission of the type specifier than to silently process legacy code that relied on implicit int. A number of C99 features were later included in C++; variable-length arrays are not among these included parts because C++'s Standard Template Library already includes similar functionality. A major feature of C99 is its numerics support, and in particular its support for access to the features of IEEE 754-1985 floating-point hardware present in the vast majority of modern processors; platforms without IEEE 754 hardware can also implement it in software. The four arithmetic operations and square root are correctly rounded as defined by IEEE 754. The intermediate result types for operands of a given precision are summarized in the table on the right. C99 defines a number of floating-point expression evaluation methods, and the current compilation mode can be checked to ensure it meets the assumptions the code was written under. Special values such as NaN and positive or negative infinity can be tested, and long double is defined as IEEE 754 double-extended or quad precision if available.
Using higher precision than required for intermediate computations can minimize round-off error in the main function to be evaluated. As an example, consider a function defined by a continued fraction: although it appears that some arguments, e.g. 3.0, would lead to a divide-by-zero error, the function is in fact well-defined at 3.0, and the raised divide-by-zero flag is not an error in this case; in some cases, other exceptions, such as overflow, may be regarded as an error. __STDC_IEC_559__ is to be defined only if Annex F (IEC 60559 floating-point arithmetic) is fully implemented by the compiler. The default rounding mode is round-to-nearest for IEEE 754, but explicitly setting the rounding mode toward + and − infinity can be used to diagnose numerical instability. Note that this method can be used even if compute_fn is part of a compiled binary library, but, depending on the function, numerical instabilities cannot always be detected. A standard macro __STDC_VERSION__ is defined with value 199901L to indicate that C99 support is available. Historically, Microsoft has been slow to implement new C features in their Visual C++ tools; however, with the introduction of Visual C++ 2013 Microsoft implemented a limited subset of C99, which was expanded in Visual C++ 2015. Work continues on technical reports addressing decimal floating point and additional mathematical special functions, and the C and C++ standards committees have been collaborating on specifications for threaded programming
21.
Java (programming language)
–
Java is a general-purpose computer programming language that is concurrent, class-based, object-oriented, and specifically designed to have as few implementation dependencies as possible. It is intended to let application developers "write once, run anywhere": Java applications are typically compiled to bytecode that can run on any Java virtual machine regardless of computer architecture. As of 2016, Java is one of the most popular programming languages in use, particularly for client-server web applications. Java was originally developed by James Gosling at Sun Microsystems and released in 1995 as a core component of Sun Microsystems' Java platform. The language derives much of its syntax from C and C++. The original and reference implementation Java compilers, virtual machines, and class libraries were originally released by Sun under proprietary licences. As of May 2007, in compliance with the specifications of the Java Community Process, Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java, GNU Classpath, and IcedTea-Web. James Gosling, Mike Sheridan, and Patrick Naughton initiated the Java language project in June 1991. Java was originally designed for interactive television, but it was too advanced for the digital cable television industry at the time. The language was initially called Oak after an oak tree that stood outside Gosling's office; later the project went by the name Green and was finally renamed Java, from Java coffee. Gosling designed Java with a C/C++-style syntax that system and application programmers would find familiar. Sun Microsystems released the first public implementation as Java 1.0 in 1995. It promised "Write Once, Run Anywhere", providing no-cost run-times on popular platforms. Fairly secure and featuring configurable security, it allowed network- and file-access restrictions.
Major web browsers soon incorporated the ability to run Java applets within web pages, and Java quickly became popular, though it is now mostly used outside of browsers. In January 2016, Oracle announced that Java runtime environments based on JDK 9 will discontinue the browser plugin. The Java 1.0 compiler was re-written in Java by Arthur van Hoff to comply strictly with the Java 1.0 language specification. With the advent of Java 2, new versions had multiple configurations built for different types of platforms: J2EE included technologies and APIs for enterprise applications typically run in server environments, while the desktop version was renamed J2SE. In 2006, for marketing purposes, Sun renamed new J2 versions as Java EE, Java ME, and Java SE, respectively. In 1997, Sun Microsystems approached the ISO/IEC JTC1 standards body and later the Ecma International to formalize Java, but it soon withdrew from the process; Java remains a de facto standard, controlled through the Java Community Process. At one time, Sun made most of its Java implementations available without charge, despite their proprietary software status; Sun generated revenue from Java through the selling of licenses for specialized products such as the Java Enterprise System. On November 13, 2006, Sun released much of its Java virtual machine as free and open-source software, under the terms of the GNU General Public License. Sun's vice-president Rich Green said that Sun's ideal role with regard to Java was as an "evangelist". This did not prevent Oracle from filing a lawsuit against Google shortly after that for using Java inside the Android SDK. Java software runs on everything from laptops to data centers, game consoles to scientific supercomputers. On April 2, 2010, James Gosling resigned from Oracle. There were five primary goals in the creation of the Java language: it must be simple, object-oriented, and familiar; it must be robust and secure; it must be architecture-neutral and portable; it must execute with high performance; and it must be interpreted, threaded, and dynamic
22.
.NET Framework
–
.NET Framework is a software framework developed by Microsoft that runs primarily on Microsoft Windows. It includes a large class library named the Framework Class Library (FCL) and provides language interoperability across several programming languages. FCL and the Common Language Runtime (CLR) together constitute .NET Framework. FCL provides user interface, data access, database connectivity, cryptography, web application development, numeric algorithms, and network communications. Programmers produce software by combining their source code with .NET Framework. The framework is intended to be used by most new applications created for the Windows platform. Silverlight was available as a web browser plugin. Microsoft began developing .NET Framework in the late 1990s, originally under the name of Next Generation Windows Services. By late 2000, the first beta versions of .NET 1.0 were released. In August 2000, Microsoft, Hewlett-Packard, and Intel worked to standardize the Common Language Infrastructure (CLI) and C#. By December 2001, both were ratified Ecma International standards; the International Organization for Standardization followed in April 2003. The current version of the ISO standards is ISO/IEC 23271:2012. While Microsoft and their partners hold patents for CLI and C#, ECMA and ISO require that all patents essential to implementation be made available under reasonable and non-discriminatory terms. The firms agreed to meet these terms, and to make the patents available royalty-free; however, this did not apply to the part of .NET Framework not covered by ECMA-ISO standards, which included Windows Forms, ADO.NET, and ASP.NET. Patents that Microsoft holds in these areas may have deterred non-Microsoft implementations of the full framework. On 3 October 2007, Microsoft announced that the source code for the .NET Framework 3.5 libraries was to become available under the Microsoft Reference Source License. The source code became available online on 16 January 2008 and included the BCL, ASP.NET, ADO.NET, Windows Forms, and WPF.
Scott Guthrie of Microsoft promised that LINQ, WCF, and WF libraries were being added. In November 2014, Microsoft also produced an update to its patent grants, which further extends the scope beyond its prior pledges. The new grant does maintain the restriction that any implementation must maintain compliance with the mandatory parts of the CLI specification. On 31 March 2016, Microsoft announced at Microsoft Build that it will completely relicense Mono under an MIT License, even in scenarios where formerly a commercial license was needed, and it was announced that the Mono Project was contributed to the .NET Foundation. These developments followed the acquisition of Xamarin, which began in February 2016 and was finished on 18 March 2016. Microsoft's press release highlights that the cross-platform commitment now allows for a fully open-source .NET stack; however, Microsoft does not plan to release the source for WPF or Windows Forms. By implementing the core aspects of .NET Framework within the scope of CLI, this functionality is not tied to a single language. Microsoft's implementation of CLI is the Common Language Runtime; it serves as the execution engine of .NET Framework
23.
SageMath
–
SageMath is mathematical software with features covering many aspects of mathematics, including algebra, combinatorics, numerical mathematics, number theory, and calculus. The originator and leader of the SageMath project, William Stein, is a mathematician at the University of Washington. SageMath uses a Python-like syntax, supporting procedural, functional, and object-oriented constructs. Features of SageMath include a browser-based notebook for review and re-use of previous inputs and outputs, including graphics, compatible with Firefox, Opera, Konqueror, Google Chrome, and Safari. Notebooks can be accessed locally or remotely, and the connection can be secured with HTTPS; this way of interfacing with Sage is officially documented. William Stein realized when designing Sage that there were many open-source mathematics software packages already written in different languages, namely C, C++, Common Lisp, and Fortran. Rather than reinventing the wheel, Sage integrates many specialized mathematics software packages into a common interface; even so, Sage contains hundreds of thousands of unique lines of code adding new functions and creating the interface between its components. Both students and professionals contribute to the development of SageMath, which is supported by both volunteer work and grants; however, it was not until 2016 that the first full-time Sage developer was hired. Only the major releases are listed below; SageMath practices the "release early, release often" concept, with releases every few weeks or months. In total, there have been over 300 releases, although their frequency has decreased. In 2007, SageMath won first prize in the software division of Les Trophées du Libre; in 2012, it was one of the organizations selected for the Google Summer of Code. SageMath has been cited in a variety of publications. Both binaries and source code are available for SageMath from the download page.
Cython can increase the speed of SageMath programs, as the Python code is converted into C. SageMath is free software, distributed under the terms of the GNU General Public License version 3. SageMath is available in many ways: the source code can be downloaded from the downloads page, and, although not recommended for end users, development releases of SageMath are also available. Many Linux distributions also include SageMath in their repositories, and binaries can be downloaded for Linux, macOS, and Solaris. A live CD containing a bootable Linux operating system is also available; this allows usage of SageMath without a Linux installation. Users could formerly use an online version of SageMath at sagenb.org, and can use a single-cell version of SageMath at sagecell.sagemath.org or embed a single SageMath cell into any web page
24.
Maple (software)
–
Maple is a symbolic and numeric computing environment, and is also a multi-paradigm programming language. Developed by Maplesoft, Maple also covers other aspects of technical computing, including visualization, data analysis, and matrix computation. A toolbox, MapleSim, adds functionality for physical modeling. Users can enter mathematics in traditional mathematical notation, and custom user interfaces can also be created. There is support for numeric computation to arbitrary precision, as well as for symbolic computation and visualization; examples of symbolic computations are given below. Maple incorporates a dynamically typed programming language which resembles Pascal. The language permits variables of lexical scope, and there are also interfaces to other languages, as well as an interface to Excel. Maple supports MathML 2.0, a W3C format for representing and interpreting mathematical expressions, including their display in Web pages. Maple is based on a kernel written in C. Most functionality is provided by libraries, which come from a variety of sources; most of the libraries are written in the Maple language and have viewable source code. Many numerical computations are performed by the NAG Numerical Libraries and the ATLAS libraries; different functionality in Maple requires numerical data in different formats. Symbolic expressions are stored in memory as directed acyclic graphs. The standard interface and calculator interface are written in Java. The first concept of Maple arose from a meeting in November 1980 at the University of Waterloo, where researchers wished to purchase a computer powerful enough to run Macsyma. Instead, it was decided that they would develop their own computer algebra system that would be able to run on lower-cost computers. The first limited version appeared in December 1980, and Maple was demonstrated at conferences beginning in 1982. The name is a reference to Maple's Canadian heritage. 
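Arbitrary-precision numeric computation of the kind described above can be sketched with Python's standard decimal module. This is an illustration of the idea only, not of Maple itself or its kernel:

```python
from decimal import Decimal, getcontext

# Request 50 significant digits -- far beyond hardware floating point.
getcontext().prec = 50

sqrt2 = Decimal(2).sqrt()
print(sqrt2)

# Squaring the result recovers 2 to roughly the working precision.
error = abs(sqrt2 * sqrt2 - Decimal(2))
```

Raising the precision is a single context change; the same computation then carries as many digits as requested.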
By the end of 1983, over 50 universities had copies of Maple installed on their machines. In 1984, the research group arranged with Watcom Products Inc. to license and distribute the first commercially available version, Maple 3.3. In 1988 Waterloo Maple Inc. was founded; the company's original goal was to manage the distribution of the software. In 1989, the first graphical user interface for Maple was developed and included with version 4.3 for the Macintosh. X11 and Windows versions of the new interface followed in 1990 with Maple V.
25.
Mathematica
–
Wolfram Mathematica is a mathematical symbolic computation program, sometimes termed a computer algebra system, used in many scientific, engineering, mathematical, and computing fields. It was conceived by Stephen Wolfram and is developed by Wolfram Research of Champaign, Illinois. The Wolfram Language is the programming language used in Mathematica. The kernel interprets expressions and returns result expressions. All content and formatting can be generated algorithmically or edited interactively. Standard word processing capabilities are supported, including real-time multilingual spell-checking. Documents can be structured using a hierarchy of cells, which allow for outlining and sectioning of a document and support automatic numbering and index creation; documents can be presented in a slideshow environment for presentations. Notebooks and their contents are represented as Mathematica expressions that can be created, modified, or analyzed by Mathematica programs, or converted to other formats. The front end includes development tools such as a debugger, input completion, and automatic syntax highlighting. Among the alternative front ends is the Wolfram Workbench, an Eclipse-based integrated development environment; it provides project-based code development tools for Mathematica, including revision management, debugging, profiling, and testing. There is a plugin for IntelliJ IDEA-based IDEs to work with Wolfram Language code which, in addition to syntax highlighting, can analyse and auto-complete local variables. The Mathematica Kernel also includes a command line front end. Other interfaces include JMath, based on GNU readline, and MASH, which runs self-contained Mathematica programs from the UNIX command line. Version 5.2 added automatic multi-threading when computations are performed on multi-core computers. This release included CPU-specific optimized libraries; in addition, Mathematica is supported by third-party specialist acceleration hardware such as ClearSpeed. 
Support for CUDA and OpenCL GPU hardware was added in 2010. Also, since version 8, it can generate C code, which is automatically compiled by a system C compiler such as GCC or Microsoft Visual Studio. A free-of-charge version, Wolfram CDF Player, is provided for running Mathematica programs that have been saved in the Computable Document Format. It can also view standard Mathematica files, but not run them, and it includes plugins for common web browsers on Windows and Macintosh. webMathematica allows a web browser to act as a front end to a remote Mathematica server; it is designed to allow a user-written application to be remotely accessed via a browser on any platform. It does not provide full access to Mathematica, and, due to bandwidth limitations, interactive 3D graphics are not fully supported within a web browser. Wolfram Language code can be converted to C code or to an automatically generated DLL. Wolfram Language code can also be run on a Wolfram cloud service as a web-app or as an API, either on Wolfram-hosted servers or in an installation of the Wolfram Enterprise Private Cloud. Communication with other applications occurs through a protocol called the Wolfram Symbolic Transfer Protocol; it allows communication between the Wolfram Mathematica kernel and front end, and also provides a general interface between the kernel and other applications.
26.
Wolfram Alpha
–
Wolfram Alpha is a computational knowledge engine or answer engine developed by Wolfram Research, which was founded by Stephen Wolfram. Users submit queries and computation requests via a text field; Wolfram Alpha then computes answers and relevant visualizations from a knowledge base of curated, structured data that come from other sites and books. The site uses a portfolio of automated and manual methods, including statistics, visualization, and source cross-checking. The curated data makes Alpha different from semantic search engines, which index a large number of answers and then try to match the question to one. It is able to respond to particularly-phrased natural-language fact-based questions such as "Where was Mary Robinson born?", or more complex questions such as "How old was Queen Elizabeth II in 1974?". Wolfram Alpha does not answer queries which require a narrative response, such as "What is the difference between the Julian and the Gregorian calendars?", but it will answer factual or computational questions such as "June 1 in Julian calendar". Mathematical symbolism can be parsed by the engine, which typically responds with more than the numerical results. For example, the limit of sin(x)/x as x approaches 0 yields the correct limiting value of 1, as well as a plot and up to 235 terms of the Taylor series. It is also able to perform calculations on data using more than one source; for example, "What is the fifty-second smallest country by GDP per capita?". Wolfram Alpha is written in 15 million lines of Wolfram Language code and runs on more than 10,000 CPUs. The database currently includes hundreds of datasets, such as All Current, and the datasets have been accumulated over several years. The curated datasets are checked for quality either by a scientist or another expert in a relevant field. One example of a live dataset that Wolfram Alpha can use is the profile of a Facebook user, through inputting the "facebook report" query. 
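The limit quoted above can be checked numerically in ordinary Python. This is a quick illustration of the mathematical fact, not of Wolfram Alpha's symbolic method:

```python
import math

# sin(x)/x approaches 1 as x approaches 0; evaluate at shrinking x.
for x in (1.0, 0.1, 0.001, 1e-6):
    print(x, math.sin(x) / x)

# The deviation from 1 is about x**2 / 6, so at x = 1e-6 it is ~1.7e-13.
gap = abs(math.sin(1e-6) / 1e-6 - 1)
```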
Within two weeks of launching the Facebook analytics service, 400,000 users had used it. Downloadable query results are behind a paywall, but summaries are accessible to free accounts. Wolfram Alpha has been used to power some searches in Microsoft Bing. Launch preparations began on May 15, 2009 at 7 pm CDT and were broadcast live on Justin.tv; the plan was to launch the service a few hours later, and the service was launched on May 18, 2009. Wolfram Alpha has received mixed reviews. Wolfram Alpha advocates point to its potential, some even stating that how it determines results is more important than its current usefulness. On December 3, 2009, an iOS app was introduced. Some users considered the initial $50 price of the iOS app unnecessarily high, and they also complained about the simultaneous removal of the mobile formatting option for the site. Wolfram responded by lowering the price to $2 and offering a refund to existing customers. On October 6, 2010 an Android version of the app was released, and it is now available for Kindle Fire and Nook.
27.
MATLAB
–
MATLAB is a multi-paradigm numerical computing environment and fourth-generation programming language. Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, and an additional package, Simulink, adds graphical multi-domain simulation and model-based design for dynamic and embedded systems. In 2004, MATLAB had around one million users across industry and academia; MATLAB users come from various backgrounds in engineering, science, and economics. MATLAB was created by Cleve Moler, the chairman of the computer science department at the University of New Mexico. He designed it to give his students access to LINPACK and EISPACK without them having to learn Fortran, and it soon spread to other universities and found a strong audience within the applied mathematics community. Jack Little, an engineer, was exposed to it during a visit Moler made to Stanford University in 1983. Recognizing its commercial potential, he joined with Moler and Steve Bangert; they rewrote MATLAB in C and founded MathWorks in 1984 to continue its development. These rewritten libraries were known as JACKPAC. In 2000, MATLAB was rewritten to use a newer set of libraries for matrix manipulation, LAPACK. MATLAB was first adopted by researchers and practitioners in control engineering, Little's specialty, and it is now also used in education, in particular the teaching of linear algebra and numerical analysis, and is popular amongst scientists involved in image processing. The MATLAB application is built around the MATLAB scripting language. Common usage of the MATLAB application involves using the Command Window as an interactive mathematical shell or executing text files containing MATLAB code. Variables are defined using the assignment operator, =. MATLAB is a weakly typed programming language because types are implicitly converted. It is a dynamically typed language because variables can be assigned without declaring their type, except if they are to be treated as symbolic objects. 
Values can come from constants, from computation involving values of other variables, or from the output of a function. A simple array is defined using the colon syntax: init:increment:terminator. For instance, array = 1:2:9 defines a variable named array which is an array consisting of the values 1, 3, 5, 7, and 9; that is, the array starts at 1, increments from the previous value by 2 at each step, and stops once it reaches 9. The increment value can be left out of this syntax; ari = 1:5 assigns to the variable named ari an array with the values 1, 2, 3, 4, and 5, since the default value of 1 is used as the increment. Indexing is one-based, which is the usual convention for matrices in mathematics, although not for some programming languages such as C and C++. Matrices can be defined by separating the elements of a row with blank space or a comma; the list of elements should be surrounded by square brackets. Parentheses are used to access elements and subarrays; sets of indices can be specified by expressions such as 2:4, which evaluates to [2, 3, 4].
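The semantics of the colon syntax can be mimicked in a few lines of Python. This is an illustrative sketch (assuming a positive increment), not MATLAB itself; note that MATLAB includes the terminator when it is hit, while Python's built-in range excludes its stop value:

```python
def colon(start, stop, step=1):
    """Mimic MATLAB's start:step:stop for positive step,
    including the terminator when the sequence lands on it."""
    out = []
    value = start
    while value <= stop:
        out.append(value)
        value += step
    return out

print(colon(1, 9, 2))  # [1, 3, 5, 7, 9], like 1:2:9
print(colon(1, 5))     # [1, 2, 3, 4, 5], like 1:5
```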
28.
Python (programming language)
–
Python is a widely used high-level programming language for general-purpose programming, created by Guido van Rossum and first released in 1991. The language provides constructs intended to enable writing clear programs on both a small and large scale, and it has a large and comprehensive standard library. Python interpreters are available for many operating systems, allowing Python code to run on a wide variety of systems. CPython, the reference implementation of Python, is open-source software with a community-based development model, and is managed by the non-profit Python Software Foundation. About the origin of Python, Van Rossum wrote in 1996: "Over six years ago, in December 1989, I was looking for a 'hobby' programming project that would keep me occupied during the week around Christmas. My office would be closed, but I had a home computer. I decided to write an interpreter for the new scripting language I had been thinking about lately. I chose Python as a working title for the project, being in a slightly irreverent mood." Python 2.0 was released on 16 October 2000 and had major new features, including a cycle-detecting garbage collector; with this release the development process was changed and became more transparent. Python 3.0, a major, backwards-incompatible release, was released on 3 December 2008 after a long period of testing. Many of its features have been backported to the backwards-compatible Python 2.6.x and 2.7.x version series. The end-of-life date for Python 2.7 was initially set at 2015. Many other paradigms are supported via extensions, including design by contract and logic programming. Python uses dynamic typing and a mix of reference counting and a garbage collector for memory management. An important feature of Python is dynamic name resolution, which binds method and variable names during program execution. The design of Python offers some support for functional programming in the Lisp tradition. 
The language has map, reduce, and filter functions; list comprehensions; dictionaries; and sets, and the standard library has two modules (itertools and functools) that implement functional tools borrowed from Haskell and Standard ML. Python can also be embedded in existing applications that need a programmable interface. While offering choice in coding methodology, the Python philosophy rejects exuberant syntax, such as that of Perl, in favor of a sparser, less-cluttered grammar. As Alex Martelli put it: "To describe something as clever is not considered a compliment in the Python culture." Python's philosophy rejects the Perl "there is more than one way to do it" approach to language design in favor of "there should be one—and preferably only one—obvious way to do it". Python's developers strive to avoid premature optimization, and moreover reject patches to non-critical parts of CPython that would offer an increase in speed at the cost of clarity.
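A small sketch of those built-ins and the two functional standard-library modules in action:

```python
from functools import reduce
from itertools import accumulate

nums = [1, 2, 3, 4, 5]

squares = [n * n for n in nums]                   # list comprehension
evens = list(filter(lambda n: n % 2 == 0, nums))  # filter built-in
doubled = list(map(lambda n: 2 * n, nums))        # map built-in
total = reduce(lambda a, b: a + b, nums)          # fold, from functools
running = list(accumulate(nums))                  # running sums, from itertools

print(squares)  # [1, 4, 9, 16, 25]
print(total)    # 15
```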
29.
Magma computer algebra system
–
Magma is a computer algebra system designed to solve problems in algebra, number theory, geometry, and combinatorics. It is named after the algebraic structure magma, and it runs on Unix-like operating systems as well as Windows. Magma is produced and distributed by the Computational Algebra Group within the School of Mathematics and Statistics at the University of Sydney. In late 2006, the book Discovering Mathematics with Magma was published by Springer as volume 19 of the Algorithms and Computations in Mathematics series. The Magma system is used extensively within pure mathematics. The predecessor of the Magma system was named Cayley, after Arthur Cayley. Magma was officially released in August 1993; version 2.0 of Magma was released in June 1996, and subsequent versions 2.X have been released approximately once per year. All students, researchers, and faculty associated with a participating institution are able to access Magma for free. Group theory: Magma includes permutation, matrix, finitely presented, soluble, abelian, polycyclic, braid, and straight-line program groups; several databases of groups are also included. Number theory: integer factorization algorithms include the Elliptic Curve Method, the Quadratic Sieve, and the Number Field Sieve. Algebraic number theory: Magma includes the KANT computer algebra system for comprehensive computations in algebraic number fields; a special type also allows one to compute in the algebraic closure of a field. Module theory and linear algebra: Magma contains asymptotically fast algorithms for all fundamental dense matrix operations. Commutative algebra and Gröbner bases: Magma has an efficient implementation of the Faugère F4 algorithm for computing Gröbner bases. Representation theory: Magma has extensive tools for computing in representation theory, including the computation of character tables of finite groups.
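As a toy illustration of what an integer factorization routine does, here is Pollard's rho method in Python. This is a far simpler algorithm than the Elliptic Curve Method, Quadratic Sieve, or Number Field Sieve that Magma implements, and is only a sketch of the general idea:

```python
from math import gcd

def pollard_rho(n):
    """Find a non-trivial factor of a composite n using Pollard's rho.
    Iterates x -> x*x + c mod n and watches for a cycle via gcd."""
    if n % 2 == 0:
        return 2
    for c in range(1, 20):          # retry with different constants if needed
        x, y, d = 2, 2, 1
        while d == 1:
            x = (x * x + c) % n     # tortoise: one step
            y = (y * y + c) % n     # hare: two steps
            y = (y * y + c) % n
            d = gcd(abs(x - y), n)
        if d != n:                  # d == n means this c failed; try another
            return d
    return None

factor = pollard_rho(8051)          # 8051 = 83 * 97
```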
30.
PARI/GP
–
PARI/GP is a computer algebra system with the main aim of facilitating number theory computations. Versions 2.1.0 and higher are distributed under the GNU General Public License, and it runs on most common operating systems. The PARI/GP system is a package that is capable of doing formal computations on recursive types at high speed; its three main strengths are its speed, the possibility of directly using data types that are familiar to mathematicians, and its extensive algebraic number theory module. The PARI/GP system consists of the following standard components. PARI is a C library, allowing for fast computations. gp is an interactive command-line interface giving access to the PARI functions; it functions as a programmable calculator which contains most of the control instructions of a standard language like C. GP is the name of gp's scripting language, which can be used to program gp. Also available is gp2c, the GP-to-C compiler, which compiles GP scripts into the C language and transparently loads the resulting functions into gp; the advantage of this is that gp2c-compiled scripts will typically run three to four times faster. gp2c understands almost all of GP, with the exception of Lists. PARI/GP can compute factorizations, perform elliptic curve computations, and perform algebraic number theory calculations; it also allows computations with matrices, polynomials, power series, and algebraic numbers, and implements many special functions. PARI/GP comes with its own built-in graphical plotting capability. PARI/GP has some symbolic manipulation capability, e.g. multivariate polynomial and rational function handling, and it also has some formal integration and differentiation capabilities. PARI/GP can be compiled with GMP, providing faster computations than PARI/GP's native arbitrary-precision kernel. PARI/GP's progenitor was a program named Isabelle, an interpreter for higher arithmetic, written in 1979 by Henri Cohen and François Dress at the Université Bordeaux 1. 
The name PARI is a pun about the early stages of the project, when the authors started to implement a library for "Pascal ARIthmetic" in the Pascal programming language. The first version of the gp calculator was originally called GPC; the trailing C was eventually dropped. Below are some samples of the gp calculator usage:

? \p 212
   realprecision = 221 significant digits
? sin(x)
time = 5 ms.
%3 = x - 1/6*x^3 + 1/120*x^5 - 1/5040*x^7 + 1/362880*x^9 - 1/39916800*x^11 + 1/6227020800*x^13 - 1/1307674368000*x^15 + O(x^17)
? K = bnfinit(x^2 + 23);
? K.cyc
time = 1 ms.
%4 = [3]   /* This number field has class number 3. */
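The rational coefficients in the %3 output above can be reproduced in plain Python with exact arithmetic. This illustrates the Taylor series of sin(x), whose coefficient of x^(2k+1) is (-1)^k / (2k+1)!, not PARI's internal representation:

```python
from fractions import Fraction
from math import factorial

# Coefficient of x^(2k+1) in the Taylor series of sin(x): (-1)^k / (2k+1)!
coeffs = [Fraction((-1) ** k, factorial(2 * k + 1)) for k in range(8)]

print(coeffs[1])  # -1/6
print(coeffs[7])  # -1/1307674368000  (the x^15 term shown by gp)
```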
31.
Nicolas Bourbaki
–
With the goal of grounding all of mathematics on set theory, the group strove for rigour and generality. Their work led to the discovery of several concepts and terminologies still in use. In 1934, young French mathematicians from various French universities felt the need to form a group to jointly produce textbooks that they could all use for teaching. André Weil organized the first meeting on 10 December 1934 in the basement of a Parisian grill room. Bourbaki's main work is the Éléments de mathématique (Elements of Mathematics) series. This series aims to be a completely self-contained treatment of the core areas of modern mathematics. Assuming no special knowledge of mathematics, it takes up mathematics from the beginning and proceeds axiomatically. The volume on spectral theory from 1967 was for almost four decades the last new book to be added to the series; after that, several new chapters to existing books as well as revised editions of existing chapters appeared, until the publication of chapters 8-9 of Commutative Algebra in 1983. Then a long break in publishing activity occurred, leading many to suspect the end of the publishing project. However, chapter 10 of Commutative Algebra appeared in 1998, and after another long break a completely rewritten and expanded chapter 8 of Algèbre was published in 2012. More importantly, the first four chapters of a new book on algebraic topology were published in 2016. Besides the Éléments de mathématique series, lectures from the Séminaire Bourbaki have also been published in monograph form since 1948. Notations introduced by Bourbaki include the symbol ∅ for the empty set, the "dangerous bend" symbol ☡, and the terms injective, surjective, and bijective. The emphasis on rigour may be seen as a reaction to the work of Henri Poincaré. The impact of Bourbaki's work initially was great on many active research mathematicians worldwide. 
For example: "Our time is witnessing the creation of a monumental work." It provoked some hostility, too, mostly on the side of classical analysts, who approved of rigour but not of high abstraction. This led to a gulf with the way theoretical physics was practiced. Bourbaki's direct influence has decreased over time. This is partly because certain concepts which are now important, such as the machinery of category theory, are not covered in the treatise. It also mattered that, while especially algebraic structures can be defined in Bourbaki's terms, not all areas of mathematics suit such a treatment. On the other hand, the approach and rigour advocated by Bourbaki have permeated current mathematical practice to such an extent that the task undertaken may be considered complete; this is particularly true for the less applied parts of mathematics. The Bourbaki seminar series founded in post-WWII Paris continues; it has been going on since 1948. It is an important source of survey articles, with sketches of proofs; the topics range through all branches of mathematics, including sometimes theoretical physics.