1.
Simpson's rule
–
In numerical analysis, Simpson's rule is a method for numerical integration, the numerical approximation of definite integrals. Specifically, it is the approximation ∫ab f(x) dx ≈ ((b − a)/6)[f(a) + 4f((a + b)/2) + f(b)]. For unequally spaced points, see Cartwright. Simpson's rule also corresponds to the three-point Newton–Cotes quadrature rule. The method is credited to the mathematician Thomas Simpson of Leicestershire, although Kepler used similar formulas over 100 years prior. For this reason the method is sometimes called Kepler's rule, or Keplersche Fassregel in German. Simpson's rule can be derived in various ways. One derivation replaces the integrand f by the quadratic polynomial P which takes the same values as f at the end points a and b and at the midpoint (a + b)/2. One can use Lagrange polynomial interpolation to find an expression for this polynomial, and an easy integration by substitution shows that ∫ab P(x) dx = ((b − a)/6)[f(a) + 4f((a + b)/2) + f(b)]. This calculation can be carried out more easily if one first observes that there is no loss of generality in assuming that a = −1 and b = 1. Another derivation constructs Simpson's rule from two simpler approximations: the midpoint rule M = (b − a) f((a + b)/2) and the trapezoidal rule T = ((b − a)/2)[f(a) + f(b)]. The errors in these approximations are −(1/24)(b − a)³ f″(a) + O((b − a)⁴) and (1/12)(b − a)³ f″(a) + O((b − a)⁴), respectively; the two O((b − a)⁴) terms are not equal (see Big O notation for more details). It follows from the formulas for the errors of the midpoint and trapezoidal rules that the leading error terms cancel in the weighted average (2M + T)/3. This weighted average is exactly Simpson's rule. Using another approximation, it is possible to take a suitable weighted average and eliminate another error term. The third derivation starts from the ansatz (1/(b − a)) ∫ab f(x) dx ≈ α f(a) + β f((a + b)/2) + γ f(b); the coefficients α, β and γ can be fixed by requiring that this approximation be exact for all quadratic polynomials. The error in approximating an integral by Simpson's rule is (1/90) h⁵ |f⁗(ξ)| for some ξ between a and b, where h = (b − a)/2; the error is asymptotically proportional to (b − a)⁵.
However, the above derivations suggest an error proportional to (b − a)⁴; Simpson's rule gains an extra order because the points at which the integrand is evaluated are distributed symmetrically in the interval. If the interval of integration [a, b] is in some sense small, then Simpson's rule will provide an adequate approximation to the exact integral. By small, what we mean is that the function being integrated is relatively smooth over the interval. For such a function, a smooth quadratic interpolant like the one used in Simpson's rule will give good results. However, it is often the case that the function we are trying to integrate is not smooth over the interval.
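The single-interval rule and its derivation as the weighted average (2M + T)/3 can be sketched directly; the following is a minimal Python illustration (the function names are our own, not from any library):

```python
def simpson(f, a, b):
    """Simpson's rule on one interval: (b - a)/6 * (f(a) + 4*f(midpoint) + f(b))."""
    return (b - a) / 6.0 * (f(a) + 4.0 * f((a + b) / 2.0) + f(b))

def midpoint(f, a, b):
    """Midpoint rule M = (b - a) * f((a + b)/2)."""
    return (b - a) * f((a + b) / 2.0)

def trapezoid(f, a, b):
    """Trapezoidal rule T = (b - a)/2 * (f(a) + f(b))."""
    return (b - a) / 2.0 * (f(a) + f(b))

# Simpson's rule equals the weighted average (2M + T)/3,
# and is exact for cubics: the integral of x^3 over [0, 2] is 4.
f = lambda x: x ** 3
print(simpson(f, 0.0, 2.0))                                      # 4.0
print((2 * midpoint(f, 0.0, 2.0) + trapezoid(f, 0.0, 2.0)) / 3)  # 4.0
```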
2.
Trapezoidal rule
–
In mathematics, and more specifically in numerical analysis, the trapezoidal rule is a technique for approximating the definite integral ∫ab f(x) dx. The trapezoidal rule works by approximating the region under the graph of the function f as a trapezoid, and it follows that ∫ab f(x) dx ≈ (b − a)·(f(a) + f(b))/2. A 2016 paper reports that the rule was in use in Babylon before 50 BC for integrating the velocity of Jupiter along the ecliptic. The trapezoidal rule is one of a family of formulas for numerical integration called Newton–Cotes formulas; however, for various classes of rougher functions, the trapezoidal rule has faster convergence in general than Simpson's rule. Moreover, the trapezoidal rule tends to become extremely accurate when periodic functions are integrated over their periods. For a domain discretized into N equally spaced panels, or N + 1 grid points a = x1 < x2 < … < xN+1 = b, where the spacing is h = (b − a)/N, the approximation to the integral becomes ∫ab f(x) dx ≈ (h/2) Σk=1..N (f(xk) + f(xk+1)) = ((b − a)/(2N)) (f(x1) + 2f(x2) + … + 2f(xN) + f(xN+1)). When the grid spacing is non-uniform, one can use the formula ∫ab f(x) dx ≈ (1/2) Σk=1..N (xk+1 − xk)(f(xk) + f(xk+1)). For a concave-up function the rule yields an overestimate; this can also be seen from the geometric picture, since the trapezoids include all of the area under the curve and extend over it. Similarly, a concave-down function yields an underestimate because area is unaccounted for under the curve; if the interval of the integral being approximated includes an inflection point, the error is harder to identify. Further terms in this error estimate are given by the Euler–Maclaurin summation formula, and it is argued that the speed of convergence of the trapezoidal rule reflects, and can be used as a definition of, classes of smoothness of the functions. The trapezoidal rule often converges very quickly for periodic functions: in the error formula above, f′(a) = f′(b), and only the O term remains. More detailed analysis can be found in the literature; for various classes of functions that are not twice-differentiable, the trapezoidal rule has sharper bounds than Simpson's rule.
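The composite formulas above, for both the uniform and the non-uniform grid, can be sketched as follows (the function names are our own):

```python
def trapezoid_uniform(f, a, b, n):
    """Composite trapezoidal rule with n equal panels of width h = (b - a)/n:
    h/2 * (f(x1) + 2 f(x2) + ... + 2 f(xN) + f(xN+1))."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for k in range(1, n):        # interior grid points carry weight 2 * (h/2)
        total += f(a + k * h)
    return h * total

def trapezoid_nonuniform(f, xs):
    """Trapezoidal rule on an arbitrary increasing grid xs:
    1/2 * sum over panels of (x_{k+1} - x_k)(f(x_k) + f(x_{k+1}))."""
    return 0.5 * sum((xs[k + 1] - xs[k]) * (f(xs[k]) + f(xs[k + 1]))
                     for k in range(len(xs) - 1))

# The integral of x^2 over [0, 1] is 1/3; the error shrinks like 1/N^2.
print(trapezoid_uniform(lambda x: x * x, 0.0, 1.0, 1000))
```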
3.
Calculus
–
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. It has two branches, differential calculus and integral calculus; these two branches are related to each other by the fundamental theorem of calculus. Both branches make use of the notions of convergence of infinite sequences. Generally, modern calculus is considered to have been developed in the 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Today, calculus has widespread uses in science, engineering and economics. Calculus is a part of modern mathematics education. A course in calculus is a gateway to other, more advanced courses in mathematics devoted to the study of functions and limits. Calculus has historically been called the calculus of infinitesimals, or infinitesimal calculus. Calculus is also used for naming some methods of calculation or theories of computation, such as propositional calculus, calculus of variations, and lambda calculus. The ancient period introduced some of the ideas that led to integral calculus. The method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD in order to find the area of a circle. In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be called Cavalieri's principle to find the volume of a sphere. Indian mathematicians gave a non-rigorous method of a sort of differentiation of some trigonometric functions. In the Middle East, Alhazen derived a formula for the sum of fourth powers. He used the results to carry out what would now be called an integration. Cavalieri's work was not well respected since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.
The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory. In other work, Newton developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, and at this time infinitesimal methods were considered disreputable. These ideas were arranged into a calculus of infinitesimals by Gottfried Wilhelm Leibniz. He is now regarded as an independent inventor of and contributor to calculus. Unlike Newton, Leibniz paid a lot of attention to the formalism, often spending days determining appropriate symbols for concepts. Leibniz and Newton are usually both credited with the invention of calculus. Newton was the first to apply calculus to general physics, and Leibniz developed much of the notation used in calculus today.
4.
Fundamental theorem of calculus
–
The fundamental theorem of calculus is a theorem that links the concept of the derivative of a function with the concept of the function's integral. The first part of the theorem guarantees the existence of antiderivatives for continuous functions. The second part of the theorem has key practical applications, because it simplifies the computation of definite integrals. The fundamental theorem of calculus relates differentiation and integration, showing that these two operations are essentially inverses of one another. Before the discovery of this theorem, it was not recognized that these two operations were related. Ancient Greek mathematicians knew how to compute area via infinitesimals, an operation that we would now call integration. The first published statement and proof of a rudimentary form of the fundamental theorem, strongly geometric in character, was by James Gregory. Isaac Barrow proved a more generalized version of the theorem, while his student Isaac Newton completed the development of the surrounding mathematical theory. Gottfried Leibniz systematized the knowledge into a calculus for infinitesimal quantities. For a continuous function y = f(x) whose graph is plotted as a curve, each value of x has a corresponding area function A(x), representing the area beneath the curve between 0 and x. The function A(x) may not be known, but it is given that it represents the area under the curve. The area under the curve between x and x + h could be computed by finding the area between 0 and x + h, then subtracting the area between 0 and x. In other words, the area of this "sliver" would be A(x + h) − A(x). There is another way to estimate the area of this same sliver. As shown in the accompanying figure, h is multiplied by f(x) to find the area of a rectangle that is approximately the same size as this sliver. So, A(x + h) − A(x) ≈ f(x)·h. In fact, this becomes a perfect equality if we add the red portion of the excess area shown in the diagram.
So, A(x + h) − A(x) = f(x)·h + (Red Excess). Rearranging terms, f(x) = (A(x + h) − A(x))/h − (Red Excess)/h. As h approaches 0 in the limit, the last fraction can be shown to go to zero. This is true because the area of the red portion of the excess region is less than or equal to the area of the tiny black-bordered rectangle. More precisely, |f(x) − (A(x + h) − A(x))/h| = |Red Excess|/h ≤ (h·|f(x + h) − f(x)|)/h = |f(x + h) − f(x)|. By the continuity of f, the latter expression tends to zero as h does. Therefore, the left-hand side tends to zero as h does as well; that is, the derivative of the area function A(x) exists and is the original function f(x), so the area function is simply an antiderivative of the original function. Computing the derivative of a function and "finding the area" under its curve are opposite operations, and this is the crux of the Fundamental Theorem of Calculus. Intuitively, the theorem states that the sum of infinitesimal changes in a quantity over time adds up to the net change in the quantity.
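The sliver argument can be checked numerically: differentiating a computed area function recovers the integrand. The following is a small sketch under our own choices (the helper names `area` and `deriv` are ours, the integral is approximated by a midpoint Riemann sum, and the derivative by a central difference):

```python
def area(f, x, n=10000):
    """Midpoint Riemann-sum approximation of A(x), the area under f from 0 to x."""
    h = x / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

def deriv(g, x, h=1e-5):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2.0 * h)

# The derivative of the area function recovers the integrand: A'(x) = f(x).
f = lambda t: t * t + 1.0
x = 1.5
print(deriv(lambda s: area(f, s), x))  # close to f(1.5) = 3.25
```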
5.
Limit of a function
–
In mathematics, the limit of a function is a fundamental concept in calculus and analysis concerning the behavior of that function near a particular input. Formal definitions, first devised in the early 19th century, are given below. Informally, a function f assigns an output f(x) to every input x. We say the function has a limit L at an input p when f(x) gets closer and closer to L as x moves closer and closer to p; more specifically, when f is applied to any input sufficiently close to p, the output value is forced arbitrarily close to L. On the other hand, if some inputs very close to p are taken to outputs that stay a fixed distance apart, then we say the limit does not exist. The notion of a limit has many applications in modern calculus. In particular, the many definitions of continuity employ the limit: roughly, a function is continuous if all of its limits agree with the values of the function. It also appears in the definition of the derivative in the calculus of one variable. Bolzano gave an essentially modern definition in 1817; however, his work was not known during his lifetime. Weierstrass first introduced the definition of limit in the form it is usually written today. He also introduced the notations lim and lim x→x0; the modern notation of placing the arrow below the limit symbol is due to Hardy in his book A Course of Pure Mathematics in 1908. Imagine a person walking over a landscape represented by the graph of y = f(x). Her horizontal position is measured by the value of x, much like the position given by a map of the land or by a global positioning system. Her altitude is given by the coordinate y. She is walking towards the horizontal position given by x = p. As she gets closer and closer to it, she notices that her altitude approaches L. If asked about the altitude at x = p, she would then answer L. What, then, does it mean to say that her altitude approaches L? It means that her altitude gets nearer and nearer to L, except for a possible small error in accuracy. For example, suppose we set a particular accuracy goal for our traveler: she must get within ten meters of L.
She reports back that indeed she can get within ten vertical meters of L, since she notes that when she is within fifty horizontal meters of p, her altitude is always within ten meters of L. The accuracy goal is then changed: can she get within one vertical meter? Yes: if she is anywhere within seven horizontal meters of p, then her altitude always remains within one meter from the target L. This explicit statement is quite close to the formal definition of the limit of a function with values in a topological space. To say that lim x→p f(x) = L means that f(x) can be made as close as desired to L by making x close enough to p. The following definitions are the generally accepted ones for the limit of a function in various contexts. Suppose f : R → R is defined on the real line. The value of the limit does not depend on the value of f(p), nor even on p being in the domain of f.
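The traveler's game is exactly the epsilon-delta definition: for a given vertical accuracy ε, one must produce a horizontal tolerance δ. A small sketch that checks a proposed δ by sampling the punctured interval (the helper `check_delta` and the sampling scheme are our own illustration, not a proof):

```python
def check_delta(f, p, L, eps, delta, samples=1000):
    """Sample points with 0 < |x - p| < delta and test |f(x) - L| < eps."""
    for i in range(1, samples + 1):
        dx = delta * i / (samples + 1)     # strictly inside the interval
        for x in (p - dx, p + dx):
            if abs(f(x) - L) >= eps:
                return False
    return True

# f(x) = 2x + 1 has limit 7 at p = 3; delta = eps/2 works, delta = eps does not.
f = lambda x: 2.0 * x + 1.0
print(check_delta(f, 3.0, 7.0, eps=0.1, delta=0.05))  # True
print(check_delta(f, 3.0, 7.0, eps=0.1, delta=0.1))   # False
```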
6.
Continuous function
–
In mathematics, a continuous function is a function for which sufficiently small changes in the input result in arbitrarily small changes in the output. Otherwise, a function is said to be a discontinuous function. A continuous function with a continuous inverse function is called a homeomorphism. Continuity of functions is one of the core concepts of topology. The introductory portion of this article focuses on the special case where the inputs and outputs of functions are real numbers. In addition, this article discusses the definition for the more general case of functions between two metric spaces. In order theory, especially in domain theory, one considers a notion of continuity known as Scott continuity. Other forms of continuity do exist, but they are not discussed in this article. As an example, consider the function h(t), which describes the height of a growing flower at time t; this function is continuous. By contrast, if M(t) denotes the amount of money in an account at time t, then the function jumps at each point in time when money is deposited or withdrawn. A form of the definition of continuity was first given by Bernard Bolzano in 1817. Cauchy defined infinitely small quantities in terms of variable quantities. The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of continuity in 1872. The intuitive idea that a continuous function is one whose graph can be drawn without lifting the pencil is not a definition of continuity, since the function f(x) = 1/x is continuous on its whole domain of R ∖ {0}, yet its graph cannot be drawn in one stroke. A function is continuous at a point if it does not have a hole or jump there. A "hole" or "jump" appears in the graph of a function if the value of the function at a point c differs from its limiting value along points that are nearby.
Such a point is called a discontinuity. A function is then continuous if it has no holes or jumps, that is, if it is continuous at every point of its domain. Otherwise, a function is discontinuous at the points where its value differs from its limiting value. There are several ways to make this definition mathematically rigorous. These definitions are equivalent to one another, so the most convenient definition can be used to determine whether a given function is continuous or not. In the definitions below, f : I → R is a function defined on a subset I of the set R of real numbers. This subset I is referred to as the domain of f.
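A jump discontinuity like the bank-account example can be detected numerically by comparing values just to the left and right of a point. A small sketch (the helper `gap` and the step size h are our own choices):

```python
import math

def gap(f, c, h=1e-9):
    """Approximate the jump f(c+) - f(c-) across a point c."""
    return f(c + h) - f(c - h)

step = lambda x: math.floor(x)   # jumps by 1 at every integer
print(abs(gap(step, 1.0)))       # 1: a jump discontinuity at c = 1
print(abs(gap(step, 0.5)))       # 0: continuous at c = 0.5
```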
7.
Mean value theorem
–
This theorem is used to prove statements about a function on an interval starting from local hypotheses about derivatives at points of the interval. More precisely, if a function f is continuous on the closed interval [a, b] and differentiable on the open interval (a, b), then there is a point in (a, b) at which the tangent is parallel to the secant through the endpoints. It is one of the most important results in real analysis. A special case of this theorem was first described by Parameshvara, from the Kerala school of astronomy and mathematics in India, in his commentaries on Govindasvāmi and Bhāskara II. A restricted form of the theorem was proved by Rolle in 1691; the result was what is now known as Rolle's theorem. The mean value theorem in its modern form was stated and proved by Cauchy in 1823. Let f : [a, b] → R be a continuous function on the closed interval [a, b], differentiable on the open interval (a, b). Then there exists c in (a, b) such that f′(c) = (f(b) − f(a))/(b − a). The mean value theorem is a generalization of Rolle's theorem, which assumes f(a) = f(b). The mean value theorem is still valid in a slightly more general setting. One only needs to assume that f : [a, b] → R is continuous on [a, b], and that for every x in (a, b) the limit lim h→0 (f(x + h) − f(x))/h exists as a finite number or equals +∞ or −∞. If finite, that limit equals f′(x). An example where this version of the theorem applies is given by the cube root function mapping x ↦ x^(1/3). Note that the theorem, as stated, is false if a differentiable function is complex-valued instead of real-valued. For example, define f(x) = e^(xi) for all real x. Then f(2π) − f(0) = 0 = 0·(2π − 0), while f′(x) ≠ 0 for any real x. Thus the mean value theorem says that, given any chord of a smooth curve, we can find a point between the endpoints of the chord at which the tangent is parallel to the chord; the following proof illustrates this idea. Define g(x) = f(x) − rx, where r is a constant. Since f is continuous on [a, b] and differentiable on (a, b), the same is true for g. We now want to choose r so that g satisfies the conditions of Rolle's theorem. Assume that f is a continuous, real-valued function, defined on an arbitrary interval I of the real line. If the derivative of f at every point of the interval I exists and is zero, then f is constant on I. Proof: Assume the derivative of f at every point of the interval I exists and is zero. Let (a, b) be an open interval in I.
By the mean value theorem, there exists a point c in (a, b) such that 0 = f′(c) = (f(b) − f(a))/(b − a), so f(a) = f(b). Thus, f is constant on the interior of I, and thus is constant on I by continuity.
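Under the stated hypotheses a point c with f′(c) = (f(b) − f(a))/(b − a) exists; it can be located numerically when f′ − slope changes sign. The following is a sketch under that extra assumption (the helper name `mvt_point` is ours, and the sign-change condition is a requirement of this sketch, not of the theorem):

```python
def mvt_point(f, df, a, b, tol=1e-10):
    """Bisection search for c in (a, b) with df(c) = (f(b) - f(a))/(b - a),
    assuming df(x) - slope changes sign exactly once on (a, b)."""
    slope = (f(b) - f(a)) / (b - a)
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (df(lo) - slope) * (df(mid) - slope) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# f(x) = x^3 on [0, 2]: the secant slope is 4, so c = 2/sqrt(3) ≈ 1.1547.
c = mvt_point(lambda x: x ** 3, lambda x: 3.0 * x * x, 0.0, 2.0)
print(c)
```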
8.
Derivative
–
The derivative of a function of a real variable measures the sensitivity to change of the function value with respect to a change in its argument. Derivatives are a fundamental tool of calculus. For example, the derivative of the position of an object with respect to time is the object's velocity. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. Derivatives may be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables, and it can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. The process of finding a derivative is called differentiation; the reverse process is called antidifferentiation. The fundamental theorem of calculus states that antidifferentiation is the same as integration. Differentiation and integration constitute the two fundamental operations in single-variable calculus. Differentiation is the action of computing a derivative. The derivative of a function y = f(x) of a variable x is a measure of the rate at which the value y of the function changes with respect to the change of the variable x. It is called the derivative of f with respect to x. If x and y are real numbers, and if the graph of f is plotted against x, the derivative is the slope of this graph at each point.
The simplest case, apart from the case of a constant function, is when y is a linear function of x, y = mx + b, whose slope is m. This formula is true because y + Δy = f(x + Δx) = m(x + Δx) + b = mx + mΔx + b = y + mΔx. Thus, since y + Δy = y + mΔx, it follows that Δy = mΔx, so Δy/Δx = m. This gives an exact value for the slope of a line. If the function f is not linear, however, then the change in y divided by the change in x varies; differentiation is a method to find an exact value for this rate of change at any given value of x. The idea, illustrated by Figures 1 to 3, is to compute the rate of change as the limiting value of the ratio of the differences Δy/Δx as Δx becomes infinitely small.
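The limiting process can be watched directly by shrinking Δx in the difference quotient. A minimal sketch (the helper name `slope` is ours):

```python
def slope(f, x, dx):
    """Difference quotient (f(x + dx) - f(x)) / dx, i.e. dy/dx before the limit."""
    return (f(x + dx) - f(x)) / dx

# For f(x) = x^2 at x = 1, the quotient tends to f'(1) = 2 as dx shrinks.
f = lambda x: x * x
for dx in (0.1, 0.01, 0.001):
    print(slope(f, 1.0, dx))   # approaches 2 (values near 2.1, 2.01, 2.001)
```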
9.
Product rule
–
In calculus, the product rule is a formula used to find the derivatives of products of two or more functions. It may be stated as (f·g)′ = f′·g + f·g′, or in the Leibniz notation d(uv)/dx = u·(dv/dx) + v·(du/dx). In differentials notation, this can be written as d(uv) = u dv + v du. Discovery of this rule is credited to Gottfried Leibniz, who demonstrated it using differentials. Here is Leibniz's argument: let u and v be two functions of x. Then the differential of uv is d(uv) = (u + du)·(v + dv) − u·v = u·dv + v·du + du·dv. Since the term du·dv is negligible (being of second order), Leibniz concluded that d(uv) = v·du + u·dv. Suppose we want to differentiate f(x) = x² sin(x). By using the product rule, one gets the derivative f′(x) = 2x sin(x) + x² cos(x). A special case is the constant multiple rule, (c·f)′ = c·f′; this follows from the product rule since the derivative of any constant is zero. This, combined with the sum rule for derivatives, shows that differentiation is linear. The rule for integration by parts is derived from the product rule, as is the quotient rule. Let h(x) = f(x)g(x), and suppose that f and g are each differentiable at x. We want to prove that h is differentiable at x and that its derivative h′(x) is given by f′(x)g(x) + f(x)g′(x). To do this, a term f(x)g(x + Δx) is added and subtracted in the numerator to permit its factoring. A rigorous proof of the product rule can be given using the definition of the derivative as a limit and the basic properties of limits. Let h(x) = f(x)g(x), and suppose that f and g are each differentiable at x0. We want to prove that h is differentiable at x0 and that its derivative h′(x0) is given by f′(x0)g(x0) + f(x0)g′(x0). Let Δh = h(x0 + Δx) − h(x0); note that although x0 is fixed, Δh depends on the value of Δx, which is thought of as being small. The function h is differentiable at x0 if the limit lim Δx→0 Δh/Δx exists. As with Δh, let Δf = f(x0 + Δx) − f(x0) and Δg = g(x0 + Δx) − g(x0), which, like Δh, also depend on Δx. Then f(x0 + Δx) = f(x0) + Δf and g(x0 + Δx) = g(x0) + Δg. Using the basic properties of limits and the definition of the derivative, we can tackle this term by term. First, lim Δx→0 (Δf/Δx)·g(x0) = f′(x0)·g(x0); similarly, lim Δx→0 f(x0)·(Δg/Δx) = f(x0)·g′(x0).
The third term, corresponding to the small rectangle, winds up being negligible because Δf·Δg vanishes to second order. Equivalently, f(x + h)g(x + h) − f(x)g(x) = f′(x)g(x)·h + f(x)g′(x)·h + o(h), and taking the limit for small h gives the result. For another derivation, let f(x) = u(x)v(x) and suppose u and v are positive functions of x.
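The rule can be cross-checked against a numerical derivative, using the worked example f(x) = x² sin(x) from above. A small sketch (the helper name `product_derivative` is ours):

```python
import math

def product_derivative(f, df, g, dg):
    """Return x -> f'(x)g(x) + f(x)g'(x), per the product rule."""
    return lambda x: df(x) * g(x) + f(x) * dg(x)

# h(x) = x^2 sin(x), so the product rule gives h'(x) = 2x sin(x) + x^2 cos(x).
h_prime = product_derivative(lambda x: x * x, lambda x: 2.0 * x,
                             math.sin, math.cos)

# Compare with a forward difference quotient of h itself.
x, eps = 1.3, 1e-6
numeric = ((x + eps) ** 2 * math.sin(x + eps) - x * x * math.sin(x)) / eps
print(h_prime(x), numeric)   # the two values agree closely
```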