1.
Calculus
–
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. It has two branches, differential calculus and integral calculus, which are related to each other by the fundamental theorem of calculus. Both branches make use of the notion of convergence of infinite sequences. Modern calculus is generally considered to have been developed in the 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Today, calculus has widespread uses in science, engineering and economics, and it is a part of modern mathematics education. A course in calculus is a gateway to other, more advanced courses in mathematics devoted to the study of functions and limits. Calculus has historically been called the calculus of infinitesimals, or infinitesimal calculus. The word is also used for naming some methods of calculation or theories of computation, such as propositional calculus, the calculus of variations and the lambda calculus. The ancient period introduced some of the ideas that led to integral calculus; the method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD in order to find the area of a circle. In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be called Cavalieri's principle to find the volume of a sphere. Indian mathematicians gave a non-rigorous method of a sort of differentiation of some trigonometric functions. In the Middle East, Alhazen derived a formula for the sum of fourth powers and used the results to carry out what would now be called an integration. Cavalieri's work was not well respected, since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first.
The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory. In other work, Newton developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, because at this time infinitesimal methods were considered disreputable. These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who is now regarded as an independent inventor of and contributor to calculus. Unlike Newton, Leibniz paid a lot of attention to the formalism, often spending days determining appropriate symbols for concepts. Leibniz and Newton are usually both credited with the invention of calculus: Newton was the first to apply calculus to general physics, and Leibniz developed much of the notation used in calculus today.
2.
Mean value theorem
–
This theorem is used to prove statements about a function on an interval starting from local hypotheses about derivatives at points of the interval. More precisely, if a function f is continuous on the closed interval [a, b] and differentiable on the open interval (a, b), then there is a point where the tangent to the graph is parallel to the secant line through the endpoints; it is one of the most important results in real analysis. A special case of this theorem was first described by Parameshvara, from the Kerala school of astronomy and mathematics in India, in his commentaries on Govindasvāmi and Bhāskara II. A restricted form of the theorem was proved by Rolle in 1691; the result was what is now known as Rolle's theorem. The mean value theorem in its modern form was stated and proved by Cauchy in 1823. Formally: let f : [a, b] → R be a function continuous on the closed interval [a, b] and differentiable on the open interval (a, b). Then there exists c in (a, b) such that f′(c) = (f(b) − f(a)) / (b − a). The mean value theorem is a generalization of Rolle's theorem, which assumes f(a) = f(b). The mean value theorem is still valid in a slightly more general setting: one only needs to assume that f : [a, b] → R is continuous on [a, b] and that the limit defining the derivative exists at each interior point, either as a finite number or as ±∞; if finite, that limit equals f′(x). An example where this version of the theorem applies is given by the cube root function mapping x ↦ x^(1/3), whose derivative tends to infinity at the origin. Note that the theorem, as stated, is false if a differentiable function is complex-valued instead of real-valued. For example, define f(x) = e^(xi) for all real x. Then f(2π) − f(0) = 0, while f′(x) ≠ 0 for any real x. Thus the mean value theorem says that, given any chord of a smooth curve, we can find a point between the endpoints where the tangent is parallel to the chord; the following proof illustrates this idea. Define g(x) = f(x) − rx, where r is a constant. Since f is continuous on [a, b] and differentiable on (a, b), the same is true for g. We now want to choose r so that g satisfies the conditions of Rolle's theorem. Now assume that f is a continuous, real-valued function defined on an arbitrary interval I of the real line. If the derivative of f at every interior point of the interval I exists and is zero, then f is constant on I. Proof: assume the derivative of f at every interior point of the interval I exists and is zero. Let (a, b) be an open interval in I.
By the mean value theorem, there exists a point c in (a, b) such that 0 = f′(c) = (f(b) − f(a)) / (b − a). Hence f(a) = f(b), so f is constant on the interior of I and thus is constant on I by continuity.
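The statement f′(c) = (f(b) − f(a)) / (b − a) can be checked numerically. The sketch below (an illustrative bisection, assuming g(x) = f′(x) − slope changes sign on the interval, which it need not in general) locates such a point c for f(x) = x³ on [0, 1], where the secant slope is 1 and the theorem predicts c = 1/√3.

```python
def mean_value_point(f, df, a, b, tol=1e-12):
    """Find c in (a, b) with df(c) equal to the secant slope of f on [a, b].

    A minimal sketch using bisection: it assumes df(x) - slope changes
    sign between a and b, which holds for this example but not always.
    """
    slope = (f(b) - f(a)) / (b - a)   # the slope the MVT guarantees is attained
    g = lambda x: df(x) - slope
    lo, hi = a, b
    if g(lo) > 0:                     # orient so g(lo) <= 0 <= g(hi)
        lo, hi = hi, lo
    while abs(hi - lo) > tol:
        mid = (lo + hi) / 2
        if g(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# f(x) = x**3 on [0, 1]: secant slope is 1, so 3*c**2 = 1 and c = 1/sqrt(3)
c = mean_value_point(lambda x: x**3, lambda x: 3 * x**2, 0.0, 1.0)
```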
3.
Integral
–
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse, differentiation, being the other. The area above the x-axis adds to the total and that below the x-axis subtracts from the total. Roughly speaking, the operation of integration is the reverse of differentiation; for this reason, the term integral may also refer to the related notion of the antiderivative, in which case it is called an indefinite integral. The integrals discussed in this article are those termed definite integrals. A rigorous mathematical definition of the integral was given by Bernhard Riemann; it is based on a procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. A line integral is defined for functions of two or three variables, and the interval of integration [a, b] is replaced by a curve connecting two points on the plane or in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. The method of exhaustion was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. A similar method was developed in China around the 3rd century AD by Liu Hui, and it was used in the 5th century by the Chinese father-and-son mathematicians Zu Chongzhi and Zu Gengzhi. The next significant advances in integral calculus did not begin to appear until the 17th century. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation; Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers.
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation, and this connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed.
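The "thin vertical slabs" procedure behind Riemann's definition can be sketched in a few lines. The midpoint-rule sum below is a minimal numerical illustration (not the full Riemann construction): for f(x) = x² on [0, 1] it approaches the exact value 1/3, which the fundamental theorem delivers instantly from the antiderivative x³/3.

```python
def riemann_sum(f, a, b, n):
    """Approximate the definite integral of f on [a, b] by n midpoint slabs."""
    h = (b - a) / n                       # width of each slab
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# integral of x**2 from 0 to 1; the antiderivative x**3 / 3 gives exactly 1/3
approx = riemann_sum(lambda x: x**2, 0.0, 1.0, 1000)
```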
4.
Taylor's theorem
–
In calculus, Taylor's theorem gives an approximation of a k-times differentiable function around a given point by a k-th order Taylor polynomial. For analytic functions, the Taylor polynomials at a given point are finite-order truncations of its Taylor series. The exact content of Taylor's theorem is not universally agreed upon; indeed, there are several versions of it applicable in different situations, and some of them contain explicit estimates on the approximation error of the function by its Taylor polynomial. Taylor's theorem is named after the mathematician Brook Taylor, who stated a version of it in 1712, yet an explicit expression of the error was not provided until much later on by Joseph-Louis Lagrange. An earlier version of the result was already mentioned in 1671 by James Gregory. Taylor's theorem is taught in introductory-level calculus courses and is one of the central elementary tools in mathematical analysis. Within pure mathematics it is the starting point of more advanced asymptotic analysis. Taylor's theorem also generalizes to multivariate and vector-valued functions f : R^n → R^m in any dimensions n and m, and this generalization is the basis for the definition of so-called jets, which appear in differential geometry and partial differential equations. If a real-valued function f is differentiable at the point a, then it has a linear approximation at the point a. This means that there exists a function h1 such that f(x) = f(a) + f′(a)(x − a) + h1(x)(x − a), with lim x→a h1(x) = 0. Here P1(x) = f(a) + f′(a)(x − a) is the linear approximation of f at the point a. The graph of y = P1(x) is the tangent line to the graph of f at x = a, and the error in the approximation is R1(x) = f(x) − P1(x) = h1(x)(x − a). Note that this goes to zero a little bit faster than x − a as x tends to a. If we wanted a better approximation to f, we might try a quadratic polynomial instead of a linear function.
Instead of just matching one derivative of f at a, we can match two derivatives, thus producing a polynomial that has the same slope and concavity as f at a. The quadratic polynomial in question is P2(x) = f(a) + f′(a)(x − a) + (f″(a)/2)(x − a)². Taylor's theorem ensures that the quadratic approximation is, in a sufficiently small neighborhood of the point a, a better approximation than the linear approximation. Specifically, f(x) = P2(x) + h2(x)(x − a)², with lim x→a h2(x) = 0. Here the error in the approximation is R2(x) = f(x) − P2(x) = h2(x)(x − a)², which, given the limiting behavior of h2, goes to zero faster than (x − a)² as x tends to a. Similarly, we might get still better approximations to f if we use polynomials of higher degree. In general, the error in approximating a function by a polynomial of degree k will go to zero a little bit faster than (x − a)^k as x tends to a. A typical exercise is to find the smallest degree k for which the polynomial Pk approximates f to within a given error on a given interval.
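The claim that higher-degree Taylor polynomials give better approximations can be illustrated directly. The sketch below builds the k-th order Taylor polynomial of exp about a = 0 (chosen here because all its derivatives are known) and shows the error shrinking as k grows.

```python
import math

def taylor_exp(x, k):
    """k-th order Taylor polynomial of exp about a = 0: sum of x**n / n!."""
    return sum(x**n / math.factorial(n) for n in range(k + 1))

x = 0.5
err1 = abs(math.exp(x) - taylor_exp(x, 1))  # linear approximation P1
err2 = abs(math.exp(x) - taylor_exp(x, 2))  # quadratic approximation P2
# Taylor's theorem predicts err2 < err1 near a = 0, and a degree-10
# polynomial already matches exp(0.5) to many digits.
err10 = abs(math.exp(x) - taylor_exp(x, 10))
```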
5.
Multivariable calculus
–
A study of limits and continuity in multivariable calculus yields many counter-intuitive results not demonstrated by single-variable functions. For example, the function f(x, y) = x²y / (x⁴ + y²) approaches zero along any line through the origin; however, when the origin is approached along the parabola y = x², it has a limit of 0.5. Since taking different paths toward the same point yields different values for the limit, the limit does not exist there. Continuity in each argument separately is also not sufficient for multivariate continuity. For instance, in the case of a function f(x, y) with two real-valued parameters, continuity of f in x for fixed y and continuity of f in y for fixed x does not imply continuity of f. Consider f(x, y) = y/x − y if 1 ≥ x > y ≥ 0; x/y − x if 1 ≥ y > x ≥ 0; 1 − x if x = y > 0; and 0 otherwise. It is easy to verify that all the one-variable functions f_y : x ↦ f(x, y) are continuous in x, and similarly all f_x are continuous, as f is symmetric with regard to x and y. However, f itself is not continuous, as can be seen by considering the sequence f(1/n, 1/n), which should converge to f(0, 0) = 0 if f were continuous; instead, lim n→∞ f(1/n, 1/n) = 1. Thus, the function is not continuous at (0, 0). The partial derivative generalizes the notion of the derivative to higher dimensions: a partial derivative of a function is a derivative with respect to one variable with all other variables held constant. Partial derivatives may be combined in interesting ways to create more complicated expressions of the derivative. In vector calculus, the del operator is used to define the concepts of gradient, divergence and curl. A matrix of partial derivatives, the Jacobian matrix, may be used to represent the derivative of a function between two spaces of arbitrary dimension. The derivative can thus be understood as a linear transformation which varies from point to point in the domain of the function. Differential equations containing partial derivatives are called partial differential equations or PDEs; these equations are generally more difficult to solve than ordinary differential equations.
The multiple integral expands the concept of the integral to functions of any number of variables; double and triple integrals may be used to calculate areas and volumes of regions in the plane and in space. Fubini's theorem guarantees that a multiple integral may be evaluated as a repeated integral, or iterated integral, as long as the integrand is continuous throughout the domain of integration. The surface integral and the line integral are used to integrate over curved manifolds such as surfaces and curves. In single-variable calculus, the fundamental theorem of calculus establishes a link between the derivative and the integral.
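The path-dependence of the limit of f(x, y) = x²y / (x⁴ + y²) described above is easy to witness numerically. The sketch below evaluates f along a line through the origin (where the values tend to 0) and along the parabola y = x² (where the value is identically 1/2).

```python
def f(x, y):
    """The classic path-dependent example: x**2 * y / (x**4 + y**2)."""
    return x**2 * y / (x**4 + y**2)

# along the line y = 3x, values tend to 0 as the point approaches the origin
line_vals = [f(t, 3 * t) for t in (1e-2, 1e-4, 1e-6)]

# along the parabola y = x**2, f(t, t**2) = t**4 / (2 * t**4) = 1/2 exactly
parab_vals = [f(t, t**2) for t in (1e-2, 1e-4, 1e-6)]
```

Since the two approach paths disagree, the two-variable limit at the origin does not exist, even though every straight-line limit is 0.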
6.
Shell integration
–
Shell integration is a means of calculating the volume of a solid of revolution when integrating along an axis perpendicular to the axis of revolution. This is in contrast to disk integration, which integrates along the axis parallel to the axis of revolution. The shell method goes as follows. Consider a volume in three dimensions obtained by rotating a cross-section in the xy-plane around the y-axis, and suppose the cross-section is defined by the graph of the positive function f(x) on the interval [a, b]; then the volume is 2π ∫ x f(x) dx taken from a to b. If the resulting volume is hollow in the middle, we find two functions, one that defines the inner solid and one that defines the outer solid; after integrating each with the shell method, we subtract one result from the other to yield the desired volume. As an example, consider the volume whose cross-section on the interval [1, 2] is defined by y = (x − 1)²(x − 2)². With the shell method, all we need is the formula 2π ∫₁² x (x − 1)²(x − 2)² dx. By expanding the polynomial, the integral becomes very simple, and in the end we find the volume is π/10 cubic units.
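The shell-method formula 2π ∫ x f(x) dx can be checked numerically against the analytic answer π/10 for the example above. The sketch below uses a simple midpoint rule for the integral.

```python
import math

def shell_volume(f, a, b, n=100_000):
    """Shell method: V = 2*pi * integral of x * f(x) over [a, b] (midpoint rule)."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h     # midpoint of the i-th subinterval
        total += x * f(x)
    return 2 * math.pi * total * h

# cross-section y = (x - 1)**2 * (x - 2)**2 on [1, 2], rotated about the y-axis
vol = shell_volume(lambda x: (x - 1)**2 * (x - 2)**2, 1.0, 2.0)
# analytic value: pi / 10
```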
7.
Improper integral
–
Such an integral is often written symbolically just like a standard definite integral, in some cases with infinity as a limit of integration. By abuse of notation, improper integrals are often written symbolically just like standard definite integrals; when the definite integral exists, this ambiguity is resolved, as the proper and improper integral coincide in value. The original definition of the Riemann integral does not apply to a function such as 1/x² on the interval [1, ∞), because the domain of integration is unbounded. The narrow definition of the Riemann integral also does not cover the function 1/√x on the interval (0, 1]; the problem here is that the integrand is unbounded in the domain of integration. However, the integral does exist if understood as the limit ∫₀¹ (1/√x) dx = lim a→0⁺ ∫ₐ¹ (1/√x) dx = lim a→0⁺ 2(1 − √a) = 2. Sometimes integrals may have two singularities where they are improper. Consider, for example, the function 1/((1 + x)√x) integrated from 0 to ∞. At the lower bound, as x goes to 0 the function goes to ∞, and the upper bound is itself ∞; thus this is a doubly improper integral. Integrated, say, from 1 to 3, an ordinary Riemann sum suffices to produce a result of π/6. To integrate from 1 to ∞, a Riemann sum is not possible; however, any finite upper bound, say t, gives a well-defined result, 2 arctan(√t) − π/2. This has a limit as t goes to infinity, namely π/2. Similarly, the integral from 1/3 to 1 allows a Riemann sum as well, and replacing 1/3 by an arbitrary positive value s is equally safe, giving π/2 − 2 arctan(√s). This, too, has a limit as s goes to zero, namely π/2. This process does not guarantee success; a limit might fail to exist. For example, over the bounded interval from 0 to 1 the integral of 1/x does not converge, and over the unbounded interval from 1 to ∞ the integral of 1/√x does not converge. It might also happen that an integrand is unbounded near an interior point, in which case the integral must be split at that point; for the integral as a whole to converge, the limit integrals on both sides must exist and must be bounded.
But the similar integral ∫₋₁¹ dx/x cannot be assigned a value in this way, as the integrals above and below zero do not converge independently. As with the integrals above, an improper integral converges if the limit defining it exists. It is also possible for an improper integral to diverge to infinity, in which case one may assign the value ∞ to the integral; for instance, lim b→∞ ∫₁^b (1/x) dx = ∞. However, other improper integrals may simply diverge in no particular direction, such as lim b→∞ ∫₁^b x sin(x) dx, which does not exist even as an extended real number; this is called divergence by oscillation.
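The limit process defining ∫₀¹ (1/√x) dx = 2 can be followed numerically: compute the proper integral on [a, 1] for shrinking a and watch the values approach 2. The sketch below uses a midpoint rule for each proper integral.

```python
import math

def integral_inv_sqrt(a, n=200_000):
    """Midpoint-rule approximation of the proper integral of 1/sqrt(x) on [a, 1]."""
    h = (1.0 - a) / n
    return sum(1.0 / math.sqrt(a + (i + 0.5) * h) for i in range(n)) * h

# shrink the lower endpoint toward 0: the exact values are 2 * (1 - sqrt(a)),
# which tend to the improper integral's value, 2
vals = [integral_inv_sqrt(a) for a in (1e-2, 1e-4, 1e-6)]
```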
8.
Related rates
–
In differential calculus, related rates problems involve finding a rate at which a quantity changes by relating that quantity to other quantities whose rates of change are known. The rate of change is usually with respect to time. Because science and engineering often relate quantities to each other, the methods of related rates have broad applications in these fields. Because related rates problems involve several variables, differentiation with respect to time or one of the other variables requires application of the chain rule. Fundamentally, if a function F is defined as the composition F(t) = f(x) where x is itself a function of t, i.e. x = g(t), then F(t) = f(g(t)), so F′(t) = f′(g(t)) · g′(t), or in Leibniz notation, dF/dt = (dF/dx)(dx/dt). The value of this is: if it is known how x changes with respect to t, then we can determine how F changes with respect to t, and vice versa. We can extend this application of the chain rule with the sum, difference, product and quotient rules of calculus. For example, if F(x) = G(y) + H(z), then (dF/dx)(dx/dt) = (dG/dy)(dy/dt) + (dH/dz)(dz/dt). The most common way to approach related rates problems is the following: identify the known variables, including rates of change; construct an equation relating the quantities whose rates of change are known to the quantity whose rate of change is to be found; differentiate both sides of the equation with respect to time (often the chain rule is employed at this step); substitute the known rates of change and the known quantities into the equation; and solve for the wanted rate of change. Errors in this procedure are often caused by plugging in the values for the variables before finding the derivative with respect to time. Example: a 10-meter ladder is leaning against the wall of a building. How fast is the top of the ladder sliding down the wall when the base of the ladder is 6 meters from the wall? The distance between the base of the ladder and the wall, x, and the height of the ladder on the wall, y, represent the sides of a right triangle with the ladder as the hypotenuse, h.
The objective is to find dy/dt, the rate of change of y with respect to time t, when h, x and dx/dt, the rate of change of x, are known. Step 1: x = 6, h = 10, dx/dt = 3, dh/dt = 0, dy/dt = ?. Step 2: from the Pythagorean theorem, the equation x² + y² = h². Step 3: differentiating both sides with respect to t gives x(dx/dt) + y(dy/dt) = h(dh/dt). Steps 4 and 5: using the variables from step 1 gives dy/dt = (h(dh/dt) − x(dx/dt)) / y = (10 × 0 − 6 × 3) / y = −18/y. Since y = √(h² − x²) = 8, the top of the ladder is sliding down the wall at a rate of 9⁄4 meters per second. Because one physical quantity often depends on another, which in turn depends on others, such as time, related rates have broad applications; examples include kinematics and electromagnetic induction.
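The ladder computation above is short enough to script directly: the implicit-differentiation identity x(dx/dt) + y(dy/dt) = h(dh/dt) is solved for dy/dt.

```python
import math

# ladder example: x**2 + y**2 == h**2, so x*dxdt + y*dydt == h*dhdt
x, h = 6.0, 10.0          # base distance and (constant) ladder length, meters
dxdt, dhdt = 3.0, 0.0     # base slides out at 3 m/s; ladder length is fixed
y = math.sqrt(h**2 - x**2)            # height on the wall: sqrt(100 - 36) = 8
dydt = (h * dhdt - x * dxdt) / y      # = -18 / 8 = -9/4 m/s (sliding down)
```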
9.
Limit of a function
–
In mathematics, the limit of a function is a fundamental concept in calculus and analysis concerning the behavior of that function near a particular input. Formal definitions, first devised in the early 19th century, are given below. Informally, a function f assigns an output f(x) to every input x. We say the function has a limit L at an input p if f(x) gets closer and closer to L as x moves closer and closer to p; more specifically, when f is applied to any input sufficiently close to p, the output value is forced arbitrarily close to L. On the other hand, if some inputs very close to p are taken to outputs that stay a fixed distance apart, we say the limit does not exist. The notion of a limit has many applications in modern calculus. In particular, the many definitions of continuity employ the limit: roughly, a function is continuous if all of its limits agree with the values of the function. It also appears in the definition of the derivative, which in the calculus of one variable is the limiting value of the slope of secant lines to the graph of a function. Bolzano introduced the basics of the epsilon-delta technique in 1817; however, his work was not known during his lifetime. Weierstrass first introduced the definition of limit in the form it is usually written today. He also introduced the notations lim and lim x→x0; the modern notation of placing the arrow below the limit symbol is due to Hardy in his book A Course of Pure Mathematics in 1908. Imagine a person walking over a landscape represented by the graph of y = f(x). Her horizontal position is measured by the value of x, much like the position given by a map of the land or by a global positioning system, and her altitude is given by the coordinate y. She is walking towards the horizontal position given by x = p. As she gets closer and closer to it, she notices that her altitude approaches L. If asked about the altitude corresponding to x = p, she would then answer L. What, then, does it mean to say that her altitude approaches L? It means that her altitude gets nearer and nearer to L, except for a possible small error in accuracy. For example, suppose we set a particular accuracy goal for our traveler: she must get within ten vertical meters of L.
She reports back that indeed she can get within ten vertical meters of L, since she notes that when she is within fifty horizontal meters of p, her altitude is always within ten meters of L. The accuracy goal is then changed: can she get within one vertical meter? Yes: if she is anywhere within seven horizontal meters of p, then her altitude always remains within one meter of the target L. This explicit statement is quite close to the formal definition of the limit of a function with values in a topological space. To say that lim x→p f(x) = L means that f(x) can be made as close as desired to L by making x close enough, but not equal, to p. The following definitions are the generally accepted ones for the limit of a function in various contexts. Suppose f : R → R is defined on the real line. The value of the limit does not depend on the value of f(p), nor even on p being in the domain of f.
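The traveler's accuracy-goal game is exactly the epsilon-delta definition, and it can be played in code. The sketch below uses f(x) = (x² − 1)/(x − 1), which is undefined at x = 1 but has limit 2 there: for the goal ε = 10⁻³, the response δ = 10⁻³ works, since f(x) = x + 1 away from 1.

```python
def f(x):
    """Undefined at x == 1 (division by zero), yet the limit there is 2."""
    return (x**2 - 1) / (x - 1)

eps = 1e-3               # the accuracy goal: stay within eps of the limit L = 2
delta = 1e-3             # the response: stay within delta of p = 1 (but not at p)
samples = [1 + d for d in (-delta / 2, -delta / 4, delta / 4, delta / 2)]
goal_met = all(abs(f(x) - 2) < eps for x in samples)
```

Note that f(1) itself need not exist for the limit to exist, mirroring the remark that the limit does not depend on the value of f(p).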
10.
Continuous function
–
In mathematics, a continuous function is a function for which sufficiently small changes in the input result in arbitrarily small changes in the output. Otherwise, a function is said to be a discontinuous function. A continuous function with a continuous inverse function is called a homeomorphism. Continuity of functions is one of the core concepts of topology. The introductory portion of this article focuses on the special case where the inputs and outputs of functions are real numbers. In addition, this article discusses the definition for the more general case of functions between two metric spaces. In order theory, especially in domain theory, one considers a notion of continuity known as Scott continuity. Other forms of continuity do exist, but they are not discussed in this article. As an example, consider the function h(t), which describes the height of a growing flower at time t; this function is continuous. By contrast, if M(t) denotes the amount of money in a bank account at time t, then the function jumps at each point in time when money is deposited or withdrawn, so M(t) is discontinuous. A form of the epsilon-delta definition of continuity was first given by Bernard Bolzano in 1817. Cauchy defined continuity by requiring that an infinitely small change of the input produce an infinitely small change of the output, defining infinitely small quantities in terms of variable quantities. The formal definition and the distinction between pointwise continuity and uniform continuity were first given by Bolzano in the 1830s, but the work wasn't published until the 1930s. All three of those nonequivalent definitions of pointwise continuity are still in use. Eduard Heine provided the first published definition of uniform continuity in 1872. A function is sometimes said to be continuous if its graph can be drawn without lifting the pen; this is not a precise definition of continuity, since, for example, the function f(x) = 1/x is continuous on its whole domain R ∖ {0} even though its graph has a break at x = 0. A function is continuous at a point if it does not have a hole or jump there: there is a "hole" or "jump" in the graph of a function if the value of the function at a point c differs from its limiting value along points that are nearby.
Such a point is called a discontinuity. A function is then continuous if it has no holes or jumps, that is, if it is continuous at every point of its domain; otherwise, a function is discontinuous at the points where its value differs from its limiting value. There are several ways to make this definition mathematically rigorous. These definitions are equivalent to one another, so the most convenient one can be used to determine whether a given function is continuous or not. In the definitions below, f : I → R is a function defined on a subset I of the set R of real numbers; this subset I is referred to as the domain of f.
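The flower-height versus bank-balance contrast above can be made concrete with two toy models (both functions here are illustrative assumptions, not anything from the source): small input changes near the deposit time produce a large jump in the balance but only a tiny change in the smooth height function.

```python
def h(t):
    """Toy continuous model: height of a flower growing at 0.5 units per day."""
    return 0.5 * t

def M(t):
    """Toy discontinuous model: balance jumps by 100 at the deposit time t = 1."""
    return 100.0 if t >= 1.0 else 0.0

# a tiny change of input near t = 1 ...
jump_M = M(1.0) - M(1.0 - 1e-9)   # ... moves M by the full deposit of 100
jump_h = h(1.0) - h(1.0 - 1e-9)   # ... moves h by only about 5e-10
```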
11.
Partial derivative
–
In mathematics, the symmetry of second derivatives refers to the possibility, under certain conditions, of interchanging the order of taking partial derivatives of a function f of n variables. This is sometimes known as Schwarz's theorem or Young's theorem; in the context of partial differential equations it is called the Schwarz integrability condition. The matrix of second partial derivatives of f is called the Hessian matrix of f, and the entries in it off the main diagonal are the mixed derivatives. In most real-life circumstances the Hessian matrix is symmetric, although there are a number of functions that do not have this property. Mathematical analysis reveals that symmetry requires a hypothesis on f that goes further than simply stating the existence of the second derivatives at a particular point; Schwarz's theorem gives a sufficient condition on f for this to occur. In symbols, the symmetry says that, for example, ∂/∂x (∂f/∂y) = ∂/∂y (∂f/∂x). This equality can also be written as ∂xy f = ∂yx f. Alternatively, the symmetry can be written as an algebraic statement involving the differential operator Di which takes the partial derivative with respect to xi: Di · Dj = Dj · Di. From this relation it follows that the ring of differential operators with constant coefficients generated by the Di is commutative. But one should naturally specify some domain for these operators, and it is easy to check the symmetry as applied to monomials, so that one can take polynomials in the xi as a domain. In fact, smooth functions are another possible domain: if the second partial derivatives are continuous at a point, the partial differentiations of the function are commutative at that point. One easy way to establish this theorem is by applying Green's theorem to the gradient of f. A weaker condition than the continuity of second partial derivatives, which nevertheless suffices to ensure symmetry, is that all first partial derivatives are themselves differentiable.
The theory of distributions eliminates analytic problems with the symmetry: the derivative of an integrable function can always be defined as a distribution, and symmetry of mixed partial derivatives always holds as an equality of distributions. The use of integration by parts to define differentiation of distributions puts the symmetry question back onto the test functions, which are smooth and certainly satisfy the symmetry. Another approach, via the Fourier transform of a function, is to note that on such transforms partial derivatives become multiplication operators, which commute much more obviously. The symmetry may be broken if the function fails to have differentiable partial derivatives. An example of non-symmetry is the function f(x, y) = xy(x² − y²)/(x² + y²) for (x, y) ≠ (0, 0), with f(0, 0) = 0. This function is everywhere continuous; however, the second partial derivatives are not continuous at (0, 0), and the symmetry fails there. In fact, along the x-axis the y-derivative is ∂y f(x, 0) = x, and vice versa, along the y-axis the x-derivative is ∂x f(0, y) = −y, so that ∂x ∂y f(0, 0) = 1 while ∂y ∂x f(0, 0) = −1.
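The failure of symmetry at the origin can be observed with finite differences. The sketch below (step sizes are illustrative choices) estimates the two mixed partials of the counterexample function at (0, 0) and finds values close to +1 and −1 respectively.

```python
def f(x, y):
    """The classic counterexample: x*y*(x**2 - y**2) / (x**2 + y**2)."""
    return 0.0 if x == y == 0.0 else x * y * (x**2 - y**2) / (x**2 + y**2)

h_out, h_in = 1e-3, 1e-7   # outer and inner finite-difference steps (h_in << h_out)

def dy_f(x):               # central-difference estimate of the y-partial at (x, 0)
    return (f(x, h_in) - f(x, -h_in)) / (2 * h_in)

def dx_f(y):               # central-difference estimate of the x-partial at (0, y)
    return (f(h_in, y) - f(-h_in, y)) / (2 * h_in)

# mixed partials at the origin come out unequal: about +1 and -1
dxdy = (dy_f(h_out) - dy_f(-h_out)) / (2 * h_out)
dydx = (dx_f(h_out) - dx_f(-h_out)) / (2 * h_out)
```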
12.
Surface integral
–
In mathematics, a surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double-integral analog of the line integral. Given a surface, one may integrate over its scalar fields and vector fields. Surface integrals have applications in physics, particularly within the theory of classical electromagnetism. To compute a surface integral, one parameterizes the surface S; let such a parameterization be x(s, t), where (s, t) varies in some region T in the plane. The surface integral of a scalar field f can then be expressed in the equivalent form ∬_S f dΣ = ∬_T f(x(s, t)) √g ds dt, where g is the determinant of the first fundamental form of the mapping x(s, t). For the graph of a function z = f(x, y), parameterized by r(x, y) = (x, y, f(x, y)), we have ∂r/∂x = (1, 0, ∂f/∂x) and ∂r/∂y = (0, 1, ∂f/∂y), and one can recognize the vector in their cross product as the normal vector to the surface. Note that because of the presence of the cross product, the above formulas only work for surfaces embedded in three-dimensional space. This can be seen as integrating a Riemannian volume form on the parameterized surface. Now consider a vector field v on S, that is, for each x in S, v(x) is a vector. The surface integral of a vector field can be defined component-wise according to the definition of the surface integral of a scalar field; this applies, for example, in the expression of the electric field at some fixed point due to an electrically charged surface. Alternatively, we may integrate the normal component of the vector field. Imagine that we have a fluid flowing through S, such that v(x) determines the velocity of the fluid at x; the flux is defined as the quantity of fluid flowing through S per unit time. This illustration implies that if the vector field is tangent to S at each point, then the flux is zero, because the fluid just flows in parallel to S. This also implies that if v does not just flow along S, only the normal component contributes to the flux, and we find the formula ∬_S v · dΣ = ∬_S (v · n) dΣ = ∬_T v(x(s, t)) · (∂x/∂s × ∂x/∂t) ds dt. The cross product on the right-hand side of this expression is a surface normal determined by the parametrization.
This formula defines the integral on the left. We may also interpret this as a special case of integrating 2-forms, where we identify the vector field with a 1-form and then integrate its Hodge dual over the surface; the transformation rules for 2-forms are similar. Let fx dy ∧ dz + fy dz ∧ dx + fz dx ∧ dy be a 2-form f defined on a surface S with an orientation-preserving parameterization x(s, t), (s, t) in D. Then the integral of f on S is given by ∬_D [fx (∂y/∂s ∂z/∂t − ∂y/∂t ∂z/∂s) + fy (∂z/∂s ∂x/∂t − ∂z/∂t ∂x/∂s) + fz (∂x/∂s ∂y/∂t − ∂x/∂t ∂y/∂s)] ds dt, where ∂x/∂s × ∂x/∂t is the surface element normal to S. Let us note that the integral of this 2-form is the same as the surface integral of the vector field which has as components fx, fy and fz.
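The parameterization machinery above can be exercised numerically: for a scalar field f = 1, the surface integral ∬_T ‖∂x/∂s × ∂x/∂t‖ ds dt is simply the surface area. The sketch below (midpoint rule, tangent vectors by central differences) recovers the area 4π of the unit sphere.

```python
import math

def sphere(s, t):
    """Unit sphere parameterized by s in [0, pi], t in [0, 2*pi]."""
    return (math.sin(s) * math.cos(t), math.sin(s) * math.sin(t), math.cos(s))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def surface_area(x, s0, s1, t0, t1, n=200, h=1e-6):
    """Midpoint-rule approximation of the integral of |dx/ds x dx/dt| over T."""
    hs, ht = (s1 - s0) / n, (t1 - t0) / n
    total = 0.0
    for i in range(n):
        s = s0 + (i + 0.5) * hs
        for j in range(n):
            t = t0 + (j + 0.5) * ht
            # tangent vectors dx/ds and dx/dt by central differences
            xs = [(a - b) / (2 * h) for a, b in zip(x(s + h, t), x(s - h, t))]
            xt = [(a - b) / (2 * h) for a, b in zip(x(s, t + h), x(s, t - h))]
            nvec = cross(xs, xt)      # surface element normal
            total += math.sqrt(sum(c * c for c in nvec)) * hs * ht
    return total

area = surface_area(sphere, 0.0, math.pi, 0.0, 2 * math.pi)
# exact surface area of the unit sphere: 4*pi
```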