1.
Calculus
–
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. It has two branches, differential calculus and integral calculus, which are related to each other by the fundamental theorem of calculus. Both branches make use of the notion of convergence of infinite sequences. Modern calculus is generally considered to have been developed in the 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Today, calculus has widespread uses in science, engineering, and economics, and it is a part of modern mathematics education. A course in calculus is a gateway to other, more advanced courses in mathematics devoted to the study of functions and limits. Calculus has historically been called the calculus of infinitesimals, or infinitesimal calculus. The word is also used for naming some methods of calculation or theories of computation, such as propositional calculus, the calculus of variations, and the lambda calculus.

The ancient period introduced some of the ideas that led to integral calculus; the method of exhaustion was later discovered independently in China by Liu Hui in the 3rd century AD in order to find the area of a circle. In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be used to find the volume of a sphere. Indian mathematicians gave a non-rigorous method of a sort of differentiation of some trigonometric functions. In the Middle East, Alhazen derived a formula for the sum of fourth powers and used the result to carry out what would now be called an integration. Cavalieri's work was not well respected, since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first. 
The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory. In other work, Newton developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series. He did not publish all these discoveries, since at this time infinitesimal methods were considered disreputable. These ideas were arranged into a calculus of infinitesimals by Gottfried Wilhelm Leibniz, who is now regarded as an independent inventor of and contributor to calculus. Unlike Newton, Leibniz paid a lot of attention to the formalism, often spending days determining appropriate symbols for concepts. Leibniz and Newton are usually both credited with the invention of calculus. Newton was the first to apply calculus to general physics, and Leibniz developed much of the notation used in calculus today.
2.
Mean value theorem
–
This theorem is used to prove statements about a function on an interval starting from local hypotheses about derivatives at points of the interval, and it is one of the most important results in real analysis. A special case of this theorem was first described by Parameshvara, from the Kerala school of astronomy and mathematics in India, in his commentaries on Govindasvāmi and Bhāskara II. A restricted form of the theorem was proved by Rolle in 1691; the result was what is now known as Rolle's theorem. The mean value theorem in its modern form was stated and proved by Cauchy in 1823.

More precisely, let f : [a, b] → R be a function that is continuous on the closed interval [a, b] and differentiable on the open interval (a, b). Then there exists c in (a, b) such that

    f′(c) = (f(b) − f(a)) / (b − a).

The mean value theorem is a generalization of Rolle's theorem, which assumes f(a) = f(b). The mean value theorem is still valid in a slightly more general setting: one only needs to assume that f : [a, b] → R is continuous on [a, b] and that for every x in (a, b) the limit of (f(x + h) − f(x)) / h as h → 0 exists as a finite number or equals ∞ or −∞. If finite, that limit equals f′(x). An example where this version of the theorem applies is given by the cube root function mapping x → x^(1/3), whose derivative tends to infinity at the origin.

Note that the theorem, as stated, is false if a differentiable function is complex-valued instead of real-valued. For example, define f(x) = e^(ix) for all real x. Then f(2π) − f(0) = 0, while f′(x) ≠ 0 for any real x.

The mean value theorem says that given any chord of a smooth curve, we can find a point between the endpoints of the chord at which the tangent to the curve is parallel to the chord. The following proof illustrates this idea. Define g(x) = f(x) − rx, where r is a constant. Since f is continuous on [a, b] and differentiable on (a, b), the same is true for g. We now want to choose r so that g satisfies the conditions of Rolle's theorem.

Assume that f is a continuous, real-valued function defined on an arbitrary interval I of the real line. If the derivative of f at every point of the interval I exists and is zero, then f is constant on I. Proof: assume the derivative of f at every point of the interval I exists and is zero. Let (a, b) be an open interval in I. 
By the mean value theorem, there exists a point c in (a, b) such that 0 = f′(c) = (f(b) − f(a)) / (b − a), which implies f(a) = f(b). Thus, f is constant on the interior of I, and thus is constant on I by continuity.
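The point c guaranteed by the mean value theorem can also be located numerically. The following sketch (the function f, its derivative, and the interval are example choices, not taken from the text) uses bisection on f′(x) − (f(b) − f(a))/(b − a):

```python
# Numerically locate a point c with f'(c) equal to the secant slope,
# for the example function f(x) = x^3 on [0, 2].
def f(x):
    return x ** 3

def fprime(x):
    return 3 * x ** 2

a, b = 0.0, 2.0
slope = (f(b) - f(a)) / (b - a)  # secant slope: (8 - 0) / 2 = 4

# f'(x) - slope changes sign on (a, b), so bisection finds a root c.
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if fprime(mid) < slope:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2
# For f(x) = x^3 on [0, 2], solving 3c^2 = 4 gives c = 2 / sqrt(3).
```

Here bisection applies because f′ is continuous and takes values on both sides of the secant slope; for a general f the theorem guarantees existence of c but not a unique or easily computable one.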
3.
Integral
–
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse, differentiation, being the other. The area above the x-axis adds to the total and that below the x-axis subtracts from the total. Roughly speaking, the operation of integration is the reverse of differentiation. For this reason, the term integral may also refer to the related notion of the antiderivative; in this case, it is called an indefinite integral. The integrals discussed in this article are those termed definite integrals.

A rigorous mathematical definition of the integral was given by Bernhard Riemann. It is based on a procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. A line integral is defined for functions of two or three variables, and the interval of integration is replaced by a curve connecting two points on the plane or in space. In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space.

The method of exhaustion was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. A similar method was developed in China around the 3rd century AD by Liu Hui, and it was used in the 5th century by the Chinese father-and-son mathematicians Zu Chongzhi and Zu Gengzhi. The next significant advances in integral calculus did not begin to appear until the 17th century. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers. 
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation, and this connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the mathematical framework that both Newton and Leibniz developed.
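The slab-based procedure and its connection to antiderivatives can be sketched in a few lines of code. This example (the integrand and interval are choices made here for illustration) compares a Riemann sum with the value given by the fundamental theorem of calculus:

```python
# Approximate the area under f on [a, b] with thin vertical slabs
# (a left Riemann sum), then compare with F(b) - F(a), where F is an
# antiderivative of f, as the fundamental theorem of calculus prescribes.
def riemann_sum(f, a, b, n):
    """Left Riemann sum with n slabs of width (b - a) / n."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

f = lambda x: x ** 2
approx = riemann_sum(f, 0.0, 1.0, 100000)

# F(x) = x^3 / 3 is an antiderivative of x^2, so the exact value is 1/3.
exact = 1.0 ** 3 / 3 - 0.0 ** 3 / 3
```

As the number of slabs grows, the Riemann sum converges to the value the fundamental theorem delivers in one step, which is the comparative-ease point made above.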
4.
Taylor's theorem
–
In calculus, Taylor's theorem gives an approximation of a k-times differentiable function around a given point by a k-th order Taylor polynomial. For analytic functions the Taylor polynomials at a point are finite-order truncations of the Taylor series. The exact content of Taylor's theorem is not universally agreed upon; indeed, there are several versions of it applicable in different situations, and some of them contain explicit estimates on the approximation error of the function by its Taylor polynomial.

Taylor's theorem is named after the mathematician Brook Taylor, who stated a version of it in 1712, yet an explicit expression of the error was not provided until much later on by Joseph-Louis Lagrange. An earlier version of the result was already mentioned in 1671 by James Gregory. Taylor's theorem is taught in introductory-level calculus courses and is one of the central elementary tools in mathematical analysis. Within pure mathematics it is the starting point of more advanced asymptotic analysis. Taylor's theorem also generalizes to multivariate and vector-valued functions f : Rⁿ → Rᵐ in any dimensions n and m, and this generalization is the basis for the definition of so-called jets, which appear in differential geometry and partial differential equations.

If a real-valued function f is differentiable at the point a, then it has a linear approximation at the point a. This means that there exists a function h₁ such that

    f(x) = f(a) + f′(a)(x − a) + h₁(x)(x − a),   where lim_{x→a} h₁(x) = 0.

Here P₁(x) = f(a) + f′(a)(x − a) is the linear approximation of f at the point a. The graph of y = P₁(x) is the tangent line to the graph of f at x = a. The error in the approximation is R₁(x) = f(x) − P₁(x) = h₁(x)(x − a). Note that this goes to zero a little bit faster than x − a as x tends to a. If we wanted a better approximation to f, we might instead try a quadratic polynomial instead of a linear function. 
Instead of just matching one derivative of f at a, we can match two derivatives, thus producing a polynomial that has the same slope and concavity as f at a. The quadratic polynomial in question is

    P₂(x) = f(a) + f′(a)(x − a) + (f″(a)/2)(x − a)².

Taylor's theorem ensures that the quadratic approximation is, in a sufficiently small neighborhood of the point a, a better approximation than the linear approximation. Specifically, f(x) = P₂(x) + h₂(x)(x − a)², where lim_{x→a} h₂(x) = 0. Here the error in the approximation is R₂(x) = f(x) − P₂(x) = h₂(x)(x − a)², which, given the limiting behavior of h₂, goes to zero faster than (x − a)² as x tends to a. Similarly, we might get better approximations to f if we use polynomials of higher degree. In general, the error in approximating a function by a polynomial of degree k will go to zero a little bit faster than (x − a)^k as x tends to a. A typical task is to find the smallest degree k for which the polynomial Pₖ approximates f to within a given error on a given interval.
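The task of finding the smallest adequate degree k can be carried out by direct search. The sketch below (the function e^x, the expansion point 0, the interval [−1, 1], and the tolerance are all example choices) builds Taylor polynomials of increasing degree until the worst-case error on the interval falls within tolerance:

```python
import math

# Taylor polynomial of e^x about a = 0: P_k(x) = sum of x^n / n!, n = 0..k.
def taylor_exp(x, k):
    return sum(x ** n / math.factorial(n) for n in range(k + 1))

def max_error(k, points=201):
    """Worst-case |e^x - P_k(x)| sampled on a grid over [-1, 1]."""
    xs = [-1 + 2 * i / (points - 1) for i in range(points)]
    return max(abs(math.exp(x) - taylor_exp(x, k)) for x in xs)

tol = 1e-6
k = 0
while max_error(k) > tol:
    k += 1
# k is now the smallest degree whose polynomial stays within tol on [-1, 1];
# each extra degree shrinks the worst-case error by roughly a factorial factor.
```

The Lagrange remainder bound e·|x|^(k+1)/(k+1)! explains why the loop terminates quickly: the factorial in the denominator dominates for any fixed interval.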
5.
Antiderivative
–
In calculus, an antiderivative, primitive function, primitive integral or indefinite integral of a function f is a differentiable function F whose derivative is equal to the original function f. This can be stated symbolically as F′ = f. The process of solving for antiderivatives is called antidifferentiation, and its opposite operation is called differentiation, which is the process of finding a derivative. The discrete equivalent of the notion of antiderivative is the antidifference.

The function F(x) = x³/3 is an antiderivative of f(x) = x², as the derivative of x³/3 is x². As the derivative of a constant is zero, x² has an infinite number of antiderivatives, such as x³/3, x³/3 + 1, and x³/3 − 2. Thus, all the antiderivatives of x² can be obtained by changing the value of C in F(x) = x³/3 + C. Essentially, the graphs of antiderivatives of a given function are vertical translations of each other, each graph's vertical location depending upon the value of C.

In physics, the integration of acceleration yields velocity plus a constant; the constant is the initial velocity term that would be lost upon taking the derivative of velocity, because the derivative of a constant term is zero. This same pattern applies to further integrations and derivatives of motion. C is called the arbitrary constant of integration. If the domain of F is a disjoint union of two or more intervals, then a different constant of integration may be chosen for each of the intervals. For instance,

    F(x) = −1/x + C₁ for x < 0,   F(x) = −1/x + C₂ for x > 0

is the most general antiderivative of f(x) = 1/x² on its natural domain (−∞, 0) ∪ (0, ∞).

Every continuous function f has an antiderivative, and one antiderivative F is given by the definite integral of f with variable upper boundary. Varying the lower boundary produces other antiderivatives; this is another formulation of the fundamental theorem of calculus. There are many functions whose antiderivatives, even though they exist, cannot be expressed in terms of elementary functions. 
Examples of these are ∫ e^(−x²) dx, ∫ sin(x²) dx, ∫ (sin x)/x dx, ∫ 1/(ln x) dx, and ∫ x^x dx. From left to right, the first four are the error function, the Fresnel function, the sine integral, and the logarithmic integral. See also Differential Galois theory for a detailed discussion. Finding antiderivatives of elementary functions is often harder than finding their derivatives. For some elementary functions, it is impossible to find an antiderivative in terms of elementary functions. See the articles on elementary functions and nonelementary integrals for further information. Integrals which have already been derived can be looked up in a table of integrals.
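Even when an antiderivative is not elementary, it still exists and can be evaluated numerically via the integral with variable upper boundary. This sketch does that for f(x) = e^(−x²) (the quadrature rule, step count, and test point are choices made here), then checks that differentiating the numerical antiderivative recovers f:

```python
import math

# F(x) = integral of exp(-t^2) from 0 to x, computed with the composite
# trapezoidal rule. F is (up to a constant) the nonelementary antiderivative
# of exp(-x^2) mentioned above.
def F(x, n=10000):
    f = lambda t: math.exp(-t * t)
    h = x / n
    total = (f(0.0) + f(x)) / 2 + sum(f(i * h) for i in range(1, n))
    return total * h

# F'(x) should recover f(x); check with a symmetric difference quotient.
x0 = 0.7
eps = 1e-4
deriv = (F(x0 + eps) - F(x0 - eps)) / (2 * eps)
```

The check is exactly the fundamental theorem of calculus in numerical form: the derivative of the variable-upper-boundary integral equals the integrand.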
6.
Shell integration
–
Shell integration is a means of calculating the volume of a solid of revolution, when integrating along an axis perpendicular to the axis of revolution. This is in contrast to disk integration, which integrates along the axis parallel to the axis of revolution.

The shell method goes as follows. Consider a volume in three dimensions obtained by rotating a cross-section in the xy-plane around the y-axis, and suppose the cross-section is defined by the graph of the non-negative function f(x) = (x − 1)²(x − 2)² on the interval [1, 2]. Because the volume is hollow in the middle, computing it with the disk method would require two functions, one that defines the inner solid and one that defines the outer solid; after integrating these two functions with the disk method, we would subtract them to yield the desired volume. With the shell method, all we need is the formula

    2π ∫₁² x (x − 1)²(x − 2)² dx.

By expanding the polynomial, the integral becomes very simple, and in the end we find the volume is π/10 cubic units.

See also: solid of revolution; disk integration; Weisstein, Eric W., "Method of Shells".
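The shell-method formula above can be checked numerically. The sketch below (Simpson's rule and the step count are choices made here for illustration) evaluates 2π ∫ x·f(x) dx for the cross-section function and compares against π/10:

```python
import math

# Shell method: V = 2*pi * integral_a^b x * f(x) dx, evaluated with
# composite Simpson's rule (n must be even).
def shell_volume(f, a, b, n=1000):
    h = (b - a) / n
    g = lambda x: x * f(x)           # shell circumference factor x times height f(x)
    s = g(a) + g(b)
    s += 4 * sum(g(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(g(a + i * h) for i in range(2, n, 2))
    return 2 * math.pi * s * h / 3

f = lambda x: (x - 1) ** 2 * (x - 2) ** 2
volume = shell_volume(f, 1.0, 2.0)   # should be close to pi / 10
```

Since the integrand is a polynomial, Simpson's rule converges very fast here; the analytic expansion gives ∫₁² x(x − 1)²(x − 2)² dx = 1/20, hence the volume π/10.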
7.
Improper integral
–
In mathematical analysis, an improper integral is the limit of a definite integral as an endpoint of the interval of integration approaches either a specified real number, ∞, or −∞. By abuse of notation, improper integrals are often written symbolically just like standard definite integrals, in some cases with infinity as a limit of integration. When the definite integral exists, this ambiguity is resolved, as both the proper and improper integral will coincide in value.

The original definition of the Riemann integral does not apply to a function such as 1/x² on the interval [1, ∞). The narrow definition of the Riemann integral also does not cover the function 1/√x on the interval (0, 1]; the problem here is that the integrand is unbounded in the domain of integration. However, the integral does exist if understood as the limit

    ∫₀¹ 1/√x dx = lim_{a→0⁺} ∫ₐ¹ 1/√x dx = lim_{a→0⁺} 2(1 − √a) = 2.

Sometimes integrals may have two singularities where they are improper. Consider, for example, the function 1/((1 + x)√x) integrated from 0 to ∞. At the lower bound, as x goes to 0 the function goes to ∞, and the upper bound is itself infinite; thus this is a doubly improper integral. Integrated, say, from 1 to 3, an ordinary Riemann sum suffices to produce a result of π/6. To integrate from 1 to ∞, a Riemann sum is not possible; however, any finite upper bound, say t, gives a well-defined result, 2 arctan(√t) − π/2. This has a limit as t goes to infinity, namely π/2. Similarly, the integral from 1/3 to 1 allows a Riemann sum as well, and replacing 1/3 by an arbitrary positive value s is equally safe, giving π/2 − 2 arctan(√s). This, too, has a finite limit as s goes to zero, namely π/2.

This process does not guarantee success; a limit might fail to exist. For example, over the bounded interval from 0 to 1 the integral of 1/x does not converge, and over the unbounded interval from 1 to ∞ the integral of 1/√x does not converge. It might also happen that an integrand is unbounded near an interior point; for the integral as a whole to converge, the limit integrals on both sides must exist and must be bounded. 
But the similar integral ∫₋₁¹ dx/x cannot be assigned a value in this way. As with the integrals above, an improper integral converges if the limit defining it exists. It is also possible for an integral to diverge to infinity, in which case one may assign the value ∞ to the integral; for instance, lim_{b→∞} ∫₁ᵇ (1/x) dx = ∞. However, other improper integrals may simply diverge in no particular direction, such as lim_{b→∞} ∫₁ᵇ x sin(x) dx, which does not exist. This is called divergence by oscillation.
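The limit process described above can be made concrete. For the doubly improper example 1/((1 + x)√x), the antiderivative is 2 arctan(√x), so the tail integral from 1 to t is 2 arctan(√t) − π/2; this sketch tabulates that expression for growing t and watches it approach π/2:

```python
import math

# Tail of the doubly improper integral of 1/((1 + x) * sqrt(x)):
# integral from 1 to t equals 2*arctan(sqrt(t)) - pi/2, since the
# antiderivative of the integrand is 2*arctan(sqrt(x)).
def tail_integral(t):
    return 2 * math.atan(math.sqrt(t)) - math.pi / 2

# Increasing values of t push the result toward the limit pi/2.
values = [tail_integral(t) for t in (10.0, 1e3, 1e6, 1e9)]
```

Divergence by oscillation, by contrast, would show up as values that keep swinging instead of settling toward one number as the upper bound grows.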
8.
Geometric series
–
In mathematics, a geometric series is a series with a constant ratio between successive terms. For example, the series 1/2 + 1/4 + 1/8 + 1/16 + ⋯ is geometric. Geometric series are among the simplest examples of infinite series with finite sums, although not all of them have this property. Historically, geometric series played an important role in the development of calculus. Geometric series are used throughout mathematics, and they have important applications in physics, engineering, biology, economics, computer science, and queueing theory.

The terms of a geometric series form a geometric progression, meaning that the ratio of successive terms in the series is constant. This relationship allows for the representation of a geometric series using only two terms, r and a. The term r is the common ratio, and a is the first term of the series. In the case above, where r is one half, the series has the sum one. If r is greater than one or less than minus one, the terms of the series become larger and larger in magnitude, and the sum of the terms also gets larger and larger. If r is equal to one, all of the terms of the series are the same and the series diverges. If r is minus one, the terms take two values alternately, and the sum of the terms oscillates between two values. This is a different type of divergence, and again the series has no sum; see for example Grandi's series, 1 − 1 + 1 − 1 + ···.

The sum can be computed using the self-similarity of the series. Consider the sum of the following geometric series:

    s = 1 + 2/3 + 4/9 + 8/27 + ⋯.

This series has common ratio 2/3. If we multiply through by this common ratio, then the initial 1 becomes a 2/3, the 2/3 becomes a 4/9, and so on:

    (2/3)s = 2/3 + 4/9 + 8/27 + 16/81 + ⋯.

This new series is the same as the original, except that the first term is missing. Subtracting the new series (2/3)s from the original series s cancels every term in the original but the first: s − (2/3)s = 1, so s = 3. 
A similar technique can be used to evaluate any self-similar expression. As n goes to infinity, the absolute value of r must be less than one for the series to converge, in which case the sum is a / (1 − r). When a = 1, this simplifies to 1 + r + r² + r³ + ⋯ = 1/(1 − r). The formula also holds for complex r, with the corresponding restriction that the modulus of r is strictly less than one. Since (1 − r)(1 + r + r² + ⋯ + rⁿ) = 1 − r^(n+1) and r^(n+1) → 0 for |r| < 1, convergence of geometric series can also be demonstrated by rewriting the series as an equivalent telescoping series. Consider the function g(K) = r^K / (1 − r). Then g(K) − g(K + 1) = r^K, so each term of the series telescopes, and the partial sum up to rⁿ equals g(0) − g(n + 1) = (1 − r^(n+1)) / (1 − r).
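The closed form a / (1 − r) can be verified against partial sums directly. This sketch (the values of a and r are examples, chosen to match the series with sum 3 discussed above) compares the two:

```python
# Partial sums of a geometric series approach a / (1 - r) when |r| < 1.
def geometric_partial_sum(a, r, n):
    """Sum of a * r^k for k = 0..n."""
    return sum(a * r ** k for k in range(n + 1))

a, r = 1.0, 2.0 / 3.0
s_exact = a / (1 - r)                       # for a = 1, r = 2/3 this is 3
s_approx = geometric_partial_sum(a, r, 100)
# By the telescoping identity, the partial sum is (1 - r^(n+1)) / (1 - r),
# and r^101 is already negligibly small here.
```

With r = 2/3 the leftover term r^(n+1)/(1 − r) shrinks geometrically, so a hundred terms already agree with the closed form to machine precision.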
9.
Curl (mathematics)
–
In vector calculus, the curl is a vector operator that describes the infinitesimal rotation of a 3-dimensional vector field. At every point in the field, the curl is represented by a vector, whose attributes characterize the rotation at that point. The direction of the curl is the axis of rotation, as determined by the right-hand rule. If the vector field represents the flow velocity of a moving fluid, then the curl is the circulation density of the fluid. A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The alternative terminology rotor or rotational and the alternative notations rot F and ∇ × F are often used for curl F; the connection with the 3-dimensional cross product is reflected in the notation ∇ × for the curl. The name curl was first suggested by James Clerk Maxwell in 1871.

The curl of a vector field F, denoted by curl F, or ∇ × F, or rot F, at a point is defined in terms of its projection onto various lines through the point. As such, the curl operator maps continuously differentiable functions f : ℝ³ → ℝ³ to continuous functions g : ℝ³ → ℝ³; in fact, it maps Cᵏ functions in ℝ³ to Cᵏ⁻¹ functions in ℝ³. Implicitly, curl is defined by

    (∇ × F) ⋅ n̂ := lim_{A→0} (1/|A|) ∮_C F · dr,

where ∮_C F · dr is a line integral along the boundary C of the area in question, and |A| is the magnitude of the area. Note that the equation for each component can be obtained by exchanging each occurrence of a subscript 1, 2, 3 in cyclic permutation: 1→2, 2→3, and 3→1. If (x₁, x₂, x₃) are the Cartesian coordinates and (u₁, u₂, u₃) are the curvilinear coordinates, then

    hᵢ = √((∂x₁/∂uᵢ)² + (∂x₂/∂uᵢ)² + (∂x₃/∂uᵢ)²)

is the length of the coordinate vector corresponding to uᵢ. The remaining two components of curl result from cyclic permutation of indices: 3,1,2 → 1,2,3 → 2,3,1.

Suppose the vector field describes the velocity field of a fluid flow, and a small ball is located within the fluid. If the ball has a rough surface, the fluid flowing past it will make it rotate. 
The rotation axis points in the direction of the curl of the field at the centre of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point. The notation ∇ × F has its origins in the similarities to the 3-dimensional cross product, and such notation involving operators is common in physics and algebra. However, in certain coordinate systems, such as polar-toroidal coordinates, ∇ × F cannot be computed as a literal cross product. In 3-dimensional Cartesian coordinates, it expands as follows:

    ∇ × F = (∂F₃/∂x₂ − ∂F₂/∂x₃) i + (∂F₁/∂x₃ − ∂F₃/∂x₁) j + (∂F₂/∂x₁ − ∂F₁/∂x₂) k.

Although expressed in terms of coordinates, the result is invariant under proper rotations of the coordinate axes. Equivalently,

    ∇ × F = eₖ ε_{klm} ∇ₗ F_m,

where eₖ are the coordinate vector fields. Equivalently, using the exterior derivative, the curl can be expressed as

    ∇ × F = (★(dF♭))♯.

Here ♭ and ♯ are the musical isomorphisms, and ★ is the Hodge star operator.
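The Cartesian component formula can be tested numerically. In this sketch (the field F, the sample point, and the finite-difference step are example choices), each partial derivative is approximated by a central difference; the field F(x, y, z) = (−y, x, 0) is a rigid rotation about the z-axis, whose curl is (0, 0, 2), illustrating the factor of two between curl magnitude and angular speed:

```python
# Finite-difference curl of a 3-dimensional vector field F at a point p,
# following the Cartesian component formula for curl.
def curl(F, p, h=1e-5):
    def d(i, j):
        """Central-difference approximation of dF_i / dx_j at p."""
        q_plus, q_minus = list(p), list(p)
        q_plus[j] += h
        q_minus[j] -= h
        return (F(q_plus)[i] - F(q_minus)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2),   # dF3/dx2 - dF2/dx3
            d(0, 2) - d(2, 0),   # dF1/dx3 - dF3/dx1
            d(1, 0) - d(0, 1))   # dF2/dx1 - dF1/dx2

# Rigid rotation about the z-axis: curl is (0, 0, 2) at every point.
F = lambda p: (-p[1], p[0], 0.0)
c = curl(F, (0.3, -0.2, 1.0))
```

Because the example field is linear, the central differences are essentially exact, and the computed curl is constant over all sample points, as the theory predicts for a rigid rotation.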