Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape and algebra is the study of generalizations of arithmetic operations. It has two major branches, differential calculus and integral calculus, which are related to each other by the fundamental theorem of calculus. Both branches make use of the notion of convergence of infinite sequences and infinite series. Modern calculus is generally considered to have been developed in the 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Today, calculus has widespread uses in science and economics, and it is a standard part of modern mathematics education. A course in calculus is a gateway to other, more advanced courses in mathematics devoted to the study of functions and limits. Calculus has historically been called the calculus of infinitesimals, or infinitesimal calculus. The word "calculus" is also used for naming some methods of calculation or theories of computation, such as the calculus of variations and the lambda calculus.

The ancient period introduced some of the ideas that led to integral calculus. The method of exhaustion was discovered independently in China by Liu Hui in the 3rd century AD, who used it to find the area of a circle.
In the 5th century AD, Zu Gengzhi, son of Zu Chongzhi, established a method that would later be known as Cavalieri's principle to find the volume of a sphere. Indian mathematicians gave a non-rigorous method of a sort of differentiation of some trigonometric functions. In the Middle East, Alhazen derived a formula for the sum of fourth powers and used the results to carry out what would now be called an integration. Cavalieri's work was not well respected, since his methods could lead to erroneous results, and the infinitesimal quantities he introduced were disreputable at first. The formal study of calculus brought together Cavalieri's infinitesimals with the calculus of finite differences developed in Europe at around the same time. Pierre de Fermat, claiming that he borrowed from Diophantus, introduced the concept of adequality, which represented equality up to an infinitesimal error term. The combination was achieved by John Wallis, Isaac Barrow, and James Gregory. In other work, Newton developed series expansions for functions, including fractional and irrational powers, and it was clear that he understood the principles of the Taylor series.
He did not publish all these discoveries, and at this time infinitesimal methods were still considered disreputable. These ideas were arranged into a true calculus of infinitesimals by Gottfried Wilhelm Leibniz, who is now regarded as an independent inventor of and contributor to calculus. Unlike Newton, Leibniz paid a lot of attention to formalism, often spending days determining appropriate symbols for concepts. Leibniz and Newton are usually both credited with the invention of calculus. Newton was the first to apply calculus to general physics, and Leibniz developed much of the notation used in calculus today.
Order of integration (calculus)
In some cases the order of integration can be validly interchanged; in others it cannot. The problem for examination is the evaluation of an integral of the form ∬_D f(x, y) dx dy. For some functions f straightforward integration is feasible, but where that is not true, the integral can sometimes be reduced to a simpler form by changing the order of integration. The difficulty with this interchange is determining the change in the description of the domain D. The method is also applicable to other multiple integrals. Sometimes, even though a full evaluation is difficult, or perhaps requires a numerical integration, reduction to a single integration makes a numerical evaluation much easier and more efficient. Consider the iterated integral ∫_a^z ∫_a^x h(y) dy dx. This forms a three-dimensional slice dx wide along the x-axis, from y = a to y = x along the y-axis, and in the z direction z = h(y). Notice that if the thickness dx is infinitesimal, x varies only infinitesimally on the slice, so x can be treated as constant.
This integration is as shown in the left panel of Figure 1. The integral can be reduced to a single integration by reversing the order of integration, as shown in the right panel of the figure. For application to principal-value integrals, see Whittaker and Watson, Gakhov, Lu, or the discussion of the Poincaré–Bertrand transformation in Obolashvili. The second form is evaluated using a partial fraction expansion and an evaluation using the Sokhotski–Plemelj formula; the notation ∫_L* indicates a Cauchy principal value. A good discussion of the basis for reversing the order of integration is found in the book Fourier Analysis by T. W. Körner. He introduces his discussion with an example where interchange of integration leads to two different answers, because the conditions of Theorem II below are not satisfied. Here is the example: ∫_1^∞ (x² − y²)/(x² + y²)² dy = [y/(x² + y²)]_{y=1}^∞ = −1/(1 + x²).
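For a well-behaved integrand, the validity of such an interchange can be checked numerically. The sketch below uses an illustrative integrand and region that are not from the text, f(x, y) = x·y over the triangle 0 ≤ y ≤ x ≤ 1 (exact value 1/8), and evaluates the double integral in both orders with a midpoint rule:

```python
# Assumption for illustration: f(x, y) = x * y over the triangle
# 0 <= y <= x <= 1; the exact value of the double integral is 1/8.

def double_integral_dydx(f, n=400):
    """Midpoint rule over the triangle, integrating dy first, then dx."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        k = x / n  # inner step: y runs from 0 to x
        inner = sum(f(x, (j + 0.5) * k) for j in range(n)) * k
        total += inner * h
    return total

def double_integral_dxdy(f, n=400):
    """Same region described as y <= x <= 1 for each y, integrating dx first."""
    h = 1.0 / n
    total = 0.0
    for j in range(n):
        y = (j + 0.5) * h
        k = (1.0 - y) / n  # inner step: x runs from y to 1
        inner = sum(f(y + (i + 0.5) * k, y) for i in range(n)) * k
        total += inner * h
    return total

f = lambda x, y: x * y
a = double_integral_dydx(f)
b = double_integral_dxdy(f)
print(a, b)  # both approximately 1/8
```

The two descriptions of the same domain D (first as 0 ≤ y ≤ x for each x, then as y ≤ x ≤ 1 for each y) are exactly the change-of-description step discussed above.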
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse, differentiation, being the other. The area above the x-axis adds to the total and that below the x-axis subtracts from the total. Roughly speaking, the operation of integration is the reverse of differentiation. For this reason, the term integral may also refer to the related notion of the antiderivative; in this case, it is called an indefinite integral. The integrals discussed in this article are those termed definite integrals. A rigorous mathematical definition of the integral was given by Bernhard Riemann. It is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. A line integral is defined for functions of two or three variables, and the interval of integration is replaced by a curve connecting two points on the plane or in space.
In a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. The method of exhaustion was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. A similar method was developed in China around the 3rd century AD by Liu Hui. This method was later used in the 5th century by the Chinese father-and-son mathematicians Zu Chongzhi and Zu Gengzhi to find the volume of a sphere. The next significant advances in integral calculus did not begin to appear until the 17th century. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation. Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a power, including negative powers. The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation, and this connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals.
In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed.
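Riemann's procedure of approximating the area by thin vertical slabs can be sketched directly. The function x² and the interval [0, 1] below are illustrative choices, not from the text; the exact value of ∫₀¹ x² dx is 1/3.

```python
def riemann_sum(f, a, b, n):
    """Approximate the integral of f over [a, b] with n vertical slabs
    of equal width, sampling f at each slab's midpoint."""
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 1000)
print(approx)  # approximately 1/3
```

As n grows, the slabs get thinner and the sum converges to the definite integral, which is exactly the limiting procedure described above.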
Integration by reduction formulae
Integration by reduction formula in integral calculus is a technique of integration, in the form of a recurrence relation. This method of integration is one of the earliest used, and the recurrence makes the reduction formula a type of recurrence relation: the formula expresses the integral I_n = ∫ f(n, x) dx in terms of I_k = ∫ f(k, x) dx for some k < n. To compute the integral, we set n to its value and repeatedly back-substitute the previous results until we have computed I_n. Below are examples of the procedure.

Cosine integral

Typically, integrals like ∫ cos^n x dx can be evaluated by a reduction formula. Start by setting I_n = ∫ cos^n x dx, and rewrite this as I_n = ∫ cos^(n−1) x cos x dx. Integrating by parts with the substitution cos x dx = d(sin x) gives I_n = ∫ cos^(n−1) x d(sin x) = cos^(n−1) x sin x + (n−1) ∫ sin² x cos^(n−2) x dx = cos^(n−1) x sin x + (n−1)(I_(n−2) − I_n), so that I_n = (1/n) cos^(n−1) x sin x + ((n−1)/n) I_(n−2). To supplement the example, the above can be used to evaluate the integral for n = 5: I₅ = ∫ cos⁵ x dx.

Exponential integral

Another typical example is ∫ x^n e^(ax) dx. Start by setting I_n = ∫ x^n e^(ax) dx.
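The recurrence can be checked numerically. On the definite interval [0, π/2] the boundary term cos^(n−1) x sin x vanishes at both endpoints, so the reduction formula collapses to I_n = ((n−1)/n) I_(n−2); this definite-interval specialization is an illustration chosen here, not part of the text above. For n = 5 the exact value is 8/15.

```python
import math

def wallis(n):
    """Definite integral of cos^n x over [0, pi/2] via the reduction
    formula I_n = ((n - 1) / n) * I_{n-2}; the boundary term
    cos^(n-1) x sin x vanishes at 0 and pi/2."""
    if n == 0:
        return math.pi / 2
    if n == 1:
        return 1.0
    return (n - 1) / n * wallis(n - 2)

# Midpoint-rule check for n = 5 (exact value 8/15)
n_steps = 10000
h = (math.pi / 2) / n_steps
numeric = sum(math.cos((i + 0.5) * h) ** 5 for i in range(n_steps)) * h
print(wallis(5), numeric)  # both approximately 8/15
```

The back-substitution described in the text is exactly what the recursion performs: I₅ is reduced to I₃, then to I₁.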
Integration by parts
It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be derived in one line simply by integrating the product rule of differentiation. More general formulations of integration by parts exist for the Riemann–Stieltjes and Lebesgue–Stieltjes integrals, and the discrete analogue for sequences is called summation by parts. The theorem can be derived as follows. Suppose u and v are two differentiable functions. The product rule states d(uv)/dx = v du/dx + u dv/dx. It is not actually necessary for u and v to be continuously differentiable: integration by parts works if u is continuous and the function designated v′ is Lebesgue integrable, although this holds only for a suitable choice of v. One can come up with similar examples in which u and v are not continuously differentiable. This visualisation explains why integration by parts may help find the integral of an inverse function f⁻¹ when the integral of f is known.
Indeed, the functions x(y) and y(x) are inverses, and the integral ∫ x dy may be calculated as above from knowing the integral ∫ y dx. The following form is useful in illustrating the best strategy to take. As a simple example, consider ∫ (ln x)/x² dx. Since the derivative of ln x is 1/x, one makes ln x part u; since the antiderivative of 1/x² is −1/x, one makes dv = dx/x². The formula now yields ∫ (ln x)/x² dx = −(ln x)/x − ∫ (−1/x)(1/x) dx. The antiderivative of −1/x² can be found with the power rule and is 1/x. Alternatively, one may choose u and v such that the product u′v simplifies due to cancellation. For example, suppose one wishes to integrate ∫ sec² x ⋅ ln(sin x) dx. Choosing u = ln(sin x) and v = tan x, the product u′v = cot x ⋅ tan x simplifies to 1, so the antiderivative of that term is x. Finding a simplifying combination frequently involves experimentation. Some other special techniques are demonstrated in the examples below.

Trigonometric functions

An example commonly used to examine the workings of integration by parts is I = ∫ e^x cos x dx. Here, integration by parts is performed twice. First, I = ∫ e^x cos x dx = e^x cos x + ∫ e^x sin x dx; then ∫ e^x sin x dx = e^x sin x − ∫ e^x cos x dx.
Putting these together, ∫ e^x cos x dx = e^x cos x + e^x sin x − ∫ e^x cos x dx. The same integral shows up on both sides of this equation, so it can be solved for algebraically: ∫ e^x cos x dx = (e^x/2)(sin x + cos x) + C.
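The closed form obtained by this double application of integration by parts can be checked against a direct numerical integration; the interval [0, 1] below is an illustrative choice.

```python
import math

def F(x):
    """Antiderivative obtained by integrating by parts twice: solving
    I = e^x cos x + e^x sin x - I gives I = (e^x / 2)(sin x + cos x)."""
    return math.exp(x) * (math.sin(x) + math.cos(x)) / 2

# Midpoint Riemann sum of e^x cos x over [0, 1] for comparison
n = 10000
h = 1.0 / n
numeric = sum(math.exp((i + 0.5) * h) * math.cos((i + 0.5) * h)
              for i in range(n)) * h
print(F(1) - F(0), numeric)  # the two values agree closely
```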
If a real-valued function f is continuous on a proper closed interval [a, b], differentiable on the open interval (a, b), and f(a) = f(b), then there exists at least one c in the open interval (a, b) such that f′(c) = 0. This version of Rolle's theorem is used to prove the mean value theorem, and it is also the basis for the proof of Taylor's theorem. The Indian mathematician Bhāskara II is credited with knowledge of Rolle's theorem, although the theorem is named after Michel Rolle, whose 1691 proof covered only the case of polynomial functions. His proof did not use the methods of calculus, which at that point in his life he considered to be fallacious. The theorem was first proved by Cauchy in 1823 as a corollary of a proof of the mean value theorem. The name "Rolle's theorem" was first used by Moritz Wilhelm Drobisch of Germany in 1834 and by Giusto Bellavitis of Italy in 1846. For a radius r > 0, consider the function f(x) = √(r² − x²), x ∈ [−r, r]. Its graph is the upper semicircle centered at the origin. This function is continuous on the closed interval [−r, r] and differentiable in the open interval (−r, r).
Since f(−r) = f(r), Rolle's theorem applies, and indeed there is a point where the derivative of f is zero. Note that the theorem applies even when the function cannot be differentiated at the endpoints, because it only requires the function to be differentiable in the open interval. If differentiability fails at an interior point of the interval, the conclusion of Rolle's theorem may not hold. Consider the absolute value function f(x) = |x|, x ∈ [−1, 1]. Then f(−1) = f(1), but there is no c between −1 and 1 for which the derivative is zero. This is because that function, although continuous, is not differentiable at x = 0. Note that the derivative of f changes its sign at x = 0, but without attaining the value 0. The theorem cannot be applied to this function because it does not satisfy the condition that the function must be differentiable for every x in the open interval. However, when the differentiability requirement is dropped from Rolle's theorem, f will still have a critical number in the open interval, though it may not yield a horizontal tangent. The second example illustrates the following generalization of Rolle's theorem: consider a real-valued, continuous function f on a closed interval [a, b] with f(a) = f(b), whose right- and left-hand derivative limits exist at every interior point.
If the right- and left-hand limits agree for every x, they agree in particular for c. If f is convex or concave, the right- and left-hand derivatives exist at every inner point, hence the above limits exist and are real numbers. This generalized version of the theorem is sufficient to prove convexity when the one-sided derivatives are monotonically increasing. Since the proofs for the standard version of Rolle's theorem and for the generalization are very similar, it suffices to prove the generalization. In particular, if the derivative exists at c, it must be zero there. By assumption, f is continuous on [a, b], and by the extreme value theorem it attains both its maximum and its minimum on [a, b].
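The semicircle example can be checked numerically. In the sketch below, the radius r = 1 and the finite-difference step are illustrative choices; it verifies that the derivative vanishes at the point c = 0 whose existence Rolle's theorem guarantees.

```python
import math

def f(x, r=1.0):
    """Upper semicircle of radius r centered at the origin."""
    return math.sqrt(r * r - x * x)

def deriv(g, x, h=1e-6):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

# f(-1) == f(1) == 0, so Rolle's theorem guarantees some c in (-1, 1)
# with f'(c) == 0; for the semicircle that point is c = 0.
print(deriv(f, 0.0))  # approximately 0
```

For the counterexample f(x) = |x|, the same central difference at 0 would return 0 misleadingly, which is why the sign change of the derivative, not a single finite difference, is the right way to see that no horizontal tangent exists there.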
In mathematics, the gradient is a multi-variable generalization of the derivative. While a derivative can be defined on functions of a single variable, for functions of several variables the gradient takes its place. The gradient is a vector-valued function, as opposed to a derivative, which is scalar-valued. If f is a differentiable, real-valued function of several variables, then, like the derivative, the gradient represents the slope of the tangent of the graph of the function. More precisely, the gradient points in the direction of the greatest rate of increase of the function, and the components of the gradient in coordinates are the coefficients of the variables in the equation of the tangent space to the graph. The Jacobian is the generalization of the gradient for vector-valued functions of several variables; a further generalization for a function between Banach spaces is the Fréchet derivative. Consider a room in which the temperature is given by a scalar field T. At each point in the room, the gradient of T at that point will show the direction in which the temperature rises most quickly, and the magnitude of the gradient will determine how fast the temperature rises in that direction.
Consider a surface whose height above sea level at point (x, y) is H(x, y). The gradient of H at a point is a vector pointing in the direction of the steepest slope or grade at that point, and the steepness of the slope there is given by the magnitude of the gradient vector. The gradient can also be used to measure how a scalar field changes in other directions, rather than just the direction of greatest change. Suppose that the steepest slope on a hill is 40%. If a road goes directly up the hill, the steepest slope on the road will also be 40%, but if the road goes around the hill at an angle, it will have a shallower slope. This observation can be stated as follows: if the hill height function H is differentiable, the gradient of H dotted with a unit vector gives the slope of the hill in the direction of the vector. More precisely, when H is differentiable, the dot product of the gradient of H with a given unit vector is equal to the directional derivative of H in the direction of that unit vector. The gradient of a function f is denoted ∇f or ∇→f, where ∇ denotes the vector differential operator.
The notation grad f is also commonly used for the gradient. The gradient of f is defined as the unique vector field whose dot product with any vector v at each point x is the directional derivative of f along v. That is, ∇f(x) ⋅ v = D_v f(x).
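The defining identity ∇f(x) ⋅ v = D_v f(x) can be observed numerically. The sketch below uses an illustrative field T(x, y) = x² + y² (an assumption, not from the text, standing in for the room-temperature field) and compares the gradient dotted with a unit vector against a direct finite-difference directional derivative.

```python
import math

def T(x, y):
    """Illustrative scalar field (an assumption, not from the text)."""
    return x * x + y * y

def grad_T(x, y, h=1e-6):
    """Numerical gradient of T via central differences."""
    gx = (T(x + h, y) - T(x - h, y)) / (2 * h)
    gy = (T(x, y + h) - T(x, y - h)) / (2 * h)
    return gx, gy

def directional_derivative(x, y, ux, uy, h=1e-6):
    """Finite-difference derivative of T along the unit vector (ux, uy)."""
    return (T(x + h * ux, y + h * uy) - T(x - h * ux, y - h * uy)) / (2 * h)

gx, gy = grad_T(1.0, 2.0)               # analytically (2, 4)
u = (1 / math.sqrt(2), 1 / math.sqrt(2))
dot = gx * u[0] + gy * u[1]             # gradient dotted with u
dd = directional_derivative(1.0, 2.0, *u)
print(dot, dd)  # the two values agree closely
```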
Limit of a function
In mathematics, the limit of a function is a fundamental concept in calculus and analysis concerning the behavior of that function near a particular input. Formal definitions, first devised in the early 19th century, are given below. Informally, a function f assigns an output f(x) to every input x. We say the function has a limit L at an input p when f(x) gets closer and closer to L as x moves closer and closer to p. More specifically, when f is applied to any input sufficiently close to p, the output value is forced arbitrarily close to L. On the other hand, if some inputs very close to p are taken to outputs that stay a fixed distance apart, we say the limit does not exist. The notion of a limit has many applications in modern calculus. In particular, the many definitions of continuity employ the limit: roughly, a function is continuous if all of its limits agree with the values of the function. The limit also appears in the definition of the derivative: in the calculus of one variable, the derivative is the limit of difference quotients. Bolzano introduced the essentials of the epsilon–delta technique, but his work was not known during his lifetime. Weierstrass first introduced the definition of limit in the form it is usually written today.
He also introduced the notations lim and lim_(x→x₀). The modern notation of placing the arrow below the limit symbol is due to Hardy in his book A Course of Pure Mathematics in 1908. Imagine a person walking over a landscape represented by the graph of y = f(x). Her horizontal position is measured by the value of x, much like the position given by a map of the land or by a global positioning system, and her altitude is given by the coordinate y. She is walking towards the horizontal position given by x = p. As she gets closer and closer to it, she notices that her altitude approaches L; if asked about the altitude at x = p, she would answer L. What, then, does it mean to say that her altitude approaches L? It means that her altitude gets nearer and nearer to L, except for a possible small error in accuracy. For example, suppose we set a particular accuracy goal for our traveler: she must get within ten vertical meters of L. She reports back that indeed she can, since she notes that when she is within fifty horizontal meters of p, her altitude is always within ten meters of L. The accuracy goal is then changed: can she get within one vertical meter?
Yes: if she is anywhere within seven horizontal meters of p, her altitude always remains within one meter of the target L. This explicit statement is quite close to the formal definition of the limit of a function with values in a topological space. To say that lim_(x→p) f(x) = L means that f(x) can be made as close as desired to L by making x close enough, but not equal, to p. The following definitions are the generally accepted ones for the limit of a function in various contexts. Suppose f : R → R is defined on the real line. The value of the limit does not depend on the value of f(p), nor even on p being in the domain of f.
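The accuracy-goal game above is exactly the epsilon–delta condition, which can be tested on sample points. In the sketch below, the function x², the point p = 2, and the particular ε and δ are illustrative assumptions; the check verifies that every sampled x with 0 < |x − p| < δ lands within ε of L.

```python
def f(x):
    """Illustrative function (an assumption, not from the text)."""
    return x * x

def check_limit(p, L, eps, delta, samples=1000):
    """Check the epsilon-delta condition on sample points: whenever
    0 < |x - p| < delta, we require |f(x) - L| < eps."""
    for i in range(1, samples + 1):
        dx = delta * i / (samples + 1)
        for x in (p - dx, p + dx):
            if abs(f(x) - L) >= eps:
                return False
    return True

# For f(x) = x^2 near p = 2, delta = eps / 5 works for small eps,
# since |x^2 - 4| = |x - 2| * |x + 2| < 5 * |x - 2| when |x - 2| < 1.
print(check_limit(2.0, 4.0, eps=0.1, delta=0.02))   # True
print(check_limit(2.0, 4.0, eps=0.001, delta=0.02)) # False: delta too large
```

A sampled check like this cannot prove the limit, but it mirrors the structure of the definition: one accuracy goal ε, one horizontal tolerance δ.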
In mathematics, a geometric series is a series with a constant ratio between successive terms. For example, the series 1/2 + 1/4 + 1/8 + 1/16 + ⋯ is geometric. Geometric series are among the simplest examples of infinite series with finite sums, although not all of them have this property. Historically, geometric series played an important role in the development of calculus. Geometric series are used throughout mathematics, and they have important applications in physics, biology, computer science, and queueing theory. The terms of a geometric series form a geometric progression, meaning that the ratio of successive terms in the series is constant. This relationship allows for the representation of a geometric series using only two terms, r and a: the term r is the common ratio, and a is the first term of the series. In the case above, where r is one half and a is one half, the series has the sum one. If r is greater than one or less than minus one, the terms of the series become larger and larger in magnitude, and the sum of the terms also gets larger and larger. If r is equal to one, all of the terms of the series are the same and the series diverges.
If r is minus one, the terms take two values alternately, and the sum of the terms oscillates between two values. This is a different type of divergence, and again the series has no sum; see for example Grandi's series, 1 − 1 + 1 − 1 + ···. The sum can be computed using the self-similarity of the series. Consider the sum of the following geometric series: s = 1 + 2/3 + 4/9 + 8/27 + ⋯. This series has common ratio 2/3. If we multiply through by this common ratio, the initial 1 becomes a 2/3, the 2/3 becomes a 4/9, and so on: (2/3)s = 2/3 + 4/9 + 8/27 + 16/81 + ⋯. This new series is the same as the original, except that the first term is missing. Subtracting the new series (2/3)s from the original series s cancels every term in the original but the first: s − (2/3)s = 1, so s = 3. A similar technique can be used to evaluate any self-similar expression. As n goes to infinity, the absolute value of r must be less than one for the series to converge, and the sum is then a/(1 − r). When a = 1, this can be simplified to 1 + r + r² + r³ + ⋯ = 1/(1 − r). The formula also holds for complex r, with the corresponding restriction that the modulus of r is strictly less than one.
Since (1 + r + ⋯ + r^n)(1 − r) = 1 − r^(n+1) and r^(n+1) → 0 for |r| < 1, convergence of geometric series can also be demonstrated by rewriting the series as an equivalent telescoping series. Consider the function g(K) = r^K/(1 − r).
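The closed form a/(1 − r) can be compared against partial sums directly; taking a = 1 and r = 2/3 below reproduces the self-similarity example with sum 3.

```python
def geometric_partial_sum(a, r, n):
    """Sum of the first n terms: a + a*r + a*r**2 + ... + a*r**(n-1)."""
    return sum(a * r ** k for k in range(n))

a, r = 1.0, 2.0 / 3.0
closed_form = a / (1 - r)                    # 3.0 for this a and r
partial = geometric_partial_sum(a, r, 60)
print(closed_form, partial)  # both approximately 3
```

Because |r| < 1, the remainder after n terms is a·rⁿ/(1 − r), which shrinks geometrically; 60 terms already agree with the closed form to many digits.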
In mathematics, the ratio test is a test for the convergence of a series ∑_(n=1)^∞ a_n, where each term is a real or complex number and a_n is nonzero when n is large. The test was first published by Jean le Rond d'Alembert and is known as d'Alembert's ratio test or as the Cauchy ratio test. Consider the geometric series ∑_(n=1)^∞ 1/2^n, in which the quotient of any two adjacent terms is 1/2. The sum of its first m terms is given by 1 − 1/2^m; as m increases, this converges to 1, so the sum of the series is 1. On the other hand, given the geometric series ∑_(n=1)^∞ 2^n = 2 + 4 + 8 + ⋯, the quotient a_(n+1)/a_n of any two adjacent terms is 2, and the sum of the first m terms is given by 2^(m+1) − 2, which grows without bound. More generally, the sum of the first m terms of the geometric series is given by ∑_(n=1)^m r^n = r(r^m − 1)/(r − 1). Whether this converges or diverges as m increases depends on whether |r| is less than or greater than one. For a series such as ∑ n/2^n the quotient of two adjacent terms is not fixed; however, as n increases, the ratio still tends in the limit towards the same constant 1/2. The ratio test generalizes the simple test for geometric series to more complex series like this one, where the quotient of two terms is not fixed but in the limit tends towards a fixed value.
The rules are similar: if the quotient approaches a value less than one, the series converges, whereas if it approaches a value greater than one, the series diverges. It is possible to make the ratio test applicable to certain cases where the limit L fails to exist, by using the limit superior and limit inferior. The test criteria can also be refined so that the test is sometimes conclusive even when L = 1. More specifically, let R = lim sup |a_(n+1)/a_n| and r = lim inf |a_(n+1)/a_n|. If the limit L exists, we must have L = R = r, so the original ratio test is a weaker version of the refined one. When every term is positive, absolute convergence coincides with convergence. As an example of divergence, consider the series ∑_(n=1)^∞ e^n/n. Putting this into the ratio test, L = lim_(n→∞) |a_(n+1)/a_n| = lim_(n→∞) |(e^(n+1)/(n+1)) / (e^n/n)| = e > 1, so the series diverges. Now consider the three series ∑_(n=1)^∞ 1/n, ∑_(n=1)^∞ 1/n², and ∑_(n=1)^∞ (−1)^n/n. The first series diverges, the second one converges absolutely, and the third one converges conditionally. However, the term-by-term magnitude ratios |a_(n+1)/a_n| of the three series are respectively n/(n+1), n²/(n+1)², and n/(n+1).
So in all three cases we have lim_(n→∞) |a_(n+1)/a_n| = 1. This illustrates that when L = 1 the series may converge or diverge, and hence the original ratio test is inconclusive in that case. Below is a proof of the validity of the ratio test.
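The limiting ratios discussed above can be observed numerically. In the sketch below, the term functions and the evaluation point n = 500 are illustrative; it computes |a_(n+1)/a_n| for a convergent geometric series, the divergent series e^n/n, and the harmonic series, whose ratio tends to 1.

```python
import math

def ratio(a, n):
    """|a(n+1) / a(n)| for a term function a evaluated at index n."""
    return abs(a(n + 1) / a(n))

geometric = lambda n: 1.0 / 2 ** n     # ratio -> 1/2 < 1: converges
diverging = lambda n: math.exp(n) / n  # ratio -> e > 1: diverges
harmonic = lambda n: 1.0 / n           # ratio -> 1: test inconclusive

for name, a in [("1/2^n", geometric), ("e^n/n", diverging), ("1/n", harmonic)]:
    print(name, ratio(a, 500))
```

The third value hovering near 1 is precisely the inconclusive case: the harmonic series diverges while ∑ 1/n² (with the same limiting ratio) converges, so no conclusion can be drawn from the ratio alone.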