Mathematical analysis

Mathematical analysis is the branch of mathematics dealing with limits and related theories, such as differentiation, integration, measure, infinite series, and analytic functions. These theories are usually studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished from geometry; however, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space).

Mathematical analysis formally developed in the 17th century during the Scientific Revolution, but many of its ideas can be traced back to earlier mathematicians. Early results in analysis were implicitly present in the early days of ancient Greek mathematics. For instance, an infinite geometric sum is implicit in Zeno's paradox of the dichotomy. Greek mathematicians such as Eudoxus and Archimedes made more explicit, but informal, use of the concepts of limits and convergence when they used the method of exhaustion to compute the area and volume of regions and solids. The explicit use of infinitesimals appears in Archimedes' The Method of Mechanical Theorems, a work rediscovered in the 20th century.
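The infinite geometric sum hidden in Zeno's dichotomy can be sketched numerically. The following is an illustrative snippet (not from the original text): the partial sums of 1/2 + 1/4 + 1/8 + … approach the limit 1.

```python
# Zeno's dichotomy as an infinite geometric series:
# 1/2 + 1/4 + 1/8 + ... converges to 1.
def geometric_partial_sum(ratio: float, n_terms: int) -> float:
    """Sum of ratio + ratio**2 + ... + ratio**n_terms."""
    return sum(ratio ** k for k in range(1, n_terms + 1))

# Successive partial sums get arbitrarily close to 1 but never exceed it.
partials = [geometric_partial_sum(0.5, n) for n in (1, 2, 4, 8, 16, 32)]
print(partials[-1])  # very close to the limit 1
```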

In Asia, the Chinese mathematician Liu Hui used the method of exhaustion in the 3rd century AD to find the area of a circle. In the 5th century, Zu Chongzhi established a method that would later be called Cavalieri's principle to find the volume of a sphere. The Indian mathematician Bhāskara II gave examples of the derivative and used what is now known as Rolle's theorem in the 12th century. In the 14th century, Madhava of Sangamagrama developed infinite series expansions, such as the power series and the Taylor series, of functions such as sine, cosine, and arctangent. Alongside his development of the Taylor series of the trigonometric functions, he estimated the magnitude of the error terms created by truncating these series and gave a rational approximation of an infinite series. His followers at the Kerala School of Astronomy and Mathematics further expanded his works, up to the 16th century.

The modern foundations of mathematical analysis were established in 17th century Europe. Descartes and Fermat independently developed analytic geometry, and a few decades later Newton and Leibniz independently developed infinitesimal calculus, which grew, with the stimulus of applied work that continued through the 18th century, into analysis topics such as the calculus of variations, partial differential equations, Fourier analysis, and generating functions.
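Madhava's arctangent series mentioned above can be sketched numerically. This is an illustrative example (not from the original text); at x = 1 the series becomes the Madhava–Leibniz series for π/4, which converges very slowly.

```python
import math

def madhava_arctan(x: float, n_terms: int) -> float:
    """Partial sum of Madhava's arctangent series:
    arctan(x) = x - x**3/3 + x**5/5 - ... (valid for |x| <= 1)."""
    return sum((-1) ** k * x ** (2 * k + 1) / (2 * k + 1) for k in range(n_terms))

# For |x| < 1 the partial sums converge quickly to arctan(x).
assert abs(madhava_arctan(0.5, 60) - math.atan(0.5)) < 1e-12

# At x = 1 the series gives pi/4, but the truncation error after n terms
# is roughly 1/(2n), so many terms are needed.
approx_pi = 4 * madhava_arctan(1.0, 100000)
```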

During this period, calculus techniques were applied to approximate discrete problems by continuous ones. In the 18th century, Euler introduced the notion of a mathematical function. Real analysis began to emerge as an independent subject when Bernard Bolzano introduced the modern definition of continuity in 1816, but Bolzano's work did not become widely known until the 1870s. In 1821, Cauchy began to put calculus on a firm logical foundation by rejecting the principle of the generality of algebra used in earlier work by Euler. Instead, Cauchy formulated calculus in terms of geometric ideas and infinitesimals. Thus, his definition of continuity required an infinitesimal change in x to correspond to an infinitesimal change in y. He also introduced the concept of the Cauchy sequence, and started the formal theory of complex analysis. Poisson, Liouville and others studied partial differential equations and harmonic analysis. The contributions of these mathematicians and others, such as Weierstrass, developed the (ε, δ)-definition of limit approach, thus founding the modern field of mathematical analysis.
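The (ε, δ)-definition of limit referred to above can be stated in its standard modern form:

```latex
% The (\varepsilon, \delta)-definition of the limit:
% \lim_{x \to a} f(x) = L means that
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :
\quad 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon .
```

Informally: f(x) can be made arbitrarily close to L by taking x sufficiently close to (but distinct from) a.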

In the middle of the 19th century Riemann introduced his theory of integration. The last third of the century saw the arithmetization of analysis by Weierstrass, who thought that geometric reasoning was inherently misleading and introduced the "epsilon-delta" definition of limit. Mathematicians then started worrying that they were assuming the existence of a continuum of real numbers without proof. Dedekind constructed the real numbers by Dedekind cuts, in which irrational numbers are formally defined and serve to fill the "gaps" between rational numbers, thereby creating a complete set: the continuum of real numbers, which had already been developed by Simon Stevin in terms of decimal expansions. Around that time, the attempts to refine the theorems of Riemann integration led to the study of the "size" of the set of discontinuities of real functions. Also, "monsters" (nowhere continuous functions, continuous but nowhere differentiable functions, space-filling curves) began to be investigated. In this context, Jordan developed his theory of measure, Cantor developed what is now called naive set theory, and Baire proved the Baire category theorem.
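As a concrete illustration of a Dedekind cut (a standard example, not from the original text), the irrational number √2 is defined by partitioning the rationals into two sets:

```latex
% A Dedekind cut defining \sqrt{2}:
A = \{\, q \in \mathbb{Q} : q \le 0 \ \text{or}\ q^2 < 2 \,\}, \qquad
B = \{\, q \in \mathbb{Q} : q > 0 \ \text{and}\ q^2 > 2 \,\}.
```

Every element of A is less than every element of B, and A has no greatest element; the cut (A, B) thereby fills the "gap" in the rationals at √2.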

In the early 20th century, calculus was formalized using an axiomatic set theory. Lebesgue solved the problem of measure, and Hilbert introduced Hilbert spaces to solve integral equations. The idea of a normed vector space was in the air, and in the 1920s Banach created functional analysis.

In mathematics, a metric space is a set where a notion of distance (called a metric) between elements of the set is defined. Much of analysis happens in some metric space. Examples of analysis without a metric include measure theory (which describes size rather than distance) and functional analysis (which studies topological vector spaces that need not have any sense of distance). Formally, a metric space is an ordered pair (M, d) where M is a set and d is a metric on M, that is, a function d : M × M → R such that for any x, y, z ∈ M: d(x, y) ≥ 0, with d(x, y) = 0 if and only if x = y; d(x, y) = d(y, x) (symmetry); and d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality).
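The metric-space axioms can be sketched in code. This is an illustrative check (not from the original text), using the familiar Euclidean metric on R² and a few sample points:

```python
import math
from itertools import product

def d(p, q):
    """Euclidean distance between two points p, q in R^2."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Spot-check the metric axioms on a small sample of points.
points = [(0.0, 0.0), (3.0, 4.0), (-1.0, 2.0)]
for x, y, z in product(points, repeat=3):
    assert d(x, y) >= 0                          # non-negativity
    assert (d(x, y) == 0) == (x == y)            # identity of indiscernibles
    assert d(x, y) == d(y, x)                    # symmetry
    assert d(x, z) <= d(x, y) + d(y, z) + 1e-12  # triangle inequality
```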

Complex analysis

Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is useful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, and applied mathematics. By extension, complex analysis also has applications in engineering fields such as nuclear, aerospace and electrical engineering. As a differentiable function of a complex variable is equal to the sum of its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable. Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Riemann, Cauchy and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is used throughout analytic number theory. In modern times, it has become popular through a new boost from complex dynamics and the pictures of fractals produced by iterating holomorphic functions.

Another important application of complex analysis is in string theory, which studies conformal invariants in quantum field theory. A complex function is a function from complex numbers to complex numbers. In other words, it is a function that has a subset of the complex numbers as a domain and the complex numbers as a codomain. Complex functions are generally assumed to have a domain that contains a nonempty open subset of the complex plane. For any complex function, the values z from the domain and their images f(z) in the range may be separated into real and imaginary parts: z = x + iy and f(z) = u(x, y) + i v(x, y), where x, y, u(x, y), v(x, y) are all real-valued. In other words, a complex function f : C → C may be decomposed into u : R² → R and v : R² → R, i.e. into two real-valued functions of two real variables. Any complex-valued function f on an arbitrary set X can be considered as an ordered pair of two real-valued functions (Re f, Im f), or, alternatively, as a vector-valued function from X into R². Some properties of complex-valued functions are nothing more than the corresponding properties of vector-valued functions of two real variables.
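The decomposition f(z) = u(x, y) + i v(x, y) can be made concrete. The following sketch (an illustrative example, not from the original text) decomposes f(z) = z² into its real and imaginary parts:

```python
# For f(z) = z**2 with z = x + iy:
#   z**2 = (x**2 - y**2) + i(2xy),
# so u(x, y) = x**2 - y**2 and v(x, y) = 2xy.
def f(z: complex) -> complex:
    return z * z

def u(x: float, y: float) -> float:
    return x * x - y * y      # real part of f

def v(x: float, y: float) -> float:
    return 2 * x * y          # imaginary part of f

z = complex(3.0, 4.0)
assert f(z) == complex(u(3.0, 4.0), v(3.0, 4.0))  # (3+4j)**2 = -7+24j
```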

Other concepts of complex analysis, such as differentiability, are direct generalizations of the similar concepts for real functions, but may have very different properties. In particular, every differentiable complex function is analytic, and two differentiable functions that are equal in a neighborhood of a point are equal on the intersection of their domains (if the domains are connected). The latter property is the basis of the principle of analytic continuation, which allows extending every real analytic function in a unique way to a complex analytic function whose domain is the whole complex plane with a finite number of curve arcs removed. Many basic and special complex functions are defined in this way, including exponential functions, logarithmic functions, and trigonometric functions. Complex functions that are differentiable at every point of an open subset Ω of the complex plane are said to be holomorphic on Ω. In the context of complex analysis, the derivative of f at z₀ is defined to be

f ′(z₀) = lim_{z → z₀} (f(z) − f(z₀)) / (z − z₀),  z ∈ C.

Superficially, this definition is formally analogous to that of the derivative of a real function. However, complex derivatives and differentiable functions behave in significantly different ways compared to their real counterparts. In particular, for this limit to exist, the value of the difference quotient must approach the same complex number, regardless of the manner in which z approaches z₀ in the complex plane.
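The direction-independence requirement can be sketched numerically. In this illustrative example (not from the original text), the difference quotient of the holomorphic f(z) = z² approaches the same value along the real and imaginary directions, while that of the non-holomorphic f(z) = conj(z) does not:

```python
# Difference quotient (f(z0 + h) - f(z0)) / h for a small step h.
def quotient(f, z0: complex, h: complex) -> complex:
    return (f(z0 + h) - f(z0)) / h

z0 = complex(1.0, 1.0)
h = 1e-8

# f(z) = z**2 is holomorphic: both directions approach f'(z0) = 2*z0 = 2+2j.
along_real = quotient(lambda z: z * z, z0, complex(h, 0.0))
along_imag = quotient(lambda z: z * z, z0, complex(0.0, h))

# f(z) = conj(z) is not: the quotient is +1 along the real axis
# but -1 along the imaginary axis, so no complex derivative exists.
conj_real = quotient(lambda z: z.conjugate(), z0, complex(h, 0.0))
conj_imag = quotient(lambda z: z.conjugate(), z0, complex(0.0, h))
```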

Convolution

In mathematics, convolution is a mathematical operation on two functions that produces a third function expressing how the shape of one is modified by the other. The term convolution refers both to the result function and to the process of computing it. Some features of convolution are similar to cross-correlation: for real-valued functions of a continuous or discrete variable, convolution differs from cross-correlation only in that either f or g is reflected about the y-axis. For continuous functions, the cross-correlation operator is the adjoint of the convolution operator. Convolution has applications that include probability, computer vision, natural language processing, signal processing, and differential equations. The convolution can be defined for functions on Euclidean space and other groups. For example, periodic functions, such as the discrete-time Fourier transform, can be defined on a circle and convolved by periodic convolution. A discrete convolution can be defined for functions on the set of integers. Generalizations of convolution have applications in the fields of numerical analysis and numerical linear algebra, and in the design and implementation of finite impulse response filters in signal processing.
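The relationship between discrete convolution and cross-correlation (reflection of one sequence) can be sketched as follows. This is an illustrative implementation, not taken from the original text; the cross-correlation here is computed by convolving with the reversed sequence, which matches the standard definition up to an index shift:

```python
# Full discrete convolution: (f * g)[i] = sum_k f[k] * g[i - k].
def convolve(f, g):
    n = len(f) + len(g) - 1
    return [sum(f[k] * g[i - k] for k in range(len(f)) if 0 <= i - k < len(g))
            for i in range(n)]

# Cross-correlation equals convolution with f reflected (up to an index shift).
def cross_correlate(f, g):
    return convolve(f[::-1], g)

f = [1, 2, 3]
g = [0, 1, 0]
print(convolve(f, g))         # [0, 1, 2, 3, 0]
print(cross_correlate(f, g))  # [0, 3, 2, 1, 0]  -- note the reversal
```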

Computing the inverse of the convolution operation is known as deconvolution. The convolution of f and g is written f ∗ g, using an asterisk. It is defined as the integral of the product of the two functions after one is reflected and shifted. As such, it is a particular kind of integral transform:

(f ∗ g)(t) ≜ ∫_{−∞}^{∞} f(τ) g(t − τ) dτ.

An equivalent definition is (by commutativity):

(f ∗ g)(t) ≜ ∫_{−∞}^{∞} f(t − τ) g(τ) dτ.

While the symbol t is used above, it need not represent the time domain. But in that context, the convolution formula can be described as a weighted average of the function f(τ) at the moment t, where the weighting is given by g(−τ) shifted by amount t. As t changes, the weighting function emphasizes different parts of the input function. For functions f, g supported on only [0, ∞) (i.e., zero for negative arguments), the integration limits can be truncated, resulting in:

(f ∗ g)(t) = ∫_0^t f(τ) g(t − τ) dτ  for f, g : [0, ∞) → R.

For the multi-dimensional formulation of convolution, see domain of definition. A common engineering convention is:

f(t) ∗ g(t) ≜ ∫_{−∞}^{∞} f(τ) g(t − τ) dτ,

which has to be interpreted carefully to avoid confusion. For instance, f(t) ∗ g(t) is equivalent to (f ∗ g)(t).
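The truncated formula for causal functions can be verified numerically. In this illustrative sketch (not from the original text), the convolution of f(t) = e^{−t} and g(t) = e^{−2t} on [0, ∞), whose closed form is e^{−t} − e^{−2t}, is approximated with a midpoint Riemann sum:

```python
import math

# (f * g)(t) = integral over [0, t] of f(tau) * g(t - tau) d tau,
# approximated with the midpoint rule.
def convolve_at(f, g, t: float, steps: int = 10000) -> float:
    dt = t / steps
    return sum(f((k + 0.5) * dt) * g(t - (k + 0.5) * dt)
               for k in range(steps)) * dt

t = 1.5
numeric = convolve_at(lambda x: math.exp(-x), lambda x: math.exp(-2 * x), t)
exact = math.exp(-t) - math.exp(-2 * t)   # known closed form
assert abs(numeric - exact) < 1e-6
```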

Convolution describes the output (in terms of the input) of an important class of operations known as linear time-invariant (LTI). See LTI system theory for a derivation of convolution as the result of LTI constraints. In terms of the Fourier transforms of the input and output of an LTI operation, no new frequency components are created; the existing ones are only modified. In other words, the output transform is the pointwise product of the input transform with a third transform. See the convolution theorem for a derivation of that property of convolution. Conversely, convolution can be derived as the inverse Fourier transform of the pointwise product of two Fourier transforms. One of the earliest uses of the convolution integral appeared in D'Alembert's derivation of Taylor's theorem in Recherches sur différents points importants du système du monde, published in 1754. An expression of the type ∫ f(u) g(x − u) du is used by Sylvestre François Lacroix on page 505 of his book entitled Treatise on differences and series, the last of three volumes of the encyclopedic series Traité du calcul différentiel et du calcul intégral, Chez Courcier, Paris, 1797–1800.
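The "pointwise product of transforms" property can be sketched in the discrete, circular setting, where the convolution theorem holds exactly. This illustrative example (not from the original text) uses a naive O(n²) DFT to check that DFT(x ⊛ y) equals DFT(x) · DFT(y) pointwise:

```python
import cmath

# Naive discrete Fourier transform of a sequence x.
def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

# Circular convolution on sequences of equal length n.
def circular_convolve(x, y):
    n = len(x)
    return [sum(x[k] * y[(i - k) % n] for k in range(n)) for i in range(n)]

x = [1.0, 2.0, 3.0, 4.0]
y = [0.0, 1.0, 0.5, 0.25]

# Convolution theorem: transform of the convolution equals the
# pointwise product of the individual transforms.
lhs = dft(circular_convolve(x, y))
rhs = [a * b for a, b in zip(dft(x), dft(y))]
assert all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
```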

Soon thereafter, convolution operations appear in the works of Pierre Simon Laplace, Jean-Baptiste Joseph Fourier, Siméon Denis Poisson, and others. The term itself did not come into wide use until the 1960s. Prior to that it was sometimes known as Faltung, composition product, superposition integral, or Carson's integral, yet it appears as early as 1903.