Smoothness

In mathematical analysis, the smoothness of a function is a property measured by the number of continuous derivatives it has. A smooth function is a function that has continuous derivatives of all orders. Differentiability class is a classification of functions according to the properties of their derivatives; higher-order differentiability classes correspond to the existence of more derivatives. Consider an open set on the real line and a real-valued function f defined on that set. Let k be a non-negative integer. The function f is said to be of class Ck if the derivatives f′, f″, ..., f(k) exist and are continuous; the function f is said to be of class C∞, or smooth, if it has derivatives of all orders. The function f is said to be of class Cω, or analytic, if f is smooth and its Taylor series expansion around any point in its domain converges to the function in some neighborhood of the point. Cω is thus contained in C∞. Bump functions are examples of functions in C∞ but not in Cω. To put it differently, the class C0 consists of all continuous functions, and the class C1 consists of all differentiable functions whose derivative is continuous.

Thus, a C1 function is a function whose derivative exists and is of class C0. In general, the classes Ck can be defined recursively by declaring C0 to be the set of all continuous functions and declaring Ck, for any positive integer k, to be the set of all differentiable functions whose derivative is in Ck−1. In particular, Ck is contained in Ck−1 for every k > 0, and there are examples showing that this containment is strict. C∞, the class of infinitely differentiable functions, is the intersection of the sets Ck as k varies over the non-negative integers. The function f(x) = x for x ≥ 0 and f(x) = 0 for x < 0 is continuous, but not differentiable at x = 0, so it is of class C0 but not of class C1. The function g(x) = x² sin(1/x) for x ≠ 0 and g(0) = 0 is differentiable, with derivative g′(x) = −cos(1/x) + 2x sin(1/x) for x ≠ 0 and g′(0) = 0. Because cos(1/x) oscillates as x → 0, g′ is not continuous at zero; therefore, g is differentiable but not of class C1. Moreover, taking g(x) = x^(4/3) sin(1/x) in this example shows that the derivative of a differentiable function can be unbounded on a compact set, and hence that a differentiable function on a compact set may not be locally Lipschitz continuous.
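The behavior of g(x) = x² sin(1/x) can be checked numerically; a minimal sketch (the function names are illustrative). The difference quotients at 0 shrink to 0, so g′(0) exists, yet g′ keeps returning values near −1 arbitrarily close to 0:

```python
import math

def g(x):
    # g(x) = x^2 sin(1/x) for x != 0, g(0) = 0
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

def g_prime(x):
    # for x != 0: g'(x) = -cos(1/x) + 2x sin(1/x); also g'(0) = 0
    if x == 0:
        return 0.0
    return -math.cos(1.0 / x) + 2.0 * x * math.sin(1.0 / x)

# difference quotients at 0 are bounded by |h|, so g'(0) = 0 exists:
print([abs(g(h) / h) for h in (1e-2, 1e-4, 1e-6)])

# but at x = 1/(2*pi*n), cos(1/x) = 1 and sin(1/x) = 0, so g'(x) ~ -1,
# no matter how close to 0 we look; hence g' is not continuous at 0:
print([g_prime(1.0 / (2 * math.pi * n)) for n in (10, 1000, 100000)])
```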

The functions f(x) = |x|^(k+1), where k is even, are continuous and k times differentiable at all x, but at x = 0 they are not (k+1) times differentiable, so they are of class Ck but not of class Cj for any j > k. The exponential function is analytic, and so of class Cω; the trigonometric functions are likewise analytic wherever they are defined. The function f(x) = e^(−1/(1−x²)) for |x| < 1 and f(x) = 0 otherwise is smooth, so of class C∞, but it is not analytic at x = ±1, and so it is not of class Cω. This function f is an example of a smooth function with compact support. A function f: U ⊂ Rⁿ → R defined on an open set U of Rⁿ is said to be of class Ck if all of its partial derivatives of order up to k exist and are continuous on U.
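The bump function above is easy to evaluate directly; a small illustrative sketch:

```python
import math

def bump(x):
    # f(x) = exp(-1/(1 - x^2)) for |x| < 1, and 0 otherwise:
    # smooth everywhere, with compact support [-1, 1]
    if abs(x) < 1:
        return math.exp(-1.0 / (1.0 - x * x))
    return 0.0

print(bump(0.0))    # e^(-1), the maximum, at the center
print(bump(0.999))  # tiny: the function flattens out completely near x = ±1
print(bump(2.0))    # identically 0 outside [-1, 1]
```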

Flow (mathematics)

In mathematics, a flow formalizes the idea of the motion of particles in a fluid. Flows are ubiquitous in science, including engineering and physics; the notion of flow is basic to the study of ordinary differential equations. Informally, a flow may be viewed as a continuous motion of points over time. More formally, a flow is a group action of the real numbers on a set. The idea of a vector flow, that is, the flow determined by a vector field, occurs in the areas of differential topology, Riemannian geometry, and Lie groups. Specific examples of vector flows include the geodesic flow, the Hamiltonian flow, the Ricci flow, the mean curvature flow, and the Anosov flow. Flows may also be defined for systems of random variables and stochastic processes, and occur in the study of ergodic dynamical systems; the most celebrated of these is the Bernoulli flow. A flow on a set X is a group action of the additive group of real numbers on X. More explicitly, a flow is a mapping φ: X × R → X such that, for all x ∈ X and all real numbers s and t, φ(x, 0) = x and φ(φ(x, t), s) = φ(x, s + t).

It is customary to write φt(x) instead of φ(x, t), so that the equations above can be expressed as φ0 = Id and φs ∘ φt = φs+t. For all t ∈ R, the mapping φt: X → X is a bijection with inverse φ−t: X → X; this follows from the above definition, and the real parameter t may be taken as a generalized functional power, as in function iteration. Flows are usually required to be compatible with structures furnished on the set X. In particular, if X is equipped with a topology, then φ is required to be continuous; if X is equipped with a differentiable structure, then φ is required to be differentiable. In these cases the flow forms a one-parameter group of homeomorphisms or diffeomorphisms, respectively. In certain situations one might also consider local flows, which are defined only on some subset dom(φ) ⊂ X × R called the flow domain of φ; this is often the case with the flows of vector fields. It is common in many fields, including engineering and the study of differential equations, to use a notation that makes the flow implicit. Thus, x(t) is written for φt(x0), and one might say that the "variable x depends on the time t and the initial condition x = x0".
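As a concrete sketch, the map φt(x) = x·e^t (the flow of the vector field F(x) = x on the real line) satisfies all of these axioms, which can be spot-checked numerically:

```python
import math

def phi(t, x):
    # flow of the ODE x' = x on X = R: phi_t(x) = x * e^t
    return x * math.exp(t)

x0, s, t = 2.0, 0.3, 1.1
print(phi(0.0, x0))        # phi_0 is the identity
print(phi(s, phi(t, x0)))  # phi_s after phi_t ...
print(phi(s + t, x0))      # ... agrees with phi_{s+t} (group law)
print(phi(-t, phi(t, x0))) # phi_{-t} inverts phi_t, so phi_t is a bijection
```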

Examples are given below. In the case of the flow of a vector field V on a smooth manifold X, the flow is often denoted in such a way that its generator is made explicit, for example ΦV: X × R → X. Given x in X, the set {φ(x, t) : t ∈ R} is called the orbit of x under φ. Informally, it may be regarded as the trajectory of a particle initially positioned at x. If the flow is generated by a vector field, then its orbits are the images of its integral curves. Let F: Rⁿ → Rⁿ be a vector field and x: R → Rⁿ the solution of the initial value problem ẋ(t) = F(x(t)), x(0) = x0. Then φ(x0, t) = x(t) is the flow of the vector field F. It is a well-defined local flow provided that the vector field F is Lipschitz-continuous, and then φ: Rⁿ × R → Rⁿ is also Lipschitz-continuous wherever defined. In general it may be hard to show that the flow φ is globally defined, but one simple criterion is that the vector field F is compactly supported. In the case of time-dependent vector fields F: Rⁿ × R → Rⁿ, one denotes φt,t0(x0) = x(t + t0), where x: R → Rⁿ is the solution of ẋ(t) = F(x(t), t), x(t0) = x0. Then φt,t0 is the time-dependent flow of F. It is not a "flow" by the definition above, but it can be seen as one by rearranging its arguments.
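For a vector field without a closed-form flow, one computes φ by integrating the ODE numerically. A minimal sketch using a fixed-step classical Runge–Kutta scheme (the field F and step count are illustrative choices), checked against the exact flow of F(x) = −x, namely φt(x0) = x0·e^(−t):

```python
import math

def rk4_flow(F, x0, t, steps=1000):
    # approximate phi_t(x0) for the ODE x' = F(x) with classical RK4
    h = t / steps
    x = x0
    for _ in range(steps):
        k1 = F(x)
        k2 = F(x + 0.5 * h * k1)
        k3 = F(x + 0.5 * h * k2)
        k4 = F(x + h * k3)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

F = lambda x: -x                # a Lipschitz vector field on R
approx = rk4_flow(F, 1.0, 2.0)  # phi_2(1.0), numerically
exact = math.exp(-2.0)          # exact flow: phi_t(x0) = x0 * e^(-t)
print(approx, exact)
```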


Eigenvalues and eigenvectors

In linear algebra, an eigenvector or characteristic vector of a linear transformation is a non-zero vector that changes by only a scalar factor when that linear transformation is applied to it. More formally, if T is a linear transformation from a vector space V over a field F into itself and v is a vector in V that is not the zero vector, then v is an eigenvector of T if T(v) is a scalar multiple of v. This condition can be written as the equation T(v) = λv, where λ is a scalar in the field F, known as the eigenvalue, characteristic value, or characteristic root associated with the eigenvector v. If the vector space V is finite-dimensional, then the linear transformation T can be represented as a square matrix A, and the vector v by a column vector, rendering the above mapping as a matrix multiplication on the left-hand side and a scaling of the column vector on the right-hand side in the equation Av = λv. There is a direct correspondence between n-by-n square matrices and linear transformations from an n-dimensional vector space to itself, given any basis of the vector space.
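The matrix form Av = λv can be verified by hand for a small example; a sketch (the matrix is an arbitrary illustration, with eigenvalues found from the characteristic polynomial):

```python
# For A = [[4, 1], [2, 3]]: trace = 7, det = 10, so the characteristic
# polynomial is lambda^2 - 7*lambda + 10 = (lambda - 5)(lambda - 2).
A = [[4, 1], [2, 3]]

def matvec(A, v):
    # 2x2 matrix times column vector
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

# eigenpairs obtained by solving (A - lambda*I) v = 0:
pairs = [(5.0, [1.0, 1.0]), (2.0, [1.0, -2.0])]
for lam, v in pairs:
    print(matvec(A, v), [lam * c for c in v])  # A v equals lambda v
```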

For this reason, it is equivalent to define eigenvalues and eigenvectors using either the language of matrices or the language of linear transformations. Geometrically, an eigenvector corresponding to a real nonzero eigenvalue points in a direction in which it is stretched by the transformation, and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations; the prefix eigen- is adopted from the German word eigen for "proper" or "characteristic". Originally utilized to study principal axes of the rotational motion of rigid bodies, eigenvalues and eigenvectors have a wide range of applications, for example in stability analysis, vibration analysis, atomic orbitals, facial recognition, and matrix diagonalization. In essence, an eigenvector v of a linear transformation T is a non-zero vector that, when T is applied to it, does not change direction. Applying T to the eigenvector only scales the eigenvector by the scalar value λ, called an eigenvalue.

This condition can be written as the equation T(v) = λv, referred to as the eigenvalue equation or eigenequation. In general, λ may be any scalar. For example, λ may be negative, in which case the eigenvector reverses direction as part of the scaling, or it may be zero or complex. The Mona Lisa example provides a simple illustration. Each point on the painting can be represented as a vector pointing from the center of the painting to that point; the linear transformation in this example is called a shear mapping. Points in the top half are moved to the right and points in the bottom half are moved to the left, proportional to how far they are from the horizontal axis that goes through the middle of the painting; the vectors pointing to each point in the original image are therefore tilted right or left and made longer or shorter by the transformation. Notice that points along the horizontal axis do not move at all. Therefore, any vector that points directly to the right or left with no vertical component is an eigenvector of this transformation, because the mapping does not change its direction.
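The shear described above is easy to reproduce with a 2×2 matrix; a sketch (the shear factor 0.5 is an arbitrary choice):

```python
# horizontal shear mapping: (x, y) -> (x + k*y, y)
k = 0.5

def shear(v):
    x, y = v
    return (x + k * y, y)

print(shear((1.0, 0.0)))  # horizontal vector: unchanged, an eigenvector with eigenvalue 1
print(shear((0.0, 1.0)))  # vertical vector: tilted, so not an eigenvector
```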

Moreover, these eigenvectors all have an eigenvalue equal to one, because the mapping does not change their length either. Linear transformations can take many different forms, mapping vectors in a variety of vector spaces, so the eigenvectors can also take many forms. For example, the linear transformation could be a differential operator like d/dx, in which case the eigenvectors are functions called eigenfunctions that are scaled by that differential operator, such as (d/dx) e^(λx) = λ e^(λx). Alternatively, the linear transformation could take the form of an n by n matrix, in which case the eigenvectors are n by 1 matrices. If the linear transformation is expressed in the form of an n by n matrix A, then the eigenvalue equation above can be rewritten as the matrix multiplication Av = λv, where the eigenvector v is an n by 1 matrix. For a matrix, eigenvalues and eigenvectors can be used to decompose the matrix, for example by diagonalizing it. Eigenvalues and eigenvectors give rise to many related mathematical concepts, and the prefix eigen- is applied liberally when naming them: the set of all eigenvectors of a linear transformation, each paired with its corresponding eigenvalue, is called the eigensystem of that transformation.
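The eigenfunction relation (d/dx) e^(λx) = λ e^(λx) can be spot-checked with a central finite difference; a sketch with λ = 2 (the choices of λ, x, and step size are illustrative):

```python
import math

lam = 2.0
f = lambda x: math.exp(lam * x)  # candidate eigenfunction of d/dx

def deriv(f, x, h=1e-6):
    # central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.7
print(deriv(f, x), lam * f(x))  # the derivative is lambda times f, at any x
```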

The set of all eigenvectors of T corresponding to the same eigenvalue, together with the zero vector, is called an eigenspace or characteristic space of T. If a set of eigenvectors of T forms a basis of the domain of T, then this basis is called an eigenbasis. Eigenvalues are often introduced in the context of linear algebra or matrix theory. Historically, however, they arose in the study of quadratic forms and differential equations. In the 18th century Euler studied the rotational motion of a rigid body and discovered the importance of the principal axes.
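When an eigenbasis exists, expressing the transformation in that basis diagonalizes the matrix; a sketch with an illustrative 2×2 example whose eigenpairs are (5, (1, 1)) and (2, (1, −2)):

```python
# Collect the eigenvectors of A as the columns of P; then P^{-1} A P = D,
# a diagonal matrix whose entries are the eigenvalues.
A = [[4.0, 1.0], [2.0, 3.0]]
P = [[1.0, 1.0], [1.0, -2.0]]  # columns are the eigenvectors (1,1) and (1,-2)

def matmul(X, Y):
    # product of two 2x2 matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    # inverse of a 2x2 matrix via the adjugate formula
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

D = matmul(inv2(P), matmul(A, P))
print(D)  # diagonal, with the eigenvalues 5 and 2 on the diagonal
```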

Differentiable manifold

In mathematics, a differentiable manifold is a type of manifold that is locally similar enough to a linear space to allow one to do calculus. Any manifold can be described by a collection of charts known as an atlas. One may then apply ideas from calculus while working within the individual charts, since each chart lies within a linear space to which the usual rules of calculus apply. If the charts are suitably compatible, then computations done in one chart are valid in any other differentiable chart. In formal terms, a differentiable manifold is a topological manifold with a globally defined differential structure. Any topological manifold can be given a differential structure locally by using the homeomorphisms in its atlas and the standard differential structure on a linear space. To induce a global differential structure from the local coordinate systems induced by the homeomorphisms, their compositions on chart intersections in the atlas must be differentiable functions on the corresponding linear space. In other words, where the domains of charts overlap, the coordinates defined by each chart are required to be differentiable with respect to the coordinates defined by every chart in the atlas.

The maps that relate the coordinates defined by the various charts to one another are called transition maps. Differentiability means different things in different contexts, including continuously differentiable, k times differentiable, and holomorphic. Furthermore, the ability to induce such a differential structure on an abstract space allows one to extend the definition of differentiability to spaces without global coordinate systems. A differential structure allows one to define the globally differentiable tangent space, differentiable functions, and differentiable tensor and vector fields. Differentiable manifolds are important in physics: special kinds of differentiable manifolds form the basis for physical theories such as classical mechanics, general relativity, and Yang–Mills theory. It is possible to develop a calculus for differentiable manifolds; this leads to such mathematical machinery as the exterior calculus. The study of calculus on differentiable manifolds is known as differential geometry.

The emergence of differential geometry as a distinct discipline is credited to Carl Friedrich Gauss and Bernhard Riemann. Riemann first described manifolds in his famous habilitation lecture before the faculty at Göttingen. He motivated the idea of a manifold by an intuitive process of varying a given object in a new direction, and presciently described the role of coordinate systems and charts in subsequent formal developments: Having constructed the notion of a manifoldness of n dimensions, and found that its true character consists in the property that the determination of position in it may be reduced to n determinations of magnitude... – B. Riemann. The works of physicists such as James Clerk Maxwell, and of the mathematicians Gregorio Ricci-Curbastro and Tullio Levi-Civita, led to the development of tensor analysis and the notion of covariance, which identifies an intrinsic geometric property as one that is invariant with respect to coordinate transformations. These ideas found a key application in Einstein's theory of general relativity and its underlying equivalence principle.

A modern definition of a 2-dimensional manifold was given by Hermann Weyl in his 1913 book on Riemann surfaces. The widely accepted general definition of a manifold in terms of an atlas is due to Hassler Whitney. A presentation of a topological manifold is a second countable Hausdorff space that is locally homeomorphic to a linear space, by a collection of homeomorphisms called charts. The composition of one chart with the inverse of another chart is a function called a transition map, and defines a homeomorphism of an open subset of the linear space onto another open subset of the linear space. This formalizes the notion of "patching together pieces of a space to make a manifold" – the manifold produced also contains the data of how it has been patched together. However, different atlases may produce "the same" manifold. Thus, one defines a topological manifold to be a space as above with an equivalence class of atlases, where equivalence of atlases is defined below. There are a number of different types of differentiable manifolds, depending on the precise differentiability requirements on the transition functions.

Some common examples include the following. A differentiable manifold is a topological manifold equipped with an equivalence class of atlases whose transition maps are all differentiable. More precisely, a Ck-manifold is a topological manifold with an atlas whose transition maps are all k times continuously differentiable. A smooth manifold or C∞-manifold is a differentiable manifold for which all the transition maps are smooth; that is, derivatives of all orders exist. An equivalence class of such atlases is said to be a smooth structure. An analytic manifold, or Cω-manifold, is a smooth manifold with the additional condition that each transition map is analytic: the Taylor expansion is convergent and equals the function on some open ball. A complex manifold is a topological space modeled on a Euclidean space over the complex field and for which all the transition maps are holomorphic. While there is a meaningful notion of a Ck atlas, there is no distinct notion of a Ck manifold other than C0 and C∞, because for every Ck-structure with k > 0, there is a unique Ck-equivalent C∞-structure – a result of Whitney.
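Transition maps can be made concrete on the unit circle: stereographic projection from the north pole gives one chart, projection from the south pole another, and on the overlap the transition map sends t to 1/t, which is smooth (indeed analytic) away from t = 0, so the two charts define a smooth structure. A minimal sketch of this standard example:

```python
import math

def chart_north(p):
    # stereographic projection of the unit circle from the north pole (0, 1)
    x, y = p
    return x / (1.0 - y)

def chart_south(p):
    # stereographic projection from the south pole (0, -1)
    x, y = p
    return x / (1.0 + y)

theta = 0.8
p = (math.cos(theta), math.sin(theta))   # a point on the circle in both chart domains
t, s = chart_north(p), chart_south(p)
print(t, s, 1.0 / t)  # on the overlap the transition map is t -> 1/t, so s = 1/t
```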

In fact, eve