1.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope. Mathematicians seek out patterns and use them to formulate new conjectures, and they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences. 
Applied mathematics has led to entirely new mathematical disciplines, such as statistics. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind; there is no clear line separating pure and applied mathematics. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns; in Babylonian mathematics, elementary arithmetic first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a study of mathematics in its own right. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today, and the overwhelming majority of published mathematical works contain new mathematical theorems and their proofs. The word máthēma is derived from the Greek μανθάνω ("to learn"), while the modern Greek equivalent is μαθαίνω; in Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.
2.
Differential geometry
–
Differential geometry is a mathematical discipline that uses the techniques of differential calculus, integral calculus, linear algebra and multilinear algebra to study problems in geometry. The theory of plane and space curves and of surfaces in three-dimensional Euclidean space formed the basis for the development of differential geometry during the 18th century. Since the late 19th century, differential geometry has grown into a field concerned more generally with geometric structures on differentiable manifolds. Differential geometry is closely related to differential topology and to the geometric aspects of the theory of differential equations. The differential geometry of surfaces captures many of the field's key ideas. Differential geometry arose and developed as a result of, and in connection with, the mathematical analysis of curves and surfaces; unanswered questions there indicated greater, hidden relationships. Initially applied to Euclidean space, further explorations led to non-Euclidean spaces and to metric and topological spaces. Riemannian geometry studies Riemannian manifolds: smooth manifolds equipped with a Riemannian metric, a concept of distance expressed by means of a smooth positive definite symmetric bilinear form defined on the tangent space at each point. Various concepts based on length, such as the arc length of curves and the area of plane regions, acquire natural analogues in this setting. The notion of a directional derivative of a function from multivariable calculus is extended in Riemannian geometry to the notion of a covariant derivative of a tensor, and many concepts and techniques of analysis and differential equations have been generalized to the setting of Riemannian manifolds. A distance-preserving diffeomorphism between Riemannian manifolds is called an isometry; this notion can also be defined locally, i.e. for small neighborhoods of points. Any two regular curves are locally isometric. 
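As a concrete sketch of how a Riemannian metric turns into arc length, the snippet below uses the unit sphere's standard first fundamental form (E = 1, F = 0, G = sin²θ, a textbook fact taken as given) and integrates entirely inside the coordinate chart; the function and parameter names are illustrative, not from the source.

```python
import math

def arc_length(theta, phi, dtheta, dphi, t0, t1, n=10_000):
    """Length of the curve t -> (theta(t), phi(t)) on the unit sphere,
    computed from the metric alone via a midpoint-rule sum:
    integrand sqrt(E th'^2 + 2F th' ph' + G ph'^2)."""
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h
        # First fundamental form of the unit sphere in spherical coordinates:
        E, F, G = 1.0, 0.0, math.sin(theta(t)) ** 2
        tp, pp = dtheta(t), dphi(t)
        total += math.sqrt(E * tp * tp + 2 * F * tp * pp + G * pp * pp) * h
    return total

# Half of the equator (theta = pi/2, phi running from 0 to pi): length pi.
half_equator = arc_length(lambda t: math.pi / 2, lambda t: t,
                          lambda t: 0.0, lambda t: 1.0, 0.0, math.pi)
```

The point of the sketch is that no ambient three-dimensional coordinates are needed: lengths are recovered from the bilinear form on each tangent space.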
In higher dimensions, the Riemann curvature tensor is an important pointwise invariant associated with a Riemannian manifold that measures how close it is to being flat. An important class of Riemannian manifolds is the Riemannian symmetric spaces, whose curvature is not necessarily constant; these are the closest analogues to the plane and space considered in Euclidean and non-Euclidean geometry. Pseudo-Riemannian geometry generalizes Riemannian geometry to the case in which the metric tensor need not be positive-definite. A special case of this is a Lorentzian manifold, which is the mathematical basis of Einstein's general relativity theory of gravity. Finsler geometry has the Finsler manifold as its main object of study: a manifold with a Finsler metric, i.e. a Banach norm defined on each tangent space. Riemannian manifolds are special cases of the more general Finsler manifolds. A Finsler structure on a manifold M is a function F : TM → [0, ∞) such that F(x, my) = |m| F(x, y) for all (x, y) in TM, and F is infinitely differentiable on TM ∖ {0}. Symplectic geometry is the study of symplectic manifolds. A symplectic manifold is an almost symplectic manifold for which the symplectic form ω is closed; a diffeomorphism between two symplectic manifolds which preserves the symplectic form is called a symplectomorphism. Non-degenerate skew-symmetric bilinear forms can exist only on even-dimensional vector spaces. In dimension 2, a symplectic manifold is just a surface endowed with an area form, and a symplectomorphism is an area-preserving diffeomorphism.
3.
Multivariable calculus
–
A study of limits and continuity in multivariable calculus yields many counter-intuitive results not exhibited by single-variable functions. For example, the function f(x, y) = x²y/(x⁴ + y²) approaches zero along any line through the origin; however, when the origin is approached along the parabola y = x², the limit is 0.5. Since taking different paths toward the same point yields different values for the limit, the limit at the origin does not exist. Moreover, continuity in each argument separately is not sufficient for multivariate continuity. For instance, in the case of a function f(x, y) with two real-valued parameters, continuity of f in x for fixed y and continuity of f in y for fixed x do not imply continuity of f. Consider f(x, y) = { y/x − y if 1 ≥ x > y ≥ 0; x/y − x if 1 ≥ y > x ≥ 0; 1 − x if x = y > 0; 0 otherwise }. It is easy to verify that every single-variable function f(·, y) is continuous in x, and, similarly, every f(x, ·) is continuous, as f is symmetric with respect to x and y. However, f itself is not continuous at the origin, as can be seen by considering the sequence f(1/n, 1/n), which would converge to f(0, 0) = 0 if f were continuous; instead, lim n→∞ f(1/n, 1/n) = lim n→∞ (1 − 1/n) = 1. Thus the function is not continuous at the origin. The partial derivative generalizes the notion of the derivative to higher dimensions: a partial derivative of a function is a derivative with respect to one variable with all other variables held constant. Partial derivatives may be combined in interesting ways to form more complicated expressions of the derivative. In vector calculus, the del operator (∇) is used to define the concepts of gradient, divergence, and curl. A matrix of partial derivatives, the Jacobian matrix, may be used to represent the derivative of a function between two spaces of arbitrary dimension; the derivative can thus be understood as a linear transformation which varies from point to point in the domain of the function. Differential equations containing partial derivatives are called partial differential equations or PDEs; these equations are generally more difficult to solve than ordinary differential equations. 
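The path dependence described above is easy to check numerically; this is a sketch with our own helper name `f` for the function in the text.

```python
def f(x, y):
    # f(x, y) = x^2 y / (x^4 + y^2), the example from the text
    return x**2 * y / (x**4 + y**2)

# Along a line y = m*x the values shrink toward 0 as x -> 0
# (here f(x, m*x) = m*x / (x^2 + m^2)):
line_vals = [f(x, 2 * x) for x in (0.1, 0.01, 0.001)]

# Along the parabola y = x^2 the function is identically 1/2:
parab_vals = [f(x, x**2) for x in (0.1, 0.01, 0.001)]
```

Because every neighborhood of the origin contains points from both families, no single limiting value can work, which is exactly why the limit fails to exist.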
The multiple integral expands the concept of the integral to functions of any number of variables; double and triple integrals may be used to calculate areas and volumes of regions in the plane and in space. Fubini's theorem guarantees that a multiple integral may be evaluated as a repeated integral (iterated integral) as long as the integrand is continuous throughout the domain of integration. The surface integral and the line integral are used to integrate over curved manifolds such as surfaces and curves. In single-variable calculus, the fundamental theorem of calculus establishes a link between the derivative and the integral.
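The iterated-integral evaluation licensed by Fubini's theorem can be mimicked numerically. This sketch (midpoint rule; the helper names are ours) integrates f(x, y) = xy over the unit square in both orders and obtains the same answer, 1/4.

```python
def midpoint_1d(g, a, b, n=200):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

def iterated(f, ax, bx, ay, by, n=200):
    """Evaluate a double integral as an iterated integral:
    inner integral over y, outer integral over x (Fubini's theorem)."""
    return midpoint_1d(lambda x: midpoint_1d(lambda y: f(x, y), ay, by, n),
                       ax, bx, n)

f = lambda x, y: x * y
I_xy = iterated(f, 0.0, 1.0, 0.0, 1.0)                      # y inner, x outer
I_yx = iterated(lambda y, x: f(x, y), 0.0, 1.0, 0.0, 1.0)   # order swapped
```

For a continuous integrand the two orders agree, which is the content of the theorem; the exact value here is ∫₀¹∫₀¹ xy dy dx = 1/4.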
4.
Coordinate
–
The order of the coordinates is significant, and they are sometimes identified by their position in an ordered tuple and sometimes by a letter, as in "the x-coordinate". In elementary mathematics the coordinates are taken to be real numbers. The use of a coordinate system allows problems in geometry to be translated into problems about numbers and vice versa; this is the basis of analytic geometry. The simplest example of a coordinate system is the identification of points on a line with real numbers using the number line. In this system, an arbitrary point O (the origin) is chosen on a given line, and the coordinate of a point P is defined as the signed distance from O to P. Each point is given a unique coordinate and each real number is the coordinate of a unique point. The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the lines. In three dimensions, three mutually perpendicular planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space. Depending on the direction and order of the coordinate axes, the system may be right-handed or left-handed. This is one of many coordinate systems. Another common coordinate system for the plane is the polar coordinate system. A point is chosen as the pole and a ray from this point is taken as the polar axis. For a given angle θ, there is a single line through the pole whose angle with the polar axis is θ, and there is a single point on this line whose signed distance from the origin is r for a given number r. For a given pair of coordinates (r, θ) there is a single point, but any point is represented by many pairs of coordinates: for example, (r, θ), (r, θ + 2π) and (−r, θ + π) are all polar coordinates for the same point. The pole is represented by (0, θ) for any value of θ. There are two common methods for extending the polar coordinate system to three dimensions. 
In the cylindrical coordinate system, a z-coordinate with the same meaning as in Cartesian coordinates is added to the r and θ polar coordinates, giving a triple (r, θ, z). Spherical coordinates take this a step further by converting the pair of cylindrical coordinates (r, z) to polar coordinates, giving a triple (ρ, θ, φ). A point in the plane may be represented in homogeneous coordinates by a triple (x, y, z) where x/z and y/z are the Cartesian coordinates of the point.
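The conversions between these systems are elementary trigonometry; the sketch below uses our own helper names and the convention that φ is measured from the positive z-axis (one of several conventions in use).

```python
import math

def polar_to_cartesian(r, theta):
    """(r, theta) -> (x, y)."""
    return (r * math.cos(theta), r * math.sin(theta))

def cylindrical_to_cartesian(r, theta, z):
    """(r, theta, z) -> (x, y, z): the planar conversion plus z unchanged."""
    x, y = polar_to_cartesian(r, theta)
    return (x, y, z)

def spherical_to_cartesian(rho, theta, phi):
    """(rho, theta, phi) -> (x, y, z), phi measured from the +z axis."""
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

# (r, theta) and (r, theta + 2*pi) name the same point:
p1 = polar_to_cartesian(2.0, 0.5)
p2 = polar_to_cartesian(2.0, 0.5 + 2 * math.pi)
```

The pair `p1`, `p2` agreeing illustrates the non-uniqueness of polar coordinates discussed above.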
5.
Integrand
–
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse, differentiation, being the other. The area above the x-axis adds to the total and that below the x-axis subtracts from the total. Roughly speaking, the operation of integration is the reverse of differentiation; for this reason, the term integral may also refer to the related notion of the antiderivative, in which case it is called an indefinite integral. The integrals discussed in this article are those termed definite integrals. A rigorous mathematical definition of the integral was given by Bernhard Riemann; it is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. A line integral is defined for functions of two or three variables, with the interval of integration replaced by a curve connecting two points in the plane or in space; in a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. The earliest systematic technique for determining integrals, the method of exhaustion, was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. A similar method was developed in China around the 3rd century AD by Liu Hui, and it was used in the 5th century by the Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng. The next significant advances in integral calculus did not begin to appear until the 17th century, when Barrow and Torricelli provided the first hints of a connection between integration and differentiation; Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers. 
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation, and this connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed.
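The connection can be checked numerically: a Riemann-style sum over thin slabs for ∫₀¹ x² dx should agree with the antiderivative difference F(1) − F(0), where F(x) = x³/3. A minimal sketch (helper names ours):

```python
def midpoint_sum(f, a, b, n=100_000):
    """Approximate the definite integral of f over [a, b] by summing
    the areas of n thin vertical slabs (midpoint rule)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Riemann-style approximation of the integral of x^2 over [0, 1]:
approx = midpoint_sum(lambda x: x * x, 0.0, 1.0)

# Fundamental theorem of calculus: evaluate an antiderivative instead.
F = lambda x: x**3 / 3
exact = F(1.0) - F(0.0)
```

The two values agree to high precision, which is the "comparative ease" the text refers to: differentiating candidates until one finds an antiderivative is usually far cheaper than summing slabs.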
6.
Manifold
–
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighbourhood that is homeomorphic to the Euclidean space of dimension n. One-dimensional manifolds include lines and circles, but not figure eights; two-dimensional manifolds are also called surfaces. Although a manifold locally resembles Euclidean space, globally it may not. For example, the surface of the sphere is not a Euclidean space, but each region of it can be charted by means of map projections of the region into the Euclidean plane. When a region appears in two neighbouring charts, the two representations do not coincide exactly, and a transformation is needed to pass from one to the other. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions. One important class of manifolds is the class of differentiable manifolds; this differentiable structure allows calculus to be done on manifolds. A Riemannian metric on a manifold allows distances and angles to be measured. Symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity. After a line, the circle is the simplest example of a topological manifold. Topology ignores bending, so a small piece of a circle is treated exactly the same as a small piece of a line. Consider, for instance, the top part of the unit circle x² + y² = 1, where the y-coordinate is positive. Any point of this arc can be described by its x-coordinate, so projection onto the first coordinate is a continuous, and invertible, mapping from the arc to the open interval (−1, 1). Such functions, along with the open regions they map, are called charts. Similarly, there are charts for the bottom, left, and right parts of the circle; together, these parts cover the whole circle, and the four charts form an atlas for the circle. 
The top and right charts, χtop and χright respectively, overlap in their domains: their common domain is the quarter of the circle where both the x- and y-coordinates are positive, and each of the two charts maps this part onto an interval, though differently. Let a be any number in the overlap; then T(a) = χright(χtop⁻¹(a)) = χright(a, √(1 − a²)) = √(1 − a²). Such a function is called a transition map. The top, bottom, left, and right charts show that the circle is a manifold, but charts need not be geometric projections, and the number of charts is a matter of some choice. A pair of slope-based charts s and t, with t = 1/s on their overlap, provides a second atlas for the circle. Each of these charts omits a single point, either for s or for t, and it can be proved that it is not possible to cover the full circle with a single chart. Viewed using calculus, the transition function T is simply a function between open intervals, which gives a meaning to the statement that T is differentiable.
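The transition map is easy to verify numerically. In this sketch, χtop sends a point of the upper arc to its x-coordinate and χright sends a point of the right arc to its y-coordinate, as in the text; the implementations and helper names are ours.

```python
import math

def chi_top_inverse(a):
    """Inverse of the top chart: x-coordinate -> point on the upper arc."""
    return (a, math.sqrt(1.0 - a * a))

def chi_right(point):
    """Right chart: a point (x, y) with x > 0 -> its y-coordinate."""
    x, y = point
    return y

def T(a):
    """Transition map chi_right o chi_top^{-1} on the overlap 0 < a < 1."""
    return chi_right(chi_top_inverse(a))

# For a = 0.6 the formula T(a) = sqrt(1 - a^2) gives 0.8.
val = T(0.6)
```

The composition never leaves the coordinate descriptions: it converts the top-chart coordinate of an overlap point directly into its right-chart coordinate, which is what makes the atlas usable for calculus.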
8.
Surface integral
–
In mathematics, a surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double integral analogue of the line integral. Given a surface, one may integrate scalar fields and vector fields over it. Surface integrals have applications in physics, particularly in the theory of classical electromagnetism. Let such a parameterization be x(s, t), where (s, t) varies in some region T in the plane. The surface integral of a scalar field f can be expressed in the equivalent form ∬S f dΣ = ∬T f(x(s, t)) √g ds dt, where g is the determinant of the first fundamental form of the mapping x. The cross product of the partial derivatives, ∂x/∂s × ∂x/∂t, can be recognized as a normal vector to the surface. Note that, because of the presence of this cross product, the above formulas only work for surfaces embedded in three-dimensional space. This can be seen as integrating a Riemannian volume form on the parameterized surface. Now consider a vector field v on S, that is, for each x in S, v(x) is a vector. The surface integral of v can be defined componentwise according to the definition of the surface integral of a scalar field; this applies, for example, in the expression of the electric field at some fixed point due to an electrically charged surface. Alternatively, we can integrate the normal component of the vector field: imagine that we have a fluid flowing through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S per unit time. This illustration implies that if the vector field is tangent to S at each point, then the flux is zero, because the fluid just flows parallel to S. If v does not just flow along S, only its normal component contributes to the flux, and we find the formula ∬S v ⋅ dΣ = ∬T v(x(s, t)) ⋅ (∂x/∂s × ∂x/∂t) ds dt. The cross product on the right-hand side of this expression is a surface normal determined by the parametrization. 
This formula defines the integral on the left. We may also interpret this as a special case of integrating 2-forms, where we identify the vector field with a 1-form and then integrate its Hodge dual over the surface; the treatment of differential 2-forms is similar. The integral of f on S is then given by an integral ∬D over the parameter domain, in which ∂x/∂s × ∂x/∂t ds dt supplies the surface element normal to S. The integral of this 2-form is the same as the surface integral of the vector field which has as components fx, fy and fz.
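The scalar-field formula can be exercised numerically by integrating f ≡ 1 over the unit sphere, i.e. computing its surface area, which should come out close to 4π. This is a sketch: the parameterization is the standard one, the partial derivatives are approximated by central differences, and all helper names are ours.

```python
import math

def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def sphere(t, p):
    """Unit sphere parameterization, t in (0, pi), p in (0, 2*pi)."""
    return (math.sin(t) * math.cos(p), math.sin(t) * math.sin(p), math.cos(t))

def partial(f, t, p, which, h=1e-6):
    """Central-difference partial derivative of f in t or p."""
    if which == "t":
        a, b = f(t + h, p), f(t - h, p)
    else:
        a, b = f(t, p + h), f(t, p - h)
    return tuple((x - y) / (2 * h) for x, y in zip(a, b))

def surface_area(f, n=200):
    """Sum |df/dt x df/dp| dt dp over a midpoint grid: the surface
    integral of the constant scalar field 1."""
    ht, hp = math.pi / n, 2 * math.pi / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            t, p = (i + 0.5) * ht, (j + 0.5) * hp
            nrm = cross(partial(f, t, p, "t"), partial(f, t, p, "p"))
            total += math.sqrt(sum(c * c for c in nrm)) * ht * hp
    return total

area = surface_area(sphere)   # should be close to 4*pi
```

Replacing the integrand |∂x/∂t × ∂x/∂p| by v(x(t, p)) ⋅ (∂x/∂t × ∂x/∂p) would turn the same loop into the flux integral from the text.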
9.
Oriented
–
In mathematics, orientability is a property of surfaces in Euclidean space that measures whether it is possible to make a consistent choice of surface normal vector at every point. A choice of surface normal allows one to use the right-hand rule to define a "clockwise" direction for loops in the surface. More generally, orientability can be defined for an abstract surface, or manifold. Equivalently, a surface is orientable if a two-dimensional figure in the surface cannot be moved around the surface and back to where it started so that it looks like its own mirror image. The notion of orientability can be generalised to higher-dimensional manifolds as well: a manifold is orientable if it has a consistent choice of orientation, and a connected orientable manifold has exactly two different possible orientations. In this setting, various equivalent formulations of orientability can be given, depending on the desired application and level of generality. A surface S in the Euclidean space R³ is orientable if a two-dimensional figure cannot be moved around the surface so as to return as its mirror image; an abstract surface is orientable if a consistent concept of clockwise rotation can be defined on the surface in a continuous manner. That is to say, a loop going around one way on the surface can never be continuously deformed to a loop going around the opposite way. This turns out to be equivalent to the question of whether the surface contains no subset that is homeomorphic to the Möbius strip; thus, for surfaces, the Möbius strip may be considered the source of all non-orientability. For an orientable surface, a consistent choice of "clockwise" is called an orientation, and the surface is then called oriented. For surfaces embedded in Euclidean space, an orientation is specified by the choice of a continuously varying surface normal n at every point. If such a normal exists at all, then there are always two ways to select it: n or −n. More generally, an orientable surface admits exactly two orientations, and the distinction between an oriented surface and an orientable surface is subtle and frequently blurred. 
Examples: Most surfaces we encounter in the physical world are orientable. Spheres, planes, and tori are orientable, for example, but Möbius strips, real projective planes, and Klein bottles are non-orientable. They, as visualized in 3 dimensions, all have just one side; the real projective plane and Klein bottle cannot be embedded in R³, only immersed with nice intersections. Note that locally an embedded surface always has two sides, so a near-sighted ant crawling on a one-sided surface would think there is an "other side". The essence of one-sidedness is that the ant can crawl from one side of the surface to the other without going through the surface or flipping over an edge. In general, the property of being orientable is not equivalent to being two-sided; however, this holds when the ambient space is orientable. For example, a torus embedded in K² × S¹ can be one-sided. Orientation by triangulation: Any surface has a triangulation, a decomposition into triangles such that each edge on a triangle is glued to at most one other edge.
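The failure of a consistent normal on the Möbius strip can be seen numerically: transport the normal once around the core circle of the usual textbook parameterization and it comes back reversed. The parameterization below is the standard one; the finite-difference approach and helper names are ours.

```python
import math

def mobius(u, v):
    """Standard Mobius strip: u in [0, 2*pi], v in (-1/2, 1/2)."""
    w = 1.0 + v * math.cos(u / 2.0)
    return (w * math.cos(u), w * math.sin(u), v * math.sin(u / 2.0))

def normal(u, v, h=1e-6):
    """Unit normal: cross product of central-difference partials."""
    du = tuple((a - b) / (2 * h)
               for a, b in zip(mobius(u + h, v), mobius(u - h, v)))
    dv = tuple((a - b) / (2 * h)
               for a, b in zip(mobius(u, v + h), mobius(u, v - h)))
    n = (du[1] * dv[2] - du[2] * dv[1],
         du[2] * dv[0] - du[0] * dv[2],
         du[0] * dv[1] - du[1] * dv[0])
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

n_start = normal(0.0, 0.0)
n_back = normal(2.0 * math.pi, 0.0)   # same surface point, one loop later
flip = sum(a * b for a, b in zip(n_start, n_back))   # dot product, near -1
```

Since u = 0 and u = 2π parameterize the same point of the strip but yield opposite normals, no continuous global choice of normal exists, which is exactly the non-orientability described above.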
10.
Surface (topology)
–
In topology and differential geometry, a surface is a two-dimensional manifold and, as such, may be an abstract surface not embedded in any Euclidean space. For example, the Klein bottle is a surface that cannot be represented in three-dimensional Euclidean space without introducing self-intersections. In mathematics generally, a surface is a geometrical shape that resembles a deformed plane. The most familiar examples arise as boundaries of solid objects in ordinary three-dimensional Euclidean space R³. The exact definition of a surface may depend on the context: typically, in geometry, a surface may cross itself, while in topology and differential geometry it may not. A surface is a two-dimensional space; this means that a moving point on a surface may move in two directions. In other words, around almost every point there is a patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles a two-dimensional sphere. The concept of surface is widely used in physics, engineering, computer graphics, and many other disciplines, primarily in representing the surfaces of physical objects; for example, in analyzing the aerodynamic properties of an airplane, the central consideration is the flow of air along its surface. More formally, a surface is a topological space in which every point has an open neighbourhood homeomorphic to some open subset of the Euclidean plane E². Such a neighborhood, together with the corresponding homeomorphism, is known as a chart, and it is through this chart that the neighborhood inherits the standard coordinates on the Euclidean plane. These coordinates are known as local coordinates, and these homeomorphisms lead us to describe surfaces as being locally Euclidean. In most writings on the subject, it is assumed, explicitly or implicitly, that, as a topological space, a surface is also nonempty, second countable and Hausdorff. It is also assumed that the surfaces under consideration are connected. 
The rest of this article will assume, unless specified otherwise, that a surface is nonempty, Hausdorff, second countable and connected. A surface with boundary is defined analogously, using the closed upper half-plane in place of the plane; the corresponding homeomorphisms are also known as charts. The boundary of the upper half-plane is the x-axis, and a point of the surface mapped via a chart to the x-axis is termed a boundary point. The collection of such points is known as the boundary of the surface, which is necessarily a one-manifold, that is, the union of closed curves. On the other hand, a point mapped above the x-axis is an interior point; the collection of interior points is the interior of the surface, which is always non-empty. The closed disk is an example of a surface with boundary.
11.
Exterior product
–
In mathematics, the exterior algebra of a vector space is an associative algebra that contains the vector space, and such that the square of any element of the vector space is zero. The exterior algebra is universal in the sense that every embedding of the space or module in an algebra that has these properties may be factored through the exterior algebra. The multiplication operation of the algebra is called the exterior product or wedge product. The term "exterior" comes from the product of two vectors not being a vector, while the term "wedge" comes from the shape of the multiplication symbol ∧. The exterior algebra is also named Grassmann algebra, after Hermann Grassmann. The exterior product should not be confused with the outer product, which is the tensor product of vectors. The exterior product of two vectors is called a 2-blade, which is in turn a bivector; more generally, the product of any number k of vectors is sometimes called a k-blade. Given a vector space V, its exterior algebra is denoted Λ(V); the vector subspace generated by the k-blades is known as the kth exterior power of V, and denoted Λk(V). The exterior algebra Λ(V) is the direct sum of the Λk(V) as modules, with the exterior product as additional structure. The exterior product makes the exterior algebra a graded algebra, and it is alternating. The exterior algebra is used in geometry to study areas, volumes, and their higher-dimensional analogues. An exterior algebra has the structure of a bialgebra; in this context, a Euclidean structure induces on the exterior algebra a richer structure of a Hopf algebra. The exterior algebra is also used in multivariable calculus, as the differential forms of higher degree belong to the exterior algebra of the differential forms of degree one. The Cartesian plane R² is a vector space equipped with a basis consisting of a pair of unit vectors e₁ = (1, 0), e₂ = (0, 1). Suppose that v = (a, b) = a e₁ + b e₂ and w = (c, d) = c e₁ + d e₂ are a pair of vectors in R². 
There is a parallelogram having v and w as two of its sides. The area of this parallelogram is given by the determinant formula: Area = |ad − bc|. Note that the coefficient ad − bc is precisely the determinant of the matrix with columns v and w; the fact that it may be positive or negative has the intuitive meaning that v and w may be oriented in a counterclockwise or clockwise sense as the vertices of the parallelogram they define. Such an area is called the signed area of the parallelogram; the absolute value of the signed area is the ordinary area.
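The signed area is exactly the coefficient of e₁ ∧ e₂ in v ∧ w; a minimal sketch (the helper name is ours):

```python
def wedge2(v, w):
    """Coefficient of e1 ^ e2 in v ^ w for v, w in R^2:
    the signed area of the parallelogram spanned by v and w."""
    a, b = v
    c, d = w
    return a * d - b * c

ccw = wedge2((2.0, 0.0), (0.0, 3.0))          # counterclockwise pair: +6
cw = wedge2((0.0, 3.0), (2.0, 0.0))           # swapped order: -6
degenerate = wedge2((1.0, 1.0), (2.0, 2.0))   # parallel vectors: 0
```

The sign change under swapping and the vanishing on parallel vectors are the antisymmetry and alternating properties of the wedge product in miniature.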
13.
Orientation (mathematics)
–
In linear algebra, the notion of orientation makes sense in arbitrary finite dimension. In this setting, the orientation of an ordered basis is a kind of asymmetry that makes a reflection impossible to replicate by means of a simple rotation. In the three-dimensional Euclidean space, the two possible basis orientations are called right-handed and left-handed. An orientation on a real vector space is the arbitrary choice of which ordered bases are positively oriented and which are negatively oriented; in three-dimensional Euclidean space, right-handed bases are typically declared to be positively oriented. A vector space with an orientation selected is called an oriented vector space, while one without a selected orientation is called unoriented. Let V be a real vector space and let b1 and b2 be two ordered bases for V. It is a result in linear algebra that there exists a unique linear transformation A : V → V that takes b1 to b2. The bases b1 and b2 are said to have the same orientation if A has positive determinant; otherwise they have opposite orientations. The property of having the same orientation defines an equivalence relation on the set of all ordered bases for V. If V is non-zero, there are exactly two equivalence classes determined by this relation. An orientation on V is an assignment of +1 to one equivalence class and −1 to the other. Every ordered basis lies in one equivalence class or the other; thus any choice of an ordered basis for V determines an orientation. For example, the standard basis on Rⁿ provides a standard orientation on Rⁿ, and any choice of an isomorphism between V and Rⁿ will then provide an orientation on V. The ordering of elements in a basis is crucial: two bases with a different ordering will differ by some permutation, and they will have the same or opposite orientations according to whether the signature of this permutation is +1 or −1. This is because the determinant of a permutation matrix is equal to the signature of the associated permutation. Similarly, let A be a linear mapping of the vector space Rⁿ to Rⁿ.
This mapping is orientation-preserving if its determinant is positive. A zero-dimensional vector space has only a single point, the zero vector; consequently, the only basis of a zero-dimensional vector space is the empty set ∅. Therefore, there is a single equivalence class of ordered bases, namely the class whose sole member is the empty set.
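The two equivalence classes described above can be told apart by the sign of a determinant. A rough sketch (the helper names `det` and `same_orientation` are made up for illustration): write each basis as a matrix in standard coordinates and compare determinant signs, since the change-of-basis map has positive determinant exactly when the signs agree.

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for small n)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def same_orientation(b1, b2):
    # The map taking b1 to b2 has determinant det(b2)/det(b1), which is
    # positive exactly when the two determinants have the same sign.
    return (det(b1) > 0) == (det(b2) > 0)

std = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
swapped = [[0, 1, 0], [1, 0, 0], [0, 0, 1]]  # one transposition of basis vectors
assert same_orientation(std, std)
assert not same_orientation(std, swapped)    # signature −1: opposite orientation
```

The `swapped` example mirrors the permutation remark in the text: exchanging two basis vectors is a transposition with signature −1, so it reverses orientation.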
14.
Differential of a function
–
In calculus, the differential represents the principal part of the change in a function y = f(x) with respect to changes in the independent variable. The differential dy is defined by dy = f′(x) dx, where f′(x) is the derivative of f with respect to x; one also writes df = f′(x) dx. The precise meaning of the variables dy and dx depends on the context of the application. Traditionally, the variables dx and dy are considered to be very small, and this interpretation is made rigorous in non-standard analysis. The quotient dy/dx is not infinitely small; rather, it is a real number. The use of infinitesimals in this form was widely criticized, for instance by the famous pamphlet The Analyst by Bishop Berkeley. Augustin-Louis Cauchy defined the differential without appeal to the atomism of Leibniz's infinitesimals. In physical treatments, such as those applied to the theory of thermodynamics, the infinitesimal view still prevails. Courant & John reconcile the use of infinitesimal differentials with the mathematical impossibility of them as follows: the differentials represent finite non-zero values that are smaller than the degree of accuracy required for the purpose for which they are intended. Thus physical infinitesimals need not appeal to a corresponding mathematical infinitesimal in order to have a precise sense. Following twentieth-century developments in mathematical analysis and differential geometry, it became clear that the notion of the differential of a function could be extended in a variety of ways. In real analysis, it is desirable to deal directly with the differential as the principal part of the increment of a function. This leads directly to the notion that the differential of a function at a point is a linear functional of an increment Δx. This approach allows the differential to be developed for a variety of more sophisticated spaces. In non-standard calculus, differentials are regarded as infinitesimals, which can themselves be put on a rigorous footing.
The differential is defined in modern treatments of calculus as follows. The differential of a function f of a real variable x is the function df of two independent real variables x and Δx given by df(x, Δx) = f′(x) Δx. One or both of the arguments may be suppressed: one may see df(x) or simply df. If y = f(x), the differential may also be written as dy. For a function y of several variables x1, …, xn, the partial differential of y with respect to x1 is (∂y/∂x1) dx1, involving the partial derivative of y with respect to x1. The total differential is then defined as dy = (∂y/∂x1) Δx1 + ⋯ + (∂y/∂xn) Δxn. Since, with this definition, dxi = Δxi, the total differential may equally be written dy = (∂y/∂x1) dx1 + ⋯ + (∂y/∂xn) dxn. In measurement, the total differential is used in estimating the error Δf of a function f based on the errors Δx, Δy, … of the parameters x, y, …. As the errors are assumed to be independent, the analysis describes the worst-case scenario; the absolute values of the component errors are used, because after a simple computation the derivative may have a negative sign.
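A minimal numerical sketch of the two ideas above (the function names are illustrative, not a standard API): the differential df(x, Δx) = f′(x)·Δx as a function of two variables, compared with the true increment, and the total differential used as a worst-case error estimate with absolute values.

```python
# Sketch: the differential as a function of two independent variables x, Δx.
def differential(fprime, x, dx):
    """df(x, Δx) = f'(x)·Δx, the principal part of the increment."""
    return fprime(x) * dx

f = lambda x: x ** 2
fprime = lambda x: 2 * x
x, dx = 3.0, 0.01
increment = f(x + dx) - f(x)                 # the true change Δy
approx = differential(fprime, x, dx)         # its principal part f'(x)·Δx
assert abs(increment - approx) <= dx ** 2 + 1e-12   # remainder is O(Δx²)

# Worst-case error propagation via the total differential, for A = x·y:
# |ΔA| ≤ |∂A/∂x|·Δx + |∂A/∂y|·Δy = |y|·Δx + |x|·Δy (absolute values used).
x0, y0, ex, ey = 10.0, 5.0, 0.1, 0.2
worst_case = abs(y0) * ex + abs(x0) * ey     # 0.5 + 2.0 = 2.5
```

For f(x) = x², the remainder Δy − f′(x)Δx is exactly (Δx)², which is why the assertion holds with that bound.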
15.
Curl (mathematics)
–
In vector calculus, the curl is a vector operator that describes the infinitesimal rotation of a 3-dimensional vector field. At every point in the field, the curl is represented by a vector whose attributes characterize the rotation at that point. The direction of the curl is the axis of rotation, as determined by the right-hand rule. If the vector field represents the flow velocity of a moving fluid, then the curl is the circulation density of the fluid. A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The alternative terminology rotor or rotational and the alternative notations rot F and ∇ × F are often used for curl F; this is a similar phenomenon as in the 3-dimensional cross product, and the connection is reflected in the notation ∇ × for the curl. The name "curl" was first suggested by James Clerk Maxwell in 1871. The curl of a vector field F, denoted by curl F, or ∇ × F, or rot F, at a point is defined in terms of its projection onto various lines through the point. As such, the curl operator maps continuously differentiable functions f : ℝ³ → ℝ³ to continuous functions g : ℝ³ → ℝ³; in fact, it maps Cᵏ functions in ℝ³ to Cᵏ⁻¹ functions in ℝ³. Implicitly, curl is defined by (∇ × F) ⋅ n̂ ≝ lim_{A→0} (1/|A|) ∮_C F · dr, where ∮_C F · dr is a line integral along the boundary C of the area in question, |A| is the magnitude of the area, and n̂ is the unit normal to the plane of that area. Note that the equation for each component (∇ × F)_k can be obtained by exchanging each occurrence of a subscript 1, 2, 3 in cyclic permutation: 1→2, 2→3, and 3→1. If (x1, x2, x3) are the Cartesian coordinates and (u1, u2, u3) are the orthogonal curvilinear coordinates, then hᵢ = √((∂x1/∂uᵢ)² + (∂x2/∂uᵢ)² + (∂x3/∂uᵢ)²) is the length of the coordinate vector corresponding to uᵢ. The remaining two components of curl result from cyclic permutation of indices: 3,1,2 → 1,2,3 → 2,3,1. Suppose the vector field describes the velocity field of a fluid flow, and a small ball is located within the fluid; if the ball has a rough surface, the fluid flowing past it will make it rotate.
The rotation axis points in the direction of the curl of the field at the centre of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point. The notation ∇ × F has its origins in the similarities to the 3-dimensional cross product, and such notation involving operators is common in physics and algebra. However, in certain coordinate systems, such as polar-toroidal coordinates, the notation ∇ × F can be misleading. In Cartesian coordinates the curl expands as ∇ × F = (∂F_z/∂y − ∂F_y/∂z) i + (∂F_x/∂z − ∂F_z/∂x) j + (∂F_y/∂x − ∂F_x/∂y) k. Although expressed in terms of coordinates, the result is invariant under proper rotations of the coordinate axes. Equivalently, in index notation, ∇ × F = e_k ε_{klm} ∇_l F_m, where the e_k are the coordinate vector fields. Equivalently, using the exterior derivative, the curl can be expressed as ∇ × F = (⋆(dF♭))♯, where ♭ and ♯ are the musical isomorphisms and ⋆ is the Hodge star.
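The Cartesian component formula can be checked numerically with central finite differences (illustrative code, no special libraries; the function names are assumptions). For the rigid-rotation field F = (−y, x, 0), the curl is (0, 0, 2) everywhere, matching the "angular speed is half the curl" picture for a rotation of angular speed 1:

```python
# Sketch: approximate ∇×F = (∂Fz/∂y − ∂Fy/∂z, ∂Fx/∂z − ∂Fz/∂x, ∂Fy/∂x − ∂Fx/∂y)
# with central differences at a point p.
def curl(F, p, h=1e-5):
    def d(i, j):  # ∂F_i/∂x_j at p
        q1, q2 = list(p), list(p)
        q1[j] += h
        q2[j] -= h
        return (F(q1)[i] - F(q2)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1))

F = lambda p: (-p[1], p[0], 0.0)        # rigid rotation about the z-axis
c = curl(F, (0.3, -0.7, 1.2))
assert all(abs(a - b) < 1e-6 for a, b in zip(c, (0.0, 0.0, 2.0)))
```

Central differences are exact (up to rounding) on this linear field, which is why such a tight tolerance works.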
16.
Fundamental theorem of calculus
–
The fundamental theorem of calculus is a theorem that links the concept of the derivative of a function with the concept of the function's integral. The first part of the theorem guarantees the existence of antiderivatives for continuous functions. The second part of the theorem has key practical applications, because it simplifies the computation of definite integrals. The fundamental theorem of calculus relates differentiation and integration, showing that these two operations are essentially inverses of one another. Before the discovery of this theorem, it was not recognized that the two operations were related. Ancient Greek mathematicians knew how to compute area via infinitesimals, an operation that we would now call integration. The first published statement and proof of a rudimentary form of the fundamental theorem, strongly geometric in character, was by James Gregory. Isaac Barrow proved a more generalized version of the theorem, while his student Isaac Newton completed the development of the surrounding mathematical theory. Gottfried Leibniz systematized the knowledge into a calculus for infinitesimal quantities. For a continuous function y = f(x) whose graph is plotted as a curve, each value of x has a corresponding area function A(x), representing the area beneath the curve between 0 and x. The function A(x) may not be known, but it is given that it represents the area under the curve. The area under the curve between x and x + h could be computed by finding the area between 0 and x + h, then subtracting the area between 0 and x; in other words, the area of this "sliver" would be A(x + h) − A(x). There is another way to estimate the area of this same sliver: as shown in the accompanying figure, h is multiplied by f(x) to find the area of a rectangle that is approximately the same size as this sliver. So A(x + h) − A(x) ≈ f(x)·h. In fact, this becomes a perfect equality if we add the red portion of the excess area shown in the diagram.
So A(x + h) − A(x) = f(x)·h + (red excess). Rearranging terms, f(x) = (A(x + h) − A(x))/h − (red excess)/h. As h approaches 0 in the limit, the last fraction can be shown to go to zero. This is true because the area of the red portion of the region is less than or equal to the area of the tiny black-bordered rectangle. More precisely, |f(x) − (A(x + h) − A(x))/h| = |red excess|/h ≤ (h·|f(x + h) − f(x)|)/h = |f(x + h) − f(x)|, and by the continuity of f, the latter expression tends to zero as h does. Therefore, the left-hand side tends to zero as h does; that is, the derivative of the area function A(x) exists and equals the original function f(x), so the area function is simply an antiderivative of the original function. Computing the derivative of a function and "finding the area" under its curve are opposite operations; this is the crux of the fundamental theorem of calculus. Intuitively, the theorem states that the sum of infinitesimal changes in a quantity over time adds up to the net change in the quantity.
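The claim that A′(x) = f(x) can be checked numerically (illustrative code; `area` is a made-up helper that approximates the area function with a midpoint Riemann sum):

```python
# Sketch: approximate A(x) = ∫₀ˣ f(t) dt and check that its derivative is f(x).
def area(f, x, n=100_000):
    """Midpoint Riemann sum for the area under f between 0 and x."""
    h = x / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

f = lambda t: 3 * t ** 2          # an antiderivative is t³, so A(x) = x³
x, h = 1.5, 1e-4
a_deriv = (area(f, x + h) - area(f, x - h)) / (2 * h)   # A'(x), numerically
assert abs(area(f, x) - x ** 3) < 1e-6   # A matches the antiderivative t³
assert abs(a_deriv - f(x)) < 1e-3        # A'(x) ≈ f(x), as the theorem states
```

The second assertion is exactly the "opposite operations" statement: differentiating the numerically integrated area function recovers the integrand.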
17.
Divergence theorem
–
More precisely, the divergence theorem states that the outward flux of a vector field through a closed surface is equal to the volume integral of the divergence over the region inside the surface. Intuitively, it states that the sum of all sources minus the sum of all sinks gives the net flow out of a region. The divergence theorem is an important result for the mathematics of physics and engineering, in particular in electrostatics and fluid dynamics. In physics and engineering, the divergence theorem is usually applied in three dimensions; however, it generalizes to any number of dimensions. In one dimension, it is equivalent to the fundamental theorem of calculus; in two dimensions, it is equivalent to Green's theorem. The theorem is a special case of the more general Stokes' theorem. If a fluid is flowing in some area, then the rate at which fluid flows out of a region within that area can be calculated by adding up the sources inside the region. The fluid flow is represented by a vector field, and the vector field's divergence at a given point describes the strength of the source or sink there. So, integrating the field's divergence over the interior of the region should equal the integral of the field over the region's boundary; the divergence theorem says that this is true. Suppose V is a subset of Rⁿ which is compact and has a piecewise smooth boundary S. If F is a continuously differentiable vector field defined on a neighborhood of V, then we have ∭_V (∇ ⋅ F) dV = ∯_S (F ⋅ n) dS. The left side is a volume integral over the volume V; the right side is the surface integral over the boundary of the volume V, with n the outward-pointing unit normal. The closed manifold ∂V is quite generally the boundary of V oriented by outward-pointing normals; the symbol ∯ within the two integrals stresses once more that ∂V is a closed surface. By replacing F in the divergence theorem with specific forms, other useful identities can be derived. With F → Fg for a scalar function g and a vector field F, ∭_V [F ⋅ ∇g + g (∇ ⋅ F)] dV = ∯_S g F ⋅ n dS.
A special case of this is F = ∇f, in which case the theorem is the basis for Green's identities. With F → F × G for two vector fields F and G, ∭_V [G ⋅ (∇ × F) − F ⋅ (∇ × G)] dV = ∯_S (F × G) ⋅ n dS. With F → f c for a scalar function f and a vector field c, ∭_V c ⋅ ∇f dV = ∯_S (f c) ⋅ n dS − ∭_V f (∇ ⋅ c) dV. The last term on the right vanishes for constant c or, more generally, for any divergence-free vector field. With F → c × F for a vector field F and a constant vector c, ∭_V c ⋅ (∇ × F) dV = ∯_S (n × F) ⋅ c dS. Suppose we wish to evaluate ∯_S F ⋅ n dS, where S is the unit sphere x² + y² + z² = 1 bounding the ball W. Since the function y is positive in one hemisphere of W and negative in the other, in an equal and opposite way, its integral over W vanishes, and the same is true for z: ∭_W y dV = ∭_W z dV = 0.
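A numerical sanity check of the theorem itself (illustrative code; `volume_integral` is a made-up name): take F(x, y, z) = (x, y, z) on the unit ball W. Then div F = 3, so the volume integral is 3·(4π/3) = 4π, while on the unit sphere F coincides with the outward normal n, so the flux is ∯ 1 dS = 4π as well.

```python
import math

# Sketch: midpoint-grid approximation of ∭_W div F dV over the unit ball,
# where F = (x, y, z) and div F = 3 everywhere.
def volume_integral(n=60):
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        x = -1 + (i + 0.5) * h
        for j in range(n):
            y = -1 + (j + 0.5) * h
            for k in range(n):
                z = -1 + (k + 0.5) * h
                if x * x + y * y + z * z <= 1.0:
                    total += 3.0 * h ** 3     # div F = 3 on each interior cell
    return total

assert abs(volume_integral() - 4 * math.pi) < 0.05   # both sides ≈ 12.566
```

The tolerance is loose because the grid only crudely resolves the spherical boundary; refining `n` shrinks the error.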
18.
Green's theorem
–
In mathematics, Green's theorem gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C. It is named after George Green and is the two-dimensional special case of the more general Kelvin–Stokes theorem. Let C be a positively oriented, piecewise smooth, simple closed curve in a plane, and let D be the region it bounds. If L and M are functions of (x, y) defined on an open region containing D and have continuous partial derivatives there, then ∮_C (L dx + M dy) = ∬_D (∂M/∂x − ∂L/∂y) dA. In physics, Green's theorem is used to solve two-dimensional flow integrals. In plane geometry, and in particular area surveying, Green's theorem can be used to determine the area of a region. The following is a proof of half of the theorem for the simplified area D, a type I region where C1 and C3 are curves connected by vertical lines. A similar proof exists for the other half of the theorem when D is a type II region, where C2 and C4 are curves connected by horizontal lines. Putting these two together, the theorem is thus proven for regions of type III (regions that are both type I and type II). The general case can then be deduced from this case by decomposing D into a set of type III regions. If it can be shown that ∮_C L dx = ∬_D (−∂L/∂y) dA and ∮_C M dy = ∬_D (∂M/∂x) dA hold for regions of type I and type II respectively, then Green's theorem follows for regions of type III. Assume region D is a type I region and can thus be characterized, as pictured on the right, by D = {(x, y) | a ≤ x ≤ b, g1(x) ≤ y ≤ g2(x)}, where g1 and g2 are continuous functions on [a, b]. Compute the double integral: ∬_D ∂L/∂y dA = ∫_a^b ∫_{g1(x)}^{g2(x)} ∂L/∂y dy dx = ∫_a^b [L(x, g2(x)) − L(x, g1(x))] dx. Now compute the line integral. C can be rewritten as the union of four curves C1, C2, C3, C4. With C1, use the parametric equations x = x, y = g1(x), a ≤ x ≤ b; then ∫_{C1} L dx = ∫_a^b L(x, g1(x)) dx. With C3, use the parametric equations x = x, y = g2(x), a ≤ x ≤ b; then ∫_{C3} L dx = −∫_{−C3} L dx = −∫_a^b L(x, g2(x)) dx. The integral over C3 is negated because it goes in the negative direction from b to a, as C is oriented positively.
On C2 and C4, x remains constant, meaning ∫_{C4} L dx = ∫_{C2} L dx = 0. Combining these results, we get ∮_C L dx = ∬_D (−∂L/∂y) dA for regions of type I. A similar treatment yields ∮_C M dy = ∬_D (∂M/∂x) dA for regions of type II, and putting the two together gives the result for regions of type III. Write F for the vector-valued function F = (L, M). Start with the left side of Green's theorem: ∮_C (L dx + M dy) = ∮_C (L, M) ⋅ (dx, dy) = ∮_C F ⋅ dr.
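As a sanity check on the statement of the theorem (illustrative code, nothing library-specific): take L = −y and M = x on the unit disk D, so ∂M/∂x − ∂L/∂y = 2 and both sides of Green's theorem equal 2·area(D) = 2π.

```python
import math

# Sketch: ∮_C (L dx + M dy) over the positively oriented unit circle,
# with L = −y and M = x, approximated by a fine parametric sum.
def line_integral(n=100_000):
    total, dt = 0.0, 2 * math.pi / n
    for i in range(n):
        t = i * dt
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t) * dt, math.cos(t) * dt  # increments along C
        total += -y * dx + x * dy
    return total

assert abs(line_integral() - 2 * math.pi) < 1e-9     # matches ∬_D 2 dA = 2π
```

This choice of L and M is also the classical surveyor's trick mentioned above: one half of ∮(−y dx + x dy) computes the enclosed area.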
19.
Stokes' Theorem
–
In vector calculus, and more generally differential geometry, Stokes' theorem, discovered in its modern form by É. Cartan and first published in 1945, is a statement about the integration of differential forms on manifolds. This modern form of Stokes' theorem is a vast generalization of a classical result: Lord Kelvin communicated the classical statement to George Stokes in a letter dated July 2, 1850, and Stokes set the theorem as a question on the 1854 Smith's Prize exam, which led to the result bearing his name, even though it was actually first published by Hermann Hankel in 1861. This classical statement, along with the divergence theorem and the fundamental theorem of calculus, is a special case of the general result. The fundamental theorem of calculus states that the integral of a function f over an interval [a, b] can be calculated by finding an antiderivative F of f; by the choice of F, dF/dx = f. In the parlance of differential forms, this is saying that f dx is the exterior derivative of the 0-form (i.e. function) F; in other words, that dF = f dx. The general Stokes' theorem applies to higher differential forms ω instead of just 0-forms such as F. A closed interval [a, b] is a simple example of a one-dimensional manifold with boundary; its boundary is the set consisting of the two points a and b. Integrating f over the interval may be generalized to integrating forms on a higher-dimensional manifold. Two technical conditions are needed: the manifold has to be orientable, and the form has to be compactly supported in order to give a well-defined integral. The two points a and b form the boundary of the closed interval. More generally, Stokes' theorem applies to oriented manifolds M with boundary; the boundary ∂M of M is itself a manifold and inherits a natural orientation from that of M. For example, the natural orientation of the interval gives an orientation of the two boundary points: intuitively, a inherits the opposite orientation as b, as they are at opposite ends of the interval. So, "integrating" F over the two boundary points a, b amounts to taking the difference F(b) − F(a). In even simpler terms, points can be thought of as the boundaries of curves, so the fundamental theorem reads ∫_[a,b] f(x) dx = ∫_[a,b] dF = ∫_{∂[a,b]} F = F(b) − F(a).
Let Ω be an oriented smooth manifold with boundary, of dimension n, and let α be an n-form on Ω. First, suppose that α is compactly supported in the domain of a single oriented coordinate chart {U, φ}. In this case, we define the integral of α over Ω as ∫_Ω α = ∫_{φ(U)} (φ⁻¹)∗α, i.e. via the pullback of α to Rⁿ. This quantity is well-defined; that is, it does not depend on the choice of the coordinate charts, nor on the partition of unity. Stokes' theorem reads: if ω is an (n − 1)-form with compact support on Ω, then ∫_Ω dω = ∮_{∂Ω} ω. Here d is the exterior derivative, which is defined using the manifold structure only. On the right-hand side, a circle is sometimes used within the integral sign to stress the fact that the (n − 1)-manifold ∂Ω has no boundary. The right-hand side of the equation is often used to formulate integral laws.
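The classical Kelvin–Stokes special case can be checked numerically (illustrative code; the function name is an assumption). For F = (z, x, y), the curl is (1, 1, 1); over the unit disk in the plane z = 0, with unit normal (0, 0, 1), the surface integral of (∇×F)·n is the disk's area π, which should match the circulation of F around the boundary circle.

```python
import math

# Sketch: ∮_C F·dr around the unit circle in the plane z = 0, for F = (z, x, y).
# Stokes' theorem predicts this equals ∬_S (∇×F)·n dS = area of disk = π.
def circulation(n=200_000):
    total, dt = 0.0, 2 * math.pi / n
    for i in range(n):
        t = i * dt
        x, z = math.cos(t), 0.0
        dx, dy = -math.sin(t) * dt, math.cos(t) * dt
        total += z * dx + x * dy          # F·dr, with dz = 0 on this curve
    return total

assert abs(circulation() - math.pi) < 1e-9
```

On this curve only the x·dy term survives, and its discrete sum over a full period is exact up to rounding, hence the tight tolerance.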
21.
Topology
–
In mathematics, topology is concerned with the properties of space that are preserved under continuous deformations, such as stretching, crumpling and bending, but not tearing or gluing. This can be studied by considering a collection of subsets, called open sets; important topological properties include connectedness and compactness. Topology developed as a field of study out of geometry and set theory, through analysis of concepts such as space, dimension, and transformation. Such ideas go back to Gottfried Leibniz, who in the 17th century envisioned the geometria situs. Leonhard Euler's Seven Bridges of Königsberg problem and polyhedron formula are arguably the field's first theorems. The term topology was introduced by Johann Benedict Listing in the 19th century, and by the middle of the 20th century topology had become a major branch of mathematics. General topology defines the basic notions used in all branches of topology. Algebraic topology tries to measure degrees of connectivity using algebraic constructs such as homology. Differential topology is the field dealing with differentiable functions on differentiable manifolds; it is closely related to differential geometry, and together they make up the geometric theory of differentiable manifolds. Geometric topology primarily studies manifolds and their embeddings in other manifolds; a particularly active area is low-dimensional topology, which studies manifolds of four or fewer dimensions. This includes knot theory, the study of mathematical knots. Topology, as a well-defined mathematical discipline, originates in the early part of the twentieth century, but some isolated results can be traced back several centuries. Among these are certain questions in geometry investigated by Leonhard Euler, whose 1736 paper on the Seven Bridges of Königsberg is regarded as one of the first practical applications of topology.
On 14 November 1750 Euler wrote to a friend that he had realised the importance of the edges of a polyhedron; this led to his polyhedron formula, V − E + F = 2. Some authorities regard this analysis as the first theorem of topology, signalling the birth of the subject. Further contributions were made by Augustin-Louis Cauchy, Ludwig Schläfli, Johann Benedict Listing, Bernhard Riemann and Enrico Betti. Listing introduced the term Topologie in Vorstudien zur Topologie, written in his native German, in 1847; the term topologist, in the sense of a specialist in topology, was used in 1905 in the magazine Spectator. Their work was corrected, consolidated and greatly extended by Henri Poincaré: in 1895 he published his ground-breaking paper Analysis Situs, which introduced the concepts now known as homotopy and homology, now considered part of algebraic topology. Unifying the work on function spaces of Georg Cantor, Vito Volterra, Cesare Arzelà, Jacques Hadamard, Giulio Ascoli and others, Maurice Fréchet introduced the metric space in 1906. A metric space is now considered a special case of a general topological space. In 1914, Felix Hausdorff coined the term "topological space" and gave the definition for what is now called a Hausdorff space. Currently, a topological space is a slight generalization of Hausdorff spaces, given in 1922 by Kazimierz Kuratowski.
22.
De Rham's theorem
–
De Rham cohomology is a cohomology theory based on the existence of differential forms with prescribed properties. The de Rham complex is the cochain complex of exterior differential forms on some smooth manifold M: 0 → Ω⁰ →d Ω¹ →d Ω² →d Ω³ → ⋯, where Ω⁰ is the space of smooth functions on M, Ω¹ is the space of 1-forms, and so forth. A form is closed if its exterior derivative is zero, and exact if it is the exterior derivative of another form; since d² = 0, every exact form is closed. The converse, however, is not in general true: closed forms need not be exact. A simple but significant case is the 1-form of angle measure on the unit circle; notably, we can change the relevant topology of a space by removing just one point. The idea of de Rham cohomology is to classify the different types of closed forms on a manifold. One performs this classification by saying that two closed forms α, β ∈ Ωᵏ are cohomologous if they differ by an exact form; this induces an equivalence relation on the space of closed forms in Ωᵏ. One then defines the k-th de Rham cohomology group H^k_dR to be the set of these equivalence classes. Note that, for any manifold M with n connected components, H⁰_dR(M) ≅ Rⁿ; this follows from the fact that any smooth function on M with zero derivative is constant on each of the connected components of M. One may often find the general de Rham cohomologies of a manifold using this fact about the zero cohomology. Another useful fact is that the de Rham cohomology is a homotopy invariant. For the n-sphere, with n > 0, m ≥ 0, and I an open real interval, H^k_dR(Sⁿ × Iᵐ) ≃ R if k = 0 or k = n, and 0 otherwise. Similarly, for the n-torus with n > 0, we obtain H^k_dR(Tⁿ) ≃ R^(n choose k). Punctured Euclidean space is simply Euclidean space with the origin removed. Stokes' theorem is an expression of duality between de Rham cohomology and the homology of chains: it says that the pairing of differential forms and chains, via integration, gives a homomorphism from de Rham cohomology H^k_dR to singular cohomology groups Hᵏ. De Rham's theorem, proved by Georges de Rham in 1931, asserts that for a smooth manifold M this map is an isomorphism between de Rham cohomology and singular cohomology.
The wedge product endows the direct sum of these groups with a ring structure. A further result of the theorem is that the two cohomology rings are isomorphic, where the analogous product on singular cohomology is the cup product. Let Ωᵏ denote the sheaf of germs of k-forms on M. By the Poincaré lemma, the following sequence of sheaves is exact: 0 → R → Ω⁰ →d Ω¹ →d Ω² →d ⋯ →d Ωᵐ → 0. This sequence breaks up into short exact sequences 0 → dΩᵏ⁻¹ → Ωᵏ →d dΩᵏ → 0, each of which induces a long exact sequence in cohomology.
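The circle example mentioned above (a closed 1-form that is not exact) can be exhibited numerically: the angle form ω = (−y dx + x dy)/(x² + y²) on the punctured plane is closed, yet its integral around the unit circle is 2π. If ω were exact, ω = df, Stokes' theorem would force the integral around any closed loop to vanish; the non-zero value detects the nontrivial first de Rham cohomology of R² \ {0}. (Illustrative code; the function name is an assumption.)

```python
import math

# Sketch: ∮ ω around the unit circle for the angle form
# ω = (−y dx + x dy)/(x² + y²), defined on the plane minus the origin.
def integrate_angle_form(n=100_000):
    total, dt = 0.0, 2 * math.pi / n
    for i in range(n):
        t = i * dt
        x, y = math.cos(t), math.sin(t)
        dx, dy = -math.sin(t) * dt, math.cos(t) * dt
        total += (-y * dx + x * dy) / (x * x + y * y)
    return total

assert abs(integrate_angle_form() - 2 * math.pi) < 1e-9   # 2π ≠ 0: not exact
```

On the unit circle the integrand reduces to dt, so the discrete sum is exact up to rounding.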
23.
Differentiable manifold
–
In mathematics, a differentiable manifold is a type of manifold that is locally similar enough to a linear space to allow one to do calculus. Any manifold can be described by a collection of charts, also known as an atlas. One may then apply ideas from calculus while working within the individual charts, since each chart lies within a linear space to which the usual rules of calculus apply. If the charts are suitably compatible, then computations done in one chart are valid in any other differentiable chart. In formal terms, a differentiable manifold is a topological manifold with a globally defined differential structure. Any topological manifold can be given a differential structure locally by using the homeomorphisms in its atlas and the standard differential structure on a linear space. In other words, where the domains of charts overlap, the coordinates defined by each chart are required to be differentiable with respect to the coordinates defined by every other chart in the atlas. The maps that relate the coordinates defined by the various charts to one another are called transition maps. Differentiability means different things in different contexts, including continuously differentiable, k times differentiable, and smooth. Furthermore, the ability to induce such a differential structure on an abstract space allows one to extend the definition of differentiability to spaces without global coordinate systems. A differential structure allows one to define the globally differentiable tangent space and differentiable functions. Differentiable manifolds are very important in physics: special kinds of differentiable manifolds form the basis for physical theories such as classical mechanics and general relativity. It is possible to develop a calculus for differentiable manifolds, and this leads to such mathematical machinery as the exterior calculus.
The study of calculus on differentiable manifolds is known as differential geometry. The emergence of differential geometry as a distinct discipline is generally credited to Carl Friedrich Gauss and Bernhard Riemann. Riemann first described manifolds in his famous habilitation lecture before the faculty at Göttingen, and these ideas found a key application in Einstein's theory of general relativity and its underlying equivalence principle. A modern definition of a 2-dimensional manifold was given by Hermann Weyl in his 1913 book on Riemann surfaces; the widely accepted general definition of a manifold in terms of an atlas is due to Hassler Whitney. A presentation of a manifold is a second countable Hausdorff space that is locally homeomorphic to a linear space, together with an atlas. This formalizes the notion of patching together pieces of a space to make a manifold; the manifold produced also contains the data of how it has been patched together. However, different atlases may produce the same manifold, and a manifold does not come with a preferred atlas. Thus, one defines a manifold to be a space as above with an equivalence class of atlases. There are a number of different types of manifolds, depending on the precise differentiability requirements on the transition functions. Some common examples include the following: a differentiable manifold is a topological manifold equipped with an equivalence class of atlases whose transition maps are all differentiable.
24.
Vector field
–
In vector calculus, a vector field is an assignment of a vector to each point in a subset of space. A vector field in the plane can be visualised as a collection of arrows, each attached to a point of the plane. The elements of differential and integral calculus extend naturally to vector fields. Vector fields can usefully be thought of as representing the velocity of a flow in space. In coordinates, a vector field on a domain in n-dimensional Euclidean space can be represented as a vector-valued function that associates an n-tuple of real numbers to each point of the domain; this representation of a vector field depends on the coordinate system. Vector fields are often discussed on open subsets of Euclidean space, but also make sense on other subsets such as surfaces, where they associate an arrow tangent to the surface at each point. More generally, vector fields are defined on manifolds, which are spaces that look like Euclidean space on small scales. In this setting, a vector field gives a tangent vector at each point of the manifold. Vector fields are one kind of tensor field. Given a subset S in Rⁿ, a vector field is represented by a vector-valued function V : S → Rⁿ in standard Cartesian coordinates. If each component of V is continuous, then V is a continuous vector field. A vector field can be visualized as assigning a vector to individual points within an n-dimensional space. In physics, a vector is additionally distinguished by how its coordinates change when one measures the same vector with respect to a different background coordinate system; the transformation properties of vectors distinguish a vector as a distinct entity from a simple list of scalars. Thus, suppose that (x1, …, xn) is a choice of Cartesian coordinates in terms of which the vector V has components Vx; then the components of V in any new coordinates are required to satisfy a contravariant transformation law. Given a differentiable manifold M, a vector field on M is an assignment of a tangent vector to each point in M.
More precisely, a vector field F is a mapping from M into the tangent bundle TM such that p ∘ F is the identity mapping, where p denotes the projection from TM to M; in other words, a vector field is a section of the tangent bundle. If the manifold M is smooth or analytic (that is, the changes of coordinates are smooth or analytic), then one can make sense of the notion of smooth (or analytic) vector fields. The collection of all smooth vector fields on a smooth manifold M is often denoted by Γ(TM) or C∞(M, TM). A vector field for the movement of air on Earth will associate, to every point on the surface of the Earth, a vector with the wind speed and direction at that point. This can be drawn using arrows to represent the wind; the length of each arrow indicates the wind speed.
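The coordinate picture above can be sketched in a few lines of Python (an illustrative example, not from the source; the rotational field V(x, y) = (−y, x) is a hypothetical choice):

```python
# A vector field on (a subset of) R^2: a function assigning a 2-vector to each point.
# Hypothetical example: a rotational flow V(x, y) = (-y, x).
def V(x, y):
    return (-y, x)

# Evaluate the field at a few points; each point gets an arrow (vector).
for p in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    print(p, "->", V(*p))

# This particular field is everywhere tangent to circles about the origin:
# V(p) is perpendicular to p, so their dot product vanishes.
x, y = 3.0, 4.0
vx, vy = V(x, y)
assert x * vx + y * vy == 0.0
```

Thinking of V as the velocity of a flow, following the arrows from point to point traces out the streamlines of the flow.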
25.
Manifold (mathematics)
–
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighbourhood that is homeomorphic to the Euclidean space of dimension n. One-dimensional manifolds include lines and circles, but not figure eights; two-dimensional manifolds are also called surfaces. Although a manifold locally resembles Euclidean space, globally it may not. For example, the surface of the sphere is not a Euclidean space, but in a region it can be charted by means of map projections of the region into the Euclidean plane. When a region appears in two neighbouring charts, the two representations do not coincide exactly, and a transformation is needed to pass from one to the other. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions. One important class of manifolds is the class of differentiable manifolds; this differentiable structure allows calculus to be done on manifolds. A Riemannian metric on a manifold allows distances and angles to be measured. Symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity. After a line, the circle is the simplest example of a topological manifold. Topology ignores bending, so a small piece of a circle is treated exactly the same as a small piece of a line. Consider, for instance, the top part of the unit circle x² + y² = 1, where the y-coordinate is positive. Any point of this arc can be uniquely described by its x-coordinate, so projection onto the first coordinate is a continuous, and invertible, mapping from the arc to the open interval (−1, 1). Such functions, along with the open regions they map, are called charts. Similarly, there are charts for the bottom, left, and right parts of the circle. Together, these parts cover the whole circle, and the four charts form an atlas for the circle.
The top and right charts, χtop and χright respectively, overlap in their domains: their intersection lies in the quarter of the circle where both the x- and y-coordinates are positive. Each maps this part into the interval (0, 1), though differently. Let a be any number in (0, 1); then T(a) = χright(χtop⁻¹(a)) = χright(a, √(1 − a²)) = √(1 − a²). Such a function is called a transition map. The top, bottom, left, and right charts show that the circle is a manifold, but charts need not be geometric projections, and the number of charts is a matter of some choice. Charts based on slopes of lines through a point of the circle provide a second atlas for the circle, with transition map t = 1/s. Each such chart omits a single point of the circle, either for s or for t, and it can be proved that it is not possible to cover the full circle with a single chart. Viewed using calculus, the transition function T is simply a function between open intervals, which gives a meaning to the statement that T is differentiable.
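The transition map between the top and right charts can be checked directly (a minimal sketch of the construction above; the function names are ad hoc):

```python
import math

# Charts for the unit circle: chi_top maps a point of the top arc to its
# x-coordinate; chi_right maps a point of the right arc to its y-coordinate.
def chi_top_inv(a):
    # Inverse of the top chart: x-coordinate a in (-1, 1) -> point on the top arc.
    return (a, math.sqrt(1 - a * a))

def chi_right(p):
    # Right chart: point on the right arc -> its y-coordinate.
    return p[1]

def T(a):
    # Transition map chi_right o chi_top^{-1}, defined for a in (0, 1).
    return chi_right(chi_top_inv(a))

a = 0.6
assert abs(T(a) - math.sqrt(1 - a * a)) < 1e-12   # T(a) = sqrt(1 - a^2)
assert abs(T(0.6) - 0.8) < 1e-12                  # point (0.6, 0.8) on the circle
```

Because T is built from smooth functions on (0, 1), it is differentiable there, which is exactly the compatibility condition the two charts need.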
26.
Orientability
–
In mathematics, orientability is a property of surfaces in Euclidean space that measures whether it is possible to make a consistent choice of surface normal vector at every point. A choice of surface normal allows one to use the right-hand rule to define a clockwise direction of loops in the surface. More generally, orientability of an abstract surface, or manifold, measures whether one can consistently choose a clockwise orientation on it. Equivalently, a surface is orientable if a two-dimensional figure in the space cannot be moved around the space and back to where it started so that it looks like its own mirror image. The notion of orientability can be generalised to higher-dimensional manifolds as well. A manifold is orientable if it has a consistent choice of orientation, and a connected orientable manifold has exactly two different possible orientations. In this setting, various equivalent formulations of orientability can be given, depending on the desired application and level of generality. A surface S in the Euclidean space R3 is orientable if a two-dimensional figure cannot be moved around the surface and back to where it started so that it looks like its own mirror image. An abstract surface is orientable if a consistent concept of clockwise rotation can be defined on the surface in a continuous manner. That is to say that a loop going around one way on the surface can never be continuously deformed to a loop going around the opposite way. This turns out to be equivalent to the question of whether the surface contains no subset that is homeomorphic to the Möbius strip. Thus, for surfaces, the Möbius strip may be considered the source of all non-orientability. For an orientable surface, a consistent choice of clockwise is called an orientation, and the surface is called oriented. For surfaces embedded in Euclidean space, an orientation is specified by the choice of a continuously varying surface normal n at every point. If such a normal exists at all, then there are always two ways to select it: n or −n. More generally, an orientable surface admits exactly two orientations, and the distinction between an oriented surface and an orientable surface is subtle and frequently blurred.
Examples: Most surfaces we encounter in the physical world are orientable. Spheres, planes, and tori are orientable, for example, but Möbius strips, real projective planes, and Klein bottles are non-orientable. They, as visualized in 3 dimensions, all have just one side; the real projective plane and Klein bottle cannot be embedded in R3, only immersed with nice self-intersections. Note that locally an embedded surface always has two sides, so a near-sighted ant crawling on a one-sided surface would think there is an "other side". The essence of one-sidedness is that the ant can crawl from one side of the surface to the other without going through the surface or flipping over an edge, simply by crawling far enough. In general, the property of being orientable is not equivalent to being two-sided; however, this holds when the ambient space is orientable. For example, a torus embedded in K² × S¹ can be one-sided. Orientation by triangulation: Any surface has a triangulation, a decomposition into triangles such that each edge on a triangle is glued to at most one other edge.
27.
Chain (algebraic topology)
–
In algebraic topology, a simplicial k-chain is a formal linear combination of k-simplices. Integration is defined on chains by taking the linear combination of integrals over the simplices in the chain, with coefficients which are typically integers. The set of all k-chains forms a group, and the sequence of these groups is called a chain complex. The boundary of a chain is the linear combination of boundaries of the simplices in the chain. The boundary of a k-chain is a (k−1)-chain. Note that the boundary of a simplex is not a simplex, but a chain with coefficients 1 or −1; thus chains are the closure of simplices under the boundary operator. Example 1: The boundary of a path is the formal difference of its endpoints. Example 2: The boundary of a triangle is a formal sum of its edges, with signs arranged to make the traversal of the boundary counterclockwise. A chain is called a cycle when its boundary is zero. A chain that is the boundary of another chain is called a boundary. Boundaries are cycles, so chains form a chain complex, whose homology groups (cycles modulo boundaries) are called simplicial homology groups. Example 3: A 0-cycle is a linear combination of points such that the sum of all the coefficients is 0. Thus, the 0-homology group measures the number of connected components of the space. Example 4: The plane punctured at the origin has a nontrivial 1-homology group, since the unit circle is a cycle that is not a boundary. In differential geometry, the duality between the boundary operator on chains and the exterior derivative is expressed by the general Stokes' theorem.
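The boundary operator on chains is easy to implement. Below is a minimal Python sketch (not a library API; simplices are represented as ordered vertex tuples and chains as dicts from simplex to integer coefficient):

```python
from collections import defaultdict

# Boundary of a k-chain: for each simplex [v0, ..., vk], sum the faces
# obtained by deleting vertex i, with alternating sign (-1)^i.
def boundary(chain):
    out = defaultdict(int)
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]
            out[face] += coeff * (-1) ** i
    return {s: c for s, c in out.items() if c != 0}

# Example 1: the boundary of the path [A, B] is B - A.
print(boundary({("A", "B"): 1}))

# Example 2: the boundary of the triangle [A, B, C] is its signed edges.
tri = {("A", "B", "C"): 1}
print(boundary(tri))

# The boundary of a boundary is zero, so boundaries are cycles.
assert boundary(boundary(tri)) == {}
```

The final assertion is the algebraic identity ∂∂ = 0 that makes the sequence of chain groups a chain complex.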
28.
Measure theory
–
In mathematical analysis, a measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size. In this sense, a measure is a generalization of the concepts of length, area, and volume. For instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word, specifically 1. Technically, a measure is a function that assigns a non-negative real number or +∞ to (certain) subsets of a set X. It must assign 0 to the empty set and be countably additive: the measure of a subset that can be decomposed into a finite (or countably infinite) number of smaller disjoint subsets is the sum of the measures of the smaller subsets. In general, if one wants to associate a consistent size to each subset of a given set while satisfying the other axioms of a measure, one only finds trivial examples like the counting measure. This problem was resolved by defining measure only on a sub-collection of all subsets, the so-called measurable subsets, which are required to form a σ-algebra; this means that countable unions, countable intersections and complements of measurable subsets are measurable. Non-measurable sets in a Euclidean space, on which the Lebesgue measure cannot be defined consistently, are necessarily complicated in the sense of being badly mixed up with their complement. Indeed, their existence is a non-trivial consequence of the axiom of choice. Measure theory was developed in successive stages during the late 19th and early 20th centuries by Émile Borel, Henri Lebesgue, Johann Radon, and others. The main applications of measures are in the foundations of the Lebesgue integral and in Andrey Kolmogorov's axiomatisation of probability theory. Probability theory considers measures that assign to the whole set the size 1, and considers measurable subsets to be events whose probability is given by the measure. Ergodic theory considers measures that are invariant under, or arise naturally from, a dynamical system. Let X be a set and Σ a σ-algebra over X. A function μ from Σ to the extended real number line is called a measure if it satisfies the following properties. Non-negativity: for all E in Σ, μ(E) ≥ 0.
Countable additivity: for all countable collections {Ei}∞i=1 of pairwise disjoint sets in Σ, μ(⋃∞k=1 Ek) = ∑∞k=1 μ(Ek). One may require that at least one set E has finite measure. Then the empty set automatically has measure zero because of countable additivity: μ(E) = μ(E ∪ ∅ ∪ ∅ ∪ …) = μ(E) + μ(∅) + μ(∅) + …, which implies that μ(∅) = 0. If only the second and third conditions of the definition of measure above are met, and μ takes on at most one of the values ±∞, then μ is called a signed measure. The pair (X, Σ) is called a measurable space, and the members of Σ are called measurable sets. If (X, ΣX) and (Y, ΣY) are two measurable spaces, then a function f: X → Y is called measurable if for every Y-measurable set B ∈ ΣY, the preimage f⁻¹(B) is X-measurable. A triple (X, Σ, μ) is called a measure space. A probability measure is a measure with total measure one, i.e. μ(X) = 1; a probability space is a measure space with a probability measure.
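The axioms can be illustrated with the simplest example, the counting measure on a finite set (a sketch, not from the source; on a finite set countable additivity reduces to finite additivity):

```python
# Counting measure on subsets of a finite set: the "size" of a subset is
# its cardinality. This satisfies all three measure axioms.
def mu(E):
    return len(E)

E1, E2, E3 = {1, 2}, {3}, {4, 5, 6}            # pairwise disjoint subsets
union = E1 | E2 | E3
assert mu(union) == mu(E1) + mu(E2) + mu(E3)   # additivity on disjoint sets
assert mu(set()) == 0                          # the empty set has measure zero
assert all(mu(E) >= 0 for E in (E1, E2, E3))   # non-negativity
```

Note that additivity fails for overlapping sets (mu(E1 ∪ E1) ≠ 2·mu(E1)), which is why disjointness is part of the axiom.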
29.
Covector
–
In linear algebra, a one-form on a vector space is the same as a linear functional on the space. The usage of one-form in this context usually distinguishes the one-forms from higher-degree multilinear functionals on the space. In differential geometry, a one-form on a differentiable manifold is a smooth section of the cotangent bundle. Equivalently, a one-form on a manifold M is a smooth mapping of the total space of the tangent bundle of M to R whose restriction to each fibre is a linear functional on the tangent space: symbolically, α: TM → R, where αx := α|TxM : TxM → R is linear for each x. Often one-forms are described locally, particularly in local coordinates. From this perspective, a one-form has a covariant transformation law on passing from one coordinate system to another; thus a one-form is an order-1 covariant tensor field. Many real-world concepts can be described as one-forms. Indexing into a vector: the second element of a three-vector is given by the one-form (0, 1, 0); that is, (0, 1, 0)·(x, y, z) = y. Mean: the mean element of an n-vector is given by the one-form (1/n, 1/n, …, 1/n); that is, mean(v) = (1/n, …, 1/n)·v. Sampling: sampling with a kernel can be considered a one-form, where the one-form is the kernel shifted to the appropriate location. Net present value of a net cash flow R(t) is given by the one-form w(t) = (1 + i)^−t, where i is the discount rate; that is, NPV(R) = ⟨w, R⟩ = ∫₀∞ R(t)(1 + i)^−t dt. The most basic non-trivial differential one-form is the change-in-angle form dθ. This is defined as the derivative of the angle function θ; integrating this derivative along a path gives the total change in angle over the path, and integrating over a closed loop gives the winding number. In the language of differential geometry, this derivative is a one-form, and it is closed but not exact. This is the most basic example of such a form. Let U ⊆ R be open, and consider a differentiable function f: U → R with derivative f′. The differential df of f, at a point x₀ ∈ U, is defined as a certain linear map of the variable dx.
Specifically, dfx₀: dx ↦ f′(x₀) dx. Hence the map x ↦ dfx sends each point x to a linear functional dfx. This is the simplest example of a differential (one-)form. In terms of the de Rham complex, one has an assignment from zero-forms (functions) to one-forms, i.e. f ↦ df. See also: two-form, reciprocal lattice, tensor, inner product.
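The indexing and mean examples above are easy to check concretely. A minimal Python sketch, representing a one-form by its tuple of components and applying it as a dot product (names are ad hoc):

```python
# A one-form on R^n as a linear functional: pairing a fixed covector w
# with a vector v via the dot product.
def one_form(w):
    return lambda v: sum(wi * vi for wi, vi in zip(w, v))

v = (2.0, 5.0, 7.0)

# Indexing: the covector (0, 1, 0) extracts the second component.
second = one_form((0, 1, 0))
assert second(v) == 5.0

# Mean: the covector (1/n, ..., 1/n) computes the mean of an n-vector.
n = len(v)
mean = one_form((1 / n,) * n)
assert abs(mean(v) - (2 + 5 + 7) / 3) < 1e-12

# Linearity: alpha(u + v) = alpha(u) + alpha(v) for any one-form alpha.
u = (1.0, -1.0, 4.0)
s = tuple(a + b for a, b in zip(u, v))
assert abs(second(s) - (second(u) + second(v))) < 1e-12
```

The last assertion is the defining property: a one-form is nothing more than a linear functional on the space.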
30.
Cross product
–
In mathematics and vector algebra, the cross product or vector product is a binary operation on two vectors in three-dimensional space, denoted by the symbol ×. Given two linearly independent vectors a and b, the cross product a × b is a vector that is perpendicular to both a and b and therefore normal to the plane containing them. It has many applications in mathematics, physics and engineering, and it should not be confused with the dot product. If two vectors have the same direction, or if either one has zero length, then their cross product is zero. The cross product is anticommutative and is distributive over addition. The space R3 together with the cross product is an algebra over the real numbers, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket. Like the dot product, it depends on the metric of Euclidean space, but unlike the dot product, it also depends on a choice of orientation or handedness. If the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions; if one adds further requirements, such as that the product be uniquely defined, the possibilities narrow further. The cross product of two vectors a and b is defined only in three-dimensional space and is denoted by a × b; in physics, sometimes the notation a ∧ b is used. The cross product can be defined by the formula a × b = |a| |b| sin(θ) n, where θ is the angle between a and b in the plane containing them and n is a unit vector perpendicular to that plane. If the vectors a and b are parallel, then by this formula the cross product of a and b is the zero vector 0. The direction of n is given by the right-hand rule: point the forefinger of the right hand in the direction of a and the middle finger in the direction of b; then the vector n is coming out of the thumb. Using this rule implies that the cross product is anticommutative, i.e. b × a = −(a × b): by pointing the forefinger toward b first, and then pointing the middle finger toward a, the thumb is forced in the opposite direction. Using the cross product requires the handedness of the coordinate system to be taken into account. If a left-handed coordinate system is used, the direction of the vector n is given by the left-hand rule. This, however, creates a problem, because transforming from one arbitrary reference system to another should not change the direction of n. The problem is clarified by realizing that the cross product of two vectors is not a (true) vector, but rather a pseudovector.
See cross product and handedness for more detail. In 1881, Josiah Willard Gibbs, and independently Oliver Heaviside, introduced both the dot product and the cross product, using a period (a · b) and an × (a × b), respectively, to denote them. The alternative names scalar product and vector product for the two operations are also widely used in the literature. Both the cross notation and the name cross product were possibly inspired by the fact that each scalar component of a × b is computed by multiplying non-corresponding components of a and b. Conversely, a dot product a · b involves multiplications between corresponding components of a and b. As explained below, the cross product can be expressed in the form of a determinant of a special 3 × 3 matrix.
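The determinant formula can be sketched directly (an illustrative Python implementation via the cofactor expansion of the formal determinant | i j k ; a1 a2 a3 ; b1 b2 b3 |, not from the source):

```python
# Cross product in R^3 via the cofactor expansion of the formal determinant.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
n = cross(a, b)
assert dot(n, a) == 0 and dot(n, b) == 0         # perpendicular to both factors
assert cross(b, a) == tuple(-c for c in n)       # anticommutativity: b x a = -(a x b)
assert cross(a, (2.0, 4.0, 6.0)) == (0.0, 0.0, 0.0)  # parallel vectors give zero
```

The three assertions verify the defining properties stated above: the result is normal to the plane of a and b, the product is anticommutative, and parallel vectors have zero cross product.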
31.
Open set
–
In topology, an open set is an abstract concept generalizing the idea of an open interval in the real line. The conditions defining open sets are very loose, and they allow enormous flexibility in their choice: in the two extremes, every set can be open, or no set can be open but the space itself and the empty set. In practice, however, open sets are usually chosen to be similar to the intervals of the real line. The notion of an open set provides a fundamental way to speak of nearness of points in a topological space. Once a choice of open sets is made, the properties of continuity, connectedness, and compactness can be defined using these open sets. Each choice of open sets for a space is called a topology. Although open sets and the topologies that they comprise are of central importance in point-set topology, they are also used as an organizational tool in other branches of mathematics. Intuitively, an open set provides a method to distinguish two points: for example, if about one point in a topological space there exists an open set not containing another (distinct) point, the two points are referred to as topologically distinguishable. In this manner, one may speak of whether two subsets of a topological space are near without concretely defining a metric on the topological space. Therefore, topological spaces may be seen as a generalization of metric spaces. In the set of all real numbers, one has the natural Euclidean metric, that is, a function which measures the distance between two real numbers: d(x, y) = |x − y|. Therefore, given a real number x, one can speak of the set of all points close to that real number, that is, within ε of x. In essence, points within ε of x approximate x to an accuracy of degree ε. Note that ε > 0 always, but as ε becomes smaller and smaller, one obtains points that approximate x to a higher and higher degree of accuracy. For example, if x = 0 and ε = 1, the points within ε of x are precisely the points of the interval (−1, 1); however, with ε = 0.5, the points within ε of x are precisely the points of (−0.5, 0.5). Clearly, these points approximate x to a greater degree of accuracy than when ε = 1.
The previous discussion shows, for the case x = 0, that one may approximate x to higher and higher degrees of accuracy by defining ε to be smaller and smaller. In particular, sets of the form (−ε, ε) give us a lot of information about points close to x = 0. Thus, rather than speaking of a concrete Euclidean metric, one may use such sets to describe points close to x. For example, if we were to allow only R itself as a set for measuring closeness to 0, then there is only one possible degree of accuracy one may achieve in approximating 0: being a member of R. Thus, we find that in some sense, every real number is distance 0 away from 0. It may help in this case to think of the measure as being a binary condition: all things in R are equally close to 0. In general, one refers to the family of sets containing 0, used to approximate 0, as a neighborhood basis. In fact, one may generalize these notions to an arbitrary set, rather than just the real numbers. In this case, given a point x of that set, one may define a collection of sets around x (that is, containing x), used to approximate x. Of course, this collection would have to satisfy certain properties, for otherwise we may not have a well-defined method to measure distance.
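The ε-neighborhood picture can be sketched in a few lines (an illustrative example, not from the source; the predicate `within` is an ad hoc name):

```python
# Epsilon-neighborhoods of a point on the real line: the open intervals
# (x - eps, x + eps) under the Euclidean metric d(x, y) = |x - y|.
def within(eps, x, y=0.0):
    return abs(x - y) < eps

assert within(1.0, 0.7)        # 0.7 lies in the interval (-1, 1)
assert not within(0.5, 0.7)    # ... but not in (-0.5, 0.5)

# The neighborhoods are nested: anything within 0.5 of 0 is also within 1 of 0,
# so shrinking eps gives strictly more accurate approximations of 0.
for x in (-0.49, 0.0, 0.3):
    assert within(0.5, x) and within(1.0, x)
```

A set is then open in the usual topology on R exactly when every one of its points has some such ε-interval entirely contained in the set.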
32.
Smooth function
–
In mathematical analysis, the smoothness of a function is a property measured by the number of continuous derivatives it has. A smooth function is a function that has derivatives of all orders everywhere in its domain. A differentiability class is a classification of functions according to the properties of their derivatives; higher-order differentiability classes correspond to the existence of more derivatives. Consider an open set on the real line and a function f defined on that set with real values. Let k be a non-negative integer. The function f is said to be of class Ck if the derivatives f′, f′′, …, f⁽ᵏ⁾ exist and are continuous. The function f is said to be of class C∞, or smooth, if it has derivatives of all orders. The function f is said to be of class Cω, or analytic, if f is smooth and equals its Taylor series expansion around any point in its domain; Cω is thus strictly contained in C∞. Bump functions are examples of functions in C∞ but not in Cω. To put it differently, the class C0 consists of all continuous functions. The class C1 consists of all differentiable functions whose derivative is continuous; thus, a C1 function is exactly a function whose derivative exists and is of class C0. In general, Ck is contained in Ck−1 for every k, and C∞, the class of infinitely differentiable functions, is the intersection of the sets Ck as k varies over the non-negative integers. The function f(x) = x for x ≥ 0 and f(x) = 0 for x < 0 is continuous but not differentiable at x = 0, so it is of class C0 but not of class C1. The function g(x) = x² sin(1/x) for x ≠ 0, with g(0) = 0, is differentiable everywhere, but because cos(1/x) oscillates as x → 0, g′ is not continuous at zero; therefore, this function is differentiable but not of class C1. The functions f(x) = |x|^(k+1), where k is even, are continuous and k times differentiable at all x, but at x = 0 they are not (k + 1) times differentiable, so they are of class Ck but not of class Ck+1. The exponential function is analytic, so of class Cω. The trigonometric functions are also analytic wherever they are defined. The bump function f(x) = e^(−1/(1−x²)) for |x| < 1, and f(x) = 0 otherwise, is an example of a smooth function with compact support. Let n and m be positive integers. If f is a function from an open subset of Rn with values in Rm, then f has component functions f1, …, fm.
Each of these may or may not have partial derivatives; f is said to be of class Ck if all of its partial derivatives of order at most k exist and are continuous. The classes C∞ and Cω are defined as before. These criteria of differentiability can be applied to the transition functions of a differential structure; the resulting space is called a Ck manifold. If one wishes to start with a coordinate-independent definition of the class Ck, one may start by considering maps between Banach spaces. A map from one Banach space to another is differentiable at a point if there is a linear map which approximates it near that point.
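The classic differentiable-but-not-C1 example mentioned above can be probed numerically (a sketch under the stated definitions; the sample points are a hypothetical choice):

```python
import math

# The function x^2 sin(1/x) (with value 0 at x = 0) is differentiable
# everywhere, but its derivative contains a cos(1/x) term that oscillates
# as x -> 0, so the derivative is not continuous at 0.
def f(x):
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

def fprime(x):
    # Derivative for x != 0, by the product and chain rules.
    return 2 * x * math.sin(1.0 / x) - math.cos(1.0 / x)

# f'(0) exists and equals 0: the difference quotient f(h)/h = h sin(1/h) -> 0.
for h in (1e-3, 1e-6, 1e-9):
    assert abs(f(h) / h) <= abs(h)

# But f'(x) does not tend to 0: along x_n = 1/(2 pi n), f'(x_n) is near -1.
for n in (10, 100, 1000):
    x = 1.0 / (2 * math.pi * n)
    assert abs(fprime(x) + 1.0) < 1e-2
```

Since the derivative takes values near −1 arbitrarily close to 0 while f′(0) = 0, the derivative cannot be continuous there, exactly as the text claims.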
33.
Directional derivative
–
In mathematics, the directional derivative of a multivariable differentiable function along a given vector v at a given point x intuitively represents the instantaneous rate of change of the function, moving through x with a velocity specified by v. It therefore generalizes the notion of a partial derivative, in which the rate of change is taken along one of the coordinate curves, all other coordinates being constant. The directional derivative is a special case of the Gâteaux derivative. The directional derivative of a function f(x) = f(x1, …, xn) along a vector v = (v1, …, vn) is the function defined by the limit ∇v f(x) = lim(h→0) [f(x + hv) − f(x)] / h. In the context of a function on a Euclidean space, some texts restrict the vector v to being a unit vector; without the restriction, this definition is valid in a broad range of contexts, for example where the norm of a vector is undefined. Intuitively, the directional derivative of f at a point x represents the rate of change of f, with respect to time, when moving past x at velocity v. Some authors define the directional derivative to be with respect to an arbitrary nonzero vector v after normalization, thus being independent of its magnitude. This definition gives the rate of increase of f per unit of distance moved in the given direction; in this case, one has ∇v f(x) = lim(h→0) [f(x + hv) − f(x)] / (h|v|). Many of the familiar properties of the ordinary derivative hold for the directional derivative; these include the sum, product and chain rules, for any functions f and g defined in a neighborhood of, and differentiable at, a point p. Suppose now that f is a function defined in a neighborhood of a point p of a manifold M, and differentiable at p. If v is a tangent vector to M at p, then the directional derivative of f along v, denoted variously as df(v), ∇v f, Lv f, or vp(f), can be defined using a curve γ: (−1, 1) → M with γ(0) = p and γ′(0) = v. In curved settings one builds the directional derivative using the covariant derivative instead of partial derivatives; for example, one can translate a covector S along δ then δ′, subtract the translation along δ′ and then δ, and so measure curvature. In some applications the directional derivative is taken along a normal direction; see for example the Neumann boundary condition. If the normal direction is denoted by n, then the directional derivative of a function f is sometimes denoted as ∂f/∂n. The directional derivative provides a way of finding these derivatives.
The definitions of directional derivatives for various situations are given below; it is assumed that the functions are sufficiently smooth that derivatives can be taken. Let f(v) be a real-valued function of the vector v.
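The limit definition above can be sketched numerically (an illustrative example; the function f(x, y) = x² + y² and the point and direction are hypothetical choices):

```python
# Numerical directional derivative: (f(x + h v) - f(x)) / h for small h.
def directional(f, x, v, h=1e-6):
    xh = tuple(xi + h * vi for xi, vi in zip(x, v))
    return (f(xh) - f(x)) / h

f = lambda p: p[0] ** 2 + p[1] ** 2          # f(x, y) = x^2 + y^2

# Exact value from the gradient: grad f = (2x, 2y), so at (1, 2) along (3, 4)
# the directional derivative is 2*1*3 + 2*2*4 = 22.
x, v = (1.0, 2.0), (3.0, 4.0)
assert abs(directional(f, x, v) - 22.0) < 1e-3
```

Note that v here is not a unit vector; dividing instead by h·|v| would give the normalized (rate-per-unit-distance) variant described in the text.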
34.
Partial derivative
–
In mathematics, a partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant. Partial derivatives are used in vector calculus and differential geometry. The partial derivative of a function f with respect to the variable x is variously denoted by f′x, fx, ∂x f, Dx f, D1 f, or ∂f/∂x. One of the first known uses of the symbol ∂ in mathematics is by Marquis de Condorcet from 1770. The modern partial derivative notation was created by Adrien-Marie Legendre, though he later abandoned it; Carl Gustav Jacob Jacobi re-introduced the symbol in 1841. Suppose that f is a function of more than one variable, for instance z = f(x, y) = x² + xy + y². The graph of this function defines a surface in Euclidean space. To every point on this surface, there is an infinite number of tangent lines. Partial differentiation is the act of choosing one of these lines and finding its slope. To find the slope of the tangent line at the point (1, 1) that is parallel to the xz-plane, we treat y as a constant; on the plane y = 1, the function becomes a curve in x alone, and its slope there is ∂z/∂x = 2x + y = 3. That is, the partial derivative of z with respect to x at (1, 1) is 3. The function f can be reinterpreted as a family of functions of one variable indexed by the other variables: in other words, every value of y defines a function, denoted fy, which is a function of one variable x. That is, fy(x) = x² + xy + y². Once a value of y is chosen, say a, then f(x, y) determines a function fa which traces the curve z = x² + ax + a² on the xz-plane: fa(x) = x² + ax + a². In this expression, a is a constant, not a variable, so fa is a function of only one real variable. Consequently, the definition of the derivative for a function of one variable applies: fa′(x) = 2x + a. The above procedure can be performed for any choice of a. Assembling the derivatives together into a function gives a function which describes the variation of f in the x direction: ∂f/∂x = 2x + y. This is the partial derivative of f with respect to x.
Here ∂ is a rounded d called the partial derivative symbol. To distinguish it from the letter d, ∂ is sometimes pronounced "tho" or "partial". In general, the partial derivative of an n-ary function f(x1, …, xn) in the direction xi at the point (a1, …, an) is defined to be ∂f/∂xi(a1, …, an) = lim(h→0) [f(a1, …, ai + h, …, an) − f(a1, …, an)] / h.
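The worked example above, z = x² + xy + y² with ∂z/∂x = 2x + y, can be checked numerically against the limit definition (a sketch, with an ad hoc finite-difference step):

```python
# z = f(x, y) = x^2 + x y + y^2; its partial derivative in x is 2x + y.
def f(x, y):
    return x ** 2 + x * y + y ** 2

def df_dx(x, y, h=1e-6):
    # Treat y as a constant and form the one-variable difference quotient in x.
    return (f(x + h, y) - f(x, y)) / h

assert abs(df_dx(1.0, 1.0) - 3.0) < 1e-4   # matches 2x + y = 3 at (1, 1)
assert abs(df_dx(2.0, 5.0) - 9.0) < 1e-4   # 2*2 + 5 = 9
```

Holding y fixed inside `df_dx` is precisely the "family of one-variable functions fy" reinterpretation described in the text.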
35.
Linear function
–
In linear algebra and functional analysis, a linear function is a linear map. In calculus, analytic geometry and related areas, a linear function is a polynomial of degree one or less. When the function is of only one variable, it is of the form f(x) = ax + b. The graph of such a function of one variable is a nonvertical line; a is frequently referred to as the slope of the line, and b as the intercept. For a function f of any finite number of independent variables, the general formula is f(x1, …, xk) = b + a1x1 + … + akxk. A constant function is also considered linear in this context, as it is a polynomial of degree zero or is the zero polynomial; its graph, when there is only one independent variable, is a horizontal line. In this context, a function with b = 0 may be referred to as a homogeneous linear function or a linear form; in the context of linear algebra, the polynomial meaning of "linear function" corresponds to a kind of affine map. In linear algebra, a linear function is a map f between two vector spaces that preserves vector addition and scalar multiplication: f(x + y) = f(x) + f(y) and f(ax) = a f(x). Here a denotes a constant belonging to some field K of scalars, and x and y are elements of a vector space. Some authors use "linear function" only for linear maps that take values in the scalar field; these are also called linear functionals. The linear functions of calculus qualify as linear maps when (and only when) f(0, …, 0) = 0, or, equivalently, when the constant b equals zero; geometrically, the graph of the function must pass through the origin. See also: homogeneous function, nonlinear system, piecewise linear function, linear interpolation, discontinuous linear map.
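The distinction between the calculus and linear-algebra meanings can be checked numerically (a sketch, not from the source; the sample points used to test linearity are a hypothetical choice):

```python
# f(x) = a x + b is "linear" in the calculus sense for any b, but it is a
# linear map (preserves addition and scaling) only when b = 0.
def make_f(a, b):
    return lambda x: a * x + b

def is_linear(f, samples=((1.0, 2.0), (3.0, -4.0))):
    return all(abs(f(x + y) - (f(x) + f(y))) < 1e-12 and
               abs(f(2 * x) - 2 * f(x)) < 1e-12
               for x, y in samples)

assert is_linear(make_f(3.0, 0.0))        # homogeneous: a genuine linear map
assert not is_linear(make_f(3.0, 1.0))    # affine with b != 0: not a linear map
```

Geometrically, the failing case is exactly the line that misses the origin, matching the condition f(0) = 0 stated above.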
36.
Linear map
–
In mathematics, a linear map is a mapping V → W between two modules (for example, two vector spaces) that preserves the operations of addition and scalar multiplication. An important special case is when V = W, in which case the map is called a linear operator, or an endomorphism of V. Sometimes the term linear function has the same meaning as linear map. A linear map always maps linear subspaces onto linear subspaces; for instance, it maps a plane through the origin to a plane through the origin. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations. In the language of abstract algebra, a linear map is a module homomorphism; in the language of category theory, it is a morphism in the category of modules over a given ring. Let V and W be vector spaces over the same field K. A function f: V → W is a linear map if it preserves addition and scalar multiplication; equivalently, for any vectors x1, …, xm ∈ V and scalars a1, …, am ∈ K, the equality f(a1x1 + ⋯ + amxm) = a1 f(x1) + ⋯ + am f(xm) holds. Occasionally, V and W can be vector spaces over different fields, and it is then necessary to specify which of these fields is being used in the definition of linear. If V and W are considered as spaces over the field K as above, we talk about K-linear maps. For example, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear. A linear map from V to K is called a linear functional. These statements generalize to any left-module RM over a ring R without modification, and to any right-module upon reversing of the scalar multiplication. The zero map between two left-modules over the same ring is always linear. The identity map on any module is a linear operator. Any homothecy centered at the origin of a vector space, v ↦ cv where c is a scalar, is a linear operator; this does not hold in general for modules, where such a map might only be semilinear. For real numbers, the map x ↦ x² is not linear. For an m × n matrix A, the map x ↦ Ax is a linear map from Rn to Rm; conversely, any linear map between finite-dimensional vector spaces can be represented in this manner, see the following section. Differentiation defines a linear map from the space of all differentiable functions to the space of all functions. It also defines a linear operator on the space of all smooth functions.
If V and W are finite-dimensional vector spaces over a field F, then functions that send linear maps f: V → W to dimF(W) × dimF(V) matrices in the way described in the sequel are themselves linear maps. The expected value of a random variable is linear: for random variables X and Y we have E[X + Y] = E[X] + E[Y] and E[aX] = aE[X].
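The matrix representation and the defining identity f(a1x1 + a2x2) = a1 f(x1) + a2 f(x2) can be checked concretely (a sketch; the 90-degree rotation matrix is a hypothetical choice of linear map):

```python
# A linear map R^2 -> R^2 given by a matrix (rotation by 90 degrees).
A = ((0.0, -1.0),
     (1.0, 0.0))

def f(v):
    # Matrix-vector product: the map x |-> A x.
    return tuple(sum(A[i][j] * v[j] for j in range(2)) for i in range(2))

x1, x2 = (1.0, 2.0), (3.0, -1.0)
a1, a2 = 2.0, -3.0

# f(a1 x1 + a2 x2) must equal a1 f(x1) + a2 f(x2).
combo = tuple(a1 * u + a2 * v for u, v in zip(x1, x2))
lhs = f(combo)
rhs = tuple(a1 * u + a2 * v for u, v in zip(f(x1), f(x2)))
assert lhs == rhs

# Linear maps send the origin to the origin.
assert f((0.0, 0.0)) == (0.0, 0.0)
```

By contrast, the map x ↦ x² from the text fails this identity already for a1 = a2 = 1, which is why it is not linear.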
37.
Derivative
–
The derivative of a function of a real variable measures the sensitivity to change of the function value with respect to a change in its argument. Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value; for this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. Derivatives may be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables; it can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. The process of finding a derivative is called differentiation; the reverse process is called antidifferentiation. The fundamental theorem of calculus states that antidifferentiation is the same as integration. Differentiation and integration constitute the two fundamental operations in single-variable calculus. Differentiation is the action of computing a derivative. The derivative of a function y = f(x) of a variable x is a measure of the rate at which the value y of the function changes with respect to the change of the variable x. It is called the derivative of f with respect to x. If x and y are real numbers, and if the graph of f is plotted against x, the derivative is the slope of this graph at each point.
The simplest case, apart from the trivial case of a constant function, is when y is a linear function of x, say y = f(x) = mx + b; the slope is then m = Δy/Δx, where Δ denotes "change in". This formula is true because y + Δy = f(x + Δx) = m(x + Δx) + b = mx + mΔx + b = y + mΔx. Thus, since y + Δy = y + mΔx, it follows that Δy = mΔx, and this gives an exact value for the slope of a line. If the function f is not linear, however, then the change in y divided by the change in x varies; differentiation is a method to find an exact value for this rate of change at any given value of x. The idea is to compute the rate of change as the limiting value of the ratio of the differences Δy / Δx as Δx becomes infinitely small.
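The contrast between the linear and nonlinear cases can be sketched numerically (an illustrative example; the functions and step sizes are hypothetical choices):

```python
# The difference quotient Delta y / Delta x for a function f at a point x.
def quotient(f, x, dx):
    return (f(x + dx) - f(x)) / dx

# For a linear function y = m x + b, the quotient is the constant slope m,
# no matter how large dx is.
line = lambda x: 3.0 * x + 2.0
assert abs(quotient(line, 5.0, 0.1) - 3.0) < 1e-9

# For a nonlinear function the quotient varies with dx, approaching the
# derivative as dx shrinks: for f(x) = x^2, f'(3) = 6.
sq = lambda x: x * x
for dx in (1.0, 0.1, 0.001):
    print(dx, quotient(sq, 3.0, dx))    # approaches 6 as dx shrinks
assert abs(quotient(sq, 3.0, 1e-6) - 6.0) < 1e-4
```

The printed quotients (7, then about 6.1, then about 6.001) illustrate the limiting process: the derivative is the value the ratio settles on as Δx becomes infinitely small.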
38.
1-form
–
In linear algebra, a one-form on a vector space is the same as a linear functional on the space. The usage of "one-form" in this context usually distinguishes the one-forms from higher-degree multilinear functionals on the space. In differential geometry, a one-form on a differentiable manifold is a smooth section of the cotangent bundle. Equivalently, a one-form on a manifold M is a mapping of the total space of the tangent bundle of M to R whose restriction to each fibre is a linear functional on the tangent space. Symbolically, α : TM → R, α_x = α|_{T_x M} : T_x M → R, where α_x is linear. Often one-forms are described locally, particularly in local coordinates. From this perspective, a one-form has a covariant transformation law on passing from one coordinate system to another; thus a one-form is an order-1 covariant tensor field. Many real-world concepts can be described as one-forms. Indexing into a vector: the second element of a three-vector is given by the one-form (0, 1, 0); that is, the second element of (x, y, z) is (0, 1, 0) · (x, y, z) = y. Mean: the mean element of an n-vector is given by the one-form (1/n, 1/n, …, 1/n); that is, mean(v) = (1/n, …, 1/n) · v. Sampling: sampling with a kernel can be considered a one-form, where the one-form is the kernel shifted to the appropriate location. Net present value of a net cash flow R(t) is given by the one-form w(t) = (1 + i)^(−t), where i is the discount rate; that is, NPV(R) = ⟨w, R⟩ = ∫_{t=0}^{∞} R(t) (1 + i)^(−t) dt. The most basic non-trivial differential one-form is the "change in angle" form dθ. This is defined as the derivative of the angle "function" θ; integrating this derivative along a path gives the total change in angle over the path, and integrating over a closed loop gives the winding number. In the language of geometry, this derivative is a one-form, and it is closed but not exact. This is the most basic example of such a form. Let U ⊆ R be open, and consider a differentiable function f : U → R, with derivative f′. The differential df of f, at a point x₀ ∈ U, is defined as a certain linear map of the variable dx.
Specifically, df(x₀) : dx ↦ f′(x₀) dx. Hence the map x ↦ df(x) sends each point x to the linear functional df(x). This is the simplest example of a differential form. In terms of the de Rham complex, one has an assignment from zero-forms to one-forms, i.e. f ↦ df. See also: two-form, reciprocal lattice, tensor, inner product.
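The finite-dimensional examples above (indexing and the mean as one-forms) amount to acting on a column vector with a row vector via the dot product. A minimal sketch, with the sample vector chosen for the example:

```python
import numpy as np

# One-forms on R^3 as row vectors acting by the dot product.
v = np.array([4.0, 10.0, 1.0])

index2 = np.array([0.0, 1.0, 0.0])   # the "second element" one-form
mean_form = np.full(3, 1.0 / 3.0)    # the "mean element" one-form (1/n, ..., 1/n)

print(index2 @ v)     # picks out the second component of v
print(mean_form @ v)  # the mean of the components of v

# Both maps are linear functionals: they send vectors to scalars linearly.
```

Applying `index2` to v yields its second component 10.0, and `mean_form` yields the mean (4 + 10 + 1)/3 = 5.0.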
39.
Euclidean vector
–
In mathematics, physics, and engineering, a Euclidean vector is a geometric object that has magnitude and direction. Vectors can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by AB→. A vector is what is needed to "carry" the point A to the point B; the term was first used by 18th-century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers, such as addition, subtraction, multiplication, and negation, have close analogues for vectors. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space. Vectors play an important role in physics: the velocity and acceleration of a moving object and the forces acting on it can all be described with vectors. Many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances, their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors. The concept of vector, as we know it today, evolved gradually over a period of more than 200 years; about a dozen people made significant contributions. Giusto Bellavitis abstracted the basic idea in 1835 when he established the concept of equipollence. Working in a Euclidean plane, he made equipollent any pair of line segments of the same length and orientation. Essentially, he realized an equivalence relation on the pairs of points in the plane. The term vector was introduced by William Rowan Hamilton as part of a quaternion, which is a sum q = s + v of a real number s and a 3-dimensional vector.
Like Bellavitis, Hamilton viewed vectors as representative of classes of equipollent directed segments. Hermann Grassmann also developed a vector-like system, but Grassmann's work was largely neglected until the 1870s. Peter Guthrie Tait carried the quaternion standard after Hamilton; his 1867 Elementary Treatise of Quaternions included extensive treatment of the nabla or del operator ∇. In 1878, Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product. This approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth. Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, separated off their vector part for independent treatment. The first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901, Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures. In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a direction. It is formally defined as a directed line segment, or arrow, in a Euclidean space.
40.
Smooth manifold
–
In mathematics, a differentiable manifold is a type of manifold that is locally similar enough to a linear space to allow one to do calculus. Any manifold can be described by a collection of charts, also known as an atlas. One may then apply ideas from calculus while working within the individual charts, since each chart lies within a linear space to which the usual rules of calculus apply. If the charts are suitably compatible (namely, the transition from one chart to another is differentiable), then computations done in one chart are valid in any other differentiable chart. In formal terms, a differentiable manifold is a topological manifold with a globally defined differential structure. Any topological manifold can be given a differential structure locally by using the homeomorphisms in its atlas and the standard differential structure on a linear space. In other words, where the domains of charts overlap, the coordinates defined by each chart are required to be differentiable with respect to the coordinates defined by every chart in the atlas. The maps that relate the coordinates defined by the various charts to one another are called transition maps. Differentiability means different things in different contexts, including: continuously differentiable, k times differentiable, and smooth. Furthermore, the ability to induce such a differential structure on an abstract space allows one to extend the definition of differentiability to spaces without global coordinate systems. A differential structure allows one to define the globally differentiable tangent space, differentiable functions, and differentiable tensor and vector fields. Differentiable manifolds are very important in physics. Special kinds of differentiable manifolds form the basis for physical theories such as classical mechanics and general relativity. It is possible to develop a calculus for differentiable manifolds, and this leads to such mathematical machinery as the exterior calculus.
The study of calculus on differentiable manifolds is known as differential geometry. The emergence of differential geometry as a distinct discipline is generally credited to Carl Friedrich Gauss and Bernhard Riemann. Riemann first described manifolds in his famous habilitation lecture before the faculty at Göttingen, and these ideas found a key application in Einstein's theory of general relativity and its underlying equivalence principle. A modern definition of a 2-dimensional manifold was given by Hermann Weyl in his 1913 book on Riemann surfaces; the widely accepted general definition of a manifold in terms of an atlas is due to Hassler Whitney. A presentation of a manifold is a second-countable Hausdorff space that is locally homeomorphic to a linear space. This formalizes the notion of patching together pieces of a space to make a manifold; the manifold produced also contains the data of how it has been patched together. However, different atlases may produce the same manifold; a manifold does not come with a preferred atlas. Thus, one defines a manifold to be a space as above with an equivalence class of atlases, where two atlases are equivalent if their union is again an atlas. There are a number of different types of manifolds, depending on the precise differentiability requirements on the transition functions. Some common examples include the following: a differentiable manifold is a topological manifold equipped with an equivalence class of atlases whose transition maps are all differentiable.
41.
Coordinate chart
–
In mathematics, particularly topology, one describes a manifold using an atlas. An atlas consists of individual charts that, roughly speaking, describe individual regions of the manifold. If the manifold is the surface of the Earth, then an atlas has its more common meaning. In general, the notion of atlas underlies the formal definition of a manifold and related structures such as vector bundles. The definition of an atlas depends on the notion of a chart. A chart for a topological space M is a homeomorphism φ from an open subset U of M to an open subset of a Euclidean space. The chart is traditionally recorded as the ordered pair (U, φ). An atlas for a topological space M is a collection of charts {(U_α, φ_α)} on M such that ⋃ U_α = M. If the codomain of each chart is the n-dimensional Euclidean space, then M is said to be an n-dimensional manifold. A transition map provides a way of comparing two charts of an atlas. To make this comparison, we consider the composition of one chart with the inverse of the other. This composition is not well-defined unless we restrict both charts to the intersection of their domains of definition. To be more precise, suppose that (U_α, φ_α) and (U_β, φ_β) are two charts for a manifold M such that U_α ∩ U_β is non-empty. The transition map τ_{α,β} : φ_α(U_α ∩ U_β) → φ_β(U_α ∩ U_β) is the map defined by τ_{α,β} = φ_β ∘ φ_α^{−1}. Note that since φ_α and φ_β are both homeomorphisms, the transition map τ_{α,β} is also a homeomorphism. One often desires more structure on a manifold than simply the topological structure. For example, if one would like an unambiguous notion of differentiation of functions on a manifold, then it is necessary for the transition functions of the atlas to be differentiable; such a manifold is called differentiable. Given a differentiable manifold, one can unambiguously define the notion of tangent vectors. If each transition function is a smooth map, then the atlas is called a smooth atlas. Alternatively, one could require that the transition maps have only k continuous derivatives, in which case the atlas is said to be C^k.
Very generally, if each transition function belongs to a pseudogroup G of homeomorphisms of Euclidean space, then the atlas is called a G-atlas. If the transition maps between charts of an atlas preserve a local trivialization, then the atlas defines the structure of a fibre bundle. See also: smooth atlas, smooth frame; "Atlas" by Rowland, Todd.
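The transition map τ_{α,β} = φ_β ∘ φ_α^{−1} can be checked concretely for a classic atlas: the two stereographic charts covering the unit circle. This is a hedged numerical sketch; the circle, the two charts, and the sample point are choices made for the example:

```python
import math

# Two stereographic charts on the unit circle S^1 in R^2.

def chart_north(p):
    """Stereographic projection from the north pole (0, 1); defined away from it."""
    x, y = p
    return x / (1.0 - y)

def chart_south(p):
    """Stereographic projection from the south pole (0, -1); defined away from it."""
    x, y = p
    return x / (1.0 + y)

def transition(u):
    """The transition map tau = chart_south o chart_north^{-1} on the overlap."""
    return 1.0 / u

# A sample point on the circle away from both poles.
theta = 0.7
p = (math.cos(theta), math.sin(theta))

# Going through the transition map agrees with applying the other chart directly.
assert abs(transition(chart_north(p)) - chart_south(p)) < 1e-12
```

The transition map u ↦ 1/u is smooth away from u = 0, which is exactly the overlap of the two chart domains, so this atlas is a smooth atlas in the sense described above.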
42.
Section (fiber bundle)
–
In the mathematical field of topology, a section of a fiber bundle E is a continuous right inverse of the projection function π. A section is an abstract characterization of what it means to be a graph: the graph of a function g : B → Y can be identified with a function taking its values in the Cartesian product E = B × Y, namely σ(x) = (x, g(x)); then a graph is any function σ for which π(σ(x)) = x. The language of fibre bundles allows this notion of a section to be generalized to the case when E is not necessarily a Cartesian product. If π : E → B is a fibre bundle, then a section is a choice of point σ(x) in each of the fibres. The condition π(σ(x)) = x simply means that the section at a point x must lie over x. For example, when E is a vector bundle, a section of E is an element of the vector space E_x lying over each point x ∈ B. Sections, particularly of principal bundles and vector bundles, are very important tools in differential geometry. In this setting, the base space B is a smooth manifold M, and E is assumed to be a smooth fiber bundle over M. In this case, one considers the space of smooth sections of E over an open set U, denoted C∞(U, E). It is also useful in geometric analysis to consider spaces of sections with intermediate regularity. Fiber bundles do not in general have such global sections, so it is also useful to define sections only locally. A local section of a fiber bundle is a continuous map s : U → E, where U is an open set in B and π(s(x)) = x for all x in U. If (U, φ) is a local trivialization of E, where φ is a homeomorphism from π^{−1}(U) to U × F, then local sections always exist over U. The local sections form a sheaf over B called the sheaf of sections of E. The space of continuous sections of a fiber bundle E over U is sometimes denoted C(U, E), while the space of global sections of E is often denoted Γ(E) or Γ(B, E). Sections are studied in homotopy theory and algebraic topology, where one of the main goals is to account for the existence or non-existence of global sections. An obstruction denies the existence of global sections since the space is too "twisted"; more precisely, obstructions "obstruct" the possibility of extending a local section to a global section due to the space's "twistedness".
Obstructions are indicated by particular characteristic classes, which are cohomology classes. For example, a principal bundle has a global section if and only if it is trivial. On the other hand, a vector bundle always has a global section, namely the zero section; however, it only admits a nowhere-vanishing section if its Euler class is zero. Obstructions to extending local sections may be generalized in the following manner: take a topological space and form a category whose objects are open subsets and whose morphisms are inclusions. Thus we use a category to generalize a topological space, and we generalize the notion of a local section using sheaves of abelian groups, which assign to each object an abelian group (analogous to local sections). There is an important distinction here: intuitively, local sections are like vector fields on an open subset of a topological space.
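For the trivial (product) bundle, the identification between sections and graphs described above can be sketched directly. A minimal illustration, with the base B = R, the fiber F = R, and the function g chosen for the example:

```python
# For the trivial bundle E = B x F, a section is exactly the graph of a
# function g : B -> F, and the section condition is pi(sigma(x)) = x.

def pi(e):
    """The bundle projection E = B x F -> B, sending (x, v) to x."""
    return e[0]

def section_from(g):
    """Turn a function g : B -> F into a section of the product bundle."""
    return lambda x: (x, g(x))

g = lambda x: x ** 2
sigma = section_from(g)

for x in (0.0, 1.5, -2.0):
    assert pi(sigma(x)) == x  # pi o sigma is the identity on B
```

The assertion is precisely the right-inverse condition: composing the section with the projection returns each base point unchanged.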
43.
Tangent space
–
The elements of the tangent space at a point x are called the tangent vectors at x. This is a generalization of the notion of a vector based at a given point in a Euclidean space. The dimension of every tangent space of a connected manifold is the same as that of the manifold. More generally, if a manifold is thought of as an embedded submanifold of Euclidean space, one can picture a tangent space in this literal fashion. This was the traditional approach to defining parallel transport, and is used by Dirac. More strictly, this defines an affine tangent space, distinct from the space of tangent vectors described by modern terminology. In algebraic geometry, in contrast, there is an intrinsic definition of the tangent space at a point P of a variety V. The points P at which the dimension of the tangent space is exactly that of V are called the non-singular points; the others are the singular points. For example, a curve that crosses itself does not have a unique tangent line at the crossing point. The singular points of V are those where the "test to be a manifold" fails. Once tangent spaces have been introduced, one can define vector fields, which are abstractions of the velocity field of particles moving on a manifold. A vector field attaches to every point of the manifold a vector from the tangent space at that point. All the tangent spaces can be "glued together" to form a new differentiable manifold of twice the dimension of the original manifold, called the tangent bundle of the manifold. The informal description above relies on a manifold being embedded in an ambient vector space R^m. However, it is more convenient to define the notion of tangent space based on the manifold itself. There are various equivalent ways of defining the tangent spaces of a manifold. While the definition via velocities of curves is intuitively the simplest, it is also the most cumbersome to work with. More elegant and abstract approaches are described below. In the embedded-manifold picture, a tangent vector at a point x is thought of as the velocity of a curve passing through the point x.
We can therefore take a tangent vector to be an equivalence class of curves passing through x while being tangent to each other at x. Suppose M is a C^k manifold (k ≥ 1) and x is a point in M. Pick a chart φ : U → R^n, where U is an open subset of M containing x. Suppose two curves γ₁ : (−1, 1) → M and γ₂ : (−1, 1) → M with γ₁(0) = γ₂(0) = x are given such that φ ∘ γ₁ and φ ∘ γ₂ are both differentiable at 0. Then γ₁ and γ₂ are called equivalent at 0 if the ordinary derivatives of φ ∘ γ₁ and φ ∘ γ₂ at 0 coincide. This defines an equivalence relation on such curves, and the equivalence classes are known as the tangent vectors of M at x.
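The equivalence of curves can be illustrated numerically: two different curves through the same point define the same tangent vector when the derivatives of their chart representations agree at 0. A hedged sketch, taking M = R² with the identity chart and two example curves:

```python
import math

# Two curves through the origin of R^2 (identity chart), compared by
# the derivative of (phi o gamma) at t = 0.

def velocity(gamma, h=1e-6):
    """Central-difference estimate of gamma'(0), componentwise."""
    p, m = gamma(h), gamma(-h)
    return tuple((a - b) / (2 * h) for a, b in zip(p, m))

gamma1 = lambda t: (t, t)            # a straight line through (0, 0)
gamma2 = lambda t: (math.sin(t), t)  # a curved path with the same velocity at 0

v1, v2 = velocity(gamma1), velocity(gamma2)

# The curves are equivalent at 0: both have velocity (1, 1), so they
# represent the same tangent vector at the origin.
assert all(abs(a - b) < 1e-6 for a, b in zip(v1, v2))
```

Although γ₂ bends away from γ₁ for larger t, only the first-order behaviour at t = 0 matters for the equivalence class, which is exactly what the definition above captures.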
44.
Tangent bundle
–
In differential geometry, the tangent bundle of a differentiable manifold M is a manifold TM which assembles all the tangent vectors in M. As a set, it is given by the disjoint union of the tangent spaces of M. That is, TM = ⨆_{x∈M} T_x M = ⋃_{x∈M} {x} × T_x M = {(x, v) | x ∈ M, v ∈ T_x M}, where T_x M denotes the tangent space to M at the point x. So, an element of TM can be thought of as a pair (x, v), where x is a point in M and v is a tangent vector to M at x. There is a natural projection π : TM ↠ M defined by π(x, v) = x. This projection maps each tangent space T_x M to the single point x. The tangent bundle comes equipped with a natural topology; with this topology, the tangent bundle to a manifold is the prototypical example of a vector bundle. A section of TM is a vector field on M, and the dual bundle to TM is the cotangent bundle. By definition, a manifold M is parallelizable if and only if the tangent bundle is trivial. By definition, a manifold M is framed if and only if the tangent bundle TM is stably trivial, meaning that the direct sum of TM with some trivial bundle is trivial. For example, the n-dimensional sphere S^n is framed for all n, but parallelizable only for n = 1, 3, 7. One of the main roles of the tangent bundle is to provide a domain and range for the derivative of a smooth function. Namely, if f : M → N is a smooth function, with M and N smooth manifolds, its derivative is a smooth function Df : TM → TN. The tangent bundle comes equipped with a natural topology and smooth structure so as to make it into a manifold in its own right. The dimension of TM is twice the dimension of M. Each tangent space of an n-dimensional manifold is an n-dimensional vector space. If U is an open contractible subset of M, then there is a diffeomorphism from TU to U × R^n which restricts to a linear isomorphism from each tangent space T_x U to {x} × R^n. As a manifold, however, TM is not always diffeomorphic to the product manifold M × R^n. When it is of the form M × R^n, the tangent bundle is said to be trivial. Trivial tangent bundles usually occur for manifolds equipped with a compatible group structure, for instance, when the manifold is a Lie group.
The tangent bundle of the circle is likewise trivial, because the circle is a Lie group. It is not true, however, that all spaces with trivial tangent bundles are Lie groups. Just as manifolds are locally modelled on Euclidean space, tangent bundles are locally modelled on U × R^n, where U is an open subset of Euclidean space. If M is a smooth manifold, then it comes equipped with an atlas of charts (U_α, φ_α), where U_α is an open set in M and φ_α : U_α → R^n is a diffeomorphism.
45.
Alternating form
–
In mathematics, the exterior algebra of a vector space is an associative algebra that contains the vector space, and in which the square of any element of the vector space is zero. The exterior algebra is universal in the sense that every embedding of the space or module in an algebra that has these properties may be factored through the exterior algebra. The multiplication operation of the algebra is called the exterior product or wedge product. The term "exterior" comes from the product of two vectors not being a vector, while the term "wedge" comes from the shape of the multiplication symbol ∧. The exterior algebra is also named the Grassmann algebra, after Hermann Grassmann. The exterior product should not be confused with the outer product, which is the tensor product of vectors. The exterior product of two vectors is called a 2-blade, which is in turn a bivector. More generally, the exterior product of any number k of vectors is sometimes called a k-blade. Given a vector space V, its exterior algebra is denoted Λ(V). The vector subspace generated by the k-blades is known as the k-th exterior power of V, and is denoted Λ^k(V). The exterior algebra Λ(V) is the direct sum of the Λ^k(V) as modules, with the exterior product as additional structure. The exterior product makes the exterior algebra a graded algebra, and it is alternating: v ∧ v = 0 for every vector v. The exterior algebra is used in geometry to study areas, volumes, and their higher-dimensional analogues. An exterior algebra has the structure of a bialgebra, naturally induced by that of V. In this context, a Euclidean structure induces on the exterior algebra a richer structure of a Hopf algebra. The exterior algebra is also used in multivariable calculus, as the differential forms of higher degree belong to the exterior algebra of the differential forms of degree one. The Cartesian plane R² is a vector space equipped with a basis consisting of a pair of unit vectors e₁ = (1, 0) and e₂ = (0, 1). Suppose that v = (a, b) = a e₁ + b e₂ and w = (c, d) = c e₁ + d e₂ are a pair of vectors in R².
There is a unique parallelogram having v and w as two of its sides. The area of this parallelogram is given by the standard determinant formula: Area = |ad − bc|. Consider now the exterior product of v and w: v ∧ w = (a e₁ + b e₂) ∧ (c e₁ + d e₂) = (ad − bc) e₁ ∧ e₂, where the cross terms collapse because the exterior product is alternating, so that e₁ ∧ e₁ = e₂ ∧ e₂ = 0 and e₂ ∧ e₁ = −e₁ ∧ e₂. Note that the coefficient in this last expression is precisely the determinant of the matrix [v w]. The fact that this may be positive or negative has the intuitive meaning that v and w may be oriented in a counterclockwise or clockwise sense as the vertices of the parallelogram they define. Such an area is called the signed area of the parallelogram; the absolute value of the signed area is the ordinary area.
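The signed-area computation above can be sketched directly; the sample vectors are choices made for the example:

```python
# Signed area of the parallelogram spanned by v = (a, b) and w = (c, d):
# v ^ w = (a*d - b*c) e1 ^ e2, the determinant of the 2x2 matrix [v w].

def signed_area(v, w):
    a, b = v
    c, d = w
    return a * d - b * c

assert signed_area((1, 0), (0, 1)) == 1    # unit square, counterclockwise
assert signed_area((0, 1), (1, 0)) == -1   # same square, clockwise orientation
assert signed_area((2, 1), (4, 2)) == 0    # parallel vectors: degenerate parallelogram
```

Swapping the arguments flips the sign, reflecting the alternating property v ∧ w = −(w ∧ v), while the absolute value gives the ordinary area.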
46.
Symmetric group
–
In abstract algebra, the symmetric group defined over any set is the group whose elements are all the bijections from the set to itself, and whose group operation is the composition of functions. Since there are n! possible permutation operations that can be performed on a tuple composed of n symbols, it follows that the order of the symmetric group S_n is n!. For the remainder of this article, "symmetric group" will mean a symmetric group on a finite set. The symmetric group is important to diverse areas of mathematics such as Galois theory, invariant theory, and the representation theory of Lie groups. Cayley's theorem states that every group G is isomorphic to a subgroup of the symmetric group on G. The symmetric group on a finite set X is the group whose elements are all bijective functions from X to X and whose group operation is function composition. For finite sets, "permutations" and "bijective functions" refer to the same operation, namely rearrangement. The symmetric group of degree n is the symmetric group on the set X = {1, 2, …, n}. The symmetric group on a set X is denoted in various ways, including S_X and Sym(X).
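The order |S_n| = n! and the composition operation can be checked concretely. A minimal sketch representing permutations of {0, …, n−1} as tuples of images (the choice n = 4 is for the example):

```python
from itertools import permutations
from math import factorial

# The symmetric group S_4: all bijections of {0, 1, 2, 3} onto itself,
# represented as tuples where p[i] is the image of i.

n = 4
elements = list(permutations(range(n)))
assert len(elements) == factorial(n)  # the order of S_4 is 4! = 24

def compose(p, q):
    """Composition of permutations: (p o q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

identity = tuple(range(n))
for p in elements:
    assert compose(p, identity) == p  # identity element law
    assert compose(identity, p) == p
```

Composition of bijections is again a bijection, each permutation has an inverse, and composition is associative, which is exactly what makes this set a group.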
47.
Covariance and contravariance of vectors
–
In multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes with a change of basis. In physics, a basis is sometimes thought of as a set of reference axes. A change of scale on the reference axes corresponds to a change of units in the problem. For instance, in changing scale from meters to centimeters, the components of a measured velocity vector will multiply by 100. Vectors exhibit this behavior of changing scale inversely to changes in scale of the reference axes; as a result, vectors often have units of distance or distance times some other unit. In contrast, dual vectors typically have units of the inverse of distance or the inverse of distance times some other unit. An example of a dual vector is the gradient, which has units of a spatial derivative, or distance^−1. The components of dual vectors change in the same way as changes to scale of the reference axes. For a vector to be basis-independent, its components must contra-vary with a change of basis to compensate; that is, the matrix that transforms the vector of components must be the inverse of the matrix that transforms the basis vectors. The components of such vectors are said to be contravariant. In Einstein notation, contravariant components are denoted with upper indices, as in v = v^i e_i. For a dual vector to be basis-independent, the components of the dual vector must co-vary with a change of basis to remain representing the same covector; that is, the components must be transformed by the same matrix as the change-of-basis matrix. The components of dual vectors are said to be covariant. Examples of covariant vectors generally appear when taking the gradient of a function. In Einstein notation, covariant components are denoted with lower indices, as in v = v_i e^i. Curvilinear coordinate systems, such as cylindrical or spherical coordinates, are often used in physical and geometric problems.
Tensors are objects in multilinear algebra that can have aspects of both covariance and contravariance. In physics, a vector typically arises as the outcome of a measurement or series of measurements, and is represented as a list of numbers such as (v₁, v₂, v₃). The numbers in the list depend on the choice of coordinate system. For a vector to represent a geometric object, it must be possible to describe how it looks in any other coordinate system. That is to say, the components of the vector will transform in a certain way in passing from one coordinate system to another. A contravariant vector has components that "transform as the coordinates do" under changes of coordinates, including rotation and dilation. The vector itself does not change under these operations; instead, the components of the vector change in a way that cancels the change in the spatial axes. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector would reduce in an exactly compensating way. This important requirement is what distinguishes a contravariant vector from any other triple of physically meaningful quantities.
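The compensation described above, where components transform by the inverse of the change-of-basis matrix so that the geometric vector itself is unchanged, can be verified numerically. A hedged sketch; the basis, the change-of-basis matrix A, and the component values are choices made for the example:

```python
import numpy as np

# Contravariant transformation: if the basis changes by B -> B' = B A,
# the components change by v -> A^{-1} v, and components-times-basis
# (the geometric vector) stays the same.

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # an invertible change-of-basis matrix
B = np.eye(2)                # old basis vectors, stored as columns
B_new = B @ A                # new basis vectors

v_components = np.array([5.0, 7.0])      # components in the old basis
v_new = np.linalg.inv(A) @ v_components  # contravariant transformation of components

# The same geometric vector in either description:
assert np.allclose(B @ v_components, B_new @ v_new)
```

The assertion holds identically, since B_new @ v_new = B A A⁻¹ v = B v; this cancellation is what "contra-vary" means.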
48.
Vector space
–
A vector space is a collection of objects called vectors, which may be added together and multiplied ("scaled") by numbers, called scalars in this context. Scalars are often taken to be real numbers, but there are also vector spaces with scalar multiplication by complex numbers, rational numbers, or generally any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called axioms. Euclidean vectors are an example of a vector space; they represent physical quantities such as forces. Any two forces of the same type can be added to yield a third, and the multiplication of a force vector by a real multiplier is another force vector. In the same vein, but in a more geometric sense, vectors representing displacements in the plane or in three-dimensional space also form vector spaces. Vector spaces are the subject of linear algebra and are well characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. Infinite-dimensional vector spaces arise naturally in mathematical analysis, as function spaces, and these vector spaces are generally endowed with additional structure, which may be a topology, allowing the consideration of issues of proximity and continuity. Among these topologies, those that are defined by a norm or inner product are commonly used. This is particularly the case of Banach spaces and Hilbert spaces. Historically, the first ideas leading to vector spaces can be traced back as far as the 17th century's analytic geometry, matrices, systems of linear equations, and Euclidean vectors. Today, vector spaces are applied throughout mathematics, science, and engineering. Furthermore, vector spaces furnish an abstract, coordinate-free way of dealing with geometrical and physical objects such as tensors. This in turn allows the examination of local properties of manifolds by linearization techniques. Vector spaces may be generalized in several ways, leading to more advanced notions in geometry and abstract algebra.
The concept of vector space will first be explained by describing two particular examples. The first example of a vector space consists of arrows in a fixed plane, starting at one fixed point. This is used in physics to describe forces or velocities. Given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows and is denoted v + w. An arrow can also be scaled: for a positive real number a, the arrow av has the same direction as v but its length is multiplied by a; when a is negative, av is defined as the arrow pointing in the opposite direction instead. The second example consists of pairs of real numbers x and y. Such a pair is written as (x, y). The sum of two such pairs and the multiplication of a pair by a number are defined as follows: (x₁, y₁) + (x₂, y₂) = (x₁ + x₂, y₁ + y₂) and a(x, y) = (ax, ay). The first example above reduces to this one if the arrows are represented by the pairs of Cartesian coordinates of their end points. A vector space over a field F is a set V together with two operations that satisfy the eight axioms listed below. Elements of V are commonly called vectors, and elements of F are commonly called scalars. The first operation, called vector addition, takes any two vectors v and w and assigns a third vector commonly written v + w. The second operation, called scalar multiplication, takes any scalar a and any vector v and gives another vector av. In this article, vectors are represented in boldface to distinguish them from scalars.
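The second example, pairs of numbers with componentwise operations, can be sketched directly, spot-checking a few of the axioms (the particular pairs are chosen for the example):

```python
# The vector space of pairs of real numbers: componentwise addition
# and scalar multiplication, as defined in the text.

def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def scale(a, p):
    return (a * p[0], a * p[1])

u, v, w = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)

assert add(u, v) == add(v, u)                  # commutativity of addition
assert add(add(u, v), w) == add(u, add(v, w))  # associativity of addition
assert scale(2.0, add(u, v)) == add(scale(2.0, u), scale(2.0, v))  # distributivity
assert add(u, (0.0, 0.0)) == u                 # (0, 0) is the zero vector
```

These are four of the eight vector-space axioms; the remaining ones (inverses, the other distributive law, and the scalar laws) hold for the same componentwise reason.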
49.
Linear functional
–
In linear algebra, a linear functional or linear form is a linear map from a vector space to its field of scalars. The set of all linear functionals from V to k, Hom_k(V, k), forms a vector space over k with the natural operations of addition and scalar multiplication. This space is called the dual space of V, or sometimes the algebraic dual space. It is often written V* or V′ when the field k is understood. If V is a topological vector space, the space of continuous linear functionals (the continuous dual) is often simply called the dual space. If V is a Banach space, then so is its continuous dual. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called the algebraic dual. In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual. Suppose that vectors in the real coordinate space R^n are represented as column vectors x = (x₁, …, x_n)^T. For each row vector (a₁, …, a_n), there is a linear functional f defined by f(x) = a₁x₁ + ⋯ + a_n x_n. This is just the matrix product of the row vector and the column vector x. Linear functionals first appeared in functional analysis, the study of vector spaces of functions. Let P_n denote the vector space of real-valued polynomial functions of degree ≤ n defined on an interval [a, b]. If c ∈ [a, b], then let ev_c : P_n → R be the evaluation functional ev_c(f) = f(c). The mapping f ↦ f(c) is linear since (f + g)(c) = f(c) + g(c) and (αf)(c) = α f(c). If x₀, …, x_n are n + 1 distinct points in [a, b], then the evaluation functionals ev_{x_i}, i = 0, …, n, form a basis of the dual space of P_n. The integration functional I(f) = ∫_a^b f(x) dx defines a linear functional on the space P_n of polynomials of degree ≤ n. If x₀, …, x_n are n + 1 distinct points in [a, b], then there are coefficients a₀, …, a_n for which I(f) = a₀ f(x₀) + ⋯ + a_n f(x_n) for all f ∈ P_n, and this forms the foundation of the theory of numerical quadrature. This follows from the fact that the linear functionals ev_{x_i} : f ↦ f(x_i) defined above form a basis of the dual space of P_n. Linear functionals are particularly important in quantum mechanics: quantum mechanical systems are represented by Hilbert spaces, which are anti-isomorphic to their own dual spaces.
A state of a quantum mechanical system can be identified with a linear functional. For more information, see bra–ket notation. In the theory of generalized functions, certain kinds of generalized functions called distributions can be realized as linear functionals on spaces of test functions.
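The evaluation functionals and the quadrature identity described above can be sketched concretely. A minimal illustration (the interval [0, 1], the sample point c, and the polynomials are choices made for the example):

```python
# The evaluation functional ev_c(f) = f(c) is linear, and on polynomials
# of degree <= 1 the integral over [0, 1] is a combination of two
# evaluations (the trapezoid rule), as the quadrature identity states.

def ev(c):
    """The evaluation functional at c: ev(c)(f) = f(c)."""
    return lambda f: f(c)

f = lambda x: x ** 2
g = lambda x: 3.0 * x
c = 1.5
ev_c = ev(c)

# Linearity: ev_c(f + g) = ev_c(f) + ev_c(g), and ev_c(a*f) = a * ev_c(f).
assert ev_c(lambda x: f(x) + g(x)) == ev_c(f) + ev_c(g)
assert ev_c(lambda x: 2.0 * f(x)) == 2.0 * ev_c(f)

# Quadrature on P_1: for linear h, integral over [0, 1] = (1/2) h(0) + (1/2) h(1).
h = lambda x: 3.0 * x + 1.0  # exact integral over [0, 1] is 2.5
assert 0.5 * ev(0.0)(h) + 0.5 * ev(1.0)(h) == 2.5
```

With n + 1 points one recovers quadrature rules exact on P_n; the two-point case shown is the simplest instance of the expansion I(f) = a₀ f(x₀) + ⋯ + a_n f(x_n).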
50.
Inner product
–
In linear algebra, an inner product space is a vector space with an additional structure called an inner product. This additional structure associates each pair of vectors in the space with a scalar quantity known as the inner product of the vectors. Inner products allow the introduction of intuitive geometrical notions such as the length of a vector or the angle between two vectors. They also provide the means of defining orthogonality between vectors. Inner product spaces generalize Euclidean spaces to vector spaces of any dimension, and are studied in functional analysis. An inner product induces an associated norm; thus an inner product space is also a normed vector space. A complete space with an inner product is called a Hilbert space. An incomplete space with an inner product is called a pre-Hilbert space, since its completion with respect to the norm induced by the inner product is a Hilbert space. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces. In this article, the field of scalars denoted F is either the field of real numbers R or the field of complex numbers C. Formally, an inner product space is a vector space V over the field F together with an inner product, i.e. a map ⟨·, ·⟩ : V × V → F satisfying certain axioms (conjugate symmetry, linearity in the first argument, and positive-definiteness). Some authors, especially in physics and matrix algebra, prefer to define the inner product to be linear in the second argument; then the first argument becomes conjugate linear, rather than the second. In those disciplines, we would write the inner product ⟨x, y⟩ as ⟨y | x⟩, respectively y†x. Here the kets and columns are identified with the vectors of V. This reverse order is now occasionally followed in the more abstract literature, taking ⟨x, y⟩ to be conjugate linear in x rather than y. A few instead find a middle ground by recognizing both ⟨·, ·⟩ and ⟨· | ·⟩ as distinct notations differing only in which argument is conjugate linear. There are various technical reasons why it is necessary to restrict the base field to R and C in the definition.
Briefly, the base field has to contain an ordered subfield in order for non-negativity to make sense, and it has to have additional structure, such as a distinguished automorphism. More generally, any quadratically closed subfield of R or C will suffice for this purpose; however, in these cases, when it is a proper subfield, even finite-dimensional inner product spaces will fail to be metrically complete. In contrast, all finite-dimensional inner product spaces over R or C, such as those used in quantum computation, are automatically metrically complete. In some cases one needs to consider non-negative semi-definite sesquilinear forms; this means that ⟨x, x⟩ is only required to be non-negative
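The conventions above can be checked concretely. Below is a minimal sketch of a complex inner product on Cⁿ, conjugate linear in the second argument (the convention this article uses); the function names are illustrative, not from any library.

```python
import math

def inner(x, y):
    """Inner product <x, y> on C^n, conjugate linear in the second argument y."""
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm(x):
    """The norm induced by the inner product: ||x|| = sqrt(<x, x>)."""
    return math.sqrt(inner(x, x).real)

x = [1 + 1j, 2 - 1j]
y = [3 + 0j, 1j]

# Conjugate symmetry: <x, y> and <y, x> are complex conjugates.
assert inner(x, y) == inner(y, x).conjugate()

# Conjugate linearity in the second argument: <x, c*y> = conj(c) * <x, y>.
c = 2 - 3j
cy = [c * b for b in y]
assert inner(x, cy) == c.conjugate() * inner(x, y)

# <x, x> is real and non-negative, so the induced norm is well defined.
assert inner(x, x).imag == 0 and inner(x, x).real >= 0
print(norm(x))
```

Swapping which argument carries the conjugate (the physicists' ⟨y | x⟩ convention) amounts to replacing `b.conjugate()` with `a.conjugate()` in `inner`.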
51.
Clifford algebra
–
In mathematics, a Clifford algebra is a unital associative algebra generated by a vector space equipped with a quadratic form. As K-algebras, Clifford algebras generalize the real numbers, complex numbers, and quaternions. The theory of Clifford algebras is intimately connected with the theory of quadratic forms and orthogonal transformations. Clifford algebras have important applications in a variety of fields, including geometry and theoretical physics; they are named after the English geometer William Kingdon Clifford. The most familiar Clifford algebra, the orthogonal Clifford algebra, is also referred to as the Riemannian Clifford algebra. A Clifford algebra is an associative algebra that contains and is generated by a vector space V over a field K. One common way of making this precise is to construct the Clifford algebra as a quotient of the tensor algebra ⊕_{n≥0} V^{⊗n} by the two-sided ideal generated by the elements v ⊗ v − Q(v)·1 for v ∈ V. The product induced by the tensor product in this quotient algebra is written using juxtaposition, and its associativity follows from the associativity of the tensor product. The definition of a Clifford algebra endows the algebra with more structure than a bare K-algebra: specifically, it has a distinguished subspace V. Such a subspace cannot in general be uniquely determined given only a K-algebra isomorphic to the Clifford algebra. The idea of being the freest or most general algebra subject to this identity can be formally expressed through the notion of a universal property. Quadratic forms and Clifford algebras in characteristic 2 form an exceptional case: in particular, if char(K) = 2 it is not true that a quadratic form uniquely determines a symmetric bilinear form, or that every quadratic form admits an orthogonal basis. Many of the statements in this article include the condition that the characteristic is not 2. Clifford algebras are closely related to exterior algebras. 
In fact, if Q = 0 then the Clifford algebra Cℓ(V, Q) is just the exterior algebra Λ(V). For nonzero Q there exists a canonical linear isomorphism between Λ(V) and Cℓ(V, Q) whenever the ground field K does not have characteristic two; that is, they are isomorphic as vector spaces. Clifford multiplication, together with the distinguished subspace, is strictly richer than the exterior product, since it makes use of the extra information provided by Q. Another way of saying this is that if one takes the Clifford algebra to be a filtered algebra, then the associated graded algebra is the exterior algebra. More precisely, Clifford algebras may be thought of as quantizations of the exterior algebra. Weyl algebras and Clifford algebras admit a further structure of a *-algebra, and can be unified as the even and odd terms of a superalgebra, as discussed in the article on CCR and CAR algebras
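The claim that Clifford algebras generalize the quaternions can be made concrete: Cl(0,2), with two generators both squaring to −1, reproduces the quaternion multiplication table. Below is a toy construction (our own representation, not a library API): basis blades are sorted tuples of generator indices, and an element is a dict from blades to coefficients.

```python
def blade_product(a, b, squares):
    """Multiply basis blades a and b; return (sign, resulting blade)."""
    lst, sign = list(a) + list(b), 1
    i = 0
    while i < len(lst) - 1:
        if lst[i] > lst[i + 1]:          # anticommute adjacent generators
            lst[i], lst[i + 1] = lst[i + 1], lst[i]
            sign = -sign
            i = max(i - 1, 0)
        elif lst[i] == lst[i + 1]:       # e_k e_k contracts to Q(e_k)
            sign *= squares[lst[i]]
            del lst[i:i + 2]
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(lst)

def multiply(x, y, squares):
    """Clifford product of two elements (dicts blade -> coefficient)."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, blade = blade_product(ba, bb, squares)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {b: c for b, c in out.items() if c != 0}

SQ = {1: -1, 2: -1}                      # quadratic form: e1^2 = e2^2 = -1
i, j, k = {(1,): 1}, {(2,): 1}, {(1, 2): 1}

assert multiply(i, i, SQ) == {(): -1}                   # i^2 = -1
assert multiply(i, j, SQ) == k                          # ij = k
assert multiply(j, i, SQ) == {(1, 2): -1}               # ji = -k
assert multiply(multiply(i, j, SQ), k, SQ) == {(): -1}  # ijk = -1
```

Changing `SQ` to `{1: 1, 2: 1}` gives the orthogonal Clifford algebra of the Euclidean plane instead; setting both squares to 0 recovers the exterior algebra, as the text notes.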
52.
Geometric algebra
–
In practice, several derived operations are generally defined, and together these allow a correspondence of elements, subspaces, and operations of the algebra with physical interpretations. The scalars and vectors have their usual interpretation and make up distinct subspaces of a GA; a bivector can represent an oriented plane segment, a trivector an oriented volume, and so on. An element called a blade may be used to represent a subspace of V, and rotations and reflections are represented as elements of the algebra. Unlike vector algebra, a GA naturally accommodates any number of dimensions. Specific examples of geometric algebras applied in physics include the algebra of physical space, the spacetime algebra, and the conformal geometric algebra. Geometric algebra has been advocated, most notably by David Hestenes and Chris Doran; proponents claim that it provides compact and intuitive descriptions in many areas, including classical and quantum mechanics, electromagnetic theory, and relativity. GA has also found use as a computational tool in computer graphics and robotics. In 1878, William Kingdon Clifford greatly expanded on Grassmann's work to form what are now usually called Clifford algebras in his honor. For several decades, geometric algebras went somewhat ignored, greatly eclipsed by the vector calculus then newly developed to describe electromagnetism. The term geometric algebra was repopularized by Hestenes in the 1960s. Given a finite-dimensional real quadratic space V with a symmetric bilinear form g : V × V → R, the geometric algebra for this quadratic space is the Clifford algebra Cl(V, g). The algebra product is called the geometric product, and it is standard to denote it by juxtaposition. Since the above definition of the algebra is abstract, the properties of the geometric product are summarized by a set of axioms. Note that in the property above, the real number g(a, a) need not be nonnegative if g is not positive-definite. 
An important property of the geometric product is the existence of elements having a multiplicative inverse. If a² ≠ 0 for some vector a, then a⁻¹ exists and is equal to g(a, a)⁻¹ a. Not every nonzero element of the algebra necessarily has a multiplicative inverse: for example, if u is a vector in V such that u² = 1, then the nontrivial idempotent elements ½(1 ± u) have no inverse. It is usual to identify 1 ∈ R with 1 ∈ Cl(V, g); for the remainder of this article, this identification is assumed. Throughout, the term vector refers to an element of V. Pictorially, a and b are parallel if their geometric product is equal to their inner product. In a geometric algebra for which the square of any vector is positive
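The geometric product and the inverse formula a⁻¹ = g(a, a)⁻¹ a can be verified numerically. Below is a sketch of Cl(2,0) using a 2×2 matrix representation: `E1` and `E2` are two anticommuting matrices that square to the identity. All names are illustrative, not taken from any geometric-algebra library.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(A, c):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

E1 = [[0, 1], [1, 0]]     # e1: squares to +1
E2 = [[1, 0], [0, -1]]    # e2: squares to +1 and anticommutes with e1

def vec(a1, a2):
    """Embed the vector a1*e1 + a2*e2 as a matrix."""
    return mat_add(scale(E1, a1), scale(E2, a2))

a, b = vec(1, 2), vec(3, -1)

# The symmetric part (ab + ba)/2 of the geometric product is the inner
# product times the identity: here 1*3 + 2*(-1) = 1.
sym = scale(mat_add(mat_mul(a, b), mat_mul(b, a)), 0.5)
assert sym == [[1.0, 0.0], [0.0, 1.0]]

# A vector with a^2 != 0 is invertible, with a^(-1) = a / a^2.
a_sq = mat_mul(a, a)[0][0]          # a^2 = 1^2 + 2^2 = 5, a scalar
a_inv = scale(a, 1 / a_sq)
assert mat_mul(a, a_inv) == [[1.0, 0.0], [0.0, 1.0]]
```

The antisymmetric part (ab − ba)/2, not computed here, is the bivector a ∧ b; together the two parts recover the full geometric product ab = a·b + a∧b.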
53.
Differential operator
–
In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function. This article considers mainly linear operators, which are the most common type; however, non-linear differential operators, such as the Schwarzian derivative, also exist. Assume that there is a map A from a function space F1 to another function space F2. The most common differential operator is the action of taking the derivative itself. Common notations for taking the first derivative with respect to a variable x include d/dx, D, Dx, and ∂x. When taking higher, nth-order derivatives, the operator may also be written dⁿ/dxⁿ, Dⁿ, or Dxⁿ. The derivative of a function f of an argument x is sometimes given as either f′(x) or f′. Use of the D notation is credited to Oliver Heaviside. One of the most frequently seen differential operators is the Laplacian, defined by Δ = ∇² = Σ_{k=1}^{n} ∂²/∂x_k². Another differential operator is the Θ operator, or theta operator, defined by Θ = z d/dz; in one variable its eigenfunctions are the monomials, and in several variables the eigenspaces of Θ are the spaces of homogeneous polynomials. In writing, following common mathematical convention, the argument of a differential operator is usually placed on the right side of the operator itself. Sometimes an alternative notation is used, in which arrows over the operator indicate on which side it acts; such a bidirectional-arrow notation is used for describing the probability current of quantum mechanics. The differential operator del, also called the nabla operator, is an important vector differential operator, and it appears frequently in physics, for example in the differential form of Maxwell's equations. In three-dimensional Cartesian coordinates, del is defined as ∇ = x̂ ∂/∂x + ŷ ∂/∂y + ẑ ∂/∂z. Del is used to calculate the gradient, curl, divergence, and Laplacian of various objects. The adjoint of a differential operator is defined via the scalar product, so this definition depends on the definition of the scalar product. 
In the functional space of square-integrable functions on a real interval (a, b), the scalar product is defined by ⟨f, g⟩ = ∫_a^b f(x) g̅(x) dx. If one moreover adds the condition that f or g vanishes as x → a and x → b, one can also define the adjoint of T by a formula that does not explicitly depend on the definition of the scalar product; it is therefore chosen as the definition of the adjoint operator. When T* is defined according to this formula, it is called the formal adjoint of T. A (formally) self-adjoint operator is an operator equal to its own (formal) adjoint
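The idea of differentiation as an operation that accepts a function and returns another can be made concrete on polynomials. Below is a minimal sketch (the coefficient-list representation is our own choice) showing the theta operator Θ = x d/dx and the fact that monomials xᵏ are its eigenfunctions with eigenvalue k.

```python
def derivative(p):
    """d/dx on a polynomial stored as a coefficient list [c0, c1, c2, ...]."""
    return [k * c for k, c in enumerate(p)][1:]

def theta(p):
    """Theta = x d/dx: scales the coefficient of x^k by k."""
    return [k * c for k, c in enumerate(p)]

# x^3 (coefficients [0, 0, 0, 1]) is an eigenvector with eigenvalue 3.
assert theta([0, 0, 0, 1]) == [0, 0, 0, 3]

# Differential operators need not commute: the commutator
# [d/dx, Theta] = D(Theta p) - Theta(D p) acts as d/dx itself.
p = [1, 2, 3]                      # the polynomial 1 + 2x + 3x^2
lhs = derivative(theta(p))         # D(Theta p) = [2, 12]
rhs = theta(derivative(p))         # Theta(D p) = [0, 6]
assert [u - v for u, v in zip(lhs, rhs)] == derivative(p)
```

The commutator check at the end illustrates why the order in which operators are written matters, which is the point of the right-side convention mentioned above.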
54.
Exact sequence
–
An exact sequence is a concept in mathematics, especially in ring and module theory, homological algebra, as well as in differential geometry and group theory. An exact sequence is a sequence, either finite or infinite, of objects and morphisms between them such that the image of one morphism equals the kernel of the next. A similar definition can be made for other algebraic structures; for example, one could have an exact sequence of vector spaces and linear maps, or of modules and module homomorphisms. More generally, the notion of an exact sequence makes sense in any category with kernels and cokernels. To make sense of the definition, it is helpful to consider what it means in relatively simple cases where the sequence is finite and begins or ends with the trivial group. Traditionally, this group, consisting only of the identity element, is denoted 0 when the groups are abelian. The sequence 0 → A → B is exact at A if and only if the map from A to B has trivial kernel, i.e. if and only if that map is a monomorphism. Dually, the sequence B → C → 0 is exact at C if and only if the image of the map from B to C is all of C, i.e. if and only if that map is an epimorphism. Therefore, the sequence 0 → X → Y → 0 is exact if and only if the map from X to Y is both a monomorphism and an epimorphism, and thus, in many cases, an isomorphism. Especially important are short exact sequences, which are exact sequences of the form 0 → A →f B →g C → 0. As established above, for any such short exact sequence, f is a monomorphism and g is an epimorphism; furthermore, the image of f is equal to the kernel of g. It follows that if these are abelian groups and the sequence splits, B is isomorphic to the direct sum of A and C: B ≅ A ⊕ C. A long exact sequence is an exact sequence with more than three nonzero terms, often infinitely many. The second operation forms an element in the quotient group, j = i mod 2. Here the hook arrow ↪ indicates that the map 2× from Z to Z is a monomorphism, and the two-headed arrow ↠ indicates an epimorphism. 
This is an exact sequence because the image 2Z of the monomorphism is the kernel of the epimorphism. The image of 2Z under this monomorphism is, however, exactly the same subset of Z as the image of Z under n ↦ 2n used in the previous sequence. The latter sequence does differ in the nature of its first object from the previous one, as 2Z is not the same set as Z even though the two are isomorphic as groups; it is not possible for a group to be mapped by inclusion as a proper subgroup of itself. This example makes use of the fact that 3-dimensional space is topologically trivial; H(curl) and H(div) are the domains for the curl and div operators, respectively. In this case, we say that the exact sequence splits. The snake lemma shows how a commutative diagram with two exact rows gives rise to an exact sequence
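The exactness conditions for the sequence 0 → Z →(×2) Z →(mod 2) Z/2 → 0 discussed above can be spot-checked computationally. The sketch below is an illustration on a finite window of integers, not a proof; the helper names are ours.

```python
N = 50
window = range(-N, N + 1)

def double(n):
    return 2 * n

def mod2(n):
    return n % 2

# Exact at the first Z: multiplication by 2 is injective (a monomorphism).
assert len({double(n) for n in window}) == len(window)

# Exact at the middle Z: the image of x2 equals the kernel of mod 2,
# restricted to the finite window we can inspect.
image = {double(n) for n in window if -N <= double(n) <= N}
kernel = {n for n in window if mod2(n) == 0}
assert image == kernel

# Exact at Z/2: reduction mod 2 is surjective (an epimorphism).
assert {mod2(n) for n in window} == {0, 1}
```

The middle assertion is the defining condition of exactness: image of the incoming map equals kernel of the outgoing one.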
55.
De Rham cohomology
–
It is a cohomology theory based on the existence of differential forms with prescribed properties. The de Rham complex is the cochain complex of exterior differential forms on a smooth manifold M: 0 → Ω⁰ →d Ω¹ →d Ω² →d Ω³ → ⋯, where Ω⁰ is the space of smooth functions on M, Ω¹ is the space of 1-forms, and so forth. Every exact form is closed; the converse, however, is not true: closed forms need not be exact. A simple but significant case is the 1-form of angle measure on the unit circle. We can, however, change the topology by removing just one point. The idea of de Rham cohomology is to classify the different types of closed forms on a manifold. One performs this classification by saying that two closed forms α, β ∈ Ωᵏ are cohomologous if they differ by an exact form; this induces an equivalence relation on the space of closed forms in Ωᵏ. One then defines the k-th de Rham cohomology group H^k_dR to be the set of equivalence classes. Note that for any manifold M with n connected components, H^0_dR(M) ≅ Rⁿ; this follows from the fact that any smooth function on M with zero derivative is constant on each of the connected components of M. One may often find the general de Rham cohomologies of a manifold using this fact about the zero cohomology. Another useful fact is that the de Rham cohomology is a homotopy invariant. For the n-sphere, let n > 0, m ≥ 0, and let I be an open real interval; then H^k_dR(Sⁿ × Iᵐ) ≃ R if k = 0 or n, and 0 otherwise. Similarly, for the n-torus (allowing n > 0), we obtain H^k_dR(Tⁿ) ≃ R^(n choose k). Punctured Euclidean space is simply Euclidean space with the origin removed. Stokes' theorem is an expression of duality between de Rham cohomology and the homology of chains; it says that the pairing of differential forms and chains, via integration, gives a homomorphism from de Rham cohomology H^k_dR to singular cohomology groups Hᵏ. De Rham's theorem, proved by Georges de Rham in 1931, asserts that this map is in fact an isomorphism between de Rham cohomology and singular cohomology. 
The wedge product endows the direct sum of these groups with a ring structure. A further result of the theorem is that the two cohomology rings are isomorphic, where the analogous product on singular cohomology is the cup product. Let Ωᵏ denote the sheaf of germs of k-forms on M. By the Poincaré lemma, the following sequence of sheaves is exact: 0 → R → Ω⁰ →d Ω¹ →d Ω² →d ⋯ →d Ωᵐ → 0. This sequence breaks up into short exact sequences 0 → dΩ^(k−1) ↪ Ωᵏ →d dΩᵏ → 0, each of which induces a long exact sequence in cohomology
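The classic closed-but-not-exact form mentioned above, the angle form dθ = (x dy − y dx)/(x² + y²) on the plane minus the origin, can be probed numerically: its integral around a loop enclosing the origin is 2π, while the integral of any exact form over a closed loop is 0. A midpoint-rule sketch (function names are illustrative):

```python
import math

def integrate_dtheta(path, n=20000):
    """Integrate (x dy - y dx)/(x^2 + y^2) along a closed path t -> (x, y), t in [0, 1]."""
    total = 0.0
    for i in range(n):
        x0, y0 = path(i / n)
        x1, y1 = path((i + 1) / n)
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2     # midpoint of the chord
        total += (xm * (y1 - y0) - ym * (x1 - x0)) / (xm**2 + ym**2)
    return total

# The unit circle winds once around the origin: the integral is close to 2*pi,
# so dtheta is closed but cannot be exact on the punctured plane.
circle = lambda t: (math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))
print(integrate_dtheta(circle))

# A loop that does not enclose the origin integrates to approximately 0.
far_circle = lambda t: (3 + math.cos(2 * math.pi * t), math.sin(2 * math.pi * t))
print(integrate_dtheta(far_circle))
```

This is exactly the nontrivial class generating H¹ of the circle (equivalently of the punctured plane, to which it is homotopy equivalent).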
56.
Jacobian matrix and determinant
–
In vector calculus, the Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued function. When the matrix is square, both the matrix and its determinant are referred to as the Jacobian in the literature. Suppose f : ℝⁿ → ℝᵐ is a function which takes as input the vector x ∈ ℝⁿ. Then the Jacobian matrix J of f is an m×n matrix whose (i, j) entry is ∂f_i/∂x_j. This matrix, whose entries are functions of x, is also denoted by Df, Jf, and ∂(f₁, …, f_m)/∂(x₁, …, x_n). The Jacobian matrix represents the best linear approximation of f near x; this linear map is thus the generalization of the notion of derivative of a function of a single variable. If m = n, the Jacobian matrix is a square matrix, and its determinant, the Jacobian determinant, carries important information about the local behavior of f. In particular, the function f has, locally in the neighborhood of a point x, an inverse function that is differentiable if and only if the Jacobian determinant is nonzero at x. The Jacobian determinant also appears when changing the variables in multiple integrals. If m = 1, f is a scalar field and the Jacobian matrix reduces to a row vector of partial derivatives of f, i.e. the gradient of f. These concepts are named after the mathematician Carl Gustav Jacob Jacobi. The Jacobian generalizes the gradient of a scalar-valued function of multiple variables, which itself generalizes the derivative of a scalar-valued function of a single variable; in other words, the Jacobian matrix of a scalar-valued multivariate function is its gradient. The Jacobian can also be thought of as describing the amount of stretching, rotating, or transforming that a transformation imposes locally. For example, if f is used to transform an image, the Jacobian Jf describes how the image in the neighborhood of a point is transformed. If p is a point in ℝⁿ and f is differentiable at p, compare this to a Taylor series for a scalar function of a scalar argument, truncated to first order: f(x) = f(p) + f′(p)(x − p) + o(x − p). The Jacobian of the gradient of a scalar function of several variables has a special name: the Hessian matrix. 
If m = n, then f is a function from ℝⁿ to itself and we can form its determinant, known as the Jacobian determinant. The Jacobian determinant is occasionally referred to simply as the Jacobian. The Jacobian determinant at a given point gives important information about the behavior of f near that point. For instance, the continuously differentiable function f is invertible near a point p ∈ ℝⁿ if the Jacobian determinant at p is non-zero; this is the inverse function theorem. Furthermore, if the Jacobian determinant at p is positive, then f preserves orientation near p; if it is negative, f reverses orientation
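These properties can be checked numerically. The sketch below approximates the Jacobian by central finite differences and applies it to the polar-to-Cartesian map f(r, t) = (r cos t, r sin t), whose Jacobian determinant is r, the familiar change-of-variables factor in double integrals. The helper names here are illustrative.

```python
import math

def f(r, t):
    return (r * math.cos(t), r * math.sin(t))

def jacobian(func, p, h=1e-6):
    """Approximate the m x n Jacobian matrix of func at point p."""
    m = len(func(*p))
    J = []
    for i in range(m):
        row = []
        for j in range(len(p)):
            plus, minus = list(p), list(p)
            plus[j] += h
            minus[j] -= h
            row.append((func(*plus)[i] - func(*minus)[i]) / (2 * h))
        J.append(row)
    return J

def det2(J):
    """Determinant of a 2x2 matrix."""
    return J[0][0] * J[1][1] - J[0][1] * J[1][0]

p = (2.0, math.pi / 3)
print(det2(jacobian(f, p)))   # close to r = 2.0, and positive,
                              # so f preserves orientation at this point
```

Since the determinant at p is positive and nonzero, the inverse function theorem guarantees a differentiable local inverse (Cartesian back to polar) near f(p).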