1.
Mathematics
–
Mathematics is the study of topics such as quantity, structure, space, and change. There is a range of views among mathematicians and philosophers as to its exact scope. Mathematicians seek out patterns and use them to formulate new conjectures, and they resolve the truth or falsity of conjectures by mathematical proof. When mathematical structures are good models of real phenomena, mathematical reasoning can provide insight or predictions about nature. Through the use of abstraction and logic, mathematics developed from counting, calculation, and measurement; practical mathematics has been a human activity from as far back as written records exist. The research required to solve mathematical problems can take years or even centuries of sustained inquiry. Rigorous arguments first appeared in Greek mathematics, most notably in Euclid's Elements. Galileo Galilei said, "The universe cannot be read until we have learned the language and become familiar with the characters in which it is written. It is written in mathematical language, and the letters are triangles, circles and other geometrical figures, without which means it is humanly impossible to comprehend a single word. Without these, one is wandering about in a dark labyrinth." Carl Friedrich Gauss referred to mathematics as "the Queen of the Sciences". Benjamin Peirce called mathematics "the science that draws necessary conclusions". David Hilbert said of mathematics: "We are not speaking here of arbitrariness in any sense. Mathematics is not like a game whose tasks are determined by arbitrarily stipulated rules. Rather, it is a conceptual system possessing internal necessity that can only be so and by no means otherwise." Albert Einstein stated that "as far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." Mathematics is essential in many fields, including natural science, engineering, medicine, finance and the social sciences.
Applied mathematics has led to entirely new mathematical disciplines, such as statistics and game theory. Mathematicians also engage in pure mathematics, or mathematics for its own sake, without having any application in mind. There is no clear line separating pure and applied mathematics, and practical applications for what began as pure mathematics are often discovered later. The history of mathematics can be seen as an ever-increasing series of abstractions. The earliest uses of mathematics were in trading, land measurement, painting and weaving patterns, and the recording of time. In Babylonian mathematics, elementary arithmetic (addition, subtraction, multiplication and division) first appears in the archaeological record. Numeracy pre-dated writing, and numeral systems have been many and diverse. Between 600 and 300 BC the Ancient Greeks began a systematic study of mathematics in its own right. Mathematics has since been greatly extended, and there has been a fruitful interaction between mathematics and science, to the benefit of both. Mathematical discoveries continue to be made today; the overwhelming majority of works in this ocean contain new mathematical theorems and their proofs. The word máthēma is derived from the Greek μανθάνω (manthano), while the modern Greek equivalent is μαθαίνω (mathaino); both mean "to learn". In Greece, the word for mathematics came to have the narrower and more technical meaning "mathematical study" even in Classical times.
2.
Differential geometry
–
Differential geometry is a mathematical discipline that uses the techniques of differential calculus, integral calculus, linear algebra and multilinear algebra to study problems in geometry. The theory of plane and space curves and of surfaces in three-dimensional Euclidean space formed the basis for the development of differential geometry during the 18th century. Since the late 19th century, differential geometry has grown into a field concerned more generally with geometric structures on differentiable manifolds. It is closely related to differential topology and to the geometric aspects of the theory of differential equations. The differential geometry of surfaces captures many of the key ideas and techniques characteristic of the field. Differential geometry arose and developed as a result of, and in connection with, the mathematical analysis of curves and surfaces; unanswered questions about them indicated greater, hidden relationships. Initially applied to Euclidean space, further explorations led to non-Euclidean spaces, and to metric and topological spaces. Riemannian geometry studies Riemannian manifolds: smooth manifolds with a Riemannian metric, that is, a concept of distance expressed by means of a smooth positive definite symmetric bilinear form defined on the tangent space at each point. Various concepts based on length, such as the arc length of curves and the area of plane regions, possess natural analogues in this setting, and the notion of a directional derivative of a function from multivariable calculus is extended in Riemannian geometry to the notion of a covariant derivative of a tensor. Many concepts and techniques of analysis and differential equations have been generalized to the setting of Riemannian manifolds. A distance-preserving diffeomorphism between Riemannian manifolds is called an isometry. This notion can also be defined locally, i.e. for small neighborhoods of points; any two regular curves are locally isometric.
In higher dimensions, the Riemann curvature tensor is an important pointwise invariant associated with a Riemannian manifold that measures how close it is to being flat. An important class of Riemannian manifolds is the Riemannian symmetric spaces, whose curvature is not necessarily constant. These are the closest analogues to the plane and space considered in Euclidean and non-Euclidean geometry. Pseudo-Riemannian geometry generalizes Riemannian geometry to the case in which the metric tensor need not be positive-definite. A special case of this is a Lorentzian manifold, which is the mathematical basis of Einstein's general relativity theory of gravity. Finsler geometry has the Finsler manifold as its main object of study. This is a differential manifold with a Finsler metric, i.e. a Banach norm defined on each tangent space; Riemannian manifolds are special cases of the more general Finsler manifolds. A Finsler structure on a manifold M is a function F : TM → [0, ∞) such that F(x, my) = |m| F(x, y) for all (x, y) in TM, and F is infinitely differentiable on TM ∖ {0}. Symplectic geometry is the study of symplectic manifolds. A symplectic manifold is an almost symplectic manifold (a manifold equipped with a smoothly varying non-degenerate skew-symmetric bilinear form ω on each tangent space) for which the symplectic form ω is closed: dω = 0. A diffeomorphism between two symplectic manifolds which preserves the symplectic form is called a symplectomorphism. Non-degenerate skew-symmetric bilinear forms can only exist on even-dimensional vector spaces, so symplectic manifolds necessarily have even dimension. In dimension 2, a symplectic manifold is just a surface endowed with an area form, and a symplectomorphism is an area-preserving diffeomorphism.
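In dimension 2 the symplectomorphism condition is concrete: a map of the plane preserves the area form exactly when its Jacobian determinant is 1 everywhere. The following sketch (function names are illustrative, not from any source) checks this numerically for a few simple maps.

```python
# Sketch: in dimension 2 a symplectomorphism of (R^2, dx ∧ dy) is exactly an
# area-preserving map, i.e. one whose Jacobian determinant is 1 everywhere.
# We estimate the determinant with central finite differences.

def jacobian_det(f, x, y, h=1e-6):
    """Approximate det of the Jacobian of f: R^2 -> R^2 at (x, y)."""
    fx = [(f(x + h, y)[i] - f(x - h, y)[i]) / (2 * h) for i in range(2)]
    fy = [(f(x, y + h)[i] - f(x, y - h)[i]) / (2 * h) for i in range(2)]
    return fx[0] * fy[1] - fx[1] * fy[0]

def shear(x, y):
    """A linear shear: preserves the area form dx ∧ dy."""
    return (x + y, y)

def squeeze(x, y):
    """Doubles x and halves y: also area-preserving, det J = 2 * 0.5 = 1."""
    return (2 * x, 0.5 * y)

def dilation(x, y):
    """Scaling by 2 multiplies areas by 4: not a symplectomorphism."""
    return (2 * x, 2 * y)

for f in (shear, squeeze, dilation):
    print(f.__name__, round(jacobian_det(f, 0.3, -0.7), 6))
```

For linear maps the finite-difference estimate is exact up to rounding; for nonlinear maps the same check applies pointwise.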
3.
Multivariable calculus
–
A study of limits and continuity in multivariable calculus yields many counter-intuitive results not demonstrated by single-variable functions. For example, the function f(x, y) = x²y / (x⁴ + y²) approaches zero along any line through the origin; however, when the origin is approached along the parabola y = x², the function has a limit of 0.5. Since taking different paths toward the same point yields different values for the limit, the limit does not exist there. Continuity in each argument is not sufficient for multivariate continuity: in the case of a function f with two real-valued parameters, continuity of f in x for fixed y and continuity of f in y for fixed x does not imply continuity of f. Consider

f(x, y) = { y/x − y   if 1 ≥ x > y ≥ 0
            x/y − x   if 1 ≥ y > x ≥ 0
            1 − x     if x = y > 0
            0         otherwise.

It is easy to verify that all the one-variable functions fy(x) := f(x, y) are continuous in x for each fixed y; similarly, all fx(y) := f(x, y) are continuous, as f is symmetric with regard to x and y. However, f itself is not continuous, as can be seen by considering the sequence f(1/n, 1/n), which should converge to f(0, 0) = 0 if f were continuous; instead, lim (n → ∞) f(1/n, 1/n) = 1. Thus, the function is not continuous at (0, 0). The partial derivative generalizes the notion of the derivative to higher dimensions: a partial derivative of a function is a derivative with respect to one variable with all other variables held constant. Partial derivatives may be combined in interesting ways to create more complicated expressions of the derivative. In vector calculus, the del operator (∇) is used to define the concepts of gradient, divergence, and curl. A matrix of partial derivatives, the Jacobian matrix, may be used to represent the derivative of a function between two spaces of arbitrary dimension. The derivative can thus be understood as a linear transformation which varies from point to point in the domain of the function. Differential equations containing partial derivatives are called partial differential equations or PDEs; these are generally more difficult to solve than ordinary differential equations.
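The path-dependence of the limit above can be illustrated numerically; as a sketch, the following evaluates f along a straight line (slope 3 chosen arbitrarily) and along the parabola y = x² as the origin is approached.

```python
# Numerical illustration: f(x, y) = x^2*y / (x^4 + y^2) tends to 0 along
# every straight line through the origin, but to 1/2 along y = x^2.

def f(x, y):
    return x**2 * y / (x**4 + y**2) if (x, y) != (0, 0) else 0.0

# Approach along the line y = 3x: values shrink toward 0.
line_values = [f(t, 3 * t) for t in (0.1, 0.01, 0.001, 0.0001)]

# Approach along the parabola y = x^2: values are identically 0.5.
parab_values = [f(t, t**2) for t in (0.1, 0.01, 0.001, 0.0001)]

print(line_values)
print(parab_values)
```

Along y = 3x one has f(t, 3t) = 3t / (t² + 9) → 0, while along y = x² one has f(t, t²) = t⁴ / (2t⁴) = 1/2 exactly, confirming the two path limits disagree.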
The multiple integral expands the concept of the integral to functions of any number of variables. Double and triple integrals may be used to calculate areas and volumes of regions in the plane and in space. Fubini's theorem guarantees that a multiple integral may be evaluated as a repeated integral or iterated integral as long as the integrand is continuous throughout the domain of integration. The surface integral and the line integral are used to integrate over curved manifolds such as surfaces and curves. In single-variable calculus, the fundamental theorem of calculus establishes a link between the derivative and the integral.
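Fubini's theorem can be checked numerically on a simple continuous integrand; this sketch (helper names are made up for the example) evaluates both iterated integrals of f(x, y) = xy over the unit square with a midpoint rule.

```python
# Sketch: checking Fubini's theorem numerically for f(x, y) = x*y on [0,1]^2.
# Both iterated integrals should give the double integral's value, 1/4.

def integrate_1d(g, a, b, n=500):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

def f(x, y):
    return x * y

# Iterated one way: integrate over y first, then x.
inner_y = lambda x: integrate_1d(lambda y: f(x, y), 0.0, 1.0)
iter_xy = integrate_1d(inner_y, 0.0, 1.0)

# Iterated the other way: integrate over x first, then y.
inner_x = lambda y: integrate_1d(lambda x: f(x, y), 0.0, 1.0)
iter_yx = integrate_1d(inner_x, 0.0, 1.0)

print(iter_xy, iter_yx)  # both close to 0.25
```

Because f is linear in each variable separately, the midpoint rule here is exact up to floating-point error, so the two orders of integration agree to high precision.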
4.
Coordinate
–
The order of the coordinates is significant, and they are sometimes identified by their position in an ordered tuple and sometimes by a letter, as in "the x-coordinate". The coordinates are taken to be real numbers in elementary mathematics. The use of a coordinate system allows problems in geometry to be translated into problems about numbers and vice versa; this is the basis of analytic geometry. The simplest example of a coordinate system is the identification of points on a line with real numbers using the number line. In this system, an arbitrary point O (the origin) is chosen on a given line, and the coordinate of a point P is defined as the signed distance from O to P. Each point is given a unique coordinate and each real number is the coordinate of a unique point. The prototypical example of a coordinate system is the Cartesian coordinate system. In the plane, two perpendicular lines are chosen and the coordinates of a point are taken to be the signed distances to the lines. In three dimensions, three mutually perpendicular planes are chosen and the three coordinates of a point are the signed distances to each of the planes. This can be generalized to create n coordinates for any point in n-dimensional Euclidean space. Depending on the direction and order of the coordinate axes, the three-dimensional system may be a right-handed or a left-handed system. This is one of many coordinate systems. Another common coordinate system for the plane is the polar coordinate system. A point is chosen as the pole and a ray from this point is taken as the polar axis. For a given angle θ, there is a single line through the pole whose angle with the polar axis is θ, and there is a unique point on this line whose signed distance from the origin is r for a given number r. For a given pair of coordinates (r, θ) there is a single point, but any point is represented by many pairs of coordinates: for example, (r, θ), (r, θ + 2π) and (−r, θ + π) are all polar coordinates for the same point. The pole is represented by (0, θ) for any value of θ. There are two common methods for extending the polar coordinate system to three dimensions.
In the cylindrical coordinate system, a z-coordinate with the same meaning as in Cartesian coordinates is added to the r and θ polar coordinates, giving a triple (r, θ, z). Spherical coordinates take this a step further by converting the pair of cylindrical coordinates (r, z) to polar coordinates (ρ, φ), giving a triple (ρ, θ, φ). A point in the plane may be represented in homogeneous coordinates by a triple (x, y, z) where x/z and y/z are the Cartesian coordinates of the point.
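The conversions between these systems are short formulas; the following sketch (function names are illustrative) converts Cartesian coordinates to polar, cylindrical, and spherical triples under the conventions described above.

```python
# Sketch: converting Cartesian coordinates to polar, cylindrical, and
# spherical coordinates. Angle conventions: theta is the azimuthal angle in
# the xy-plane, phi the polar angle measured from the positive z-axis.
import math

def cartesian_to_polar(x, y):
    return (math.hypot(x, y), math.atan2(y, x))       # (r, theta)

def polar_to_cartesian(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

def cartesian_to_cylindrical(x, y, z):
    r, theta = cartesian_to_polar(x, y)
    return (r, theta, z)                              # z kept as in Cartesian

def cartesian_to_spherical(x, y, z):
    rho = math.sqrt(x * x + y * y + z * z)            # distance to the origin
    theta = math.atan2(y, x)                          # azimuthal angle
    phi = math.acos(z / rho) if rho else 0.0          # polar angle
    return (rho, theta, phi)

print(cartesian_to_polar(1.0, 1.0))           # r = sqrt(2), theta = pi/4
print(cartesian_to_spherical(0.0, 0.0, 2.0))  # on the z-axis: rho = 2, phi = 0
```

Note that the representation is not unique, exactly as the text describes: adding 2π to θ yields the same point.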
5.
Integral
–
In mathematics, an integral assigns numbers to functions in a way that can describe displacement, area, volume, and other concepts that arise by combining infinitesimal data. Integration is one of the two main operations of calculus, with its inverse, differentiation, being the other. For a definite integral, the area above the x-axis adds to the total and that below the x-axis subtracts from the total. Roughly speaking, the operation of integration is the reverse of differentiation. For this reason, the term integral may also refer to the related notion of the antiderivative; in this case, it is called an indefinite integral and is written ∫ f dx. The integrals discussed in this article are those termed definite integrals. A rigorous mathematical definition of the integral was given by Bernhard Riemann; it is based on a limiting procedure which approximates the area of a curvilinear region by breaking the region into thin vertical slabs. A line integral is defined for functions of two or three variables, and the interval of integration is replaced by a curve connecting two points on the plane or in space; in a surface integral, the curve is replaced by a piece of a surface in three-dimensional space. The method of exhaustion of the ancient Greeks was further developed and employed by Archimedes in the 3rd century BC and used to calculate areas for parabolas and an approximation to the area of a circle. A similar method was developed in China around the 3rd century AD by Liu Hui. This method was later used in the 5th century by the Chinese father-and-son mathematicians Zu Chongzhi and Zu Geng to find the volume of a sphere. The next significant advances in integral calculus did not begin to appear until the 17th century. Further steps were made in the early 17th century by Barrow and Torricelli, who provided the first hints of a connection between integration and differentiation; Barrow provided the first proof of the fundamental theorem of calculus. Wallis generalized Cavalieri's method, computing integrals of x to a general power, including negative powers and fractional powers.
The major advance in integration came in the 17th century with the independent discovery of the fundamental theorem of calculus by Newton and Leibniz. The theorem demonstrates a connection between integration and differentiation; this connection, combined with the comparative ease of differentiation, can be exploited to calculate integrals. In particular, the fundamental theorem of calculus allows one to solve a much broader class of problems. Equal in importance is the comprehensive mathematical framework that both Newton and Leibniz developed.
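The connection between integration and differentiation described above can be seen numerically: a Riemann sum converges to the difference of antiderivative values F(b) − F(a). A minimal sketch (the helper name is made up):

```python
# Sketch: the fundamental theorem of calculus in action. A Riemann sum for
# the integral of f(x) = 3x^2 over [0, 1] approaches F(1) - F(0) = 1, where
# F(x) = x^3 is an antiderivative of f.

def riemann_sum(f, a, b, n):
    """Left Riemann sum of f over [a, b] with n thin vertical slabs."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

f = lambda x: 3 * x**2
F = lambda x: x**3              # an antiderivative of f

approx = riemann_sum(f, 0.0, 1.0, 100000)
exact = F(1.0) - F(0.0)
print(approx, exact)            # the Riemann sum approaches 1.0 as n grows
```

The thin-vertical-slab construction here is exactly Riemann's limiting procedure mentioned in the text; the antiderivative route gives the same answer with no limit process at all.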
6.
Manifold
–
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighbourhood that is homeomorphic to the Euclidean space of dimension n. One-dimensional manifolds include lines and circles, but not figure eights; two-dimensional manifolds are also called surfaces. Although a manifold locally resembles Euclidean space, globally it may not: for example, the surface of the sphere is not a Euclidean space, but in a region it can be charted by means of map projections of the region into the Euclidean plane. When a region appears in two neighbouring charts, the two representations do not coincide exactly and a transformation is needed to pass from one to the other. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions. One important class of manifolds is the class of differentiable manifolds; this differentiable structure allows calculus to be done on manifolds. A Riemannian metric on a manifold allows distances and angles to be measured. Symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity. After a line, the circle is the simplest example of a topological manifold. Topology ignores bending, so a small piece of a circle is treated exactly the same as a small piece of a line. Consider, for instance, the top part of the circle x² + y² = 1, where the y-coordinate is positive. Any point of this arc can be described by its x-coordinate, so projection onto the first coordinate is a continuous, and invertible, mapping from the arc to the open interval (−1, 1). Such functions, along with the regions they map, are called charts. Similarly, there are charts for the bottom, left, and right parts of the circle; together, these parts cover the whole circle and the four charts form an atlas for the circle.
The top and right charts, χtop and χright respectively, overlap in their domains: their intersection lies in the quarter of the circle where both coordinates are positive, and each chart maps this part into an interval, though differently. Let a be any number in (0, 1); then

T(a) = χright(χtop⁻¹(a)) = χright(a, √(1 − a²)) = √(1 − a²).

Such a function is called a transition map. The top, bottom, left, and right charts show that the circle is a manifold, but charts need not be geometric projections, and the number of charts is a matter of some choice. Two slope charts, s = y/(1 + x) and t = y/(1 − x), provide a second atlas for the circle, with t = 1/s on the overlap. Each chart omits a single point, either (−1, 0) for s or (1, 0) for t, and it can be proved that it is not possible to cover the full circle with a single chart. Viewed using calculus, the transition function T is simply a function between open intervals, which gives a meaning to the statement that T is differentiable.
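The charts and the transition map above can be written out directly; this sketch (names mirror the text's χtop and χright, but the code itself is illustrative) verifies T(a) = √(1 − a²) on the overlap.

```python
# Sketch of the circle charts described above: project the upper arc onto its
# x-coordinate and the right arc onto its y-coordinate, then compose to get
# the transition map on the overlap (the quarter with x > 0 and y > 0).
import math

def chi_top(x, y):
    return x                              # chart on the upper arc: record x

def chi_top_inv(a):
    return (a, math.sqrt(1 - a * a))      # back to the circle, upper arc

def chi_right(x, y):
    return y                              # chart on the right arc: record y

def transition(a):
    """The transition map T = chi_right o chi_top^-1 for 0 < a < 1."""
    return chi_right(*chi_top_inv(a))

a = 0.6
print(transition(a), math.sqrt(1 - a * a))  # both equal 0.8
```

Since T is given by an explicit smooth formula on (0, 1), it is differentiable, exactly as the closing sentence of the section states.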
7.
Surface integral
–
In mathematics, a surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double integral analog of the line integral. Given a surface, one may integrate over its scalar fields and its vector fields. Surface integrals have applications in physics, particularly within the theory of classical electromagnetism. To compute the surface integral of a scalar field f, let the surface S have a parameterization x(s, t), where (s, t) varies in some region T in the plane. Then the surface integral is given by

∬S f dΣ = ∬T f(x(s, t)) ‖∂x/∂s × ∂x/∂t‖ ds dt,

which can also be expressed in the equivalent form ∬S f dΣ = ∬T f(x(s, t)) √g ds dt, where g is the determinant of the first fundamental form of the mapping x(s, t). For example, for the graph of a scalar function z = f(x, y), parameterized by r(x, y) = (x, y, f(x, y)), one has ∂r/∂x = (1, 0, ∂f/∂x) and ∂r/∂y = (0, 1, ∂f/∂y), and one can recognize the vector in the cross product ∂r/∂x × ∂r/∂y as the normal vector to the surface. Note that because of the presence of the cross product, the above formulas only work for surfaces embedded in three-dimensional space. This can be seen as integrating a Riemannian volume form on the parameterized surface. Now consider a vector field v on S, that is, for each x in S, v(x) is a vector. The surface integral of each component can be defined according to the definition of the surface integral of a scalar field; this applies, for example, in the expression of the electric field at some fixed point due to an electrically charged surface. Alternatively, we may integrate the normal component of the vector field: imagine that we have a fluid flowing through S, such that v(x) determines the velocity of the fluid at x. The flux is defined as the quantity of fluid flowing through S per unit time. This illustration implies that if the vector field is tangent to S at each point, then the flux is zero, because the fluid just flows in parallel to S, neither in nor out. This also implies that if v does not just flow along S, then only the normal component contributes to the flux, and we find the formula

∬S v · dΣ = ∬S (v · n) dΣ = ∬T (v · n) ‖∂x/∂s × ∂x/∂t‖ ds dt = ∬T v(x(s, t)) · (∂x/∂s × ∂x/∂t) ds dt.

The cross product on the right-hand side of this expression is a (not necessarily unit) surface normal determined by the parametrization.
This formula defines the integral on the left (note the dot and the vector notation for the surface element). We may also interpret this as a special case of integrating 2-forms, where we identify the vector field with a 1-form and then integrate its Hodge dual over the surface; the transformation formulas for the forms are similar. For a 2-form with components fx, fy and fz and an orientation-preserving parameterization x(s, t) with (s, t) in D, the integral of f on S is given by

∬D [ fx ∂(y, z)/∂(s, t) + fy ∂(z, x)/∂(s, t) + fz ∂(x, y)/∂(s, t) ] ds dt,

where ∂x/∂s × ∂x/∂t = ( ∂(y, z)/∂(s, t), ∂(z, x)/∂(s, t), ∂(x, y)/∂(s, t) ) is the surface element normal to S. Let us note that the integral of this 2-form is the same as the surface integral of the vector field which has fx, fy and fz as components.
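The flux formula ∬T v · (∂x/∂s × ∂x/∂t) ds dt can be evaluated numerically; as a sketch (all function names are made up for the example), the following computes the flux of v(x, y, z) = (x, y, z) through the unit sphere, whose exact value is 4π.

```python
# Sketch: numeric flux of v(x) = x through the unit sphere, parameterized by
# x(s, t) = (sin s cos t, sin s sin t, cos s) with s in [0, pi], t in [0, 2pi],
# using the formula  flux = double integral of v . (dx/ds x dx/dt) ds dt.
import math

def x(s, t):
    return (math.sin(s) * math.cos(t), math.sin(s) * math.sin(t), math.cos(s))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def partial(f, s, t, i, h=1e-6):
    """Central finite-difference partial of f in argument i (0: s, 1: t)."""
    ds, dt = (h, 0.0) if i == 0 else (0.0, h)
    p, m = f(s + ds, t + dt), f(s - ds, t - dt)
    return tuple((pi - mi) / (2 * h) for pi, mi in zip(p, m))

def flux(n=200):
    total, hs, ht = 0.0, math.pi / n, 2 * math.pi / n
    for i in range(n):
        for j in range(n):
            s, t = (i + 0.5) * hs, (j + 0.5) * ht
            v = x(s, t)                    # the field v(x) = x on the sphere
            normal = cross(partial(x, s, t, 0), partial(x, s, t, 1))
            total += sum(vi * ni for vi, ni in zip(v, normal)) * hs * ht
    return total

print(flux(), 4 * math.pi)  # both approximately 12.566
```

Here the cross product of the two parameter derivatives plays exactly the role of the (non-unit) surface normal in the formula above; for this parameterization it points outward, so the flux comes out positive.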
8.
Orientability
–
In mathematics, orientability is a property of surfaces in Euclidean space that measures whether it is possible to make a consistent choice of surface normal vector at every point. A choice of surface normal allows one to use the right-hand rule to define a "clockwise" direction of loops in the surface. More generally, orientability of an abstract surface, or manifold, measures whether one can consistently choose a "clockwise" orientation for all loops. Equivalently, a surface is orientable if a two-dimensional figure in the space cannot be moved around the space and back to where it started so that it looks like its own mirror image. The notion of orientability can be generalised to higher-dimensional manifolds as well. A manifold is orientable if it has a consistent choice of orientation, and a connected orientable manifold has exactly two different possible orientations. In this setting, various equivalent formulations of orientability can be given, depending on the desired application and level of generality. A surface S in the Euclidean space R3 is orientable if a two-dimensional figure cannot be moved around the surface and back to where it started so that it looks like its own mirror image. An abstract surface is orientable if a consistent concept of clockwise rotation can be defined on the surface in a continuous manner; that is to say, a loop going around one way on the surface can never be continuously deformed to a loop going around the opposite way. This turns out to be equivalent to the question of whether the surface contains no subset that is homeomorphic to the Möbius strip. Thus, for surfaces, the Möbius strip may be considered the source of all non-orientability. For an orientable surface, a consistent choice of "clockwise" is called an orientation, and the surface is called oriented. For surfaces embedded in Euclidean space, an orientation is specified by the choice of a continuously varying surface normal n at every point. If such a normal exists at all, then there are always two ways to select it: n or −n. More generally, an orientable surface admits exactly two orientations, and the distinction between an oriented surface and an orientable surface is subtle and frequently blurred.
Examples: Most surfaces we encounter in the physical world are orientable. Spheres, planes, and tori are orientable, for example, but Möbius strips, real projective planes, and Klein bottles are non-orientable. They, as visualized in 3 dimensions, all have just one side. The real projective plane and Klein bottle cannot be embedded in R3, only immersed with nice intersections. Note that locally an embedded surface always has two sides, so a near-sighted ant crawling on a one-sided surface would think there is an "other side". The essence of one-sidedness is that the ant can crawl from one side of the surface to the other without going through the surface or flipping over an edge, but simply by crawling far enough. In general, the property of being orientable is not equivalent to being two-sided; however, this holds when the ambient space is orientable. For example, a torus embedded in K2 × S1 can be one-sided. Orientation by triangulation: Any surface has a triangulation, a decomposition into triangles such that each edge on a triangle is glued to at most one other edge.
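The triangulation criterion can be made computational: orient each triangle as a cyclic ordering of its vertices, then try to flip triangles so every glued edge is traversed in opposite directions by its two neighbours. This is a sketch, not tied to any library; the function names are made up, and the Möbius example is the standard five-triangle triangulation of the Möbius band.

```python
# Sketch: testing orientability of a triangulated surface. The surface is
# orientable iff the triangles can be flipped so that every shared edge is
# traversed in opposite directions by its two neighbouring triangles.
# Orientations are propagated by breadth-first search; any contradiction
# means the surface is non-orientable.
from collections import deque

def oriented_edges(tri, flipped):
    a, b, c = tri
    edges = [(a, b), (b, c), (c, a)]
    return [(v, u) for u, v in edges] if flipped else edges

def is_orientable(triangles):
    """True iff the triangles admit a coherent orientation."""
    edge_to_tris = {}
    for i, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_tris.setdefault(frozenset(e), []).append(i)

    flip = {}                      # triangle index -> keep (False) / flip (True)
    for start in range(len(triangles)):
        if start in flip:
            continue
        flip[start] = False
        queue = deque([start])
        while queue:
            i = queue.popleft()
            for u, v in oriented_edges(triangles[i], flip[i]):
                for j in edge_to_tris[frozenset((u, v))]:
                    if j == i:
                        continue
                    # A coherent neighbour must traverse the edge as (v, u),
                    # so j needs flipping iff its stored order matches (u, v).
                    needs_flip = (u, v) in oriented_edges(triangles[j], False)
                    if j not in flip:
                        flip[j] = needs_flip
                        queue.append(j)
                    elif flip[j] != needs_flip:
                        return False   # contradiction: non-orientable
    return True

square = [(0, 1, 2), (0, 2, 3)]                                   # a disk
mobius = [(1, 2, 3), (2, 3, 4), (3, 4, 5), (4, 5, 1), (5, 1, 2)]  # Möbius band
print(is_orientable(square), is_orientable(mobius))  # True False
```

The Möbius band fails because propagating a coherent orientation all the way around the strip returns to the starting triangle with the opposite choice, which is precisely the mirror-image phenomenon described above.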
9.
Surface (topology)
–
In topology and differential geometry, a surface is a two-dimensional manifold, and, as such, may be an abstract surface not embedded in any Euclidean space. For example, the Klein bottle is a surface that cannot be represented in three-dimensional Euclidean space without introducing self-intersections. In mathematics generally, a surface is a geometrical shape that resembles a deformed plane. The most familiar examples arise as boundaries of solid objects in ordinary three-dimensional Euclidean space R3. The exact definition of a surface may depend on the context: typically, in algebraic geometry, a surface may cross itself, while in topology and differential geometry it may not. A surface is a two-dimensional space; this means that a moving point on a surface may move in two directions. In other words, around almost every point there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles a two-dimensional sphere. The concept of surface is widely used in physics, engineering, computer graphics, and many other disciplines, primarily in representing the surfaces of physical objects: for example, in analyzing the aerodynamic properties of an airplane, the central consideration is the flow of air along its surface. A topological surface is a topological space in which every point has an open neighbourhood homeomorphic to some open subset of the Euclidean plane E2. Such a neighborhood, together with the corresponding homeomorphism, is known as a chart, and it is through this chart that the neighborhood inherits the standard coordinates on the Euclidean plane. These coordinates are known as local coordinates, and these homeomorphisms lead us to describe surfaces as being locally Euclidean. In most writings on the subject, it is assumed, explicitly or implicitly, that as a topological space a surface is also nonempty, second countable, and Hausdorff. It is also often assumed that the surfaces under consideration are connected.
The rest of this article will assume, unless specified otherwise, that a surface is nonempty, Hausdorff, second countable, and connected. More generally, a surface with boundary is such a space in which every point has an open neighbourhood homeomorphic to an open subset of the closed upper half-plane; these homeomorphisms are also known as charts. The boundary of the upper half-plane is the x-axis. A point on the surface mapped via a chart to the x-axis is termed a boundary point, and the collection of such points is known as the boundary of the surface, which is necessarily a one-manifold, that is, a union of closed curves. On the other hand, a point mapped to above the x-axis is an interior point, and the collection of interior points is the interior of the surface, which is always non-empty. The closed disk is an example of a surface with boundary.
10.
Exterior algebra
–
In mathematics, the exterior algebra of a vector space is an associative algebra that contains the vector space and is such that the square of any element of the vector space is zero. The exterior algebra is universal in the sense that every embedding of the space or module in an algebra with these properties may be factored through the exterior algebra. The multiplication operation of the algebra is called the exterior product or wedge product. The term exterior comes from the product of two vectors not being a vector, while the term wedge comes from the shape of the multiplication symbol ∧. The exterior algebra is also named Grassmann algebra, after Hermann Grassmann. The exterior product should not be confused with the outer product, which is the tensor product of vectors. The exterior product of two vectors is called a 2-blade, which is in turn a bivector; more generally, the product of any number k of vectors is sometimes called a k-blade. Given a vector space V, its exterior algebra is denoted Λ(V); the vector subspace generated by the k-blades is known as the kth exterior power of V, denoted Λk(V). The exterior algebra Λ(V) is the sum of the Λk(V) as modules, with the exterior product as additional structure. The exterior product makes the exterior algebra a graded algebra, and it is alternating. The exterior algebra is used in geometry to study areas, volumes, and their higher-dimensional analogs. An exterior algebra carries a naturally induced structure of a bialgebra; in this context, a Euclidean structure on V induces on the exterior algebra the richer structure of a Hopf algebra. The exterior algebra is also used in multivariable calculus, as the differential forms of higher degree belong to the exterior algebra of the differential forms of degree one. The Cartesian plane R2 is a vector space equipped with a basis consisting of a pair of unit vectors e1 = (1, 0) and e2 = (0, 1). Suppose that v = (a, b) = a e1 + b e2 and w = (c, d) = c e1 + d e2 are a pair of vectors in R2.
There is a unique parallelogram having v and w as two of its sides. The area of this parallelogram is given by the standard determinant formula, Area = |ad − bc|. Computing the exterior product, v ∧ w = (a e1 + b e2) ∧ (c e1 + d e2) = (ad − bc) e1 ∧ e2, since the product is alternating (e1 ∧ e1 = e2 ∧ e2 = 0 and e2 ∧ e1 = −e1 ∧ e2). Note that the coefficient in this last expression is precisely the determinant of the matrix [v w]. The fact that this may be positive or negative has the intuitive meaning that v and w may be oriented in a counterclockwise or clockwise sense as the vertices of the parallelogram they define. Such an area is called the signed area of the parallelogram; the absolute value of the signed area is the ordinary area.
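The signed-area interpretation is easy to demonstrate; this sketch (the helper name is made up) computes the coefficient of e1 ∧ e2 for a few vector pairs.

```python
# Sketch: the 2-D wedge product as a signed area. For v = (a, b) and
# w = (c, d), v ∧ w = (ad − bc) e1 ∧ e2, so the coefficient is the signed
# area of the parallelogram spanned by v and w.

def wedge_2d(v, w):
    """Coefficient of e1 ∧ e2 in v ∧ w, i.e. the signed area det [v w]."""
    return v[0] * w[1] - v[1] * w[0]

v, w = (2.0, 0.0), (1.0, 3.0)
print(wedge_2d(v, w))   # 6.0: counterclockwise orientation, area 6
print(wedge_2d(w, v))   # -6.0: swapping the factors flips the sign
print(wedge_2d(v, v))   # 0.0: the wedge of a vector with itself vanishes
```

The three printed values illustrate the alternating property (swapping factors negates the result, a repeated factor gives zero), which is exactly the defining property of the exterior product stated in the lead.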
11.
Orientation (vector space)
–
In linear algebra, the notion of orientation makes sense in arbitrary finite dimension. In this setting, the orientation of an ordered basis is a kind of asymmetry that makes a reflection impossible to replicate by means of a simple rotation. As a result, in three-dimensional Euclidean space, the two possible basis orientations are called right-handed and left-handed. An orientation on a real vector space is the arbitrary choice of which ordered bases are positively oriented and which are negatively oriented. In three-dimensional Euclidean space, right-handed bases are typically declared to be positively oriented. A vector space with an orientation selected is called an oriented vector space, while one not having an orientation selected is called unoriented. Let V be a finite-dimensional real vector space and let b1 and b2 be two ordered bases for V. It is a standard result in linear algebra that there exists a unique linear transformation A : V → V that takes b1 to b2. The bases b1 and b2 are said to have the same orientation if A has positive determinant. The property of having the same orientation defines an equivalence relation on the set of all ordered bases for V. If V is non-zero, there are two equivalence classes determined by this relation. An orientation on V is an assignment of +1 to one equivalence class and −1 to the other. Every ordered basis lives in one equivalence class or another; thus any choice of an ordered basis for V determines an orientation. For example, the standard basis on Rn provides a standard orientation on Rn, and any choice of an isomorphism between V and Rn will then provide an orientation on V. The ordering of elements in a basis is crucial: two bases with a different ordering will differ by some permutation, and they will have the same or opposite orientations according to whether the signature of this permutation is +1 or −1. This is because the determinant of a permutation matrix is equal to the signature of the associated permutation. Similarly, let A be a non-singular linear mapping of the vector space Rn to Rn.
This mapping is orientation-preserving if its determinant is positive. A zero-dimensional vector space has only a single point, the zero vector. Consequently, the only basis of a zero-dimensional vector space is the empty set ∅. Therefore, there is a single equivalence class of ordered bases, namely the class whose sole member is the empty set.
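The orientation comparison described above reduces to a determinant sign; as a sketch (function names are illustrative), the following compares ordered bases of R3 with the standard orientation.

```python
# Sketch: two ordered bases of R^3 have the same orientation iff the
# change-of-basis matrix has positive determinant. Listing basis vectors as
# matrix rows is harmless here, since transposition preserves the determinant.

def det3(m):
    """Determinant of a 3x3 matrix given as three rows."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def same_orientation_as_standard(basis):
    """Compare an ordered basis of R^3 with the standard orientation."""
    return det3(basis) > 0

standard = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
swapped  = [(0, 1, 0), (1, 0, 0), (0, 0, 1)]   # transpose two basis vectors

print(same_orientation_as_standard(standard))  # True
print(same_orientation_as_standard(swapped))   # False: odd permutation
```

Swapping two basis vectors is an odd permutation with signature −1, which is why the second basis lands in the opposite equivalence class, matching the permutation-signature rule stated in the text.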
12.
Differential of a function
–
In calculus, the differential represents the principal part of the change in a function y = f(x) with respect to changes in the independent variable. The differential dy is defined by dy = f′(x) dx, where f′(x) is the derivative of f with respect to x; one also writes df = f′(x) dx. The precise meaning of the variables dy and dx depends on the context of the application. Traditionally, the variables dx and dy were considered to be very small (infinitesimal), and this interpretation is made rigorous in non-standard analysis. The quotient dy/dx is then not infinitely small; rather, it is a real number. The use of infinitesimals in this form was widely criticized, for instance by the famous pamphlet The Analyst by Bishop Berkeley. Augustin-Louis Cauchy defined the differential without appeal to the atomism of Leibniz's infinitesimals. In physical treatments, such as those applied to the theory of thermodynamics, the infinitesimal view still prevails. Courant & John reconcile the use of infinitesimal differentials with the mathematical impossibility of them as follows: the differentials represent finite non-zero values that are smaller than the degree of accuracy required for the purpose for which they are intended; thus physical infinitesimals need not appeal to a corresponding mathematical infinitesimal in order to have a precise sense. Following twentieth-century developments in mathematical analysis and differential geometry, it became clear that the notion of the differential of a function could be extended in a variety of ways. In real analysis, it is desirable to deal directly with the differential as the principal part of the increment of a function. This leads directly to the notion that the differential of a function at a point is a linear functional of an increment Δx. This approach allows the differential to be developed for a variety of more sophisticated spaces. In non-standard calculus, differentials are regarded as infinitesimals, which can themselves be put on a rigorous footing.
The differential is defined in modern treatments of calculus as follows. The differential of a function f of a single real variable x is the function df of two independent real variables x and Δx given by df(x, Δx) := f′(x) Δx. One or both of the arguments may be suppressed: one may see df(x) or simply df. If y = f(x), the differential may also be written as dy. For a function y = f(x1, …, xn) of several variables, the partial differential of y with respect to x1 is ∂y/∂x1 dx1, involving the partial derivative of y with respect to x1. The total differential is then defined as dy = ∂y/∂x1 Δx1 + ⋯ + ∂y/∂xn Δxn. Since, with this definition, dxi = Δxi, the total differential may equally be written dy = ∂y/∂x1 dx1 + ⋯ + ∂y/∂xn dxn. In measurement, the total differential is used in estimating the error Δf of a function f based on the errors Δx, Δy, … of the parameters x, y, …. As the errors are assumed to be independent, the analysis describes the worst-case scenario; the absolute values of the component errors are used, because after simple computation the derivative may have a negative sign.
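The error-estimation use of the total differential can be illustrated with made-up measurements; this sketch bounds the error of f(x, y) = xy from the errors of x and y.

```python
# Sketch: worst-case error estimate via the total differential. For
# f(x, y) = x*y with (made-up) measurements x = 3 ± 0.1 and y = 4 ± 0.2:
#   Δf ≈ |∂f/∂x| Δx + |∂f/∂y| Δy = |y| Δx + |x| Δy.

def worst_case_error(x, y, dx, dy):
    """Total-differential error bound for f(x, y) = x * y."""
    df_dx = y            # ∂f/∂x for f = x*y
    df_dy = x            # ∂f/∂y for f = x*y
    return abs(df_dx) * dx + abs(df_dy) * dy

estimate = worst_case_error(3.0, 4.0, 0.1, 0.2)
actual = (3.1 * 4.2) - (3.0 * 4.0)   # deviation when both errors are maximal
print(estimate, actual)              # 1.0 vs 1.02: the linear estimate is close
```

The small gap between 1.0 and 1.02 is the second-order term Δx·Δy = 0.02, which the differential (being the principal, linear part of the increment) deliberately ignores.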
13.
Curl (mathematics)
–
In vector calculus, the curl is a vector operator that describes the infinitesimal rotation of a 3-dimensional vector field. At every point in the field, the curl is represented by a vector whose attributes characterize the rotation at that point. The direction of the curl is the axis of rotation, as determined by the right-hand rule, and its magnitude is the magnitude of the rotation. If the vector field represents the flow velocity of a moving fluid, then the curl is the circulation density of the fluid. A vector field whose curl is zero is called irrotational. The curl is a form of differentiation for vector fields. The alternative terminology rotor or rotational and the alternative notations rot F and ∇ × F are often used for curl F; this is a similar phenomenon as in the 3-dimensional cross product, and the connection is reflected in the notation ∇ × for the curl. The name curl was first suggested by James Clerk Maxwell in 1871. The curl of a vector field F, denoted by curl F, ∇ × F, or rot F, at a point is defined in terms of its projection onto various lines through the point. As such, the curl operator maps continuously differentiable functions f: ℝ³ → ℝ³ to continuous functions g: ℝ³ → ℝ³; in fact, it maps Cᵏ functions in ℝ³ to Cᵏ⁻¹ functions in ℝ³. Implicitly, curl is defined by (∇ × F) · n̂ = lim A→0 (1/|A|) ∮C F · dr, where ∮C F · dr is a line integral along the boundary of the area in question, and |A| is the magnitude of the area. Note that the equation for each component, (∇ × F)ₖ, can be obtained by exchanging each occurrence of a subscript 1, 2, 3 in cyclic permutation: 1→2, 2→3, and 3→1. If (x₁, x₂, x₃) are the Cartesian coordinates and (u₁, u₂, u₃) are the curvilinear coordinates, then hᵢ = √((∂x₁/∂uᵢ)² + (∂x₂/∂uᵢ)² + (∂x₃/∂uᵢ)²) is the length of the coordinate vector corresponding to uᵢ. The remaining two components of curl result from cyclic permutation of indices: 3,1,2 → 1,2,3 → 2,3,1. Suppose the vector field describes the velocity field of a fluid flow and a small ball is located within the fluid; if the ball has a rough surface, the fluid flowing past it will make it rotate.
The rotation axis points in the direction of the curl of the field at the centre of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point. The notation ∇ × F has its origins in the similarities to the 3-dimensional cross product, and such notation involving operators is common in physics and algebra. However, in certain coordinate systems, such as polar-toroidal coordinates, the notation ∇ × F can be misleading. In Cartesian coordinates it expands as (∂F₃/∂x₂ − ∂F₂/∂x₃) i + (∂F₁/∂x₃ − ∂F₃/∂x₁) j + (∂F₂/∂x₁ − ∂F₁/∂x₂) k. Although expressed in terms of coordinates, the result is invariant under proper rotations of the coordinate axes. Equivalently, ∇ × F = eₖ εₖₗₘ ∇ₗ Fₘ (summation implied), where the eₖ are the coordinate vector fields and εₖₗₘ is the Levi-Civita symbol. Equivalently, using the exterior derivative, the curl can be expressed as ∇ × F = (⋆(dF♭))♯. Here ♭ and ♯ are the musical isomorphisms and ⋆ is the Hodge star operator.
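The component formula for the curl can be checked with finite differences. The sketch below (an illustration, not the article's notation) approximates each partial derivative by a central difference and evaluates the curl of the rigid-rotation field F = (−y, x, 0), whose curl is (0, 0, 2) everywhere:

```python
def curl(F, p, h=1e-6):
    # Central-difference approximation of ∇×F at point p = (x, y, z);
    # F maps (x, y, z) -> (Fx, Fy, Fz).
    def d(i, j):
        # ∂F_i/∂x_j at p
        q1, q2 = list(p), list(p)
        q1[j] += h
        q2[j] -= h
        return (F(*q1)[i] - F(*q2)[i]) / (2 * h)
    return (d(2, 1) - d(1, 2),   # ∂F3/∂x2 − ∂F2/∂x3
            d(0, 2) - d(2, 0),   # ∂F1/∂x3 − ∂F3/∂x1
            d(1, 0) - d(0, 1))   # ∂F2/∂x1 − ∂F1/∂x2

# Rigid rotation about the z-axis: curl is (0, 0, 2) at every point
F = lambda x, y, z: (-y, x, 0.0)
print(curl(F, (0.3, -0.7, 1.2)))
```

The factor 2 matches the statement above that the angular speed of a small ball in the flow is half the magnitude of the curl.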
14.
Fundamental theorem of calculus
–
The fundamental theorem of calculus is a theorem that links the concept of the derivative of a function with the concept of the function's integral. The first part of the theorem guarantees the existence of antiderivatives for continuous functions. The second part of the theorem has key practical applications because it simplifies the computation of definite integrals. The fundamental theorem of calculus relates differentiation and integration, showing that these two operations are essentially inverses of one another. Before the discovery of this theorem, it was not recognized that the two operations were related. Ancient Greek mathematicians knew how to compute area via infinitesimals, an operation that we would now call integration. The first published statement and proof of a rudimentary form of the fundamental theorem, strongly geometric in character, was by James Gregory. Isaac Barrow proved a more generalized version of the theorem, while his student Isaac Newton completed the development of the surrounding mathematical theory. Gottfried Leibniz systematized the knowledge into a calculus for infinitesimal quantities. For a continuous function y = f(x) whose graph is plotted as a curve, each value of x has a corresponding area function A(x), representing the area beneath the curve between 0 and x. The function A(x) may not be known, but it is given that it represents the area under the curve. The area under the curve between x and x + h could be computed by finding the area between 0 and x + h, then subtracting the area between 0 and x; in other words, the area of this "sliver" would be A(x + h) − A(x). There is another way to estimate the area of this same sliver. As shown in the accompanying figure, h is multiplied by f(x) to find the area of a rectangle that is approximately the same size as this sliver. So A(x + h) − A(x) ≈ f(x)·h. In fact, this becomes a perfect equality if we add the red portion of the excess area shown in the diagram.
So A(x + h) − A(x) = f(x)·h + (Red Excess). Rearranging terms, f(x) = (A(x + h) − A(x))/h − (Red Excess)/h. As h approaches 0 in the limit, the last fraction can be shown to go to zero. This is true because the area of the red portion of the excess region is less than or equal to the area of the tiny black-bordered rectangle. More precisely, |f(x) − (A(x + h) − A(x))/h| = |Red Excess|/h ≤ (h·|f(x + h) − f(x)|)/h = |f(x + h) − f(x)|. By the continuity of f, the latter expression tends to zero as h does. Therefore, the left-hand side tends to zero as h does; that is, the derivative of the area function A(x) exists and is the original function f(x), so the area function is simply an antiderivative of the original function. Computing the derivative of a function and "finding the area" under its curve are opposite operations, and this is the crux of the Fundamental Theorem of Calculus. Intuitively, the theorem states that the sum of infinitesimal changes in a quantity over time adds up to the net change in the quantity.
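The claim A′(x) = f(x) can be checked numerically. In this sketch (an illustration with f = cos; names are not from the article), the area function is approximated by a Riemann sum and differentiated by a difference quotient:

```python
import math

def A(x, n=100000):
    # Midpoint Riemann sum approximating the area function ∫_0^x cos(t) dt
    dt = x / n
    return sum(math.cos((k + 0.5) * dt) for k in range(n)) * dt

x, h = 1.0, 1e-4
derivative_of_A = (A(x + h) - A(x)) / h   # difference quotient for A'(x)
print(derivative_of_A, math.cos(x))       # the two values nearly agree
```

As h shrinks (and n grows), the difference quotient of the area function converges to f(x) itself, which is the first part of the theorem.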
15.
Divergence theorem
–
More precisely, the divergence theorem states that the outward flux of a vector field through a closed surface is equal to the volume integral of the divergence over the region inside the surface. Intuitively, it states that the sum of all sources (with sinks counted negatively) gives the net flow out of a region. The divergence theorem is an important result for the mathematics of physics and engineering, in particular in electrostatics and fluid dynamics. In physics and engineering, the divergence theorem is usually applied in three dimensions; however, it generalizes to any number of dimensions. In one dimension, it is equivalent to the fundamental theorem of calculus; in two dimensions, it is equivalent to Green's theorem. The theorem is a special case of the more general Stokes' theorem. If a fluid is flowing in some area, then the rate at which fluid flows out of a region within that area can be calculated by adding up the sources inside the region. The fluid flow is represented by a vector field, and the vector field's divergence at a given point describes the strength of the source or sink there. So integrating the field's divergence over the interior of the region should equal the integral of the field over the region's boundary; the divergence theorem says that this is true. Suppose V is a subset of Rⁿ which is compact and has a piecewise smooth boundary S. If F is a continuously differentiable vector field defined on a neighborhood of V, then ∭V (∇ · F) dV = ∯S F · n dS. The left side is a volume integral over the volume V; the right side is the surface integral over the boundary of the volume V. The closed manifold ∂V is quite generally the boundary of V oriented by outward-pointing normals; the symbol ∯ within the two integrals stresses once more that ∂V is a closed surface. By replacing F in the divergence theorem with specific forms, other useful identities can be derived. With F → gF for a scalar function g and a vector field F, ∭V [F · ∇g + g(∇ · F)] dV = ∯S gF · n dS.
A special case of this is F = ∇f, in which case the theorem is the basis for Green's identities. With F → F × G for two vector fields F and G, ∭V [G · (∇ × F) − F · (∇ × G)] dV = ∯S (F × G) · n dS. With F → fc for a scalar function f and vector field c, ∭V c · ∇f dV = ∯S (fc) · n dS − ∭V f (∇ · c) dV. The last term on the right vanishes for constant c or any divergence-free vector field. With F → c × F for a vector field F and constant vector c, ∭V c · (∇ × F) dV = ∯S (F × c) · n dS. Suppose we wish to evaluate ∯S F · n dS, where S is the unit sphere x² + y² + z² = 1; by the divergence theorem this equals the integral of ∇ · F over the unit ball W. Since the function y is positive in one hemisphere of W and negative in the other, in an equal and opposite way, its integral over W vanishes; the same is true for z: ∭W y dV = ∭W z dV = 0.
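The equality of flux and volume integral can be seen numerically. In the sketch below (an illustrative check, not the article's example), F = (xy, yz, zx) is taken on the unit cube [0,1]³: its divergence is y + z + x, so the volume integral is 3/2, and the outward flux through the six faces is computed by the midpoint rule (the three faces touching the origin contribute nothing, since F · n vanishes there):

```python
# Divergence theorem check for F = (xy, yz, zx) on the unit cube:
# ∭ div F dV = ∭ (x + y + z) dV = 3/2, so the outward flux must equal 3/2.
n = 400
h = 1.0 / n
pts = [(k + 0.5) * h for k in range(n)]   # midpoints of a face grid

flux = 0.0
dA = h * h
for u in pts:
    for v in pts:
        flux += u * dA   # face x = 1: F·n = Fx = 1·y, with y = u
        flux += u * dA   # face y = 1: F·n = Fy = 1·z, with z = u
        flux += u * dA   # face z = 1: F·n = Fz = 1·x, with x = u
print(flux)  # ≈ 1.5
```

Each contributing face integrates a coordinate over the unit square, giving 1/2 per face and 3/2 in total, matching the volume integral.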
16.
Green's theorem
–
In mathematics, Green's theorem gives the relationship between a line integral around a simple closed curve C and a double integral over the plane region D bounded by C. It is named after George Green and is the two-dimensional special case of the more general Kelvin–Stokes theorem. Let C be a positively oriented, piecewise smooth, simple closed curve in a plane, and let D be the region it bounds. If L and M are functions of (x, y) defined on an open region containing D and have continuous partial derivatives there, then ∮C (L dx + M dy) = ∬D (∂M/∂x − ∂L/∂y) dA. In physics, Green's theorem is used to solve two-dimensional flow integrals. In plane geometry, and in particular area surveying, Green's theorem can be used to determine the area of plane figures. The following is a proof of half of the theorem for the simplified area D, a type I region where C1 and C3 are curves connected by vertical lines. A similar proof exists for the other half of the theorem when D is a type II region where C2 and C4 are curves connected by horizontal lines. Putting these two together, the theorem is thus proven for regions of type III (regions that are both type I and type II). The general case can then be deduced from this special case by decomposing D into a set of type III regions. If it can be shown that ∮C L dx = ∬D (−∂L/∂y) dA and ∮C M dy = ∬D (∂M/∂x) dA are true, Green's theorem follows; the first can be proved easily for regions of type I, and the second for regions of type II. Green's theorem then follows for regions of type III. Assume region D is a type I region and can thus be characterized, as pictured on the right, by D = {(x, y) | a ≤ x ≤ b, g1(x) ≤ y ≤ g2(x)}, where g1 and g2 are continuous functions on [a, b]. Compute the double integral: ∬D ∂L/∂y dA = ∫a^b ∫g1(x)^g2(x) ∂L/∂y dy dx = ∫a^b [L(x, g2(x)) − L(x, g1(x))] dx. Now compute the line integral. C can be rewritten as the union of four curves: C1, C2, C3, C4. With C1, use the parametric equations x = x, y = g1(x), a ≤ x ≤ b. Then ∫C1 L dx = ∫a^b L(x, g1(x)) dx. With C3, use the parametric equations x = x, y = g2(x), a ≤ x ≤ b. Then ∫C3 L dx = −∫−C3 L dx = −∫a^b L(x, g2(x)) dx. The integral over C3 is negated because it goes in the negative direction from b to a, as C is oriented positively.
On C2 and C4, x remains constant, meaning ∫C4 L dx = ∫C2 L dx = 0. Combining these results, we get ∮C L dx = ∬D (−∂L/∂y) dA for regions of type I. A similar treatment yields the companion identity for regions of type II; putting the two together, we get the result for regions of type III. Write F for the vector-valued function F = (L, M). Start with the left side of Green's theorem: ∮C (L dx + M dy) = ∮C (L, M) · (dx, dy) = ∮C F · dr.
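The surveying application mentioned above can be made concrete: with L = −y and M = x, the double-integral side of Green's theorem is 2·Area(D), so the line integral ∮C (−y dx + x dy) recovers twice the enclosed area. The sketch below (an illustration, assuming the unit circle as C) evaluates that line integral along a fine polygonal approximation and compares it with 2π:

```python
import math

# Green's theorem with L = -y, M = x: ∮_C (L dx + M dy) = 2 · Area(D).
# For the unit disk the expected value is 2π.
n = 100000
line_integral = 0.0
for k in range(n):
    t0 = 2 * math.pi * k / n
    t1 = 2 * math.pi * (k + 1) / n
    x0, y0 = math.cos(t0), math.sin(t0)
    x1, y1 = math.cos(t1), math.sin(t1)
    # -y dx + x dy along one short edge, by the trapezoid rule
    line_integral += -(y0 + y1) / 2 * (x1 - x0) + (x0 + x1) / 2 * (y1 - y0)
print(line_integral, 2 * math.pi)
```

Per edge the trapezoid rule reduces to x0·y1 − x1·y0, so the loop is exactly the shoelace formula for the polygon's doubled area.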
17.
Stokes' theorem
–
In vector calculus, and more generally differential geometry, Stokes' theorem, discovered in its modern form by Élie Cartan and first published in 1945, is a statement about the integration of differential forms on manifolds; this modern form of Stokes' theorem is a vast generalization of a classical result. Lord Kelvin communicated the classical theorem to George Stokes in a letter dated July 2, 1850. Stokes set the theorem as a question on the 1854 Smith's Prize exam, which led to the result bearing his name, even though it was actually first published by Hermann Hankel in 1861. This classical statement, along with the divergence theorem, the fundamental theorem of calculus, and Green's theorem, is a special case of the general result. The fundamental theorem of calculus states that ∫a^b f(x) dx = F(b) − F(a) for an antiderivative F of f. By the choice of F, dF/dx = f; in the parlance of differential forms, this is saying that f(x) dx is the exterior derivative of the 0-form (i.e. function) F, in other words, that dF = f dx. The general Stokes' theorem applies to higher differential forms ω instead of just 0-forms such as F. A closed interval [a, b] is a simple example of a one-dimensional manifold with boundary; its boundary is the set consisting of the two points a and b. Integrating f over the interval may be generalized to integrating forms on a higher-dimensional manifold. Two technical conditions are needed: the manifold has to be orientable, and the form has to be compactly supported. The two points a and b form the boundary of the closed interval. More generally, Stokes' theorem applies to oriented manifolds M with boundary; the boundary ∂M of M is itself a manifold and inherits a natural orientation from that of M. For example, the natural orientation of the interval gives an orientation of the two boundary points. Intuitively, a inherits the opposite orientation as b, as they are at opposite ends of the interval; so, integrating F over the two boundary points a, b is taking the difference F(b) − F(a). In even simpler terms, one can consider that points can be thought of as the boundaries of curves, so the fundamental theorem reads ∫[a,b] f(x) dx = ∫[a,b] dF = ∫∂[a,b] F = F(b) − F(a).
Let Ω be an oriented smooth manifold with boundary of dimension n, and let α be a smooth n-form on Ω. First, suppose that α is compactly supported in the domain of a single oriented coordinate chart φ. In this case, we define the integral of α over Ω as ∫Ω α = ∫ (φ⁻¹)∗α, i.e. via the pullback of α to Rⁿ. This quantity is well-defined; that is, it does not depend on the choice of the coordinate charts, nor the partition of unity. Stokes' theorem reads: if ω is an (n − 1)-form with compact support on Ω, then ∫Ω dω = ∮∂Ω ω. Here d is the exterior derivative, which is defined using the manifold structure only. On the right-hand side, a circle is used within the integral sign to stress the fact that the (n − 1)-manifold ∂Ω has no boundary. The right-hand side of the equation is often used to formulate integral laws.
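One consequence of the theorem is surface independence: the surface integral of a curl depends only on the boundary curve. The sketch below (an illustrative numerical check, not from the article) takes F = (−y, x, 0), with curl (0, 0, 2), and integrates curl F · n over the upper unit hemisphere, whose boundary is the unit circle; the result should equal the line integral ∮ F · dr = 2π around that circle:

```python
import math

# Kelvin–Stokes check: for F = (-y, x, 0), curl F = (0, 0, 2).
# Over the upper unit hemisphere, curl F · n = 2z = 2cos(θ) and
# dS = sin(θ) dθ dφ; the integrand is φ-independent, so φ gives a 2π factor.
n = 2000
dtheta = (math.pi / 2) / n
surface_integral = 0.0
for i in range(n):
    theta = (i + 0.5) * dtheta   # midpoint rule in the polar angle
    surface_integral += 2 * math.cos(theta) * math.sin(theta) * dtheta * 2 * math.pi
print(surface_integral, 2 * math.pi)
```

The flat unit disk with the same boundary gives the same value 2π, which is the surface-independence phenomenon the general theorem explains.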
18.
Topology
–
In mathematics, topology is concerned with the properties of space that are preserved under continuous deformations, such as stretching, crumpling and bending, but not tearing or gluing. This can be studied by considering a collection of subsets, called open sets. Important topological properties include connectedness and compactness. Topology developed as a field of study out of geometry and set theory, through analysis of concepts such as space, dimension, and transformation. Such ideas go back to Gottfried Leibniz, who in the 17th century envisioned the geometria situs. Leonhard Euler's Seven Bridges of Königsberg Problem and Polyhedron Formula are arguably the field's first theorems. The term topology was introduced by Johann Benedict Listing in the 19th century; by the middle of the 20th century, topology had become a major branch of mathematics. General topology defines the basic notions used in all branches of topology. Algebraic topology tries to measure degrees of connectivity using algebraic constructs such as homology. Differential topology is the field dealing with differentiable functions on differentiable manifolds; it is closely related to differential geometry, and together they make up the geometric theory of differentiable manifolds. Geometric topology primarily studies manifolds and their embeddings in other manifolds; a particularly active area is low-dimensional topology, which studies manifolds of four or fewer dimensions. This includes knot theory, the study of mathematical knots. Topology, as a well-defined mathematical discipline, originates in the early part of the twentieth century, but some isolated results can be traced back several centuries. Among these are certain questions in geometry investigated by Leonhard Euler, whose 1736 paper on the Seven Bridges of Königsberg is regarded as one of the first practical applications of topology.
On 14 November 1750 Euler wrote to a friend that he had realised the importance of the edges of a polyhedron; this led to his polyhedron formula, V − E + F = 2, relating the numbers of vertices, edges and faces. Some authorities regard this analysis as the first theorem of topology, signalling the birth of the subject. Further contributions were made by Augustin-Louis Cauchy, Ludwig Schläfli, Johann Benedict Listing, Bernhard Riemann and Enrico Betti. Listing introduced the term Topologie in Vorstudien zur Topologie, written in his native German, in 1847; the term topologist, in the sense of a specialist in topology, was used in 1905 in the magazine Spectator. Their work was corrected, consolidated and greatly extended by Henri Poincaré; in 1895 he published his ground-breaking paper Analysis Situs, which introduced the concepts now known as homotopy and homology, which are now considered part of algebraic topology. Unifying the work on function spaces of Georg Cantor, Vito Volterra, Cesare Arzelà, Jacques Hadamard, Giulio Ascoli and others, Maurice Fréchet introduced the metric space in 1906. A metric space is now considered a special case of a general topological space. In 1914, Felix Hausdorff coined the term topological space and gave the definition for what is now called a Hausdorff space. Currently, a topological space is a slight generalization of Hausdorff spaces, given in 1922 by Kazimierz Kuratowski.
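Euler's polyhedron formula is easy to check for concrete solids. This small sketch (vertex/edge/face counts are standard facts, not from the article) verifies V − E + F = 2 for the five Platonic solids:

```python
# Euler's polyhedron formula V - E + F = 2, checked for the Platonic solids.
# Each entry is (V, E, F): vertices, edges, faces.
solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
for name, (V, E, F) in solids.items():
    print(name, V - E + F)   # 2 in every case
```

The common value 2 is the Euler characteristic of the sphere; surfaces of other topological types (e.g. the torus) give different values, which is why the formula counts as a topological theorem.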
19.
De Rham cohomology
–
It is a cohomology theory based on the existence of differential forms with prescribed properties. The de Rham complex is the cochain complex of exterior differential forms on some smooth manifold M: 0 → Ω⁰ →d Ω¹ →d Ω² →d Ω³ → ⋯, where Ω⁰ is the space of smooth functions on M, Ω¹ is the space of 1-forms, and so forth. Forms in the image of d are called exact, and forms in the kernel of d are called closed; since d² = 0, every exact form is closed. The converse, however, is not true in general: closed forms need not be exact. A simple but significant case is the 1-form of angle measure on the unit circle, which is closed but not the differential of any globally defined function; we can, however, change the topology by removing just one point. The idea of de Rham cohomology is to classify the different types of closed forms on a manifold. One performs this classification by saying that two closed forms α, β ∈ Ωᵏ are cohomologous if they differ by an exact form; this classification induces an equivalence relation on the space of closed forms in Ωᵏ. One then defines the k-th de Rham cohomology group H^k_dR to be the set of these equivalence classes, that is, the set of closed forms in Ωᵏ modulo the exact forms. Note that, for any manifold M with n connected components, H⁰_dR(M) ≅ Rⁿ; this follows from the fact that any smooth function on M with zero derivative is constant on each of the connected components of M. One may often find the general de Rham cohomologies of a manifold using this fact about the zero cohomology. Another useful fact is that the de Rham cohomology is a homotopy invariant. For the n-sphere: let n > 0, m ≥ 0, and I be an open real interval. Then H^k_dR(Sⁿ × I^m) ≃ R if k = 0 or k = n, and 0 otherwise. Similarly, allowing n > 0 here, we obtain for the n-torus H^k_dR(Tⁿ) ≃ R^(n choose k). Punctured Euclidean space is simply Euclidean space with the origin removed. Stokes' theorem is an expression of duality between de Rham cohomology and the homology of chains: it says that the pairing of differential forms and chains, via integration, gives a homomorphism from de Rham cohomology H^k_dR to singular cohomology groups H^k. De Rham's theorem, proved by Georges de Rham in 1931, states that for a smooth manifold M this map is an isomorphism between de Rham cohomology and singular cohomology.
The wedge product endows the direct sum of these groups with a ring structure. A further result of the theorem is that the two cohomology rings are isomorphic, where the analogous product on singular cohomology is the cup product. Let Ωᵏ denote the sheaf of germs of k-forms on M. By the Poincaré lemma, the following sequence of sheaves is exact: 0 → R → Ω⁰ →d Ω¹ →d Ω² →d ⋯ →d Ω^m → 0. This long sequence breaks up into short exact sequences 0 → dΩ^(k−1) ⊂ Ωᵏ →d dΩᵏ → 0, each of which induces a long exact sequence in cohomology.
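The circle's angle form mentioned above, ω = (−y dx + x dy)/(x² + y²) on the punctured plane, can be probed numerically. The sketch below (an illustration; the tolerances and point choices are arbitrary) checks that ω is closed (∂M/∂x − ∂L/∂y = 0) yet has a nonzero integral around the unit circle, so it cannot be exact:

```python
import math

# The angle form ω = L dx + M dy with L = -y/(x²+y²), M = x/(x²+y²)
def L(x, y): return -y / (x * x + y * y)
def M(x, y): return  x / (x * x + y * y)

# Closedness: ∂M/∂x − ∂L/∂y ≈ 0, by central differences at a sample point
h = 1e-5
x, y = 0.6, -0.8
dM_dx = (M(x + h, y) - M(x - h, y)) / (2 * h)
dL_dy = (L(x, y + h) - L(x, y - h)) / (2 * h)
print(dM_dx - dL_dy)  # ≈ 0

# Non-exactness: ∮ ω around the unit circle is 2π, not 0
n = 100000
integral = 0.0
for k in range(n):
    t0, t1 = 2 * math.pi * k / n, 2 * math.pi * (k + 1) / n
    x0, y0 = math.cos(t0), math.sin(t0)
    x1, y1 = math.cos(t1), math.sin(t1)
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    integral += L(xm, ym) * (x1 - x0) + M(xm, ym) * (y1 - y0)
print(integral)  # ≈ 2π
```

A nonzero period over a cycle is exactly what a nontrivial de Rham class detects: H¹ of the punctured plane is one-dimensional, generated by this form.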
20.
Differentiable manifold
–
In mathematics, a differentiable manifold is a type of manifold that is locally similar enough to a linear space to allow one to do calculus. Any manifold can be described by a collection of charts, also known as an atlas. One may then apply ideas from calculus while working within the individual charts, since each chart lies within a linear space to which the usual rules of calculus apply. If the charts are suitably compatible, then computations done in one chart are valid in any other differentiable chart. In formal terms, a differentiable manifold is a topological manifold with a globally defined differential structure. Any topological manifold can be given a differential structure locally by using the homeomorphisms in its atlas and the standard differential structure on a linear space. In other words, where the domains of charts overlap, the coordinates defined by each chart are required to be differentiable with respect to the coordinates defined by every chart in the atlas. The maps that relate the coordinates defined by the various charts to one another are called transition maps. Differentiability means different things in different contexts, including continuously differentiable, k times differentiable, and smooth. Furthermore, the ability to induce such a differential structure on an abstract space allows one to extend the definition of differentiability to spaces without global coordinate systems. A differential structure allows one to define the globally differentiable tangent space and differentiable functions. Differentiable manifolds are very important in physics: special kinds of differentiable manifolds form the basis for physical theories such as classical mechanics and general relativity. It is possible to develop a calculus for differentiable manifolds, and this leads to such mathematical machinery as the exterior calculus.
The study of calculus on differentiable manifolds is known as differential geometry. The emergence of differential geometry as a distinct discipline is generally credited to Carl Friedrich Gauss and Bernhard Riemann. Riemann first described manifolds in his famous habilitation lecture before the faculty at Göttingen, and these ideas found a key application in Einstein's theory of general relativity and its underlying equivalence principle. A modern definition of a 2-dimensional manifold was given by Hermann Weyl in his 1913 book on Riemann surfaces; the widely accepted general definition of a manifold in terms of an atlas is due to Hassler Whitney. A presentation of a manifold is a second countable Hausdorff space that is locally homeomorphic to a linear space, together with an atlas. This formalizes the notion of patching together pieces of a space to make a manifold; the manifold produced also contains the data of how it has been patched together. However, different atlases may produce the same manifold; a manifold does not come with a preferred atlas. Thus, one defines a manifold to be a space as above with an equivalence class of atlases. There are a number of different types of manifolds, depending on the precise differentiability requirements on the transition functions. Some common examples include the following: a differentiable manifold is a topological manifold equipped with an equivalence class of atlases whose transition maps are all differentiable.
21.
Vector field
–
In vector calculus, a vector field is an assignment of a vector to each point in a subset of space. A vector field in the plane can be visualised as a collection of arrows, each with a given magnitude and direction, attached to the points of the plane. The elements of differential and integral calculus extend naturally to vector fields. Vector fields can usefully be thought of as representing the velocity of a moving flow in space. In coordinates, a vector field on a domain in n-dimensional Euclidean space can be represented as a vector-valued function that associates an n-tuple of real numbers to each point of the domain. This representation of a vector field depends on the coordinate system. Vector fields are often discussed on open subsets of Euclidean space, but also make sense on other subsets such as surfaces, where they associate an arrow tangent to the surface at each point. More generally, vector fields are defined on differentiable manifolds, which are spaces that look like Euclidean space on small scales; in this setting, a vector field gives a tangent vector at each point of the manifold. Vector fields are one kind of tensor field. Given a subset S in Rⁿ, a vector field is represented by a vector-valued function V: S → Rⁿ in standard Cartesian coordinates. If each component of V is continuous, then V is a continuous vector field. A vector field can be visualized as assigning a vector to individual points within an n-dimensional space. In physics, a vector is additionally distinguished by how its coordinates change when one measures the same vector with respect to a different background coordinate system; the transformation properties of vectors distinguish a vector as a distinct entity from a simple list of scalars. Thus, suppose that (x₁, …, xₙ) is a choice of Cartesian coordinates, in terms of which the vector V has components Vᵢ, and that (y₁, …, yₙ) is another coordinate system; then the components of the vector V in the new coordinates are required to satisfy the transformation law V′ᵢ = Σⱼ (∂yᵢ/∂xⱼ) Vⱼ. Such a transformation law is called contravariant. Given a differentiable manifold M, a vector field on M is an assignment of a tangent vector to each point in M.
More precisely, a vector field F is a mapping from M into the tangent bundle TM such that p ∘ F is the identity mapping, where p denotes the projection from TM to M; in other words, a vector field is a section of the tangent bundle. If the manifold M is smooth or analytic—that is, the change of coordinates is smooth or analytic—then one can make sense of the notion of smooth (or analytic) vector fields. The collection of all smooth vector fields on a smooth manifold M is often denoted by Γ(TM) or C∞(M, TM). A vector field for the movement of air on Earth will associate to every point on the surface of the Earth a vector with the wind speed and direction for that point. This can be drawn using arrows to represent the wind; the length of each arrow is an indication of the wind speed.
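The "velocity of a flow" picture can be made concrete by integrating a vector field. In the sketch below (an illustration with hypothetical names), V(x, y) = (−y, x) is the velocity field of rigid rotation, and a point advected by small Euler steps for time π travels half-way around a circle:

```python
import math

# A vector field as a map from points to vectors: rigid rotation about 0.
def V(x, y):
    return (-y, x)

# Follow the flow from (1, 0) with small Euler steps for total time π.
x, y = 1.0, 0.0
dt = 1e-4
for _ in range(int(math.pi / dt)):
    vx, vy = V(x, y)
    x, y = x + vx * dt, y + vy * dt
print(x, y)  # close to (-1, 0): half a revolution
```

The curves traced out this way are the integral curves (flow lines) of the field; for this field they are circles about the origin.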
22.
Chain (algebraic topology)
–
In algebraic topology, a simplicial k-chain is a formal linear combination of k-simplices. Integration is defined on chains by taking the linear combination of integrals over the simplices in the chain, with coefficients (typically integers). The set of all k-chains forms a group, and the sequence of these groups is called a chain complex. The boundary of a chain is the linear combination of boundaries of the simplices in the chain; the boundary of a k-chain is a (k−1)-chain. Note that the boundary of a simplex is not a simplex, but a chain with coefficients 1 or −1; thus chains are the closure of simplices under the boundary operator. Example 1: The boundary of a path is the formal difference of its endpoints. Example 2: The boundary of the triangle is a formal sum of its edges with signs arranged to make the traversal of the boundary counterclockwise. A chain is called a cycle when its boundary is zero. A chain that is the boundary of another chain is called a boundary. Boundaries are cycles, so chains form a chain complex, whose homology groups (cycles modulo boundaries) are called simplicial homology groups. Example 3: A 0-cycle is a linear combination of points such that the sum of all the coefficients is 0; thus, the 0-homology group measures the number of connected components of the space. Example 4: The plane punctured at the origin has a nontrivial 1-homology group, since the unit circle is a cycle that is not a boundary. In differential geometry, the duality between the boundary operator on chains and the exterior derivative is expressed by the general Stokes' theorem.
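The boundary operator and the fact that boundaries are cycles (∂∂ = 0) can be shown in a few lines. In this sketch (representations chosen for illustration), a chain is a dict mapping simplices, written as vertex tuples, to integer coefficients:

```python
from collections import defaultdict

# The boundary of a k-simplex (v0, ..., vk) is the alternating sum of its
# faces; extend linearly to chains.
def boundary(chain):
    out = defaultdict(int)
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]   # drop the i-th vertex
            out[face] += (-1) ** i * coeff
    return {s: c for s, c in out.items() if c != 0}

triangle = {(0, 1, 2): 1}
print(boundary(triangle))            # edges (1,2) − (0,2) + (0,1)
print(boundary(boundary(triangle)))  # {}: the boundary of a boundary is zero
```

The empty result in the last line is Example 2 combined with the general rule that every boundary is a cycle.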
23.
Measure (mathematics)
–
In mathematical analysis, a measure on a set is a systematic way to assign a number to each suitable subset of that set, intuitively interpreted as its size. In this sense, a measure is a generalization of the concepts of length, area and volume. For instance, the Lebesgue measure of the interval [0, 1] in the real numbers is its length in the everyday sense of the word, specifically 1. Technically, a measure is a function that assigns a non-negative real number or +∞ to subsets of a set X. It must further be countably additive: the measure of a subset that can be decomposed into a countable number of smaller disjoint subsets is the sum of the measures of the smaller subsets. In general, if one wants to associate a consistent size to each subset of a given set while satisfying the other axioms of a measure, one runs into difficulties. This problem was resolved by defining measure only on a sub-collection of all subsets, the so-called measurable subsets, which are required to form a σ-algebra; this means that countable unions, countable intersections and complements of measurable subsets are measurable. Non-measurable sets in a Euclidean space, on which the Lebesgue measure cannot be defined consistently, are necessarily complicated in the sense of being badly mixed up with their complement. Indeed, their existence is a non-trivial consequence of the axiom of choice. Measure theory was developed in successive stages during the late 19th and early 20th centuries by Émile Borel, Henri Lebesgue, Johann Radon and others. The main applications of measures are in the foundations of the Lebesgue integral and in Andrey Kolmogorov's axiomatisation of probability theory. Probability theory considers measures that assign to the whole set the size 1, and considers measurable subsets to be events whose probability is given by the measure. Ergodic theory considers measures that are invariant under, or arise naturally from, a dynamical system. Let X be a set and Σ a σ-algebra over X. A function μ from Σ to the extended real number line is called a measure if it satisfies the following properties: Non-negativity: for all E in Σ, μ(E) ≥ 0.
Countable additivity: for all countable collections {Eₖ}, k = 1, …, ∞, of pairwise disjoint sets in Σ, μ(⋃ₖ Eₖ) = Σₖ₌₁^∞ μ(Eₖ). One may require that at least one set E has finite measure. Then the empty set automatically has measure zero because of countable additivity: μ(E) = μ(E ∪ ∅ ∪ ∅ ∪ ⋯) = μ(E) + μ(∅) + μ(∅) + ⋯, which implies that μ(∅) = 0. If only the second and third conditions of the definition of measure above are met, μ is called a signed measure. The pair (X, Σ) is called a measurable space, and the members of Σ are called measurable sets. If (X, Σ_X) and (Y, Σ_Y) are two measurable spaces, then a function f: X → Y is called measurable if the preimage of every Y-measurable set B ∈ Σ_Y is X-measurable. (See also Measurable function § Caveat about another setup.) A triple (X, Σ, μ) is called a measure space. A probability measure is a measure with total measure one, i.e. μ(X) = 1; a probability space is a measure space with a probability measure.
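The axioms are easy to exhibit on a toy example. The sketch below (illustrative only) uses the counting measure μ(A) = |A| on subsets of a finite set, which is non-negative, assigns 0 to the empty set, and is additive on disjoint sets:

```python
# Counting measure on finite sets: μ(A) = number of elements of A.
def mu(A):
    return len(A)

A = {1, 2}
B = {3, 4, 5}
assert A.isdisjoint(B)            # additivity only applies to disjoint sets
print(mu(A | B), mu(A) + mu(B))   # equal: 5 and 5
print(mu(set()))                  # the empty set has measure 0
```

For finitely many disjoint sets this is exactly finite additivity; countable additivity is the extension of the same identity to countable disjoint unions.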
24.
Linear form
–
In linear algebra, a linear functional or linear form is a linear map from a vector space to its field of scalars. The set of all linear functionals from V to k, Homₖ(V, k), forms a vector space over k with the natural operations of addition and scalar multiplication. This space is called the dual space of V, or sometimes the algebraic dual space; it is often written V∗ or V′ when the field k is understood. If V is a topological vector space, the space of continuous linear functionals—the continuous dual—is often simply called the dual space. If V is a Banach space, then so is its dual. To distinguish the ordinary dual space from the continuous dual space, the former is sometimes called the algebraic dual. In finite dimensions, every linear functional is continuous, so the continuous dual is the same as the algebraic dual. Suppose that vectors in the coordinate space Rⁿ are represented as column vectors x = (x₁, …, xₙ)ᵀ. For each row vector (a₁, …, aₙ) there is a linear functional f defined by f(x) = a₁x₁ + ⋯ + aₙxₙ; this is just the matrix product of the row vector and the column vector x. Linear functionals first appeared in functional analysis, the study of vector spaces of functions. Let Pₙ denote the vector space of real-valued polynomial functions of degree ≤ n defined on an interval [a, b]. If c ∈ [a, b], then let ev_c: Pₙ → R be the evaluation functional ev_c(f) = f(c). The mapping f → f(c) is linear since (f + g)(c) = f(c) + g(c) and (αf)(c) = α f(c). If x₀, …, xₙ are n+1 distinct points in [a, b], then the evaluation functionals ev_{xᵢ} form a basis of the dual space of Pₙ. The integration functional I defined above defines a linear functional on the subspace Pₙ of polynomials of degree ≤ n. If x₀, …, xₙ are n+1 distinct points in [a, b], then there are coefficients a₀, …, aₙ for which I(f) = a₀f(x₀) + ⋯ + aₙf(xₙ) for all f ∈ Pₙ, and this forms the foundation of the theory of numerical quadrature. This follows from the fact that the linear functionals ev_{xᵢ}: f → f(xᵢ) defined above form a basis of the dual space of Pₙ. Linear functionals are particularly important in quantum mechanics: quantum mechanical systems are represented by Hilbert spaces, which are anti-isomorphic to their own dual spaces.
A state of a quantum mechanical system can be identified with a linear functional. For more information see bra–ket notation. In the theory of generalized functions, certain kinds of generalized functions called distributions can be realized as linear functionals on spaces of test functions.
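The quadrature statement above can be made concrete. In this sketch (nodes and weights are the standard Simpson's rule on [0, 1], chosen for illustration), the integration functional I(p) = ∫₀¹ p(x) dx on polynomials of degree ≤ 2 is written as a combination of three evaluation functionals:

```python
# The integration functional on P2 as a combination of evaluation
# functionals at x = 0, 1/2, 1 (Simpson's rule weights 1/6, 4/6, 1/6).
nodes = [0.0, 0.5, 1.0]
weights = [1 / 6, 4 / 6, 1 / 6]

def I(p):
    # I(p) = a0·p(x0) + a1·p(x1) + a2·p(x2)
    return sum(w * p(x) for w, x in zip(weights, nodes))

# Exact on the basis 1, x, x², whose integrals over [0, 1] are 1, 1/2, 1/3
print(I(lambda x: 1.0), I(lambda x: x), I(lambda x: x * x))
```

Exactness on the monomial basis forces exactness on all of P₂ by linearity, which is precisely the dual-basis argument in the text.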
25.
Cross product
–
In mathematics and vector algebra, the cross product or vector product is a binary operation on two vectors in three-dimensional space and is denoted by the symbol ×. Given two linearly independent vectors a and b, the cross product, a × b, is a vector that is perpendicular to both a and b and therefore normal to the plane containing them. It has many applications in mathematics, physics and engineering, and it should not be confused with the dot product. If two vectors have the same direction or if either one has zero length, then their cross product is zero. The cross product is anticommutative and is distributive over addition. The space R³ together with the cross product is an algebra over the real numbers, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket. Like the dot product, it depends on the metric of Euclidean space, but if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions. If one adds the further requirement that the product be uniquely defined, the cross product of two vectors a and b is defined only in three-dimensional space, and is denoted by a × b. In physics, sometimes the notation a ∧ b is used. If the vectors a and b are parallel, then by the defining formula the cross product of a and b is the zero vector 0. The direction of a × b is given by the right-hand rule: pointing the forefinger of the right hand toward a and the middle finger toward b, the normal n comes out of the thumb. Using this rule implies that the cross product is anticommutative, i.e. b × a = −(a × b): by pointing the forefinger toward b first and then pointing the middle finger toward a, the thumb is forced in the opposite direction. Using the cross product requires the handedness of the coordinate system to be taken into account. If a left-handed coordinate system is used, the direction of the vector n is given by the left-hand rule. This, however, creates a problem, because transforming from one arbitrary reference system to another should not change the direction of n. The problem is clarified by realizing that the cross product of two vectors is not a (true) vector, but rather a pseudovector.
See cross product and handedness for more detail. In 1881, Josiah Willard Gibbs, and independently Oliver Heaviside, introduced both the dot product and the cross product, using a period (a · b) and an "×" (a × b), respectively, to denote them. These alternative names are widely used in the literature. Both the cross notation and the name cross product were possibly inspired by the fact that each scalar component of a × b is computed by multiplying non-corresponding components of a and b; conversely, a dot product a ⋅ b involves multiplications between corresponding components of a and b. As explained below, the cross product can be expressed in the form of the determinant of a special 3 × 3 matrix
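The component formula behind that determinant expansion, together with the perpendicularity and anticommutativity properties described above, can be sketched in a few lines of Python (the helper names `cross` and `dot` are illustrative, not from any particular library):

```python
def cross(a, b):
    """Cross product of two 3-vectors, expanded from the 3x3 determinant:
    a x b = (a2*b3 - a3*b2, a3*b1 - a1*b3, a1*b2 - a2*b1)."""
    return (
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    )

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

a, b = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
n = cross(a, b)
# n is perpendicular to both factors, and b x a = -(a x b).
assert dot(n, a) == 0 and dot(n, b) == 0
assert cross(b, a) == tuple(-c for c in n)
```

The same check with parallel inputs, e.g. `cross((1, 2, 3), (2, 4, 6))`, returns the zero vector, matching the parallel-vector case above.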
26.
Open set
–
In topology, an open set is an abstract concept generalizing the idea of an open interval in the real line. The defining conditions are very loose, and they allow enormous flexibility in the choice of open sets; in the two extremes, every set can be open, or no set can be open except the space itself and the empty set. In practice, however, open sets are usually chosen to resemble the open intervals of the real line. The notion of an open set provides a fundamental way to speak of nearness of points in a topological space. Once a choice of open sets is made, the properties of continuity, connectedness, and compactness can be defined; each choice of open sets for a space is called a topology. Although open sets and the topologies that they comprise are of central importance in point-set topology, they are also used as an organizational tool in other branches of mathematics. Intuitively, an open set provides a method to distinguish two points: for example, if about one point in a topological space there exists an open set not containing another point, the two points are referred to as topologically distinguishable. In this manner, one may speak of whether two subsets of a topological space are near without concretely defining a metric on the topological space. Therefore, topological spaces may be seen as a generalization of metric spaces. In the set of all real numbers, one has the natural Euclidean metric, that is, a function which measures the distance between two real numbers: d(x, y) = |x − y|. Therefore, given a real number x, one can speak of the set of all points close to that real number. In essence, points within ε of x approximate x to an accuracy of degree ε; note that ε > 0 always, but as ε becomes smaller and smaller, one obtains points that approximate x to a higher and higher degree of accuracy. For example, if x = 0 and ε = 1, the points within ε of x are precisely the points of the interval (−1, 1); with ε = 0.5, the points within ε of x are precisely the points of (−0.5, 0.5). Clearly, these points approximate x to a greater degree of accuracy than when ε = 1. 
The previous discussion shows, for the case x = 0, that sets of the form (−ε, ε) give us a lot of information about points close to x = 0. Thus, rather than speaking of a concrete Euclidean metric, one may use such sets to describe points close to x; in some sense, every such set then records which real numbers are close to 0. It may help in this case to think of the measure as a binary condition: all things within the set are equally close to 0. In general, one refers to the family of sets containing 0 that are used to approximate 0 as a neighborhood basis. In fact, one may generalize these notions to an arbitrary set, rather than just the real numbers: in this case, given a point x of that set, one may define a collection of sets around x. Of course, this collection would have to satisfy certain properties, for otherwise we may not have a well-defined method to measure distance
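The ε-neighborhood idea above is easy to make concrete: membership in the open ball around x under the metric d(x, y) = |x − y| is a single comparison. A minimal sketch (the function name `within_eps` is illustrative):

```python
def within_eps(x, y, eps):
    """Is y in the open ball of radius eps around x, under d(x, y) = |x - y|?"""
    return abs(x - y) < eps

# Shrinking eps admits only better and better approximations of x = 0.
assert within_eps(0.0, 0.7, 1.0)        # 0.7 lies in the interval (-1, 1)
assert not within_eps(0.0, 0.7, 0.5)    # but not in (-0.5, 0.5)
```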
27.
Smoothness
–
In mathematical analysis, the smoothness of a function is a property measured by the number of derivatives it has which are continuous. A smooth function is a function that has derivatives of all orders everywhere in its domain. A differentiability class is a classification of functions according to the properties of their derivatives; higher-order differentiability classes correspond to the existence of more derivatives. Consider an open set on the real line and a function f defined on that set with real values. Let k be a non-negative integer. The function f is said to be of class Ck if the derivatives f′, f′′, …, f(k) exist and are continuous. The function f is said to be of class C∞, or smooth, if it has derivatives of all orders. The function f is said to be of class Cω, or analytic, if f is smooth and equals its Taylor series expansion around any point in its domain; Cω is thus strictly contained in C∞, and bump functions are examples of functions in C∞ but not in Cω. To put it differently, the class C0 consists of all continuous functions, and the class C1 consists of all differentiable functions whose derivative is continuous; thus, a C1 function is exactly a function whose derivative exists and is of class C0. In particular, Ck is contained in Ck−1 for every k, and C∞, the class of infinitely differentiable functions, is the intersection of the sets Ck as k varies over the non-negative integers. The function f(x) = x for x ≥ 0 and f(x) = 0 for x < 0 is continuous but not differentiable at x = 0, so it is of class C0 but not of class C1. The function g(x) = x² sin(1/x) for x ≠ 0, with g(0) = 0, is differentiable everywhere, but because cos(1/x) oscillates as x → 0, g′ is not continuous at zero; therefore, this function is differentiable but not of class C1. The functions f(x) = |x|^(k+1), where k is even, are continuous and k times differentiable at all x, but at x = 0 they are not (k + 1) times differentiable, so they are of class Ck but not of class Ck+1. The exponential function is analytic, so of class Cω, and the trigonometric functions are also analytic wherever they are defined. A bump function is an example of a smooth function with compact support. Let n and m be positive integers. If f is a function from an open subset of Rn with values in Rm, then f has component functions f1, …, fm. 
Each of these may or may not have partial derivatives; the classes C∞ and Cω are defined as before. These criteria of differentiability can be applied to the transition functions of a differential structure, and the resulting space is called a Ck manifold. If one wishes to start with a coordinate-independent definition of the class Ck, one may start by considering maps between Banach spaces. A map from one Banach space to another is differentiable at a point if there is a linear map which approximates it at that point
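The "differentiable but not C1" example above can be checked numerically. The sketch below evaluates g′(x) = 2x sin(1/x) − cos(1/x) along the sequence x_n = 1/(nπ) → 0, where the derivative keeps alternating near ±1, so it has no limit at 0 even though g′(0) = 0 exists:

```python
import math

def g(x):
    # Differentiable everywhere, but g' is not continuous at 0 (class C0, not C1).
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

def g_prime(x):
    # g'(x) = 2x sin(1/x) - cos(1/x) for x != 0; g'(0) = 0 by the limit definition.
    return 2 * x * math.sin(1.0 / x) - math.cos(1.0 / x) if x != 0 else 0.0

# Along x_n = 1/(n*pi) the sin term vanishes and g'(x_n) = -(-1)^n,
# so the derivative oscillates between values near +1 and -1 as x_n -> 0.
samples = [g_prime(1.0 / (n * math.pi)) for n in (100, 101, 102, 103)]
assert all(abs(abs(s) - 1.0) < 0.01 for s in samples)
assert samples[0] * samples[1] < 0   # the sign alternates
```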
28.
Directional derivative
–
The directional derivative of a multivariable differentiable function along a given vector v at a given point x intuitively represents the instantaneous rate of change of the function, moving through x in the direction of v. It therefore generalizes the notion of a partial derivative, in which the rate of change is taken along one of the curvilinear coordinate curves, all other coordinates being constant. The directional derivative is a special case of the Gâteaux derivative. The directional derivative of a function f = f(x) along a vector v is the function defined by the limit ∇_v f(x) = lim_{h→0} [f(x + hv) − f(x)] / h. In the context of a function on a Euclidean space, some texts restrict the vector v to being a unit vector; without this restriction, the definition is valid in a broad range of contexts, for example where the norm of a vector is undefined. Intuitively, the directional derivative of f at a point x represents the rate of change of f with respect to time when moving past x at velocity v. Some authors define the directional derivative to be with respect to an arbitrary nonzero vector v after normalization, thus being independent of its magnitude; this definition gives the rate of increase of f per unit of distance moved in the given direction, and in this case one has ∇_v f(x) = lim_{h→0} [f(x + hv) − f(x)] / (h|v|). Many of the familiar properties of the ordinary derivative hold for the directional derivative; these include rules valid for any functions f and g defined in a neighborhood of, and differentiable at, p. Suppose that f is a function defined in a neighborhood of p and differentiable at p. If v is a tangent vector to M at p, then the derivative of f along v is denoted variously as df(v), ∇_v f, L_v f, or v_p(f). Let γ : [−1, 1] → M be a differentiable curve with γ(0) = p. In curved settings, instead of building the directional derivative using partial derivatives, one uses the covariant derivative: one translates a covector S along δ then δ′, and subtracts the translation along δ′ then δ. If the normal direction is denoted by n, then the directional derivative of a function f in that direction is sometimes denoted as ∂f/∂n; see for example the Neumann boundary condition. The directional derivative provides a way of finding such derivatives. 
The definitions of directional derivatives for various situations are given below; it is assumed that the functions are sufficiently smooth that derivatives can be taken. Let f be a real-valued function of the vector v
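The limit definition above lends itself to a direct finite-difference estimate. A minimal sketch (the helper name `directional_derivative` and the example function are illustrative, not from the source):

```python
def directional_derivative(f, x, v, h=1e-6):
    """Finite-difference estimate of lim_{h->0} (f(x + h*v) - f(x)) / h."""
    shifted = [xi + h * vi for xi, vi in zip(x, v)]
    return (f(shifted) - f(x)) / h

# f(x, y) = x^2 + 3y has gradient (2x, 3), so along v = (1, 2) at (1, 0)
# the directional derivative is 2*1*1 + 3*2 = 8.
f = lambda p: p[0] ** 2 + 3 * p[1]
est = directional_derivative(f, [1.0, 0.0], [1.0, 2.0])
assert abs(est - 8.0) < 1e-4
```

Dividing instead by h·|v| would give the unit-distance rate of increase discussed above.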
29.
Partial derivative
–
In mathematics, the symmetry of second derivatives refers to the possibility, under certain conditions, of interchanging the order of taking partial derivatives of a function f of n variables. This is sometimes known as Schwarz's theorem or Young's theorem; in the context of partial differential equations it is called the Schwarz integrability condition. The matrix of second partial derivatives of f is called the Hessian matrix of f, and the entries in it off the diagonal are the mixed derivatives. In most real-life circumstances the Hessian matrix is symmetric, although there are a number of functions that do not have this property. Mathematical analysis reveals that symmetry requires a hypothesis on f that goes further than simply stating the existence of the derivatives at a particular point; Schwarz's theorem gives a sufficient condition on f for this to occur. In symbols, the symmetry says that, for example, ∂/∂x (∂f/∂y) = ∂/∂y (∂f/∂x). This equality can also be written as ∂_xy f = ∂_yx f. Alternatively, the symmetry can be written as an algebraic statement involving the differential operator Di which takes the partial derivative with respect to xi: Di · Dj = Dj · Di. From this relation it follows that the ring of differential operators with constant coefficients, generated by the Di, is commutative. But one should naturally specify some domain for these operators; it is easy to check the symmetry as applied to monomials, so that one can take polynomials in the xi as a domain. In fact, smooth functions are another possible domain: when the second partial derivatives are continuous, the partial differentiations of such a function are commutative at that point. One easy way to establish this theorem is by applying Green's theorem to the gradient of f. A weaker condition than the continuity of second partial derivatives, which nevertheless suffices to ensure symmetry, is that all first partial derivatives are themselves differentiable. 
The theory of distributions eliminates analytic problems with the symmetry: the derivative of an integrable function can always be defined as a distribution, and symmetry of mixed partial derivatives always holds as an equality of distributions. The use of integration by parts to define differentiation of distributions puts the symmetry question back onto the test functions, which are smooth. In more detail, ⟨D₁D₂f, φ⟩ = −⟨D₂f, D₁φ⟩ = ⟨f, D₂D₁φ⟩ = ⟨f, D₁D₂φ⟩ = −⟨D₁f, D₂φ⟩ = ⟨D₂D₁f, φ⟩. Another approach, which defines the Fourier transform of a function, is to note that on such transforms partial derivatives become multiplication operators that commute much more obviously. The symmetry may be broken if the function fails to have differentiable partial derivatives. An example of non-symmetry is the function f(x, y) = xy(x² − y²)/(x² + y²) for (x, y) ≠ (0, 0), with f(0, 0) = 0. This function is everywhere continuous, but its second partial derivatives are not continuous at (0, 0), and the symmetry fails there. In fact, along the x-axis the y-derivative is ∂_y f(x, 0) = x, and vice versa, along the y-axis the x-derivative is ∂_x f(0, y) = −y, so that ∂_y ∂_x f(0, 0) = −1
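The failure of symmetry for that counterexample can be verified numerically: central-difference estimates of the two mixed partials at the origin give −1 and +1. A sketch (step sizes `h` and `k` are illustrative choices):

```python
def f(x, y):
    # Standard counterexample: continuous everywhere, mixed partials unequal at 0.
    if x == 0.0 and y == 0.0:
        return 0.0
    return x * y * (x * x - y * y) / (x * x + y * y)

def fx(y, h=1e-7):
    # Central-difference estimate of the x-derivative along the y-axis.
    return (f(h, y) - f(-h, y)) / (2 * h)

def fy(x, h=1e-7):
    # Central-difference estimate of the y-derivative along the x-axis.
    return (f(x, h) - f(x, -h)) / (2 * h)

k = 1e-3
d_yx = (fx(k) - fx(-k)) / (2 * k)   # ~ d/dy of f_x(0, y) = -y  ->  -1
d_xy = (fy(k) - fy(-k)) / (2 * k)   # ~ d/dx of f_y(x, 0) = x   ->  +1
assert abs(d_yx + 1.0) < 1e-3 and abs(d_xy - 1.0) < 1e-3
```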
30.
Linear function
–
In linear algebra and functional analysis, a linear function is a linear map. In calculus, analytic geometry and related areas, a linear function is a polynomial of degree one or less, including the zero polynomial. When the function is of one variable, it is of the form f(x) = ax + b, where a and b are constants. The graph of such a function of one variable is a non-vertical line; a is frequently referred to as the slope of the line, and b as the intercept. For a function f of any finite number of independent variables, the general formula is f(x₁, …, x_k) = b + a₁x₁ + … + a_k x_k. A constant function is also considered linear in this context, as it is a polynomial of degree zero or is the zero polynomial; its graph, when there is only one independent variable, is a horizontal line. In this context, a function that is also a linear map may be referred to as a homogeneous linear function or a linear form; in the context of linear algebra, the polynomial meaning of linear function is a kind of affine map. In linear algebra, a linear function is a map f between two vector spaces that preserves vector addition and scalar multiplication: f(x + y) = f(x) + f(y) and f(ax) = a f(x). Here a denotes a constant belonging to some field K of scalars and x and y are elements of a vector space. Some authors use linear function only for linear maps that take values in the scalar field; these are also called linear functionals. The linear functions of calculus qualify as linear maps when b = 0, or, equivalently, when the graph of the function passes through the origin. 
See also: Homogeneous function, Nonlinear system, Piecewise linear function, Linear interpolation, Discontinuous linear map
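The distinction drawn above, that f(x) = ax + b is a linear map only when b = 0, can be demonstrated directly; additivity fails whenever the graph misses the origin. A minimal sketch (the helper names `affine` and `is_additive` are illustrative):

```python
def affine(a, b):
    """f(x) = a*x + b: a 'linear function' in the calculus sense."""
    return lambda x: a * x + b

def is_additive(f, x, y):
    """Check the linear-map property f(x + y) = f(x) + f(y) at one pair."""
    return f(x + y) == f(x) + f(y)

f = affine(2.0, 3.0)   # b != 0: the graph misses the origin
g = affine(2.0, 0.0)   # b == 0: a genuine linear map

assert not is_additive(f, 1.0, 1.0)   # f(2) = 7, but f(1) + f(1) = 10
assert is_additive(g, 1.0, 1.0)       # g(2) = 4 = g(1) + g(1)
```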
31.
Linear map
–
In mathematics, a linear map is a mapping V → W between two modules that preserves the operations of addition and scalar multiplication. An important special case is when V = W, in which case the map is called a linear operator, or an endomorphism of V. Sometimes the term linear function has the same meaning as linear map. A linear map always maps linear subspaces onto linear subspaces; for instance, it maps a plane through the origin to a plane through the origin, a line through the origin, or the origin itself. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations. In the language of abstract algebra, a linear map is a module homomorphism; in the language of category theory, it is a morphism in the category of modules over a given ring. Let V and W be vector spaces over the same field K. The function f : V → W is said to be linear if it is compatible with addition and scalar multiplication, i.e. if for any vectors x₁, …, x_m ∈ V and scalars a₁, …, a_m ∈ K, the equality f(a₁x₁ + ⋯ + a_m x_m) = a₁f(x₁) + ⋯ + a_m f(x_m) holds. If V and W are spaces over different fields with a common subfield, it is then necessary to specify which of these fields is being used in the definition of linear; if V and W are considered as spaces over the field K as above, we speak of K-linear maps. For example, the conjugation of complex numbers is an R-linear map C → C, but it is not C-linear. A linear map from V to K is called a linear functional. These statements generalize to any left-module RM over a ring R without modification, and to any right-module upon reversing the scalar multiplication. The zero map between two left-modules over the same ring is always linear. The identity map on any module is a linear operator. Any homothecy centered at the origin of a vector space, v ↦ cv where c is a scalar, is a linear operator; this does not hold in general for modules, where such a map might only be semilinear. For real numbers, the map x ↦ x² is not linear. Conversely, any linear map between finite-dimensional vector spaces can be represented by a matrix (see the following section). Differentiation defines a linear map from the space of all differentiable functions to the space of all functions; it also defines a linear operator on the space of all smooth functions. 
If V and W are finite-dimensional vector spaces over a field F, then the functions that send linear maps f : V → W to dim_F(W) × dim_F(V) matrices in the way described in the sequel are themselves linear maps. The expected value of a random variable is linear: for random variables X and Y we have E[X + Y] = E[X] + E[Y] and E[aX] = aE[X]
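The matrix representation mentioned above makes the defining identity easy to test: for a map given by a matrix A, f(ax + y) = a·f(x) + f(y) holds exactly. A minimal sketch (the helper name `apply` is illustrative):

```python
def apply(matrix, v):
    """A linear map R^n -> R^m represented by an m x n matrix (row-major)."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in matrix]

A = [[1.0, 2.0],
     [0.0, 3.0]]
x, y = [1.0, 1.0], [2.0, -1.0]
a = 5.0

lhs = apply(A, [a * xi + yi for xi, yi in zip(x, y)])          # f(a*x + y)
rhs = [a * u + v for u, v in zip(apply(A, x), apply(A, y))]    # a*f(x) + f(y)
assert lhs == rhs   # the linearity identity f(a x + y) = a f(x) + f(y)
```

By contrast, substituting a nonlinear map such as x ↦ x² into the same check makes the identity fail, matching the x ↦ x² example above.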
32.
Derivative
–
The derivative of a function of a real variable measures the sensitivity to change of the function value with respect to a change in its argument. Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value; for this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. Derivatives may be generalized to functions of several real variables. In this generalization, the derivative is reinterpreted as a linear transformation whose graph is the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables; it can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. The process of finding a derivative is called differentiation; the reverse process is called antidifferentiation. The fundamental theorem of calculus states that antidifferentiation is the same as integration; differentiation and integration constitute the two fundamental operations in single-variable calculus. Differentiation is the action of computing a derivative. The derivative of a function y = f(x) of a variable x is a measure of the rate at which the value y of the function changes with respect to the change of the variable x; it is called the derivative of f with respect to x. If x and y are real numbers, and if the graph of f is plotted against x, the derivative is the slope of this graph at each point. 
The simplest case, apart from the trivial case of a constant function, is when y is a linear function of x, with graph y = mx + b. In that case, y + Δy = f(x + Δx) = m(x + Δx) + b = mx + mΔx + b = y + mΔx; thus Δy = mΔx, and the ratio Δy/Δx = m gives an exact value for the slope of the line. If the function f is not linear, however, then the change in y divided by the change in x varies; differentiation is a method to find an exact value for this rate of change at any given value of x. The idea, illustrated by Figures 1 to 3, is to compute the rate of change as the limiting value of the ratio of the differences Δy / Δx as Δx becomes infinitely small
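The shrinking-ratio idea can be watched numerically. For f(x) = x², the difference quotient at x = 3 works out algebraically to 2x + Δx = 6 + Δx, so the ratio approaches the derivative 6 as Δx shrinks (the helper name `difference_quotient` is illustrative):

```python
def difference_quotient(f, x, dx):
    """Delta-y over Delta-x; its limit as dx -> 0 is the derivative at x."""
    return (f(x + dx) - f(x)) / dx

f = lambda x: x * x   # f'(x) = 2x
quotients = [difference_quotient(f, 3.0, dx) for dx in (1.0, 0.1, 0.01)]

# Each quotient equals 6 + dx (up to rounding), so the sequence tends to 6.
for dx, q in zip((1.0, 0.1, 0.01), quotients):
    assert abs(q - (6.0 + dx)) < 1e-9
```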
33.
One-form
–
In linear algebra, a one-form on a vector space is the same as a linear functional on the space. The usage of one-form in this context usually distinguishes the one-forms from higher-degree multilinear functionals on the space. In differential geometry, a one-form on a differentiable manifold is a smooth section of the cotangent bundle. Equivalently, a one-form on a manifold M is a smooth mapping α of the total space of the tangent bundle of M to R whose restriction to each fibre is a linear functional on the tangent space: α : TM → R, with α_x = α|_{T_xM} : T_xM → R linear. Often one-forms are described locally, particularly in local coordinates; from this perspective, a one-form has a covariant transformation law on passing from one coordinate system to another. Thus a one-form is an order-1 covariant tensor field. Many real-world concepts can be described as one-forms. Indexing into a vector: the second element of a three-vector is given by the one-form [0, 1, 0]; that is, the second element of [x, y, z] is [0, 1, 0] · [x, y, z] = y. Mean: the mean element of an n-vector is given by the one-form [1/n, 1/n, …, 1/n]; that is, mean(v) = [1/n, 1/n, …, 1/n] · v. Sampling: sampling with a kernel can be considered a one-form, where the one-form is the kernel shifted to the appropriate location. Net present value of a net cash flow R(t) is given by the one-form w(t) = (1 + i)^{−t}, where i is the discount rate; that is, NPV(R) = ⟨w, R⟩ = ∫₀^∞ R(t) (1 + i)^{−t} dt. The most basic non-trivial differential one-form is the change-in-angle form dθ. This is defined as the derivative of the angle function θ; integrating this derivative along a path gives the total change in angle over the path, and integrating over a closed loop gives the winding number. In the language of differential geometry, this derivative is a one-form, and it is closed but not exact; it is the most basic example of such a form. Let U ⊆ R be open, and consider a differentiable function f : U → R with derivative f′. The differential df of f, at a point x₀ ∈ U, is defined as a certain linear map of the variable dx. 
Specifically, df_{x₀} : dx ↦ f′(x₀) dx; hence the map x ↦ df_x sends each point x to the linear functional df_x. This is the simplest example of a differential form. In terms of the de Rham complex, one has an assignment from zero-forms to one-forms, i.e. f ↦ df. 
See also: Two-form, Reciprocal lattice, Tensor, Inner product
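The indexing and mean examples above are just pairings of a coefficient row with a vector. A minimal sketch (the helper name `pair` is illustrative):

```python
def pair(one_form, v):
    """Apply a one-form (a row of coefficients) to a vector: <w, v> = sum w_i v_i."""
    return sum(w * x for w, x in zip(one_form, v))

v = [10.0, 20.0, 30.0]
second = [0.0, 1.0, 0.0]        # indexing one-form: picks out the second element
mean = [1.0 / 3.0] * 3          # mean one-form for a 3-vector

assert pair(second, v) == 20.0
assert abs(pair(mean, v) - 20.0) < 1e-12
```

Each such row of coefficients is a linear functional, in line with the linear-algebra definition at the start of this entry.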
34.
Euclidean vector
–
In mathematics, physics, and engineering, a Euclidean vector is a geometric object that has magnitude and direction, and can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a line segment with a definite direction, or graphically as an arrow, connecting an initial point A with a terminal point B, and denoted by A B →. A vector is what is needed to carry the point A to the point B; the term was first used by 18th-century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space. Vectors play an important role in physics: the velocity and acceleration of a moving object, and many other physical quantities, can be usefully thought of as vectors. Although most of them do not represent distances, their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors. The concept of vector, as we know it today, evolved gradually over a period of more than 200 years, and about a dozen people made significant contributions. Giusto Bellavitis abstracted the basic idea in 1835 when he established the concept of equipollence: working in a Euclidean plane, he made equipollent any pair of line segments of the same length and orientation. Essentially, he realized an equivalence relation on the pairs of points in the plane. The term vector was introduced by William Rowan Hamilton as part of a quaternion, which is a sum q = s + v of a real number s and a 3-dimensional vector v. 
Like Bellavitis, Hamilton viewed vectors as representative of classes of equipollent directed segments. Grassmann's work was largely neglected until the 1870s. Peter Guthrie Tait carried the quaternion standard after Hamilton; his 1867 Elementary Treatise on Quaternions included extensive treatment of the nabla or del operator ∇. In 1878, Elements of Dynamic was published by William Kingdon Clifford. Clifford simplified the quaternion study by isolating the dot product and cross product of two vectors from the complete quaternion product; this approach made vector calculations available to engineers and others working in three dimensions and skeptical of the fourth. Josiah Willard Gibbs, who was exposed to quaternions through James Clerk Maxwell's Treatise on Electricity and Magnetism, developed their vector part independently; the first half of Gibbs's Elements of Vector Analysis, published in 1881, presents what is essentially the modern system of vector analysis. In 1901, Edwin Bidwell Wilson published Vector Analysis, adapted from Gibbs's lectures. In physics and engineering, a vector is typically regarded as a geometric entity characterized by a magnitude and a direction; it is formally defined as a directed line segment, or arrow
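The magnitude-and-direction picture at the start of this entry reduces, in coordinates, to component-wise arithmetic. A minimal sketch (the helper names `add` and `magnitude` are illustrative):

```python
import math

def add(a, b):
    """Component-wise vector addition (tip-to-tail in the arrow picture)."""
    return tuple(x + y for x, y in zip(a, b))

def magnitude(a):
    """Euclidean length of the arrow: the distance from A to B."""
    return math.sqrt(sum(x * x for x in a))

ab = (3.0, 4.0)                      # the displacement from A to B
assert magnitude(ab) == 5.0          # a 3-4-5 right triangle
assert add(ab, (1.0, -1.0)) == (4.0, 3.0)
```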